From reinout at vanrees.org Tue Mar 3 10:30:27 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Tue, 03 Mar 2015 10:30:27 +0100 Subject: [Distutils] bootstrap.pypa.io is down Message-ID: Hi, I'm getting a "503 service unavailable" on https://bootstrap.pypa.io/ This effectively kills all "bootstrap.py" scripts of our buildouts :-( Where is the best place to report this, normally? I looked at the pypi bug tracker but didn't see a mention of pypa.io there. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From ncoghlan at gmail.com Tue Mar 3 12:30:03 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 Mar 2015 21:30:03 +1000 Subject: [Distutils] bootstrap.pypa.io is down In-Reply-To: References: Message-ID: On 3 March 2015 at 19:30, Reinout van Rees wrote: > Hi, > > I'm getting a "503 service unavailable" on https://bootstrap.pypa.io/ > > This effectively kills all "bootstrap.py" scripts of our buildouts :-( > > > Where is the best place to report this, normally? I looked at the pypi > bug tracker but didn't see a mention of pypa.io there. We don't currently have a great ticket tracking system for the PSF infrastructure - we should probably work out something better, so folks don't have to play "guess the project", and also so operational issues can be more clearly separated from the development projects that power the online services. Emailing infrastructure at python.org is a decent last resort option, though, including for the PyPA parts. In this case, it looks like it's back now, so it may have just been affected by the Rackspace rolling restarts earlier (https://status.python.org/incidents/5w523lrn3587) Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From graffatcolmingov at gmail.com Tue Mar 3 14:52:05 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 3 Mar 2015 07:52:05 -0600 Subject: [Distutils] bootstrap.pypa.io is down In-Reply-To: References: Message-ID: If bootstrap.pypa.io sits behind fastly, a 503 would indicate that it couldn't reach the server. That means that it is either exactly what Nick pointed out or a semi-normal error from fastly. There's a reason pip retries package downloads when it sees a 503 response status code. On Mar 3, 2015 5:30 AM, "Nick Coghlan" wrote: > On 3 March 2015 at 19:30, Reinout van Rees wrote: > > Hi, > > > > I'm getting a "503 service unavailable" on https://bootstrap.pypa.io/ > > > > This effectively kills all "bootstrap.py" scripts of our buildouts :-( > > > > > > Where is the best place to report this, normally? I looked at the pypi > > bug tracker but didn't see a mention of pypa.io there. > > We don't currently have a great ticket tracking system for the PSF > infrastructure - we should probably work out something better, so > folks don't have to play "guess the project", and also so operational > issues can be more clearly separated from the development projects > that power the online services. > > Emailing infrastructure at python.org is a decent last resort option, > though, including for the PyPA parts. > > In this case, it looks like it's back now, so it may have just been > affected by the Rackspace rolling restarts earlier > (https://status.python.org/incidents/5w523lrn3587) > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stuartl at vrt.com.au Tue Mar 3 04:02:09 2015 From: stuartl at vrt.com.au (Stuart Longland) Date: Tue, 03 Mar 2015 13:02:09 +1000 Subject: [Distutils] Building deb packages for pypy Message-ID: <54F52431.8030400@vrt.com.au> Hi all, I'm currently attempting to evaluate pypy for use in a few performance-critical projects. We mainly use Debian or Ubuntu as the host platform, with our packages provided as debs. Traditionally if we needed to build debs for third-party libraries, we'd use `stdeb` to generate the source files. However I'm having a lot of fun and games trying to figure out how this is done with pypy. What is the procedure for building a deb package of a Python library using stdeb for pypy? -- _ ___ Stuart Longland - Systems Engineer \ /|_) | T: +61 7 3535 9619 \/ | \ | 38b Douglas Street F: +61 7 3535 9699 SYSTEMS Milton QLD 4064 http://www.vrt.com.au From reinout at vanrees.org Tue Mar 3 15:16:34 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Tue, 03 Mar 2015 15:16:34 +0100 Subject: [Distutils] bootstrap.pypa.io is down In-Reply-To: References: Message-ID: Nick Coghlan schreef op 03-03-15 om 12:30: > Emailing infrastructure at python.org is a decent last resort option, > though, including for the PyPA parts. Thanks. I've mailed that address to ask whether they want to have mail when bootstrap.pypa.io is down :-) > In this case, it looks like it's back now, so it may have just been > affected by the Rackspace rolling restarts earlier > (https://status.python.org/incidents/5w523lrn3587) Could be. It was down a long time, though. And this rolling restart sounds like it was too well-planned to cause the long downtime I experienced. Anyway, I've asked infrastructure@ . 
Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From graffatcolmingov at gmail.com Tue Mar 3 15:17:23 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 3 Mar 2015 08:17:23 -0600 Subject: [Distutils] bootstrap.pypa.io is down In-Reply-To: <54F5C0B2.4070605@vanrees.org> References: <54F5C0B2.4070605@vanrees.org> Message-ID: Heh. That wasn't apparent to me at all. But yeah, the 503 still indicates that fastly had trouble reaching the server. It's almost certainly Rackspace's restarts that caused this. On Tue, Mar 3, 2015 at 8:09 AM, Reinout van Rees wrote: > Ian Cordasco schreef op 03-03-15 om 14:52: > > If bootstrap.pypa.io sits behind fastly, a 503 would indicate that it > couldn't reach the server. That means that it is either exactly what Nick > pointed out or a semi-normal error from fastly. There's a reason pip retries > package downloads when it sees a 503 response status code. > > Well, it wasn't a temporary one-second hiccup. It was down for a couple of > hours this morning. ("morning" = "morning in Europe" :-) ).
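[Editorial sketch] The retry-on-503 behaviour Ian describes can be illustrated roughly like this. This is not pip's actual implementation; ``TransientError``, ``do_fetch``, and the backoff schedule are invented names for illustration:

```python
import time

class TransientError(Exception):
    """Raised for responses, like an HTTP 503, that are worth retrying."""

def fetch_with_retries(do_fetch, attempts=3, sleep=time.sleep):
    """Call do_fetch() until it succeeds or we run out of attempts."""
    for attempt in range(attempts):
        try:
            return do_fetch()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of retries; give up
            sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```

A fetch that raises a transient 503 twice and then succeeds would return normally on the third call, instead of killing the whole bootstrap on the first error.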
> > > Reinout > > Reinout > -- > Reinout van Rees http://reinout.vanrees.org/ > reinout at vanrees.org http://www.nelen-schuurmans.nl/ > "Learning history by destroying artifacts is a time-honored atrocity" From setuptools at bugs.python.org Thu Mar 5 15:27:07 2015 From: setuptools at bugs.python.org (Hannes Doyle) Date: Thu, 05 Mar 2015 14:27:07 +0000 Subject: [Distutils] [issue160] bootstrap of virtualenv.py fails on setuptools 12.4 Message-ID: <1425565627.75.0.0842093079418.issue160@psf.upfronthosting.co.za> New submission from Hannes Doyle: Hi, We are running virtualenv 1.7.2 with python 2.7.3. When executing the following:

mkdir pyenv
virtualenv.py -p //bin/python2.7 pyenv

I get the errors shown in the attached err.txt. But if I do the following, it works fine:

mkdir pyenv
wget 'https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea'
wget 'https://pypi.python.org/packages/source/p/pip/pip-6.0.8.tar.gz#md5=2332e6f97e75ded3bddde0ced01dbda3'
virtualenv.py --never-download -p //bin/python2.7 pyenv

Since this started to show at 2015-03-04 17:00 CET, I suspect it's related to setuptools 12.4, which was released on the same day. I'm not sure whether this should be posted here or in the virtualenv forum, though. Regards Hannes

----------
files: err.txt
messages: 748
nosy: marulkan
priority: bug
status: unread
title: bootstrap of virtualenv.py fails on setuptools 12.4
Added file: http://bugs.python.org/setuptools/file173/err.txt

_______________________________________________ Setuptools tracker _______________________________________________ -------------- next part -------------- Running virtualenv with interpreter //bin/python2.7
New python executable in pyenv/bin/python2.7
Also creating executable in pyenv/bin/python
Installing setuptools................................................
Complete output from command /home/marulkan/pyenv/bin/python2.7 -c "#!python \"\"\"Bootstra...sys.argv[1:]) " --always-copy -U setuptools:

Downloading http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
Searching for setuptools
Reading http://pypi.python.org/simple/setuptools/
Best match: setuptools 12.4
Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-12.4.zip#md5=b088ed7a43a93a1fd1fcabdf73bfa0bf
Processing setuptools-12.4.zip
Running setuptools-12.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-GP0RiD/setuptools-12.4/egg-dist-tmp-7KgrMA
//lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'src_root'
  warnings.warn(msg)
Traceback (most recent call last):
  File "", line 279, in
  File "", line 214, in main
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1712, in main
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1700, in with_ei_usage
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1716, in
  File "//lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "//lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "//lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 211, in run
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 446, in easy_install
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 476, in install_item
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 655, in install_eggs
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 930, in build_and_install
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 919, in run_setup
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py", line 62, in run_setup
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py", line 105, in run
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py", line 64, in
  File "setup.py", line 186, in
  File "//lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "//lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "//lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/bdist_egg.py", line 167, in run
  File "//lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "//lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/setuptools/command/egg_info.py", line 169, in run
  File "/home/marulkan/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 1959, in load
ImportError: has no 'write_setup_requirements' attribute
----------------------------------------
...Installing setuptools...done.
Traceback (most recent call last):
  File "//bin/virtualenv.py", line 2429, in
    main()
  File "//bin/virtualenv.py", line 942, in main
    never_download=options.never_download)
  File "//bin/virtualenv.py", line 1052, in create_environment
    search_dirs=search_dirs, never_download=never_download)
  File "//bin/virtualenv.py", line 598, in install_setuptools
    search_dirs=search_dirs, never_download=never_download)
  File "//bin/virtualenv.py", line 570, in _install_req
    cwd=cwd)
  File "//bin/virtualenv.py", line 1020, in call_subprocess
    % (cmd_desc, proc.returncode))
OSError: Command /home/marulkan/pyenv/bin/python2.7 -c "#!python \"\"\"Bootstra...sys.argv[1:]) " --always-copy -U setuptools failed with error code 1

From msabramo at gmail.com Thu Mar 5 17:38:48 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Thu, 5 Mar 2015 08:38:48 -0800 Subject: [Distutils] Getting more momentum for pip Message-ID: This post is meant to start a discussion on how to make sure that important PyPA projects like pip get enough eyes so that they can continually move forward. It is absolutely NOT meant to be critical of folks who volunteer their time to work on these projects, as I have the utmost respect for folks who do this often thankless work. And since the word "thankless" just popped into my head, let me say "thank you" right now to the folks who contribute to PyPA. I've noticed that when PRs are submitted to pip (by myself and by others), they often languish for a while and then sometimes become unmergeable because of conflicts. This makes me think that the folks who review the PRs are overburdened and/or the process needs a bit more structure (e.g.: each person thinks someone else is going to review the PR and so no one does it). So some ideas off the top of my head - please comment and/or add your own suggestions to the list:

- Add more committers - this will ostensibly increase the number of folks reviewing so that the turnaround time is decreased. And takes some pressure off.
- Obtain some kind of funding so that committers can be compensated for their work and don't feel as bad about spending time on it. These people have day jobs and families, so maybe this would help; maybe not.

- Introduce some bot or automation that periodically reminds about open PRs that are getting old or PRs that have become unmergeable, etc. Or maybe for each PR, it picks one or more persons who are responsible for reviewing that PR, so there is no ambiguity about who is responsible. The OpenStack folks have quite a bit of structure in their workflow (probably too much for PyPA projects, which have fewer people working on them?), but perhaps there are some things that can be borrowed. In particular, changes need a certain number of upvotes to be merged and reviewers usually request feedback from certain individuals.

So I guess my suggestions boil down to:

- Add more humans
- Add more money to make humans more efficient
- Add more computer automation

#3 seems most appealing to me, but of course it requires humans to develop it in the first place; at least it's an investment that could pay dividends. Thoughts? Marc -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Thu Mar 5 17:50:16 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Thu, 5 Mar 2015 10:50:16 -0600 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On Thu, Mar 5, 2015 at 10:38 AM, Marc Abramowitz wrote: > This post is meant to start a discussion on how to make sure that important > PyPA projects like pip get enough eyes so that they can continually move > forward. It is absolutely NOT meant to be critical of folks who volunteer > their time to work on these projects, as I have the utmost respect for folks > who do this often thankless work. And since the word "thankless" just popped > into my head, let me say "thank you" right now to the folks who contribute > to PyPA.
> > I've noticed that when PRs are submitted to pip (by myself and by others), > they often languish for a while and then sometimes become unmergeable > because of conflicts. > > This makes me think that the folks who review the PRs are overburdened > and/or the process needs a bit more structure (e.g.: each person thinks > someone else is going to review the PR and so no one does it). > > So some ideas off the top of my head - please comment and/or add your own > suggestions to the list: > > - Add more committers - this will ostensibly increase the number of folks > reviewing so that the turnaround time is decreased. And takes some pressure > off. > > - Obtain some kind of funding so that committers can be compensated for > their work and don't feel as bad about spending time on it. These people > have day jobs and families so maybe this would help; maybe not. > > - Introduce some bot or automation that periodically reminds about open PRs > that are getting old or PRs that have become unmergeable, etc. Or maybe for > each PR, it picks one or more persons who is responsible for reviewing that > PR, so there is no ambiguity about who is responsible. The OpenStack folks > have quite a bit of structure in their workflow (probably too much for PyPA > projects which have less people working on them?), but perhaps there are > some things that can be borrowed. In particular, changes need a certain > amount of upvotes to be merged and reviewers usually request feedback from > certain individuals. > > So I guess my suggestions boil down to: > > - Add more humans > - Add more money to make humans more efficient > - Add more computer automation > > #3 seems most appealing to me, but of course it requires humans to develop > it in the first place, but at least it's an investment that could pay > dividends. > > Thoughts? Personally I value stability over a tendency to merge every PR. 
I know you're not advocating that every PR be merged (or at least I hope you're not), but the only way to add more core developers is to have people who are already volunteering to review other people's pull requests. "Core developer" is a title that is not so much a privilege as it is a responsibility, and it should only be given to those who are already volunteering time to contribute to pip and other PyPA projects through multiple efforts (e.g., reviewing other people's code - not just sending their own, supporting the project either on StackOverflow, irc, or somewhere else, etc.) For the short time that I subscribed to pip notifications I noticed a lot of PRs and a lot of pings but very little serious critical review. Most PRs in that period of time seemed to be feature PRs and not bug fixes. Personally, bug fixes are more important than features, and not every new feature deserves to be merged; in fact, I would say many projects would probably do better by rejecting most new features. OpenStack's workflow is incredibly different from most other projects that aren't people's day jobs for a few reasons:

1. OpenStack is a globally distributed effort. Each project has hundreds of contributors and most only have 5-10 core reviewers, who tend to be nominated only after a lot of sweat has been poured into a project.
2. No one can commit directly to a repository.
3. There's a lot of automation around tests, but there's also a lot of ceremony in review.
4. New features are strongly vetted through a blueprint and specification process that core reviewers contribute to but do not drive (hence why people who are "core" on that process are called drivers).
5. There's a much clearer review system in place where votes actually matter and are aggregated.
6. People (like myself) are literally paid to work on OpenStack. Few people work on it in their free time except to keep their job.
7. pip is (in my opinion) far more stable than OpenStack and I would like to keep it that way.
I know there are OpenStack Infrastructure folks who monitor this list, but OpenStack continuously reaches into private areas of projects and when those change, OpenStack tends to get angry at upstream for not having backwards compatibility in non-public APIs. pip on the other hand doesn't do that. I'm sure Donald needs help. I don't think the right kind of help now is adding more people who can merge things arbitrarily. I think the right kind of help needs to be focused towards stability and we need a better way of defining what new features belong in pip and what don't to reduce the volume of pull requests that are deprioritized purely because they're features that may or may not have been discussed with PyPA cores beforehand. From p.f.moore at gmail.com Thu Mar 5 18:07:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 17:07:14 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On 5 March 2015 at 16:38, Marc Abramowitz wrote: > Thoughts? It seems to me that there is another point that delays progress on a certain proportion of PRs, specifically feature requests, namely that no-one really has that strong an opinion on whether they are a good idea or not. I know I've noticed a few requests recently where my reaction was essentially "I don't have a problem with this, but I don't care enough to actually add it" (it's not so much the mechanical aspect of merging the PR, but also the fact that you take on some level of "ownership" of the PR if you merge it). That's a much harder issue to resolve, as it revolves around having a clear roadmap for pip and a good understanding of scope - something we don't really have. But if we spend too long debating project direction and abstracts like that, we get even less done. Ian's point is also a good one. We get a lot of PRs for new features (which is where my point above applies) but far fewer for bug fixes. 
And there's also the problem with bug fixes that often the number of core devs with access to the affected platform is limited. (I try to cover Windows for Python 3, but I only have Linux access via setting up a VM, I have no access to OSX[1], and my interest in Python 2 is limited at best. I'm sure the other devs have similar constraints). But unlike Ian, I think we *do* need to look at features. Things like separating the build and install phases, defining a supportable API for pip (whether the CLI or importable), supporting the in-development PEPs like Metadata 2.0, external hosting, wheel improvements, etc., all need doing. The PEP process is the best way to handle this - if people on this list agree on a PEP, it's much easier to develop an implementation instead of just deciding on individual PRs in isolation. Paul. [1] Anyone want to buy me a mac? ;-) From qwcode at gmail.com Thu Mar 5 18:16:41 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Mar 2015 09:16:41 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: So I guess my suggestions boil down to: > > - Add more humans > - Add more money to make humans more efficient > - Add more computer automation > maybe agree to always maintain < X open issues and < Y open PRs, before adding features. where X can vary as needed, but for starters, X=250 and Y=25 sound reasonable. this would:

- force more work on issue and PR backlog
- force making the tough decisions on whether something is realistically going to be worked on and closing it
- force closing issues that are not getting the responses needed to actually work the problem.
- would more likely cycle in new folks to become committers.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From qwcode at gmail.com Thu Mar 5 19:11:28 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Mar 2015 10:11:28 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: > Currently there are no labels at all for any issue or PR. > there are labels https://github.com/pypa/pip/labels I put most of these last year. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Mar 5 19:14:55 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 18:14:55 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On 5 March 2015 at 17:16, Marcus Smith wrote: > So I guess my suggestions boil down to: >> >> >> - Add more humans >> - Add more money to make humans more efficient >> - Add more computer automation > > > maybe agree to always maintain < X open issues and < Y open PRs, before > adding features. > where x can vary as needed, but for starters, x=250, and y=25 sounds > reasonable. > this would: > - force more work on issue and PR backlog > - force making the tough decisions on whether something is realistically > going to be worked on and closing it > - force closing issues that are not getting the responses needed to > actually work the problem. > - would more likely cycle in new folks to become committers. That implies closing 183 issues and 65 PRs from where we are now. And when you say "adding features" presumably that means somehow forbidding people (core devs? we can't forbid anyone else...) from creating new PRs until we're below the limit. In general, I don't think it's practical in a volunteer-based project, to "force" people to hit specific targets. 
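[Editorial sketch] Paul's figures fall straight out of the proposed ceilings (X=250 open issues, Y=25 open PRs) and the then-current counts, which his reply implies were roughly 433 open issues and 90 open PRs. The arithmetic, with the counts passed in by hand rather than pulled from GitHub's API:

```python
def backlog_gap(open_issues, open_prs, max_issues=250, max_prs=25):
    """Return (issues_to_close, prs_to_close) to get under the ceilings."""
    return (max(0, open_issues - max_issues), max(0, open_prs - max_prs))
```

With the implied counts, ``backlog_gap(433, 90)`` gives ``(183, 65)``, i.e. Paul's "closing 183 issues and 65 PRs from where we are now".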
Having said that, though, more people doing tracker gardening would be a good thing - if anyone (core dev or not) wants to go through doing triage on PRs and issues and closing ones that can be closed, getting self-contained test cases for bugs, doing code reviews on PRs, etc, that would be great. Paul From contact at ionelmc.ro Thu Mar 5 19:07:09 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 5 Mar 2015 20:07:09 +0200 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: Triaging the issues would help. If, say, I'd like to help, it's very discouraging to look in a bug tracker and not be able to filter down on what's interesting or important. Some ideas for triaging goals:

* clear labels for issues that the maintainers want to be fixed (eg: if you fix it you'd get quick feedback, as a maintainer is interested in that issue).
* clear labels for issues that need discussion, have design issues, etc. (blocked issues, IOW).
* clear labels for what's a bug or a feature.
* clear labels for features that would be accepted but no maintainer has time for or is interested in (the "nice to have" issues).

Currently there are no labels at all for any issue or PR. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Thu, Mar 5, 2015 at 7:16 PM, Marcus Smith wrote: > So I guess my suggestions boil down to: > >> >> - Add more humans >> - Add more money to make humans more efficient >> - Add more computer automation >> > > > maybe agree to always maintain < X open issues and < Y open PRs, before > adding features. > where x can vary as needed, but for starters, x=250, and y=25 sounds > reasonable. > this would: > - force more work on issue and PR backlog > - force making the tough decisions on whether something is realistically > going to be worked on and closing it > - force closing issues that are not getting the responses needed to > actually work the problem.
> - would more likely cycle in new folks to become committers. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Thu Mar 5 19:17:05 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 5 Mar 2015 20:17:05 +0200 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: I've only looked at the first couple pages. The existing labels don't indicate clearly the first/second/fourth goals. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Thu, Mar 5, 2015 at 8:11 PM, Marcus Smith wrote: > > > >> Currently there are no labels at all for any issue or PR. >> > > > there are labels https://github.com/pypa/pip/labels > I put most of these last year. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Mar 5 19:28:24 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Mar 2015 10:28:24 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: > > That implies closing 183 issues and 65 PRs from where we are now. And > when you say "adding features" presumably that means somehow > forbidding people (core devs? we can't forbid anyone else...) from > creating new PRs until we're below the limit. > > In general, I don't think it's practical in a volunteer-based project, > to "force" people to hit specific targets. > you certainly can't be hard line about it, but I think it's a practical rule and achievable. you just don't work on new non-critical things until you get the issue and PR counts down. As much as I would like to see pypa have a "system" of issue and PR review, I think a simple rule like this is more achievable right now.
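[Editorial sketch] The reminder bot Marc floats earlier in the thread could be built around a filter as small as this. The record shape mirrors GitHub's pull-request API output (``updated_at`` is a real field there); the 30-day idle threshold is an invented example value:

```python
from datetime import datetime, timedelta

def stale_pulls(pulls, now, max_idle_days=30):
    """Return the PRs with no activity for more than max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [
        pr for pr in pulls
        if datetime.strptime(pr["updated_at"], "%Y-%m-%dT%H:%M:%SZ") < cutoff
    ]
```

A bot would fetch the open PRs, run them through this, and post one reminder per stale PR (or a digest to the list).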
-------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Thu Mar 5 19:21:45 2015 From: randy at thesyrings.us (Randy Syring) Date: Thu, 05 Mar 2015 13:21:45 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: <54F89EB9.5070507@thesyrings.us> On 03/05/2015 12:07 PM, Paul Moore wrote: > It seems to me that there is another point that delays progress on a > certain proportion of PRs, specifically feature requests, namely that > no-one really has that strong an opinion on whether they are a good > idea or not. I know I've noticed a few requests recently where my > reaction was essentially "I don't have a problem with this, but I > don't care enough to actually add it" An "issue before feature request PR" policy might help here. Before someone submits the feature request PR, have them open an issue for discussion of the feature. Recommend/Require that at least one core dev be strongly in favor before giving approval to open the PR. Automatically close these issues without the approval after some amount of inactivity. Automatically close any feature request PRs that don't have an approved issue associated with them. Put up an explanation of this policy, with sympathy that it may cause frustration, but noting that with limited time, devs must focus their efforts, etc. The closing of issues & PRs can be automated, and core devs can focus on the items they think bring real value. Bug fix PRs now stand out more. My $0.02. *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From qwcode at gmail.com Thu Mar 5 19:33:18 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Mar 2015 10:33:18 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: <54F89EB9.5070507@thesyrings.us> References: <54F89EB9.5070507@thesyrings.us> Message-ID: On Thu, Mar 5, 2015 at 10:21 AM, Randy Syring wrote: > > > On 03/05/2015 12:07 PM, Paul Moore wrote: > > It seems to me that there is another point that delays progress on a > certain proportion of PRs, specifically feature requests, namely that > no-one really has that strong an opinion on whether they are a good > idea or not. I know I've noticed a few requests recently where my > reaction was essentially "I don't have a problem with this, but I > don't care enough to actually add it" > > > An "issue before feature request PR" policy might help here. Before > someone submits the feature request PR, have them open an issue for > discussion of the feature. Recommend/Require that at least one core dev be > strongly in favor before giving approval to open the PR. Automatically > close these issues without the approval after some amount of inactivity. > Automatically close any feature request PRs that don't have an approved > issue associated with them. > > Put up an explanation of this policy, sympathy that it may cause > frustration, but that with limited time, devs must focus their efforts, etc. > +1. good idea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Mar 5 19:58:49 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 13:58:49 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> > On Mar 5, 2015, at 11:38 AM, Marc Abramowitz wrote: > > Thoughts?
So I agree with Paul in that part of the reason why some of the PRs languish is that they are adding new features that none of us feel strongly about, so nobody wants to push the button to say "yes, we're going to support this". Being conservative with features is generally a good thing for a (mostly) volunteer project because we do have a limited amount of manpower and more features increase the surface area we have to take care of. We also tend to have a problem saying no to features and tend to just let them languish instead of making a decision one way or the other when we don't feel strongly. Another issue we have is that often we get requests (bug fixes or otherwise) which break tests or which themselves do not have tests written. We can't merge these as is, but it's not unusual for the original author to not update their PR and leave it languishing with broken/no tests. It's hard to want to close these as "abandoned" because there's code that may or may not be working if someone takes the time to go through and finish it, but until someone feels like doing that the PR is not able to be merged. Yet another issue is that pip's test suite is not particularly very good. We're missing a lot of coverage and we don't have *any* CI running on platforms other than Ubuntu. This means that merging things is somewhat "dangerous" because it's easy to break things without noticing unless you pull down the change and manually test things you can try. Even then that's not good enough unless you can test it on other platforms as well. I'm sure Paul can fill in the blank on how often the test suite simply doesn't run on Windows because of some POSIX assumption snuck in somewhere. Another issue (that sort of ties in with the test one) is that pip's code base is simply not very good. Things are not well encapsulated and it's non-obvious what's going to change anytime you change something.
There was recently an issue where changing the order we checked for things to be uninstalled in broke an assumption that a lot of people were relying on. Obviously the biggest limiting factor is simple manpower, and most of the above issues deal with things that make our use of what manpower we have more inefficient. I think that the best way to increase the momentum is to explore ways to make our use of the manpower we have more effective and reduce the waste. I don't think that an artificial limit on the number of issues or pull requests is a good path forward. Since we're more or less entirely volunteer besides myself, I feel like an artificial limit on numbers will just mean that things don't happen. I feel like if someone doesn't want to close issues for whatever reasons, trying to force them to do that will just mean they are more likely to choose to spend their time elsewhere. I also worry that anything which autocloses PRs (after a period of time, or if there wasn't an associated issue, or whatever) is somewhat hostile to potential contributors, and I feel like having an open PR isn't that big of a deal in general. I think the most effective way forward is that we need to work on fixing the things that needlessly suck up the pip core team's time. Things that would be huge improvements in this area: * Refactoring pip to better encapsulate and separate concerns, creating boundaries between different parts[1] * Improving the test suite by covering cases that aren't being covered, moving functional tests to unit tests * Creating a high level test suite that runs setuptools, pip, virtualenv, etc all together against real projects. * Helping to get CI for other systems set up, particularly for Windows. This may take the shape of helping Travis, or it may be setting up another service or our own CI system. Other things that would help are: * People doing in-depth reviews of the current PRs that are there and suggesting changes or pointing out issues, etc.
* People triaging issues (unfortunately this one isn't super easy with GitHub Issues since you have to be a committer to change these things). * People going through and reviewing old issues and PRs to try and figure out if the situations that caused them to be opened originally still apply or if that problem has been fixed or if the code has changed significantly enough that it's likely to no longer exist. These sorts of things would make it *much* easier to merge new things because there would be less risk and less things involved in actually going through and figuring out if any particular merge is a good idea or not. I also think that people willing to put in the work to do things like this would be good candidates for becoming core developers themselves, which would also help by increasing the number of people we have able to review and commit. [1] http://pyvideo.org/video/1670/boundaries --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Mar 5 20:08:52 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 19:08:52 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <54F89EB9.5070507@thesyrings.us> Message-ID: On 5 March 2015 at 18:33, Marcus Smith wrote: >> An "issue before feature request PR" policy might help here. Before >> someone submits the feature request PR, have them open an issue for >> discussion of the feature. Recommend/Require that at least one core dev be >> strongly in favor before giving approval to open the PR. Automatically >> close these issues without the approval after some amount of inactivity.
>> Automatically close any feature request PRs that don't have an approved issue >> associated with them. >> >> Put up an explanation of this policy, expressing sympathy that it may cause >> frustration, but noting that with limited time, devs must focus their efforts. > > > +1. good idea. Agreed, good idea. We could probably also introduce some similar guidelines over issues / bug reports. When reporting a bug, please provide clear instructions on how to reproduce the bug, details of platforms known to be affected, etc. If that information isn't available, and the OP doesn't respond after being prompted for the information, we'll close the bug after a given period. On a related note, maybe we need a better definition of what platforms/configurations we support. For example, do we support using pip with older versions of setuptools? ("Please upgrade setuptools and confirm if the problem still exists; if it doesn't, then that's the fix and we'll close the issue") Platform-specific issues are the hardest to deal with - if something comes up that only affects Gentoo Linux, none of the core devs have any means of reproducing it without creating a VM, working out how to install Gentoo, etc... "Not supported" doesn't have to mean we won't help, but it might mean that we label the issue somehow and need 3rd party help (either from the OP or someone else) to progress the issue, and if nobody steps forward after a certain amount of time, we close the issue (or just leave it open - if it's labelled appropriately we can exclude it from any measures we want to make of open issues we *expect* to work on). Paul.
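[Editorial illustration] The auto-closing workflow proposed in this thread (close inactive issues, and the PRs tied to them, after a prompt goes unanswered) could be scripted against GitHub's REST API. Below is a minimal sketch using only the standard library; the "feature request" label, the repo/token handling, and the 30-day window are illustrative assumptions, not actual pip project conventions:

```python
import json
import urllib.request
from datetime import datetime, timedelta

GITHUB_API = "https://api.github.com"


def is_stale(updated_at, now, days=30):
    """True if a GitHub ISO-8601 timestamp is more than `days` old."""
    last_activity = datetime.strptime(updated_at, "%Y-%m-%dT%H:%M:%SZ")
    return now - last_activity > timedelta(days=days)


def close_stale_issues(owner, repo, token, days=30):
    """Close open issues labelled 'feature request' with no recent activity.

    The label name is hypothetical; a real deployment would use whatever
    label the project applies during triage.
    """
    headers = {"Authorization": "token " + token}
    url = ("%s/repos/%s/%s/issues?state=open&labels=feature+request"
           % (GITHUB_API, owner, repo))
    with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as resp:
        issues = json.loads(resp.read().decode("utf-8"))
    now = datetime.utcnow()
    for issue in issues:
        if is_stale(issue["updated_at"], now, days):
            # PATCH with {"state": "closed"} closes the issue.
            patch = urllib.request.Request(
                "%s/repos/%s/%s/issues/%d"
                % (GITHUB_API, owner, repo, issue["number"]),
                data=json.dumps({"state": "closed"}).encode("utf-8"),
                headers=headers,
                method="PATCH",
            )
            urllib.request.urlopen(patch)
```

Run periodically (e.g. from cron), the same `is_stale` check could also drive closing feature-request PRs that never acquired an approved issue, as suggested above.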
From qwcode at gmail.com Thu Mar 5 20:27:16 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Mar 2015 11:27:16 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: > I don't think that an artificial limit on the number of issues or pull > requests is a good path forward. > I would say "reasonable" limit, not "artificial". : ) It's a simple way to balance the efforts towards project maintenance. > I feel like if someone doesn't want to close issues for whatever > reasons, trying to force them to do that will just mean they are more > likely to choose to spend their time elsewhere. > The counter argument is that by not responding to issues and PRs, contributors end up going elsewhere. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Mar 5 20:28:54 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 14:28:54 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: <3C17CBC0-DCEA-4126-9EF9-96DC021FFE73@stufft.io> > On Mar 5, 2015, at 2:27 PM, Marcus Smith wrote: > > > > > > I don't think that an artificial limit on the number of issues or pull requests is a good path forward. > > I would say "reasonable" limit, not "artificial". : ) > It's a simple way to balance the efforts towards project maintenance. > > I feel like if someone doesn't want to close issues for whatever reasons, trying to force them to do that will just mean they are more likely to choose to spend their time elsewhere. > > The counter argument is that by not responding to issues and PRs, contributors end up going elsewhere.
> > Sure, but if we spend our time elsewhere they aren't going to get responses either ;) --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From msabramo at gmail.com Thu Mar 5 20:32:32 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Thu, 5 Mar 2015 11:32:32 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: > * Refactoring pip to better encapsulate and separate concerns, creating boundaries between different parts These of course are a drop in the bucket of what could be done: - https://github.com/pypa/pip/pull/2404 - https://github.com/pypa/pip/pull/2410 - https://github.com/pypa/pip/pull/2411 Now probably `install` is the one that would add the most value and I briefly thought of doing that but then I thought to myself that there are so many open PRs already and one for `install` would probably break a whole bunch of them. Also I don't want to have too many open ones because I just don't like having too many open loops. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Mar 5 20:34:12 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 19:34:12 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: On 5 March 2015 at 18:58, Donald Stufft wrote: > Yet another issue is that pip's test suite is not particularly very good. > We're missing a lot of coverage and we don't have *any* CI running on > platforms other than Ubuntu.
This means that merging things is somewhat > "dangerous" because it's easy to break things without noticing unless you > pull down the change and manually test things you can try. Even then that's > not good enough unless you can test it on other platforms as well. I'm sure > Paul can fill in the blank on how often the test suite simply doesn't run on > Windows because of some POSIX assumption snuck in somewhere. The test suite is pretty much broken on Windows, from what I recall. I intended at one point to try to get it running cleanly on Windows, but it was soul-destroyingly hard work, and I never got very far with it. Given that there's no good Windows CI service (Appveyor is great but it's very slow even on simple projects, and I think it has limits on how long test suites can run so I'm not even sure we could run pip's tests on it) I fear that any work done getting the test suite working on Windows would pretty quickly regress... If there was one thing on the infrastructure and support side of things that would help enormously, it would be someone setting up CI services for more platforms - Windows in particular, but things like the ancient RHEL systems that people keep having issues on would also be good. And resource willing to get the test suite working on those platforms. (I'd be happy to help someone work on fixing the test suite on Windows, but I really don't have the time to do it all myself). > Other things that would help are: > > * People doing in-depth reviews of the current PRs that are there and > suggesting changes or pointing out issues, etc. Very much so. Anyone can add review comments to PRs. We could (and probably should) document some quality guidelines for PRs (must include a test, must include docs if there's a user-visible impact, must be cross-platform, must work on Python 2 and 3). Having more people who can test PRs on a wider range of platforms (Windows!!!)
would be great too - a simple comment "checked and confirmed on Windows" is a great help. > * People triaging issues (unfortunately this one isn't super easy with > GitHub Issues since you have to be a committer to change these things). Hmm, that's a problem - but yes, even if they can only add comments, saying "Please close as unreproducible", "Duplicate of XXX", "Please add label YYY" would be helpful. The committers could trawl such comments occasionally and action them. > * People going through and reviewing old issues and PRs to try and figure > out if the situations that caused them to be opened originally still apply > or if that problem has been fixed or if the code has changed significantly > enough that it's likely to no longer exist. Oh yes, please. And in particular, the old issues with repeated "+1" or "me, too" comments, ping all the people who said "me too" and ask them if they can provide a patch. And weeding out issues that only apply when using ancient versions of setuptools, that sort of thing. > These sorts of things would make it *much* easier to merge new things > because there would be less risk and less things involved in actually going > through and figuring out if any particular merge is a good idea or not. I > also think that people willing to put in the work to do things like this > would be good candidates for becoming core developers themselves, which > would also help by increasing the number of people we have able to review > and commit.
+1 Paul From donald at stufft.io Thu Mar 5 20:37:06 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 14:37:06 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: <124687A6-D9B8-4A24-A176-C64C0D702BB8@stufft.io> > On Mar 5, 2015, at 2:32 PM, Marc Abramowitz wrote: > > > * Refactoring pip to better encapsulate and separate concerns, creating boundaries between different parts > > These of course are a drop in the bucket of what could be done: > > - https://github.com/pypa/pip/pull/2404 > - https://github.com/pypa/pip/pull/2410 > - https://github.com/pypa/pip/pull/2411 > > Now probably `install` is the one that would add the most value and I briefly thought of doing that but then I thought to myself that there are so many open PRs already and one for `install` would probably break a whole bunch of them. Also I don't want to have too many open ones because I just don't like having too many open loops. To be honest, I didn't so much mean the commands themselves. It's a minor improvement but it's largely shuffling deck chairs on the Titanic in my opinion. It doesn't meaningfully make things cleaner. The things I'm talking about are more about the internals of pip, pip.index, pip.download, pip.req.*, etc. These are the "core" parts of pip and that code is horrible and messy, and actually figuring out how to clean that up would be a major big deal. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Mar 5 20:38:06 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 19:38:06 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: On 5 March 2015 at 19:32, Marc Abramowitz wrote: >> * Refactoring pip to better encapsulate and separate concerns, creating >> boundaries between different parts > > These of course are a drop in the bucket of what could be done: > > - https://github.com/pypa/pip/pull/2404 > - https://github.com/pypa/pip/pull/2410 > - https://github.com/pypa/pip/pull/2411 > > Now probably `install` is the one that would add the most value and I > briefly thought of doing that but then I thought to myself that there are so > many open PRs already and one for `install` would probably break a whole > bunch of them. Also I don't want to have too many open ones because I just > don't like having too many open loops. Much more important (and I think what Donald was probably referring to) is the internal refactorings - properly encapsulating the finder, the various requirement objects, making "build from source" and "install from wheel" into clearly defined components. Personally, while I don't think there's anything particularly *wrong* with your 3 PRs mentioned above, my feeling is that they don't really add much. Given that nothing other than the pip code itself is supposed to use pip internal functions at this point in time, there's nobody who'll really gain from them.
Paul From donald at stufft.io Thu Mar 5 20:53:27 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 14:53:27 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: > On Mar 5, 2015, at 2:34 PM, Paul Moore wrote: > > On 5 March 2015 at 18:58, Donald Stufft wrote: >> Yet another issue is that pip?s test suite is not particularly very good. >> We?re missing a lot of coverage and we don?t have *any* CI running on >> platforms other than Ubuntu. This means that merging things is somewhat >> ?dangerous? because it?s easy to break things without noticing unless you >> pull down the change and manually test things you can try. Even then that?s >> not good enough unless you can test it on other platforms as well. I?m sure >> Paul can fill in the blank on how often the test suite simply doesn?t run on >> Windows because of some POSIX assumption snuck in somewhere. > > The test suite is pretty much broken on Windows, from what I recall. I > intended at one point to try to get it running cleanly on Windows, but > it was soul-destroyingly hard work, and I never got very far with it. > Given that there's no good Windows CI service (Appveyor is great but > it's very slow even on simple projects, and I think it has limits on > how long test suites can run so I'm not even sure we could run pip's > tests on it) I fear that any work done getting the test suite working > on Windows would pretty quickly regress... > > If there was one thing on the infrastructure and support side of > things that would help enormously, it would be someone setting up CI > services for more platforms - Windows in particular, but things like > the ancient RHEL systems that people keep having issues on would also > be good. And resource willing to get the test suite working on those > platforms. 
(I'd be happy to help someone work on fixing the test suite > on Windows, but I really don't have the time to do it all myself). Yea, I wouldn?t personally put effort into fixing the test suite on Windows without something in place to ensure it doesn?t break again. I know the folks behind Travis CI. I know they were looking at adding Windows support and said that if someone can write a go app that boots up a Windows Azure instance and then uses winrm to run a command on it, even echo, that would get them a big step of the way towards being able to support it. OSX and lots of other POSIX based systems are also not covered of course, it would be great to setup something that other people can contribute build machines to. We can spin up anything on a Rackspace cloud (and probably other clouds too) but some things people might care about (AIX?) we can?t do. I think it wouldn?t be unreasonable to say that for things we can?t run a builder ourselves for, that people who care about that platform needs to provide us with a suitable instance. None of that matters much though without something that allows us to run tests on more platforms than whatever Travis provides us. Ideally this would support PR based testing (which means it needs some sort of VM or isolation support to do it securely) but if some platforms can?t be easily virtualized like that then a post merge trigger is acceptable too. > >> Other things that would help are: >> >> * People doing in-depth reviews of the current PRs that are there and >> suggesting changes or pointing out issues, etc. > > Very much so. Anyone can add review comments to PRs. We could (and > probably should) document some quality guidelines for PRs (must > include a test, must include docs if there's a user-visible impact, > must be cross-platform, must work on Python 2 and 3). Having more > people who can test PRs on a wider range of platforms (Windows!!!) 
> would be great too - a simple comment "checked and confirmed on > Windows" is a great help. Yea, I don?t have a Windows machine so I?m often times just guessing if it works on Windows, or pinging you for it. For major things I can spin up a Windows VM but that takes a good 30+ minutes to do which again goes back to that our current setup has a lot of time wasters for pip core. > >> * People triaging issues (unfortunately this one isn?t super easy with >> GitHub Issues since you have to be a committer to change these things). > > Hmm, that's a problem - but yes, even if they can only add comments, > saying "Please close as unreproducible", "Duplicate of XXX", "Please > add label YYY" would be helpful. The committers could trawl such > comments occasionally and action them. I personally get emails for every issue, closing duplicates or adding labels and such is something that takes 15 seconds to do if someone leaves a comment like that. Another option is to move our issue tracker off of Github into something else that supports non-committers being able to manage the issue tracker. Empowering non core to do more things is another thing that would be useful and requires someone to take the time to figure out how we can best do that (switch away from GH issues? To What?) and then actually do the work to make it happen (create salt states to deploy, create scripts to migrate etc). > >> * People going through and reviewing old issues and PRs to try and figure >> out if the situations that caused them to be opened originally still apply >> or if that problem has been fixed or if the code has changed significantly >> enough that it?s likely to not longer exist. > > Oh yes, please. And in particular, the old issues with repeated "+1" > or "me, too" comments, ping all the people who said "me too" and ask > them if they can provide a patch. And weeding out issues that only > apply when using ancient versions of setuptools, that sort of thing. 
I try to do this periodically but I mostly only do the ones that I can tell from a glance that are safe to close. > >> These sorts of things would make it *much* easier to merge new things >> because there would be less risk and less things involved in actually going >> through and figuring out if any particular merge is a good idea or not. I >> also think that people willing to put in the work to do things like this >> would be good candidates for becoming core developers themselves, which >> would also help by increasing the number of people we have able to review >> and commit. > > +1 > > Paul --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From msabramo at gmail.com Thu Mar 5 20:55:05 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Thu, 5 Mar 2015 11:55:05 -0800 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: Yeah my changes are quite trivial. And there is much more complex stuff that could be improved. IIRC, `pip/req/req_install.py` is the real behemoth. I remember feeling afraid to touch that thing. :) I do think this illustrates some of the problem though in that if those 3 very simple PRs are not merged or closed, then I don't have a lot of faith that any more complex PR that I might submit would be worth the time investment. I feel like I'm somewhat conditioned to only submit small PRs to pip because they have the lowest risk (although also the lowest reward of course). But I don't want to get too bogged down with focusing on me and my PRs. I think this involves everybody. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Thu Mar 5 20:58:53 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 14:58:53 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: <27D08487-88D1-4A34-8FF0-8EE209D74C74@stufft.io> > On Mar 5, 2015, at 2:53 PM, Donald Stufft wrote: > >> >> On Mar 5, 2015, at 2:34 PM, Paul Moore wrote: >> >> On 5 March 2015 at 18:58, Donald Stufft wrote: >>> Yet another issue is that pip?s test suite is not particularly very good. >>> We?re missing a lot of coverage and we don?t have *any* CI running on >>> platforms other than Ubuntu. This means that merging things is somewhat >>> ?dangerous? because it?s easy to break things without noticing unless you >>> pull down the change and manually test things you can try. Even then that?s >>> not good enough unless you can test it on other platforms as well. I?m sure >>> Paul can fill in the blank on how often the test suite simply doesn?t run on >>> Windows because of some POSIX assumption snuck in somewhere. >> >> The test suite is pretty much broken on Windows, from what I recall. I >> intended at one point to try to get it running cleanly on Windows, but >> it was soul-destroyingly hard work, and I never got very far with it. >> Given that there's no good Windows CI service (Appveyor is great but >> it's very slow even on simple projects, and I think it has limits on >> how long test suites can run so I'm not even sure we could run pip's >> tests on it) I fear that any work done getting the test suite working >> on Windows would pretty quickly regress... >> >> If there was one thing on the infrastructure and support side of >> things that would help enormously, it would be someone setting up CI >> services for more platforms - Windows in particular, but things like >> the ancient RHEL systems that people keep having issues on would also >> be good. 
And resource willing to get the test suite working on those >> platforms. (I'd be happy to help someone work on fixing the test suite >> on Windows, but I really don't have the time to do it all myself). > > Yea, I wouldn?t personally put effort into fixing the test suite on Windows > without something in place to ensure it doesn?t break again. > > I know the folks behind Travis CI. I know they were looking at adding Windows > support and said that if someone can write a go app that boots up a Windows > Azure instance and then uses winrm to run a command on it, even echo, that > would get them a big step of the way towards being able to support it. > > OSX and lots of other POSIX based systems are also not covered of course, > it would be great to setup something that other people can contribute > build machines to. We can spin up anything on a Rackspace cloud (and probably > other clouds too) but some things people might care about (AIX?) we can?t do. > I think it wouldn?t be unreasonable to say that for things we can?t run a > builder ourselves for, that people who care about that platform needs to > provide us with a suitable instance. > > None of that matters much though without something that allows us to run tests > on more platforms than whatever Travis provides us. Ideally this would support > PR based testing (which means it needs some sort of VM or isolation support to > do it securely) but if some platforms can?t be easily virtualized like that then > a post merge trigger is acceptable too. > >> >>> Other things that would help are: >>> >>> * People doing in-depth reviews of the current PRs that are there and >>> suggesting changes or pointing out issues, etc. >> >> Very much so. Anyone can add review comments to PRs. We could (and >> probably should) document some quality guidelines for PRs (must >> include a test, must include docs if there's a user-visible impact, >> must be cross-platform, must work on Python 2 and 3). 
Having more >> people who can test PRs on a wider range of platforms (Windows!!!) >> would be great too - a simple comment "checked and confirmed on >> Windows" is a great help. > > Yea, I don?t have a Windows machine so I?m often times just guessing if > it works on Windows, or pinging you for it. For major things I can spin > up a Windows VM but that takes a good 30+ minutes to do which again goes > back to that our current setup has a lot of time wasters for pip core. > >> >>> * People triaging issues (unfortunately this one isn?t super easy with >>> GitHub Issues since you have to be a committer to change these things). >> >> Hmm, that's a problem - but yes, even if they can only add comments, >> saying "Please close as unreproducible", "Duplicate of XXX", "Please >> add label YYY" would be helpful. The committers could trawl such >> comments occasionally and action them. > > I personally get emails for every issue, closing duplicates or adding > labels and such is something that takes 15 seconds to do if someone leaves > a comment like that. Another option is to move our issue tracker off of > Github into something else that supports non-committers being able to > manage the issue tracker. Empowering non core to do more things is another > thing that would be useful and requires someone to take the time to figure > out how we can best do that (switch away from GH issues? To What?) and then > actually do the work to make it happen (create salt states to deploy, create > scripts to migrate etc). Oh, and another thing that would empower non core to do things is either figuring out a way to allow non core to restart Travis CI builds (whether this is something we run that interacts with the Travis CI API, or helping get Travis CI to support authors restarting builds on their own PRs or whatever) or coming up with a proposal for something we can use instead of Travis that solves some of these issues. 
Yet another improvement would be to figure out how we can make the test suite not randomly fail as much. Likely this is going to involve finding the places where we're reaching out to places on the internet (PyPI, Github, etc) and instead mocking out that interaction or making a test server that provides whatever we're using that external service for and then spinning up a copy of that test server whenever we run the tests and run it against that instead. > >> >>> * People going through and reviewing old issues and PRs to try and figure >>> out if the situations that caused them to be opened originally still apply >>> or if that problem has been fixed or if the code has changed significantly >>> enough that it's likely to no longer exist. >> >> Oh yes, please. And in particular, the old issues with repeated "+1" >> or "me, too" comments, ping all the people who said "me too" and ask >> them if they can provide a patch. And weeding out issues that only >> apply when using ancient versions of setuptools, that sort of thing. > > I try to do this periodically but I mostly only do the ones that I can tell from > a glance that are safe to close. > >> >>> These sorts of things would make it *much* easier to merge new things >>> because there would be less risk and less things involved in actually going >>> through and figuring out if any particular merge is a good idea or not. I >>> also think that people willing to put in the work to do things like this >>> would be good candidates for becoming core developers themselves, which >>> would also help by increasing the number of people we have able to review
>> >> +1 >> >> Paul > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Thu Mar 5 21:09:17 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Mar 2015 20:09:17 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: On 5 March 2015 at 19:53, Donald Stufft wrote: >> Hmm, that's a problem - but yes, even if they can only add comments, >> saying "Please close as unreproducible", "Duplicate of XXX", "Please >> add label YYY" would be helpful. The committers could trawl such >> comments occasionally and action them. > > I personally get emails for every issue, closing duplicates or adding > labels and such is something that takes 15 seconds to do if someone leaves > a comment like that.
The Python core has a system of granting tracker privileges, which means they can and do have a group of people who contribute via issue triage, reviews, etc. Many such people graduate to becoming core developers, and those that don't still provide a hugely valuable service. It's a shame GitHub doesn't have a way for us to do that, but of course setting up an alternative tracker would be yet another drain on our limited developer resources. Paul From graffatcolmingov at gmail.com Thu Mar 5 21:11:36 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Thu, 5 Mar 2015 14:11:36 -0600 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: On Thu, Mar 5, 2015 at 1:55 PM, Marc Abramowitz wrote: > Yeah my changes are quite trivial. And there is much more complex stuff that > could be improved. IIRC, `pip/req/req_install.py` is the real behemoth. I > remember feeling afraid to touch that thing. :) > > I do think this illustrates some of the problem though in that if those 3 > very simple PRs are not merged or closed, then I don't have a lot of faith > that any more complex PR that I might submit would be worth the time > investment. > > I feel like I'm somewhat conditioned to only submit small PRs to pip because > they have the lowest risk (although also the lowest reward of course). The lowest reward to whom? If it's a good change that fixes something (regardless of the size), that's a big reward to the project and its users. I'm also not going to go back in the thread to reply to all of the messages, but I want to be clear that I didn't mean every feature should be rejected. Just that for a project as critical to an ecosystem like Python's, rejecting features should be fairly easy to do and very simple. If no current core developer wants to maintain it or feels strongly about it, that should be closed.
A nice, already written, form explanation would probably antagonize contributors less than a short reasoning that might come off curt. That said, you'll always lose some contributors by closing their pull requests or by trying to help them get it into a good state. In my opinion, gigantic PRs that refactor things should be rejected immediately. Those can almost certainly always be done in smaller, easier to review, and easier to understand pull requests. Mega-refactors that will take hours (or even days) to review and that modify all of the tests should be absolutely out of the question if pip is going to come up with better guidelines. requests has received a handful of them, and we've tried to coach those people (who are often first-time contributors) into breaking it up, but they always leave, and the project has to realize that sometimes it's an acceptable loss. Having clear guidelines is great; they should be enforced and they should be linked somewhere, like the top of the README. Empowering people to close, label, and triage issues is going to be much harder. There are only two systems I can think of that allow for this: Launchpad and Trac. Trac is Django's tracker (in case people aren't aware) and Django has found a way to integrate it with GitHub. We /could/ take that approach, but that leaves the following questions: - Who is going to maintain the server(s)? - Who is going to provision them initially? - What happens if those people (ideally it's more than just one person) disappear? Will we have enough people to reduce the bus factor? - Will this ever become yet another thing that the core developers have to spend time on instead of reviewing pull requests and fixing bugs? Regarding auto-closing, please don't do it. Especially for the ones where someone didn't file a bug first, it may be the only way to figure out what's going on. And for CI, we clearly need people who will help with the Windows CI solution on more than one front.
I think pyca/cryptography has OS X builders on Travis, so we could probably add tests for that, but for RHEL that's going to be much harder. I think this is where Marc's reference to OpenStack fits in perfectly though. In OpenStack there's a concept of third party CI. (There's a similar notion with the buildbots for CPython.) The people who register those CI systems maintain them but the project determines whether or not that system's failing build should count against a change or not. GitHub allows for multiple statuses to be set on a commit/PR from multiple services. Thoughts? From randy at thesyrings.us Thu Mar 5 21:17:25 2015 From: randy at thesyrings.us (Randy Syring) Date: Thu, 05 Mar 2015 15:17:25 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: <54F8B9D5.8090008@thesyrings.us> On 03/05/2015 03:11 PM, Ian Cordasco wrote: > There are only two systems I can think of > that allow for this: Launchpad and Trac. There is also Atlassian Jira; they give free accounts for open source projects. You could also stick with GitHub and just give people commit rights and tell them not to commit, only manage issues. It's easy enough to back out changes and then ban bad actors. *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ From donald at stufft.io Thu Mar 5 21:31:02 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Mar 2015 15:31:02 -0500 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> > On Mar 5, 2015, at 3:11 PM, Ian Cordasco wrote: > > On Thu, Mar 5, 2015 at 1:55 PM, Marc Abramowitz wrote: >> Yeah my changes are quite trivial.
And there is much more complex stuff that >> could be improved. IIRC, `pip/req/req_install.py` is the real behemoth. I >> remember feeling afraid to touch that thing. :) >> >> I do think this illustrates some of the problem though in that if those 3 >> very simple PRs are not merged or closed, then I don't have a lot of faith >> that any more complex PR that I might submit would be worth the time >> investment. >> >> I feel like I'm somewhat conditioned to only submit small PRs to pip because >> they have the lowest risk (although also the lowest reward of course). > > The lowest reward to whom? If it's a good change that fixes something > (regardless of the size that's a big reward to the project and its > users. > > I'm also not going to go back in the thread to reply all of the > messages, but I want to be clear that I didn't mean every feature > should be rejected. Just that for a project as critical to an > ecosystem like Python's, rejecting features should be fairly easy to > do and very simple. If no current core developer wants to maintain it > or feels strongly about it, that should be closed. A nice, already > written, form explanation would probably antagonize contributors less > than a short reasoning that might come off curt. That said, you'll > always lose some contributors by closing their pull requests or by > trying to help them get it into a good state. > > In my opinion, gigantic PRs that refactor things should be rejected > immediately. Those can almost certainly always be done in smaller, > easier to review, and easier to understand pull requests. > Mega-refactors that will take hours (or even days) to review that > modify all of the tests should be absolutely out of the question if > pip is going to come up with better guidelines. 
requests has received > a handful of them, and we've tried to coach those people (who are > often first-time contributors) into breaking it up but they always > leave, and the project has to realize that sometimes it's an > acceptable loss. > > Having clear guidelines is great, they should be enforced and they > should be linked somewhere, like the top of the README. GitHub has CONTRIBUTING.rst, which is an obvious place to put something like this. Giant mega PRs are certainly harder to merge than small PRs, but the benefit of the PR itself also matters. A small PR that doesn't seem to improve things much or is just shuffling around code is less useful than a PR that reorganizes some of the code to be cleaner. Sadly, with how the code in pip is written, sometimes it's just not reasonable to make small PRs because things are not well factored and changing things requires touching a lot of different areas. Those kinds of PRs are OK as a last resort, but ideally the PR will be opened up early on as a WIP, and it should still be kept as small as possible. > > Regarding empowering people to close, label, and triage issues is > going to be much harder. There are only two systems I can think of > that allow for this: Launchpad and Trac. Trac is Django's tracker (in > case people aren't aware) and Django has found a way to integrate it > with GitHub. We /could/ take that approach, but that leaves the > following questions: > > - Who is going to maintain the server(s)? > - Who is going to provision them initially? > - What happens if those people (ideally it's more than just one > person) disappear? Will we have enough people to reduce the bus > factor? > - Will this ever become yet another thing that the core developers > have to spend time on instead of reviewing pull requests and fixing > bugs? Hosted services are ideal for this, though that limits things a bit, but hosted services of OSS software are even better of course.
There are a number of bug trackers that can be configured to allow open registration so that anyone who registers can modify ticket states. Maintaining things is easier than setting them up (I already maintain stuff for the PSF, so adding more stuff isn't that big of a deal); the major thing is going through all the different solutions, figuring out which ones best fit our needs, making sure it can be configured via salt instead of via a UI, etc., and making a proposal to actually switch to it and what the fallout and work required to do that would be. > > Regarding auto-closing, please don't do it. Especially for the ones > where someone didn't file a bug first, it may be the only way to > figure out what's going on? > > And for CI, we need people who will help with the windows CI solution > on more than one front clearly. I think pyca/cryptography has OS X > builders on Travis, so we could probably add tests for that, but for > RHEL that's going to be much harder. I think this is where Marc's > reference to OpenStack fits in perfectly though. In OpenStack there's > a concept of third party CI. (There's a similar notion with the > buildbots for CPython.) The people who register those CI systems > maintain them but the project determines whether or not that system's > failing build should count against a change or not. GitHub allows for > multiple statuses to be set on a commit/PR from multiple services. > Thoughts? Assuming we can give someone permission to set a status on GitHub PRs without giving them push access or other access, I would be perfectly fine with a "third party CI" solution like that. We'd need to hash out exact requirements but it's certainly a possibility. I'd like to point out again that if anyone knows Go and can write a little Go program that spins up a Windows Azure server and runs a command using winrm, any command, even echo, we can give that to Travis to help them get Windows support.
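The "multiple statuses" mechanism under discussion here is GitHub's commit status API (POST /repos/:owner/:repo/statuses/:sha). A minimal sketch of what a third-party CI reporter might post follows; the endpoint and payload fields follow GitHub's statuses API, but the repository, context name, and token are made up, and the request is built rather than sent:

```python
import json
import urllib.request


def status_request(owner, repo, sha, state, context, token,
                   description="", target_url=""):
    """Build a request reporting a third-party CI result as a commit status.

    GitHub accepts one status per (commit, context) pair, which is what lets
    several independent CI systems report side by side on the same PR.
    """
    allowed = {"pending", "success", "error", "failure"}
    if state not in allowed:
        raise ValueError("state must be one of %s" % sorted(allowed))
    payload = {
        "state": state,
        "context": context,          # e.g. "third-party-ci/windows" (made up)
        "description": description,
        "target_url": target_url,    # link back to the external build log
    }
    req = urllib.request.Request(
        "https://api.github.com/repos/%s/%s/statuses/%s" % (owner, repo, sha),
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
    )
    req.add_header("Authorization", "token %s" % token)
    req.add_header("Content-Type", "application/json")
    return req
```

The token only needs status scope, so a third-party system can report results without ever getting push access to the repository.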
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ben+python at benfinney.id.au Thu Mar 5 23:00:13 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 06 Mar 2015 09:00:13 +1100 Subject: [Distutils] Implementing large changes in small increments (was: Getting more momentum for pip) References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> Message-ID: <85twxzyuj6.fsf_-_@benfinney.id.au> Donald Stufft writes: > Sadly with how the code in pip is written, sometimes it's just not > reasonable to make small PRs because things are not well factored and > changing things requires touching a lot of different areas. I've seen a number of other projects enforce "small revisions only, otherwise your change gets accepted". If actually enforced, it is a highly successful way to get meaningful review of changes, and does not appear to limit the scope of the eventual change. What does end up happening in such projects (e.g., Linux) is the community learns how to -- and teaches newcomers how to -- implement large changes as smaller refactorings, each of which results in a working system. I think the Pip developers should not fear the loss of large changes. Large changes can always be implemented as a series of small, understandable changes, if skill and design effort are brought to bear. The resulting large changes also end up being better examined and better designed. -- \ "Well, my brother says Hello. So, hooray for speech therapy."
| `\ --Emo Philips | _o__) | Ben Finney From ben+python at benfinney.id.au Thu Mar 5 23:06:35 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 06 Mar 2015 09:06:35 +1100 Subject: [Distutils] Implementing large changes in small increments References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> Message-ID: <85pp8nyu8k.fsf@benfinney.id.au> Ben Finney writes: > Donald Stufft writes: > > > Sadly with how the code in pip is written, sometimes it's just not > > reasonable to make small PRs because things are not well factored and > > changing things requires touching a lot of different areas. > > I've seen a number of other projects enforce "small revisions only, > otherwise your change gets accepted". If actually enforced, it is a > highly successful way to get meaningful review of changes, and does not > appear to limit the scope of the eventual change. That's "small changes only, otherwise your change gets rejected", of course. The policy for Linux that I alluded to is in §3 of this document: 3) Separate your changes. ------------------------- Separate each _logical change_ into a separate patch. [...] The point to remember is that each patch should make an easily understood change that can be verified by reviewers. Each patch should be justifiable on its own merits. If one patch depends on another patch in order for a change to be complete, that is OK. Simply note "this patch depends on patch X" in your patch description. [...] I would be happy to see more projects adopt this, and enforce it, for change contributions. -- \ "Reality must take precedence over public relations, for nature | `\ cannot be fooled." --Richard P.
Feynman, _Rogers' Commission | _o__) Report into the Challenger Crash_, 1986-06 | Ben Finney From greg.ewing at canterbury.ac.nz Fri Mar 6 02:16:21 2015 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 06 Mar 2015 14:16:21 +1300 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: <85pp8nyu8k.fsf@benfinney.id.au> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85pp8nyu8k.fsf@benfinney.id.au> Message-ID: <54F8FFE5.4000307@canterbury.ac.nz> On 03/06/2015 11:06 AM, Ben Finney wrote: > That's "small changes only, otherwise your change gets rejected", of > course. Yes, otherwise submitting a patch that replaces the entire source code of Python with Ruby would be a sure-fire way to get it accepted. :-) -- Greg From graffatcolmingov at gmail.com Fri Mar 6 03:51:27 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Thu, 5 Mar 2015 20:51:27 -0600 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: <54F8FFE5.4000307@canterbury.ac.nz> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85pp8nyu8k.fsf@benfinney.id.au> <54F8FFE5.4000307@canterbury.ac.nz> Message-ID: Wait, I have an idea. Let's rewrite pip in Rust! ;) On Thu, Mar 5, 2015 at 7:16 PM, Greg Ewing wrote: > On 03/06/2015 11:06 AM, Ben Finney wrote: > >> That's "small changes only, otherwise your change gets rejected", of >> course. > > > Yes, otherwise submitting a patch that replaces the entire source > code of Python with Ruby would be a sure-fire way to get it > accepted.
:-) > > -- > Greg > From ncoghlan at gmail.com Fri Mar 6 11:56:44 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 6 Mar 2015 20:56:44 +1000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On 6 March 2015 at 03:16, Marcus Smith wrote: > So I guess my suggestions boil down to: >> >> >> - Add more humans >> - Add more money to make humans more efficient >> - Add more computer automation > > > > maybe agree to always maintain < X open issues and < Y open PRs, before > adding features. > where x can vary as needed, but for starters, x=250, and y=25 sounds > reasonable. This is one of the key differences between open source projects and corporate projects. In a corporate context, it's important to keep your backlog under control by saying "we're never going to invest time in fixing this" and declare such issues as "won't fix". In a community driven open source project, the situation is different. Here, because everyone is free to spend their time however they like (or however they can persuade someone to pay them), the only real reason to "won't fix" an issue is because it's a genuinely bad idea (or based on a flawed understanding of the situation) and there's no way to redeem the suggestion. For anything else, it's often better to leave it open as an opportunity for someone to persuade the core developers that it's worth accepting the liability of maintaining that code into the future - those that do the work, make the rules. It does require a lot of education work, though. Most folks (especially when just starting to learn to code) are inclined to think that contributing more code is purely beneficial. It's not, as code is a liability that costs you long term effort to maintain.
As Jack Diederich puts it: "code is the enemy and you want as little of it in your product as possible". What you actually want is to *solve people's problems in the general case*, such that the net gain in time saved across the user community vastly outweighs the maintenance cost of allowing that code to exist in the project. For mature projects with a fairly well defined scope, this means the default answer is going to be "no", and the most positive likely initial answer is "maybe". Hence the split of the core CPython mailing lists into python-dev (where the default answer is just a straight up "no") and python-ideas (where the default answer is more along the lines of "that's a potentially interesting idea, let's discuss it further"). This is arguably a flaw in more recent approaches to getting folks engaged in open source projects, with a focus on quick wins and immediate contributions. While those do exist in most projects, they're generally about implementing *existing* ideas, or improving docs, or fixing bugs. For folks that want to get a *specific* issue fixed, then what they generally need to do is to learn what the core contributors care about, and how to get them interested in addressing that problem. There isn't the simple "I'm a paying customer, this is a problem I'm reporting, we have an SLA, please resolve my issue in accordance with that" dynamic that exists in a traditional vendor relationship. While it's only been intermittently effective, one useful tactic CPython core developers have occasionally applied is a "5-for-1" review trade: if someone else reviews 5 open patches, then they'll review one that the new reviewer cares about. That's a pretty straightforward quid pro quo, and gets people into the habit of contributing time to things others are interested in so as to get time contributed back to their own issues in return. It relies on core contributors having the spare capacity to offer that deal though. 
pythonmentors.com describes another approach, which allows folks to self-identify as wanting to invest time in becoming a core committer/reviewer themselves, rather than just wanting to get a drive-by patch merged. Those folks then gain the benefit of getting the attention of the more active mentors in the core developer group, as well as a community of like-minded peers all attempting to learn the tricks of navigating the vagaries of CPython core development. Another way is when an existing contributors deliberately recruits someone they trust to take over a task from them. In those cases, the mentorship relationship already formed outside the particular community of contributors, and is transferred into the new context. Finally, there's straight up trading of favours: if you contribute something an existing core contributor wants themselves, a) you'll likely get their attention on that original review pretty easily; and b) if you have something *else* you want reviewed, they're far more likely to try to find the time if you're already helped them cross a lingering item off their generally voluminous todo lists. Regards, Nick. P.S. The other useful thing to do is to better educate folks on how to make the case for spending work time on upstream projects and key dependencies, *without* a specific near term business need. 
I'm pretty happy with this piece I recently wrote for Red Hat as an example of that kind of thing: http://community.redhat.com/blog/2015/02/the-quid-pro-quo-of-open-infrastructure/ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Mar 6 12:37:31 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 6 Mar 2015 21:37:31 +1000 Subject: [Distutils] Implementing large changes in small increments (was: Getting more momentum for pip) In-Reply-To: <85twxzyuj6.fsf_-_@benfinney.id.au> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> Message-ID: On 6 March 2015 at 08:00, Ben Finney wrote: > Donald Stufft writes: > >> Sadly with how the code in pip is written, sometimes it?s just not >> reasonable to make small PRs because things are not well factored and >> changing things requires touching a lot of different areas. > > I've seen a number of other projects enforce ?small revisions only, > otherwise your change gets accepted?. If actually enforced, it is a > highly successful way to get meaningful review of changes, and does not > appear to limit the scope of the eventual change. > > What does end up happening in such projects (e.g., Linux) is the > community learns how to ? and teaches newcomers how to ? implement large > changes as smaller refactorings, each of which results in a working > system. This is: a) a really good idea; and b) really painful without good tooling support Linux does it via emailed patchbombs, as do a lot of other open source projects which don't have a separate code review tool. That works if your contributors are used to *consuming* patches that way, but inapplicable to projects used to web based reviews. 
CPython uses the Rietveld instance integrated with bugs.python.org, and has the same problem as pip: incremental changes are a pain to publish, review, and merge, so we review and accept monolithic patches instead (cf. the problem statement in https://www.python.org/dev/peps/pep-0462/) While the main UI is very busy, I've actually quite liked my own experience with Gerrit for http://gerrit.beaker-project.org/ (I was the dev lead for Red Hat's Beaker hardware integration testing system from Oct 2012 until mid 2014, and the product owner until a couple of weeks ago). I've never used Gerrit in the OpenStack context though, so I don't know if Donald dislikes Gerrit in its own right, or just the way OpenStack uses it. That means one option potentially worth exploring might be http://gerrithub.io/. I haven't used GerritHub yet myself, but I'm pretty sure it lets you mix & match between GitHub PRs for simple changes and GerritHub reviews for more complex ones. The Beaker workflow is an example of vanilla Gerrit usage, rather than using OpenStack's custom fork: https://beaker-project.org/dev/guide/writing-a-patch.html#submitting-your-patch http://gerrit.beaker-project.org/#/c/4025 is an example of a fairly deep patch stack, where each patch can be reviewed independently, but later patches won't be merged until after earlier ones have been submitted. (Rebasing support is also baked directly into the tool) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Fri Mar 6 12:55:38 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Mar 2015 11:55:38 +0000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On 5 March 2015 at 16:38, Marc Abramowitz wrote: > This makes me think that the folks who review the PRs are overburdened > and/or the process needs a bit more structure (e.g.: each person thinks > someone else is going to review the PR and so no one does it).
One thing I, personally, find difficult when reviewing PRs (specifically feature requests) is the fact that I usually don't actually have a *need* for the functionality being proposed. It's very easy for me to say "this doesn't help me personally, so I'll ignore it", but that is ducking a big part of the responsibility of being a core committer. But forming a view on something I've no experience of or direct interest in is *hard*, and takes a lot of time. Discussions tend to involve a lot of people with strong opinions (e.g. the PR author) who can't move the change forward, and a few people with weaker opinions (e.g. me :-)) who can. It's very easy to think "just accept it because it helps someone". But that's a cop-out and long-term isn't a sustainable approach. It's not "thinking someone else will review the PR", it's more making a conscious decision on how much energy and effort I'm willing to put into a PR that doesn't have any benefit for me. (And even just *discussing* a PR can be a lot of energy, it's not easy to politely explain to someone that you don't think their use case, that they went to a lot of trouble writing a PR for, isn't worth it). What would help a *lot* is some sort of agreement on what pip is, and what its core goals are. Something similar to what it means to be "pythonic" for the Python language itself. At the moment, I don't think this is very clearly understood even within the core dev group (so external contributors have no hope...) And for me, it'd help avoid the endless debates that often start with the phrase "pip should..." For example, is the lack of a programmable API an issue for pip? I think it is, and having people able to write their own tools that use pip's finder, or its wheel installer, is a (long term) goal for pip, rather than, say, continually adding more pip subcommands. But I don't know if that's the consensus. 
And to my knowledge, no 3rd party PRs have *ever* been of the form "Encapsulate pip's functionality X in a clean API so I can use it in my script"... Or is the "pip search" command a wart that should be removed because it isn't pip's job to do PyPI searches? There's some low-hanging fruit if a more focused tool is the goal... Or should pip give you tools to replicate your current environment (pip freeze, requirements files)? What about "remove anything *not* in this requirement file"? Personally, I only use requirements files to bundle up "install this lot of stuff". I don't write the sort of thing where a "pin every dependency" philosophy is appropriate, so freeze isn't something I use. But lots of people do, so what's the workflow that pip freeze supports? The problem with discussing this sort of thing is that it's *very* wide-ranging, and tends to produce huge rambling mega-threads[1] when discussed in a public list. I'm not advocating any sort of private cabal deciding the fate of pip, but maybe somewhere where the core devs could agree their *own* opinions before having to face the public wouldn't be such a bad thing. That's more or less what I'd expected the pypa-dev list to be (as a parallel to the python-dev list) but it doesn't feel like it's turned out that way, maybe because it doesn't have a clear enough charter, or maybe because there's no obvious *other* place to direct people to for off-topic posts (like python-list is for python-dev). Or maybe grand designs are a distraction in themselves, and none of the core devs being interested in a PR means just that - not that they don't have the time, or that the use case isn't valid, or anything else. Just that they aren't interested, sorry. [1] Please, don't start a rambling mega-thread from *this* post :-) Paul PS I just spent way too long composing this email, and now I'm burned out. Maybe my time would have been better spent commenting on a couple of PRs...
From ben+python at benfinney.id.au Fri Mar 6 13:09:47 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 06 Mar 2015 23:09:47 +1100 Subject: [Distutils] Implementing large changes in small increments References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> Message-ID: <85ioeexr78.fsf@benfinney.id.au> Nick Coghlan writes: > CPython uses the Reitveld instance integrated with bugs.python.org, > and has the same problem as pip: incremental changes are a pain to > publish, review, and merge, so we review and accept monolithic patches > instead (cf the problem statement in > https://www.python.org/dev/peps/pep-0462/) Fair enough. I don't know of a good code review tool for Mercurial. > While the main UI is very busy, I've actually quite liked my own > experience with Gerrit for http://gerrit.beaker-project.org/ My understanding is that Gerrit makes it tedious to review a sequence of revisions, in proportion to the number of revisions in the sequence. If I understand correctly, such a sequence must have separate reviews for every revision, and an aggregate of all the changes is not available to the reviewer. I'm impressed by GitLab's code review tool UI; see an example at . The merge request page has tabs for the discussion, the commit log, and the overall diff ? and you choose from inline diff or side-by-side diff. GitLab is free software, including all its tools; anyone can set up a GitLab instance and the project data can move from one instance to another without loss. For the purposes of the past thread where some proposed migrating to the proprietary lock-in site GitHub, those objections don't exist with GitLab: a project can migrate to a different host and keep all the valuable data it accumulated. A move to GitLab would be unobjectionable, in my view. That it has good code review features would help the issues in this thread too. 
If anyone knows of equivalent hosting for Mercurial with equivalent code review tools under free-software terms with no lock-in, that would be even better I think. -- \ "Don't be misled by the enormous flow of money into bad defacto | `\ standards for unsophisticated buyers using poor adaptations of | _o__) incomplete ideas." --Alan Kay | Ben Finney From graffatcolmingov at gmail.com Fri Mar 6 14:38:19 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 07:38:19 -0600 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: Anatoly, We already ruled out AppVeyor On Mar 6, 2015 2:11 AM, "anatoly techtonik" wrote: > On Thu, Mar 5, 2015 at 11:11 PM, Ian Cordasco > wrote: > > And for CI, we need people who will help with the windows CI solution > > on more than one front clearly. > > https://ci.appveyor.com/ works for open source projects. > > > -- > anatoly t. From fungi at yuggoth.org Fri Mar 6 14:55:02 2015 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Mar 2015 13:55:02 +0000 Subject: [Distutils] Implementing large changes in small increments (was: Getting more momentum for pip) In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> Message-ID: <20150306135501.GA31001@yuggoth.org> On 2015-03-06 21:37:31 +1000 (+1000), Nick Coghlan wrote: [...] > I've never used Gerrit in the OpenStack context though, so I don't > know if Donald dislikes Gerrit in its own right, or just the way > OpenStack uses it. [...] Having talked with him about it regularly, I gather that he (and others) dislike the Gerrit/LKML "rebase, revise and refine your patch" workflow, instead preferring a Github-like "incrementally build on your pull request with new commits" workflow...
though presumably he can explain it in better detail. In my experience it comes down to a trade-off where the Github model is easier on patch submitters because they can just keep piling fixes for their pull request on top of it until the corresponding topic branch is suitable to merge, while the Gerrit model is easier on reviewers because they're reviewing a patch in context rather than a topic branch. > The Beaker workflow is an example of vanilla Gerrit usage, rather > than using OpenStack's custom fork: [...] OpenStack hasn't been running a fork of Gerrit since upgrading to 2.8 back in April 2014 (modulo a few simple backports from 2.9), and has plans to upgrade to 2.9 next month or the month after. That's not to say that there isn't a bunch of additional tooling and automation built up around it (the Zuul CI system in particular) but aside from some minimal theming and including a little Javascript to tie outside data sources into the interface it's just plain Gerrit. -- Jeremy Stanley From donald at stufft.io Fri Mar 6 17:22:08 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Mar 2015 11:22:08 -0500 Subject: [Distutils] Implementing large changes in small increments (was: Getting more momentum for pip) In-Reply-To: <20150306135501.GA31001@yuggoth.org> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <20150306135501.GA31001@yuggoth.org> Message-ID: <98302E35-647E-4E64-B75E-BD7DEA083D35@stufft.io> > On Mar 6, 2015, at 8:55 AM, Jeremy Stanley wrote: > > On 2015-03-06 21:37:31 +1000 (+1000), Nick Coghlan wrote: > [...]
> > Having talked with him about it regularly, I gather that he (and > others) dislike the Gerrit/LKML "rebase, revise and refine your > patch" workflow, instead preferring a Github-like "incrementally > build on your pull request with new commits" workflow... though > presumably he can explain it in better detail. > > In my experience it comes down to a trade-off where the Github model > is easier on patch submitters because they can just keep piling > fixes for their pull request on top of it until the corresponding > topic branch is suitable to merge, while the Gerrit model is easier > on reviewers because they're reviewing a patch in context rather > than a topic branch. > >> The Beaker workflow is an example of vanilla Gerrit usage, rather >> than using OpenStack's custom fork: > [...] > > OpenStack hasn't been running a fork of Gerrit since upgrading to > 2.8 back in April 2014 (modulo a few simple backports from 2.9), and > has plans to upgrade to 2.9 next month or the month after. That's > not to say that there isn't a bunch of additional tooling and > automation built up around it (the Zuul CI system in particular) but > aside from some minimal theming and including a little Javascript to > tie outside data sources into the interface it's just plain Gerrit. > -- > Jeremy Stanley > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig In general I'm fine with Gerrit (or Gerrit-like systems). I think Gerrit has a crappy looking interface, and I think that the interface is harder to use than GitHub but the process itself doesn't bother me much. I do think that it would be better if it would review whole PRs instead of individual commits (and if you want to squash them, the tool should do a squash merge). I don't think that pip's problems are ones that would be solved by switching to a different code review tool.
GitHub functions well for that task, we don't require multiple core reviewers to agree, only one, so the merge button is functionally equivalent to a +1 button and then having a machine later do the merge. Our velocity isn't near high enough to where we need separate check and merge gating or anything like that. I would be against moving away from GitHub for PRs without a really compelling reason: GitHub PRs are easy to use, and it's popular. We reduce the barrier to entry to contributing by making our process the same as every other project on GitHub's. A better test suite and a more comprehensive CI system is where most of our tooling problems are. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From erik.m.bray at gmail.com Fri Mar 6 19:51:31 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Fri, 6 Mar 2015 13:51:31 -0500 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <856983252.2975399.1424458175043.JavaMail.yahoo@mail.yahoo.com> References: <856983252.2975399.1424458175043.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Fri, Feb 20, 2015 at 1:49 PM, Stuart Axon wrote: > Hi, > In my project, I install a .pth file into site-packages, I use the data_files... in Ubuntu this seems to work OK, but in a Windows VM the file doesn't seem to be being installed: > > setup( > .... > # Install the import hook > data_files=[ > (site_packages_path, ["vext_importer.pth"] if environ.get('VIRTUAL_ENV') else []), > ], > ) > > > - Is there a better way to do this ? > > > > I realise it's a bit odd installing a .pth - my project is to allow certain packages to use the system site packages from a virtualenv - > https://github.com/stuaxo/vext Hi Stuart, I know this is old so sorry to anyone else.
But since no one replied--I haven't looked too closely at what it is you're trying to accomplish. But whatever the reason, that seems like a reasonable-enough way to me if you need to get a .pth file installed into site-packages. I'm not sure why it isn't working for you in Windows but it wasn't clear what the problem was. I just tried this in a Windows VM and it worked fine? Best, Erik From p at 2015.forums.dobrogost.net Fri Mar 6 12:56:37 2015 From: p at 2015.forums.dobrogost.net (Piotr Dobrogost) Date: Fri, 6 Mar 2015 12:56:37 +0100 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? Message-ID: Hi As an external observer of pip project at github I see two men, namely Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz (https://github.com/msabramo) with many valuable contributions. I think it would be beneficial if they had been granted some more permissions to the project/repo. Regards, Piotr Dobrogost From techtonik at gmail.com Fri Mar 6 09:06:55 2015 From: techtonik at gmail.com (anatoly techtonik) Date: Fri, 6 Mar 2015 11:06:55 +0300 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On Thu, Mar 5, 2015 at 7:38 PM, Marc Abramowitz wrote: > - Add more computer automation > > #3 seems most appealing to me, but of course it requires humans to develop > it in the first place, but at least it's an investment that could pay > dividends. On page https://bitbucket.org/techtonik/python-stdlib/src there is a working proof-of-concept that fetches patch files from bugs.python.org, detects filenames and sees to which module the patch belongs (module layout is described in .json file). It is possible to tweak the script to also look up who are the recent authors for the module files. This will allow them to review automatically. -- anatoly t.
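The `data_files` approach from the "Installing a file into sitepackages" thread above can be sketched as a small, self-contained pattern. This is only an illustration, not Stuart's actual setup.py: the helper name `pth_data_files` and the use of `sysconfig` to locate site-packages are assumptions made here for the sketch.

```python
# Sketch of installing a .pth file into site-packages via data_files,
# guarded so it only applies inside a virtualenv (mirroring the
# `if environ.get('VIRTUAL_ENV') else []` condition in the thread).
import os
import sysconfig


def pth_data_files(pth_name="vext_importer.pth"):
    """Build a data_files entry that drops a .pth file into site-packages.

    Outside a virtualenv the file list is empty, so nothing is installed.
    """
    # "purelib" is the pure-Python site-packages directory for this interpreter.
    site_packages = sysconfig.get_paths()["purelib"]
    files = [pth_name] if os.environ.get("VIRTUAL_ENV") else []
    return [(site_packages, files)]


# Would then be passed to setup(), e.g.:
#     setup(..., data_files=pth_data_files())
```

One caveat worth noting: `setup.py install` and wheel-based installs may treat absolute `data_files` paths differently, which could account for behaviour varying between platforms or install methods like the Windows case above.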
From techtonik at gmail.com Fri Mar 6 09:10:46 2015 From: techtonik at gmail.com (anatoly techtonik) Date: Fri, 6 Mar 2015 11:10:46 +0300 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> Message-ID: On Thu, Mar 5, 2015 at 11:11 PM, Ian Cordasco wrote: > And for CI, we need people who will help with the windows CI solution > on more than one front clearly. https://ci.appveyor.com/ works for open source projects. -- anatoly t. From techtonik at gmail.com Fri Mar 6 09:17:25 2015 From: techtonik at gmail.com (anatoly techtonik) Date: Fri, 6 Mar 2015 11:17:25 +0300 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85pp8nyu8k.fsf@benfinney.id.au> <54F8FFE5.4000307@canterbury.ac.nz> Message-ID: Stealing some packaging code from Go and tweaking it may not be such a bad idea. =) At least they use a code review system to let new people learn and old people share. On Fri, Mar 6, 2015 at 5:51 AM, Ian Cordasco wrote: > Wait, I have an idea. Let's rewrite pip in Rust! ;) > > On Thu, Mar 5, 2015 at 7:16 PM, Greg Ewing wrote: >> On 03/06/2015 11:06 AM, Ben Finney wrote: >> >>> That's "small changes only, otherwise your change gets rejected", of >>> course. >> >> >> Yes, otherwise submitting a patch that replaces the entire source >> code of Python with Ruby would be a sure-fire way to get it >> accepted. :-) >> >> -- >> Greg >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -- anatoly t.
From graffatcolmingov at gmail.com Fri Mar 6 21:42:26 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 14:42:26 -0600 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On Fri, Mar 6, 2015 at 5:55 AM, Paul Moore wrote: > On 5 March 2015 at 16:38, Marc Abramowitz wrote: >> This makes me think that the folks who review the PRs are overburdened >> and/or the process needs a bit more structure (e.g.: each person thinks >> someone else is going to review the PR and so no one does it). > > One thing I, personally, find difficult when reviewing PRs > (specifically feature requests) is the fact that I usually don't > actually have a *need* for the functionality being proposed. It's very > easy for me to say "this doesn't help me personally, so I'll ignore > it", but that is ducking a big part of the responsibility of being a > core committer. But forming a view on something I've no experience of > or direct interest in is *hard*, and takes a lot of time. Discussions > tend to involve a lot of people with strong opinions (e.g. the PR > author) who can't move the change forward, and a few people with > weaker opinions (e.g. me :-)) who can.
It's very easy to think "just > accept it because it helps someone". But that's a cop-out and > long-term isn't a sustainable approach. +1. This is a problem I have with Flake8. People keep asking for more command-line arguments because "It's just one more option. It won't hurt anyone." But Flake8 is another project without a great set of tests. It would be easy to say "Yeah sure, just this one other option that only one person has ever asked for" but there's only ever one person reviewing pull requests - me. It's also not sustainable to keep adding poorly named command-line flags. > It's not "thinking someone else will review the PR", it's more making > a conscious decision on how much energy and effort I'm willing to put > into a PR that doesn't have any benefit for me. (And even just > *discussing* a PR can be a lot of energy, it's not easy to politely > explain to someone that you don't think their use case, that they went > to a lot of trouble writing a PR for, isn't worth it). > > What would help a *lot* is some sort of agreement on what pip is, and > what its core goals are. Something similar to what it means to be > "pythonic" for the Python language itself. At the moment, I don't > think this is very clearly understood even within the core dev group > (so external contributors have no hope...) And for me, it'd help avoid > the endless debates that often start with the phrase "pip should..." +10 > For example, is the lack of a programmable API an issue for pip? I > think it is, and having people able to write their own tools that use > pip's finder, or its wheel installer, is a (long term) goal for pip, > rather than, say, continually adding more pip subcommands. But I don't > know if that's the consensus. And to my knowledge, no 3rd party PRs > have *ever* been of the form "Encapsulate pip's functionality X in a > clean API so I can use it in my script"... 
If pip is ever refactored appropriately (which I acknowledge is not a trivial condition to meet), maybe then pip could consider presenting a public API, but I think there are currently too many people who already reach into pip to ignore the need for such an interface. Perhaps the answer is, as pip is refactored, to create libraries that are then vendored into pip and that people can install independently to do that one thing they need to do. > Or is the "pip search" command a wart that should be removed because > it isn't pip's job to do PyPI searches? There's some low-hanging fruit > if a more focused tool is the goal... There's also how many different replacements for "pip search" on PyPI? > Or should pip give you tools to replicate your current environment > (pip freeze, requirements files)? What about "remove anything *not* in > this requirement file"? Personally, I only use requirements files to > bundle up "install this lot of stuff". I don't write the sort of thing > where a "pin every dependency" philosophy is appropriate, so freeze > isn't something I use. But lots of people do, so what's the workflow > that pip freeze supports? > > The problem with discussing this sort of thing is that it's *very* > wide-ranging, and tends to produce huge rambling mega-threads[1] when > discussed in a public list. I'm not advocating any sort of private > cabal deciding the fate of pip, but maybe somewhere where the core > devs could agree their *own* opinions before having to face the public > wouldn't be such a bad thing. That's more or less what I'd expected > the pypa-dev list to be (as a parallel to the python-dev list) but it > doesn't feel like it's turned out that way, maybe because it doesn't > have a clear enough charter, or maybe because there's no obvious > *other* place to direct people to for off-topic posts (like > python-list is for python-dev). So sometimes private cabals need to be made in order to get a basis of what is reasonable. 
The WSGI working group tried to do that but that failed after about a week as more people tried to join the cabal and were allowed to do so. > Or maybe grand designs are a distraction in themselves, and none of > the core devs being interested in a PR means just that - not that they > don't have the time, or that the use case isn't valid, or anything > else. Just that they aren't interested, sorry. > > [1] Please, don't start a rambling mega-thread from *this* post :-) > > Paul > > PS I just spent way too long composing this email, and now I'm burned > out. Maybe my time would have been better spent commenting on a couple > of PRs... Go rest. These discussions can exhaust even the best rested of us. From ncoghlan at gmail.com Fri Mar 6 23:04:19 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 7 Mar 2015 08:04:19 +1000 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: <85ioeexr78.fsf@benfinney.id.au> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> Message-ID: On 6 Mar 2015 22:10, "Ben Finney" wrote: > > Nick Coghlan writes: > > > CPython uses the Rietveld instance integrated with bugs.python.org, > > and has the same problem as pip: incremental changes are a pain to > > publish, review, and merge, so we review and accept monolithic patches > > instead (cf the problem statement in > > https://www.python.org/dev/peps/pep-0462/) > > Fair enough. I don't know of a good code review tool for Mercurial. I'd like to ensure Kallithea fits that bill, but the actual work on that seems to mostly be driven by the folks at Unity3D at the moment. In the meantime, Phabricator is a decent choice if you just want to use an existing GitHub-independent tool that works with either git or Mercurial.
pip adopting that workflow would also be a good proof of concept for Donald's proposal to also adopt that workflow for CPython (or at least its support repos). > >> > While the main UI is very busy, I've actually quite liked my own >> > experience with Gerrit for http://gerrit.beaker-project.org/ >> >> My understanding is that Gerrit makes it tedious to review a sequence of >> revisions, in proportion to the number of revisions in the sequence. > > When the goal is to break a change up into small, independently reviewable > changes that's generally a feature rather than a defect :) > >> If >> I understand correctly, such a sequence must have separate reviews for >> every revision, and an aggregate of all the changes is not available to >> the reviewer. > > Correct, but my understanding is that when using it in tandem with GitHub, > there's nothing stopping you from also submitting a PR if a reviewer wants > an all-inclusive view. > >> I'm impressed by GitLab's code review tool UI; see an example at >> . >> The merge request page has tabs for the discussion, the commit log, and >> the overall diff, and you choose from inline diff or side-by-side diff. >> >> GitLab is free software, including all its tools; anyone can set up a >> GitLab instance and the project data can move from one instance to >> another without loss. For the purposes of the past thread where some >> proposed migrating to the proprietary lock-in site GitHub, those >> objections don't exist with GitLab: a project can migrate to a different >> host and keep all the valuable data it accumulated. >> >> A move to GitLab would be unobjectionable, in my view. That it has good >> code review features would help the issues in this thread too. > > It doesn't have the integration with other services and the low barriers to > contribution that are the main reasons a lot of projects prefer GitHub.
Of course, when your problem is already "we're receiving more contributions than we can process effectively", deciding to require a slightly higher level of engagement in order to submit a change for consideration isn't necessarily a bad thing :) > If anyone knows of equivalent hosting for Mercurial with equivalent code > review tools under free-software terms with no lock-in, that would be > even better I think. That's what I'd like forge.python.org to eventually be for the core Python ecosystem, but we don't know yet whether that's going to be an entirely self-hosted Kallithea instance (my preference) or a Phabricator instance backed by GitHub (Donald's preference). Hence my suggestion that a "forge.pypa.io" Phabricator instance might be an interesting thing to set up and start using for pip. Donald's already done the research on that in the context of https://www.python.org/dev/peps/pep-0481/ and for pip that's a matter of "just add Phabricator" without having to migrate anything (except perhaps the issues if folks wanted to do that). Cheers, Nick. > > -- > \ "Don't be misled by the enormous flow of money into bad defacto | > `\ standards for unsophisticated buyers using poor adaptations of | > _o__) incomplete ideas." --Alan Kay | > Ben Finney > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Fri Mar 6 23:14:41 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 7 Mar 2015 08:14:41 +1000 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? In-Reply-To: References: Message-ID: On 7 Mar 2015 05:41, "Piotr Dobrogost"

wrote: > > Hi > > As an external observer of pip project at github I see two men, namely > Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz > (https://github.com/msabramo) with many valuable contributions. I > think it would be beneficial if they had been granted some more > permissions to the project/repo. How many *other* people have they helped have contributions accepted? That's one of the most important responsibilities of a core reviewer, rather than writing new code themselves. I also can't recall seeing any feedback from them on this list regarding the metadata 2.0 design PEPs or the TUF metadata PEPs, which suggests there may be a risk that they're currently focused primarily on attending to their own needs, rather than wanting to help guide the evolution of the overall Python packaging ecosystem. While that's not an issue for *submitting* change requests to pip, it's a potential problem for *accepting* them, as managing that evolution has essentially been the responsibility of the pip development team (and PyPA in general) since PEP 453 made the decision to bundle pip with the reference interpreter by default. Regards, Nick. > > Regards, > Piotr Dobrogost > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Fri Mar 6 23:25:28 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Mar 2015 17:25:28 -0500 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? In-Reply-To: References: Message-ID: > On Mar 6, 2015, at 6:56 AM, Piotr Dobrogost

wrote: > > Hi > > As an external observer of pip project at github I see two men, namely > Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz > (https://github.com/msabramo) with many valuable contributions. I > think it would be beneficial if they had been granted some more > permissions to the project/repo. I'm not going to respond to the actual content of this, but I'd like to make a plea. If you think someone should be a core on one of the PyPA projects, I recommend talking to one of the existing core developers privately instead of posting publicly. Historically these projects decide on core members privately so that people's concerns or comments on potential core members are not public. This is done for three main reasons: 1) By keeping things private core members can fully express their opinions on whether or not that person is a good candidate for core member, or whether they might be in the future if they work on something that is potentially concerning. 2) It doesn't put any potential core members in a situation where they are feeling judged or on trial or whatever colloquialism you want to insert here. 3) It allows us to privately communicate with the potential team member about what (if anything) they need to improve on if they'd like to be a core member. We don't have defined guidelines as to what we look for in a core team member, perhaps we should, generally though it's been looking out for people who seem to be trying to help out with the efforts that are lacking manpower. Just to be clear, this has nothing to do with either Xavier or Marc, I'm just asking that in the future, if someone thinks they notice someone who might be a good candidate for being on the core team, they raise it with an existing core developer privately. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From ncoghlan at gmail.com Fri Mar 6 23:33:35 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 7 Mar 2015 08:33:35 +1000 Subject: [Distutils] Getting more momentum for pip In-Reply-To: References: Message-ID: On 7 Mar 2015 06:44, "Ian Cordasco" wrote: > On Fri, Mar 6, 2015 at 5:55 AM, Paul Moore wrote: > > > > The problem with discussing this sort of thing is that it's *very* > > wide-ranging, and tends to produce huge rambling mega-threads[1] when > > discussed in a public list. I'm not advocating any sort of private > > cabal deciding the fate of pip, but maybe somewhere where the core > > devs could agree their *own* opinions before having to face the public > > wouldn't be such a bad thing. That's more or less what I'd expected > > the pypa-dev list to be (as a parallel to the python-dev list) but it > > doesn't feel like it's turned out that way, maybe because it doesn't > > have a clear enough charter, or maybe because there's no obvious > > *other* place to direct people to for off-topic posts (like > > python-list is for python-dev). > > So sometimes private cabals need to be made in order to get a basis of > what is reasonable. The WSGI working group tried to do that but that > failed after about a week as more people tried to join the cabal and > were allowed to do so. It's worth noting that CPython didn't get public source control until it was already around 9 years old (see the What's New for Python 2.0). My understanding is also that the architecture & philosophy for CPython were very much set by the original Python Labs crew (Guido, Tim Peters, Barry Warsaw, Fred Drake) when they worked for Zope Corporation, just as the direction of beaker-project.org is very much governed by what the core team that works full time on it for Red Hat wants to do.
Confusing "open source" and even "open governance" with "no hierarchy" is a common mistake, when the only essential requirement is that anyone is welcome to observe and even suggest changes, whether to artefacts (open source) or decision making processes (open governance). The one thing that potential (and current!) contributors have to accept is that the existing contributors are the ones that decide between "yes", "no", and "maybe, let's discuss it some more", regardless of whether the proposed change is to code, processes, or who has the authority to accept changes. All of which can be summarised in the phrase: "Those that do the work, make the rules" :) Cheers, Nick. > > > Or maybe grand designs are a distraction in themselves, and none of > > the core devs being interested in a PR means just that - not that they > > don't have the time, or that the use case isn't valid, or anything > > else. Just that they aren't interested, sorry. > > > > [1] Please, don't start a rambling mega-thread from *this* post :-) > > > > Paul > > > > PS I just spent way too long composing this email, and now I'm burned > > out. Maybe my time would have been better spent commenting on a couple > > of PRs... > > Go rest. These discussions can exhaust even the best rested of us. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From graffatcolmingov at gmail.com Fri Mar 6 23:36:54 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 16:36:54 -0600 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? In-Reply-To: References: Message-ID: On Fri, Mar 6, 2015 at 5:56 AM, Piotr Dobrogost

wrote: > Hi > > As an external observer of pip project at github I see two men, namely > Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz > (https://github.com/msabramo) with many valuable contributions. I > think it would be beneficial if they had been granted some more > permissions to the project/repo. > > Regards, > Piotr Dobrogost > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig It seems incredibly inappropriate for anyone except an existing core to be proposing new cores. From graffatcolmingov at gmail.com Fri Mar 6 23:40:43 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 16:40:43 -0600 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85pp8nyu8k.fsf@benfinney.id.au> <54F8FFE5.4000307@canterbury.ac.nz> Message-ID: On Fri, Mar 6, 2015 at 2:17 AM, anatoly techtonik wrote: > Stealing some packaging code from Go and tweaking may not be that bad idea. =) > At least they use code review system to let new people learn and old > people share. Yep and most of the people working on Go are paid to do so. Sharing information through code review is necessary inside a corporation and worth being fired over if you don't. > On Fri, Mar 6, 2015 at 5:51 AM, Ian Cordasco wrote: >> Wait, I have an idea. Let's rewrite pip in Rust! ;) >> >> On Thu, Mar 5, 2015 at 7:16 PM, Greg Ewing wrote: >>> On 03/06/2015 11:06 AM, Ben Finney wrote: >>> >>>> That's ?small changes only, otherwise your change gets rejected?, of >>>> course. >>> >>> >>> Yes, otherwise submitting a patch that replaces the entire source >>> code of Python with Ruby would be a sure-fire way to get it >>> accepted. 
:-) >>> >>> -- >>> Greg >>> >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > -- > anatoly t. From graffatcolmingov at gmail.com Fri Mar 6 23:43:28 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 16:43:28 -0600 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> Message-ID: On Fri, Mar 6, 2015 at 4:04 PM, Nick Coghlan wrote: > > On 6 Mar 2015 22:10, "Ben Finney" wrote: >> >> Nick Coghlan writes: >> >> > CPython uses the Reitveld instance integrated with bugs.python.org, >> > and has the same problem as pip: incremental changes are a pain to >> > publish, review, and merge, so we review and accept monolithic patches >> > instead (cf the problem statement in >> > https://www.python.org/dev/peps/pep-0462/) >> >> Fair enough. I don't know of a good code review tool for Mercurial. > > I'd like to ensure Kallithea fits that bill, but the actual work on that > seems to mostly be driven by the folks at Unity3D at the moment. > > In the meantime, Phabricator is a decent choice if you just want to use an > existing GitHub independent tool that works with either git or Mercurial. > pip adopting that workflow would also be a good proof of concept for > Donald's proposal to also adopt that workflow for CPython (or at least its > support repos). 
> >> > While the main UI is very busy, I've actually quite liked my own >> > experience with Gerrit for http://gerrit.beaker-project.org/ >> >> My understanding is that Gerrit makes it tedious to review a sequence of >> revisions, in proportion to the number of revisions in the sequence. > > When the goal is to break a change up into small, independently reviewable > changes that's generally a feature rather than a defect :) > >> If >> I understand correctly, such a sequence must have separate reviews for >> every revision, and an aggregate of all the changes is not available to >> the reviewer. > > Correct, but my understanding is that when using it in tandem with GitHub, > there's nothing stopping you from also submitting a PR if a reviewer wants > an all-inclusive view. > >> I'm impressed by GitLab's code review tool UI; see an example at >> . >> The merge request page has tabs for the discussion, the commit log, and >> the overall diff ? and you choose from inline diff or side-by-side diff. >> >> GitLab is free software, including all its tools; anyone can set up a >> GitLab instance and the project data can move from one instance to >> another without loss. For the purposes of the past thread where some >> proposed migrating to the proprietary lock-in site GitHub, those >> objections don't exist with GitLab: a project can migrate to a different >> host and keep all the valuable data it accumulated. >> >> A move to GitLab would be unobjectionable, in my view. That it has good >> code review features would help the issues in this thread too. > > It doesn't have the integration with other services and the low barriers to > contribution that are the main reasons a lot of projects prefer GitHub. 
> > Of course, when your problem is already "we're receiving more contributions > than we can process effectively", deciding to require a slightly higher > level of engagement in order to submit a change for consideration isn't > necessarily a bad thing :) > >> If anyone knows of equivalent hosting for Mercurial with equivalent code >> review tools under free-software terms with no lock-in, that would be >> even better I think. > > That's what I'd like forge.python.org to eventually be for the core Python > ecosystem, but we don't know yet whether that's going to be an entirely > self-hosted Kallithea instance (my preference) or a Phabricator instance > backed by GitHub (Donald's preference). > > Hence my suggestion that a "forge.pypa.io" Phabricator instance might be an > interesting thing to set up and start using for pip. Donald's already done > the research on that in the context of > https://www.python.org/dev/peps/pep-0481/ and for pip that's a matter of > "just add Phabricator" without having to migrate anything (except perhaps > the issues if folks wanted to do that). > > Cheers, > Nick. > >> >> -- >> \ ?Don't be misled by the enormous flow of money into bad defacto | >> `\ standards for unsophisticated buyers using poor adaptations of | >> _o__) incomplete ideas.? ?Alan Kay | >> Ben Finney >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > I'm fairly concerned that what has turned into a "how can we increase the feedback received for people submitting pull requests" has turned into a bike shed moment for using F/LOSS tooling instead of GitHub when the cores who actually work on the project have already expressed a disinterest in moving and a satisfaction with GitHub. 
GitLab's UI would do nothing to improve review management. Phabricator, while nice, again adds yet another layer to the piece for new contributors to involve themselves in. GitHub is one monolith and closed source (and a company with culture problems) but that doesn't change the fact that it's the core developers choice what software to use and they've (for the time being) chosen GitHub. Can we please stop this discussion already? It's no longer beneficial or relevant. From donald at stufft.io Sat Mar 7 00:01:44 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Mar 2015 18:01:44 -0500 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> Message-ID: <5413B547-BB3D-4608-971F-55441009667A@stufft.io> > On Mar 6, 2015, at 5:43 PM, Ian Cordasco wrote: > > On Fri, Mar 6, 2015 at 4:04 PM, Nick Coghlan wrote: >> >> On 6 Mar 2015 22:10, "Ben Finney" wrote: >>> >>> Nick Coghlan writes: >>> >>>> CPython uses the Reitveld instance integrated with bugs.python.org, >>>> and has the same problem as pip: incremental changes are a pain to >>>> publish, review, and merge, so we review and accept monolithic patches >>>> instead (cf the problem statement in >>>> https://www.python.org/dev/peps/pep-0462/) >>> >>> Fair enough. I don't know of a good code review tool for Mercurial. >> >> I'd like to ensure Kallithea fits that bill, but the actual work on that >> seems to mostly be driven by the folks at Unity3D at the moment. >> >> In the meantime, Phabricator is a decent choice if you just want to use an >> existing GitHub independent tool that works with either git or Mercurial. >> pip adopting that workflow would also be a good proof of concept for >> Donald's proposal to also adopt that workflow for CPython (or at least its >> support repos). 
>> >>>> While the main UI is very busy, I've actually quite liked my own >>>> experience with Gerrit for http://gerrit.beaker-project.org/ >>> >>> My understanding is that Gerrit makes it tedious to review a sequence of >>> revisions, in proportion to the number of revisions in the sequence. >> >> When the goal is to break a change up into small, independently reviewable >> changes that's generally a feature rather than a defect :) >> >>> If >>> I understand correctly, such a sequence must have separate reviews for >>> every revision, and an aggregate of all the changes is not available to >>> the reviewer. >> >> Correct, but my understanding is that when using it in tandem with GitHub, >> there's nothing stopping you from also submitting a PR if a reviewer wants >> an all-inclusive view. >> >>> I'm impressed by GitLab's code review tool UI; see an example at >>> . >>> The merge request page has tabs for the discussion, the commit log, and >>> the overall diff ? and you choose from inline diff or side-by-side diff. >>> >>> GitLab is free software, including all its tools; anyone can set up a >>> GitLab instance and the project data can move from one instance to >>> another without loss. For the purposes of the past thread where some >>> proposed migrating to the proprietary lock-in site GitHub, those >>> objections don't exist with GitLab: a project can migrate to a different >>> host and keep all the valuable data it accumulated. >>> >>> A move to GitLab would be unobjectionable, in my view. That it has good >>> code review features would help the issues in this thread too. >> >> It doesn't have the integration with other services and the low barriers to >> contribution that are the main reasons a lot of projects prefer GitHub. 
>> >> Of course, when your problem is already "we're receiving more contributions >> than we can process effectively", deciding to require a slightly higher >> level of engagement in order to submit a change for consideration isn't >> necessarily a bad thing :) >> >>> If anyone knows of equivalent hosting for Mercurial with equivalent code >>> review tools under free-software terms with no lock-in, that would be >>> even better I think. >> >> That's what I'd like forge.python.org to eventually be for the core Python >> ecosystem, but we don't know yet whether that's going to be an entirely >> self-hosted Kallithea instance (my preference) or a Phabricator instance >> backed by GitHub (Donald's preference). >> >> Hence my suggestion that a "forge.pypa.io" Phabricator instance might be an >> interesting thing to set up and start using for pip. Donald's already done >> the research on that in the context of >> https://www.python.org/dev/peps/pep-0481/ and for pip that's a matter of >> "just add Phabricator" without having to migrate anything (except perhaps >> the issues if folks wanted to do that). >> >> Cheers, >> Nick. >> >>> >>> -- >>> \ ?Don't be misled by the enormous flow of money into bad defacto | >>> `\ standards for unsophisticated buyers using poor adaptations of | >>> _o__) incomplete ideas.? 
—Alan Kay | >>> Ben Finney >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > I'm fairly concerned that what has turned into a "how can we increase > the feedback received for people submitting pull requests" has turned > into a bike shed moment for using F/LOSS tooling instead of GitHub > when the cores who actually work on the project have already expressed > a disinterest in moving and a satisfaction with GitHub. > > GitLab's UI would do nothing to improve review management. > > Phabricator, while nice, again adds yet another layer to the piece for > new contributors to involve themselves in. GitHub is one monolith and > closed source (and a company with culture problems) but that doesn't > change the fact that it's the core developers choice what software to > use and they've (for the time being) chosen GitHub. Can we please stop > this discussion already? It's no longer beneficial or relevant. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Tooling wise, Github PRs work well for us. I don't (and I don't believe that any of the other core devs) have any major issues with them. Github issues on the other hand, they function "OK" but it would be nice to have something that we can allow anyone to modify the state of tickets to help with triage. However even this isn't a super pressing concern because our ticket count is small enough that I don't think there's likely to be too many to be handled by people commenting on issues and a core team coming in to change things.
However if someone has a proposal for a different issue tracker (and plans for how to migrate to it), personally I'd be willing to listen. F/OSS tooling is nice, but I honestly care a whole lot less about that and a lot more about whatever tooling is the most effective for us to get the job done. This can include hosted services (and possibly even hosted services that cost money). Written in Python is also nice, but again I honestly don't care about that nearly as much as I care about the tooling being effective. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Sat Mar 7 00:15:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Mar 2015 23:15:14 +0000 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: <5413B547-BB3D-4608-971F-55441009667A@stufft.io> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> <5413B547-BB3D-4608-971F-55441009667A@stufft.io> Message-ID: On 6 March 2015 at 23:01, Donald Stufft wrote: > Tooling wise, Github PRs work well for us. I don't (and I don't believe that > any of the other core devs) have any major issues with them. > > Github issues on the other hand, they function "OK" but it would be nice to > have something that we can allow anyone to modify the state of tickets to > help with triage. However even this isn't a super pressing concern because > our ticket count is small enough that I don't think there's likely to be too > many to be handled by people commenting on issues and a core team coming in > to change things.
However if someone has a proposal for a different issue > tracker (and plans for how to migrate to it), personally I'd be willing to > listen. I'm also fine with github. I don't have an issue with the issue tracker, although as Donald says it would be helpful if it had a concept of "tracker privileges" separate from "core committer". But that's *not* a big enough concern to me that I'd want to go to a different tool. Paul From graffatcolmingov at gmail.com Sat Mar 7 00:53:32 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 6 Mar 2015 17:53:32 -0600 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> <5413B547-BB3D-4608-971F-55441009667A@stufft.io> Message-ID: Has PyPA considered contacting GitHub support? I'm happy to do the same since I've wanted this for a while myself on other projects. On Fri, Mar 6, 2015 at 5:15 PM, Paul Moore wrote: > On 6 March 2015 at 23:01, Donald Stufft wrote: >> Tooling wise, Github PRs work well for us. I don't (and I don't believe that >> any of the other core devs) have any major issues with them. >> >> Github issues on the other hand, they function "OK" but it would be nice to >> have something that we can allow anyone to modify the state of tickets to >> help with triage. However even this isn't a super pressing concern because >> our ticket count is small enough that I don't think there's likely to be too >> many to be handled by people commenting on issues and a core team coming in >> to change things. However if someone has a proposal for a different issue >> tracker (and plans for how to migrate to it), personally I'd be willing to >> listen. > > I'm also fine with github.
I don't have an issue with the issue > tracker, although as Donald says it would be helpful if it had a > concept of "tracker privileges" separate from "core committer". But > that's *not* a big enough concern to me that I'd want to go to a > different tool. > > Paul From ncoghlan at gmail.com Sat Mar 7 16:04:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 Mar 2015 01:04:27 +1000 Subject: [Distutils] Implementing large changes in small increments (was: Getting more momentum for pip) In-Reply-To: <98302E35-647E-4E64-B75E-BD7DEA083D35@stufft.io> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <20150306135501.GA31001@yuggoth.org> <98302E35-647E-4E64-B75E-BD7DEA083D35@stufft.io> Message-ID: On 7 March 2015 at 02:22, Donald Stufft wrote: > A better test suite and a more comprehensive CI system is where most of our tooling > problems are. For the cross platform CI problem, we could likely set up post-merge CI on the CPython buildbot fleet. We trust the pip team to run code there anyway (courtesy of ensurepip), but those are persistent systems, so we wouldn't want to run every PR through them. That would put you in a situation where pre-merge CI is at least giving you a check nothing is fundamentally broken, while post-merge CI would check you haven't broken any *other* environments. Working on enabling that may also be a good opportunity to finally hook the CPython Buildbot master up with the credentials it needs to run ephemeral clients on Rackspace: http://docs.buildbot.net/latest/manual/cfg-buildslaves-openstack.html Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 7 16:11:25 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 Mar 2015 01:11:25 +1000 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: <5413B547-BB3D-4608-971F-55441009667A@stufft.io> References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> <5413B547-BB3D-4608-971F-55441009667A@stufft.io> Message-ID: On 7 March 2015 at 09:01, Donald Stufft wrote: > > F/OSS tooling is nice, but I honestly care a whole lot less about that and a > lot more about whatever tooling is the most effective for us to get the job > done. This can include hosted services (and possibly even hosted services that > cost money). Written in Python is also nice, but again I honestly don?t care > about that nearly as much as I care about the tooling being effective. Right, that's why I suggested GerritHub or Phabricator as possibilities for consideration, based on my interpretation of some of the concerns raised, since they both allow the GitHub repos to remain the "single source of truth", while adding some additional process options around them. However, it sounds like there aren't any current major tooling issues aside from GitHub's lack of support for a "Triager" level of permissions, so even the idea of potentially adopting your own suggested Phabricator+GitHub approach wouldn't rank very high on pip's process improvement list at this point. Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 7 16:13:12 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 Mar 2015 01:13:12 +1000 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> <5413B547-BB3D-4608-971F-55441009667A@stufft.io> Message-ID: On 7 March 2015 at 09:53, Ian Cordasco wrote: > Has PyPA considered contacting GitHub support? I'm happy to do the > same since I've wanted this for a while myself on other projects. I have some indirect contacts as well where I'd be happy to pass this request on - so consider that done (it may not go anywhere, but there's no harm in asking). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From msabramo at gmail.com Sat Mar 7 18:19:02 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Sat, 7 Mar 2015 09:19:02 -0800 Subject: [Distutils] Implementing large changes in small increments In-Reply-To: References: <1A310311-D3B6-4DF6-A808-780674AC0A65@stufft.io> <946B9839-6EAF-4322-BF03-B64067BAE04C@stufft.io> <85twxzyuj6.fsf_-_@benfinney.id.au> <85ioeexr78.fsf@benfinney.id.au> <5413B547-BB3D-4608-971F-55441009667A@stufft.io> Message-ID: <9556BC37-9AA0-45E5-BD23-75C9AB3403AC@gmail.com> > I have some indirect contacts as well where I'd be happy to pass this > request on - so consider that done (it may not go anywhere, but > there's no harm in asking). Yep, can't hurt. We could also try the social networking thing. https://twitter.com/msabramo/status/574256914478977025 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p at 2015.forums.dobrogost.net Sat Mar 7 16:50:31 2015 From: p at 2015.forums.dobrogost.net (Piotr Dobrogost) Date: Sat, 7 Mar 2015 16:50:31 +0100 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? In-Reply-To: <980469656.73253.1425671453726.JavaMail.open-xchange@webmail.home.pl> References: <980469656.73253.1425671453726.JavaMail.open-xchange@webmail.home.pl> Message-ID: On Fri, Mar 6, 2015 at 8:50 PM, piotr.dobrogost at autoera-serwer.home.pl piotr.dobrogost at autoera-serwer.home.pl

wrote: > Hi > > As an external observer of pip project at github I see two men, namely > Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz > (https://github.com/msabramo) with many valuable contributions. I > think it would be beneficial if they had been granted some more > permissions to the project/repo. I deliberately did not use the term "core developer" as I'm aware it's not my call. Between having no special access and being "core developer" there is a range of activities like labelling bugs, closing duplicated bugs and closing inactive bugs. These are permissions I had in mind. Nevertheless, thank you for all valuable feedback. Regards, Piotr Dobrogost From ncoghlan at gmail.com Sat Mar 7 23:53:49 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 Mar 2015 08:53:49 +1000 Subject: [Distutils] Granting permissions to Xavier Fernandez and Marc Abramowitz? In-Reply-To: References: <980469656.73253.1425671453726.JavaMail.open-xchange@webmail.home.pl> Message-ID: On 8 Mar 2015 08:43, "Piotr Dobrogost"

wrote: > > On Fri, Mar 6, 2015 at 8:50 PM, piotr.dobrogost at autoera-serwer.home.pl > piotr.dobrogost at autoera-serwer.home.pl

> wrote: > > Hi > > > > As an external observer of pip project at github I see two men, namely > > Xavier Fernandez (https://github.com/xavfernandez) and Marc Abramowitz > > (https://github.com/msabramo) with many valuable contributions. I > > think it would be beneficial if they had been granted some more > > permissions to the project/repo. > > I deliberately did not use term "core developer" as I'm aware it's not > my call. Between having no special access and being "core developer" > there is range of activities like labelling bugs, closing duplicated > bugs and closing inactive bugs. These are permissions I had in mind. Ah, in that case, such a level of permissions is unfortunately currently missing in the GitHub UX. I don't know if the core pip team would be OK with using a "don't merge anything" approach as an interim solution in the absence of GitHub adding that feature. My recollection is that there are various aspects of the GitHub UX when you have commit privileges that would make that approach quite annoying from the point of view of a contributor in that situation. Regards, Nick. > > Nevertheless, thank you for all valuable feedback. > > Regards, > Piotr Dobrogost > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From stuaxo2 at yahoo.com Sun Mar 8 16:05:04 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Sun, 8 Mar 2015 15:05:04 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: References: Message-ID: <1426669565.482363.1425827104055.JavaMail.yahoo@mail.yahoo.com> Hm - I was able to reproduce it just now, I tried it with and without a recent site packages. Since my project is an import hook I want it to be imported on startup, maybe I should do this in a different way ...
S++ > On Saturday, March 7, 2015 1:51 AM, Erik Bray wrote: > > On Fri, Feb 20, 2015 at 1:49 PM, Stuart Axon > > wrote: >> Hi, >> In my project, I install a .pth file into site-packages, I use the > data_files... in Ubuntu this seems to work OK, but in a Windows VM the file > doesn't seem to be being installed: >> >> setup( >> .... >> # Install the import hook >> data_files=[ >> (site_packages_path, ["vext_importer.pth"] if > environ.get('VIRTUAL_ENV') else []), >> ], >> ) >> >> >> - Is there a better way to do this ? >> >> >> >> I realise it's a bit odd installing a .pth - my project is to allow > certain packages to use the system site packages from a virtualenv - >> https://github.com/stuaxo/vext > > Hi Stuart, > > I know this is old so sorry to anyone else. But since no one > replied--I haven't looked too closely at what it is you're trying to > accomplish. But whatever the reason, that seems like a > reasonable-enough way to me if you need to get a .pth file installed > into site-packages. I'm not sure why it isn't working for you in > WIndows but it wasn't clear what the problem was. I just tried this > in a Windows VM and it worked fine? > > Best, > Erik > From stuaxo2 at yahoo.com Sun Mar 8 18:14:34 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Sun, 8 Mar 2015 17:14:34 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <165378882.507483.1425833214146.JavaMail.yahoo@mail.yahoo.com> References: <1426669565.482363.1425827104055.JavaMail.yahoo@mail.yahoo.com> <165378882.507483.1425833214146.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1574936148.543136.1425834875002.JavaMail.yahoo@mail.yahoo.com> I had a further look - and on windows the file ends up inside the .egg file, on linux it ends up inside the site packages as intended. At a guess it seems like there might be a bug in the path handling on windows. .. 
I wonder if it's something like this http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python which seems an easy way to get an off-by-one error in a path ? From donald at stufft.io Sun Mar 8 20:27:28 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 8 Mar 2015 15:27:28 -0400 Subject: [Distutils] API CHANGE - Migrating from MD5 to SHA2, Take 2 In-Reply-To: <20141201212344.GD5241@merlinux.eu> References: <54F34BF5-1A4B-4452-AC2D-6EE9D837074B@stufft.io> <2EC8ACAD-7E01-4B57-BD79-4882368CE04F@stufft.io> <20141201092517.GQ25600@merlinux.eu> <696CC3BA-4269-4CDB-94F0-6C44031356A5@stufft.io> <20141201212344.GD5241@merlinux.eu> Message-ID: Holger, has this happened yet? > On Dec 1, 2014, at 4:23 PM, holger krekel wrote: > > On Mon, Dec 01, 2014 at 12:45 -0600, Ian Cordasco wrote: >> On Mon, Dec 1, 2014 at 12:35 PM, Donald Stufft wrote: >>> >>>> On Dec 1, 2014, at 4:25 AM, holger krekel wrote: >>>> >>>> Hi Donald, >>>> >>>> On Sat, Nov 29, 2014 at 19:43 -0500, Donald Stufft wrote: >>>>>> On Nov 13, 2014, at 9:21 PM, Donald Stufft wrote: >>>>>> >>>>>> Starting a new thread with more explicit details at Richard?s request. >>>>>> Essentially the tl;dr here is that we'll switch to using sha2 (specifically >>>>>> sha256). >>>>> >>>>> Ping? >>>>> >>>>> Are we OK to make this change? >>>> >>>> sorry i didn't get back earlier. Before the minor release of devpi-server >>>> last week i tried for two hours to change devpi-server to accomodate >>>> your planned pypi.python.org checksum changes. >>>> >>>> I found the change cannot easily be done without changes to the underlying >>>> database schema and thus needs a major new release of devpi-server because >>>> an export/import cycle is needed. When doing that i also want to do >>>> some internal cleanup related to name normalization (and also relating >>>> to recent pypi.python.org changes) but i need a week or two i guess to >>>> do that. 
However i now think that if you do the pypi.python.org checksum >>>> change it shouldn't directly break devpi-server but it would remove >>>> checksum checking. I'd rather like to have a new major devpi-server >>>> release out when you do the change. Is it ok for you to wait a bit still? >>>> >>>> best, >>>> holger >>> >>> Yes, we can wait a bit. I was just going over my TODO list and making sure >>> things weren?t getting lost in the shuffle. >> >> Holger, >> >> Is there anyway people on this list can help with the updates to devpi >> so that we can get this out sooner? > > Looking at devpi/server/devpi_server/extpypi.py and > devpi/server/devpi_server/model.py mainly and changing most places > where "md5" is found in the source and adapting related tests. > > Is there a specific reason you are in a hurry if i may ask? > > best, > holger --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From stuaxo2 at yahoo.com Tue Mar 10 17:29:07 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Tue, 10 Mar 2015 16:29:07 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1574936148.543136.1425834875002.JavaMail.yahoo@mail.yahoo.com> References: <1574936148.543136.1425834875002.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1999388837.1877688.1426004947804.JavaMail.yahoo@mail.yahoo.com> I had more of a dig into this, with a minimal setup.py: https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls install_data On win32 setup.py calls install_data which copies the file into the egg - even though I have given the absolute path to sitepackages C:\> python setup.py install .... running install_data creating build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... 
On Linux the file is copied to the right path:

$ python setup.py install
.....
installing package data to build/bdist.linux-x86_64/egg
running install_data
copying TEST_FILE.TXT -> /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages
....

*something* is normalising my absolute path to site packages into just '' - it's possible to see by looking at self.data_files in the 'run' function in: distutils/command/install_data.py - on windows the first part has been changed to '' unlike on linux where it's the absolute path I set... still not sure where it's happening though. *This all took a while, as rebuilt VM and verified on 2.7.8 and 2.7.9.. S++ > On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > > I had a further look - and on windows the file ends up inside the .egg file, on > linux it ends up inside the site packages as intended. > > > At a guess it seems like there might be a bug in the path handling on windows. > .. I wonder if it's something like this > > http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python > > which seems an easy way to get an off-by-one error in a path ? > From stuaxo2 at yahoo.com Thu Mar 12 08:54:43 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Thu, 12 Mar 2015 14:54:43 +0700 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1999388837.1877688.1426004947804.JavaMail.yahoo@mail.yahoo.com> References: <1574936148.543136.1425834875002.JavaMail.yahoo@mail.yahoo.com> <1999388837.1877688.1426004947804.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1426146883.26686.0@smtp.mail.yahoo.com> For closure: The solution was to make a Command class + implement finalize_options to fixup the paths in distribution.data_files. Source:

# https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
"""
Install a file into the root of sitepackages on windows as well as linux.

Under normal operation on win32 path_to_site_packages gets changed to ''
which installs inside the .egg instead.
""" import os from distutils import sysconfig from distutils.command.install_data import install_data from setuptools import setup here = os.path.normpath(os.path.abspath(os.path.dirname(__file__))) site_packages_path = sysconfig.get_python_lib() site_packages_files = ['TEST_FILE.TXT'] class _install_data(install_data): def finalize_options(self): """ On win32 the files here are changed to '' which ends up inside the .egg, change this back to the absolute path. """ install_data.finalize_options(self) global site_packages_files for i, f in enumerate(list(self.distribution.data_files)): if not isinstance(f, basestring): folder, files = f if files == site_packages_files: # Replace with absolute path version self.distribution.data_files[i] = (site_packages_path, files) setup( cmdclass={'install_data': _install_data}, name='test_install', version='0.0.1', description='', long_description='', url='https://example.com', author='Stuart Axon', author_email='stuaxo2 at yahoo.com', license='PD', classifiers=[], keywords='', packages=[], install_requires=[], data_files=[ (site_packages_path, site_packages_files), ], ) On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: > I had more of a dig into this, with a minimal setup.py: > > > https://gist.github.com/stuaxo/c76a042cb7aa6e77285b > > setup calls install_data > > On win32 setup.py calls install_data which copies the file into the > egg - even though I have given the absolute path to sitepackages > > > C:\> python setup.py install > .... > > running install_data > creating build\bdist.win32\egg > copying TEST_FILE.TXT -> build\bdist.win32\egg\ > .... > > > > On Linux the file is copied to the right path: > > > $ python setup.py install > ..... > > installing package data to build/bdist.linux-x86_64/egg > running install_data > copying TEST_FILE.TXT -> > /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages > .... 
> > > > *something* is normalising my absolute path to site packages into > just '' - it's possible to see by looking at self.data_files in the > 'run' function in: > > > distutils/command/install_data.py > > - on windows it the first part has been changed to '' unlike on > linux where it's the absolute path I set... still not sure where it's > happening though. > > > > *This all took a while, as rebuilt VM and verified on 2.7.8 and > 2.7.9.. > > S++ > > > > >> On Monday, March 9, 2015 12:17 AM, Stuart Axon >> wrote: >> > I had a further look - and on windows the file ends up inside the >> .egg file, on >> linux it ends up inside the site packages as intended. >> >> >> At a guess it seems like there might be a bug in the path handling >> on windows. >> .. I wonder if it's something like this >> >> >> http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python >> >> which seems an easy way to get an off-by-one error in a path ? >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Mar 16 02:05:45 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 16 Mar 2015 14:05:45 +1300 Subject: [Distutils] setup_requires for dev environments Message-ID: PEP 426 addresses build requirements for distributions of Python code, but doesn't directly help with development environments. It seems to me that if we help development environments, that would be nice - and any explicit metadata there can obviously be reflected into PEP-426 data in future. For context, the main use I have for setup_requires these days is projects with a version contained within the project, and for the use of pbr in openstack (and some other git hosted) projects. Consider e.g. unittest2, which has its version information in one place inside the package; but setup imports unittest2 to get at that, so all the dependencies become setup_requires entries :(. 
I may change that to exec which Donald suggested on IRC [I'd been pondering something similar for a bit - but was thinking of putting e.g. a json file in the package and then reading that for version data]. testtools has a similar bunch of logic in setup.py. The openstack projects have a nice solution I think, which is that they write the egg metadata file and then read that back - both at runtime via pbr helpers and at build time when pbr takes over the build. The problem with that, of course, is that pbr then becomes a setup_requires itself. So, I'm wondering if we can do something fairly modest to make setup_requires usage nicer for devs, who won't benefit from PEP-426 work, but share all the same issues. E.g. pip install git://... / pip install filepath / pip install -e filepath should be able to figure out the setup_requires and have things Just Work.

Something like:
- teach pip to read setup_requires from setup.cfg

setuptools doesn't need to change - it will still try to check its own setup_requires, and if an older pip had been used, that will trigger easy_install as it does currently. There's a small amount of duplicate work in the double checking, but that's tolerable IMO. We could go further and also teach setuptools how to do that, e.g. you'd put setup_requires='setuptools>someX' in setup.py and your real setup_requirements in setup.cfg. That would be better as it would avoid double-handling, but we'd need some complex mojo to make things work when setuptools decides to self-upgrade :( - so I'm inclined to stay with the bare bones solution for now. Thoughts?
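Concretely, the reading side of that idea could be very small. Here is a rough sketch, assuming a hypothetical `setup-requires` key under `[metadata]` in setup.cfg; neither the key nor the helper below is an existing pip or setuptools feature:

```python
# Sketch: how an installer could read build requirements from setup.cfg
# before ever executing setup.py. The [metadata] setup-requires key and
# this helper are illustrative, not an existing pip/setuptools feature.
from configparser import ConfigParser


def read_setup_requires(path="setup.cfg"):
    """Return the declared requirement specifiers, or [] if none."""
    parser = ConfigParser()
    parser.read([path])
    raw = parser.get("metadata", "setup-requires", fallback="")
    # One specifier per line, continuation lines indented:
    #   [metadata]
    #   setup-requires = cffi
    #       pip
    #       pycparser >= 2.10
    return [line.strip() for line in raw.splitlines() if line.strip()]
```

An installer could hand this list to its normal resolver (installing into a temporary directory, say) before it imports or runs setup.py at all, which is exactly the "figure out the setup_requires and have things Just Work" step.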
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ben+python at benfinney.id.au Mon Mar 16 03:17:06 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 16 Mar 2015 13:17:06 +1100 Subject: [Distutils] setup_requires for dev environments References: Message-ID: <85fv95u1nh.fsf@benfinney.id.au> Robert Collins writes: > For context, the main use I have for setup_requires these days is > projects with a version contained within the project, and for the use > of pbr in openstack (and some other git hosted) projects. […] > The openstack projects have a nice solution I think, which is that > they write the egg metadata file and then read that back - both at > runtime via pbr helpers and at build time when pbr takes over the > build. I'm not using “pbr”, but yes, a big bundle of “create an egg-info file and read it back” custom Setuptools code was the clumsy solution I ended up with for this in “python-daemon” 2.x. If that boilerplate could be removed, and “this is a dependency for build actions only” could just work for all users, I would be quite happy. -- \ “The problem with television is that the people must sit and | `\ keep their eyes glued on a screen: the average American family | _o__) hasn't time for it.” —_The New York Times_, 1939 | Ben Finney From ncoghlan at gmail.com Mon Mar 16 08:36:10 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 16 Mar 2015 17:36:10 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: Message-ID: On 16 March 2015 at 11:05, Robert Collins wrote: > PEP 426 addresses build requirements for distributions of Python code, > but doesn't directly help with development environments. It's supposed to, but updating the relevant section of the PEP has been lingering on my todo list for a while now. Short version is that you'll be able to do "pip install package[-:self:]" in order to get all the build, dev and runtime dependencies without installing the package itself.
It hasn't been a priority since PEP 440 was the focus of the last pip/setuptools release, and Warehouse & TUF have been higher priority since then. So I agree it would be worthwhile to figure out an interim improvement, but don't have a strong opinion on what that should look like. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stuaxo2 at yahoo.com Mon Mar 16 10:02:12 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Mon, 16 Mar 2015 09:02:12 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1426146883.26686.0@smtp.mail.yahoo.com> References: <1426146883.26686.0@smtp.mail.yahoo.com> Message-ID: <1434424866.156842.1426496532846.JavaMail.yahoo@mail.yahoo.com> Hi All, This, and another memory-leak bug, were triggered by the sandbox. Would it be possible to either add an API to exempt files, or just allow writing within site packages, even if just for .pth files? I'm monkey-patching around these for now: https://github.com/stuaxo/vext/blob/master/setup.py#L16 S++

On Thursday, March 12, 2015 2:54 PM, Stuart Axon wrote:
> For closure: The solution was to make a Command class + implement
> finalize_options to fix up the paths in distribution.data_files.
> [rest of quote trimmed; it repeats the "For closure" message above in full]

From donald at stufft.io Mon Mar 16 16:06:36 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 11:06:36 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: Message-ID: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> > On Mar 15, 2015, at 9:05 PM, Robert Collins wrote: > > PEP 426 addresses build requirements for distributions of Python code, > but doesn't directly help with development environments. > > It seems to me that if we help development environments, that would be > nice - and any explicit metadata there can obviously be reflected into > PEP-426 data in future. > > For context, the main use I have for setup_requires these days is > projects with a version contained within the project, and for the use > of pbr in openstack (and some other git hosted) projects. > > Consider e.g. unittest2, which has its version information in one > place inside the package; but setup imports unittest2 to get at that, > so all the dependencies become setup_requires entries :(. I may change > that to exec which Donald suggested on IRC [I'd been pondering > something similar for a bit - but was thinking of putting e.g. a json > file in the package and then reading that for version data]. > > testtools has a similar bunch of logic in setup.py.
> > The openstack projects have a nice solution I think, which is that > they write the egg metadata file and then read that back - both at > runtime via pbr helpers and at build time when pbr takes over the > build. > > The problem with that, of course, is that pbr then becomes a > setup_requires itself. > > So, I'm wondering if we can do something fairly modest to make > setup_requires usage nicer for devs, who won't benefit from PEP-426 > work, but share all the same issues. E.g. pip install git://... / pip > install filepath / pip install -e filepath should be able to figure > out the setup_requires and have things Just Work. > > Something like: > - teach pip to read setup_requires from setup.cfg > > setuptools doesn't need to change - it will still try to check its own > setup_requires, and if an older pip had been used, that will trigger > easy_install as it does currently. There's a small amount of duplicate > work in the double checking, but thats tolerable IMO. > > We could go further and also teach setuptools how to do that, e.g. you'd put > setup_requires='setuptools>someX' in setup.py > and your real setup_requirements in setup.cfg. > > That would be better as it would avoid double-handling, but we'd need > some complex mojo to make things work when setuptools decides to > self-upgrade :( - so I'm inclined to stay with the bare bones solution > for now. > > Thoughts? I've been thinking about this proposal this morning, and my primary question is what exactly is the pain that is being caused right now, and how does this proposal help it? Is the pain that setuptools is doing the installation instead of pip? Is that pain that the dependencies are being installed into a .eggs directory instead of globally? Is it something else? 
I'm hesitant to want to add another pseudo standard on top of the pile of implementation-defined pseudo standards we already have, especially without fully understanding what the underlying pain point actually is and how the proposal addresses it. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Mon Mar 16 16:24:48 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 11:24:48 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: We could support this syntax right now. It's so simple. Don't deride it as a pseudo standard, turn it into an actual standard and praise it as something practical that will not take years to implement. Then after those years have passed and the new PEP actually works and has a distutils replacement to drive it, deprecate the old standard. If you can come up with something better that can ship before 2016, by all means.

[metadata]
setup-requires = cffi
    pip
    pycparser >= 2.10

https://bitbucket.org/dholth/setup-requires

On Mon, Mar 16, 2015 at 11:06 AM, Donald Stufft wrote: > >> On Mar 15, 2015, at 9:05 PM, Robert Collins wrote: >> >> PEP 426 addresses build requirements for distributions of Python code, >> but doesn't directly help with development environments. >> >> It seems to me that if we help development environments, that would be >> nice - and any explicit metadata there can obviously be reflected into >> PEP-426 data in future. >> >> For context, the main use I have for setup_requires these days is >> projects with a version contained within the project, and for the use >> of pbr in openstack (and some other git hosted) projects.
>> >> Consider e.g. unittest2, which has its version information in one >> place inside the package; but setup imports unittest2 to get at that, >> so all the dependencies become setup_requires entries :(. I may change >> that to exec which Donald suggested on IRC [I'd been pondering >> something similar for a bit - but was thinking of putting e.g.a json >> file in the package and then reading that for version data]. >> >> testtools has a similar bunch of logic in setup.py. >> >> The openstack projects have a nice solution I think, which is that >> they write the egg metadata file and then read that back - both at >> runtime via pbr helpers and at build time when pbr takes over the >> build. >> >> The problem with that, of course, is that pbr then becomes a >> setup_requires itself. >> >> So, I'm wondering if we can do something fairly modest to make >> setup_requires usage nicer for devs, who won't benefit from PEP-426 >> work, but share all the same issues. E.g. pip install git://... / pip >> install filepath / pip install -e filepath should be able to figure >> out the setup_requires and have things Just Work. >> >> Something like: >> - teach pip to read setup_requires from setup.cfg >> >> setuptools doesn't need to change - it will still try to check its own >> setup_requires, and if an older pip had been used, that will trigger >> easy_install as it does currently. There's a small amount of duplicate >> work in the double checking, but thats tolerable IMO. >> >> We could go further and also teach setuptools how to do that, e.g. you'd put >> setup_requires='setuptools>someX' in setup.py >> and your real setup_requirements in setup.cfg. >> >> That would be better as it would avoid double-handling, but we'd need >> some complex mojo to make things work when setuptools decides to >> self-upgrade :( - so I'm inclined to stay with the bare bones solution >> for now. >> >> Thoughts? 
> > I've been thinking about this proposal this morning, and my primary question > is what exactly is the pain that is being caused right now, and how does this > proposal help it? Is the pain that setuptools is doing the installation instead > of pip? Is that pain that the dependencies are being installed into a .eggs > directory instead of globally? Is it something else? > > I'm hesitant to want to add another pseudo standard on top of the pile of > implementation-defined pseudo standards we already have, especially without > fully understanding what the underlying pain point actually is and how the > proposal addresses it. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From donald at stufft.io Mon Mar 16 17:03:12 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 12:03:12 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: > On Mar 16, 2015, at 11:24 AM, Daniel Holth wrote: > > We could support this syntax right now. It's so simple. Don't deride > it as a pseudo standard, turn it into an actual standard and praise it > as something practical that will not take years to implement. Then > after those years have passed and the new PEP actually works and has a > distutils replacement to drive it, deprecate the old standard. > > If you can come up with something better that can ship before 2016, by > all means. >
> [metadata]
> setup-requires = cffi
>     pip
>     pycparser >= 2.10
>
> https://bitbucket.org/dholth/setup-requires

It is a pseudo standard unless it is backed by a defined standard. That's not a derision, it's just a fact. The first step is to determine *what* the problem is that it's actually attempting to solve.
That's not clear to me currently other than some vague statements about pain, but what pain? What's actually occurring and how does this address those problems? After figuring out what the actual problem is, we can look at the proposed solution and see how well it actually solves that problem, if there is maybe a better solution to the problem, and if the benefits outweigh the costs or not. The ease of implementation is not the only factor in deciding if something is a good idea or not. We have to take into account forwards and backwards compatibility. If we implement it and people start to depend on it then it's something that's going to have to exist forever, and any new installer is going to have to replicate that behavior. If people don't depend on it then implementing it was a waste of time and effort. For instance, if the problem is "when setuptools does the install, then things get installed differently, with different options, SSL certs, proxies, etc" then I think a better solution is that pip does terrible hacks in order to forcibly take control of setup_requires from setuptools and installs them into a temporary directory (or something like that). That is something that would require no changes on the part of authors or people installing software, and is backwards compatible with everything that's already been published using setup_requires. That's the primary problem that I'm aware of. If I try and guess at other problems people might be solving, one might be that in order to use setup_requires you have to delay your imports until after the setup_requires get processed. This typically means you do things like imports inside of functions that get called as part of the setup.py build/install process. This isn't the most fun way to write software, however it works.
Specifying the setup_requires in a static location outside setup.py would enable pip to then install those things into a temporary directory prior to executing the setup.py which then lets you do imports and other related work at the module scope of the setup.py. This particular problem I'm not sure it's worth fixing with a stopgap solution. It would require breaking the entire existing install base of installation tools if anyone actually took advantage of this fact, which I don't think is generally worth it to have slightly nicer use of things in your setup.py (essentially allowing you to import at the top level and not require subclassing command classes). So yea, what's the actual problem that this is attempting to solve? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Mon Mar 16 17:32:44 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 12:32:44 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID:

Problem: Users would like to be able to import stuff in setup.py. This could be anything from a version fetcher to a replacement for distutils itself. However, if setup.py is the only place to specify these requirements there's a bit of a chicken and egg problem, unless they have unusually good setuptools knowledge, especially if you want to replace the entire setup() implementation.

Problem: Having easy_install do it is not what people want and misses some important use cases.

Problem: Based on empirical evidence PEP 426 will never be done. Its current purpose is to shut down discussion of pragmatic solutions.

Solution: Add requirements to setup.cfg, installed by pip before setup.py is touched.
Old pip: requirements will not be installed. This is what happens now if anyone tries to use a non-stdlib module in setup.py, and plenty of packages do. User will have to install the extra requirements manually before running setup.py. Proposed pip: requirements will be installed. Hooray! Result: Users will begin writing packages that only work with new pip. If we implement this, users will do the same thing they are already doing (import non-stdlib packages inside setup.py), only more often. On Mon, Mar 16, 2015 at 12:03 PM, Donald Stufft wrote: > >> On Mar 16, 2015, at 11:24 AM, Daniel Holth wrote: >> >> We could support this syntax right now. It's so simple. Don't deride >> it as a pseudo standard, turn it into an actual standard and praise it >> as something practical that will not take years to implement. Then >> after those years have passed and the new PEP actually works and has a >> distutils replacement to drive it, deprecate the old standard. >> >> If you can come up with something better that can ship before 2016, by >> all means. >> >> [metadata] >> setup-requires = cffi >> pip >> pycparser >= 2.10 >> >> >> >> https://bitbucket.org/dholth/setup-requires > > It is a psuedo standard unless it is backed by a defined standard. That's not > a derision, it's just a fact. > > The first step is to determine *what* the problem is that it's actually > attempting to solve. That's not clear to me currently other than some vague > statements about pain, but what pain? What's actually occuring and how does > this address those problems? > > After figuring out what the actual problem is, we can look at the proposed > solution and see how well it actually solves that problem, if there is maybe > a better solution to the problem, and if the benefits outweigh the costs or > not. > > The ease of implementation is not the only factor in deciding if something is > a good idea or not. We have to take into account forwards and backwards > compatiblity. 
If we implement it and people start to depend on it then it's > something that's going to have to exist forever, and any new installer is going > to have to replicate that behavior. If people don't depend on it then > implementing it was a waste of time and effort. > > For instance, if the problem is "when setuptools does the install, then things > get installed differently, with different options, SSL certs, proxies, etc" > then I think a better solution is that pip does terrible hacks in order to > forcibly take control of setup_requires from setuptools and installs them into > a temporary directory (or something like that). That is something that would > require no changes on the part of authors or people installing software, and > is backwards compatible with everything that's already been published using > setup_requires. That's the primary problem that I'm aware of. > > If I try and guess at other problems people might be solving, one might be > that in order to use setup_requires you have to delay your imports until after > the setup_requires get processed. This typically means you do things like > imports inside of functions that get called as part of the setup.py > build/install process. This isn't the most fun way to write software, however > it works. Specifying the setup_requires in a static location outside would > enable pip to then install those things into a temporary directory prior to > executing the setup.py which then lets you do imports and other related work > at the module scope of the setup.py. This particular problem I'm not sure it's > worth fixing with a stop gap solution. It would require breaking the entire > existing install base of installation tools if anyone actually took advantage > of this fact, which I don't think is generally worth it to have slightly nicer > use of things in your setup.py (essentially allowing you to import at the top > level and not require subclassing command classes). 
> > So yea, what's the actual problem that this is attempting to solve? > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From Steve.Dower at microsoft.com Mon Mar 16 17:35:15 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Mon, 16 Mar 2015 16:35:15 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: Donald Stufft wrote: > So yea, what's the actual problem that this is attempting to solve? ISTM (whether this is the actual intent or not) that this would be handy to differentiate between the dependencies needed when installing from a wheel vs. an sdist. Daniel's example of setup_requires including cython suggests to me that a wheel would include the compiled output and cython is not required in that case. I don't personally have a use case for this right now, though it does seem like it has potential to refer to a Python package that acts as a front-end for a compiler (and perhaps downloader/installer... hmm...) Cheers, Steve From dholth at gmail.com Mon Mar 16 18:04:02 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 13:04:02 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: The problem with a no-stopgaps policy is that the non-stopgap solution has to be incredible to ever be greater than the accrued debt of ((current pain - reduced pain from stopgap) * all python users * years until non-stopgap) - (maintenance/documentation hassle * years since stopgap implemented * everyone who has to deal with it), and we do not know how great the non-stopgap will be. On Mon, Mar 16, 2015 at 12:35 PM, Steve Dower wrote: > Donald Stufft wrote: >> So yea, what's the actual problem that this is attempting to solve? 
> > ISTM (whether this is the actual intent or not) that this would be handy to differentiate between the dependencies needed when installing from a wheel vs. an sdist. Daniel's example of setup_requires including cython suggests to me that a wheel would include the compiled output and cython is not required in that case. > > I don't personally have a use case for this right now, though it does seem like it has potential to refer to a Python package that acts as a front-end for a compiler (and perhaps downloader/installer... hmm...) > > Cheers, > Steve From donald at stufft.io Mon Mar 16 18:14:31 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 13:14:31 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> > On Mar 16, 2015, at 12:32 PM, Daniel Holth wrote: > > Problem: Users would like to be able to import stuff in setup.py. This > could be anything from a version fetcher to a replacement for > distutils itself. However, if setup.py is the only place to specify > these requirements there's a bit of a chicken and egg problem, unless > they have unusually good setuptools knowledge, especially if you want > to replace the entire setup() implementation. So you *can* import things inside of a setup.py today, you just have to delay the imports by subclassing a command. You can see an example of doing this with the example command given for pytest in the documentation for pytest [1]. So this problem essentially boils down to people wanting to import at the module scope of their setup.py instead of needing to delay the import for it. For this particular problem I believe the solution is worse than the problem. There is a supported solution *today* they can use and it works and importantly it works in all versions of pip and setuptools that I'm aware of. It's also going to continue to work for years and years.
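The delayed-import pattern being described has the same shape as the custom test command in the pytest docs: the import of the build-time dependency happens inside the command's method, which setuptools only runs after setup_requires has been processed. A sketch, in which the `build_docs` command and the `sphinx` dependency are purely illustrative:

```python
# Sketch of the "delay the import by subclassing a command" pattern.
# The build_docs command and the sphinx dependency are illustrative only.
from setuptools import Command


class BuildDocs(Command):
    """Hypothetical `python setup.py build_docs` command."""

    description = "build the project documentation"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Deferred import: a setup_requires dependency is only guaranteed
        # to be importable once setuptools has processed setup_requires,
        # i.e. here inside the command, not at module scope in setup.py.
        import sphinx  # hypothetical build-time dependency
        print("building docs with sphinx", sphinx.__version__)


# In setup.py this would be wired up along the lines of:
#   setup(..., setup_requires=["sphinx"],
#         cmdclass={"build_docs": BuildDocs})
```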
[1] http://pytest.org/latest/goodpractises.html#integration-with-setuptools-test-commands > > Problem: Having easy_install do it is not what people want and misses > some important use cases. This problem I'm aware of, and as I said in my previous email I believe a better interim solution to this problem is to have pip forcibly take control over the setup_requires inside of a setup.py. This has the advantage of requiring nobody to make any changes to their packages so it'll work on all new and existing projects that rely on setup_requires and it's completely self contained within pip. > > Problem: Based on empirical evidence PEP 426 will never be done. Its > current purpose is to shut down discussion of pragmatic solutions. This is just FUD and I would appreciate it if you'd stop repeating it. It's only been ~3 months since PEP 440 was completed and released inside of pip, setuptools, and PyPI. I've since switched back over to focusing primarily on getting Warehouse ready to replace PyPI so the bulk of my time is being spent focusing on that. After that's done my plan is to likely switch back to working on putting PEP 426 through the same hard look that I put PEP 440 through to try and iron out as many problems as I can find before implementing it and pushing it out to people. The bulk of the effort of pushing the standards, pip, and PyPI through is done by a handful of people, and of those handful I believe that the largest share is done by myself. That's not to toot my own horn or any such nonsense but to simply state the fact that the available bandwidth of people able and willing to work on problems is low. However the things we bless here as official are things which need to be able to last for a decade or more, which means that they do need careful consideration before we bless them.
This is what happens now > if anyone tries to use a non-stdlib module in setup.py, and plenty of > packages do. User will have to install the extra requirements manually > before running setup.py. > > Proposed pip: requirements will be installed. Hooray! > > Result: Users will begin writing packages that only work with new pip. > > If we implement this, users will do the same thing they are already > doing (import non-stdlib packages inside setup.py), only more often. > > > On Mon, Mar 16, 2015 at 12:03 PM, Donald Stufft wrote: >> >>> On Mar 16, 2015, at 11:24 AM, Daniel Holth wrote: >>> >>> We could support this syntax right now. It's so simple. Don't deride >>> it as a pseudo standard, turn it into an actual standard and praise it >>> as something practical that will not take years to implement. Then >>> after those years have passed and the new PEP actually works and has a >>> distutils replacement to drive it, deprecate the old standard. >>> >>> If you can come up with something better that can ship before 2016, by >>> all means. >>> >>> [metadata] >>> setup-requires = cffi >>> pip >>> pycparser >= 2.10 >>> >>> >>> >>> https://bitbucket.org/dholth/setup-requires >> >> It is a pseudo standard unless it is backed by a defined standard. That's not >> a derision, it's just a fact. >> >> The first step is to determine *what* the problem is that it's actually >> attempting to solve. That's not clear to me currently other than some vague >> statements about pain, but what pain? What's actually occurring and how does >> this address those problems? >> >> After figuring out what the actual problem is, we can look at the proposed >> solution and see how well it actually solves that problem, if there is maybe >> a better solution to the problem, and if the benefits outweigh the costs or >> not. >> >> The ease of implementation is not the only factor in deciding if something is >> a good idea or not. We have to take into account forwards and backwards >> compatibility.
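The `[metadata] setup-requires` snippet quoted above is plain setup.cfg/INI syntax, so an installer could read it with nothing more than the standard library. A minimal sketch (section and key names taken from Daniel's proposal; the parsing details are assumed, not specified anywhere):

```python
import configparser

# The example from Daniel's proposal, as it would appear in setup.cfg.
SETUP_CFG = """\
[metadata]
setup-requires = cffi
    pip
    pycparser >= 2.10
"""

parser = configparser.ConfigParser()
parser.read_string(SETUP_CFG)

# configparser folds the indented continuation lines into one value;
# split it back into one requirement specifier per line.
raw = parser.get("metadata", "setup-requires")
setup_requires = [line.strip() for line in raw.splitlines() if line.strip()]
print(setup_requires)
```

An installer could then feed each specifier to its normal resolution machinery before ever executing setup.py.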
If we implement it and people start to depend on it then it's >> something that's going to have to exist forever, and any new installer is going >> to have to replicate that behavior. If people don't depend on it then >> implementing it was a waste of time and effort. >> >> For instance, if the problem is "when setuptools does the install, then things >> get installed differently, with different options, SSL certs, proxies, etc" >> then I think a better solution is that pip does terrible hacks in order to >> forcibly take control of setup_requires from setuptools and installs them into >> a temporary directory (or something like that). That is something that would >> require no changes on the part of authors or people installing software, and >> is backwards compatible with everything that's already been published using >> setup_requires. That's the primary problem that I'm aware of. >> >> If I try and guess at other problems people might be solving, one might be >> that in order to use setup_requires you have to delay your imports until after >> the setup_requires get processed. This typically means you do things like >> imports inside of functions that get called as part of the setup.py >> build/install process. This isn't the most fun way to write software, however >> it works. Specifying the setup_requires in a static location outside would >> enable pip to then install those things into a temporary directory prior to >> executing the setup.py which then lets you do imports and other related work >> at the module scope of the setup.py. I'm not sure this particular problem is >> worth fixing with a stopgap solution. It would require breaking the entire >> existing install base of installation tools if anyone actually took advantage >> of this fact, which I don't think is generally worth it to have slightly nicer >> use of things in your setup.py (essentially allowing you to import at the top >> level and not require subclassing command classes).
>> >> So yea, what's the actual problem that this is attempting to solve? >> >> --- >> Donald Stufft >> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Mon Mar 16 18:17:47 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 13:17:47 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> > On Mar 16, 2015, at 1:04 PM, Daniel Holth wrote: > > The problem with a no-stopgaps policy is that the non-stopgap solution > has to be incredible to ever be greater than the accrued debt of > ((current pain - reduced pain from stopgap) * all python users * years > until non-stopgap) - (maintenance/documentation hassle * years since > stopgap implemented * everyone who has to deal with it), and we do not > know how great the non-stopgap will be. > There is not a "no stopgaps" policy. There is a "stopgaps must be carefully considered" policy. Stopgaps which don't rely on end users needing to do anything in particular to use them and which pay attention to backwards and forward compatibility are better than stopgaps that introduce new APIs/user facing features. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Mon Mar 16 19:33:19 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 16 Mar 2015 18:33:19 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: On 16 March 2015 at 17:14, Donald Stufft wrote: > The bulk of the effort of pushing the standards, pip, and PyPI through is done > by a handful of people, and of those handful I believe that the largest share > is done by myself. That's not to toot my own horn or any such nonsense but to > simply state the fact that the available bandwidth of people able and willing > to work on problems is low. However the things we bless here as official are > things which need to be able to last for a decade or more, which means that > they do need careful consideration before we bless them. As a serious question - is there anything I (or indeed anyone else) can do to make progress on PEP 426? If I'm honest, right now I don't exactly see what tool changes are needed to change it from draft to accepted to actually implemented. As far as I can see, acceptance consists largely of someone, somehow, confirming that there are no major loopholes in the spec. I think that mostly comes down to the fact that no-one has raised objections since the PEP was published, plus someone with experience of some of the more difficult distribution scenarios sanity-checking things. And then, getting it implemented in tools. I guess the tool side consists of: 1. Making pip write a pydist.json file when installing from wheel. 2. Making setuptools write pydist.json when installing from sdist, and when creating a sdist. 3. Making wheel write pydist.json when writing a wheel. 
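For concreteness, the pydist.json file Paul's three tool changes revolve around would be ordinary JSON; the sketch below builds a minimal one in Python. The field names are modeled on the PEP 426 draft as I read it, and the dependency syntax is illustrative, not authoritative:

```python
import json

# Illustrative only: keys modeled on the PEP 426 draft metadata.
pydist = {
    "metadata_version": "2.0",
    "name": "example",
    "version": "1.0",
    "summary": "An example distribution",
    "run_requires": [{"requires": ["requests (>=2.0)"]}],
}

# This is roughly what a tool would write into {dist-info}/pydist.json.
print(json.dumps(pydist, indent=2, sort_keys=True))
```

The point of the list items above is simply that pip, setuptools, and wheel would each emit this same file at their respective stages (install from wheel, sdist creation/install, wheel creation).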
(Also, when distlib writes a wheel, it should write pydist.json, but I'm considering distlib as "non-core" for the sake of this discussion). There's also presumably work to add support for specifying some of the new metadata in setup.py, which I guess is setuptools work again. Have I missed anything crucial? Paul PS I'm ignoring the "standard metadata extensions" PEP where console wrappers, and post-install scripts figure. Those are probably bigger design issues. From dholth at gmail.com Mon Mar 16 19:53:15 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 14:53:15 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> Message-ID: No one should be asked to learn how to extend distutils, and in practice no one knows how. People have been begging for years for working setup_requires, far longer than I've been interested in it, and all they want to do is import fetch_version setup(version=fetch_version(), ...) Then they will eventually notice setup_requires has never worked the way most people expect. As a result there are too few setup.py abstractions. The other proposal is a little bit interesting. Parse setup.py without running it, extract setup_requires, and pip install locally before running the file? It would be easy as long as the setup_requires were defined as a literal list in the setup() call, but you would have to tell people they were not allowed to write normal clever Python code. I think the gotchas would be severe... Release a setuptools command class that actually works with regular setup_requires, and parses setup_requires out of a side file? But fails fetch_version case... 
The main reason the installer should handle setup_requires instead of setup.py is related to one of the major goals of packaging, which is to get setup.py out of the business of installing (it is OK if it is a build script). Would you be interested in a JSON-format metadata that you were willing to support long term, with a quick version 0.1 release that only adds the setup_requires feature? From donald at stufft.io Mon Mar 16 20:09:16 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 15:09:16 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: <0E732BE6-26F9-407A-93FF-D88F7E4A9F74@stufft.io> > On Mar 16, 2015, at 2:33 PM, Paul Moore wrote: > > On 16 March 2015 at 17:14, Donald Stufft wrote: >> The bulk of the effort of pushing the standards, pip, and PyPI through is done >> by a handful of people, and of those handful I believe that the largest share >> is done by myself. That's not to toot my own horn or any such nonsense but to >> simply state the fact that the available bandwidth of people able and willing >> to work on problems is low. However the things we bless here as official are >> things which need to be able to last for a decade or more, which means that >> they do need careful consideration before we bless them. > > As a serious question - is there anything I (or indeed anyone else) > can do to make progress on PEP 426? If I'm honest, right now I don't > exactly see what tool changes are needed to change it from draft to > accepted to actually implemented. > > As far as I can see, acceptance consists largely of someone, somehow, > confirming that there are no major loopholes in the spec. 
I think that > mostly comes down to the fact that no-one has raised objections since > the PEP was published, plus someone with experience of some of the > more difficult distribution scenarios sanity-checking things. And > then, getting it implemented in tools. > > I guess the tool side consists of: > > 1. Making pip write a pydist.json file when installing from wheel. > 2. Making setuptools write pydist.json when installing from sdist, and > when creating a sdist. > 3. Making wheel write pydist.json when writing a wheel. > > (Also, when distlib writes a wheel, it should write pydist.json, but > I'm considering distlib as "non-core" for the sake of this > discussion). > > There's also presumably work to add support for specifying some of the > new metadata in setup.py, which I guess is setuptools work again. > > Have I missed anything crucial? > Paul > > PS I'm ignoring the "standard metadata extensions" PEP where console > wrappers, and post-install scripts figure. Those are probably bigger > design issues. Probably similar to what I did for PEP 440. Start branches implementing it and try to run as much stuff through it as possible to see how it fails. When implementing PEP 440 I was running every version number that existed on PyPI through it to test to make sure things would work. That's not going to be possible with PEP 426, but ideally we should be able to get branches in the various tools that work on it, grab the latest versions of the top N packages (or the latest versions of everything) and compare the results. Another thing is determining if there's anything else we can/should split out from PEP 426 to narrow the scope. A quick skim to refresh myself doesn't show me anything that stands out, but some thought to this wouldn't hurt.
PEP 426 itself isn't much in the way of groundbreaking. It's basically taking the dynamic metadata and "compiling" it down into a static form which is JSON based. The "easy" metadata (name etc) is a no brainer and probably doesn't require much poking, trying out an example of an extension probably wouldn't be the worst thing either. Even if stanardizing script wrappers, for instance, is held off, getting a demo extension using it would validate the standard. Potentially more thought/guidance should be mentioned in how projects should straddle the line between legacy systems. If a new style metadata exists in a format in addition to an old syle should the new style be preferred? Should there be any sanity checks to make sure the two aren't completely different? As an aside, one thing I've had in the back of my mind, is looking at the possibility of defining the environment markers which specify versions as PEP 440 versions and enabling all the same comparisons with them as we get in the specifiers. I think the string based comparison we currently do is kinda janky and any installer already has the tooling to handle specifiers. Generally what I did with PEP 440, which I think worked well, is I had everything pretty much implemented prior to it get accepting, and we were then able to use the things I found out while implementing it to adjust the PEP before it was formally accepted. I just didn?t merge anything until that point. This was pretty valuable in findings things where the PEP was too vague for someone to make an indepdent implementation going by what was in the PEP, or if it specified something where implementing it turned out to be hard/problematic, etc. A major reason why I?m personally focusing on Warehouse first is that integrating with Warehouse will be easier than integrating with PyPI legacy. 
However that doesn't have to block anyone else, that's just myself not wanting to spend the time integrating a new major metadata version into the old legacy code base. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Mon Mar 16 20:19:53 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 15:19:53 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> Message-ID: <738257F9-785E-45B4-9BD4-3AB4406AA990@stufft.io> > On Mar 16, 2015, at 2:53 PM, Daniel Holth wrote: > > No one should be asked to learn how to extend distutils, and in > practice no one knows how. Some people know how, pytest figured it out, pbr figured it out. There's some documentation at https://pythonhosted.org/setuptools/setuptools.html#extending-and-reusing-setuptools It is true though that it's not particularly great documentation and actually doing it is somewhat of an annoyance. > > People have been begging for years for working setup_requires, far longer than I've been interested in it, and all they want to do is > > import fetch_version > setup(version=fetch_version(), ...) > > Then they will eventually notice setup_requires has never worked the way most people expect. As a result there are too few setup.py abstractions. > > The other proposal is a little bit interesting. Parse setup.py without running it, extract setup_requires, and pip install locally before running the file? It would be easy as long as the setup_requires were defined as a literal list in the setup() call, but you would have to tell people they were not allowed to write normal clever Python code.
> I think the gotchas would be severe... I wasn't going to try and parse the setup.py, I was going to execute it. Here's a proof of concept I started on a while back to try and validate the overall idea: https://github.com/pypa/pip/compare/develop...dstufft:eldritch-horror Since then I've thought about it more and decided it'd probably be better to instead of trying to shuffle arguments into the subprocess, have the subprocess write out into a file or stdout or something what all of the setup_requires are. This would require executing the setup.py 3x instead of 2x like pip is currently doing. This would also enable people to do something like: try: import fetch_version except ImportError: def fetch_version(): return "UNKNOWN" setup(version=fetch_version(), ...) If they are happy with mandating that their thing can only be installed from sdist with pip newer than X, because given a three pass installation (once to discover setup_requires, once to write egg_info, once to actually install) as long as the setup_requires list doesn't rely on anything installed then the first pass can have no real information except the setup_requires. It actually wouldn't even really be completely broken, it'd just have a nonsense version number (or whatever piece of metadata can't be located). > > Release a setuptools command class that actually works with regular > setup_requires, and parses setup_requires out of a side file? But > fails fetch_version case... > > The main reason the installer should handle setup_requires instead of > setup.py is related to one of the major goals of packaging, which is > to get setup.py out of the business of installing (it is OK if it is a > build script). > > Would you be interested in a JSON-format metadata that you were > willing to support long term, with a quick version 0.1 release that > only adds the setup_requires feature?
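The "subprocess writes out the setup_requires" idea sketched above might look something like the following. Everything here (the `--dump-setup-requires` flag, the JSON handoff on stdout) is hypothetical, invented for illustration, and not how pip actually behaves:

```python
import json
import os
import subprocess
import sys
import tempfile
import textwrap

# A toy setup.py whose first pass only reports its setup_requires.
TOY_SETUP_PY = textwrap.dedent("""\
    import json, sys

    SETUP_REQUIRES = ["cffi", "pycparser >= 2.10"]

    if "--dump-setup-requires" in sys.argv:
        # Hypothetical first pass: emit setup_requires and stop
        # before doing any real work.
        print(json.dumps(SETUP_REQUIRES))
        sys.exit(0)

    # ... a normal setup() call would follow here ...
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "setup.py")
    with open(path, "w") as f:
        f.write(TOY_SETUP_PY)
    # The installer's "discovery" pass: run the file, capture stdout.
    result = subprocess.run(
        [sys.executable, path, "--dump-setup-requires"],
        capture_output=True, text=True, check=True,
    )
    setup_requires = json.loads(result.stdout)

print(setup_requires)
```

The installer would then install those requirements into a temporary location before the egg_info and install passes, which is the three-pass flow Donald describes.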
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Mon Mar 16 20:32:56 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 15:32:56 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <738257F9-785E-45B4-9BD4-3AB4406AA990@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> <738257F9-785E-45B4-9BD4-3AB4406AA990@stufft.io> Message-ID: On Mon, Mar 16, 2015 at 3:19 PM, Donald Stufft wrote: > >> On Mar 16, 2015, at 2:53 PM, Daniel Holth wrote: >> >> No one should be asked to learn how to extend distutils, and in >> practice no one knows how. > > Some people know how, pytest figured it out, pbr figured it out. There's > some documentation at https://pythonhosted.org/setuptools/setuptools.html#extending-and-reusing-setuptools > > It is true though that it's not particularly great documentation and > actually doing it is somewhat of an annoyance. > >> >> People have been begging for years for working setup_requires, far >> longer than I've been interested in it, and all they want to do is >> >> import fetch_version >> setup(version=fetch_version(), ...) >> >> Then they will eventually notice setup_requires has never worked the >> way most people expect. As a result there are too few setup.py >> abstractions. >> >> The other proposal is a little bit interesting. Parse setup.py without >> running it, extract setup_requires, and pip install locally before >> running the file? It would be easy as long as the setup_requires were >> defined as a literal list in the setup() call, but you would have to >> tell people they were not allowed to write normal clever Python code. >> I think the gotchas would be severe...
> > I wasn't going to try and parse the setup.py, I was going to execute it. > > Here's a proof of concept I started on a while back to try and validate > the overall idea: https://github.com/pypa/pip/compare/develop...dstufft:eldritch-horror > > Since then I've thought about it more and decided it'd probably be better > to instead of trying to shuffle arguments into the subprocess, have the > subprocess write out into a file or stdout or something what all of the > setup_requires are. This would require executing the setup.py 3x instead > of 2x like pip is currently doing. > > This would also enable people to do something like: > > try: > import fetch_version > except ImportError: > def fetch_version(): > return "UNKNOWN" > > setup(version=fetch_version(), ...) > > If they are happy with mandating that their thing can only be installed from > sdist with pip newer than X, because given a three pass installation (once > to discover setup_requires, once to write egg_info, once to actually install) > as long as the setup_requires list doesn't rely on anything installed then > the first pass can have no real information except the setup_requires. > > It actually wouldn't even really be completely broken, it'd just have a nonsense > version number (or whatever piece of metadata can't be located). But it would still work in older pip if the setup requirements were installed already? Users would have to try/catch every import? Explain why this is better than reading out of a different file in the sdist, no matter the format? Would it let you change your setup_requires with Python code before the initial .egg-info is written out? We've already talked too many times about my 34-line setup.py prefix that does exactly what I want by parsing and installing requirements from a config file, but people tend to complain about it not being a pip feature.
If it would help, I could have it accept a different format, then it would be possible to publish backwards-compatible-with-pip sdists that parsed some preferable requirements format. If they were already installed the bw compat code would do nothing. https://bitbucket.org/dholth/setup-requires/src/03eda33c7681bc4102164c976e5a8cec3c4ffa9c/setup.py From dholth at gmail.com Mon Mar 16 20:34:42 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 15:34:42 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <0E732BE6-26F9-407A-93FF-D88F7E4A9F74@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <0E732BE6-26F9-407A-93FF-D88F7E4A9F74@stufft.io> Message-ID: You ought to be able to get away with not supporting it in warehouse, we will surely be able to write legacy .egg-info to sdists indefinitely. On Mon, Mar 16, 2015 at 3:09 PM, Donald Stufft wrote: > >> On Mar 16, 2015, at 2:33 PM, Paul Moore wrote: >> >> On 16 March 2015 at 17:14, Donald Stufft wrote: >>> The bulk of the effort of pushing the standards, pip, and PyPI through is done >>> by a handful of people, and of those handful I believe that the largest share >>> is done by myself. That's not to toot my own horn or any such nonsense but to >>> simply state the fact that the available bandwidth of people able and willing >>> to work on problems is low. However the things we bless here as official are >>> things which need to be able to last for a decade or more, which means that >>> they do need careful consideration before we bless them. >> >> As a serious question - is there anything I (or indeed anyone else) >> can do to make progress on PEP 426? If I'm honest, right now I don't >> exactly see what tool changes are needed to change it from draft to >> accepted to actually implemented. 
>> >> As far as I can see, acceptance consists largely of someone, somehow, >> confirming that there are no major loopholes in the spec. I think that >> mostly comes down to the fact that no-one has raised objections since >> the PEP was published, plus someone with experience of some of the >> more difficult distribution scenarios sanity-checking things. And >> then, getting it implemented in tools. >> >> I guess the tool side consists of: >> >> 1. Making pip write a pydist.json file when installing from wheel. >> 2. Making setuptools write pydist.json when installing from sdist, and >> when creating a sdist. >> 3. Making wheel write pydist.json when writing a wheel. >> >> (Also, when distlib writes a wheel, it should write pydist.json, but >> I'm considering distlib as "non-core" for the sake of this >> discussion). >> >> There's also presumably work to add support for specifying some of the >> new metadata in setup.py, which I guess is setuptools work again. >> >> Have I missed anything crucial? >> Paul >> >> PS I'm ignoring the "standard metadata extensions" PEP where console >> wrappers, and post-install scripts figure. Those are probably bigger >> design issues. > > Probably similar to what I did for PEP 440. > > Start branches implementing it and try to run as much stuff through it as > possible to see how it fails. When implementing PEP 440 I was running every > version number that existed on PyPI through it to test to make sure things > would work. That's not going to be possible with PEP 426, but ideally we should > be able to get branches in the various tools that work on it, grab the latest > versions of the top N packages (or the latest versions of everything) and > compare the results. > > Another thing is determining if there's anything else we can/should split out > from PEP 426 to narrow the scope. A quick skim to refresh myself doesn't show > me anything that stands out, but some thought to this wouldn't hurt.
> > It'd also probably not hurt to go through the setuptools and pip bug trackers > to see if there are any relevant issues and see how they would be affected by > the new standard. > > PEP 426 itself isn't much in the way of groundbreaking. It's basically taking > the dynamic metadata and "compiling" it down into a static form which is JSON > based. The "easy" metadata (name etc) is a no brainer and probably doesn't > require much poking, trying out an example of an extension probably wouldn't > be the worst thing either. Even if standardizing script wrappers, for instance, > is held off, getting a demo extension using it would validate the standard. > > Potentially more thought/guidance should be mentioned in how projects should > straddle the line between legacy systems. If a new style metadata exists in a > format in addition to an old style, should the new style be preferred? Should > there be any sanity checks to make sure the two aren't completely different? > > As an aside, one thing I've had in the back of my mind, is looking at the > possibility of defining the environment markers which specify versions as > PEP 440 versions and enabling all the same comparisons with them as we get > in the specifiers. I think the string based comparison we currently do is > kinda janky and any installer already has the tooling to handle specifiers. > > Generally what I did with PEP 440, which I think worked well, is I had > everything pretty much implemented prior to it being accepted, and we were then > able to use the things I found out while implementing it to adjust the PEP > before it was formally accepted. I just didn't merge anything until that point. > This was pretty valuable in finding things where the PEP was too vague for > someone to make an independent implementation going by what was in the PEP, or > if it specified something where implementing it turned out to be > hard/problematic, etc.
> > A major reason why I'm personally focusing on Warehouse first is that > integrating with Warehouse will be easier than integrating with PyPI legacy. > However that doesn't have to block anyone else, that's just myself not wanting > to spend the time integrating a new major metadata version into the old legacy > code base. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From donald at stufft.io Mon Mar 16 21:07:04 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 16:07:04 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> <738257F9-785E-45B4-9BD4-3AB4406AA990@stufft.io> Message-ID: > On Mar 16, 2015, at 3:32 PM, Daniel Holth wrote: > > On Mon, Mar 16, 2015 at 3:19 PM, Donald Stufft wrote: >> >>> On Mar 16, 2015, at 2:53 PM, Daniel Holth wrote: >>> >>> No one should be asked to learn how to extend distutils, and in >>> practice no one knows how. >> >> Some people know how, pytest figured it out, pbr figured it out. There's >> some documentation at https://pythonhosted.org/setuptools/setuptools.html#extending-and-reusing-setuptools >> >> It is true though that it's not particularly great documentation and >> actually doing it is somewhat of an annoyance. >> >>> >>> People have been begging for years for working setup_requires, far >>> longer than I've been interested in it, and all they want to do is >>> >>> import fetch_version >>> setup(version=fetch_version(), ...) >>> >>> Then they will eventually notice setup_requires has never worked the >>> way most people expect. As a result there are too few setup.py >>> abstractions. >>> >>> The other proposal is a little bit interesting. Parse setup.py without >>> running it, extract setup_requires, and pip install locally before >>> running the file?
It would be easy as long as the setup_requires were >>> defined as a literal list in the setup() call, but you would have to >>> tell people they were not allowed to write normal clever Python code. >>> I think the gotchas would be severe? >> >> I wasn't going to try and parse the setup.py, I was going to execute it. >> >> Here's a proof of concept I started on a while back to try and validate >> the overall idea: https://github.com/pypa/pip/compare/develop...dstufft:eldritch-horror >> >> Since then I've thought about it more and decided it'd probably be better, >> instead of trying to shuffle arguments into the subprocess, to have the >> subprocess write out into a file or stdout or something what all of the >> setup_requires are. This would require executing the setup.py 3x instead >> of 2x like pip is currently doing. >> >> This would also enable people to do something like: >> >> try: >> import fetch_version >> except ImportError: >> def fetch_version(): >> return "UNKNOWN" >> >> setup(version=fetch_version(), ...) >> >> If they are happy with mandating that their thing can only be installed from >> sdist with pip newer than X, because given a three-pass installation (once >> to discover setup_requires, once to write egg_info, once to actually install) >> as long as the setup_requires list doesn't rely on anything installed then >> the first pass can have no real information except the setup_requires. >> >> It actually wouldn't even really be completely broken, it'd just have a nonsense >> version number (or whatever piece of metadata can't be located). 
It's better because it solves the second problem you mentioned ("people don't want easy_install to install things") for everything that uses setup_requires, with no effort required from package authors. And yes, it would still work in older pip if the setup requirements were installed already. It would also continue to work in setuptools/easy_install if they were installed already. It makes the existing mechanism somewhat better instead of trying to replace it with a whole new mechanism. It's also a more general solution: people who aren't willing to break older things continue to use setup_requires with delayed imports and they get pip installing things for them, while people who don't want to delay imports but are happy breaking older things can do what they want with only a minor amount of discomfort (needing to catch an import error). > > We've already talked too many times about my 34-line setup.py prefix > that does exactly what I want by parsing and installing requirements > from a config file, but people tend to complain about it not being a > pip feature. If it would help, I could have it accept a different > format, then it would be possible to publish > backwards-compatible-with-pip sdists that parsed some preferable > requirements format. If they were already installed the bw compat code > would do nothing. > https://bitbucket.org/dholth/setup-requires/src/03eda33c7681bc4102164c976e5a8cec3c4ffa9c/setup.py --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
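For readers unfamiliar with the approach, Daniel's "setup.py prefix" idea (the real 34-line version is at the Bitbucket URL above) boils down to roughly the following sketch; the file name `setup-requires.txt` and the exact structure here are assumptions for illustration, not the published implementation:

```python
import os
import subprocess
import sys

def install_setup_requires(path="setup-requires.txt"):
    """Pip-install the requirements listed in *path* before setup() runs.

    Returns True if an install was attempted, or False when the file is
    absent -- the backwards-compatible no-op case mentioned in the thread.
    (The file name is an assumption; Daniel's version reads its own config.)
    """
    if not os.path.exists(path):
        return False
    # Delegating to pip in a subprocess means wheels, proxy settings and
    # SSL settings all work, unlike easy_install's setup_requires handling.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", path])
    return True

# In a real setup.py this would run at the very top, before importing
# anything the setup requirements provide:
# install_setup_requires()
# from setuptools import setup
# setup(...)
```

Because the prefix does nothing when the requirements are already installed, an sdist carrying it stays installable by older pip versions.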
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Mon Mar 16 21:04:19 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Mar 2015 09:04:19 +1300 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: On 17 March 2015 at 04:06, Donald Stufft wrote: >> Thoughts? > > I've been thinking about this proposal this morning, and my primary question > is what exactly is the pain that is being caused right now, and how does this > proposal help it? Is the pain that setuptools is doing the installation instead > of pip? Is the pain that the dependencies are being installed into a .eggs > directory instead of globally? Is it something else? Thank you for thinking about it. > I'm hesitant to want to add another pseudo standard on top of the pile of > implementation-defined pseudo standards we already have, especially without > fully understanding what the underlying pain point actually is and how the > proposal addresses it. There are I think two major pain points: ease of use of not-already-usable code and different installation logic. Different logic: For instance, in a clean venv checkout something with setup_requires (e.g. testtools) and do pip install -e . followed by unit2. For me at least this doesn't work. It ends up installing local .eggs which then aren't actually usable as they aren't on the path when unit2 runs. Not already-usable code: See for instance https://hg.python.org/unittest2/file/8928fb47c3a9/setup.py#l13 or similar hacks everywhere. Those are the pain points. I get your concern about pseudo standards - so, what is the bar needed to put what I proposed into PEP-426 (or a new one?) 
- as previously stated and not (AFAICT) refuted, PEP-426 doesn't actually address either of these pain points today, since it requires an sdist to be buildable before its metadata is accessible. It's entirely reasonable to want whatever we do to solve developer pain to dovetail nicely with PEP-426, and in fact that was the reason I started a thread here rather than just whacking together a patch for pip :) The proposal addresses the two pain points in the following manner: Not already usable code: - by statically declaring the dependencies, no local code runs at all before they are installed. It won't solve things like 'build this local .so before xyz', but that's OK IMO. Different installation logic: - pip (or buildout or whatever) can avoid chaining into easy_install consistently and trivially, thus avoiding that Your proposal later in this thread to do a three-way dance seems more complicated than a static expression of setup requirements, and I see no reason to have dynamic *setup* requirements. Both approaches require a new pip, so the adoption curve constraints appear identical. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Mon Mar 16 21:42:55 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 16:42:55 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: > On Mar 16, 2015, at 4:04 PM, Robert Collins wrote: > > On 17 March 2015 at 04:06, Donald Stufft wrote: > >>> Thoughts? >> >> I've been thinking about this proposal this morning, and my primary question >> is what exactly is the pain that is being caused right now, and how does this >> proposal help it? Is the pain that setuptools is doing the installation instead >> of pip? Is the pain that the dependencies are being installed into a .eggs >> directory instead of globally? Is it something else? > > Thank you for thinking about it. 
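As a sketch of what Robert's "static expression of setup requirements" might look like, an installer could read a declarative list out of setup.cfg with `configparser` before ever executing setup.py. The `[bootstrap]` section and `setup_requires` key names here are purely hypothetical; no such standard existed at the time of this thread:

```python
import configparser

# A hypothetical setup.cfg fragment with statically declared setup
# requirements (section and key names are illustrative, not a standard).
SETUP_CFG = """\
[metadata]
name = example

[bootstrap]
setup_requires =
    pbr
    cffi
"""

config = configparser.ConfigParser()
config.read_string(SETUP_CFG)

# An installer could install these before running any project code at all,
# which is the "no local code runs before the dependencies are installed"
# property Robert describes.
requires = config["bootstrap"]["setup_requires"].split()
print(requires)  # -> ['pbr', 'cffi']
```

The point of the sketch is that the requirements are recoverable without executing any Turing-complete project code, unlike the `setup_requires` keyword inside setup.py.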
> >> I'm hesitant to want to add another pseudo standard on top of the pile of >> implementation-defined pseudo standards we already have, especially without >> fully understanding what the underlying pain point actually is and how the >> proposal addresses it. > > There are I think two major pain points: ease of use of > not-already-usable code and different installation logic. > > Different logic: > For instance, in a clean venv checkout something with setup_requires > (e.g. testtools) and do pip install -e . followed by unit2. For me at > least this doesn't work. It ends up installing local .eggs which then > aren't actually usable as they aren't on the path when unit2 runs. Ahhhh, wait a minute. I think something might have just clicked here. You're expecting/wanting the results of setup_requires to be installed into the environment itself and not just made available to the setup.py? That's not going to work and I'd be against making that work. For something like that I'd say it would more cleanly map to something like tests_requires (in setup.py and PEP 426) and dev_requires (in PEP 426). I think that it would be reasonable for pip to install both of those types of requirements into the environment when you're installing as an editable installation. > > Not already-usable code: > See for instance > https://hg.python.org/unittest2/file/8928fb47c3a9/setup.py#l13 or > similar hacks everywhere. > > Those are the pain points. I get your concern about pseudo standards - > so, what is the bar needed to put what I proposed into PEP-426 (or a > new one?) 
> It's entirely reasonable to want whatever we do to solve developer > pain to dovetail nicely with PEP-426, and in fact that was the reason I > started a thread here rather than just whacking together a patch for > pip :) > > The proposal addresses the two pain points in the following manner: > Not already usable code: > - by statically declaring the dependencies, no local code runs at > all before they are installed. It won't solve things like 'build this > local .so before xyz', but that's OK IMO. > Different installation logic: > - pip (or buildout or whatever) can avoid chaining into easy_install > consistently and trivially, thus avoiding that > > Your proposal later in this thread to do a three-way dance seems more > complicated than a static expression of setup requirements, and I see > no reason to have dynamic *setup* requirements. Both approaches > require a new pip, so the adoption curve constraints appear identical. So it appears there's actually two problems here; one is the one above, that you want some sort of "these are required to do development" requirements, and that setup_requires has some problems (it's inside of setup.py, and it's installed by easy_install instead of pip). Ignoring the "development requirement" problem (even though I think that's a more interesting problem!) for a moment, I think that yeah, it'd be great to specify setup_requires statically, but right now defining requirements as Python inside of setup.py is the standard we have. I'm aware that pbr routes around this standard, but that's pbr and it's hardly the norm. I think that it's worse to have a weird one-off place to specify a particular type of dependency than to continue to use the normal mechanism and add things in pip to work around the deficiencies in that. 
The other benefit to my proposal is that every existing use of setup_requires starts to get installed by pip instead of by easy_install, which solves a whole class of problems like not supporting Wheels, proxy settings, SSL settings, etc. Going back to the development requirement problem, I think it would be reasonable for setuptools to start to gain some of the concepts from PEP 426; it already has tests_requires and I think that an official dev_requires wouldn't be a bad idea either. If it then exposed those things as something pip could inspect, we could start doing things like automatically installing them inside of a development installation. This would probably even allow backwards compat by having a setup.py dynamically add things to the setup_requires based upon what version of setuptools is executing the setup.py. If it's an older one, add a shim that'll implement the new functionality as a plugin instead of part of core. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Mon Mar 16 22:01:58 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 16 Mar 2015 17:01:58 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: Yes, setup_requires means those dependencies that are needed for setup.py itself to run. On Mon, Mar 16, 2015 at 4:42 PM, Donald Stufft wrote: > >> On Mar 16, 2015, at 4:04 PM, Robert Collins wrote: >> >> On 17 March 2015 at 04:06, Donald Stufft wrote: >> >>>> Thoughts? >>> >>> I've been thinking about this proposal this morning, and my primary question >>> is what exactly is the pain that is being caused right now, and how does this >>> proposal help it? 
Is the pain that setuptools is doing the installation instead >>> of pip? Is the pain that the dependencies are being installed into a .eggs >>> directory instead of globally? Is it something else? >> >> Thank you for thinking about it. >> >>> I'm hesitant to want to add another pseudo standard on top of the pile of >>> implementation-defined pseudo standards we already have, especially without >>> fully understanding what the underlying pain point actually is and how the >>> proposal addresses it. >> >> There are I think two major pain points: ease of use of >> not-already-usable code and different installation logic. >> >> Different logic: >> For instance, in a clean venv checkout something with setup_requires >> (e.g. testtools) and do pip install -e . followed by unit2. For me at >> least this doesn't work. It ends up installing local .eggs which then >> aren't actually usable as they aren't on the path when unit2 runs. > > Ahhhh, wait a minute. I think something might have just clicked here. > > You're expecting/wanting the results of setup_requires to be installed > into the environment itself and not just made available to the setup.py? > That's not going to work and I'd be against making that work. > > For something like that I'd say it would more cleanly map to something > like tests_requires (in setup.py and PEP 426) and dev_requires (in PEP 426). > I think that it would be reasonable for pip to install both of those > types of requirements into the environment when you're installing as > an editable installation. > >> >> Not already-usable code: >> See for instance >> https://hg.python.org/unittest2/file/8928fb47c3a9/setup.py#l13 or >> similar hacks everywhere. >> >> Those are the pain points. I get your concern about pseudo standards - >> so, what is the bar needed to put what I proposed into PEP-426 (or a >> new one?) 
- as previously stated and not (AFAICT) refuted, PEP-426 >> doesn't actually address either of these pain points today, since it >> requires an sdist to be buildable before its metadata is accessible. >> It's entirely reasonable to want whatever we do to solve developer >> pain to dovetail nicely with PEP-426, and in fact that was the reason I >> started a thread here rather than just whacking together a patch for >> pip :) >> >> The proposal addresses the two pain points in the following manner: >> Not already usable code: >> - by statically declaring the dependencies, no local code runs at >> all before they are installed. It won't solve things like 'build this >> local .so before xyz', but that's OK IMO. >> Different installation logic: >> - pip (or buildout or whatever) can avoid chaining into easy_install >> consistently and trivially, thus avoiding that >> >> Your proposal later in this thread to do a three-way dance seems more >> complicated than a static expression of setup requirements, and I see >> no reason to have dynamic *setup* requirements. Both approaches >> require a new pip, so the adoption curve constraints appear identical. > > So it appears there's actually two problems here; one is the one above, > that you want some sort of "these are required to do development" > requirements, and that setup_requires has some problems (it's inside > of setup.py, and it's installed by easy_install instead of pip). > > Ignoring the "development requirement" problem (even though I think that's > a more interesting problem!) for a moment, I think that yeah, it'd be great > to specify setup_requires statically, but right now defining requirements > as Python inside of setup.py is the standard we have. I'm aware that pbr > routes around this standard, but that's pbr and it's hardly the norm. 
I > think that it's worse to have a weird one-off place to specify a particular > type of dependency than to continue to use the normal mechanism and add > things in pip to work around the deficiencies in that. > > The other benefit to my proposal is that every existing use of setup_requires > starts to get installed by pip instead of by easy_install, which solves a > whole class of problems like not supporting Wheels, proxy settings, SSL settings, > etc. > > Going back to the development requirement problem, I think it would be reasonable > for setuptools to start to gain some of the concepts from PEP 426, it already > has tests_requires and I think that an official dev_requires wouldn't be a bad > idea either. If it then exposed those things as something pip could inspect we > could start doing things like automatically installing them inside of a development > installation. This would probably even allow backwards compat by having a setup.py > dynamically add things to the setup_requires based upon what version of setuptools > is executing the setup.py. If it's an older one, add a shim that'll implement the > new functionality as a plugin instead of part of core. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From robertc at robertcollins.net Mon Mar 16 23:35:55 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Mar 2015 11:35:55 +1300 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: On 17 March 2015 at 09:42, Donald Stufft wrote: > .. > Ahhhh, wait a minute. I think something might have just clicked here. > > You're expecting/wanting the results of setup_requires to be installed > into the environment itself and not just made available to the setup.py? 
> That's not going to work and I'd be against making that work. One of the common things I do is 'python setup.py sdist bdist_wheel upload -s'. That requires setup.py to work, and is a wonky world of pain right now. Anything that depends on pip intercepting setup.py isn't going to work there, is it? Debian build environments for these things generally run without the internet, without access to a pip mirror - distributors are used to having the setup requirements be safely installable, and I'd argue that setup_requires that violate that rule have been getting pushback and selection pressure for years. So I very much doubt there is a case where installing the deps is bad. I can well imagine the lack of resolver in pip issue being a concern, but it's a concern regardless. > For something like that I'd say it would more cleanly map to something > like tests_requires (in setup.py and PEP 426) and dev_requires (in PEP 426). > I think that it would be reasonable for pip to install both of those > types of requirements into the environment when you're installing as > an editable installation. I can grok that, though I suspect it runs the risk of being over-modelled. Do we know any uses of setup_requires where installing the requirements into the build environment would do the wrong thing? E.g. is it a theoretical concern, or an omg-X-would-do-Y we've-been-bitten-before issue? .. > So it appears there's actually two problems here; one is the one above, > that you want some sort of "these are required to do development" > requirements, and that setup_requires has some problems (it's inside > of setup.py, and it's installed by easy_install instead of pip). Sure. Note too that folk need to be able to run setup.py without triggering easy_install and without running pip. That's a requirement for e.g. Debian. The way that's handled today is to have the build fail, look at it, and add build-depends lines to debian/control - the explicit metadata in Debian. 
If we had explicit metadata for Python sources (git, not dists), then folk could use that to reflect dependencies across semi-automatically (e.g. flagging new ones more clearly). > Ignoring the "development requirement" problem (even though I think that's > a more interesting problem!) for a moment, I think that yeah, it'd be great > to specify setup_requires statically, but right now defining requirements > as Python inside of setup.py is the standard we have. I'm aware that pbr > routes around this standard, but that's pbr and it's hardly the norm. I > think that it's worse to have a weird one-off place to specify a particular > type of dependency than to continue to use the normal mechanism and add > things in pip to work around the deficiencies in that. A concern I haven't expressed so far is that the route you're proposing is very clever. Clever tends to break, be hard to diagnose and hard to understand. I understand the benefit to folk for all the stale-won't-update packages out there, and perhaps we can do multiple things that all independently solve the issue. That is, what if we do explicit metadata *and* have pip do wonky magic? > The other benefit to my proposal is that every existing use of setup_requires > starts to get installed by pip instead of by easy_install, which solves a > whole class of problems like not supporting Wheels, proxy settings, SSL settings, > etc. Yep, which is beneficial on its own. But not a reason not to do explicit metadata :). > Going back to the development requirement problem, I think it would be reasonable > for setuptools to start to gain some of the concepts from PEP 426, it already > has tests_requires and I think that an official dev_requires wouldn't be a bad > idea either. If it then exposed those things as something pip could inspect we > could start doing things like automatically installing them inside of a development > installation. 
This would probably even allow backwards compat by having a setup.py > dynamically add things to the setup_requires based upon what version of setuptools > is executing the setup.py. If it's an older one, add a shim that'll implement the > new functionality as a plugin instead of part of core. Ok, so what I need to know is what: - can I do - that solves my problem (not the other 1000 problems that PEP-426 solves) - that the setuptools // pip maintainers would be willing to merge. I'm happy to put tuits into this, but I don't want to boil the ocean - I want to solve the specific thing that makes me curse my laptop screen on a regular basis. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Tue Mar 17 00:03:59 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Mar 2015 09:03:59 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: On 17 Mar 2015 02:33, "Daniel Holth" wrote: > > Problem: Users would like to be able to import stuff in setup.py. This > could be anything from a version fetcher to a replacement for > distutils itself. However, if setup.py is the only place to specify > these requirements there's a bit of a chicken-and-egg problem, unless > they have unusually good setuptools knowledge, especially if you want > to replace the entire setup() implementation. > > Problem: Having easy_install do it is not what people want and misses > some important use cases. > > Problem: Based on empirical evidence PEP 426 will never be done. Its > current purpose is to shut down discussion of pragmatic solutions. Slight correction here: one of my current aims with PEP 426 is deliberately discouraging the discussion of solutions that only work reliably if everyone switches to a new build system first. 
That's a) never going to happen; and b) one of the key mistakes the distutils2 folks made that significantly hindered adoption of their work, and I don't want us to repeat it. My other key aim is to provide a public definition of what I think "good" looks like when it comes to software distribution, so I can more easily assess whether less radical proposals are still moving us closer to that goal. Making pip (and perhaps easy_install) setup.cfg aware, such that it assumes the use of d2to1 (or a semantically equivalent tool) if setup.cfg is present and hence is able to skip invoking setup.py in relevant cases, sounds like just such a positive incremental step to me, as it increases the number of situations where pip can avoid executing a Turing complete "configuration" file, without impeding the eventual adoption of a more comprehensive solution. I don't think that needs a PEP - just an RFE against pip to make it d2to1 aware for each use case where it's relevant, like installing setup.py dependencies. (And perhaps a similar RFE against setuptools) Projects that choose to rely on that new feature will be setting a high minimum installer version for their users, but some projects will be OK with that (especially projects private to a single organisation after upgrading pip on their production systems). Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Tue Mar 17 00:24:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Mar 2015 09:24:07 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: On 17 Mar 2015 04:33, "Paul Moore" wrote: > > On 16 March 2015 at 17:14, Donald Stufft wrote: > > The bulk of the effort of pushing the standards, pip, and PyPI through is done > > by a handful of people, and of those handful I believe that the largest share > > is done by myself. That's not to toot my own horn or any such nonsense but to > > simply state the fact that the available bandwidth of people able and willing > > to work on problems is low. However the things we bless here as official are > > things which need to be able to last for a decade or more, which means that > > they do need careful consideration before we bless them. > > As a serious question - is there anything I (or indeed anyone else) > can do to make progress on PEP 426? If I'm honest, right now I don't > exactly see what tool changes are needed to change it from draft to > accepted to actually implemented. The main bottleneck where PEP 426 is concerned is me, and my current focus is on Red Hat & Project Atomic (e.g. http://connect.redhat.com/zones/containers, https://github.com/projectatomic/adb-atomic-developer-bundle) and the PSF (e.g. https://wiki.python.org/moin/PythonSoftwareFoundation/ProposalsForDiscussion/StrategicDecisionMakingProcess ) The main issue with PEP 426 is that there are a few details in the current draft that I already think are a bad idea but haven't explicitly documented anywhere, so I need to get back and address those before it makes sense for anyone to start serious work on implementing it. However, now that I know folks are keen to help with that side, I can reprioritise getting the updates done so there's a better base to start working from. 
It's also worth noting that the main tweaks I want to make are specifically related to the way semantic dependencies are proposed to be defined, so everything outside that should already be fair game for formal documentation in an updated JSON schema. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Mar 17 00:30:49 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Mar 2015 19:30:49 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: <55CAB061-47E3-4854-82AD-F3CBF56DAE27@stufft.io> > On Mar 16, 2015, at 6:35 PM, Robert Collins wrote: > > On 17 March 2015 at 09:42, Donald Stufft wrote: >> > .. >> Ahhhh, wait a minute. I think something might have just clicked here. >> >> You're expecting/wanting the results of setup_requires to be installed >> into the environment itself and not just made available to the setup.py? >> That's not going to work and I'd be against making that work. > > One of the common things I do is 'python setup.py sdist bdist_wheel > upload -s'. That requires setup.py to work, and is a wonky world of > pain right now. Anything that depends on pip intercepting setup.py > isn't going to work there, is it? Debian build environments for these > things generally run without the internet, without access to a pip > mirror - distributors are used to having the setup requirements be > safely installable, and I'd argue that setup_requires that violate > that rule have been getting pushback and selection pressure for years. > So I very much doubt there is a case where installing the deps is bad. > I can well imagine the lack of resolver in pip issue being a concern, > but it's a concern regardless. I'm not sure what solution is actually going to work here besides something like setup_requires. 
If you're executing ``python setup.py ...`` then the only "hooks" that exist (and as far as I can tell, can exist) are ones that exist now. The only real improvement I can think of is setuptools offering a separate API call to install the setup_requires so you can do something like: import setuptools setuptools.setup_requires("pbr", "otherthing") import pbr setuptools.setup(**pbr.get_metadata()) Anything that relies on parsing a setup.cfg is going to be roughly equivalent to that (or the current solution) because there's no other place to call that will make ``python setup.py`` work. Unless I'm missing something, how does having a setup_requires inside of setup.cfg help with this sequence of commands: $ git clone .... $ cd .... $ virtualenv .env $ .env/bin/python setup.py sdist > >> For something like that I'd say it would more cleanly map to something >> like tests_requires (in setup.py and PEP 426) and dev_requires (in PEP 426). >> I think that it would be reasonable for pip to install both of those >> types of requirements into the environment when you're installing as >> an editable installation. > > I can grok that, though I suspect it runs the risk of being > over-modelled. Do we know any uses of setup_requires where installing > the requirements into the build environment would do the wrong thing? > E.g. is it a theoretical concern, or an omg-X-would-do-Y > we've-been-bitten-before issue? > .. setup_requires don't just run in "build" environments, they also run any time you install from sdist in a final environment. To answer the question though, yes, there are real concerns; some of the current setuptools extensions don't play well if they are installed and available when you're installing something that *doesn't* use them. You could argue that this is a bug with those particular things, but currently some things do assume that if they are installed they are expected to be doing things to setup.py. 
Furthermore, I'm not sure that's actually possible for this to generically work: if two different setuptools extensions both want to override the same thing, only one of them can work. So if you had two things that relied on overriding the install command (or the build command, or whatever), one would "win" and the other would act as if it were not installed. > >> So it appears there's actually two problems here; one is the one above, >> that you want some sort of "these are required to do development" >> requirements, and that setup_requires has some problems (it's inside >> of setup.py, and it's installed by easy_install instead of pip). > > Sure. Note too that folk need to be able to run setup.py without > triggering easy_install and without running pip. That's a requirement > for e.g. Debian. The way that's handled today is to have the build > fail, look at it, and add build-depends lines to debian/control - the > explicit metadata in Debian. If we had explicit metadata for Python > sources (git, not dists), then folk could use that to reflect > dependencies across semi-automatically (e.g. flagging new ones more > clearly). I don't think most Debian (or Linux in general) distributions are consuming packages directly from VCS. As I understand it from following debian-python, the folks packaging OpenStack are the outliers in that. That being said, most of the work thus far has ignored the part of the process where what we have is a VCS checkout. That's because it's somewhat dependent on breaking the reference cycle where the entire toolchain only really works because we assume setuptools everywhere. 
>> I'm aware that pbr routes around this standard, but that's pbr, and it's
>> hardly the norm. I think that it's worse to have a weird one-off place
>> to specify a particular type of dependency than to continue to use the
>> normal mechanism and add things in pip to work around the deficiencies
>> in that.
>
> A concern I haven't expressed so far is that the route you're
> proposing is very clever. Clever tends to break, be hard to diagnose,
> and hard to understand. I understand the benefit to folk for all the
> stale-won't-update packages out there, and perhaps we can do multiple
> things that all independently solve the issue. That is, what if we do
> explicit metadata *and* have pip do wonky magic?

Yeah, clever is often broken; I can agree with that.

>
>> The other benefit to my proposal is that every existing use of setup_requires
>> starts to get installed by pip instead of by easy_install, which solves a
>> whole class of problems like not supporting wheels, proxy settings, SSL
>> settings, etc.
>
> Yep, which is beneficial on its own. But not a reason not to do
> explicit metadata :).

Right, explicit metadata is absolutely the end goal. My concern is about
adding more random cruft that we'll have to support forever on the path to
that.

>
>> Going back to the development requirement problem, I think it would be
>> reasonable for setuptools to start to gain some of the concepts from
>> PEP 426. It already has tests_requires, and I think an official
>> dev_requires wouldn't be a bad idea either. If it then exposed those
>> things as something pip could inspect, we could start doing things like
>> automatically installing them inside of a development installation. This
>> would probably even allow backwards compat, by having a setup.py
>> dynamically add things to the setup_requires based upon what version of
>> setuptools is executing the setup.py.
>> If it's an older one, add a shim that'll implement the new functionality
>> as a plugin instead of as part of core.
>
> Ok, so what I need to know is what:
> - can I do
> - that solves my problem (not the other 1000 problems that PEP 426 solves)
> - that the setuptools // pip maintainers would be willing to merge.
>
> I'm happy to put tuits into this, but I don't want to boil the ocean -
> I want to solve the specific thing that makes me curse my laptop
> screen on a regular basis.

I'll come back to this if you can answer the question above about how a
setup.cfg solves the ``python setup.py ...`` situation from my first
paragraph, because that's going to influence my thoughts.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Tue Mar 17 00:32:27 2015
From: donald at stufft.io (Donald Stufft)
Date: Mon, 16 Mar 2015 19:32:27 -0400
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io>
Message-ID: <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>

> On Mar 16, 2015, at 7:03 PM, Nick Coghlan wrote:
>
> On 17 Mar 2015 02:33, "Daniel Holth" wrote:
> >
> > Problem: Users would like to be able to import stuff in setup.py. This
> > could be anything from a version fetcher to a replacement for
> > distutils itself. However, if setup.py is the only place to specify
> > these requirements, there's a bit of a chicken-and-egg problem, unless
> > they have unusually good setuptools knowledge, especially if you want
> > to replace the entire setup() implementation.
> >
> > Problem: Having easy_install do it is not what people want and misses
> > some important use cases.
> >
> > Problem: Based on empirical evidence, PEP 426 will never be done.
> > Its current purpose is to shut down discussion of pragmatic solutions.
>
> Slight correction here: one of my current aims with PEP 426 is deliberately
> discouraging the discussion of solutions that only work reliably if everyone
> switches to a new build system first. That's a) never going to happen; and
> b) one of the key mistakes the distutils2 folks made that significantly
> hindered adoption of their work, and I don't want us to repeat it.
>
> My other key aim is to provide a public definition of what I think "good"
> looks like when it comes to software distribution, so I can more easily
> assess whether less radical proposals are still moving us closer to that
> goal.
>
> Making pip (and perhaps easy_install) setup.cfg aware, such that it assumes
> the use of d2to1 (or a semantically equivalent tool) if setup.cfg is present
> and hence is able to skip invoking setup.py in relevant cases, sounds like
> just such a positive incremental step to me, as it increases the number of
> situations where pip can avoid executing a Turing-complete "configuration"
> file, without impeding the eventual adoption of a more comprehensive
> solution.
>
> I don't think that needs a PEP - just an RFE against pip to make it d2to1
> aware for each use case where it's relevant, like installing setup.py
> dependencies. (And perhaps a similar RFE against setuptools.)
>
> Projects that choose to rely on that new feature will be setting a high
> minimum installer version for their users, but some projects will be OK with
> that (especially projects private to a single organisation, after upgrading
> pip on their production systems).
>
> Cheers,
> Nick.

I don't think that's going to work: if you only make pip aware of it, then
you break ``python setup.py sdist``; if you make setuptools aware of it, then
you don't need pip to be aware of it, because we'll get it for free from
setuptools being aware of it.
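To make the "setup.cfg aware" idea concrete, here is a minimal sketch of the parsing side: an installer reads a setup-requires list out of setup.cfg text before it ever runs setup.py. The ``[metadata]`` section and ``setup-requires`` option names are my own assumptions for illustration; no such standard existed at the time of this thread.

```python
# Sketch: how an installer such as pip could read a "setup-requires" list
# from setup.cfg *before* executing setup.py. The [metadata] section and
# "setup-requires" option names are assumptions, not an agreed standard.
import configparser

def read_setup_requires(cfg_text):
    """Return the requirement strings listed under setup-requires, if any."""
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    raw = parser.get("metadata", "setup-requires", fallback="")
    # Accept one requirement per line or several on one line.
    return raw.split()

example_cfg = """\
[metadata]
setup-requires =
    pbr
    otherthing
"""
print(read_setup_requires(example_cfg))  # ['pbr', 'otherthing']
```

Note this is exactly why it helps pip but not a bare ``python setup.py sdist``: only a tool that runs *before* setup.py gets a chance to act on the parsed list.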
---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From robertc at robertcollins.net Tue Mar 17 00:39:54 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 17 Mar 2015 12:39:54 +1300
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>
Message-ID: 

On 17 March 2015 at 12:32, Donald Stufft wrote:
>
> On Mar 16, 2015, at 7:03 PM, Nick Coghlan wrote:
>
> On 17 Mar 2015 02:33, "Daniel Holth" wrote:
>>
>> Problem: Users would like to be able to import stuff in setup.py. This
>> could be anything from a version fetcher to a replacement for
>> distutils itself. However, if setup.py is the only place to specify
>> these requirements, there's a bit of a chicken-and-egg problem, unless
>> they have unusually good setuptools knowledge, especially if you want
>> to replace the entire setup() implementation.
>>
>> Problem: Having easy_install do it is not what people want and misses
>> some important use cases.
>>
>> Problem: Based on empirical evidence, PEP 426 will never be done. Its
>> current purpose is to shut down discussion of pragmatic solutions.
>
> Slight correction here: one of my current aims with PEP 426 is deliberately
> discouraging the discussion of solutions that only work reliably if everyone
> switches to a new build system first. That's a) never going to happen; and
> b) one of the key mistakes the distutils2 folks made that significantly
> hindered adoption of their work, and I don't want us to repeat it.
>
> My other key aim is to provide a public definition of what I think "good"
> looks like when it comes to software distribution, so I can more easily
> assess whether less radical proposals are still moving us closer to that
> goal.
>
> Making pip (and perhaps easy_install) setup.cfg aware, such that it assumes
> the use of d2to1 (or a semantically equivalent tool) if setup.cfg is present
> and hence is able to skip invoking setup.py in relevant cases, sounds like
> just such a positive incremental step to me, as it increases the number of
> situations where pip can avoid executing a Turing-complete "configuration"
> file, without impeding the eventual adoption of a more comprehensive
> solution.
>
> I don't think that needs a PEP - just an RFE against pip to make it d2to1
> aware for each use case where it's relevant, like installing setup.py
> dependencies. (And perhaps a similar RFE against setuptools.)
>
> Projects that choose to rely on that new feature will be setting a high
> minimum installer version for their users, but some projects will be OK with
> that (especially projects private to a single organisation, after upgrading
> pip on their production systems).
>
> Cheers,
> Nick.
>
> I don't think that's going to work, because if you only make pip aware of it
> then you break ``python setup.py sdist``; if you make setuptools aware of it
> then you don't need pip to be aware of it, because we'll get it for free from
> setuptools being aware of it.

Huh?

I think the key tests are:
- what happens with old tools
- what happens with new tools

With old tools it needs to not break.
With new tools it should be better :).

Teaching pip, double-entered setup_requires (.cfg and .py):
  old tools keep working
  new tools are shiny (pip install -e / vcs, then setup's easy_install
  call short-circuits, doing nothing).
Teaching only setuptools, double-entered:
  old tools keep working
  new tools are not shiny, because pip isn't doing the install

Teaching only setuptools, single entry:
  old tools break (requirements absent, or you have a versioned dep on
  setuptools in setup.py and omg the pain)
  new tools are not shiny, same reason

Teaching setuptools and pip, single entry:
  old tools break - as above
  new tools are shiny (because pip either asks setuptools or reads
  setup.cfg, whatever)

So I think we must teach pip, and we may teach setuptools.

-Rob

--
Robert Collins
Distinguished Technologist
HP Converged Cloud

From dholth at gmail.com Tue Mar 17 00:53:12 2015
From: dholth at gmail.com (Daniel Holth)
Date: Mon, 16 Mar 2015 19:53:12 -0400
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>
Message-ID: 

Robert: is it a requirement to you that "python setup.py ..." should
install setup_requires? For me I'd be quite happy if installing the
requirements was my own problem in the absence of an installer.

I would like to start writing my setup.py like this:

setup.cfg:
    setup-requires = waf

setup.py:
    import waf
    interpret setup.py arguments
    build with waf
    don't import setuptools

On Mon, Mar 16, 2015 at 7:39 PM, Robert Collins wrote:
> On 17 March 2015 at 12:32, Donald Stufft wrote:
>>
>> On Mar 16, 2015, at 7:03 PM, Nick Coghlan wrote:
>>
>> On 17 Mar 2015 02:33, "Daniel Holth" wrote:
>>>
>>> Problem: Users would like to be able to import stuff in setup.py. This
>>> could be anything from a version fetcher to a replacement for
>>> distutils itself. However, if setup.py is the only place to specify
>>> these requirements, there's a bit of a chicken-and-egg problem, unless
>>> they have unusually good setuptools knowledge, especially if you want
>>> to replace the entire setup() implementation.
>>>
>>> Problem: Having easy_install do it is not what people want and misses
>>> some important use cases.
>>>
>>> Problem: Based on empirical evidence, PEP 426 will never be done. Its
>>> current purpose is to shut down discussion of pragmatic solutions.
>>
>> Slight correction here: one of my current aims with PEP 426 is deliberately
>> discouraging the discussion of solutions that only work reliably if everyone
>> switches to a new build system first. That's a) never going to happen; and
>> b) one of the key mistakes the distutils2 folks made that significantly
>> hindered adoption of their work, and I don't want us to repeat it.
>>
>> My other key aim is to provide a public definition of what I think "good"
>> looks like when it comes to software distribution, so I can more easily
>> assess whether less radical proposals are still moving us closer to that
>> goal.
>>
>> Making pip (and perhaps easy_install) setup.cfg aware, such that it assumes
>> the use of d2to1 (or a semantically equivalent tool) if setup.cfg is present
>> and hence is able to skip invoking setup.py in relevant cases, sounds like
>> just such a positive incremental step to me, as it increases the number of
>> situations where pip can avoid executing a Turing-complete "configuration"
>> file, without impeding the eventual adoption of a more comprehensive
>> solution.
>>
>> I don't think that needs a PEP - just an RFE against pip to make it d2to1
>> aware for each use case where it's relevant, like installing setup.py
>> dependencies. (And perhaps a similar RFE against setuptools.)
>>
>> Projects that choose to rely on that new feature will be setting a high
>> minimum installer version for their users, but some projects will be OK with
>> that (especially projects private to a single organisation, after upgrading
>> pip on their production systems).
>>
>> Cheers,
>> Nick.
>>
>> I don't think that's going to work, because if you only make pip aware of it
>> then you break ``python setup.py sdist``; if you make setuptools aware of it
>> then you don't need pip to be aware of it, because we'll get it for free from
>> setuptools being aware of it.
>
> Huh?
>
> I think the key tests are:
> - what happens with old tools
> - what happens with new tools
>
> With old tools it needs to not break.
> With new tools it should be better :).
>
> Teaching pip, double-entered setup_requires (.cfg and .py):
>   old tools keep working
>   new tools are shiny (pip install -e / vcs, then setup's easy_install
>   call short-circuits, doing nothing).
>
> Teaching only setuptools, double-entered:
>   old tools keep working
>   new tools are not shiny, because pip isn't doing the install
>
> Teaching only setuptools, single entry:
>   old tools break (requirements absent, or you have a versioned dep on
>   setuptools in setup.py and omg the pain)
>   new tools are not shiny, same reason
>
> Teaching setuptools and pip, single entry:
>   old tools break - as above
>   new tools are shiny (because pip either asks setuptools or reads
>   setup.cfg, whatever)
>
> So I think we must teach pip, and we may teach setuptools.
>
> -Rob
>
> --
> Robert Collins
> Distinguished Technologist
> HP Converged Cloud

From robertc at robertcollins.net Tue Mar 17 01:01:40 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 17 Mar 2015 13:01:40 +1300
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>
Message-ID: 

On 17 March 2015 at 12:53, Daniel Holth wrote:
> Robert: is it a requirement to you that "python setup.py ..." should
> install setup_requires?
> For me I'd be quite happy if installing the
> requirements was my own problem in the absence of an installer.
>
> I would like to start writing my setup.py like this:
>
> setup.cfg:
>     setup-requires = waf
>
> setup.py:
>     import waf
>     interpret setup.py arguments
>     build with waf
>     don't import setuptools

I've no particular thoughts on that. It would certainly avoid the pain of
easy_install being triggered.

Success criteria for my immediate personal needs:
- pip install -e . works on a clean checkout of my projects
- easy_install doesn't go and download stuff
- my setup.py can refer to things (usually the version) inside the project
  itself, safely
- python setup.py sdist bdist_wheel upload -s works after I've done
  pip install -e .

(I'm happy to unpack why I've chosen those four things, but I think there's
enough context in the thread by now.)

-Rob

--
Robert Collins
Distinguished Technologist
HP Converged Cloud

From robertc at robertcollins.net Tue Mar 17 03:36:23 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 17 Mar 2015 15:36:23 +1300
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io>
Message-ID: 

On 17 March 2015 at 12:39, Robert Collins wrote:
>> I don't think that's going to work, because if you only make pip aware of it
>> then you break ``python setup.py sdist``; if you make setuptools aware of it
>> then you don't need pip to be aware of it, because we'll get it for free from
>> setuptools being aware of it.
>
> Huh?
>
> I think the key tests are:
> - what happens with old tools
> - what happens with new tools
>
> With old tools it needs to not break.
> With new tools it should be better :).
>
> Teaching pip, double-entered setup_requires (.cfg and .py):
>   old tools keep working
>   new tools are shiny (pip install -e / vcs, then setup's easy_install
>   call short-circuits, doing nothing).
>
> Teaching only setuptools, double-entered:
>   old tools keep working
>   new tools are not shiny, because pip isn't doing the install
>
> Teaching only setuptools, single entry:
>   old tools break (requirements absent, or you have a versioned dep on
>   setuptools in setup.py and omg the pain)
>   new tools are not shiny, same reason
>
> Teaching setuptools and pip, single entry:
>   old tools break - as above
>   new tools are shiny (because pip either asks setuptools or reads
>   setup.cfg, whatever)
>
> So I think we must teach pip, and we may teach setuptools.

/me puts on the dunce hat.

What I forgot was that as soon as we clean up the hacks in setup.py files,
they will break. Duh.

OTOH, thinking about it more - Donald and I had a brief hangout to get more
bandwidth on the problem - not breaking older setuptools seems an
unnecessarily high bar:
- distutils never knew how to install software, so a setup.py that doesn't
  know how to do that is no worse than a distutils-based setup.py
- anyone running a current pip will have things work
- anyone running buildout or debian/rpm package builds etc. won't care,
  because they don't want easy_install triggered anyway, and explicitly
  gather deps themselves
- for anyone running pip behind firewalls etc. it will be no worse (because
  the chain into easy_install is already broken)

The arguably common case - folk not behind firewalls, running a slightly
not-latest pip - would be affected. But it's not deeply affected, and pip is
upgradeable everywhere :). More to the point, the choice here will be for
authors to opt in, knowing that potential impact. Folk can of course keep
the horror in place and just use the new thing to make development nicer.
So, the proposed plan going forward:

- now:
  - I will put a minimal patch up for pip into the tracker and ask for
    feedback here and there
  - we can debate at that point whether bits of it should be in setuptools
    or not, etc.
  - likewise, we can debate the use of a temporary environment versus
    installing into the target environment at that point
- future:
  - in the metabuild thing that is planned long term, handling this
    particular option will be placed within the setuptools plugin for it,
    making this not something that needs to be a 'standard'.

-Rob

--
Robert Collins
Distinguished Technologist
HP Converged Cloud

From ncoghlan at gmail.com Tue Mar 17 04:18:54 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 17 Mar 2015 13:18:54 +1000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io>
Message-ID: 

On 17 March 2015 at 09:24, Nick Coghlan wrote:
> The main bottleneck where PEP 426 is concerned is me, and my current focus
> is on Red Hat & Project Atomic (e.g.
> http://connect.redhat.com/zones/containers,
> https://github.com/projectatomic/adb-atomic-developer-bundle) and the PSF
> (e.g.
> https://wiki.python.org/moin/PythonSoftwareFoundation/ProposalsForDiscussion/StrategicDecisionMakingProcess)

I'll add in a couple of other relevant links regarding my current "not PyPA"
priorities:
https://forum.futurewise.org.au/t/what-role-should-foss-take-in-government/27
& http://community.redhat.com/blog/2015/02/the-quid-pro-quo-of-open-infrastructure/

We're helping to change the world by participating in pretty much any open
source related activity, folks, even if it may not always feel like it :)

> The main issue with PEP 426 is that there are a few details in the current
> draft that I already think are a bad idea but haven't explicitly documented
> anywhere, so I need to get back and address those before it makes sense for
> anyone to start serious work on implementing it.

Ah, I *thought* I'd filed issues for all of them; I just forgot we were
partway through migrating the draft PEPs repo to the PyPA org on GitHub.
Full set of currently open PEP 426 issues:

* https://bitbucket.org/pypa/pypi-metadata-formats/issues?status=new&status=open&component=Metadata%202.x
* https://github.com/pypa/interoperability-peps/labels/PEP%20426

So I guess "finish migrating the draft PEPs and also migrate the open
issues" qualifies as PEP 426/459 work that needs to be done.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Tue Mar 17 10:52:59 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 17 Mar 2015 09:52:59 +0000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io>
Message-ID: 

On 16 March 2015 at 23:24, Nick Coghlan wrote:
> However, now that I know folks are keen to help with that side, I can
> reprioritise getting the updates done so there's a better base to start
> working from.
The thing I struggle with over PEP 426 is that, as a data format definition
which explicitly describes itself as defining an in-memory representation,
it's not clear to me what coding tasks are needed. (I don't have enough
experience with complex build environments to help with the specification of
the metadata, so my interest is in coding tasks I can help with.)

Writing pydist.json files is explicitly deferred to the yet-to-be-written
Sdist 1.0, Wheel 1.1, and new distribution database PEPs.

The build system interface remains obscure (although there's a note of a
further PEP to define the command line interface), because we don't want to
mandate the setup.py interface, but nobody has yet come up with a better
idea. And there's not even a mention of a PEP covering something like
setup.cfg as a declarative means of providing metadata to the build system
(unless that comes under the "command line API" description).

So we're left in a situation where there are people willing to help, at
least with the coding tasks, but no obvious implementation tasks to work on.
And yet people still see problems that we expect to be fixed by "Metadata
2.0". So it does feel somewhat like a block on progress.

While I understand that there are real reasons why PEP 426 needs more work
before being finalised, is there no way that you (Nick and Donald mainly,
but honestly anyone with a picture of what a world where PEP 426 is
implemented would look like) can list some well-defined implementation tasks
that people can *get on with*? The current frustration seems to me to be
less about PEP 426 blocking progress than about nobody knowing how they can
actually help (as opposed to things like the current debate, which seems to
be rooted in the idea that while PEP 426 should "solve" Robert's issue,
nobody knows what that solution will look like, or what can be done to get
there).
Paul

From donald at stufft.io Tue Mar 17 12:05:45 2015
From: donald at stufft.io (Donald Stufft)
Date: Tue, 17 Mar 2015 07:05:45 -0400
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io>
Message-ID: <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io>

> On Mar 17, 2015, at 5:52 AM, Paul Moore wrote:
>
> On 16 March 2015 at 23:24, Nick Coghlan wrote:
>> However, now that I know folks are keen to help with that side, I can
>> reprioritise getting the updates done so there's a better base to start
>> working from.
>
> The thing I struggle with over PEP 426 is that as a data format
> definition, which explicitly describes itself as defining an in-memory
> representation, it's not clear to me what coding tasks are needed. (I
> don't have enough experience with complex build environments to help
> with the specification of the metadata, so my interest is in coding
> tasks I can help with.)
>
> Writing pydist.json files is explicitly deferred to the yet-to-be-written
> Sdist 1.0, Wheel 1.1, and new distribution database PEPs.
>
> The build system interface remains obscure (although there's a note of a
> further PEP to define the command line interface), because we don't
> want to mandate the setup.py interface, but nobody has yet come up
> with a better idea. And there's not even a mention of a PEP covering
> something like setup.cfg as a declarative means of providing metadata
> to the build system (unless that comes under the "command line API"
> description).
>
> So we're left in a situation where there are people willing to help,
> at least with the coding tasks, but no obvious implementation tasks to
> work on. And yet people still see problems that we expect to be fixed
> by "Metadata 2.0". So it does feel somewhat like a block on progress.
>
> While I understand that there are real reasons why PEP 426 needs more
> work before being finalised, is there no way that you (Nick and Donald
> mainly, but honestly anyone with a picture of what a world where PEP
> 426 is implemented would look like) can list some well-defined
> implementation tasks that people can *get on with*? The current
> frustration seems to me to be less about PEP 426 blocking progress
> than about nobody knowing how they can actually help (as opposed to
> things like the current debate, which seems to be rooted in the idea
> that while PEP 426 should "solve" Robert's issue, nobody knows what
> that solution will look like, or what can be done to get there).
>
> Paul

I would just implement it inside of Wheel. You'd technically be working on
two PEPs at once, but I think the bare-bones Wheel PEP is pretty simple:
"all the same things as the last PEP, except with pydist.json". More things
could be added to the Wheel PEP of course, but that's not related to PEP
426. Even if we don't merge the Wheel parts immediately (though we'd be
able to merge the packaging parts for an in-memory representation), it'd
still give some idea of how well it'll work. Using Wheel is a good target
because it already uses a static file, so it's less of a change to get PEP
426 integrated there.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
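As a concrete illustration of "implement it inside of Wheel", the sketch below writes a pydist.json into a throwaway directory standing in for a wheel's .dist-info directory. Only a tiny subset of the draft PEP 426 fields is shown, and the exact schema was still in flux at the time of this thread, so treat the keys as provisional.

```python
# Sketch: emit a minimal pydist.json into a wheel's .dist-info directory,
# as discussed above. The field subset follows the draft PEP 426 shape
# current at the time of this thread; the real metadata would carry much
# more, and the schema was still subject to change.
import json
import os
import tempfile

def write_pydist_json(dist_info_dir, name, version, run_requires=()):
    metadata = {
        "metadata_version": "2.0",
        "name": name,
        "version": version,
        "run_requires": [{"requires": list(run_requires)}],
    }
    path = os.path.join(dist_info_dir, "pydist.json")
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2, sort_keys=True)
    return path

# Exercise it against a temporary directory standing in for
# example-1.0.dist-info inside a wheel.
with tempfile.TemporaryDirectory() as dist_info:
    path = write_pydist_json(dist_info, "example", "1.0", ["requests>=2.0"])
    with open(path) as f:
        loaded = json.load(f)
    print(loaded["name"], loaded["version"])
```

Because the wheel format already carries static metadata files, a change like this is purely additive: an installer that doesn't know about pydist.json simply ignores the extra file.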
From ncoghlan at gmail.com Tue Mar 17 12:30:54 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 17 Mar 2015 21:30:54 +1000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: 
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io>
Message-ID: 

On 17 March 2015 at 19:52, Paul Moore wrote:
> On 16 March 2015 at 23:24, Nick Coghlan wrote:
>> However, now that I know folks are keen to help with that side, I can
>> reprioritise getting the updates done so there's a better base to start
>> working from.
>
> The thing I struggle with over PEP 426 is that as a data format
> definition, which explicitly describes itself as defining an in-memory
> representation, it's not clear to me what coding tasks are needed. (I
> don't have enough experience with complex build environments to help
> with the specification of the metadata, so my interest is in coding
> tasks I can help with.)

That's a fair request. Something I perhaps haven't been clear about is that
PEP 426 is *just* an interoperability spec - it defines the interfaces
between tools so they can share a common data model. It deliberately
*doesn't* fully define the user experience that build tools and download
tools are going to wrap around it (it places *constraints* on that
experience through some of its clauses, but still falls a long way short of
actually defining it).

The idea here is to allow developers to choose the build system *they*
like, and then have PyPI, pip, easy_install, et al. all be able to happily
consume that metadata, regardless of the specific build system used. That's
an easier bar to meet for wheel files than it is for sdists, but the long
term aim is to achieve it for both.
> Writing pydist.json files is explicitly deferred to the yet-to-be-written
> Sdist 1.0, Wheel 1.1, and new distribution database PEPs.

That's the timeline for the formal definition; in practice, "drop
pydist.json in the existing directory", as PEP 426 suggests, is a fairly
safe bet as to what those specs are going to say. The "don't rely on it
unless the container version says it's OK to do so" caution in the PEP is
primarily because wheels already ship with pydist.json metadata emitted
based on an earlier draft version of the spec (from before I gutted it and
moved the optional sections out to PEP 459 as standard extensions, which is
also the change that did the most damage when it came to invalidating the
current jsonschema definition).

On the generation side, there's thinking to be done about how the d2to1
(i.e. setup.cfg) and setuptools (i.e. setup.py) APIs for entering the new
metadata might look, taking into account that existing input files need to
continue to work, and existing output files need to continue to be
generated. Most of that can be worked through even while some specific
field names are still being tinkered with.

One option on that front may be to propose introducing a setup.yaml file as
the preferred human-facing format that both d2to1 and setuptools
understand, as I suspect some of the concepts in PEP 426 are going to be
hellishly awkward to map to the ini-style syntax of setup.cfg (in
particular, the nested format for defining conditional dependencies based
on extras and environment markers).

One particularly valuable capability is being able to take an existing
project and generate a pydist.json for it based on the existing metadata
files, without needing to change anything.
https://www.python.org/dev/peps/pep-0426/#appendix-a-conversion-notes-for-legacy-metadata
covers the current partial implementations of that feature, which all need
updating to account for subsequent changes to the spec.
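To make the "nested format for conditional dependencies" concrete, here is roughly what a draft PEP 426 style run_requires list looks like as in-memory data. Field names follow the draft as it stood around the time of this thread and were still subject to change, so this is illustrative rather than normative.

```python
# Illustration of draft PEP 426 conditional dependencies: run_requires is
# a list of entries, each optionally gated on an extra or an environment
# marker. Exact field names were still in flux; treat this as a sketch.
import json

run_requires = [
    {"requires": ["requests>=2.0"]},                # unconditional
    {"extra": "test",                               # only with the [test] extra
     "requires": ["pytest"]},
    {"environment": "python_version < '3.3'",       # only on older Pythons
     "requires": ["mock"]},
]

# This nests naturally as JSON, but is awkward to express in ini-style
# setup.cfg - which is the motivation for considering a setup.yaml.
print(json.dumps(run_requires, indent=2, sort_keys=True))
```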
One of the overarching goals here is to be able to publish this info as
static metadata on PyPI, so folks can more easily do full dependency
network analysis without having to download all of PyPI. Vinay built a
system like that based on distlib (by first downloading all of PyPI), and
one of the ideas behind PEP 426 is to let us publish that kind of info for
easier analysis even before we upgrade the actual installers to make use
of it.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Tue Mar 17 12:33:47 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 17 Mar 2015 11:33:47 +0000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io>
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io>
Message-ID: 

On 17 March 2015 at 11:05, Donald Stufft wrote:
> I would just implement it inside of Wheel. You'd technically be working
> on two PEPs at once, but I think the bare-bones Wheel PEP is pretty
> simple: "all the same things as the last PEP, except with pydist.json".
> More things could be added to the Wheel PEP of course, but that's not
> related to PEP 426. Even if we don't merge the Wheel parts (though we'd
> be able to merge the packaging parts for an in-memory representation)
> immediately, it'd still give some idea about how well it'll work. Using
> Wheel is a good target because that already uses a static file, so it's
> less of a change to get PEP 426 integrated there.

OK, cool. So bdist_wheel to write a pydist.json into the wheel, and then I
guess wheel install (and pip) will just pick it up and dump it in the
dist-info folder, because they do that anyway. Sounds easy enough.

Which only leaves the question of how users specify the metadata.
My feeling is that anything that isn't already covered by arguments to setup() should be specified declaratively. That may be in setup.cfg, but ini format may be a PITA for some of the more structured data. I can have a think about that... Thanks, I'll work on this. What it means in practice will be that projects wanting to specify Metadata 2.0 data will be able to do so if they build wheels. Nothing will use that metadata, but that's OK. My aim was to pick off an easy target and look like I was helping without having to do any of the hard jobs ;-) Paul From donald at stufft.io Tue Mar 17 12:38:14 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Mar 2015 07:38:14 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> Message-ID: <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> > On Mar 17, 2015, at 7:33 AM, Paul Moore wrote: > > On 17 March 2015 at 11:05, Donald Stufft wrote: >> I would just implement it inside of Wheel. You?d technically be working >> on two PEPs at once, but I think the bare bones Wheel PEP is pretty >> simple. ?All the same things as the last PEP, except with pydist.json?. >> More things could be added to the Wheel PEP of course, but that?s not >> related to PEP 426. Even if we don?t merge the Wheel parts (though we?d >> be able to merge the packaging parts for an in memory representation) >> immediately, it?d still give some idea about how well it?ll work. Using >> Wheel is a good target because that already uses a static file so it?s >> less of a change to get PEP 426 integrated there. > > OK, cool. So bdist_wheel to write a pydist.json into the wheel, and > then I guess wheel install (and pip) will just pick it up and dump it > in the dist-info folder because they do that anyway. Sounds easy > enough. 
I would also modify pip to start using it as part of the validation of this PEP (inside of a PR). That should "close the gap" and say "hey look we have a Proof of Concept here of this all working". > > Which only leaves the question of how users specify the metadata. My > feeling is that anything that isn't already covered by arguments to > setup() should be specified declaratively. That may be in setup.cfg, > but ini format may be a PITA for some of the more structured data. I > can have a think about that... > I would personally declare it inside of setup.py like everything else, yea setup.py is gross and unfun in 2015, however I think having two different locations for metadata inside of setuptools based on what era of spec that metadata came from is going to be super confusing for end users. What PEP 426 + The Yet to be Done Wheel Spec To Update would mean is that people can use something *other* than setuptools as their build tool for building Wheels. PEP 426 + The Yet to be Done Sdist 2.0 Spec starts paving the way for the same thing in source distributions. > Thanks, I'll work on this. What it means in practice will be that > projects wanting to specify Metadata 2.0 data will be able to do so if > they build wheels. Nothing will use that metadata, but that's OK. My > aim was to pick off an easy target and look like I was helping without > having to do any of the hard jobs ;-) > > Paul --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Mar 17 12:40:36 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Mar 2015 11:40:36 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: On 17 March 2015 at 11:30, Nick Coghlan wrote: > That's a fair request. Something I perhaps haven't been clear about is > that PEP 426 is *just* an interoperability spec - it defines the > interfaces between tools so they can share a common data model. I think that was pretty clear actually. The problem is that as an interoperability spec, it shouldn't really be blocking any work going on, and yet it does seem to - setup_requires, postinstall scripts, things like that get stalled by "when Metadata 2.0 is signed off". In reality, even a *draft* of Metadata 2.0 should be making stuff like that easier. People wanting to implement the actual behaviour should be able to write their code in terms of "some API to get the Metadata 2.0 data" and then go from there. We can add any kind of hack we want for now to provide that API, safe in the knowledge that later we can change that API without invalidating the feature. It's just that that isn't really happening at the moment. Maybe another thing to work on is a basic "get_metadata" API in the packaging library, to be that to-be-improved hack? 
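Paul's "to-be-improved hack" could start out as small as the following sketch. The function name, signature, and fallback behaviour here are assumptions for illustration, not any agreed packaging-library API:

```python
import email.parser
import json
import os
import tempfile

def get_metadata(dist_info_dir):
    # Prefer the new static pydist.json if present; otherwise fall back
    # to parsing the legacy key/value METADATA file into a minimal dict.
    pydist_path = os.path.join(dist_info_dir, "pydist.json")
    if os.path.exists(pydist_path):
        with open(pydist_path) as f:
            return json.load(f)
    with open(os.path.join(dist_info_dir, "METADATA")) as f:
        msg = email.parser.Parser().parse(f)
    return {
        "metadata_version": msg["Metadata-Version"],
        "name": msg["Name"],
        "version": msg["Version"],
    }

# Demo against a throwaway dist-info directory.
demo_dir = tempfile.mkdtemp(suffix=".dist-info")
with open(os.path.join(demo_dir, "pydist.json"), "w") as f:
    json.dump({"metadata_version": "2.0", "name": "demo", "version": "0.1"}, f)
meta = get_metadata(demo_dir)
```

The point of such a shim is that callers code against "give me the metadata dict" now, and the lookup behind it can be swapped later without invalidating the callers.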
Paul From p.f.moore at gmail.com Tue Mar 17 12:43:21 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Mar 2015 11:43:21 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> Message-ID: On 17 March 2015 at 11:38, Donald Stufft wrote: >> OK, cool. So bdist_wheel to write a pydist.json into the wheel, and >> then I guess wheel install (and pip) will just pick it up and dump it >> in the dist-info folder because they do that anyway. Sounds easy >> enough. > > I would also modify pip to start using it as part of the validation of > this PEP (inside of a PR). That should "close the gap" and say "hey look > we have a Proof of Concept here of this all working". I'm still not clear what you expect pip to *do* with the metadata. It's just data, there's no functionality specified in the PEP. >> Which only leaves the question of how users specify the metadata. My >> feeling is that anything that isn't already covered by arguments to >> setup() should be specified declaratively. That may be in setup.cfg, >> but ini format may be a PITA for some of the more structured data. I >> can have a think about that... >> > > I would personally declare it inside of setup.py like everything else, > yea setup.py is gross and unfun in 2015, however I think having two > different locations for metadata inside of setuptools based on what era > of spec that metadata came from is going to be super confusing for end > users. OK, that makes sense. 
But that involves setuptools hacking, which I'm not touching with a bargepole :-) Paul From ncoghlan at gmail.com Tue Mar 17 12:47:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Mar 2015 21:47:14 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> Message-ID: On 17 March 2015 at 21:05, Donald Stufft wrote: > >> On Mar 17, 2015, at 5:52 AM, Paul Moore wrote: >> While I understand that there are real reasons why PEP 426 needs more >> work before being finalised, is there no way that you (Nick and Donald >> mainly, but honestly anyone with a picture of what a world where PEP >> 426 is implemented would look like) can list some well-defined >> implementation tasks that people can *get on with*? The current >> frustration seems to me to be less about PEP 426 blocking progress, as >> about nobody knowing how they can actually help (as opposed to things >> like the current debate, which seems to be rooted in the idea that >> while PEP 426 should "solve" Robert's issue, nobody knows how that >> solution will look, and what can be done to get there). >> >> Paul > > I would just implement it inside of Wheel. You'd technically be working > on two PEPs at once, but I think the bare bones Wheel PEP is pretty > simple. "All the same things as the last PEP, except with pydist.json". > More things could be added to the Wheel PEP of course, but that's not > related to PEP 426. Even if we don't merge the Wheel parts (though we'd > be able to merge the packaging parts for an in memory representation) > immediately, it'd still give some idea about how well it'll work. Using > Wheel is a good target because that already uses a static file so it's > less of a change to get PEP 426 integrated there. +1 from me. 
As noted in my other message, I believe the machinery to generate it is actually already there, it's just generating it using the old format from before I moved the optional parts into extensions instead. It's actually possible we could adopt a multi-phase approach to rolling out PEP 426, such as:

Phase 0 (today): wheel generates various iterations of draft pydist.json files in wheel 1.0 format
Phase 1: PEP 426 is declared provisional (Oops, I still need to propose that update to PEP 1...)
Phase 2: PyPI extracts and publishes the still-provisional PEP 426 metadata from uploaded wheel files
Phase 3: We define wheel 1.1 to include pydist.json

At this point, we'll have mostly exercised the backwards compatibility in pydist.json, rather than the new stuff. I don't have a clear view as to how the adoption of the *new* capabilities will work, as we get the old chicken-and-egg problem of needing to update the build side and the install side at the same time to really gain from them. One possible way to go would be to have the initial pydist.json consumers be redistribution tools like pyp2rpm, while pip continues to rely solely on the old metadata files. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Tue Mar 17 12:49:11 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Mar 2015 07:49:11 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> Message-ID: <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> > On Mar 17, 2015, at 7:43 AM, Paul Moore wrote: > > On 17 March 2015 at 11:38, Donald Stufft wrote: >>> OK, cool. 
So bdist_wheel to write a pydist.json into the wheel, and >>> then I guess wheel install (and pip) will just pick it up and dump it >>> in the dist-info folder because they do that anyway. Sounds easy >>> enough. >> >> I would also modify pip to start using it as part of the validation of >> this PEP (inside of a PR). That should "close the gap" and say "hey look >> we have a Proof of Concept here of this all working". > > I'm still not clear what you expect pip to *do* with the metadata. > It's just data, there's no functionality specified in the PEP. What pip does now with metadata: look at it for dependency information when installing the Wheel, show it when doing ``pip show``, handle the Provides metadata making something "Provide" something else, show warnings for the obsoleted-by metadata, handle extensions (including failing if there is a critical extension we don't understand). I'm not actually sure what you mean by "there's no functionality specified in the PEP" because there is quite a bit of new functionality that's implicitly inside the PEP just from the new types of data it includes. > >>> Which only leaves the question of how users specify the metadata. My >>> feeling is that anything that isn't already covered by arguments to >>> setup() should be specified declaratively. That may be in setup.cfg, >>> but ini format may be a PITA for some of the more structured data. I >>> can have a think about that... >>> >> >> I would personally declare it inside of setup.py like everything else, >> yea setup.py is gross and unfun in 2015, however I think having two >> different locations for metadata inside of setuptools based on what era >> of spec that metadata came from is going to be super confusing for end >> users. > > OK, that makes sense. But that involves setuptools hacking, which I'm > not touching with a bargepole :-) Ha, that makes sense :) For a proof of concept doing whatever makes sense to you in that regard makes sense too FWIW. 
> > Paul --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Mar 17 13:33:13 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Mar 2015 12:33:13 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: On 17 March 2015 at 11:49, Donald Stufft wrote: >> I'm still not clear what you expect pip to *do* with the metadata. >> It's just data, there's no functionality specified in the PEP. > > What pip does now with metadata: look at it for dependency information when > installing the Wheel, show it when doing ``pip show``, handle the Provides > metadata making something "Provide" something else, show warnings for the > obsoleted-by metadata, handle extensions (including failing if there is a > critical extension we don't understand). Hmm, OK. At the moment that stuff (except pip show) is all covered by the running of the egg_info command, I guess. So you're saying that pip should first check if a requirement has new-style metadata and if it does, skip the egg_info command and use pydist.json. I guess that would be good - it'd solve the problems we see with numpy-related packages that need things installed just to run setup.py egg_info. It wasn't something I'd particularly considered, but thanks for the clarification. 
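The install-time behaviours Donald lists can be sketched as a single check over an in-memory pydist.json dict. The `installer_must_handle` flag follows the PEP 426 draft's extension mechanism; the other names here (function, supported set) are purely illustrative:

```python
# Sketch: refuse to install if a must-understand extension is unknown,
# then collect the unconditional dependencies for the resolver.
# Extension names and the flag name are per the PEP 426 draft; treat
# them as assumptions rather than a finalised spec.
SUPPORTED_EXTENSIONS = {"python.details", "python.commands"}

def check_installable(pydist, supported=SUPPORTED_EXTENSIONS):
    for name, ext in pydist.get("extensions", {}).items():
        if ext.get("installer_must_handle", False) and name not in supported:
            raise RuntimeError("unsupported critical extension: %s" % name)
    deps = []
    for entry in pydist.get("run_requires", []):
        # skip entries gated on an extra or an environment marker
        if "extra" not in entry and "environment" not in entry:
            deps.extend(entry.get("requires", []))
    return deps

meta = {
    "name": "demo",
    "run_requires": [
        {"requires": ["six"]},
        {"extra": "test", "requires": ["pytest"]},
    ],
    "extensions": {"python.details": {"installer_must_handle": False}},
}
deps = check_installable(meta)
```

A real installer would additionally evaluate the environment markers against the target interpreter and honour requested extras; this sketch only shows the shape of the checks.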
Paul From donald at stufft.io Tue Mar 17 13:34:05 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Mar 2015 08:34:05 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: <9DD34FC8-F2CF-435C-9E54-CDECD3A597D8@stufft.io> > On Mar 17, 2015, at 8:33 AM, Paul Moore wrote: > > On 17 March 2015 at 11:49, Donald Stufft wrote: >>> I'm still not clear what you expect pip to *do* with the metadata. >>> It's just data, there's no functionality specified in the PEP. >> >> What pip does now with metadata: look at it for dependency information when >> installing the Wheel, show it when doing ``pip show``, handle the Provides >> metadata making something "Provide" something else, show warnings for the >> obsoleted-by metadata, handle extensions (including failing if there is a >> critical extension we don't understand). > > Hmm, OK. > > At the moment that stuff (except pip show) is all covered by the > running of the egg_info command, I guess. So you're saying that pip > should first check if a requirement has new-style metadata and if it > does, skip the egg_info command and use pydist.json. I guess that > would be good - it'd solve the problems we see with numpy-related > packages that need things installed just to run setup.py egg_info. > > It wasn't something I'd particularly considered, but thanks for the > clarification. > > Paul There is no egg_info command inside of a Wheel, it's currently looking at foo.whl/foo.dist-info/METADATA for that. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Mar 17 13:36:57 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Mar 2015 12:36:57 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <9DD34FC8-F2CF-435C-9E54-CDECD3A597D8@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> <9DD34FC8-F2CF-435C-9E54-CDECD3A597D8@stufft.io> Message-ID: On 17 March 2015 at 12:34, Donald Stufft wrote: > There is no egg_info command inside of a Wheel, it's currently looking > at foo.whl/foo.dist-info/METADATA for that. Doh, of course. I thought I'd checked that, must have been looking in the wrong place (it's not a bit of the code I'm that familiar with). Paul From dholth at gmail.com Tue Mar 17 13:54:54 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 17 Mar 2015 08:54:54 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: The wheel spec itself is intentionally designed to be ignorant of the (setuptools) metadata; you don't have to read that to install the files where they belong. Wheels are also not too hard to generate without setuptools. There's a wscript in the wheel source code that can generate a wheel of bdist_wheel using waf. wwwwwwwwwww. There's also an old patch that allows the Bento build system to generate wheels. 
Of course wheel currently works by converting all the setuptools static metadata from the .egg-info directory into a different format in the .dist-info directory, after setuptools is done running. bdist_wheel generates PEP 426 data here: https://bitbucket.org/pypa/wheel/src/bdf053a70200c5857c250c2044a2d91da23db4a9/wheel/metadata.py?at=default#cl-90 All the files setup.py currently dumps into the .egg-info directory are generated by setuptools plugins. It would be neat to pull the dist-info generation out of wheel and put it in one of these plugins for setuptools. Once the plugin was installed, every setuptools package would automatically get the new file. However IIRC wheel may have needed one value that's hard to get at this point in the execution. Alias the egg-info command to dist-info; have it generate a .dist-info directory; make sure setuptools treats .dist-info about the same as .egg-info even in a source checkout. https://bitbucket.org/pypa/setuptools/src/31b56862b41ce24ffe5e28434b98fa35f34d30b4/setuptools/command/egg_info.py?at=default#cl-378 https://bitbucket.org/pypa/setuptools/src/31b56862b41ce24ffe5e28434b98fa35f34d30b4/setup.py?at=default#cl-117 Recall that setup.py is still moderately OK as a build script. It is legitimate to need software to build software. We just want the metadata it generates to always be the same, and for it to not also be the installer. setup-requires solves a different problem than pydist.json. You should be able to use setup-requires in a source checkout even if you don't have a (complete) pydist.json, install the build system; run the metadata generation phase of your build system to convert some metadata "in a file format that can have comments" to the json file; continue. On Tue, Mar 17, 2015 at 8:33 AM, Paul Moore wrote: > On 17 March 2015 at 11:49, Donald Stufft wrote: >>> I'm still not clear what you expect pip to *do* with the metadata. >>> It's just data, there's no functionality specified in the PEP. 
>> >> What pip does now with metadata: look at it for dependency information when >> installing the Wheel, show it when doing ``pip show``, handle the Provides >> metadata making something "Provide" something else, show warnings for the >> obsoleted-by metadata, handle extensions (including failing if there is a >> critical extension we don't understand). > > Hmm, OK. > > At the moment that stuff (except pip show) is all covered by the > running of the egg_info command, I guess. So you're saying that pip > should first check if a requirement has new-style metadata and if it > does, skip the egg_info command and use pydist.json. I guess that > would be good - it'd solve the problems we see with numpy-related > packages that need things installed just to run setup.py egg_info. > > It wasn't something I'd particularly considered, but thanks for the > clarification. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From dholth at gmail.com Tue Mar 17 14:01:13 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 17 Mar 2015 09:01:13 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: So for me, metadata, while fine to have, is not the blocker for "generating wheels with some other build system". Having the other build system is the blocker. Still Bento is a pretty great candidate. WAF is also promising. 
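The legacy-to-new conversion Daniel describes (wheel's metadata.py turning .egg-info style key/value fields into pydist.json) can be caricatured in a few lines. This is a toy version, not wheel's actual logic, and the output field names follow the PEP 426 draft:

```python
import email.parser

# A minimal legacy METADATA / PKG-INFO payload to convert.
LEGACY_METADATA = """\
Metadata-Version: 1.1
Name: example
Version: 1.0
Summary: An example distribution
Requires-Dist: requests (>=2.0)
Requires-Dist: click
"""

def legacy_to_pydist(text):
    # Parse the RFC 822-style key/value metadata and re-shape the few
    # fields this sketch cares about into a pydist.json-like dict.
    msg = email.parser.Parser().parsestr(text)
    return {
        "metadata_version": "2.0",
        "name": msg["Name"],
        "version": msg["Version"],
        "summary": msg["Summary"],
        "run_requires": [{"requires": msg.get_all("Requires-Dist") or []}],
    }

pydist = legacy_to_pydist(LEGACY_METADATA)
```

The real converter has to handle many more fields (extras, environment markers, Provides/Obsoletes, and the extensions), which is exactly why the appendix in PEP 426 tracks the partial implementations.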
From ncoghlan at gmail.com Tue Mar 17 14:17:01 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Mar 2015 23:17:01 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: On 17 Mar 2015 23:01, "Daniel Holth" wrote: > > So for me, metadata, while fine to have, is not the blocker for > "generating wheels with some other build system". Having the other > build system is the blocker. For me, it's not knowing what "done" looks like, even if a candidate alternative build system was available. I know pip doesn't need the whole setuptools feature set to generate wheels, but I don't know what subset it actually uses, nor what changes when cross compiling C extensions in Linux. As far as I'm aware, *nobody* actually knows the answer to that right now, so figuring it out will likely involve some code archaeology. > Still Bento is a pretty great candidate. > WAF is also promising. Agreed. This is where I think making pip d2to1 aware could be worthwhile - if we can find a way of making the emulation target for an alternate build system be d2to1 rather than the whole of setuptools it may descope the interoperability problem enough to make it easier to get started on something practical. Cheers, Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Tue Mar 17 14:23:07 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 17 Mar 2015 09:23:07 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: On Tue, Mar 17, 2015 at 9:17 AM, Nick Coghlan wrote: > > On 17 Mar 2015 23:01, "Daniel Holth" wrote: >> >> So for me, metadata, while fine to have, is not the blocker for >> "generating wheels with some other build system". Having the other >> build system is the blocker. > > For me, it's not knowing what "done" looks like, even if a candidate > alternative build system was available. I know pip doesn't need the whole > setuptools feature set to generate wheels, but I don't know what subset it > actually uses, nor what changes when cross compiling C extensions in Linux. > > As far as I'm aware, *nobody* actually knows the answer to that right now, > so figuring it out will likely involve some code archaeology. Yes, it's pretty obvious what "build a wheel with everything set to default" should look like, but I can't immediately envision "pass these standardized arguments if you need to compile a wheel for ARM while running on x86". :( From p.f.moore at gmail.com Tue Mar 17 14:33:32 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Mar 2015 13:33:32 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> <7CA191EF-0225-426F-8B9B-783107C32FA1@stufft.io> <4FF855B2-0E65-4D2A-B485-740B175B343A@stufft.io> <011A5549-BDB5-41A8-AB79-3F89936EC66B@stufft.io> Message-ID: On 17 March 2015 at 13:17, Nick Coghlan wrote: > Agreed. 
This is where I think making pip d2to1 aware could be worthwhile - > if we can find a way of making the emulation target for an alternate build > system be d2to1 rather than the whole of setuptools it may descope the > interoperability problem enough to make it easier to get started on > something practical. Could you clarify what you mean by this? I'm not sure what awareness of d2to1 is needed (I assume you're talking about the PyPI package here). Ignoring a whole lot of unpleasant details, pip's interface for building wheels is roughly '"python setup.py bdist_wheel" needs to work'. Any build system you like can implement that... (Of course the details are why you said "I know pip doesn't need the whole setuptools feature set to generate wheels, but I don't know what subset it actually uses", I understand that...) Paul From qwcode at gmail.com Tue Mar 17 16:26:48 2015 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 17 Mar 2015 08:26:48 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> Message-ID: > For instance, if the problem is "when setuptools does the install, then > things > get installed differently, with different options, SSL certs, proxies, etc" > then I think a better solution is that pip does terrible hacks in order to > forcibly take control of setup_requires from setuptools and installs them > into > a temporary directory (or something like that). That is something that > would > require no changes on the part of authors or people installing software, > and > is backwards compatible with everything that's already been published using > setup_requires. Donald, could you add a pip issue for the "forcibly take control" idea (if we don't have one already)? This comes up a fair amount, and it would be nice to be able to link to this. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Tue Mar 17 16:34:58 2015 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 17 Mar 2015 08:34:58 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: > So you *can* import things inside of a setup.py today, you just have to.... I think it's time for the Packaging User Guide to try to cover "setup_requires"... -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Tue Mar 17 21:31:53 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 17 Mar 2015 16:31:53 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <4460B069-E783-440F-9CBA-8209A413F5D4@stufft.io> Message-ID: In other setup_requires old news, a couple of years ago I did an "autosetuptools" branch of pip which would automatically install setuptools (if it was not already installed) when installing sdists. In this case you could think of setuptools as an implicit setup_requires member. Setuptools would not be installed if only wheels were being installed. It might be helpful to think of setuptools-style setup_requires differently than "must be available before setup.py can run at all" setup_requires. On Tue, Mar 17, 2015 at 11:34 AM, Marcus Smith wrote: > >> So you *can* import things inside of a setup.py today, you just have >> to.... > > I think it's time for the Packaging User Guide to try to cover > "setup_requires"... 
From chrism at plope.com Wed Mar 18 16:13:53 2015 From: chrism at plope.com (Chris McDonough) Date: Wed, 18 Mar 2015 11:13:53 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> Message-ID: <55099631.8080309@plope.com> On 03/16/2015 02:53 PM, Daniel Holth wrote: > No one should be asked to learn how to extend distutils, and in > practice no one knows how. > > People have been begging for years for working setup_requires, far > longer than I've been interested in it, and all they want to do is > > import fetch_version > setup(version=fetch_version(), ...) > > Then they will eventually notice setup_requires has never worked the > way most people expect. As a result there are too few setup.py > abstractions. FWIW, this particular use case (retrieving the version by importing it or a function that returns it after it reads a file or whatever) is dodgy. It's way better that code that needs version info inside the package consult pkg_resources or some similar system: import pkg_resources version = pkg_resources.get_distribution('mydistro').version I realize there are other use cases that setup_requires solves, and that using pkg_resources can be a performance issue. I also realize that people badly want to be able to "from mypkg import version" or "from mypkg import get_version". But I'd try to come up with a different sample use case for driving decision-making because IMO we should dissuade them from doing that. Python packaging should be able to provide them this information; they should not need to provide it themselves. 
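Chris's suggestion in runnable form, with the missing-distribution case handled. "setuptools" is queried below only because it is installed wherever pkg_resources itself is; substitute your own distribution name:

```python
import pkg_resources

def dist_version(dist_name):
    # Ask the installed-distribution metadata for the version rather
    # than importing it from the package itself.
    try:
        return pkg_resources.get_distribution(dist_name).version
    except pkg_resources.DistributionNotFound:
        return None

version = dist_version("setuptools")
missing = dist_version("no-such-distribution-xyz")
```

Note this looks up the *distribution* name (what you'd pip install), not the importable package name, which, as noted elsewhere in this thread, is not necessarily the same thing.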
- C From chris.barker at noaa.gov Wed Mar 18 16:33:03 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 18 Mar 2015 08:33:03 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> Message-ID: <-2699129433549336064@unknownmsgid> Folks, I'm having a hard time catching up with this, but maybe a few comments from someone well outside the trenches will be helpful. And note that one of the use cases I'm now wrestling with is using another package manager (conda), and setuptools is currently a PITA in that context. >> you that "python setup.py ..." should >> install setup_requires? For me I'd be quite happy if installing the >> requirements was my own problem in the absence of an installer. Yes, yes, yes! Separation of concerns is key here -- the tool that builds and/or installs one package shouldn't do ANYTHING to get me dependencies, except maybe warn that they are not there. And raise a sensible error if a build dependency is not there. Particularly for developers, they really are capable of installing dependencies. > I've no particular thoughts on that. It would certainly avoid the pain > of easy_install being triggered. Ahh! Is that why this is so painful? Not only is setuptools trying to install stuff for me, but it's using easy_install to do so? Aargh! > Success criteria for my immediate personal needs: > - pip install -e . works on a clean checkout of my projects Sure. > - easy_install doesn't go and download stuff easy_install doesn't do anything, ever! > - my setup.py can refer to things (usually the version) inside the > project itself, safely Yeah, that would be nice. A few other notes: If I have this right, this thread, and a number of other issues are triggered by the fact that setup() is not declarative -- i.e. you don't have access to the metadata until it's been run. 
But maybe we can kludge a declarative interface on top of the existing system. Something like requiring:

setup_params = a_big_dict_of_stuff_to_pass_to_setup
setup(**a_big_dict_of_stuff_to_pass_to_setup)

Code could look for that big dict before running setup. If it's not there, you don't get any new functionality.

Note that I'm wary of a completely declarative system; there ends up being a lot of stuff you don't want to hard-code, so you have to start building up a whole macro-expansion system, etc. I'd much rather simply let the user build up a python data structure however they want -- the default, simple cases would still be basic declarative hard-coding.

I suppose it's too late now, but the really painful parts of all this seem to be due to overly aggressive backward compatibility. We now have wheels, but also eggs; we now have pip, but also easy_install; etc. Perhaps it's time to restore "distribute" -- setuptools without the cruft. Folks could choose to use distribute (or maybe setuptools2) instead of setuptools, and not get the cruft. pip would, of course, still need to work with setuptools, and setuptools would have to be maintained, but it would give us a path forward out of the mess.

Another issue I don't see a way out of is that the package name that you use to ask for a package, say on PyPI, is not necessarily the name of the python package you can import. So it's really tricky to check if a package is installed independently of the package manager at hand. This is the source of my conda issues -- conda installs the dependencies, but setuptools doesn't know that, so it tries to do it again -- ouch.

Final note: setuptools has always bugged me, even though it provides some great features. I think all my struggles with it come down to a key issue: it does not make clear distinctions between what should happen at build-time vs install-time vs run-time. For example: I don't want it downloading and installing dependencies when I go to build.
That's an install-time task. I don't want it selecting versions at run time -- also an install-time task. There are others I can't recall -- but a couple years back I was bundling up an application with py2exe and py2app, and found I had to put an enormous amount of cruft in to satisfy setuptools at run time (including setuptools itself) -- it was pretty painful. And of course, using it within another package manager, like conda -- I really want it to build, and only build; I'm taking care of dependencies another way.

OK, I've had my rant!

-Chris

From p.f.moore at gmail.com Wed Mar 18 16:43:43 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 18 Mar 2015 15:43:43 +0000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: <-2699129433549336064@unknownmsgid>
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

Just a couple of comments

On 18 March 2015 at 15:33, Chris Barker - NOAA Federal wrote:
> I suppose it's too late now, but the really painful parts of all this
> seem to be due to overly aggressive backward compatibility. We now
> have wheels, but also eggs, we now have pip, but also easy_install,
> etc.

Agreed. But the problem we have here is that any system that fails to work for even a tiny proportion of packages on PyPI is a major issue. And we don't have *any* control over those packages - if they do the most insane things in their setup.py, and don't release a new version using new tools, we have to support those insane things, or deal with the bug reports. Maybe we should say "sorry, your package needs to change or we won't help", but traditionally the worst packaging arguments have started that way (see, for example, the distribute or distutils2 flamewars). People are much more positive these days, so maybe we could do something along those lines, but it's hard to test that assumption without risking the peace...
> Final note: setuptools has always bugged me, even though it provides
> some great features. I think all my struggles with it come down to a
> key issue: it does not make clear distinctions between what should
> happen at build-time vs install-time vs run-time.

Agreed entirely. It's a long slow process though to migrate away from the problems of setuptools without losing the great features at the same time...

Thanks for your thoughts!
Paul

From contact at ionelmc.ro Wed Mar 18 17:02:20 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 18 Mar 2015 18:02:20 +0200
Subject: [Distutils] setup_requires for dev environments
In-Reply-To: <-2699129433549336064@unknownmsgid>
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

On Wed, Mar 18, 2015 at 5:33 PM, Chris Barker - NOAA Federal <chris.barker at noaa.gov> wrote:

> I don't want it downloading and installing dependencies when I go to
> build. That's an install-time task.

Sounds to me like you should not use setup_requires then - if you don't like what it does.

Also, for the whole distutils-sig, I don't understand all the fuss around this much maligned feature - there are plenty of options to manage build-time dependencies and tasks - one certainly doesn't need to shoehorn a full blown build system into setup.py - there's make, invoke, shell scripts and plenty of other systems that can do that just fine. Using too many tools is bad, but misusing tools is far worse.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Wed Mar 18 17:49:34 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 18 Mar 2015 16:49:34 +0000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To:
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

On 18 March 2015 at 16:02, Ionel Cristian Mărieș wrote:
> one certainly doesn't need to shoehorn a full blown build system into
> setup.py - there's make, invoke, shell scripts and plenty of other systems
> that can do that just fine.

Just to insert a little history here, before distutils (setup.py) was invented, Python packages used all sorts of tools to build. Often shell scripts and/or make were used, which in essence meant that the packages were unusable on Windows - even if there was no need for them to be.

Distutils may be bad, but it's still far better than what it replaced :-)

Paul

From dholth at gmail.com Wed Mar 18 18:37:38 2015
From: dholth at gmail.com (Daniel Holth)
Date: Wed, 18 Mar 2015 13:37:38 -0400
Subject: [Distutils] setup_requires for dev environments
In-Reply-To:
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

On Wed, Mar 18, 2015 at 12:02 PM, Ionel Cristian Mărieș wrote:
>
> On Wed, Mar 18, 2015 at 5:33 PM, Chris Barker - NOAA Federal
> wrote:
>>
>> I don't want it downloading and installing dependencies when I go to
>> build. That's an install-time task.
>
> Sounds to me like you should not use setup_requires then - if you don't like
> what it does.
The behavior we're aiming for would be:

"installer run setup.py" - installs things
"python setup.py" - does not install things

From wichert at wiggy.net Wed Mar 18 19:55:31 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Wed, 18 Mar 2015 19:55:31 +0100
Subject: [Distutils] setup_requires for dev environments
In-Reply-To:
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

> On 18 Mar 2015, at 17:49, Paul Moore wrote:
>
> On 18 March 2015 at 16:02, Ionel Cristian Mărieș wrote:
>> one certainly doesn't need to shoehorn a full blown build system into
>> setup.py - there's make, invoke, shell scripts and plenty of other systems
>> that can do that just fine.
>
> Just to insert a little history here, before distutils (setup.py) was
> invented, Python packages used all sorts of tools to build. Often
> shell scripts and/or make were used, which in essence meant that the
> packages were unusable on Windows - even if there was no need for them
> to be.

For what it's worth I have a C++ Python module which is built using CMake, and the experience has been extremely pleasant. CMake has lots of useful documentation, while trying to figure out how to do OS and package detection to figure out the right compile and link options for distutils is an awful experience and leads to nightmares such as https://github.com/python-pillow/Pillow/blob/master/setup.py or https://github.com/lxml/lxml/blob/master/setup.py

Wichert.

From donald at stufft.io Wed Mar 18 23:11:47 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 18 Mar 2015 18:11:47 -0400
Subject: [Distutils] Finalizing PEP 440
Message-ID:

It's been ~3 months or so since PEP 440 support was released in pip and setuptools. I think at this point we've resolved any major issues with the PEP 440 spec that can be resolved without a whole new PEP.
In addition it's now been out long enough that people have adjusted processes/tooling to account for it and changing things at this stage is likely going to be more disruptive than just leaving it as it is.

Given that, I suggest that we mark the PEP as no longer provisional.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From ben+python at benfinney.id.au Thu Mar 19 03:21:42 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Thu, 19 Mar 2015 13:21:42 +1100
Subject: [Distutils] setup_requires for dev environments
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <62293AE4-83E3-4792-B299-C344B7CDC71A@stufft.io> <55099631.8080309@plope.com>
Message-ID: <858uetsp55.fsf@benfinney.id.au>

Chris McDonough writes:

> FWIW, this particular use case (retrieving the version by importing it
> or a function that returns it after it reads a file or whatever), is
> dodgy. It's way better that code that needs version info inside the
> package consult pkg_resources or some similar system:
>
> import pkg_resources
> version = pkg_resources.get_distribution('mydistro').version

That's all fine once the distribution is *installed*, and I agree "pkg_resources" is appropriate for querying the version of an already-installed Python distribution.

But the whole point here (AIUI) is that the "setup.py" is responsible for storing that information in the distribution. And "setup.py" may need to import third-party modules in order to get the version information.

For many projects, the version information is best stored in a central place and "setup.py" is just one consumer of many for that information. Getting the version information may itself need distributions installed (e.g. in my case, Docutils).
> I realize there are other use cases that setup_requires solves, and
> that using pkg_resources can be a performance issue.

The issue isn't importing "pkg_resources". The issue is generating the distribution, which "pkg_resources" can't help with.

> Python packaging should be able to provide them this information, they
> should not need to provide it themselves.

Once the distribution is installed: I agree. While generating the distribution -- the point where "setup_requires" is meant to help -- no, I disagree. We're trying to get information such that it can be fed to Distutils, since Distutils can't know until it's told.

-- 
\ "Nothing exists except atoms and empty space; everything else |
`\ is opinion." --Democritus |
_o__) |
Ben Finney

From donald at stufft.io Thu Mar 19 03:57:59 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 18 Mar 2015 22:57:59 -0400
Subject: [Distutils] JSONP: Deprecation and Intent to Remove
Message-ID:

For awhile now PyPI has supported JSONP on the /pypi/*/json API to allow people to access the JSON data in a cross origin request. JSONP is a problematic pseudo-standard which has niggly edge cases which make it hard to fully secure. Browsers have a much better standard through CORS to handle this use case.

As of now this endpoint has CORS enabled on it and any new or existing consumers of this API should switch to using CORS instead of JSONP. Warehouse will not be implementing the JSONP endpoint so when we switch PyPI to the Warehouse code base anything still relying on JSONP will break.

Thanks!

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From richard at python.org Thu Mar 19 04:06:01 2015
From: richard at python.org (Richard Jones)
Date: Thu, 19 Mar 2015 03:06:01 +0000
Subject: [Distutils] JSONP: Deprecation and Intent to Remove
In-Reply-To:
References:
Message-ID:

+1, JSONP was an interim hack solution way before CORS was an option.

On Thu, 19 Mar 2015 at 13:58 Donald Stufft wrote:
> For awhile now PyPI has supported JSONP on the /pypi/*/json API to allow
> people
> to access the JSON data in a cross origin request. JSONP is a problematic
> pseudo-standard which has niggly edge cases which make it hard to fully secure.
> Browsers have a much better standard through CORS to handle this use case.
>
> As of now this endpoint has CORS enabled on it and any new or existing
> consumers of this API should switch to using CORS instead of JSONP.
> Warehouse
> will not be implementing the JSONP endpoint so when we switch PyPI to the
> Warehouse code base anything still relying on JSONP will break.
>
> Thanks!
>
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Thu Mar 19 06:23:12 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Mar 2015 15:23:12 +1000
Subject: [Distutils] Finalizing PEP 440
In-Reply-To:
References:
Message-ID:

On 19 March 2015 at 08:11, Donald Stufft wrote:
> It's been ~3 months or so since PEP 440 support was released in pip and
> setuptools. I think at this point we've resolved any major issues with the
> PEP 440 spec that can be resolved without a whole new PEP.
> In addition it's
> now been out long enough that people have adjusted processes/tooling to account
> for it and changing things at this stage is likely going to be more disruptive
> than just leaving it as it is.
>
> Given that, I suggest that we mark the PEP as no longer provisional.

Agreed, and I've updated the PEP accordingly: https://hg.python.org/peps/rev/8d7b218a99a8

Cheers,
Nick.

P.S. /me also bumps officially defining the "provisional PEP acceptance" approach in PEP 1 even further down the todo list :)

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Mar 19 06:41:14 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Mar 2015 15:41:14 +1000
Subject: [Distutils] setup_requires for dev environments
In-Reply-To:
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

On 19 March 2015 at 02:49, Paul Moore wrote:
> On 18 March 2015 at 16:02, Ionel Cristian Mărieș wrote:
>> one certainly doesn't need to shoehorn a full blown build system into
>> setup.py - there's make, invoke, shell scripts and plenty of other systems
>> that can do that just fine.
>
> Just to insert a little history here, before distutils (setup.py) was
> invented, Python packages used all sorts of tools to build. Often
> shell scripts and/or make were used, which in essence meant that the
> packages were unusable on Windows - even if there was no need for them
> to be.
>
> Distutils may be bad, but it's still far better than what it replaced :-)

Aye, without distutils there's no fpm, pyp2rpm, etc. At least Fedora has migrated its packaging policy from invoking setup.py directly to invoking it via pip instead, but that wouldn't be possible without the commitment to make sure that everything that builds today keeps building tomorrow.
What's changed in the 16-17 years since distutils was first designed is the rise of open source usage on Windows and Mac OS X clients, together with the desire for development and data analysis focused user level package management on Linux (which major Linux distros currently tend not to provide, although some of us would like to if we can come up with a reasonable approach that keeps the long term sustaining engineering costs under control [1]).

The "simpler" packaging systems like npm, etc. get to be simpler because they're written for specific problem domains (e.g. public cloud hosted web service development for npm), so it's much easier to cover all the relevant use cases. With setuptools/distutils the use cases are just as sprawling as the use cases for Python itself, so we're trying to cover needs that range from kids tinkering on their Raspberry Pi's, to multinationals operating public cloud infrastructure, to folks writing web services, to scientists doing computational research, to financial analysts, to spooks on air-gapped networks, to industrial control systems, etc, etc, etc.

Solving the software distribution problems of any *one* niche is hard enough that you can build large profitable ecosystems on the back of them (this is why platform specific app stores are so popular), but that's a relatively simple and straightforward problem compared to figuring out how to build the backbone infrastructure that lets open source developers learn one set of software distribution tooling themselves, while still being able to relatively easily feed into all of the other downstream systems :)

Cheers,
Nick.
[1] https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Thu Mar 19 09:12:00 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 19 Mar 2015 08:12:00 +0000
Subject: [Distutils] JSONP: Deprecation and Intent to Remove
In-Reply-To:
References:
Message-ID:

On 19 March 2015 at 02:57, Donald Stufft wrote:
> For awhile now PyPI has supported JSONP on the /pypi/*/json API to allow people
> to access the JSON data in a cross origin request. JSONP is a problematic
> pseudo-standard which has niggly edge cases which make it hard to fully secure.
> Browsers have a much better standard through CORS to handle this use case.
>
> As of now this endpoint has CORS enabled on it and any new or existing
> consumers of this API should switch to using CORS instead of JSONP. Warehouse
> will not be implementing the JSONP endpoint so when we switch PyPI to the
> Warehouse code base anything still relying on JSONP will break.

For those of us who don't know (and are too lazy to google CORS :-)) could you provide an example of how to replace uses of the JSON API? For example, a script I currently use has:

url = 'https://pypi.python.org/pypi/' + args.name
req = requests.get(url + "/json")
data = req.json()
url = data['info'].get('home_page', url)

Thanks,
Paul

From donald at stufft.io Thu Mar 19 09:15:55 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 19 Mar 2015 04:15:55 -0400
Subject: [Distutils] JSONP: Deprecation and Intent to Remove
In-Reply-To:
References:
Message-ID: <2DBD16E2-287D-4232-9D20-7345D7EF4859@stufft.io>

> On Mar 19, 2015, at 4:12 AM, Paul Moore wrote:
>
> On 19 March 2015 at 02:57, Donald Stufft wrote:
>> For awhile now PyPI has supported JSONP on the /pypi/*/json API to allow people
>> to access the JSON data in a cross origin request. JSONP is a problematic
>> pseudo-standard which has niggly edge cases which make it hard to fully secure.
>> Browsers have a much better standard through CORS to handle this use case.
>>
>> As of now this endpoint has CORS enabled on it and any new or existing
>> consumers of this API should switch to using CORS instead of JSONP. Warehouse
>> will not be implementing the JSONP endpoint so when we switch PyPI to the
>> Warehouse code base anything still relying on JSONP will break.
>
> For those of us who don't know (and are too lazy to google CORS :-))
> could you provide an example of how to replace uses of the JSON API?
> For example, a script I currently use has:
>
> url = 'https://pypi.python.org/pypi/' + args.name
> req = requests.get(url + "/json")
> data = req.json()
> url = data['info'].get('home_page', url)
>
> Thanks,
> Paul

If you're using a script this doesn't affect you. JSONP and CORS are two methods for allowing the javascript on example.com to access a JSON URL on example.net. They are ways of getting around the fact that the browser doesn't generally allow cross origin requests.

JSONP is problematic for a variety of security reasons, and it exists primarily as a hack to work around the fact that browsers didn't let you make HTTP requests with javascript to another domain. CORS is the standard, supported, and secure way of doing it. It's also a heck of a lot simpler.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From p.f.moore at gmail.com Thu Mar 19 10:29:00 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 19 Mar 2015 09:29:00 +0000
Subject: [Distutils] JSONP: Deprecation and Intent to Remove
In-Reply-To: <2DBD16E2-287D-4232-9D20-7345D7EF4859@stufft.io>
References: <2DBD16E2-287D-4232-9D20-7345D7EF4859@stufft.io>
Message-ID:

On 19 March 2015 at 08:15, Donald Stufft wrote:
> If you're using a script this doesn't affect you. JSONP and CORS are two
> methods for allowing the javascript on example.com to access a JSON URL on
> example.net. They are ways of getting around the fact that the browser doesn't
> generally allow cross origin requests.
>
> JSONP is problematic for a variety of security reasons, and it exists primarily
> as a hack to work around the fact that browsers didn't let you make HTTP
> requests with javascript to another domain. CORS is the standard, supported,
> and secure way of doing it. It's also a heck of a lot simpler.

Cool, thanks for the clarification.
Paul

From leorochael at gmail.com Thu Mar 19 14:32:58 2015
From: leorochael at gmail.com (Leonardo Rochael Almeida)
Date: Thu, 19 Mar 2015 10:32:58 -0300
Subject: [Distutils] setup_requires for dev environments
In-Reply-To:
References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid>
Message-ID:

On 18 March 2015 at 14:37, Daniel Holth wrote:
> [...]
>
> The behavior we're aiming for would be:
>
> "installer run setup.py" - installs things
> "python setup.py" - does not install things

Besides that, I'd add that we're also looking for: "python setup.py" (by itself) should not raise ImportError, even if setup.py needs extra things installed for certain operations (egg_info, build, sdist, develop, install).
IMO, the biggest pain point is not people putting crazy stuff in setup.py to get version numbers. For me, the biggest pain point is when setup.py needs to import other packages in order to even know how to build:

So I'd like to suggest the following series of small improvements to both pip and setuptools:

* setuptools: `python setup.py setup_requires` dumps its setup_requires keyword in 'requirements.txt' format

It's already in this format, so should be trivial, but allows one to do something like:

$ python setup.py setup_requires > setup_requires.txt
$ pip install -r setup_requires.txt

Or in one bash line:

$ pip install -r <( python setup.py setup_requires )

* setuptools: setup.py gains the ability to accept callables in most (all?) of its parameters.

This will allow people to move all top level setup.py imports into functions, so that we can turn code like this:

from setuptools import setup, Extension
import numpy

setup(ext_modules=[
    Extension("_cos_doubles",
              sources=["cos_doubles.c", "cos_doubles.i"],
              include_dirs=[numpy.get_include()])])

Into this:

from setuptools import setup, Extension

def ext_modules():
    import numpy
    return [
        Extension("_cos_doubles",
                  sources=["cos_doubles.c", "cos_doubles.i"],
                  include_dirs=[numpy.get_include()])
    ]

setup(ext_modules=ext_modules,
      setup_requires=['numpy'])

* pip: When working with an sdist, before running "setup.py egg_info" in a sandbox, pip would run "setup.py setup_requires", install those packages in the sandbox (not in the main environment), then run "egg_info", "wheel", etc.

Notice that the changes proposed above are all backward compatible, create no additional pain, and allow developers to move all top level setup.py craziness inside functions.
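The first bullet, a `setup_requires`-dumping command, could be prototyped as an ordinary setuptools command. This is a hypothetical sketch of the idea, not existing setuptools behavior; the class name is made up, and whether the distribution object keeps `setup_requires` around as a plain attribute is an assumption, hence the defensive `getattr`:

```python
# Hypothetical "setup.py setup_requires" command: print the distribution's
# setup_requires list, one requirement per line (requirements.txt format).
from setuptools import Command

class setup_requires_cmd(Command):
    description = "print setup_requires in requirements.txt format"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # getattr() because not every setuptools version is guaranteed to
        # expose the keyword as a plain attribute on the distribution.
        for req in getattr(self.distribution, "setup_requires", None) or []:
            print(req)
```

Registered via `cmdclass={'setup_requires': setup_requires_cmd}` in a setup() call, `python setup.py setup_requires > setup_requires.txt` would then behave as proposed above.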
PS: Yes, I've already proposed something similar recently: https://mail.python.org/pipermail/distutils-sig/2015-January/025682.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Mar 19 14:57:54 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 19 Mar 2015 09:57:54 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: If that's what you want then we could say the spec was to put the requirements in setup_requires.txt, in the requirements.txt format, which pip would eventually look for and install before running setup.py On Thu, Mar 19, 2015 at 9:32 AM, Leonardo Rochael Almeida wrote: > > On 18 March 2015 at 14:37, Daniel Holth wrote: >> >> [...] >> >> The behavior we're aiming for would be: >> >> "installer run setup.py" - installs things >> "python setup.py" - does not install things > > > Besides that, I'd add that we're also looking for: "python setup.py" (by > itself) should not raise ImportError, even if setup.py needs extra things > installed for certain operations (egg_info, build, sdist, develop, install). > > IMO, the biggest pain point is not people putting crazy stuff in setup.py to > get version numbers. 
For me, the biggest pain point is when setup.py needs > to import other packages in order to even know how to build: > > So I'd like to suggest the following series of small improvements to both > pip and setuptools: > > * setuptools: `python setup.py setup_requires` dumps its setup_requires > keyword in 'requirements.txt' format > > It's is already in this format, so should be trivial, but allows one to do > something like: > > $ python setup.py setup_requires > setup_requires.txt > $ pip install -r setup_requires.txt > > Or in one bash line: > > $ pip install -r <( python setup.py setup_requires ) > > * setuptools: setup.py gains the ability to accept callables in most (all?) > of its parameters. > > This will allow people to move all top level setup.py imports into > functions, so that we can turn code like this: > > from setuptools import setup, Extension > import numpy > > setup(ext_modules=[ > Extension("_cos_doubles", > sources=["cos_doubles.c", "cos_doubles.i"], > include_dirs=[numpy.get_include()])]) > > Into this: > > from setuptools import setup, Extension > > def ext_modules(): > import numpy > return [ > Extension("_cos_doubles", > sources=["cos_doubles.c", "cos_doubles.i"], > include_dirs=[numpy.get_include()]) > ] > > setup(ext_modules=ext_modules > setup_requires=['setuptools']) > > * pip: When working with an sdist, before running "setup.py egg_info" in a > sandbox, pip would run "setup.py setup_requires", install those packages in > the sandbox (not in the main environment), then run "egg_info", "wheel", > etc. > > Notice that the changes proposed above are all backward compatible, create > no additional pain, and allow developers to move all top level setup.py > craziness inside functions. > > After that, we can consider making setup.py not call the easy_install > functionality when it finds a setup_requires keyword while running other > commands, but just report if those packages are not available. 
> > > PS: Yes, I've already proposed something similar recently: > https://mail.python.org/pipermail/distutils-sig/2015-January/025682.html From dholth at gmail.com Thu Mar 19 15:12:47 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 19 Mar 2015 10:12:47 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: ... except that there are plenty of reasons we wouldn't want the requirements.txt format, mainly because pip shouldn't automatically install concrete dependencies that contain git:// urls etc. On Thu, Mar 19, 2015 at 9:57 AM, Daniel Holth wrote: > If that's what you want then we could say the spec was to put the > requirements in setup_requires.txt, in the requirements.txt format, > which pip would eventually look for and install before running > setup.py > > On Thu, Mar 19, 2015 at 9:32 AM, Leonardo Rochael Almeida > wrote: >> >> On 18 March 2015 at 14:37, Daniel Holth wrote: >>> >>> [...] >>> >>> The behavior we're aiming for would be: >>> >>> "installer run setup.py" - installs things >>> "python setup.py" - does not install things >> >> >> Besides that, I'd add that we're also looking for: "python setup.py" (by >> itself) should not raise ImportError, even if setup.py needs extra things >> installed for certain operations (egg_info, build, sdist, develop, install). >> >> IMO, the biggest pain point is not people putting crazy stuff in setup.py to >> get version numbers. 
For me, the biggest pain point is when setup.py needs >> to import other packages in order to even know how to build: >> >> So I'd like to suggest the following series of small improvements to both >> pip and setuptools: >> >> * setuptools: `python setup.py setup_requires` dumps its setup_requires >> keyword in 'requirements.txt' format >> >> It's is already in this format, so should be trivial, but allows one to do >> something like: >> >> $ python setup.py setup_requires > setup_requires.txt >> $ pip install -r setup_requires.txt >> >> Or in one bash line: >> >> $ pip install -r <( python setup.py setup_requires ) >> >> * setuptools: setup.py gains the ability to accept callables in most (all?) >> of its parameters. >> >> This will allow people to move all top level setup.py imports into >> functions, so that we can turn code like this: >> >> from setuptools import setup, Extension >> import numpy >> >> setup(ext_modules=[ >> Extension("_cos_doubles", >> sources=["cos_doubles.c", "cos_doubles.i"], >> include_dirs=[numpy.get_include()])]) >> >> Into this: >> >> from setuptools import setup, Extension >> >> def ext_modules(): >> import numpy >> return [ >> Extension("_cos_doubles", >> sources=["cos_doubles.c", "cos_doubles.i"], >> include_dirs=[numpy.get_include()]) >> ] >> >> setup(ext_modules=ext_modules >> setup_requires=['setuptools']) >> >> * pip: When working with an sdist, before running "setup.py egg_info" in a >> sandbox, pip would run "setup.py setup_requires", install those packages in >> the sandbox (not in the main environment), then run "egg_info", "wheel", >> etc. >> >> Notice that the changes proposed above are all backward compatible, create >> no additional pain, and allow developers to move all top level setup.py >> craziness inside functions. 
>> >> After that, we can consider making setup.py not call the easy_install >> functionality when it finds a setup_requires keyword while running other >> commands, but just report if those packages are not available. >> >> >> PS: Yes, I've already proposed something similar recently: >> https://mail.python.org/pipermail/distutils-sig/2015-January/025682.html From chris.barker at noaa.gov Thu Mar 19 16:38:38 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 08:38:38 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Wed, Mar 18, 2015 at 9:02 AM, Ionel Cristian M?rie? wrote: > > On Wed, Mar 18, 2015 at 5:33 PM, Chris Barker - NOAA Federal < > chris.barker at noaa.gov> wrote: > >> I don't want it downloading and installing dependencies when I go to >> build. That's an install- time task. >> > > Sounds to me like you should not use setup_requires then - if you don't > like what it does. > My use case at the moment is trying to build conda packages from other peoples' Python packages - if they use setup_requires, etc, then I'm stuck with it. Also -- for my packages, I want them to be easy to build and deploy by others that aren't using conda -- so I need a way to do that - which would be setuptools' features. So I'd like the features of the "official" python packaging tools to cleanly separated and not assume that if you're using setuptools you are also using pip, etc.... Also, for the whole distutils-sig, I don't understand all the fuss around > this much maligned feature - there are plenty of options to manage > build-time dependencies and tasks - one certainly doesn't need to shoehorn > a full blown > build system into setup.py - there's make, invoke, shell scripts and plenty > of other systems that can do that just fine?. 
> None of those are cross platform, though. That still may be the way to go. I like to keep in mind that with all this pain, in fact, even raw distutils is freaking awesome at making the easy stuff easy. ( pip and pypi too...) i.e. I can write a simple C extension (or Cython, even), and a very simple boilerplate setup.py will let it build and install on all major platforms out of the box. Then I put it up on PyPi and anyone can do a "pip install my_package" and away they go. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu Mar 19 16:53:54 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 08:53:54 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Wed, Mar 18, 2015 at 10:37 AM, Daniel Holth wrote: > The behavior we're aiming for would be: > > "installer run setup.py" - installs things > "python setup.py" - does not install things > yup. Which, now that I look at it, is not so different than: python setup.py build # does not install anything python setup.py install # only installs the particular package pip install setup.py ( maybe not pass the setup.py directly, but maybe? ) # uses pip to find and install the dependencies. and could we get there with: python setup.py build --no-deps python setup.py install --no-deps (I'd like the no-deps flag to be the default, but that probably would have to wait for a deprecation period) None of this solves the "how to get meta-data without installing the package" problem -- which I think is what started this thread.
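One stopgap for the meta-data problem is to execute a setup.py with setuptools.setup() swapped for a recorder, so the keyword arguments can be read without installing anything. This is only a sketch, and it assumes the setup.py in question imports setup from setuptools and does nothing destructive at module level:

```python
from unittest import mock

def capture_setup_kwargs(setup_py_source):
    """Exec a setup.py's source with setuptools.setup() replaced by a
    stub that records its keyword arguments, so metadata like version
    and setup_requires can be read without installing anything."""
    captured = {}

    def record(*args, **kwargs):
        captured.update(kwargs)

    # The patch is active while the source runs, so even a
    # `from setuptools import setup` inside it picks up the stub.
    with mock.patch("setuptools.setup", record):
        exec(compile(setup_py_source, "setup.py", "exec"),
             {"__name__": "__main__"})
    return captured
```

This is the same trick build tools of the era used internally; it fails for any setup.py that does real work (downloads, compilation) at import time, which is exactly the pathology discussed in this thread.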
For that, it seems the really hacky way to get there is to establish a meta-data standard to be put in setup.py -- a bunch of standard names to be defined in the module namespace: package_version = "1.2.3" setup_requires = ["packagea", "packageb>=2.3"] ... (or maybe all in a big dict: package_meta_data = {"package_version": "1.2.3", "setup_requires": ["packagea", "packageb>=2.3"], ...} (those values would be passed in to setup() as well, of course) That way, install tools, etc, could import the setup.py, not run setup, and have access to the meta data. Of course, this would only work with packages that followed the standard, and it would be a long time until it was common, but we've got to have a direction to head to. -Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Thu Mar 19 17:12:03 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 19 Mar 2015 18:12:03 +0200 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 5:38 PM, Chris Barker wrote: > My use case at the moment is trying to build conda packages from other > peoples' Python packages - if they use setup_requires, etc, then I'm stuck > with it. Worth considering, if you can afford it, to have a local patch that you apply before building. Then you have all the necessary fixes (like remove the setup_requires) in that patch file. This is a popular approach in Debian packages - they can have all kinds of fixes for the upstream code. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed...
URL: From chris.barker at noaa.gov Thu Mar 19 17:13:01 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 09:13:01 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 6:57 AM, Daniel Holth wrote: > If that's what you want then we could say the spec was to put the > requirements in setup_requires.txt, in the requirements.txt format, > which pip would eventually look for and install before running > setup.py > yes, that would be great -- and while we are at it, put the run-time dependencies in requirements.txt too. I brought this up a while ago, and it seems that requirements.txt is for applications, and setting install_requires in the setup.py is for package dependencies. But as we've seen, this creates problems -- so why not just keep all the dependency info in an external file??? Though this would not be backward compatible with all the setup.pys out there in the wild now... -Chris > > On Thu, Mar 19, 2015 at 9:32 AM, Leonardo Rochael Almeida > wrote: > > > > On 18 March 2015 at 14:37, Daniel Holth wrote: > >> > >> [...] > >> > >> The behavior we're aiming for would be: > >> > >> "installer run setup.py" - installs things > >> "python setup.py" - does not install things > > > > > > Besides that, I'd add that we're also looking for: "python setup.py" (by > > itself) should not raise ImportError, even if setup.py needs extra things > > installed for certain operations (egg_info, build, sdist, develop, > install). > > > > IMO, the biggest pain point is not people putting crazy stuff in > setup.py to > > get version numbers. 
For me, the biggest pain point is when setup.py > needs > > to import other packages in order to even know how to build: > > > > So I'd like to suggest the following series of small improvements to both > > pip and setuptools: > > > > * setuptools: `python setup.py setup_requires` dumps its setup_requires > > keyword in 'requirements.txt' format > > > > It's is already in this format, so should be trivial, but allows one to > do > > something like: > > > > $ python setup.py setup_requires > setup_requires.txt > > $ pip install -r setup_requires.txt > > > > Or in one bash line: > > > > $ pip install -r <( python setup.py setup_requires ) > > > > * setuptools: setup.py gains the ability to accept callables in most > (all?) > > of its parameters. > > > > This will allow people to move all top level setup.py imports into > > functions, so that we can turn code like this: > > > > from setuptools import setup, Extension > > import numpy > > > > setup(ext_modules=[ > > Extension("_cos_doubles", > > sources=["cos_doubles.c", "cos_doubles.i"], > > include_dirs=[numpy.get_include()])]) > > > > Into this: > > > > from setuptools import setup, Extension > > > > def ext_modules(): > > import numpy > > return [ > > Extension("_cos_doubles", > > sources=["cos_doubles.c", "cos_doubles.i"], > > include_dirs=[numpy.get_include()]) > > ] > > > > setup(ext_modules=ext_modules > > setup_requires=['setuptools']) > > > > * pip: When working with an sdist, before running "setup.py egg_info" > in a > > sandbox, pip would run "setup.py setup_requires", install those packages > in > > the sandbox (not in the main environment), then run "egg_info", "wheel", > > etc. > > > > Notice that the changes proposed above are all backward compatible, > create > > no additional pain, and allow developers to move all top level setup.py > > craziness inside functions. 
> > > > After that, we can consider making setup.py not call the easy_install > > functionality when it finds a setup_requires keyword while running other > > commands, but just report if those packages are not available. > > > > > > PS: Yes, I've already proposed something similar recently: > > https://mail.python.org/pipermail/distutils-sig/2015-January/025682.html > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu Mar 19 17:17:19 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 09:17:19 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 9:12 AM, Ionel Cristian Mărieș wrote: > Worth considering, if you can afford it, to have a local patch that you > apply before building. Then you have all the necessary fixes (like remove > the setup_requires) in that patch file. > yup -- that's an option -- but a really painful one! I did, in fact, find an incantation that works: $PYTHON setup.py install --single-version-externally-managed --record=/tmp/record.txt but boy, is that ugly, and hard to remember. Why not a --no-deps flag? (and I have no idea what the --record thing is, or if it's even necessary... -Chris This is a popular approach in Debian packages - they can have all kinds of > fixes for the upstream code.
> > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Thu Mar 19 17:26:36 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 19 Mar 2015 18:26:36 +0200 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: The --record is for making a list of installed files. You don't need it if you don't use record.txt anywhere. As for --single-version-externally-managed, that's unrelated to your setup_requires pain - you probably already have the eggs around, so they aren't redownloaded. What --single-version-externally-managed does is force the package to install in non-egg form (as distutils would). That also means only a setup.py that uses setuptools will have the --single-version-externally-managed option available. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Thu, Mar 19, 2015 at 6:17 PM, Chris Barker wrote: > On Thu, Mar 19, 2015 at 9:12 AM, Ionel Cristian Mărieș > wrote: > >> Worth considering, if you can afford it, to have a local patch that you >> apply before building. Then you have all the necessary fixes (like remove >> the setup_requires) in that patch file. >> > > yup -- that's an option -- but a really painful one! > > I did, in fact, find an incantation that works: > > $PYTHON setup.py install --single-version-externally-managed > --record=/tmp/record.txt > > but boy, is that ugly, and hard to remember. Why not a --no-deps flag?
> > (and I have no idea what the --record thing is, or if it's even > necessary... > > -Chris > > > This is a popular approach in Debian packages - they can have all kinds of >> fixes for the upstream code. >> >> >> >> Thanks, >> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro >> > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu Mar 19 17:46:00 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 09:46:00 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Wed, Mar 18, 2015 at 8:43 AM, Paul Moore wrote: > > I suppose it's too late now, but the really painful parts of all this > > seem to be due to overly aggressive backward compatibility. We now > > have wheels, but also eggs, we now have pip, but also easy_install, > > etc. > > Agreed. But the problem we have here is that any system that fails to > work for even a tiny proportion of packages on PyPI is a major issue. > And we don't have *any* control over those packages - if they do the > most insane things in their setup.py, and don't release a new version > using new tools, we have to support those insane things, or deal with > the bug reports. > > Maybe we should say "sorry, your package needs to change or we won't > help" Indeed -- I agree that it's key to support all the old cruft -- but it's key to support that with the package manager / installation tool, i.e. pip. We want pip install to "just work" for most of the packages already on PyPi for sure.
But that doesn't mean that the newer-and-better setuptools needs to support all the same old cruft. If it were called something different (distribute? ;-) ), then folks couldn't simply replace: from setuptools import setup with from distribute import setup and be done, but they would only make that change if they wanted to make that change. Of course, then we'd be supporting both setuptools and distribute, and having to make sure that pip (and wheel) worked with both... so maybe just too much of a maintenance headache, but breaking backward compatibility gets you a way forward that keeping it does not (py3 anyone?) I suppose the greater danger is that every feature in setuptools is there because someone wanted it -- so it would be easy for the "new" thing to grow all the same cruft.... > that way (see, for example, the distribute or distutils2 flamewars). > IIRC, distribute was always imported as "setuptools" -- so born to create strife and/or accumulate all the same cruft. I guess I have no idea if there was a big problem with the architecture of setuptools requiring a big shift -- all I see are problems with the API and feature set.....and by definition you can't change those and be backward compatible... > Agreed entirely. It's a long slow process though to migrate away from > the problems of setuptools without losing the great features at the > same time... > That slog is MUCH longer and harder if you need to keep backward compatibility though. But I suppose the alternative is to build something no one uses! Is there any move to have a deprecation process for setuptools features? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From p.f.moore at gmail.com Thu Mar 19 17:56:58 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 19 Mar 2015 16:56:58 +0000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On 19 March 2015 at 16:46, Chris Barker wrote: > I guess I have no idea if there was a big problem with the architecture of > setuptools requiring a big shift -- all I see are problems with the API and > feature set.....and by definition you can't change those and be backward > compatible... The ideal situation (at least in my mind) is to define a clear and straightforward command line API that pip uses to interface with the build system. So basically if we say that all pip needs to work is "python setup.py build" or whatever, then people can write any setup.py they like, use distutils, setuptools, bento, or even just a custom script. The hard bit is being completely clear on what the setup.py invocation is required to *do*. At the moment, it's "whatever setuptools/distutils does", which is why we end up relying on all sorts of obscure setuptools behaviours. > That slog is MUCH longer and harder if you need to keep backward > compatibility though. > > But I suppose the alternative is to build something no one uses! > > Is there any move to have a deprecation process for setuptools features? I have no idea, unfortunately. Setuptools has its own goals and plans, that I don't particularly follow. I get the impression that maintaining existing behaviour is a fairly high priority, though - there are a *lot* of projects that use all sorts of obscure parts of setuptools, so backward compatibility is probably pretty vital to them. 
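The frontend side of such a pinned-down command line API could be as small as the sketch below. The two-command interface shown is purely hypothetical -- no such contract between pip and build systems had been agreed at this point -- but it illustrates the shape: the installer only ever shells out to an agreed set of commands and never imports the build code itself.

```python
import subprocess
import sys

# The hypothetical agreed-upon commands a frontend may invoke.
SUPPORTED_COMMANDS = ("build", "install")

def run_build_step(source_dir, command):
    """Run one agreed setup.py command in a subprocess, the way a
    frontend like pip could if the interface were standardized.
    Any setup.py -- distutils, setuptools, bento, or a plain
    script -- would satisfy the contract."""
    if command not in SUPPORTED_COMMANDS:
        raise ValueError("not part of the agreed interface: %r" % (command,))
    result = subprocess.run(
        [sys.executable, "setup.py", command],
        cwd=source_dir, capture_output=True, text=True,
    )
    return result.returncode, result.stdout
```

The hard part, as noted above, is specifying what each command must *do*, not how it is invoked.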
Paul From chris.barker at noaa.gov Thu Mar 19 17:56:14 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 09:56:14 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 9:26 AM, Ionel Cristian Mărieș wrote: > The --record is for making a list of installed files. You don't need it if > you don't use record.txt anywhere. > thanks -- I'll take that out... This was a cut and paste from the net after much frustration -- once I got something that worked, I decided I was done -- I had no energy for figuring out why it worked... > As for --single-version-externally-managed, that's unrelated to your > setup_requires pain - you probably already have the eggs around, so they > aren't redownloaded. > well, what conda does to build a package is create a whole new empty environment, then install the dependencies (itself, without pip or easy_install, or...), then runs setup.py install (for python packages anyway). In this case, that step failed, or got ugly, anyway, as setuptools didn't think the dependent packages were installed, so tried to install them itself -- maybe that's because the dependency wasn't installed as an egg? I can't recall at the moment whether that failed (I think so, but not sure why), but I certainly didn't want all those eggs re-installed.
Thanks -- that does make things a bit more clear! -CHB > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > > On Thu, Mar 19, 2015 at 6:17 PM, Chris Barker > wrote: > >> On Thu, Mar 19, 2015 at 9:12 AM, Ionel Cristian Mărieș < >> contact at ionelmc.ro> wrote: >> >>> Worth considering, if you can afford it, to have a local patch that >>> you apply before building. Then you have all the necessary fixes (like >>> remove the setup_requires) in that patch file. >>> >> >> yup -- that's an option -- but a really painful one! >> >> I did, in fact, find an incantation that works: >> >> $PYTHON setup.py install --single-version-externally-managed >> --record=/tmp/record.txt >> >> but boy, is that ugly, and hard to remember. Why not a --no-deps flag? >> >> (and I have no idea what the --record thing is, or if it's even >> necessary... >> >> -Chris >> >> >> This is a popular approach in Debian packages - they can have all kinds >>> of fixes for the upstream code. >>> >>> >>> >>> Thanks, >>> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro >>> >> >> >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim at tim-smith.us Thu Mar 19 17:45:23 2015 From: tim at tim-smith.us (Tim Smith) Date: Thu, 19 Mar 2015 09:45:23 -0700 Subject: [Distutils] setup_requires for dev environments Message-ID: On Thu, Mar 19, 2015 at 7:12 AM, Daniel Holth wrote: > So I'd like to suggest the following series of small improvements to both > pip and setuptools: > > * setuptools: `python setup.py setup_requires` dumps its setup_requires > keyword in 'requirements.txt' format > > It's already in this format, so should be trivial, but allows one to do > something like: > > $ python setup.py setup_requires > setup_requires.txt > $ pip install -r setup_requires.txt > > Or in one bash line: > > $ pip install -r <( python setup.py setup_requires ) > A way of learning about setup_requires dependencies would be helpful for homebrew-pypi-poet [1], which helps generate Homebrew formulae for applications implemented in Python. Homebrew prefers to specify all dependencies explicitly in the formula rather than allowing easy_install or pip to perform dependency resolution at formula install time. [2] We use installed metadata to learn about install_requires dependencies but setup_requires dependencies aren't captured AFAIK (are they?) so formula authors have to identify those by hand. This isn't critically painful for me but I thought it was an interesting example use case. Tim [1] https://github.com/tdsmith/homebrew-pypi-poet [2] As Chris Barker notes, --single-version-externally-managed is a good way to get setuptools-based setup.py's to just install the package; --single-version-externally-managed hands the install process over to distutils. Of course, distutils setup.py's do not support the --single-version-externally-managed option.
To have a consistent CLI interface, Homebrew borrows a shim from pip's source to make sure we always call setuptools.setup() when we run setup.py so that we can hand the install back to distutils: https://github.com/Homebrew/homebrew/blob/master/Library/Homebrew/language/python.rb#L78-L94 -- thanks to Donald for pointing to the right place in the pip code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Mar 19 19:40:20 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 19 Mar 2015 14:40:20 -0400 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: The reason you should not have to run setup.py to dump out the setup-requires is that, in the what-people-tend-to-expect definition, setup.py cannot run without those requirements being installed first. There is a similar problem with putting setup-requires in the PEP 426 Metadata. As long as we have setup.py we are going to have to generate the PEP 426 metadata from setup.py itself for any legacy package or any package where setup.py cannot be overhauled. One option would be to have a partial PEP 426 metadata for just the setup-requires which would be overwritten into a .dist-info directory by the data from setup.py itself when that runs. On Thu, Mar 19, 2015 at 12:56 PM, Chris Barker wrote: > On Thu, Mar 19, 2015 at 9:26 AM, Ionel Cristian Mărieș > wrote: >> >> The --record is for making a list of installed files. You don't need it if >> you don't use record.txt anywhere. > > > thanks -- I'll take that out... This was a cut and paste from the net after > much frustration -- once I got something that worked, I decided I was done > -- I had no energy for figuring out why it worked...
> >> >> As for --single-version-externally-managed, that's unrelated to your >> setup_requires pain - you probably already have the eggs around, so they >> aren't redownloaded. > > > well, what conda does to build a package is create a whole new empty > environment, then install the dependencies (itself, without pip or > easy_install, or...), then runs setup.py install (for python packages > anyway). In this case, that step failed, or got ugly, anyway, as setuptools > didn't think the dependent packages were installed, so tried to install them > itself -- maybe that's because the dependency wasn't installed as an egg? > > I can't recall at the moment whether that failed (I think so, but not sure > why), but I certainly didn't want all those eggs re-installed. > >> >> What --single-version-externally-managed does is force the package to >> install in non-egg form (as distutils would). > > > hmm -- interesting -- this really was a dependency issue -- so it must > change _something_ about how it looks for dependencies... > >> >> That also means only a setup.py that uses setuptools will have the >> --single-version-externally-managed option available. > > > yup -- so I need to tack that on when needed, and can't just do it for all > python packages... > > Thanks -- that does make things a bit more clear! > > -CHB > > > >> >> >> Thanks, >> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro >> >> On Thu, Mar 19, 2015 at 6:17 PM, Chris Barker >> wrote: >>> >>> On Thu, Mar 19, 2015 at 9:12 AM, Ionel Cristian Mărieș >>> wrote: >>>> >>>> Worth considering, if you can afford it, to have a local patch that you >>>> apply before building. Then you have all the necessary fixes (like remove >>>> the setup_requires) in that patch file. >>> >>> >>> yup -- that's an option -- but a really painful one!
>>> >>> I did, in fact, find an incantation that works: >>> >>> $PYTHON setup.py install --single-version-externally-managed >>> --record=/tmp/record.txt >>> >>> but boy, is that ugly, and hard to remember. Why not a --no-deps flag? >>> >>> (and I have no idea what the --record thing is, or if it's even >>> necessary... >>> >>> -Chris >>> >>> >>>> This is a popular approach in Debian packages - they can have all kinds >>>> of fixes for the upstream code. >>>> >>>> >>>> >>>> Thanks, >>>> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro >>> >>> >>> >>> >>> -- >>> >>> Christopher Barker, Ph.D. >>> Oceanographer >>> >>> Emergency Response Division >>> NOAA/NOS/OR&R (206) 526-6959 voice >>> 7600 Sand Point Way NE (206) 526-6329 fax >>> Seattle, WA 98115 (206) 526-6317 main reception >>> >>> Chris.Barker at noaa.gov >> >> > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From tim at tim-smith.us Thu Mar 19 20:06:48 2015 From: tim at tim-smith.us (Tim Smith) Date: Thu, 19 Mar 2015 12:06:48 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: Message-ID: Apologies: I misattributed Leonardo Rochael Almeida's comments to Daniel Holth when I was cutting and pasting from my digest. On Thu, Mar 19, 2015 at 9:45 AM, Tim Smith wrote: > On Thu, Mar 19, 2015 at 7:12 AM, Daniel Holth wrote: > >> So I'd like to suggest the following series of small improvements to both >> pip and setuptools: >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Thu Mar 19 21:38:29 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 20 Mar 2015 06:38:29 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On 19 Mar 2015 23:33, "Leonardo Rochael Almeida" wrote: > > > On 18 March 2015 at 14:37, Daniel Holth wrote: >> >> [...] >> >> The behavior we're aiming for would be: >> >> "installer run setup.py" - installs things >> "python setup.py" - does not install things > > > Besides that, I'd add that we're also looking for: "python setup.py" (by itself) should not raise ImportError, even if setup.py needs extra things installed for certain operations (egg_info, build, sdist, develop, install). > > IMO, the biggest pain point is not people putting crazy stuff in setup.py to get version numbers. For me, the biggest pain point is when setup.py needs to import other packages in order to even know how to build: > > So I'd like to suggest the following series of small improvements to both pip and setuptools: > > * setuptools: `python setup.py setup_requires` dumps its setup_requires keyword in 'requirements.txt' format I believe setuptools can already do this (as "setup-requirements.txt"), but it's a generated file that people tend not to check into source control. Saying that file *should* be checked into source control (and teaching pip about it when looking for dependencies) might be a reasonable improvement - CPython certainly checks in several generated files to reduce the number of tools needed to build CPython in the typical case. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Thu Mar 19 21:51:49 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 13:51:49 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 9:56 AM, Chris Barker wrote: > On Thu, Mar 19, 2015 at 9:26 AM, Ionel Cristian M?rie? > wrote: > >> The --record is for making a list of installed files. You don't need it >> if you don't use record.txt anywhere. >> > > thanks -- I"ll take that out... > Actually, I took that out, and got: running install error: You must specify --record or --root when building system packages so it's needed I guess. By the way, the error I get if I do a raw setup.py install is: """ RuntimeError: Setuptools downloading is disabled in conda build. Be sure to add all dependencies in the meta.yaml url= https://pypi.python.org/simple/petulant-bear/r Command failed: /bin/bash -x -e /Users/chris.barker/PythonStuff/IOOS_packages/conda-recipes/wicken/build.sh """ so setuptools is trying to install petulant-bear, but conda has disables that. But it is, in fact installed, conda having done that to prepare the environment. So this is why I just want to tell setuptools to not try to download and install dependencies But we're getting off topic here -- should probably put in a feature request for "--no-deps" for install and build commands. -CHB > This was a cut and paste form teh net after much frustration -- once I got > somethign that worked, I decided I was done -- I had no energy for figuring > out why it worked... > > >> As for --single-version-externally-managed, that's unrelated to your >> setup_requires pain - you probably already have the eggs around, so they >> aren't redownloaded. 
>> > > well, what conda does to build a package is create a whole new empty > environment, then install the dependencies (itself, without pip or > easy_install, or...), then runs setup.py install (for python packages > anyway). In this case, that step failed, or got ugly, anyway, as setuptools > didn't think the dependent packages were installed, so tried to install > them itself -- maybe that's because the dependency wasn't installed as an > egg? > > I can't recall at the moment whether that failed (I think so, but not sure > why), but I certainly didn't want all those eggs re-installed. > > >> What --single-version-externally-managed does is force the package to >> install in non-egg form (as distutils would). >> > > hmm -- interesting -- this really was a dependency issue -- so it must > change _something_ about how it looks for dependencies... > > >> That also means only setup.py that uses setuptools will have the >> --single-version-externally-managed option available. >> > > yup -- so I need to tack that on when needed, and can't just do it for all > python packages... > > Thanks -- that does make things a bit more clear! > > -CHB > > > > >> >> Thanks, >> -- Ionel Cristian M?rie?, http://blog.ionelmc.ro >> >> On Thu, Mar 19, 2015 at 6:17 PM, Chris Barker >> wrote: >> >>> On Thu, Mar 19, 2015 at 9:12 AM, Ionel Cristian M?rie? < >>> contact at ionelmc.ro> wrote: >>> >>>> ?Worth considering?, if you can afford it, to have a local patch that >>>> you apply before building. Then you have all the necessary fixes (like >>>> remove the setup_requires) in that patch file. >>>> >>> >>> yup -- that's a option -- but a really painful one! >>> >>> I did, in fact, find an incantation that works: >>> >>> $PYTHON setup.py install --single-version-externally-managed >>> --record=/tmp/record.txt >>> >>> but boy, is that ugly, and hard to remember why not a --no-deps flag? >>> >>> (and I have no idea what the --record thing is, or if it's even >>> neccessary... 
>>> >>> -Chris >>> >>> >>> This is a popular approach in Debian packages - they can have all kinds >>>> of fixes for the upstream code. >>>> >>>> >>>> >>>> Thanks, >>>> -- Ionel Cristian M?rie?, http://blog.ionelmc.ro >>>> >>> >>> >>> >>> -- >>> >>> Christopher Barker, Ph.D. >>> Oceanographer >>> >>> Emergency Response Division >>> NOAA/NOS/OR&R (206) 526-6959 voice >>> 7600 Sand Point Way NE (206) 526-6329 fax >>> Seattle, WA 98115 (206) 526-6317 main reception >>> >>> Chris.Barker at noaa.gov >>> >> >> > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Fri Mar 20 00:07:08 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 20 Mar 2015 01:07:08 +0200 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On Thu, Mar 19, 2015 at 10:38 PM, Nick Coghlan wrote: > I believe setuptools can already do this (as "setup-requirements.txt"), > but it's a generated file that people tend not to check into source control. ?Isn't that just some project's convention - they just read it up ?in setup.py? Setuptools doesn't do anything with it by itself. Also, if pip were to support a setup-requirements.txt, should setuptools also support that natively? What about "repository url" dependencies? 
Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Mar 20 00:50:09 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 19 Mar 2015 16:50:09 -0700 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: Message-ID: On Thu, Mar 19, 2015 at 9:45 AM, Tim Smith wrote: > A way of learning about setup_requires dependencies would be helpful for > homebrew-pypi-poet [1], which helps generate Homebrew formulae for > applications implemented in Python. > Indeed -- conda is similar -- it provides a "conda skelton pypi" command, that grabs a package from pypi and (tries to) create a conda build setup for it. similarly to brew, the intent is to capture and handle the dependencies with conda's system. I don't have anything to do with the development, but I _think_ it actually builds the package in order to then extract the dependency meta-data -- it would be nice to not do that. It actually succeeds with a lot of packages without any hand-editing after the fact, to it's not so bad! > As Chris Barker notes, --single-version-externally-managed is a good way > to get setuptools-based setup.py's to just install the package; > --single-version-externally-managed hands the install process over to > distutils. Of course, distutils setup.py's do not support the > --single-version-externally-managed option. > yeah, conda jsut uses plain "setup.py install" by dfault, you have to go in and add --single-version-externally-managed by hand to the build script. Maybe it would be better to add that automatically, and let the few packages that don't use setuptools remove it by hand... 
-CHB > To have a consistent CLI interface, Homebrew borrows a shim from pip's > source to make sure we always call setuptools.setup() when we run setup.py > so that we can hand the install back to distutils: > https://github.com/Homebrew/homebrew/blob/master/Library/Homebrew/language/python.rb#L78-L94 > -- thanks to Donald for pointing to the right place in the pip code. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Mar 20 10:19:52 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 20 Mar 2015 19:19:52 +1000 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> <-2699129433549336064@unknownmsgid> Message-ID: On 20 Mar 2015 09:07, "Ionel Cristian M?rie?" wrote: > > > On Thu, Mar 19, 2015 at 10:38 PM, Nick Coghlan wrote: >> >> I believe setuptools can already do this (as "setup-requirements.txt"), but it's a generated file that people tend not to check into source control. > > > ?Isn't that just some project's convention - they just read it up ?in setup.py? Setuptools doesn't do anything with it by itself. I mean the setuptools feature that writes "setup_requires.txt" to the metadata directory so you can read it without running setup.py again. 
However looking at https://pythonhosted.org/setuptools/history.html shows that both times it has been added (8.4 and 12.4) the feature has had to be reverted due to breaking upgrades from earlier versions :( As long as setuptools lacks the ability to generate that file, I suspect this discussion will remain largely theoretical. Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.andrew.straw at gmail.com Fri Mar 20 14:16:40 2015 From: dr.andrew.straw at gmail.com (Andrew Straw) Date: Fri, 20 Mar 2015 14:16:40 +0100 Subject: [Distutils] Building deb packages for pypy In-Reply-To: <54F52431.8030400@vrt.com.au> References: <54F52431.8030400@vrt.com.au> Message-ID: Dear Stuart, I just spent a little time unsuccessfully trying to get stdeb to build .debs for pypy on the the pybuild-dev branch. If you want to play around, that's where I'd start. Perhaps it's something simple I overlooked. Best, Andrew On Tue, Mar 3, 2015 at 4:02 AM, Stuart Longland wrote: > Hi all, > > I'm currently attempting to evaluate pypy for use in a few > performance-critical projects. We mainly use Debian or Ubuntu as the > host platform, with our packages provided as debs. > > Traditionally if we needed to build debs for third-party libraries, we'd > use `stdeb` to generate the source files. However I'm having a lot of > fun and games trying to figure out how this is done with pypy. > > What is the procedure for building a deb package of a Python library > using stdeb for pypy? > -- > _ ___ Stuart Longland - Systems Engineer > \ /|_) | T: +61 7 3535 9619 > \/ | \ | 38b Douglas Street F: +61 7 3535 9699 > SYSTEMS Milton QLD 4064 http://www.vrt.com.au > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dr.andrew.straw at gmail.com Fri Mar 20 14:32:31 2015 From: dr.andrew.straw at gmail.com (Andrew Straw) Date: Fri, 20 Mar 2015 14:32:31 +0100 Subject: [Distutils] Building deb packages for pypy In-Reply-To: References: <54F52431.8030400@vrt.com.au> Message-ID: Suddenly, for no reason I know of, things are now working for me with pypy on the pybuild-dev branch of stdeb. On Fri, Mar 20, 2015 at 2:16 PM, Andrew Straw wrote: > Dear Stuart, > > I just spent a little time unsuccessfully trying to get stdeb to build > .debs for pypy on the the pybuild-dev branch. If you want to play around, > that's where I'd start. Perhaps it's something simple I overlooked. > > Best, > Andrew > > > On Tue, Mar 3, 2015 at 4:02 AM, Stuart Longland > wrote: > >> Hi all, >> >> I'm currently attempting to evaluate pypy for use in a few >> performance-critical projects. We mainly use Debian or Ubuntu as the >> host platform, with our packages provided as debs. >> >> Traditionally if we needed to build debs for third-party libraries, we'd >> use `stdeb` to generate the source files. However I'm having a lot of >> fun and games trying to figure out how this is done with pypy. >> >> What is the procedure for building a deb package of a Python library >> using stdeb for pypy? >> -- >> _ ___ Stuart Longland - Systems Engineer >> \ /|_) | T: +61 7 3535 9619 >> \/ | \ | 38b Douglas Street F: +61 7 3535 9699 >> SYSTEMS Milton QLD 4064 http://www.vrt.com.au >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From petsuter at gmail.com Sat Mar 21 16:19:09 2015 From: petsuter at gmail.com (Peter Suter) Date: Sat, 21 Mar 2015 16:19:09 +0100 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? 
Message-ID: <550D8BED.4080507@gmail.com> Hi from pkg_resources import parse_version parse_version('2.0dev-r123') > parse_version('1.0dev') In setuptools 7 and below this was |True|. In setuptools 8 and above this is |False|. A bug? Are these tags not supported anymore? The documentation still mentions them extensively, e.g. in: https://pythonhosted.org/setuptools/setuptools.html#managing-continuous-releases-using-subversion setup.cfg options are recommended that generate similar versions: [egg_info] tag_build = .dev tag_svn_revision = 1 Am I missing something? Thanks, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Mar 21 17:21:23 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 21 Mar 2015 16:21:23 +0000 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? In-Reply-To: <550D8BED.4080507@gmail.com> References: <550D8BED.4080507@gmail.com> Message-ID: On 21 March 2015 at 15:19, Peter Suter wrote: > Hi > > from pkg_resources import parse_version > parse_version('2.0dev-r123') > parse_version('1.0dev') > > In setuptools 7 and below this was True. In setuptools 8 and above this is > False. > > A bug? Are these tags not supported anymore? > > The documentation still mentions them extensively, e.g. in: > https://pythonhosted.org/setuptools/setuptools.html#managing-continuous-releases-using-subversion > > setup.cfg options are recommended that generate similar versions: > > [egg_info] > tag_build = .dev > tag_svn_revision = 1 > > Am I missing something? Version numbers are now standardised under PEP 440 (https://www.python.org/dev/peps/pep-0440/). Under that PEP, post-releases come before pre-releases. "r" represents a post-release and "dev" a pre-release. So your version isn't a valid PEP 440 version, and gets parsed as a legacy version. Legacy versions then get sorted before any PEP 440 version, such as 1.0dev. 
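[Editorial note: Paul's explanation above can be checked directly with the `packaging` library, the reference PEP 440 implementation that pip vendors. Note one difference from the setuptools-8-era behavior he describes: current `packaging` raises `InvalidVersion` for the old string, where `pkg_resources` of that era fell back to a "legacy version" that sorted below all PEP 440 versions.]

```python
from packaging.version import Version, InvalidVersion

# PEP 440 ordering within one release: dev < pre-release < final < post-release
assert Version("1.0.dev1") < Version("1.0a1") < Version("1.0") < Version("1.0.post1")

# The PEP 440 spelling of the old "2.0dev-r123" sorts above 1.0 again:
assert Version("2.0.dev123") > Version("1.0.dev0")

# The setuptools-7-era string itself is not valid PEP 440
# (a post segment like "r123" may not follow a dev segment):
try:
    Version("2.0dev-r123")
except InvalidVersion:
    print("2.0dev-r123 is not a valid PEP 440 version")
```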
It looks like the setuptools documentation hasn't been updated (setuptools now uses PEP 440 versions since 8.0, see https://pythonhosted.org/setuptools/history.html#id48). Paul From petsuter at gmail.com Sat Mar 21 17:46:13 2015 From: petsuter at gmail.com (Peter Suter) Date: Sat, 21 Mar 2015 17:46:13 +0100 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? In-Reply-To: References: <550D8BED.4080507@gmail.com> Message-ID: <550DA055.6020004@gmail.com> On 21.03.2015 17:21, Paul Moore wrote: > On 21 March 2015 at 15:19, Peter Suter wrote: >> Hi >> >> from pkg_resources import parse_version >> parse_version('2.0dev-r123') > parse_version('1.0dev') >> >> In setuptools 7 and below this was True. In setuptools 8 and above this is >> False. >> >> A bug? Are these tags not supported anymore? >> >> The documentation still mentions them extensively, e.g. in: >> https://pythonhosted.org/setuptools/setuptools.html#managing-continuous-releases-using-subversion >> >> setup.cfg options are recommended that generate similar versions: >> >> [egg_info] >> tag_build = .dev >> tag_svn_revision = 1 >> >> Am I missing something? > Version numbers are now standardised under PEP 440 > (https://www.python.org/dev/peps/pep-0440/). Under that PEP, > post-releases come before pre-releases. "r" represents a post-release > and "dev" a pre-release. So your version isn't a valid PEP 440 > version, and gets parsed as a legacy version. Legacy versions then get > sorted before any PEP 440 version, such as 1.0dev. > > It looks like the setuptools documentation hasn't been updated > (setuptools now uses PEP 440 versions since 8.0, see > https://pythonhosted.org/setuptools/history.html#id48). OK, thanks. So what is the equivalent PEP 440 SVN tagged development version? 2.0.dev123? How can I get setuptools to create that instead? It looks like with the above egg_info setuptools automatically adds the "-r". 
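[Editorial note: on the question just above — `2.0.dev123` is indeed the canonical PEP 440 form, and the separator before `dev` is purely cosmetic under PEP 440 normalization, so several spellings collapse to the same canonical version. A quick check with the `packaging` library:]

```python
from packaging.version import Version

# All of these spellings normalize to the canonical form "2.0.dev123"
for spelling in ("2.0.dev123", "2.0dev123", "2.0-dev123", "2.0_dev123"):
    assert str(Version(spelling)) == "2.0.dev123"

# ...and they compare equal, because comparison uses the normalized form
assert Version("2.0dev123") == Version("2.0.dev123")
print("all spellings normalize to 2.0.dev123")
```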
From p.f.moore at gmail.com Sat Mar 21 18:22:37 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 21 Mar 2015 17:22:37 +0000 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? In-Reply-To: <550DA055.6020004@gmail.com> References: <550D8BED.4080507@gmail.com> <550DA055.6020004@gmail.com> Message-ID: On 21 March 2015 at 16:46, Peter Suter wrote: > OK, thanks. > So what is the equivalent PEP 440 SVN tagged development version? > 2.0.dev123? How can I get setuptools to create that instead? It looks like > with the above egg_info setuptools automatically adds the "-r" That, I don't know, to be honest. Hopefully one of the setuptools experts can help you there. Paul From donald at stufft.io Sat Mar 21 19:26:57 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 Mar 2015 14:26:57 -0400 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? In-Reply-To: <550DA055.6020004@gmail.com> References: <550D8BED.4080507@gmail.com> <550DA055.6020004@gmail.com> Message-ID: <66459152-42A5-4CC9-BABB-E1EF2FDE4002@stufft.io> > On Mar 21, 2015, at 12:46 PM, Peter Suter wrote: > > On 21.03.2015 17:21, Paul Moore wrote: >> On 21 March 2015 at 15:19, Peter Suter wrote: >>> Hi >>> >>> from pkg_resources import parse_version >>> parse_version('2.0dev-r123') > parse_version('1.0dev') >>> >>> In setuptools 7 and below this was True. In setuptools 8 and above this is >>> False. >>> >>> A bug? Are these tags not supported anymore? >>> >>> The documentation still mentions them extensively, e.g. in: >>> https://pythonhosted.org/setuptools/setuptools.html#managing-continuous-releases-using-subversion >>> >>> setup.cfg options are recommended that generate similar versions: >>> >>> [egg_info] >>> tag_build = .dev >>> tag_svn_revision = 1 >>> >>> Am I missing something? >> Version numbers are now standardised under PEP 440 >> (https://www.python.org/dev/peps/pep-0440/). Under that PEP, >> post-releases come before pre-releases. 
"r" represents a post-release >> and "dev" a pre-release. So your version isn't a valid PEP 440 >> version, and gets parsed as a legacy version. Legacy versions then get >> sorted before any PEP 440 version, such as 1.0dev. >> >> It looks like the setuptools documentation hasn't been updated >> (setuptools now uses PEP 440 versions since 8.0, see >> https://pythonhosted.org/setuptools/history.html#id48). > OK, thanks. > So what is the equivalent PEP 440 SVN tagged development version? 2.0.dev123? How can I get setuptools to create that instead? It looks like with the above egg_info setuptools automatically adds the "-r". > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig 2.0.dev123 is the correct version number to use, setuptools might need additional updates to it to properly generate those though. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From wichert at wiggy.net Mon Mar 23 09:44:41 2015 From: wichert at wiggy.net (Wichert Akkerman) Date: Mon, 23 Mar 2015 09:44:41 +0100 Subject: [Distutils] pip user-agent string Message-ID: I noticed that pip sends a pretty strange User-Agent string: 127.0.0.1 - - [23/Mar/2015:08:31:31 +0000] "GET /python/ HTTP/1.1" 200 17676 "-" "pip/6.0.8 {\x22cpu\x22:\x22x86_64\x22,\x22distro\x22:{\x22id\x22:\x22trusty\x22,\x22libc\x22:{\x22lib\x22:\x22glibc\x22,\x22version\x22:\x222.4\x22},\x22name\x22:\x22Ubuntu\x22,\x22version\x22:\x2214.04\x22},\x22implementation\x22:{\x22name\x22:\x22CPython\x22,\x22version\x22:\x222.7.6\x22},\x22installer\x22:{\x22name\x22:\x22pip\x22,\x22version\x22:\x226.0.8\x22},\x22python\x22:\x222.7.6\x22,\x22system\x22:{\x22name\x22:\x22Linux\x22,\x22release\x22:\x223.13.0-46-generic\x22}}" This looks like a strangely encoded JSON blob. Can that be changed to something more human readable and parseable by standard log analysers? Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From drsalists at gmail.com Mon Mar 23 18:47:29 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Mon, 23 Mar 2015 10:47:29 -0700 Subject: [Distutils] Statically linking part of an extension module? Message-ID: Hi folks. I want to build a pair of wheels - one for numpy, one for scipy. And I want to statically link atlas (with blas and lapack) into these wheels. I don't want to statically link numpy or scipy into the Python interpreter. The goal is to decrease the frequency with which new wheels need to be built as the OS changes (RHEL 6.6), and to avoid having to deploy rpm's - we want to only deploy wheels if possible. How practical is this? I have some guesses how to do it (passing in $CFLAGS, $LDFLAGS and $CC perhaps), but if there's a better way I'd love to know about it. Thanks! 
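[Editorial note: returning to the pip User-Agent string quoted earlier in Wichert's log excerpt — the `\x22` sequences are just Apache's escaping of double quotes, so the trailing field is ordinary JSON once unescaped. A minimal sketch, using a hypothetical shortened log field rather than the full blob:]

```python
import json

# A log field as Apache writes it: '"' escaped as '\x22' (shortened, hypothetical sample)
raw = r"pip/6.0.8 {\x22cpu\x22:\x22x86_64\x22,\x22python\x22:\x222.7.6\x22}"

# Split off the "pip/<version>" prefix, then undo the quote escaping
ua, _, payload = raw.partition(" ")
info = json.loads(payload.replace(r"\x22", '"'))
print(ua, info["cpu"], info["python"])  # pip/6.0.8 x86_64 2.7.6
```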
From drsalists at gmail.com Mon Mar 23 19:36:16 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Mon, 23 Mar 2015 11:36:16 -0700 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: > Hello, > > I am attempting to build statically linked distributions. > > I am using docker containers to ensure the deployment environment matches > the build environment so there is no compatibility concern. > > Is there any way to force static linking so that wheels can be installed > into a virtual env without requiring specific packages on the host? Maybe pass -static in $LDFLAGS? Just a wild guess really. From bill at baddogconsulting.com Mon Mar 23 19:41:24 2015 From: bill at baddogconsulting.com (Bill Deegan) Date: Mon, 23 Mar 2015 11:41:24 -0700 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: Gordon, If you are sure that your dev and production environments match, then you should have same shared libraries on both, and no need for static linkage? -Bill On Mon, Mar 23, 2015 at 11:36 AM, Dan Stromberg wrote: > On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: > > Hello, > > > > I am attempting to build statically linked distributions. > > > > I am using docker containers to ensure the deployment environment matches > > the build environment so there is no compatibility concern. > > > > Is there any way to force static linking so that wheels can be installed > > into a virtual env without requiring specific packages on the host? > > Maybe pass -static in $LDFLAGS? Just a wild guess really. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From drsalists at gmail.com Mon Mar 23 19:45:22 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Mon, 23 Mar 2015 11:45:22 -0700 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: Is this the general perspective on static linking of python module dependencies? That if your systems are the same, you don't need to? I want static linking too, but if it's swimming upstream in a fast river, I may reconsider. Thanks. On Mon, Mar 23, 2015 at 11:41 AM, Bill Deegan wrote: > Gordon, > > If you are sure that your dev and production environments match, then you > should have same shared libraries on both, and no need for static linkage? > > -Bill > > On Mon, Mar 23, 2015 at 11:36 AM, Dan Stromberg wrote: >> >> On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: >> > Hello, >> > >> > I am attempting to build statically linked distributions. >> > >> > I am using docker containers to ensure the deployment environment >> > matches >> > the build environment so there is no compatibility concern. >> > >> > Is there any way to force static linking so that wheels can be installed >> > into a virtual env without requiring specific packages on the host? >> >> Maybe pass -static in $LDFLAGS? Just a wild guess really. >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > From p.f.moore at gmail.com Mon Mar 23 19:55:03 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 23 Mar 2015 18:55:03 +0000 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: On 23 March 2015 at 18:45, Dan Stromberg wrote: > Is this the general perspective on static linking of python module > dependencies? That if your systems are the same, you don't need to? > > I want static linking too, but if it's swimming upstream in a fast > river, I may reconsider. On Windows, it's not uncommon to statically link dependencies into a wheel. 
Obviously that's a very different environment, and the situation isn't the same, but it's certainly something that *can* be done. I don't know how easy it is to do it on Linux, though (on Windows, it's mostly a case of building static libs for your dependencies and linking to those rather than to DLLs, AIUI). Paul. From chris.barker at noaa.gov Mon Mar 23 20:07:40 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 23 Mar 2015 12:07:40 -0700 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: On Mon, Mar 23, 2015 at 11:45 AM, Dan Stromberg wrote: > Is this the general perspective on static linking of python module > dependencies? That if your systems are the same, you don't need to? > That's general -- nothing specific to python here. There _may_ be a difference in that you might be more likely to want to distribute a binary python module, and no be sure of the level of compatibility of the host sytem -- particularly if you use a non-standard or not-comon lib, or one you want built a particular way -- like ATLAS, BLAS, etc... I want static linking too, but if it's swimming upstream in a fast > river, I may reconsider. > well it's a slow river... The easiest way is to make sure that you only have the static version of the libs on the system you build on. You may be able to do that by passing something like --disable-shared to configure, or you can just kludge it and delete the shared libs after you build and install. -Chris > Thanks. > > On Mon, Mar 23, 2015 at 11:41 AM, Bill Deegan > wrote: > > Gordon, > > > > If you are sure that your dev and production environments match, then you > > should have same shared libraries on both, and no need for static > linkage? > > > > -Bill > > > > On Mon, Mar 23, 2015 at 11:36 AM, Dan Stromberg > wrote: > >> > >> On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: > >> > Hello, > >> > > >> > I am attempting to build statically linked distributions. 
> >> > > >> > I am using docker containers to ensure the deployment environment > >> > matches > >> > the build environment so there is no compatibility concern. > >> > > >> > Is there any way to force static linking so that wheels can be > installed > >> > into a virtual env without requiring specific packages on the host? > >> > >> Maybe pass -static in $LDFLAGS? Just a wild guess really. > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From petsuter at gmail.com Mon Mar 23 20:41:34 2015 From: petsuter at gmail.com (Peter Suter) Date: Mon, 23 Mar 2015 20:41:34 +0100 Subject: [Distutils] 2.0dev-r123 no longer greater than 1.0? In-Reply-To: <66459152-42A5-4CC9-BABB-E1EF2FDE4002@stufft.io> References: <550D8BED.4080507@gmail.com> <550DA055.6020004@gmail.com> <66459152-42A5-4CC9-BABB-E1EF2FDE4002@stufft.io> Message-ID: <55106C6E.5060106@gmail.com> On 21.03.2015 19:26, Donald Stufft wrote: >> On Mar 21, 2015, at 12:46 PM, Peter Suter wrote: >> >> On 21.03.2015 17:21, Paul Moore wrote: >>> On 21 March 2015 at 15:19, Peter Suter wrote: >>>> Hi >>>> >>>> from pkg_resources import parse_version >>>> parse_version('2.0dev-r123') > parse_version('1.0dev') >>>> >>>> In setuptools 7 and below this was True. In setuptools 8 and above this is >>>> False. >>>> >>>> A bug? Are these tags not supported anymore? >>>> >>>> The documentation still mentions them extensively, e.g. 
in: >>>> https://pythonhosted.org/setuptools/setuptools.html#managing-continuous-releases-using-subversion >>>> >>>> setup.cfg options are recommended that generate similar versions: >>>> >>>> [egg_info] >>>> tag_build = .dev >>>> tag_svn_revision = 1 >>>> >>>> Am I missing something? >>> Version numbers are now standardised under PEP 440 >>> (https://www.python.org/dev/peps/pep-0440/). Under that PEP, >>> post-releases come before pre-releases. "r" represents a post-release >>> and "dev" a pre-release. So your version isn't a valid PEP 440 >>> version, and gets parsed as a legacy version. Legacy versions then get >>> sorted before any PEP 440 version, such as 1.0dev. >>> >>> It looks like the setuptools documentation hasn't been updated >>> (setuptools now uses PEP 440 versions since 8.0, see >>> https://pythonhosted.org/setuptools/history.html#id48). >> OK, thanks. >> So what is the equivalent PEP 440 SVN tagged development version? 2.0.dev123? How can I get setuptools to create that instead? It looks like with the above egg_info setuptools automatically adds the "-r". >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > 2.0.dev123 is the correct version number to use, setuptools might need additional updates to it to properly generate those though. Thanks for the info. Is there a ticket I can follow, or should I create one? Also, is using tag_svn_revision generally not recommended? It's rather useful when it works, but it's been a source of problems more than once. 
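[Editorial note: on the `tag_svn_revision` question above — since setuptools 8 expects PEP 440 versions, one workaround (a sketch, not an official recipe) is to drop `tag_svn_revision` entirely and supply the revision number yourself, e.g.:]

```ini
# setup.cfg -- no tag_svn_revision; the "123" here would be substituted
# by your build tooling (hypothetical value, e.g. written into setup.cfg
# by a CI script, or passed as "egg_info -b .dev123" on the command line)
[egg_info]
tag_build = .dev123
```

With a base version of `2.0` this yields `2.0.dev123`, which sorts correctly under PEP 440, instead of the old `2.0dev-r123` form.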
From ncoghlan at gmail.com Mon Mar 23 23:29:41 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 24 Mar 2015 08:29:41 +1000 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: On 24 Mar 2015 05:16, "Chris Barker" wrote: > > On Mon, Mar 23, 2015 at 11:45 AM, Dan Stromberg wrote: >> >> Is this the general perspective on static linking of python module >> dependencies? That if your systems are the same, you don't need to? > > > That's general -- nothing specific to python here. > > There _may_ be a difference in that you might be more likely to want to distribute a binary python module, and no be sure of the level of compatibility of the host sytem -- particularly if you use a non-standard or not-comon lib, or one you want built a particular way -- like ATLAS, BLAS, etc... > >> I want static linking too, but if it's swimming upstream in a fast >> river, I may reconsider. > > > well it's a slow river... > > The easiest way is to make sure that you only have the static version of the libs on the system you build on. You may be able to do that by passing something like --disable-shared to configure, or you can just kludge it and delete the shared libs after you build and install. The "not swimming upriver" approach is to look at conda for language independent cross-platform user level package management :) It's purpose built to handle the complexities of the scientific Python stack, while the default Python specific toolchain is more aimed at cases where you can relatively easily rely on Linux system libraries. Cheers, Nick. > > -Chris > > >> >> Thanks. >> >> On Mon, Mar 23, 2015 at 11:41 AM, Bill Deegan wrote: >> > Gordon, >> > >> > If you are sure that your dev and production environments match, then you >> > should have same shared libraries on both, and no need for static linkage? 
>> > >> > -Bill >> > >> > On Mon, Mar 23, 2015 at 11:36 AM, Dan Stromberg wrote: >> >> >> >> On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: >> >> > Hello, >> >> > >> >> > I am attempting to build statically linked distributions. >> >> > >> >> > I am using docker containers to ensure the deployment environment >> >> > matches >> >> > the build environment so there is no compatibility concern. >> >> > >> >> > Is there any way to force static linking so that wheels can be installed >> >> > into a virtual env without requiring specific packages on the host? >> >> >> >> Maybe pass -static in $LDFLAGS? Just a wild guess really. >> >> _______________________________________________ >> >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> > >> > >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
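Besides deleting the shared libs or exporting `LDFLAGS=-static` as suggested above, a related approach is to hand the linker the static archive directly from `setup.py`. A minimal sketch only, not code from this thread; the library path, module name, and source file below are all hypothetical:

```python
# Sketch: force static linking of a C extension by listing the .a archive
# in extra_objects, instead of using libraries=["foo"] (which the linker
# may resolve to the shared libfoo.so).  All names/paths are hypothetical.
from setuptools import Extension

STATIC_LIB = "/usr/local/lib/libfoo.a"  # hypothetical prebuilt static archive

ext = Extension(
    "mypkg._foo",                  # hypothetical extension module name
    sources=["src/_foo.c"],        # hypothetical source file
    extra_objects=[STATIC_LIB],    # linked in directly; no runtime .so needed
    # deliberately no libraries=["foo"], which could pick up the shared lib
)

# ext would then be passed to setup(ext_modules=[ext]).
print(ext.extra_objects)
```

The resulting wheel carries the library code inside the extension module itself, so the target virtualenv's host doesn't need the package installed system-wide.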
URL: From contact at ionelmc.ro Tue Mar 24 09:56:26 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 24 Mar 2015 10:56:26 +0200 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1434424866.156842.1426496532846.JavaMail.yahoo@mail.yahoo.com> References: <1426146883.26686.0@smtp.mail.yahoo.com> <1434424866.156842.1426496532846.JavaMail.yahoo@mail.yahoo.com> Message-ID: Hey, If you just want to copy an out-of-package file into site-packages you could just override the build command and copy it there (in the build dir). Here's an example: https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - it seems to work fine with wheels. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: > Hi All > This, and another memory-leak bug were triggered by the sandbox. > Would it be possible to either add an API to exempt files, or just allow > writing within site packages, even if just for .pth files ? > > I'm monkey patching around these for now > https://github.com/stuaxo/vext/blob/master/setup.py#L16 > > S++ > > > > On Thursday, March 12, 2015 2:54 PM, Stuart Axon > wrote: > > > > For closure: The solution was to make a Command class + implement > finalize_options to fixup the paths in distribution.data_files. > > > Source: > > # https://gist.github.com/stuaxo/c76a042cb7aa6e77285b > """ > Install a file into the root of sitepackages on windows as well as linux. > > Under normal operation on win32 path_to_site_packages > gets changed to '' which installs inside the .egg instead. 
> """ > > import os > > from distutils import sysconfig > from distutils.command.install_data import install_data > from setuptools import setup > > here = os.path.normpath(os.path.abspath(os.path.dirname(__file__))) > > site_packages_path = sysconfig.get_python_lib() > site_packages_files = ['TEST_FILE.TXT'] > > class _install_data(install_data): > def finalize_options(self): > """ > On win32 the files here are changed to '' which > ends up inside the .egg, change this back to the > absolute path. > """ > install_data.finalize_options(self) > global site_packages_files > for i, f in enumerate(list(self.distribution.data_files)): > if not isinstance(f, basestring): > folder, files = f > if files == site_packages_files: > # Replace with absolute path version > self.distribution.data_files[i] = (site_packages_path, > files) > > setup( > cmdclass={'install_data': _install_data}, > name='test_install', > version='0.0.1', > > description='', > long_description='', > url='https://example.com', > author='Stuart Axon', > author_email='stuaxo2 at yahoo.com', > license='PD', > classifiers=[], > keywords='', > packages=[], > > install_requires=[], > > data_files=[ > (site_packages_path, site_packages_files), > ], > > ) > > > > On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: > > I had more of a dig into this, with a minimal setup.py: > https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls > install_data On win32 setup.py calls install_data which copies the file > into the egg - even though I have given the absolute path to sitepackages > C:\> python setup.py install .... running install_data creating > build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... > On Linux the file is copied to the right path: $ python setup.py install > ..... installing package data to build/bdist.linux-x86_64/egg running > install_data copying TEST_FILE.TXT -> > /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages .... 
> *something* is normalising my absolute path to site packages into just '' - > it's possible to see by looking at self.data_files in the 'run' function > in: distutils/command/install_data.py - on windows the first part has > been changed to '' unlike on linux where it's the absolute path I set... > still not sure where it's happening though. *This all took a while, as > rebuilt VM and verified on 2.7.8 and 2.7.9.. S++ > > On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > > I had a further look - and on windows the file ends up inside the .egg > file, on linux it ends up inside the site packages as intended. At a guess > it seems like there might be a bug in the path handling on windows. .. I > wonder if it's something like this > http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python > which seems an easy way to get an off-by-one error in a path ? > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Mar 24 10:20:15 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 24 Mar 2015 22:20:15 +1300 Subject: [Distutils] setup_requires for dev environments In-Reply-To: References: <5809D653-E729-4BB9-AFFE-7A5FFE2E0A05@stufft.io> <42D3C936-361D-4FA4-A887-6BCBFA9DBC67@stufft.io> Message-ID: On 17 March 2015 at 15:36, Robert Collins wrote: ... > So, the proposed plan going forward: > - now: > - I will put a minimal patch up for pip into the tracker and ask > for feedback here and there Feedback solicited! https://github.com/pypa/pip/pull/2603 The week delay was refactoring the guts of prepare_files to make the patch small. Note too that the patch has probably got code in the wrong place etc - but it's small enough to reason about :). 
> - we can debate at that point whether bits of it should be in > setuptools or not etc etc > - likewise we can debate the use of a temporary environment or > install into the target environment at that point > - future > - in the metabuild thing that is planned long term, handling this > particular option will be placed w/in the setuptools plugin for it, > making this not something that needs to be a 'standard'. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From stuaxo2 at yahoo.com Tue Mar 24 10:36:18 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Tue, 24 Mar 2015 09:36:18 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: References: Message-ID: <1105349950.312608.1427189778074.JavaMail.yahoo@mail.yahoo.com> Hi, This works from pypi - but not when installing from source with `python setup.py install`, which stops this nifty thing from working: PYTHON_HUNTER="module='os.path'" python yourapp.py Sandbox monkeypatches os.file, so I think it catches you using copy. Maybe we need a common API for code that runs at startup? S++ On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș wrote: Hey, If you just want to copy an out-of-package file into site-packages you could just override the build command and copy it there (in the build dir). Here's an example: https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - it seems to work fine with wheels. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: Hi All, This, and another memory-leak bug were triggered by the sandbox. Would it be possible to either add an API to exempt files, or just allow writing within site packages, even if just for .pth files ? 
I'm monkey patching around these for now
https://github.com/stuaxo/vext/blob/master/setup.py#L16

S++

On Thursday, March 12, 2015 2:54 PM, Stuart Axon wrote:

For closure: The solution was to make a Command class + implement finalize_options to fixup the paths in distribution.data_files.

Source:

# https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
"""
Install a file into the root of sitepackages on windows as well as linux.

Under normal operation on win32 path_to_site_packages
gets changed to '' which installs inside the .egg instead.
"""

import os

from distutils import sysconfig
from distutils.command.install_data import install_data
from setuptools import setup

here = os.path.normpath(os.path.abspath(os.path.dirname(__file__)))

site_packages_path = sysconfig.get_python_lib()
site_packages_files = ['TEST_FILE.TXT']

class _install_data(install_data):
    def finalize_options(self):
        """
        On win32 the files here are changed to '' which
        ends up inside the .egg, change this back to the
        absolute path.
        """
        install_data.finalize_options(self)
        global site_packages_files
        for i, f in enumerate(list(self.distribution.data_files)):
            if not isinstance(f, basestring):
                folder, files = f
                if files == site_packages_files:
                    # Replace with absolute path version
                    self.distribution.data_files[i] = (site_packages_path, files)

setup(
    cmdclass={'install_data': _install_data},
    name='test_install',
    version='0.0.1',

    description='',
    long_description='',
    url='https://example.com',
    author='Stuart Axon',
    author_email='stuaxo2 at yahoo.com',
    license='PD',
    classifiers=[],
    keywords='',
    packages=[],

    install_requires=[],

    data_files=[
        (site_packages_path, site_packages_files),
    ],
)

On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote:

I had more of a dig into this, with a minimal setup.py:
https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
setup calls install_data
On win32 setup.py calls install_data which copies the file into the egg - even though I have given the absolute path to sitepackages
C:\> python setup.py install
....
running install_data
creating build\bdist.win32\egg
copying TEST_FILE.TXT -> build\bdist.win32\egg\
....
On Linux the file is copied to the right path:
$ python setup.py install
.....
installing package data to build/bdist.linux-x86_64/egg
running install_data
copying TEST_FILE.TXT -> /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages
....
*something* is normalising my absolute path to site packages into just '' - it's possible to see by looking at self.data_files in the 'run' function in:
distutils/command/install_data.py
- on windows the first part has been changed to '' unlike on linux where it's the absolute path I set... still not sure where it's happening though.
*This all took a while, as rebuilt VM and verified on 2.7.8 and 2.7.9..
S++

On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote:

> I had a further look - and on windows the file ends up inside the .egg file, on linux it ends up inside the site packages as intended. At a guess it seems like there might be a bug in the path handling on windows. .. I wonder if it's something like this http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python which seems an easy way to get an off-by-one error in a path ?

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at ionelmc.ro Tue Mar 24 10:58:40 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 24 Mar 2015 11:58:40 +0200 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1105349950.312608.1427189778074.JavaMail.yahoo@mail.yahoo.com> References: <1105349950.312608.1427189778074.JavaMail.yahoo@mail.yahoo.com> Message-ID: Hmmmm, good catch. It appears that when `setup.py install` is used the .pth file is there, but it's inside the egg (wrong place). Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote: > Hi, > This works from pypi - but not when installing from source with python > setup.py install which stops this nifty thing from working: > > PYTHON_HUNTER="module='os.path'" python yourapp.py > > > Sandbox monkeypatches os.file, so I think it catches you using copy. > Maybe we need a common API for code that runs at startup? > > S++ > > > > On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș < > contact at ionelmc.ro> wrote: > > > > Hey, > > If you just want to copy an out-of-package file into site-packages you could > just override the build command and copy it there (in the build dir). > Here's an example: > https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - > it seems to work fine with wheels. > > > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > > On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: > > Hi All > This, and another memory-leak bug were triggered by the sandbox. > Would it be possible to either add an API to exempt files, or just allow > writing within site packages, even if just for .pth files ? 
> > I'm monkey patching around these for now > https://github.com/stuaxo/vext/blob/master/setup.py#L16 > > S++ > > > > On Thursday, March 12, 2015 2:54 PM, Stuart Axon > wrote: > > > > For closure: The solution was to make a Command class + implement > finalize_options to fixup the paths in distribution.data_files. > > > Source: > > # https://gist.github.com/stuaxo/c76a042cb7aa6e77285b > """ > Install a file into the root of sitepackages on windows as well as linux. > > Under normal operation on win32 path_to_site_packages > gets changed to '' which installs inside the .egg instead. > """ > > import os > > from distutils import sysconfig > from distutils.command.install_data import install_data > from setuptools import setup > > here = os.path.normpath(os.path.abspath(os.path.dirname(__file__))) > > site_packages_path = sysconfig.get_python_lib() > site_packages_files = ['TEST_FILE.TXT'] > > class _install_data(install_data): > def finalize_options(self): > """ > On win32 the files here are changed to '' which > ends up inside the .egg, change this back to the > absolute path. 
> """ > install_data.finalize_options(self) > global site_packages_files > for i, f in enumerate(list(self.distribution.data_files)): > if not isinstance(f, basestring): > folder, files = f > if files == site_packages_files: > # Replace with absolute path version > self.distribution.data_files[i] = (site_packages_path, > files) > > setup( > cmdclass={'install_data': _install_data}, > name='test_install', > version='0.0.1', > > description='', > long_description='', > url='https://example.com', > author='Stuart Axon', > author_email='stuaxo2 at yahoo.com', > license='PD', > classifiers=[], > keywords='', > packages=[], > > install_requires=[], > > data_files=[ > (site_packages_path, site_packages_files), > ], > > ) > > > > On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: > > I had more of a dig into this, with a minimal setup.py: > https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls > install_data On win32 setup.py calls install_data which copies the file > into the egg - even though I have given the absolute path to sitepackages > C:\> python setup.py install .... running install_data creating > build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... > On Linux the file is copied to the right path: $ python setup.py install > ..... installing package data to build/bdist.linux-x86_64/egg running > install_data copying TEST_FILE.TXT -> > /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages .... > *something* is normalising my absolute path to site packages into just '' - > it's possible to see by looking at self.data_files in the 'run' function > in: distutils/command/install_data.py - on windows it the first part has > been changed to '' unlike on linux where it's the absolute path I set... > still not sure where it's happening though. *This all took a while, as > rebuilt VM and verified on 2.7.8 and 2.7.9.. 
S++ > > On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > > I had a further look - and on windows the file ends up inside the .egg > file, on linux it ends up inside the site packages as intended. At a guess > it seems like there might be a bug in the path handling on windows. .. I > wonder if it's something like this > http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python > which seems an easy way to get an off-by-one error in a path ? > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuaxo2 at yahoo.com Tue Mar 24 18:05:42 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Tue, 24 Mar 2015 17:05:42 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: References: Message-ID: <323345584.588155.1427216742923.JavaMail.yahoo@mail.yahoo.com> Yeah, I think it's a bit weird that it ends up there, makes sense for most files, but .pth files are only evaluated in site-packages - the sandbox should probably treat them differently... S++ On Tuesday, March 24, 2015 4:59 PM, Ionel Cristian Mărieș wrote: Hmmmm, good catch. It appears that when `setup.py install` is used the .pth file is there, but it's inside the egg (wrong place). Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote: Hi, This works from pypi - but not when installing from source with `python setup.py install`, which stops this nifty thing from working: PYTHON_HUNTER="module='os.path'" python yourapp.py Sandbox monkeypatches os.file, so I think it catches you using copy. Maybe we need a common API for code that runs at startup? S++ On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș 
wrote:

Hey,

If you just want to copy an out-of-package file into site-packages you could just override the build command and copy it there (in the build dir). Here's an example: https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - it seems to work fine with wheels.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote:

Hi All, This, and another memory-leak bug were triggered by the sandbox. Would it be possible to either add an API to exempt files, or just allow writing within site packages, even if just for .pth files ?

I'm monkey patching around these for now
https://github.com/stuaxo/vext/blob/master/setup.py#L16

S++

On Thursday, March 12, 2015 2:54 PM, Stuart Axon wrote:

For closure: The solution was to make a Command class + implement finalize_options to fixup the paths in distribution.data_files.

Source:

# https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
"""
Install a file into the root of sitepackages on windows as well as linux.

Under normal operation on win32 path_to_site_packages
gets changed to '' which installs inside the .egg instead.
"""

import os

from distutils import sysconfig
from distutils.command.install_data import install_data
from setuptools import setup

here = os.path.normpath(os.path.abspath(os.path.dirname(__file__)))

site_packages_path = sysconfig.get_python_lib()
site_packages_files = ['TEST_FILE.TXT']

class _install_data(install_data):
    def finalize_options(self):
        """
        On win32 the files here are changed to '' which
        ends up inside the .egg, change this back to the
        absolute path.
        """
        install_data.finalize_options(self)
        global site_packages_files
        for i, f in enumerate(list(self.distribution.data_files)):
            if not isinstance(f, basestring):
                folder, files = f
                if files == site_packages_files:
                    # Replace with absolute path version
                    self.distribution.data_files[i] = (site_packages_path, files)

setup(
    cmdclass={'install_data': _install_data},
    name='test_install',
    version='0.0.1',

    description='',
    long_description='',
    url='https://example.com',
    author='Stuart Axon',
    author_email='stuaxo2 at yahoo.com',
    license='PD',
    classifiers=[],
    keywords='',
    packages=[],

    install_requires=[],

    data_files=[
        (site_packages_path, site_packages_files),
    ],
)

On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote:

I had more of a dig into this, with a minimal setup.py:
https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
setup calls install_data
On win32 setup.py calls install_data which copies the file into the egg - even though I have given the absolute path to sitepackages
C:\> python setup.py install
....
running install_data
creating build\bdist.win32\egg
copying TEST_FILE.TXT -> build\bdist.win32\egg\
....
On Linux the file is copied to the right path:
$ python setup.py install
.....
installing package data to build/bdist.linux-x86_64/egg
running install_data
copying TEST_FILE.TXT -> /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages
....
*something* is normalising my absolute path to site packages into just '' - it's possible to see by looking at self.data_files in the 'run' function in:
distutils/command/install_data.py
- on windows the first part has been changed to '' unlike on linux where it's the absolute path I set... still not sure where it's happening though.
*This all took a while, as rebuilt VM and verified on 2.7.8 and 2.7.9..
S++

On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote:

> I had a further look - and on windows the file ends up inside the .egg file, on linux it ends up inside the site packages as intended. At a guess it seems like there might be a bug in the path handling on windows. .. 
I wonder if it's something like this http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python which seems an easy way to get an off-by-one error in a path ? _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Tue Mar 24 19:10:19 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 24 Mar 2015 20:10:19 +0200 Subject: [Distutils] pip upgrade woes Message-ID: Hello, There's one issue with pip upgrade that annoys me occasionally, and when it does it's very annoying. Every so often I or a customer has to upgrade some core packages like pip, setuptools or virtualenv on some machine. Now this becomes very annoying because said packages were installed there with either easy_install or just `setup.py install`. Several upgrades like that and now the machine has a handful of eggs there. Lots of mistakes were made but what's done is done. Now, if `pip upgrade pip setuptools virtualenv` it will run around a bit, flap its wings and in the end it's not gonna fly like an eagle, and won't be able to go beyond its cursed fences. And so, I feel like a chicken farmer when I try to upgrade packages and pip can't upgrade them. Because those old eggs are still going to be first on sys.path. And when I try to run pip it's still that old one. Sometimes a few `pip uninstall`s solve the issue, but most of the time I have to manually remove files because pip can't figure out what files to remove. One big issue is that pip uninstall only uninstalls the first package it finds, and similarly, pip install will only uninstall the first package it finds before copying the new files. This whole process becomes a whole lot more annoying when you have to explain to someone how to clean up this mess and get the latest pip and setuptools. 
So I'm wondering if there's a better way to clean up machines like that. Any ideas? Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Mar 24 19:29:01 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 24 Mar 2015 18:29:01 +0000 Subject: [Distutils] pip upgrade woes In-Reply-To: References: Message-ID: On 24 March 2015 at 18:10, Ionel Cristian Mărieș wrote: > There's one issue with pip upgrade that annoys me occasionally, and when it > does it's very annoying. > > Every so often I or a customer has to upgrade some core packages like > pip, setuptools or virtualenv on some machine. Now this becomes very > annoying because said packages were installed there with either easy_install > or just `setup.py install`. Several upgrades like that and now the machine > has a handful of eggs there. Lots of mistakes were made but what's done is > done. > > Now, if `pip upgrade pip setuptools virtualenv` it will run around a bit, > flap its wings and in the end it's not gonna fly like an eagle, and won't be > able to go beyond its cursed fences. And so, I feel like a chicken farmer when > I try to upgrade packages and pip can't upgrade them. Because those old eggs > are still going to be first on sys.path. And when I try to run pip it's > still that old one. > > Sometimes a few `pip uninstall`s solve the issue, but most of the time I have > to manually remove files because pip can't figure out what files to remove. > > One big issue is that pip uninstall only uninstalls the first package it > finds, and similarly, pip install will only uninstall the first package it > finds before copying the new files. > > This whole process becomes a whole lot more annoying when you have to > explain to someone how to clean up this mess and get the latest pip and setuptools. > > So I'm wondering if there's a better way to clean up machines like that. Any > ideas? 
If I understand your problem correctly, the issue is that these machines have older installs of packages, added by tools that don't have uninstall capabilities, using formats that are not used by pip. You're not able to get pip uninstall to work either because the uninstall data isn't in a form pip can handle, or because there is no uninstall data. I suspect the "first on sys.path" issue is caused by setuptools' .pth files, which are even messier to tidy up, from what I recall of my infrequent dealings with them. It should be possible to write some sort of "purge" command that searches out and removes all traces of a package like that. I wouldn't want pip to do anything like that automatically, for obvious reasons. I'm not even that keen on it being a new pip subcommand, because there's a lot of risk of breakage (I'd imagine the code would need a certain amount of guesswork to identify non-standard files and directories that "belong" to a package). As a start, I'd suggest looking at writing some sort of independent purge-package command that you could use when you hit problems (pip install -U setuptools... weirdness happens, so purge-package setuptools; pip install setuptools). If all the scenarios that tool handles end up being clearly defined and low-risk, then it might be that there's scope for a "pip purge" command of some sort based on it, that removes all trace of a package even when the standard uninstall metadata isn't available. Maybe someone else here with more understanding of the history of easy_install and how eggs used to work (maybe still do, for all I know...) can offer something more specific. Sorry that I can't. Paul From robertc at robertcollins.net Tue Mar 24 22:28:07 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 25 Mar 2015 10:28:07 +1300 Subject: [Distutils] d2to1 setup.cfg schema Message-ID: This is a break-out thread from the centi-thread that spawned about setup-requires. 
d2to1 defined some metadata keys in setup.cfg, in particular 'name' and 'requires-dist'. Confusingly, 'requires-dist' contains the 'install_requires' one might use with setuptools' setup() function. Since the declarative setup-requires concept also involves putting dependencies in setup.cfg (but setup_requires rather than install_requires), I followed the naming convention d2to1 had started. But - all the reviewers (and I agree) think this is confusing and non-obvious. Since d2to1 is strictly a build-time thing - it reflects the keys into the metadata and thus your egg-info/requires.txt is unaltered in output, I think it's reasonable to argue that we don't need to be compatible with it. OTOH folk using d2to1 would not gain the benefits that declarative setup-requires may offer in setuptools // pip. What do folk think? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From jaraco at jaraco.com Tue Mar 24 20:46:52 2015 From: jaraco at jaraco.com (Jason R. Coombs) Date: Tue, 24 Mar 2015 19:46:52 +0000 Subject: [Distutils] pip upgrade woes In-Reply-To: References: Message-ID: Setuptools advertises in its description, "Easily download, build, install, upgrade, and uninstall Python packages". It was intended to support the uninstall model, though at the time that was written, "easy" meant easier than finding and removing each of the files by hand. If you're using eggs, the default install model for Setuptools, they can be uninstalled by removing any reference to the egg from easy-install.pth (which causes it not to be added to sys.path) or by removing the .egg from the site dir (in which case Setuptools will remove the easy-install.pth reference). It's not a clean uninstall in all cases, because any scripts will still linger. Nevertheless, it's still a model that works reasonably well. Setuptools can't help with distutils-installed packages. Note that Setuptools will always install itself as an egg unless installed by another manager such as pip or apt. 
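The manual egg cleanup Jason describes (drop the reference from easy-install.pth so the egg stops being added to sys.path) can be sketched as a small helper. This is an illustration only, not setuptools or pip code, and the egg names in the demo are hypothetical:

```python
import os
import tempfile

def remove_egg_reference(pth_path, project_name):
    """Drop easy-install.pth lines that mention the named project's egg.

    A minimal sketch of the cleanup described above: once the reference is
    gone, the egg is no longer put on sys.path, even if the .egg file
    itself still lingers in the site dir.
    """
    with open(pth_path) as f:
        lines = f.readlines()
    kept = [line for line in lines
            if project_name.lower() not in line.lower()]
    with open(pth_path, "w") as f:
        f.writelines(kept)
    return len(lines) - len(kept)  # how many references were dropped

# Demo against a throwaway .pth file (hypothetical egg names):
pth = os.path.join(tempfile.mkdtemp(), "easy-install.pth")
with open(pth, "w") as f:
    f.write("./setuptools-0.6c11-py2.7.egg\n./pip-1.5.6-py2.7.egg\n")
removed = remove_egg_reference(pth, "setuptools")
print(removed)  # → 1
```

As Jason notes, this is not a complete uninstall: console scripts installed alongside the egg would still need to be removed by hand.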
-----Original Message----- From: pypa-dev at googlegroups.com [mailto:pypa-dev at googlegroups.com] On Behalf Of Paul Moore Sent: Tuesday, 24 March, 2015 14:29 To: Ionel Cristian Mărieș Cc: pypa-dev; DistUtils mailing list Subject: Re: pip upgrade woes On 24 March 2015 at 18:10, Ionel Cristian Mărieș wrote: > There's one issue with pip upgrade that annoys me occasionally, and > when it does it's very annoying. > > Every so often I or a customer has to upgrade some core packages > like pip, setuptools or virtualenv on some machine. Now this becomes > very annoying because said packages were installed there with either > easy_install or just `setup.py install`. Several upgrades like that > and now the machine has a handful of eggs there. Lots of mistakes were > made but what's done is done. > > Now, if `pip upgrade pip setuptools virtualenv` it will run around a > bit, flap its wings and in the end it's not gonna fly like an eagle, > and won't be able to go beyond its cursed fences. And so, I feel like > a chicken farmer when I try to upgrade packages and pip can't upgrade > them. Because those old eggs are still going to be first on sys.path. > And when I try to run pip it's still that old one. > > Sometimes a few `pip uninstall`s solve the issue, but most of the time I > have to manually remove files because pip can't figure out what files to remove. > > One big issue is that pip uninstall only uninstalls the first package > it finds, and similarly, pip install will only uninstall the first > package it finds before copying the new files. > > This whole process becomes a whole lot more annoying when you have to > explain to someone how to clean up this mess and get the latest pip and setuptools. > > So I'm wondering if there's a better way to clean up machines like > that. Any ideas? 
If I understand your problem correctly, the issue is that these machines have older installs of packages, added by tools that don't have uninstall capabilities, using formats that are not used by pip. You're not able to get pip uninstall to work either because the uninstall data isn't in a form pip can handle, or because there is no uninstall data. I suspect the "first on sys.path" issue is caused by setuptools' .pth files, which are even messier to tidy up, from what I recall of my infrequent dealings with them. It should be possible to write some sort of "purge" command that searches out and removes all traces of a package like that. I wouldn't want pip to do anything like that automatically, for obvious reasons. I'm not even that keen on it being a new pip subcommand, because there's a lot of risk of breakage (I'd imagine the code would need a certain amount of guesswork to identify non-standard files and directories that "belong" to a package). As a start, I'd suggest looking at writing some sort of independent purge-package command that you could use when you hit problems (pip install -U setuptools... weirdness happens, so purge-package setuptools; pip install setuptools). If all the scenarios that tool handles end up being clearly defined and low-risk, then it might be that there's scope for a "pip purge" command of some sort based on it, that removes all trace of a package even when the standard uninstall metadata isn't available. Maybe someone else here with more understanding of the history of easy_install and how eggs used to work (maybe still do, for all I know...) can offer something more specific. Sorry that I can't. 
Paul From ncoghlan at gmail.com Tue Mar 24 23:32:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 25 Mar 2015 08:32:07 +1000 Subject: [Distutils] pip upgrade woes In-Reply-To: References: Message-ID: On 25 Mar 2015 04:29, "Paul Moore" wrote: > > As a start, I'd suggest looking at writing some sort of independent > purge-package command that you could use when you hit problems (pip > install -U setuptools... weirdness happens, so purge-package > setuptools; pip install setuptools). If all the scenarios that tool > handles end up being clearly defined and low-risk, then it might be > that there's scope for a "pip purge" command of some sort based on it, > that removes all trace of a package even when the standard uninstall > metadata isn't available. I like this idea, especially if the tool was made aware of the system package manager date stores (at least for apt and rpm) and could hence emit the appropriate dependency respecting system command for removing them in those cases rather than attempting to remove them directly. > Maybe someone else here with more understanding of the history of > easy_install and how eggs used to work (maybe still do, for all I > know...) can offer something more specific. Sorry that I can't. Jason already gave details for the egg case. For purging "setup.py install" cases, a purge tool could potentially make an educated guess by looking at the contents of a wheel or egg file from PyPI (perhaps looking at both the oldest and newest release that provides such files). Cheers, Nick. > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
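[Editor's note] A hypothetical first cut at the purge-package idea discussed above might do no more than enumerate the traces a project typically leaves in site-packages, so a human can review them before deleting anything. The `find_traces` name and the glob patterns below are assumptions for illustration; this is not an existing pip, setuptools or system command:

```python
import glob
import os


def find_traces(site_dir, project):
    """List entries under site_dir that look like leftovers of `project`:
    eggs, egg-info/dist-info directories, the bare package directory,
    and any .pth file that references the project by name."""
    traces = []
    for pattern in ("%s-*.egg", "%s-*.egg-info", "%s-*.dist-info", "%s"):
        traces.extend(glob.glob(os.path.join(site_dir, pattern % project)))
    # .pth files (easy-install.pth in particular) can also reference it.
    for pth in glob.glob(os.path.join(site_dir, "*.pth")):
        with open(pth) as f:
            if any(project in line for line in f):
                traces.append(pth)
    return sorted(traces)
```

Feeding this output to something destructive would be the risky second step the thread is, rightly, cautious about; listing candidates for manual review is the low-risk part.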
URL: From ncoghlan at gmail.com Tue Mar 24 23:33:32 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 25 Mar 2015 08:33:32 +1000 Subject: [Distutils] pip upgrade woes In-Reply-To: References: Message-ID: On 25 Mar 2015 08:32, "Nick Coghlan" wrote: > I like this idea, especially if the tool was made aware of the system package manager date stores (at least for apt and rpm) and could hence emit the appropriate dependency respecting system command for removing them in those cases rather than attempting to remove them directly. Oops, "data stores", not "date stores". Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 24 23:51:37 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 25 Mar 2015 08:51:37 +1000 Subject: [Distutils] d2to1 setup.cfg schema In-Reply-To: References: Message-ID: On 25 Mar 2015 07:35, "Robert Collins" wrote: > > This is a break-out thread from the centi-thread that spawned about > setup-requires. > > d2to1 defined some metadata keys in setup.cfg, in particular 'name' and > 'requires-dist'. Confusingly, 'requires-dist' contains the > 'install_requires' one might use with setuptools' setup() function. That particular name comes from PEP 345: https://www.python.org/dev/peps/pep-0345/ Extending d2to1 to accept "install-requires" as meaning the same thing as the existing "requires-dist" (and complaining if a setup.cfg file contains both) would make sense to me, as it provides a more obvious migration path from setuptools, and pairs up nicely with a new "setup-requires" section for setup.py dependencies. (It also occurs to me that we should probably ask the d2to1 folks if they'd be interested in bringing the project under the PyPA banner as happened with setuptools, distlib, etc. 
It's emerged as a key piece of the transition from Turing complete build process customisation to static build metadata configuration) > Since the declarative setup-requires concept also involves putting > dependencies in setup.cfg (but setup_requires rather than > install_requires), I followed the naming convention d2to1 had started. > But - all the reviewers (and I agree) think this is confusing and > non-obvious. > > Since d2to1 is strictly a build-time thing - it reflects the keys into > the metadata and thus your egg-info/requires.txt is unaltered in > output, I think its reasonable to argue that we don't need to be > compatible with it. > > OTOH folk using d2to1 would not gain the benefits that declarative > setup-requires may offer in setuptools // pip. As the converse of the above, I think pip should also accept the PEP 345 defined "requires-dist" as equivalent to "install-requires" (and similarly complain if a file defines both, but in pip's case, only emitting a warning and then treating them as a single combined section) > What do folk think? To summarise my view: I think it makes the most sense to use setuptools inspired section names, and teach d2to1 about them, while also having pip understand the existing PEP 345 defined section name. Cheers, Nick. > > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
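[Editor's note] The naming rule sketched in the reply above (treat "install-requires" and the PEP 345 "requires-dist" spelling as the same key, and complain when a setup.cfg defines both) is compact enough to express in code. The snippet below is only an illustration of that proposal; the function name and the [metadata] section layout are assumptions, not actual d2to1 or pip behaviour:

```python
import configparser

# The two spellings the thread proposes treating as equivalent.
ALIASES = ("install-requires", "requires-dist")


def read_install_requires(cfg_text):
    """Return the dependency list from a setup.cfg-style [metadata]
    section, accepting either alias and erroring if both appear."""
    parser = configparser.RawConfigParser()
    parser.read_string(cfg_text)
    present = [key for key in ALIASES if parser.has_option("metadata", key)]
    if len(present) == 2:
        raise ValueError(
            "setup.cfg defines both 'install-requires' and 'requires-dist'")
    if not present:
        return []
    value = parser.get("metadata", present[0])
    return [line.strip() for line in value.splitlines() if line.strip()]
```

Whether a duplicate definition should be a hard error (as Robert implemented) or a warning with merging (as Nick suggested for pip) is exactly the open question in the thread; the sketch takes the stricter branch.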
URL: From contact at ionelmc.ro Wed Mar 25 03:02:46 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Wed, 25 Mar 2015 04:02:46 +0200 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <1105349950.312608.1427189778074.JavaMail.yahoo@mail.yahoo.com> References: <1105349950.312608.1427189778074.JavaMail.yahoo@mail.yahoo.com> Message-ID: This seems to do the trick:

class EasyInstallWithPTH(easy_install):
    def run(self):
        easy_install.run(self)
        for path in glob(join(dirname(__file__), 'src', '*.pth')):
            dest = join(self.install_dir, basename(path))
            self.copy_file(path, dest)

Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote: > Hi, > This works from pypi - but not when installing from source with python > setup.py install which stops this nifty thing from working: > > PYTHON_HUNTER="module='os.path'" python yourapp.py > > > Sandbox monkeypatches os.file, so I think it catches you using copy. > Maybe we need a common API for code that runs at startup? > > S++ > > > > On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș < > contact at ionelmc.ro> wrote: > > > > Hey, > > If you just want to copy an out-of-package file into site-packages you could > just override the build command and copy it there (in the build dir). > Here's an example: > https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - > it seems to work fine with wheels. > > > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > > On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: > > Hi All > This, and another memory-leak bug were triggered by the sandbox. > Would it be possible to either add an API to exempt files, or just allow > writing within site packages, even if just for .pth files ?
> > I'm monkey patching around these for now > https://github.com/stuaxo/vext/blob/master/setup.py#L16 > > S++ > > > > On Thursday, March 12, 2015 2:54 PM, Stuart Axon > wrote: > > > > For closure: The solution was to make a Command class + implement > finalize_options to fixup the paths in distribution.data_files. > > > Source: > > # https://gist.github.com/stuaxo/c76a042cb7aa6e77285b > """ > Install a file into the root of sitepackages on windows as well as linux. > > Under normal operation on win32 path_to_site_packages > gets changed to '' which installs inside the .egg instead. > """ > > import os > > from distutils import sysconfig > from distutils.command.install_data import install_data > from setuptools import setup > > here = os.path.normpath(os.path.abspath(os.path.dirname(__file__))) > > site_packages_path = sysconfig.get_python_lib() > site_packages_files = ['TEST_FILE.TXT'] > > class _install_data(install_data): > def finalize_options(self): > """ > On win32 the files here are changed to '' which > ends up inside the .egg, change this back to the > absolute path. 
> """ > install_data.finalize_options(self) > global site_packages_files > for i, f in enumerate(list(self.distribution.data_files)): > if not isinstance(f, basestring): > folder, files = f > if files == site_packages_files: > # Replace with absolute path version > self.distribution.data_files[i] = (site_packages_path, > files) > > setup( > cmdclass={'install_data': _install_data}, > name='test_install', > version='0.0.1', > > description='', > long_description='', > url='https://example.com', > author='Stuart Axon', > author_email='stuaxo2 at yahoo.com', > license='PD', > classifiers=[], > keywords='', > packages=[], > > install_requires=[], > > data_files=[ > (site_packages_path, site_packages_files), > ], > > ) > > > > On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: > > I had more of a dig into this, with a minimal setup.py: > https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls > install_data On win32 setup.py calls install_data which copies the file > into the egg - even though I have given the absolute path to sitepackages > C:\> python setup.py install .... running install_data creating > build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... > On Linux the file is copied to the right path: $ python setup.py install > ..... installing package data to build/bdist.linux-x86_64/egg running > install_data copying TEST_FILE.TXT -> > /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages .... > *something* is normalising my absolute path to site packages into just '' - > it's possible to see by looking at self.data_files in the 'run' function > in: distutils/command/install_data.py - on windows it the first part has > been changed to '' unlike on linux where it's the absolute path I set... > still not sure where it's happening though. *This all took a while, as > rebuilt VM and verified on 2.7.8 and 2.7.9.. 
S++ > > On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > > I had a further look - and on windows the file ends up inside the .egg > file, on linux it ends up inside the site packages as intended. At a guess > it seems like there might be a bug in the path handling on windows. .. I > wonder if it's something like this > http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python > which seems an easy way to get an off-by-one error in a path ? > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Wed Mar 25 07:30:44 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 25 Mar 2015 19:30:44 +1300 Subject: [Distutils] d2to1 setup.cfg schema In-Reply-To: References: Message-ID: On 25 March 2015 at 11:51, Nick Coghlan wrote: > Extending d2to1 to accept "install-requires" as meaning the same thing as > the existing "requires-dist" (and complaining if a setup.cfg file contains > both) would make sense to me, as it provides a more obvious migration path > from setuptools, and pairs up nicely with a new "setup-requires" section for > setup.py dependencies. I'm inclined to patch setuptools directly; with setuptools no longer decaying, we don't need to work around the codebase - we can work on it. > (It also occurs to me that we should probably ask the d2to1 folks if they'd > be interested in bringing the project under the PyPA banner as happened with > setuptools, distlib, etc. It's emerged as a key piece of the transition from > Turing complete build process customisation to static build metadata > configuration) Thanks for reminding me that transitioning to static build metadata configuration is a /goal/ - that should make the debate around my PR simpler :). 
> As the converse of the above, I think pip should also accept the PEP 345 > defined "requires-dist" as equivalent to "install-requires" (and similarly > complain if a file defines both, but in pip's case, only emitting a warning > and then treating them as a single combined section) I've implemented supporting both, erroring if both are present at once, and not warning (at this stage - we can add a warning later methinks). Tis rude to warn when things are bleeding edge. >> What do folk think? > > To summarise my view: I think it makes the most sense to use setuptools > inspired section names, and teach d2to1 about them, while also having pip > understand the existing PEP 345 defined section name. Roughly done; we're pending Jason's input and buy-in ATM on the pip PR :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From stuaxo2 at yahoo.com Wed Mar 25 13:51:40 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Wed, 25 Mar 2015 12:51:40 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages In-Reply-To: References: Message-ID: <622323935.1192595.1427287900674.JavaMail.yahoo@mail.yahoo.com> That looks much cleaner than my one, I'll give it a try.. does it work on python3, just found out my one does not. S++ On Wednesday, March 25, 2015 9:03 AM, Ionel Cristian Mărieș wrote: This seems to do the trick:

class EasyInstallWithPTH(easy_install):
    def run(self):
        easy_install.run(self)
        for path in glob(join(dirname(__file__), 'src', '*.pth')):
            dest = join(self.install_dir, basename(path))
            self.copy_file(path, dest)

Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote: Hi, This works from pypi - but not when installing from source with python setup.py install which stops this nifty thing from working: PYTHON_HUNTER="module='os.path'" python yourapp.py Sandbox monkeypatches os.file, so I think it catches you using copy. Maybe we need a common API for code that runs at startup? S++ On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș wrote: Hey, If you just want to copy an out-of-package file into site-packages you could just override the build command and copy it there (in the build dir). Here's an example: https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - it seems to work fine with wheels. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: Hi All This, and another memory-leak bug were triggered by the sandbox. Would it be possible to either add an API to exempt files, or just allow writing within site packages, even if just for .pth files ? I'm monkey patching around these for now https://github.com/stuaxo/vext/blob/master/setup.py#L16 S++ On Thursday, March 12, 2015 2:54 PM, Stuart Axon wrote: For closure: The solution was to make a Command class + implement finalize_options to fixup the paths in distribution.data_files. Source:

# https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
"""
Install a file into the root of sitepackages on windows as well as linux.

Under normal operation on win32 path_to_site_packages
gets changed to '' which installs inside the .egg instead.
"""

import os

from distutils import sysconfig
from distutils.command.install_data import install_data
from setuptools import setup

here = os.path.normpath(os.path.abspath(os.path.dirname(__file__)))

site_packages_path = sysconfig.get_python_lib()
site_packages_files = ['TEST_FILE.TXT']

class _install_data(install_data):
    def finalize_options(self):
        """
        On win32 the files here are changed to '' which
        ends up inside the .egg, change this back to the
        absolute path.
        """
        install_data.finalize_options(self)
        global site_packages_files
        for i, f in enumerate(list(self.distribution.data_files)):
            if not isinstance(f, basestring):
                folder, files = f
                if files == site_packages_files:
                    # Replace with absolute path version
                    self.distribution.data_files[i] = (site_packages_path, files)

setup(
    cmdclass={'install_data': _install_data},
    name='test_install',
    version='0.0.1',

    description='',
    long_description='',
    url='https://example.com',
    author='Stuart Axon',
    author_email='stuaxo2 at yahoo.com',
    license='PD',
    classifiers=[],
    keywords='',
    packages=[],

    install_requires=[],

    data_files=[
        (site_packages_path, site_packages_files),
    ],

)

On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: I had more of a dig into this, with a minimal setup.py: https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls install_data On win32 setup.py calls install_data which copies the file into the egg - even though I have given the absolute path to sitepackages C:\> python setup.py install .... running install_data creating build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... On Linux the file is copied to the right path: $ python setup.py install ..... installing package data to build/bdist.linux-x86_64/egg running install_data copying TEST_FILE.TXT -> /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages .... *something* is normalising my absolute path to site packages into just '' - it's possible to see by looking at self.data_files in the 'run' function in: distutils/command/install_data.py - on windows it the first part has been changed to '' unlike on linux where it's the absolute path I set... still not sure where it's happening though. *This all took a while, as rebuilt VM and verified on 2.7.8 and 2.7.9.. S++ On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > I had a further look - and on windows the file ends up inside the .egg file, on linux it ends up inside the site packages as intended. At a guess it seems like there might be a bug in the path handling on windows. .. 
I wonder if it's something like this http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python which seems an easy way to get an off-by-one error in a path ? _______________________________________________ Distutils-SIG maillist? -? Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From drsalists at gmail.com Wed Mar 25 20:50:49 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Wed, 25 Mar 2015 12:50:49 -0700 Subject: [Distutils] "pip wheel numpy" giving undefined reference errors Message-ID: Hi again. When I try to build a numpy wheel (using openblas statically, and libpython2.7 statically as well), I get a lot of errors like: /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: undefined reference to `PyExc_AttributeError' /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: undefined reference to `PyErr_SetString' /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: undefined reference to `PyExc_ImportError' My first thought was "OK, I'll put -lpython2.7 in $LDFLAGS and export it", but that didn't appear to help. So I stuck -lpython2.7 in $CC (again exporting), but that didn't help either. I can see my settings being used, but they're prefixed instead of suffixed, which is perhaps causing them to be ignored due to the order sensitivity of the linker. What do I need to do to /append/ linker options when using "pip wheel"? This is on Redhat Enterprise Linux 6.5 with gcc 4.7 and GNU ld 2.20.51.0.2-5.36.el6 . And Python 2.7, unfortunately. Thanks! From donald at stufft.io Thu Mar 26 12:46:02 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 26 Mar 2015 07:46:02 -0400 Subject: [Distutils] Translate Warehouse (PyPI) into languages other than English? 
Message-ID: I have an open issue [1] on Warehouse about whether it makes sense to attempt to translate Warehouse (and thus PyPI once Warehouse is deployed there) into languages other than English. I'm not going to rehash everything that's been said on the issue, but would instead like to direct anyone who has a strong opinion one way or the other to please go weigh in on that issue. In particular I would love it if people who are not native English speakers could weigh in about if they think it would be helpful to people who don't speak English natively. I personally feel utterly unqualified to answer the question of usefulness since I am a native English speaker and I only speak English. [1] https://github.com/pypa/warehouse/issues/402 --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From wes.turner at gmail.com Tue Mar 24 00:46:04 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 23 Mar 2015 18:46:04 -0500 Subject: [Distutils] force static linking In-Reply-To: References: Message-ID: For outsourcing upstream packaging of (cross-platform) dependencies, I'll second +1 conda. Pants Build bundles everything into a static executable (PEX); and works with pip requirements.txt files. On Mar 23, 2015 5:30 PM, "Nick Coghlan" wrote: > > On 24 Mar 2015 05:16, "Chris Barker" wrote: > > > > On Mon, Mar 23, 2015 at 11:45 AM, Dan Stromberg > wrote: > >> > >> Is this the general perspective on static linking of python module > >> dependencies? That if your systems are the same, you don't need to? > > > > > > That's general -- nothing specific to python here. 
> > > > There _may_ be a difference in that you might be more likely to want to > distribute a binary python module, and not be sure of the level of > compatibility of the host system -- particularly if you use a non-standard > or not-common lib, or one you want built a particular way -- like ATLAS, > BLAS, etc... > > > >> I want static linking too, but if it's swimming upstream in a fast > >> river, I may reconsider. > > > > > > well it's a slow river... > > > > The easiest way is to make sure that you only have the static version of > the libs on the system you build on. You may be able to do that by passing > something like --disable-shared to configure, or you can just kludge it and > delete the shared libs after you build and install. > > The "not swimming upriver" approach is to look at conda for language > independent cross-platform user level package management :) > > It's purpose built to handle the complexities of the scientific Python > stack, while the default Python specific toolchain is more aimed at cases > where you can relatively easily rely on Linux system libraries. > > Cheers, > Nick. > > > > > -Chris > > > >> > >> Thanks. > >> > >> On Mon, Mar 23, 2015 at 11:45 AM, Bill Deegan < bill at baddogconsulting.com> wrote: > >> > Gordon, > >> > > >> > If you are sure that your dev and production environments match, then you > >> > should have the same shared libraries on both, and no need for static linkage? > >> > > >> > -Bill > >> > > >> > On Mon, Mar 23, 2015 at 11:36 AM, Dan Stromberg > wrote: > >> >> > >> >> On Thu, Sep 11, 2014 at 5:28 AM, gordon wrote: > >> >> > Hello, > >> >> > > >> >> > I am attempting to build statically linked distributions. > >> >> > > >> >> > I am using docker containers to ensure the deployment environment > >> >> > matches > >> >> > the build environment so there is no compatibility concern. 
> >> >> > Is there any way to force static linking so that wheels can be > installed > >> >> > into a virtual env without requiring specific packages on the host? > >> >> > >> >> Maybe pass -static in $LDFLAGS? Just a wild guess really. > >> >> _______________________________________________ > >> >> Distutils-SIG maillist - Distutils-SIG at python.org > >> >> https://mail.python.org/mailman/listinfo/distutils-sig > >> > > >> > > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > > > > > -- > > > > Christopher Barker, Ph.D. > > Oceanographer > > > > Emergency Response Division > > NOAA/NOS/OR&R (206) 526-6959 voice > > 7600 Sand Point Way NE (206) 526-6329 fax > > Seattle, WA 98115 (206) 526-6317 main reception > > > > Chris.Barker at noaa.gov > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rps at haystack.mit.edu Thu Mar 26 16:38:23 2015 From: rps at haystack.mit.edu (robert schaefer) Date: Thu, 26 Mar 2015 11:38:23 -0400 Subject: [Distutils] setuptools Message-ID: <53B4429A-575A-48F6-9CE6-046B56A2A844@haystack.mit.edu> Halloooo distutils! When I clicked on the link and downloaded the setuptools .gz file, the name of the file had a "dist/" prepended to it. The backslash in the name causes all sorts of problems and I had to rename the file before expanding, then again after expanding because the backslash came back. What I want to know is, Is this a bug or a feature? And if it is a feature, please explain what you are trying to do? thanks, bob s. 
----------------------------------- robert schaefer Atmospheric Sciences Group MIT Haystack Observatory Westford, MA 01886 email: rps at haystack.mit.edu voice: 781-981-5767 www: http://www.haystack.mit.edu -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From drsalists at gmail.com Thu Mar 26 20:44:49 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Thu, 26 Mar 2015 12:44:49 -0700 Subject: [Distutils] "pip wheel numpy" giving undefined reference errors In-Reply-To: References: Message-ID: On Wed, Mar 25, 2015 at 12:50 PM, Dan Stromberg wrote: > Hi again. > > When I try to build a numpy wheel (using openblas statically, and > libpython2.7 statically as well), I get a lot of errors like: > > /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: > undefined reference to `PyExc_AttributeError' > /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: > undefined reference to `PyErr_SetString' > /tmp/pip-build-ZlPgN7/numpy/build/src.linux-x86_64-2.7/numpy/core/include/numpy/__multiarray_api.h:1642: > undefined reference to `PyExc_ImportError' > > My first thought was "OK, I'll put -lpython2.7 in $LDFLAGS and export > it", but that didn't appear to help. I got past this by creating drs-gcc and drs-gfortran shell wrappers, that feed $LDFLAGS "$@" $LDFLAGS as command line arguments to gcc and gfortran respectively. I had to define $CC and $FC to make the wrappers get used - before invoking pip wheel. I also defined $F90 - not sure if one or both of those fortran variables helped. I've done a lot of software builds, but I've never had to resort to that kind of trickery before. Perhaps that's because most of my builds were using dynamic libraries rather than static. 
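[Editor's note] Dan's wrapper trick, interposing a fake compiler so that $LDFLAGS ends up appended to the command line rather than prefixed, can be sketched in Python. This is an illustration of the idea rather than his actual drs-gcc script; the helper names are made up, and the point is simply that with a traditional single-pass linker, static libraries such as -lpython2.7 must come after the objects that reference them:

```python
#!/usr/bin/env python
import os
import shlex
import sys


def build_argv(compiler, args, ldflags):
    """Build the real compiler command line with LDFLAGS appended
    (not prefixed), so libraries follow the objects that need them."""
    return [compiler] + list(args) + shlex.split(ldflags)


def main():
    # Replace this process with the real compiler invocation.
    argv = build_argv("gcc", sys.argv[1:], os.environ.get("LDFLAGS", ""))
    os.execvp(argv[0], argv)
```

To use something like this, one would save it as an executable script, add `main()` under an `if __name__ == "__main__":` guard, and export CC (and an analogous gfortran wrapper for FC) before running pip wheel, as described in the mail above.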
From robertc at robertcollins.net Thu Mar 26 23:48:42 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 27 Mar 2015 11:48:42 +1300 Subject: [Distutils] setuptools In-Reply-To: <53B4429A-575A-48F6-9CE6-046B56A2A844@haystack.mit.edu> References: <53B4429A-575A-48F6-9CE6-046B56A2A844@haystack.mit.edu> Message-ID: On 27 March 2015 at 04:38, robert schaefer wrote: > Halloooo distutils! > > When I clicked on the link and downloaded the setuptools .gz file, the name of the file had a "dist/" prepended to it. > The backslash in the name causes all sorts of problems and I had to rename the file before expanding, then again after expanding because the backslash came back. > > What I want to know is, Is this a bug or a feature? And if it is a feature, please explain what you are trying to do? I'm sorry, I have no idea what you're referring to :) Could you perhaps attach a transcript of your shell session, or screenshots or something? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From contact at ionelmc.ro Fri Mar 27 10:56:38 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 27 Mar 2015 11:56:38 +0200 Subject: [Distutils] Installing a file into sitepackages In-Reply-To: <622323935.1192595.1427287900674.JavaMail.yahoo@mail.yahoo.com> References: <622323935.1192595.1427287900674.JavaMail.yahoo@mail.yahoo.com> Message-ID: Also, a similar command subclass can be written for `develop`. So far I got 3 subclasses, for: build, easy_install and develop. Did I miss something important? Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Wed, Mar 25, 2015 at 2:51 PM, Stuart Axon wrote: > That looks much cleaner than my one, I'll give it a try.. does it work on > python3, just found out my one does not. > > S++ > > > > On Wednesday, March 25, 2015 9:03 AM, Ionel Cristian Mărieș 
< > contact at ionelmc.ro> wrote: > > > > This seems to do the trick: > > class EasyInstallWithPTH(easy_install): > def run(self): > easy_install.run(self) > for path in glob(join(dirname(__file__), 'src', '*.pth')): > dest = join(self.install_dir, basename(path)) > self.copy_file(path, dest) > > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > > On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote: > > Hi, > This works from pypi - but not when installing from source with python > setup.py install which stops this nifty thing from working: > > PYTHON_HUNTER="module='os.path'" python yourapp.py > > > Sandbox monkeypatches os.file, so I think it catches you using copy. > Maybe we need a common API for code that runs at startup? > > S++ > > > > On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș < > contact at ionelmc.ro> wrote: > > > > Hey, > > If you just want to copy an out-of-package file into site-packages you could > just override the build command and copy it there (in the build dir). > Here's an example: > https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - > it seems to work fine with wheels. > > > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > > On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote: > > Hi All > This, and another memory-leak bug were triggered by the sandbox. > Would it be possible to either add an API to exempt files, or just allow > writing within site packages, even if just for .pth files ? 
> > Under normal operation on win32 path_to_site_packages > gets changed to '' which installs inside the .egg instead. > """ > > import os > > from distutils import sysconfig > from distutils.command.install_data import install_data > from setuptools import setup > > here = os.path.normpath(os.path.abspath(os.path.dirname(__file__))) > > site_packages_path = sysconfig.get_python_lib() > site_packages_files = ['TEST_FILE.TXT'] > > class _install_data(install_data): > def finalize_options(self): > """ > On win32 the files here are changed to '' which > ends up inside the .egg, change this back to the > absolute path. > """ > install_data.finalize_options(self) > global site_packages_files > for i, f in enumerate(list(self.distribution.data_files)): > if not isinstance(f, basestring): > folder, files = f > if files == site_packages_files: > # Replace with absolute path version > self.distribution.data_files[i] = (site_packages_path, > files) > > setup( > cmdclass={'install_data': _install_data}, > name='test_install', > version='0.0.1', > > description='', > long_description='', > url='https://example.com', > author='Stuart Axon', > author_email='stuaxo2 at yahoo.com', > license='PD', > classifiers=[], > keywords='', > packages=[], > > install_requires=[], > > data_files=[ > (site_packages_path, site_packages_files), > ], > > ) > > > > On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote: > > I had more of a dig into this, with a minimal setup.py: > https://gist.github.com/stuaxo/c76a042cb7aa6e77285b setup calls > install_data On win32 setup.py calls install_data which copies the file > into the egg - even though I have given the absolute path to sitepackages > C:\> python setup.py install .... running install_data creating > build\bdist.win32\egg copying TEST_FILE.TXT -> build\bdist.win32\egg\ .... > On Linux the file is copied to the right path: $ python setup.py install > ..... 
installing package data to build/bdist.linux-x86_64/egg running > install_data copying TEST_FILE.TXT -> > /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages .... > *something* is normalising my absolute path to site packages into just '' - > it's possible to see by looking at self.data_files in the 'run' function > in: distutils/command/install_data.py - on windows the first part has > been changed to '' unlike on linux where it's the absolute path I set... > still not sure where it's happening though. *This all took a while, as > rebuilt VM and verified on 2.7.8 and 2.7.9.. S++ > > On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote: > > I had a further look - and on windows the file ends up inside the .egg > file, on linux it ends up inside the site packages as intended. At a guess > it seems like there might be a bug in the path handling on windows. .. I > wonder if it's something like this > http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python > which seems an easy way to get an off-by-one error in a path ? > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Mar 27 19:16:09 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 28 Mar 2015 07:16:09 +1300 Subject: [Distutils] setuptools In-Reply-To: References: <53B4429A-575A-48F6-9CE6-046B56A2A844@haystack.mit.edu> Message-ID: On 28 March 2015 at 00:56, robert schaefer wrote: > > On the web page at the URL https://pypi.python.org/pypi/setuptools > search for "setuptools-14.3.1.tar.gz (md5)" > press that link. > The file that is returned to my ~/Downloads folder is > "dist\setuptools-14.3.1.tar.gz" not "setuptools-14.3.1.tar.gz" > My version of tar takes issue with the "\" in the filename. 
> Perhaps the problem is with tar? > > My mac automatically unzips. Here's a copy of my shell session: > >> tar -xvf dist\setuptools-14.3.1.tar > tar: Error opening archive: Failed to open 'distsetuptools-14.3.1.tar' Interesting! - what browser are you using? The response headers I see using Chromium for https://pypi.python.org/packages/source/s/setuptools/setuptools-14.3.1.tar.gz are below. The important bits to note are: Content-Type: application/octet-stream which signals that the content shouldn't be decompressed or otherwise altered and the lack of a Content-Disposition header (that might influence the filename). Could you grab the headers you are receiving with whatever browser you're using, and perhaps try a different browser? Accept-Ranges: bytes Age: 597596 Cache-Control: max-age=31557600, public Connection: Keep-Alive Content-Length: 627737 Content-Type: application/octet-stream Date: Fri, 27 Mar 2015 18:09:06 GMT ETag: "cdba2741b16acaa3ed06c2252623f6b9" Keep-Alive: timeout=10, max=50 Public-Key-Pins: max-age=600; includeSubDomains; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU="; pin-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="TUDnr0MEoJ3of7+YliBMBVFB4/gJsv5zO7IxD9+YoWI="; pin-sha256="x4QzPSC810K5/cMjb05Qm4k3Bw5zBn4lTdO/nEW/Td4="; Server: nginx/1.6.2 Strict-Transport-Security: max-age=31536000; includeSubDomains Via: 1.1 varnish Via: 1.1 varnish X-Cache: HIT, HIT X-Cache-Hits: 28, 5 X-Clacks-Overhead: GNU Terry Pratchett X-PyPI-Last-Serial: 1470656 X-Served-By: cache-iad2136-IAD, cache-akl6421-AKL X-Timer: S1427479746.401369,VS0,VE0 -- Robert Collins Distinguished Technologist HP Converged Cloud From drsalists at gmail.com Sat Mar 28 00:11:51 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Fri, 27 Mar 2015 16:11:51 -0700 Subject: [Distutils] Can I upload a binary wheel to a local devpi on Linux 
using pip? Do I need setup.py? Message-ID: Is it possible to use "pip wheel" to upload a binary wheel on Linux, to a local devpi server? Or do I need to get to a setup.py and do an upload from there? It seems a shame to build the wheel without need of a setup.py (I believe setup.py is only used behind the scenes), only to need a setup.py to upload the result. This makes me wonder about devpi and binary wheels on Linux: http://pythonwheels.com C extensions PyPI currently only allows uploading platform-specific wheels for Windows and Mac OS X. It is still useful to create wheels for these platforms, as it avoids the need for your users to compile the package when installing. I'm doing "pip -v -v -v wheel numpy" (for example), and I have a pip.conf and .pypirc (both pointing at our local devpi). Thanks! From donald at stufft.io Sat Mar 28 00:15:38 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 27 Mar 2015 19:15:38 -0400 Subject: [Distutils] Can I upload a binary wheel to a local devpi on Linux using pip? Do I need setup.py? In-Reply-To: References: Message-ID: <319DE069-7514-467C-A299-8A22BC9EB66F@stufft.io> Use twine to upload > On Mar 27, 2015, at 7:11 PM, Dan Stromberg wrote: > > Is it possible to use "pip wheel" to upload a binary wheel on Linux, > to a local devpi server? Or do I need to get to a setup.py and do an > upload from there? It seems a shame to build the wheel without need of > a setup.py (I believe setup.py is only used behind the scenes), only > to need a setup.py to upload the result. > > This makes me wonder about devpi and binary wheels on Linux: > http://pythonwheels.com > C extensions > PyPI currently only allows uploading platform-specific wheels for > Windows and Mac OS X. It is still useful to create wheels for these > platforms, as it avoids the need for your users to compile the package > when installing. 
> > I'm doing "pip -v -v -v wheel numpy" (for example), and I have a > pip.conf and .pypirc (both pointing at our local devpi). > > Thanks! > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Mon Mar 30 01:13:09 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 30 Mar 2015 09:13:09 +1000 Subject: [Distutils] Can I upload a binary wheel to a local devpi on Linux using pip? Do I need setup.py? In-Reply-To: <319DE069-7514-467C-A299-8A22BC9EB66F@stufft.io> References: <319DE069-7514-467C-A299-8A22BC9EB66F@stufft.io> Message-ID: On 28 Mar 2015 09:16, "Donald Stufft" wrote: > > Use twine to upload As Donald notes, twine can handle the upload, and devpi (unlike the public PyPI) will happily host Linux wheel files. The main trap to watch out for when using that approach in the current system is that the wheel filenames won't say which Linux distro they were built against, so you may get cryptic runtime errors if they escape the intended environments. Cheers, Nick. > > > > On Mar 27, 2015, at 7:11 PM, Dan Stromberg wrote: > > > > Is it possible to use "pip wheel" to upload a binary wheel on Linux, > > to a local devpi server? Or do I need to get to a setup.py and do an > > upload from there? It seems a shame to build the wheel without need of > > a setup.py (I believe setup.py is only used behind the scenes), only > > to need a setup.py to upload the result. > > > > This makes me wonder about devpi and binary wheels on Linux: > > http://pythonwheels.com > > C extensions > > PyPI currently only allows uploading platform-specific wheels for > > Windows and Mac OS X. It is still useful to create wheels for these > > platforms, as it avoids the need for your users to compile the package > > when installing. 
> > > > I'm doing "pip -v -v -v wheel numpy" (for example), and I have a > > pip.conf and .pypirc (both pointing at our local devpi). > > > > Thanks! > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ubernostrum at gmail.com Mon Mar 30 06:49:51 2015 From: ubernostrum at gmail.com (James Bennett) Date: Sun, 29 Mar 2015 23:49:51 -0500 Subject: [Distutils] Versioned trove classifiers for Django Message-ID: Following up on some IRC discussion with other folks: There is precedent (Plone) for PyPI trove classifiers corresponding to particular versions of a framework. So I'd like to get feedback on the idea of expanding that, particularly in the case of Django. The rationale here is that the ecosystem of Django-related packages is quite large, but -- as I know all too well from a project I'm working on literally at this moment -- it can be difficult to ensure that all of one's dependencies are compatible with the version of Django one happens to be using. Adding trove classifier support at the level of individual versions of Django would, I think, greatly simplify this: tools could easily analyze which packages are compatible with an end user's chosen version, there'd be far less manual guesswork, etc., and the rate of creation of new classifiers would be relatively low (we tend to have one X.Y release/year or thereabouts, and that's the level of granularity needed). Assuming there's consensus around the idea of doing this, what would be the correct procedure for getting such classifiers set up and maintained? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pydanny at gmail.com Mon Mar 30 06:57:33 2015 From: pydanny at gmail.com (Daniel Greenfeld) Date: Sun, 29 Mar 2015 21:57:33 -0700 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: I completely support James. This trove classifier is something that could be pretty easily plugged right into Django Packages and the rest of the Django ecosystem. --Daniel Greenfeld On Sun, Mar 29, 2015 at 9:49 PM, James Bennett wrote: > Following up on some IRC discussion with other folks: > > There is precedent (Plone) for PyPI trove classifiers corresponding to > particular versions of a framework. So I'd like to get feedback on the idea > of expanding that, particularly in the case of Django. > > The rationale here is that the ecosystem of Django-related packages is quite > large, but -- as I know all too well from a project I'm working on literally > at this moment -- it can be difficult to ensure that all of one's > dependencies are compatible with the version of Django one happens to be > using. > > Adding trove classifier support at the level of individual versions of > Django would, I think, greatly simplify this: tools could easily analyze > which packages are compatible with an end user's chosen version, there'd be > far less manual guesswork, etc., and the rate of creation of new classifiers > would be relatively low (we tend to have one X.Y release/year or > thereabouts, and that's the level of granularity needed). > > Assuming there's consensus around the idea of doing this, what would be the > correct procedure for getting such classifiers set up and maintained? 
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- 'Knowledge is Power' Daniel Greenfeld Principal at Cartwheel Web; co-author of Two Scoops of Django twoscoopspress.org | pydanny.com From richard at python.org Mon Mar 30 06:58:41 2015 From: richard at python.org (Richard Jones) Date: Mon, 30 Mar 2015 04:58:41 +0000 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Hi James, I tend to just require that there already exists a number of packages that would use the classifier. Sounds like that's the case? Richard On Mon, 30 Mar 2015 at 15:50 James Bennett wrote: > Following up on some IRC discussion with other folks: > > There is precedent (Plone) for PyPI trove classifiers corresponding to > particular versions of a framework. So I'd like to get feedback on the idea > of expanding that, particularly in the case of Django. > > The rationale here is that the ecosystem of Django-related packages is > quite large, but -- as I know all too well from a project I'm working on > literally at this moment -- it can be difficult to ensure that all of one's > dependencies are compatible with the version of Django one happens to be > using. > > Adding trove classifier support at the level of individual versions of > Django would, I think, greatly simplify this: tools could easily analyze > which packages are compatible with an end user's chosen version, there'd be > far less manual guesswork, etc., and the rate of creation of new > classifiers would be relatively low (we tend to have one X.Y release/year > or thereabouts, and that's the level of granularity needed). > > Assuming there's consensus around the idea of doing this, what would be > the correct procedure for getting such classifiers set up and maintained? 
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ubernostrum at gmail.com Mon Mar 30 07:01:16 2015 From: ubernostrum at gmail.com (James Bennett) Date: Mon, 30 Mar 2015 00:01:16 -0500 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: On Sun, Mar 29, 2015 at 11:58 PM, Richard Jones wrote: > > I tend to just require that there already exists a number of packages that > would use the classifier. Sounds like that's the case? > I don't have a count handy, but yes, I suspect the number of packages which currently use the "Framework :: Django" classifier is significant, and that with documentation we could easily get most of them to start using a versioned classifier :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pydanny at gmail.com Mon Mar 30 07:03:33 2015 From: pydanny at gmail.com (Daniel Greenfeld) Date: Sun, 29 Mar 2015 22:03:33 -0700 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Richard, If you look at https://www.djangopackages.com you'll see that 631 packages are Python 3 compatible out of 2714 listed. This number has been growing steadily, as package maintainers have learned that they can get listed there by utilizing the trove classifier system. It wasn't hard for us to get that started. Since compatibility for packages across Django versions is a big deal, having this on PyPI and downstream tools like Django Packages will be an immense help to the Django ecosystem. --Danny On Sun, Mar 29, 2015 at 9:58 PM, Richard Jones wrote: > Hi James, > > I tend to just require that there already exists a number of packages that > would use the classifier. Sounds like that's the case? 
> > > Richard > > On Mon, 30 Mar 2015 at 15:50 James Bennett wrote: >> >> Following up on some IRC discussion with other folks: >> >> There is precedent (Plone) for PyPI trove classifiers corresponding to >> particular versions of a framework. So I'd like to get feedback on the idea >> of expanding that, particularly in the case of Django. >> >> The rationale here is that the ecosystem of Django-related packages is >> quite large, but -- as I know all too well from a project I'm working on >> literally at this moment -- it can be difficult to ensure that all of one's >> dependencies are compatible with the version of Django one happens to be >> using. >> >> Adding trove classifier support at the level of individual versions of >> Django would, I think, greatly simplify this: tools could easily analyze >> which packages are compatible with an end user's chosen version, there'd be >> far less manual guesswork, etc., and the rate of creation of new classifiers >> would be relatively low (we tend to have one X.Y release/year or >> thereabouts, and that's the level of granularity needed). >> >> Assuming there's consensus around the idea of doing this, what would be >> the correct procedure for getting such classifiers set up and maintained? 
>> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- 'Knowledge is Power' Daniel Greenfeld Principal at Cartwheel Web; co-author of Two Scoops of Django twoscoopspress.org | pydanny.com From richard at python.org Mon Mar 30 07:04:42 2015 From: richard at python.org (Richard Jones) Date: Mon, 30 Mar 2015 05:04:42 +0000 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: OK, so what's the set of versions you'd like to see? On Mon, 30 Mar 2015 at 16:03 Daniel Greenfeld wrote: > Richard, > > If you look at https://www.djangopackages.com you'll see that 631 > packages are Python 3 compatible out of 2714 listed. This number has > been growing steadily, as package maintainers have learned that they > can get listed there by utilizing the trove classifier system. It > wasn't hard for us to get that started. > > Since compatibility for packages across Django version is a big deal, > having this on PyPI and downstream tools like Django Packages will be > an immense help to the Django ecosystem. > > --Danny > > On Sun, Mar 29, 2015 at 9:58 PM, Richard Jones wrote: > > Hi James, > > > > I tend to just require that there already exists a number of packages > that > > would use the classifier. Sounds like that's the case? > > > > > > Richard > > > > On Mon, 30 Mar 2015 at 15:50 James Bennett > wrote: > >> > >> Following up on some IRC discussion with other folks: > >> > >> There is precedent (Plone) for PyPI trove classifiers corresponding to > >> particular versions of a framework. So I'd like to get feedback on the > idea > >> of expanding that, particularly in the case of Django. 
> >> > >> The rationale here is that the ecosystem of Django-related packages is > >> quite large, but -- as I know all too well from a project I'm working on > >> literally at this moment -- it can be difficult to ensure that all of > one's > >> dependencies are compatible with the version of Django one happens to be > >> using. > >> > >> Adding trove classifier support at the level of individual versions of > >> Django would, I think, greatly simplify this: tools could easily analyze > >> which packages are compatible with an end user's chosen version, > there'd be > >> far less manual guesswork, etc., and the rate of creation of new > classifiers > >> would be relatively low (we tend to have one X.Y release/year or > >> thereabouts, and that's the level of granularity needed). > >> > >> Assuming there's consensus around the idea of doing this, what would be > >> the correct procedure for getting such classifiers set up and > maintained? > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > -- > 'Knowledge is Power' > Daniel Greenfeld > Principal at Cartwheel Web; co-author of Two Scoops of Django > twoscoopspress.org | pydanny.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ubernostrum at gmail.com Mon Mar 30 07:06:08 2015 From: ubernostrum at gmail.com (James Bennett) Date: Mon, 30 Mar 2015 00:06:08 -0500 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: On Mon, Mar 30, 2015 at 12:04 AM, Richard Jones wrote: > OK, so what's the set of versions you'd like to see? > > The current upstream-supported version set is Django 1.4, Django 1.6, Django 1.7. 
Soon 1.6 will drop out and be replaced by 1.8, but that's just because we're coming up on a release. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pydanny at gmail.com Mon Mar 30 07:07:35 2015 From: pydanny at gmail.com (Daniel Greenfeld) Date: Sun, 29 Mar 2015 22:07:35 -0700 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Since a lot of sites are still migrating from 1.5, shouldn't we add that version to the mix? 1.4 1.5 1.6 1.7 --Danny On Sun, Mar 29, 2015 at 10:06 PM, James Bennett wrote: > On Mon, Mar 30, 2015 at 12:04 AM, Richard Jones wrote: >> >> OK, so what's the set of versions you'd like to see? >> > > The current upstream-supported version set is Django 1.4, Django 1.6, Django > 1.7. Soon 1.6 will drop out and be replaced by 1.8, but that's just because > we're coming up on a release. -- 'Knowledge is Power' Daniel Greenfeld Principal at Cartwheel Web; co-author of Two Scoops of Django twoscoopspress.org | pydanny.com From ubernostrum at gmail.com Mon Mar 30 07:09:29 2015 From: ubernostrum at gmail.com (James Bennett) Date: Mon, 30 Mar 2015 00:09:29 -0500 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: I would be OK with including 1.5 just for completeness' sake. -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Mon Mar 30 07:13:23 2015 From: richard at python.org (Richard Jones) Date: Mon, 30 Mar 2015 05:13:23 +0000 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Added! Framework :: Django :: 1.4 Framework :: Django :: 1.5 Framework :: Django :: 1.6 Framework :: Django :: 1.7 Framework :: Django :: 1.8 On Mon, 30 Mar 2015 at 16:09 James Bennett wrote: > I would be OK with including 1.5 just for completeness' sake. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Mon Mar 30 16:19:14 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 10:19:14 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) Message-ID: I was wondering when someone would attempt this. Simple .ini file and some conventions, and you have a pip-installable wheel. No bdist_wheel, setup.py, MANIFEST or setuptools involved. I like it. Still alpha/beta. https://github.com/takluyver/flit From donald at stufft.io Mon Mar 30 16:32:56 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 10:32:56 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: Message-ID: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> > On Mar 30, 2015, at 10:19 AM, Daniel Holth wrote: > > I was wondering when someone would attempt this. Simple .ini file and > some conventions, and you have a pip-installable wheel. No > bdist_wheel, setup.py, MANIFEST or setuptools involved. I like it. > Still alpha/beta. > > https://github.com/takluyver/flit > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Wheels without sdists are likely a generally bad idea, downstream redistributors are not going to like them. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Mon Mar 30 16:46:14 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 10:46:14 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: The approach doesn't exclude the possibility of a source distribution, it's only a few weeks old. I would suggest that if you have to choose between having a setup.py, not having a setup.py, and not having the package on pypi at all because the packager can't figure out setup.py, prefer the second option. On Mon, Mar 30, 2015 at 10:32 AM, Donald Stufft wrote: > >> On Mar 30, 2015, at 10:19 AM, Daniel Holth wrote: >> >> I was wondering when someone would attempt this. Simple .ini file and >> some conventions, and you have a pip-installable wheel. No >> bdist_wheel, setup.py, MANIFEST or setuptools involved. I like it. >> Still alpha/beta. >> >> https://github.com/takluyver/flit >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > Wheels without sdists are likely a generally bad idea, downstream > redistributors are not going to like them. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From graffatcolmingov at gmail.com Mon Mar 30 16:50:22 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 09:50:22 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 9:46 AM, Daniel Holth wrote: > The approach doesn't exclude the possibility of a source distribution, > it's only a few weeks old. 
I would suggest that if you have to choose > between having a setup.py, not having a setup.py, and not having the > package on pypi at all because the packager can't figure out setup.py, > prefer the second option. > Is figuring out setup.py still a thing? Between cookiecutter laying out your project with a setup.py and the Packaging Guide, how many people still have trouble setting up a setup.py for a package? It almost seems like this is a solution wanting for a problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xav.fernandez at gmail.com Mon Mar 30 16:58:55 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Mon, 30 Mar 2015 16:58:55 +0200 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: > > Wheels without sdists are likely a generally bad idea, downstream > redistributors are not going to like them. > Why do you think that ? Wheels seem way simpler/saner than all the possible things setup.py can do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Mar 30 16:59:40 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 10:59:40 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: Yes, setup.py should die. Flit is one example, and you can understand it not by copy/pasting, but by spending half an hour reading its complete source code. On Mon, Mar 30, 2015 at 10:50 AM, Ian Cordasco wrote: > On Mon, Mar 30, 2015 at 9:46 AM, Daniel Holth wrote: >> >> The approach doesn't exclude the possibility of a source distribution, >> it's only a few weeks old. 
I would suggest that if you have to choose >> between having a setup.py, not having a setup.py, and not having the >> package on pypi at all because the packager can't figure out setup.py, >> prefer the second option. > > Is figuring out setup.py still a thing? Between cookiecutter laying out your > project with a setup.py and the Packaging Guide, how many people still have > trouble setting up a setup.py for a package? It almost seems like this is a > solution wanting for a problem. From donald at stufft.io Mon Mar 30 17:04:53 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 11:04:53 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: <60A915D8-FC52-4119-9607-DB43E95A4701@stufft.io> > On Mar 30, 2015, at 10:58 AM, Xavier Fernandez wrote: > > Wheels without sdists are likely a generally bad idea, downstream > redistributors are not going to like them. > > Why do you think that ? Wheels seem way simpler/saner than all the possible things setup.py can do. Wheels are simpler than a setup.py, because wheels are a binary format and Wheels don't need to handle things like build software because it's already been built. However downstream redistributors will not accept a Wheel as the source for a package because it is a binary format. It doesn't matter if you can unzip it and there is pure python there, it is still a binary format. So if you release only Wheels you're essentially saying that downstream redistributors will never package your software (or any software that depends on it). A few issues that Wheel only has: * If your project has a C extension, downstream redistributors need access to the source code not the compiled code (as does anyone wanting to use the project on a platform you didn't release for). 
* If your project has tests that don't get installed, they should get shipped as part of the sdist so that > downstream can run them as part of their packaging activities to ensure they didn't break anything > in the test suite. However if you're installing from Wheel you can't do that. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From graffatcolmingov at gmail.com Mon Mar 30 17:05:45 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 10:05:45 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 9:59 AM, Daniel Holth wrote: > Yes, setup.py should die. Flit is one example, and you can understand > it not by copy/pasting, but by spending half an hour reading its > complete source code. > In other words, no one should read the docs because that's a waste of time? Because a lot of time has been poured into the packaging docs and if they're not sufficient, then instead of improving them, people should write undocumented tools that force people to read the source? I'm not sure how that's better than what we already have. Further, how does flit deal with C-extensions? How does flit deal with C-extensions distributed on Linux? How does it generate the appropriate wheels for that? -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From donald at stufft.io Mon Mar 30 17:06:27 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 11:06:27 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: <7D985EEB-E445-4C1C-B254-65D0B91A6596@stufft.io> > On Mar 30, 2015, at 10:59 AM, Daniel Holth wrote: > > Yes, setup.py should die. Flit is one example, and you can understand > it not by copy/pasting, but by spending half an hour reading its > complete source code. I don't have a problem with flit (although I'm not sure it's easier, it appears it took the setup.py keyword args and turned them into ini file directives). I do have a problem with any solution which doesn't include sdist support. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From xav.fernandez at gmail.com Mon Mar 30 17:17:08 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Mon, 30 Mar 2015 17:17:08 +0200 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <60A915D8-FC52-4119-9607-DB43E95A4701@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <60A915D8-FC52-4119-9607-DB43E95A4701@stufft.io> Message-ID: Fair enough, I didn't think of compiled wheels :) And having a clean way to run tests for the provided wheel is indeed another good point. On Mon, Mar 30, 2015 at 5:04 PM, Donald Stufft wrote: > > On Mar 30, 2015, at 10:58 AM, Xavier Fernandez > wrote: > > Wheels without sdists are likely a generally bad idea, downstream >> redistributors are not going to like them. >> > > Why do you think that ? Wheels seem way simpler/saner than all the > possible things setup.py can do.
> > > Wheels are simpler than a setup.py, because wheels are a binary format and > Wheels don't need to > handle things like building software because it's already been built. However > downstream redistributors > will not accept a Wheel as the source for a package because it is a binary > format. It doesn't matter > if you can unzip it and there is pure python there, it is still a binary > format. So if you release only Wheels > you're essentially saying that downstream redistributors will never > package your software (or any > software that depends on it). > > A few issues that Wheel only has: > > * If your project has a C extension, downstream redistributors need access > to the source code, not the > compiled code (as does anyone wanting to use the project on a platform you > didn't release for). > > * If your project has tests that don't get installed, they should get > shipped as part of the sdist so that > downstream can run them as part of their packaging activities to ensure > they didn't break anything > in the test suite. However if you're installing from Wheel you can't do > that. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From dholth at gmail.com Mon Mar 30 17:18:20 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 11:18:20 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: setup.py as implemented with distutils/setuptools has a bit of a Goldilocks problem: it's just right for a medium-complexity project but when your project is very simple it's too hard, and when you get to the point where you are trying to extend distutils by writing a 10,000 line extension, yikes. So it's fantastic to be able to just avoid distutils entirely if it isn't the right size for your project.
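Donald's description of flit ("it took the setup.py keyword args and turned them into ini file directives") can be pictured with a sketch of that static file. The section and key names below are approximations of the early flit.ini layout, not copied from any flit release, so treat them as illustrative only:

```ini
; Rough sketch of a flit-style static config file (flit.ini).
; Section and key names are approximate; flit's own docs are authoritative.
[metadata]
module = example
author = Jane Doe
author-email = jane@example.com
home-page = https://example.com/example

[scripts]
example = example:main
```

The point of contention in the thread is that everything here is data, not executable code, which is exactly what makes it easy for tools to read and hard to use for anything dynamic.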
This example, flit, does not invoke any code from distutils, setuptools or bdist_wheel to do its thing. A source release could just be an archive of the repository. On Mon, Mar 30, 2015 at 10:59 AM, Daniel Holth wrote: > Yes, setup.py should die. Flit is one example, and you can understand > it not by copy/pasting, but by spending half an hour reading its > complete source code. > > On Mon, Mar 30, 2015 at 10:50 AM, Ian Cordasco > wrote: >> On Mon, Mar 30, 2015 at 9:46 AM, Daniel Holth wrote: >>> >>> The approach doesn't exclude the possibility of a source distribution, >>> it's only a few weeks old. I would suggest that if you have to choose >>> between having a setup.py, not having a setup.py, and not having the >>> package on pypi at all because the packager can't figure out setup.py, >>> prefer the second option. >> >> >> Is figuring out setup.py still a thing? Between cookiecutter laying out your >> project with a setup.py and the Packaging Guide, how many people still have >> trouble setting up a setup.py for a package? It almost seems like this is a >> solution wanting for a problem. From graffatcolmingov at gmail.com Mon Mar 30 17:22:20 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 10:22:20 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 10:18 AM, Daniel Holth wrote: > setup.py as implemented with distutils/setuptools has a bit of a > Goldilocks problem: it's just right for a medium-complexity project > but when your project is very simple it's too hard, and when you get > to the point where you are trying to extend distutils by writing a > 10,000 line extension, yikes. So it's fantastic to be able to just > avoid distutils entirely if it isn't the right size for your project. > This example, flit, does not invoke any code from distutils, > setuptools or bdist_wheel to do its thing. 
> > A source release could just be an archive of the repository. > You still have not answered how reading flit's source code to get it working is better than using cookiecutter to generate a project, and using `python setup.py bdist_wheel sdist` (which is well-documented, and has tons of answered questions on sites like StackOverflow to help in case of a problem). -------------- next part -------------- An HTML attachment was scrubbed... URL:

From p.f.moore at gmail.com Mon Mar 30 17:23:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 30 Mar 2015 16:23:14 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <7D985EEB-E445-4C1C-B254-65D0B91A6596@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <7D985EEB-E445-4C1C-B254-65D0B91A6596@stufft.io> Message-ID: On 30 March 2015 at 16:06, Donald Stufft wrote: >> On Mar 30, 2015, at 10:59 AM, Daniel Holth wrote: >> >> Yes, setup.py should die. Flit is one example, and you can understand >> it not by copy/pasting, but by spending half an hour reading its >> complete source code. > > I don't have a problem with flit (although I'm not sure it's easier, it > appears it took the setup.py keyword args and turned them into ini file > directives). I do have a problem with any solution which doesn't include > sdist support. Personally, I could see a benefit to something that allowed me to write my setup.py as

    import fancytool
    fancytool.setup()

and got everything from a static file. But otherwise worked just like distutils/setuptools (i.e. fancytool.setup() calls setuptools.setup() behind the scenes). I'd be happy if it only handled pure-python packages, and if it didn't cover complicated things. I wouldn't be worried that I had to manually install fancytool before my setup.py worked (no setup_requires stuff here, thanks!) I don't see "don't use setuptools behind the scenes" as a necessary goal. I *do* see "make the UI simple for 90% of projects" as a worthwhile goal.
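Paul's hypothetical "fancytool" could work by parsing a static file into the same keyword arguments that setuptools.setup() already takes. A minimal sketch of that idea; the tool name, the file layout, and the section/key names are all invented for illustration:

```python
# Minimal sketch of a "fancytool": turn static ini metadata into the keyword
# arguments normally written out by hand in setup.py. The section/key layout
# here is invented for illustration, not taken from any real tool.
import configparser

SAMPLE = """
[metadata]
name = example
version = 1.0
description = A demo package

[options]
packages = example example.subpkg
"""

def load_setup_kwargs(text):
    """Parse static ini metadata into a dict of setup() keyword arguments."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    kwargs = dict(parser["metadata"])
    # Whitespace-separated values become Python lists.
    kwargs["packages"] = parser["options"]["packages"].split()
    return kwargs

kwargs = load_setup_kwargs(SAMPLE)
# A real setup.py built on this would then end with:
#     from setuptools import setup
#     setup(**load_setup_kwargs(open("setup.ini").read()))
```

The design point Paul is making is that the user-facing file stays declarative while the existing setuptools machinery keeps doing the actual work behind the scenes.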
I like the idea and goal behind flit. I'm not sure the implementation strategy is sufficiently compatible with existing practices, though. Paul

From graffatcolmingov at gmail.com Mon Mar 30 17:26:42 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 10:26:42 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 10:23 AM, Ionel Cristian Mărieș wrote: > > On Mon, Mar 30, 2015 at 6:05 PM, Ian Cordasco > wrote: > >> In other words, no one should read the docs because that's a waste of >> time? Because a lot of time has been poured into the packaging docs and if >> they're not sufficient, then instead of improving them, people should write >> undocumented tools that force people to read the source? I'm not sure how >> that's better than what we already have. > > > A waste of time? No. But it sure is poor investment of time for newbie > users. It's a futile struggle to document (or read about) all the features > of distutils and setuptools, not because it can't be documented, but > because there's too much ground to cover and PyPA's packaging.python.org > approach is to just give an overview of what's available and avoid giving > any real best practice recommendations as much as possible. There may be > good reasons for that but that's not a sensible approach to giving users a > "pitfall free" learning path. > So for new python programmers (or newbie users in general) reading the entire source of another package to understand it is a better experience? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dholth at gmail.com Mon Mar 30 17:31:16 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 11:31:16 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 11:22 AM, Ian Cordasco wrote: > On Mon, Mar 30, 2015 at 10:18 AM, Daniel Holth wrote: >> >> setup.py as implemented with distutils/setuptools has a bit of a >> Goldilocks problem: it's just right for a medium-complexity project >> but when your project is very simple it's too hard, and when you get >> to the point where you are trying to extend distutils by writing a >> 10,000 line extension, yikes. So it's fantastic to be able to just >> avoid distutils entirely if it isn't the right size for your project. >> This example, flit, does not invoke any code from distutils, >> setuptools or bdist_wheel to do its thing. >> >> A source release could just be an archive of the repository. > > > You still have not answered how reading flit's source code to get it working > is better than using cookiecutter to generate a project, and using `python > setup.py bdist_wheel sdist` (which is well-documented, and has tons of > answered questions on sites like StackOverflow to help in case of a > problem). It's simple. distutils, setuptools, and bdist_wheel are all terrible! They solve a 57,000 package strong backwards compatibility problem, and bdist_wheel specifically helps correct the tightly coupled build-install design flaw in distutils, but if you can avoid them great! 
You probably would not have to actually read the source code to flit in order to use it, but if it came down to that then it would not take very long :)

From contact at ionelmc.ro Mon Mar 30 17:23:42 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Mon, 30 Mar 2015 18:23:42 +0300 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 6:05 PM, Ian Cordasco wrote: > In other words, no one should read the docs because that's a waste of > time? Because a lot of time has been poured into the packaging docs and if > they're not sufficient, then instead of improving them, people should write > undocumented tools that force people to read the source? I'm not sure how > that's better than what we already have. A waste of time? No. But it sure is poor investment of time for newbie users. It's a futile struggle to document (or read about) all the features of distutils and setuptools, not because it can't be documented, but because there's too much ground to cover and PyPA's packaging.python.org approach is to just give an overview of what's available and avoid giving any real best practice recommendations as much as possible. There may be good reasons for that but that's not a sensible approach to giving users a "pitfall free" learning path. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL:

From p.f.moore at gmail.com Mon Mar 30 17:33:27 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 30 Mar 2015 16:33:27 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On 30 March 2015 at 16:26, Ian Cordasco wrote: >> A waste of time? No. But it sure is poor investment of time for newbie >> users.
It's a futile struggle to document (or read about) all the features >> of distutils and setuptools, not because it can't be documented, but because >> there's too much ground to cover and PyPA's packaging.python.org approach is >> to just give an overview of what's available and avoid giving any real best >> practice recommendations as much as possible. There may be good reasons for >> that but that's not a sensible approach to giving users a "pitfall free" >> learning path. > > So for new python programmers (or newbie users in general) reading the > entire source of another package to understand it is a better experience? Note that if you want a copy-and-paste level "quick start" setup.py, https://github.com/pypa/sampleproject is basically what you want. The packaging user guide uses it as its recommended starting point. If you don't like it, that probably means you already have enough knowledge to write your own version :-) Paul From donald at stufft.io Mon Mar 30 17:34:17 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 11:34:17 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: > On Mar 30, 2015, at 11:18 AM, Daniel Holth wrote: > > setup.py as implemented with distutils/setuptools has a bit of a > Goldilocks problem: it's just right for a medium-complexity project > but when your project is very simple it's too hard, and when you get > to the point where you are trying to extend distutils by writing a > 10,000 line extension, yikes. So it's fantastic to be able to just > avoid distutils entirely if it isn't the right size for your project. > This example, flit, does not invoke any code from distutils, > setuptools or bdist_wheel to do its thing. > > A source release could just be an archive of the repository. > An archive of the repository is not the same thing as a source release. 
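Donald's distinction is concrete: an sdist carries generated metadata, most visibly a PKG-INFO file, that a raw repository archive (e.g. the output of `git archive`) does not contain. A rough sketch of assembling such a file; the field names are real PKG-INFO fields, while the project values are invented:

```python
# Sketch: the kind of PKG-INFO text that `setup.py sdist` generates and a
# plain repository snapshot lacks. Field names are real; values are invented.
fields = [
    ("Metadata-Version", "1.1"),
    ("Name", "example"),
    ("Version", "1.0"),
    ("Summary", "A demo package"),
    ("Author", "Jane Doe"),
]

# PKG-INFO is a simple "Header: value" text file, one field per line.
pkg_info = "\n".join(f"{name}: {value}" for name, value in fields)
```

Because this file is written at sdist-build time, installers and downstream packagers can read the project's metadata without executing setup.py at all.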
Honestly, most of my setup.py's look basically the same as a flit ini file, just inside of python instead of ini. For example, I'm not sure how something like https://github.com/pypa/packaging/blob/master/setup.py or https://github.com/pypa/warehouse/blob/master/setup.py or https://github.com/pypa/twine/blob/master/setup.py or https://github.com/pypa/readme/blob/master/setup.py would be improved by moving it to an ini file instead of a python file. The current toolchain absolutely has some problems, but I'm not convinced that shuffling around the same data into different locations is the answer to those problems. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From dholth at gmail.com Mon Mar 30 17:36:10 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 11:36:10 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 11:34 AM, Donald Stufft wrote: > >> On Mar 30, 2015, at 11:18 AM, Daniel Holth wrote: >> >> setup.py as implemented with distutils/setuptools has a bit of a >> Goldilocks problem: it's just right for a medium-complexity project >> but when your project is very simple it's too hard, and when you get >> to the point where you are trying to extend distutils by writing a >> 10,000 line extension, yikes. So it's fantastic to be able to just >> avoid distutils entirely if it isn't the right size for your project. >> This example, flit, does not invoke any code from distutils, >> setuptools or bdist_wheel to do its thing. >> >> A source release could just be an archive of the repository. >> > > An archive of the repository is not the same thing as a source release.
> > Honestly, most of my setup.py's look basically the same as a flit ini > file, just inside of python instead of ini. For example, I'm not sure > how something like https://github.com/pypa/packaging/blob/master/setup.py > or https://github.com/pypa/warehouse/blob/master/setup.py or > https://github.com/pypa/twine/blob/master/setup.py or > https://github.com/pypa/readme/blob/master/setup.py would be improved by > moving it to an ini file instead of a python file. > > The current toolchain absolutely has some problems, but I'm not convinced > that shuffling around the same data into different locations is the answer > to those problems. The way to solve the problems is to allow anyone to try by providing good hooks that do not require extending distutils.

From donald at stufft.io Mon Mar 30 17:39:22 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 11:39:22 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: <6107F98C-3E30-4393-9068-66D2594CF8E6@stufft.io> > On Mar 30, 2015, at 11:36 AM, Daniel Holth wrote: > > On Mon, Mar 30, 2015 at 11:34 AM, Donald Stufft wrote: >> >>> On Mar 30, 2015, at 11:18 AM, Daniel Holth wrote: >>> >>> setup.py as implemented with distutils/setuptools has a bit of a >>> Goldilocks problem: it's just right for a medium-complexity project >>> but when your project is very simple it's too hard, and when you get >>> to the point where you are trying to extend distutils by writing a >>> 10,000 line extension, yikes. So it's fantastic to be able to just >>> avoid distutils entirely if it isn't the right size for your project. >>> This example, flit, does not invoke any code from distutils, >>> setuptools or bdist_wheel to do its thing. >>> >>> A source release could just be an archive of the repository. >>> >> >> An archive of the repository is not the same thing as a source release.
>> >> Honestly, most of my setup.py's look basically the same as a flit ini >> file, just inside of python instead of ini. For example, I'm not sure >> how something like https://github.com/pypa/packaging/blob/master/setup.py >> or https://github.com/pypa/warehouse/blob/master/setup.py or >> https://github.com/pypa/twine/blob/master/setup.py or >> https://github.com/pypa/readme/blob/master/setup.py would be improved by >> moving it to an ini file instead of a python file. >> >> The current toolchain absolutely has some problems, but I'm not convinced >> that shuffling around the same data into different locations is the answer >> to those problems. > > The way to solve the problems is to allow anyone to try by providing > good hooks that do not require extending distutils. Sure. Which is why that's what we've essentially been doing. Like I said I have no problems with flit itself other than I think it needs to produce sdists to be a reasonable solution. I don't personally think it's much easier to work with than a simple setup.py (and in my experience, I often want a simple setup.py plus a little extra) but I have no problem with its existence as long as it learns how to produce sdists. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From xav.fernandez at gmail.com Mon Mar 30 17:39:39 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Mon, 30 Mar 2015 17:39:39 +0200 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: I think the point was not to say that documentation is useless (and there is some: http://flit.readthedocs.org/en/latest/ ) but that the code/implementation is much simpler than the combination of distutils/setuptools/bdist_wheel. On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco wrote: > > > On Mon, Mar 30, 2015 at 10:23 AM, Ionel Cristian Mărieș < > contact at ionelmc.ro> wrote: > >> >> On Mon, Mar 30, 2015 at 6:05 PM, Ian Cordasco > > wrote: >> >>> In other words, no one should read the docs because that's a waste of >>> time? Because a lot of time has been poured into the packaging docs and if >>> they're not sufficient, then instead of improving them, people should write >>> undocumented tools that force people to read the source? I'm not sure how >>> that's better than what we already have. >> >> >> A waste of time? No. But it sure is poor investment of time for newbie >> users. It's a futile struggle to document (or read about) all the features >> of distutils and setuptools, not because it can't be documented, but >> because there's too much ground to cover and PyPA's packaging.python.org >> approach is to just give an overview of what's available and avoid giving >> any real best practice recommendations as much as possible. There may be >> good reasons for that but that's not a sensible approach to giving users a >> "pitfall free" learning path. >> > > So for new python programmers (or newbie users in general) reading the > entire source of another package to understand it is a better experience?
> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From p.f.moore at gmail.com Mon Mar 30 17:41:39 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 30 Mar 2015 16:41:39 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On 30 March 2015 at 16:34, Donald Stufft wrote: > The current toolchain absolutely has some problems, but I'm not convinced > that shuffling around the same data into different locations is the answer > to those problems. Moving the data to a location that isn't Turing-complete would be a benefit. Changing the processing backend to something different once the declarative UI is in common use would be a lot easier than where we are now.
Even a 90% solution would be worthwhile here - we don't *have* to cater for every use case in one go. > > Paul Well, parts of it are Turing complete, since it pulls the version number out of the module itself and that's just Python too. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From contact at ionelmc.ro Mon Mar 30 17:47:47 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Mon, 30 Mar 2015 18:47:47 +0300 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 6:39 PM, Xavier Fernandez wrote: > I think the point was not to say that documentation is useless (and there > is some: http://flit.readthedocs.org/en/latest/ ) but that the > code/implementation is much simpler than the combination of > distutils/setuptools/bdist_wheel. > > On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco > wrote: > >> >> So for new python programmers (or newbie users in general) reading the >> entire source of another package to understand it is a better experience? >> >> To put that in context, flit goes for less than 600 SLOC while distutils+setuptools+wheel amount to over 20000 SLOC. At that ratio arguments for distutils+setuptools+wheel documentation seem unreasonable. -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Mon Mar 30 17:52:38 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 30 Mar 2015 16:52:38 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> Message-ID: On 30 March 2015 at 16:45, Donald Stufft wrote: > Well, parts of it are Turing complete, since it pulls the version number > out of the module itself and that's just Python too. Sorry, I wasn't specifically looking at flit there. But I'm in the camp that says just put the version in your ini file and in your module, and don't worry that you have it in 2 places. If managing version numbers is the biggest showstopper in moving to declarative metadata, then we've won :-) Paul

From dholth at gmail.com Mon Mar 30 17:52:54 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 11:52:54 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: This is also a great time to be getting working setup-requires. A pip-compatible flit sdist could declare flit as a setup requirement, and its setup.py could translate install/bdist_wheel to the appropriate flit calls. On Mon, Mar 30, 2015 at 11:47 AM, Ionel Cristian Mărieș wrote: > On Mon, Mar 30, 2015 at 6:39 PM, Xavier Fernandez > wrote: >> >> I think the point was not to say that documentation is useless (and there >> is some: http://flit.readthedocs.org/en/latest/ ) but that the >> code/implementation is much simpler than the combination of >> distutils/setuptools/bdist_wheel. >> >> On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco >> wrote: >>> >>> >>> So for new python programmers (or newbie users in general) reading the >>> entire source of another package to understand it is a better experience?
>>> > To put that in context, flit goes for less than 600 SLOC while > distutils+setuptools+wheel amount to over 20000 SLOC. At that ratio > arguments for distutils+setuptools+wheel documentation seem unreasonable. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig >

From graffatcolmingov at gmail.com Mon Mar 30 17:55:52 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 10:55:52 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 10:47 AM, Ionel Cristian Mărieș wrote: > On Mon, Mar 30, 2015 at 6:39 PM, Xavier Fernandez > wrote: > >> I think the point was not to say that documentation is useless (and there >> is some: http://flit.readthedocs.org/en/latest/ ) but that the >> code/implementation is much simpler than the combination of >> distutils/setuptools/bdist_wheel. >> >> On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco > > wrote: >> >>> >>> So for new python programmers (or newbie users in general) reading the >>> entire source of another package to understand it is a better experience? >>> >> > To put that in context, flit goes for less than 600 SLOC while > distutils+setuptools+wheel amount to over 20000 SLOC. At that ratio > arguments for distutils+setuptools+wheel documentation seem unreasonable. > To be clear, no one should ever be advocating to "just read the source" as a form of documentation. This is why the Packaging guide exists (because no one should ever be expected to read the distutils, setuptools, or wheel source to use it). Code is never as self-documenting as people like to believe. And since we're talking about new users (without defining what they're new to) reading the source should only be for educational purposes.
cookiecutter will serve new users better than flit or anything else. cookiecutter will teach new users good package structure and take care of the (possibly hard parts) of a setup.py. Then, when the "new user" goes to publish it, there's tons of prior documentation on how to do it. If they run into problems using flit they have the skimpy documentation or the source. Yeah, it's "easy" to read 600 SLOC for you, but what about for some "new user"? Are they new to python? Why do they have to care about reading the source if something else will "just work" as documented for their "simple" use case? -------------- next part -------------- An HTML attachment was scrubbed... URL:

From donald at stufft.io Mon Mar 30 17:56:57 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 11:56:57 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> Message-ID: <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> > On Mar 30, 2015, at 11:52 AM, Paul Moore wrote: > > On 30 March 2015 at 16:45, Donald Stufft wrote: >> Well, parts of it are Turing complete, since it pulls the version number >> out of the module itself and that's just Python too. > > Sorry, I wasn't specifically looking at flit there. But I'm in the > camp that says just put the version in your ini file and in your > module, and don't worry that you have it in 2 places. If managing > version numbers is the biggest showstopper in moving to declarative > metadata, then we've won :-) > > Paul Honestly, I don't think that setup.py as a development interface is that bad. It gets really bad when we start sticking it inside of a sdist and using that as part of the installation metadata. It's not unusual for me to want (or need) to do something a little bit different in a project, or something that the original authors didn't quite intend to do.
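The kind of dynamic behaviour being debated here is easy to picture with a contrived fragment (package names invented) whose metadata cannot be determined without executing it on a concrete interpreter:

```python
# Contrived sketch of "Turing-complete" metadata: the dependency list below
# only exists once the file runs, so no static tool can read it out of the
# file without executing it. Package names are illustrative.
import sys

install_requires = ["requests"]
if sys.version_info < (3, 4):
    # Backport needed only on old interpreters; invisible to static analysis.
    install_requires.append("enum34")
```

This is exactly why an installer cannot know an sdist's dependencies from a dynamic setup.py alone, while a wheel or a static metadata file states them outright.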
This is perfectly valid and fine inside of a file that only ever gets executed on a developer machine. However it *needs* to be "compiled" down to a static file when creating a sdist. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From dholth at gmail.com Mon Mar 30 18:14:29 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 30 Mar 2015 12:14:29 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 11:55 AM, Ian Cordasco wrote: > > > On Mon, Mar 30, 2015 at 10:47 AM, Ionel Cristian Mărieș > wrote: >> On Mon, Mar 30, 2015 at 6:39 PM, Xavier Fernandez >> wrote: >>> I think the point was not to say that documentation is useless (and there >>> is some: http://flit.readthedocs.org/en/latest/ ) but that the >>> code/implementation is much simpler than the combination of >>> distutils/setuptools/bdist_wheel. >>> >>> On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco >>> wrote: >>>> >>>> So for new python programmers (or newbie users in general) reading the >>>> entire source of another package to understand it is a better experience? >>>> >> To put that in context, flit goes for less than 600 SLOC while >> distutils+setuptools+wheel amount to over 20000 SLOC. At that ratio >> arguments for distutils+setuptools+wheel documentation seem unreasonable. > > To be clear, no one should ever be advocating to "just read the source" as a > form of documentation. This is why the Packaging guide exists (because no > one should ever be expected to read the distutils, setuptools, or wheel > source to use it). > > Code is never as self-documenting as people like to believe.
And since we're
> talking about new users (without defining what they're new to) reading the source should only be for educational purposes. cookiecutter will serve new users better than flit or anything else. cookiecutter will teach new users good package structure and take care of the (possibly hard) parts of a setup.py. Then, when the "new user" goes to publish it, there's tons of prior documentation on how to do it. If they run into problems using flit they have the skimpy documentation or the source.
>
> Yeah, it's "easy" to read 600 SLOC for you, but what about for some "new user"? Are they new to Python? Why do they have to care about reading the source if something else will "just work" as documented for their "simple" use case?

No one has advocated reading the source code instead of reading the documentation.

From contact at ionelmc.ro Mon Mar 30 18:15:31 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Mon, 30 Mar 2015 19:15:31 +0300
Subject: [Distutils] it's happened - wheels without sdists (flit)
In-Reply-To:
References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io>
Message-ID:

On Mon, Mar 30, 2015 at 6:55 PM, Ian Cordasco wrote:
> Then, when the "new user" goes to publish it, there's tons of prior documentation on how to do it. If they run into problems using flit they have the skimpy documentation or the source.

"Now it might have skimpy docs and no users, but that's largely a product of time. I think `flit` should be judged on what it can be in the future, not only on what it is right now. To put it in perspective, the argument you're making is like comparing the amazon rainforest to a banana milkshake recipe."

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From graffatcolmingov at gmail.com Mon Mar 30 18:55:46 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 30 Mar 2015 11:55:46 -0500 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> Message-ID: On Mon, Mar 30, 2015 at 11:14 AM, Daniel Holth wrote: > On Mon, Mar 30, 2015 at 11:55 AM, Ian Cordasco > wrote: > > > > > > On Mon, Mar 30, 2015 at 10:47 AM, Ionel Cristian M?rie? < > contact at ionelmc.ro> > > wrote: > >> > >> On Mon, Mar 30, 2015 at 6:39 PM, Xavier Fernandez > >> wrote: > >>> > >>> I think the point was not to say that documentation is useless (and > there > >>> is some: http://flit.readthedocs.org/en/latest/ ) but that the > >>> code/implementation is much simpler than the combination of > >>> distutils/setuptools/bdist_wheel. > >>> > >>> On Mon, Mar 30, 2015 at 5:26 PM, Ian Cordasco > >>> wrote: > >>>> > >>>> > >>>> So for new python programmers (or newbie users in general) reading the > >>>> entire source of another package to understand it is a better > experience? > >>>> > >> To put that in context, flit goes for less than 600 SLOC while > >> distutils+setuptools+wheel amount to over 20000 SLOC. At that ratio > >> arguments for distutils+setuptools+wheel documentation seem > unreasonable. > > > > > > To be clear, no one should ever be advocating to "just read the source" > as a > > form of documentation. This is why the Packaging guide exists (because no > > one should ever be expected to read the distutils, setuptools, or wheel > > source to use it). > > > > Code is never as self-documenting as people like to believe. And since > we're > > talking about new users (without defining what they're new to) reading > the > > source should only be for educational purposes. cookiecutter will serve > new > > users better than flit or anything else. 
cookiecutter will teach new
> > users good package structure and take care of the (possibly hard) parts of a setup.py. Then, when the "new user" goes to publish it, there's tons of prior documentation on how to do it. If they run into problems using flit they have the skimpy documentation or the source.
> >
> > Yeah, it's "easy" to read 600 SLOC for you, but what about for some "new user"? Are they new to Python? Why do they have to care about reading the source if something else will "just work" as documented for their "simple" use case?
>
> No one has advocated reading the source code instead of reading the documentation.

Thankfully this is a publicly archived list. Quoting yourself:

> Flit is one example, and you can understand it not by copy/pasting, but by spending half an hour reading its complete source code.

In which you advocate reading the source of a tool over using setup.py, which has countless resources written about it on the internet.

From graffatcolmingov at gmail.com Mon Mar 30 19:00:00 2015
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Mon, 30 Mar 2015 12:00:00 -0500
Subject: [Distutils] it's happened - wheels without sdists (flit)
In-Reply-To:
References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io>
Message-ID:

On Mon, Mar 30, 2015 at 11:15 AM, Ionel Cristian Mărieș wrote:
> On Mon, Mar 30, 2015 at 6:55 PM, Ian Cordasco wrote:
>> Then, when the "new user" goes to publish it, there's tons of prior documentation on how to do it. If they run into problems using flit they have the skimpy documentation or the source.
>
> "Now it might have skimpy docs and no users, but that's largely a product of time. I think `flit` should be judged on what it can be in the future, not only on what it is right now.
To put it in perspective, the argument you're making is like comparing the amazon rainforest to a banana milkshake recipe."

Well, to make a better comparison, we're comparing a healthy diet to a sugar-laced treat. The healthy diet is sustainable because it is well documented and has tons of users with experience with it, but has pitfalls in that it can be expensive at times, while the sugar-laced treat is good for a quick blood sugar spike but will leave you thoroughly unsatisfied and eventually wanting the healthy diet.

New users (like some people who prefer high-sugar diets) may prefer the initial simplicity, but at the cost of having to do a lot more work up front. Those who follow a healthy diet (and ostensibly an exercise regimen) will have tooling so they do not have to worry about how to construct a healthy diet. New users who find and work with those already on a healthy diet will learn the tools that help them avoid the pitfalls of starting on a healthy diet (e.g., tools that generate setup.py for you and maintain the 98% use case, which is inevitably where new users fall*) and will be better off for the long term.

* Note, "new users" still has yet to be defined by anyone advocating that this is better for new users because of mystical reasons (like being able to read the source code).

From ian at feete.org Mon Mar 30 17:33:01 2015
From: ian at feete.org (Ian Foote)
Date: Mon, 30 Mar 2015 16:33:01 +0100
Subject: [Distutils] it's happened - wheels without sdists (flit)
In-Reply-To:
References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io>
Message-ID: <55196CAD.5000407@feete.org>

On 30/03/15 16:05, Ian Cordasco wrote:
> On Mon, Mar 30, 2015 at 9:59 AM, Daniel Holth wrote:
> Yes, setup.py should die. Flit is one example, and you can understand it not by copy/pasting, but by spending half an hour reading its complete source code.
> In other words, no one should read the docs because that's a waste of time? Because a lot of time has been poured into the packaging docs and if they're not sufficient, then instead of improving them, people should write undocumented tools that force people to read the source? I'm not sure how that's better than what we already have.

You're attacking a strawman. Flit does have documentation. What Daniel was trying to say is that flit is small enough to understand by just reading the source code.

Regards,
Ian F

From donald at stufft.io Mon Mar 30 19:56:39 2015
From: donald at stufft.io (Donald Stufft)
Date: Mon, 30 Mar 2015 13:56:39 -0400
Subject: [Distutils] it's happened - wheels without sdists (flit)
In-Reply-To: <55196CAD.5000407@feete.org>
References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <55196CAD.5000407@feete.org>
Message-ID:

> On Mar 30, 2015, at 11:33 AM, Ian Foote wrote:
>
> On 30/03/15 16:05, Ian Cordasco wrote:
>> On Mon, Mar 30, 2015 at 9:59 AM, Daniel Holth wrote:
>> Yes, setup.py should die. Flit is one example, and you can understand it not by copy/pasting, but by spending half an hour reading its complete source code.
>>
>> In other words, no one should read the docs because that's a waste of time? Because a lot of time has been poured into the packaging docs and if they're not sufficient, then instead of improving them, people should write undocumented tools that force people to read the source? I'm not sure how that's better than what we already have.
>
> You're attacking a strawman. Flit does have documentation. What Daniel was trying to say is that flit is small enough to understand by just reading the source code.

Meh, comparisons by SLOC are silly in either direction. The size of the code base isn't an interesting discriminator in judging quality. Python does not try to be a language that minimizes the number of lines of code, and when looking at straight SLOC (in the abstract of these two projects) it's entirely possible to have something harder to understand with less SLOC and easier to understand with more. In particular, looking at these two pieces of software, trying to compare SLOC is even more silly given the two toolchains don't even come close to the same feature set.

Ian C is pointing out that being able to understand it "just by reading the source code" (whatever that means; what software can't you understand by reading the source code?) isn't an extremely useful discriminator either, because it's premised on the fact that there is going to be a common situation where you *need* to read the source code to understand it. So the statement is in itself not particularly useful, because it's premised on something that is fundamentally user hostile. An end user shouldn't care if it's written in 600 lines, 6000 lines, or 6 million lines. What matters to them is the interface it provides and the documentation around it, and often the "ease" of reading the code is used to excuse poor (or non-existent) documentation.

IOW, using SLOC or size or anything of the like as a discriminator is akin to saying "well, it can fit on a floppy" as a measure of quality.

None of this is particularly centered around flit (or distutils), so I'm not speaking about either project there.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From dholth at gmail.com Mon Mar 30 20:03:29 2015
From: dholth at gmail.com (Daniel Holth)
Date: Mon, 30 Mar 2015 14:03:29 -0400
Subject: [Distutils] Isn't it neat that pip-installable packages can be built without distutils?
Message-ID:

What else can we do to make the experience better for those who would try to replace distutils?

From donald at stufft.io Mon Mar 30 20:11:14 2015
From: donald at stufft.io (Donald Stufft)
Date: Mon, 30 Mar 2015 14:11:14 -0400
Subject: [Distutils] Isn't it neat that pip-installable packages can be built without distutils?
In-Reply-To:
References:
Message-ID: <62D3A130-8622-4317-B2E2-C341B2A5D5F5@stufft.io>

> On Mar 30, 2015, at 2:03 PM, Daniel Holth wrote:
>
> What else can we do to make the experience better for those who would try to replace distutils?

Continue work on the interoperability standards that exist for that very purpose.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From aclark at aclark.net Mon Mar 30 21:46:19 2015
From: aclark at aclark.net (Alex Clark)
Date: Mon, 30 Mar 2015 15:46:19 -0400
Subject: [Distutils] Versioned trove classifiers for Django
In-Reply-To:
References:
Message-ID:

On 3/30/15 1:13 AM, Richard Jones wrote:
> Added!
>
> Framework :: Django :: 1.4
> Framework :: Django :: 1.5
> Framework :: Django :: 1.6
> Framework :: Django :: 1.7
> Framework :: Django :: 1.8

Great! In Plone's case, we also added every foreseeable future version, which it appears you have already done. That gives folks the ability to cut new releases of add-ons against beta and rc versions of Django, which is awesome.
> > > > On Mon, 30 Mar 2015 at 16:09 James Bennett > wrote: > > I would be OK with including 1.5 just for completeness' sake. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Alex Clark ? http://about.me/alex.clark From barry at python.org Tue Mar 31 00:05:36 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 30 Mar 2015 18:05:36 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: <20150330180536.0c0eedae@anarchist.wooz.org> On Mar 30, 2015, at 11:56 AM, Donald Stufft wrote: >Honestly, I don?t think that setup.py as a development interface is that >bad. Especially when you cargo cult most of a new project's basic infrastructure[*] from one that's already working. Sweat out the first one, then reuse. ;) Cheers, -Barry [*] For me, not just setup.py but also tox.ini, coverage.ini, MANIFEST.in, other helpers, readmes, license files, source templates, etc. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Tue Mar 31 00:36:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 31 Mar 2015 08:36:14 +1000 Subject: [Distutils] Isn't it neat that pip-installable packages can be built without distutils? In-Reply-To: <62D3A130-8622-4317-B2E2-C341B2A5D5F5@stufft.io> References: <62D3A130-8622-4317-B2E2-C341B2A5D5F5@stufft.io> Message-ID: On 31 Mar 2015 04:11, "Donald Stufft" wrote: > > > > On Mar 30, 2015, at 2:03 PM, Daniel Holth wrote: > > > > What else can we do to make the experience better for those who would > > try to replace distutils? 
> > Continue work on the interoperability standards that exist for that very > purpose. One of the other key missing pieces is a tool independent regression test suite that defines the expected setup.py CLI. The existing setuptools, distutils & distutils2 test suites do a lot of their testing via Python level APIs rather than being written as external functional tests for the setup.py CLI they provide. So one place to start would be to work on creating such an independent test suite (from a technical perspective, I'd suggest basing it on pytest & tox), focusing initially on the commands that pip invokes, and using the pip/virtualenv/setuptools/distutils/distutils2 test suites as guides for options and environment variable settings to check. After that, figuring out how to scan the Debian & Fedora repos to extract the distro level build commands used for various projects would provide an additional source of good test suite fodder. The advantage of such a test suite project is that it would start to meaningfully quantify what "replacing distutils" actually means from the practical perspective of "calling setup.py at the command line". Regards, Nick. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Tue Mar 31 01:03:15 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 30 Mar 2015 19:03:15 -0400 Subject: [Distutils] d2to1 setup.cfg schema In-Reply-To: References: Message-ID: On Tue, Mar 24, 2015 at 6:51 PM, Nick Coghlan wrote: > > On 25 Mar 2015 07:35, "Robert Collins" wrote: >> >> This is a break-out thread from the centi-thread that spawned about >> setup-requires. 
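The tool-independent regression suite Nick proposes above could start as subprocess-driven checks of the setup.py display options, the same way pip shells out to setup.py. A minimal pytest-style sketch, with a tiny generated sample project (the project contents and expected values are hypothetical):

```python
# Sketch of one functional test for the "setup.py" CLI, exercised only
# via subprocess so that any tool emulating the interface can be run
# against it. The tiny generated project is hypothetical.
import subprocess
import sys
import textwrap


def make_project(tmp_path):
    # Write a minimal setuptools-based project into a scratch directory.
    (tmp_path / "setup.py").write_text(textwrap.dedent("""\
        from setuptools import setup
        setup(name="sample", version="0.1")
    """))
    return tmp_path


def test_name_and_version(tmp_path):
    project = make_project(tmp_path)
    # Display options print one value per line and run no build commands.
    out = subprocess.check_output(
        [sys.executable, "setup.py", "--name", "--version"],
        cwd=project,
    )
    assert out.split() == [b"sample", b"0.1"]
```

Under pytest, `tmp_path` is provided automatically; growing this into the suite Nick sketches would mean parametrising over the commands, options, and environment variables that pip and the distro build tools actually use.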
>> d2to1 defined some metadata keys in setup.cfg, in particular 'name' and 'requires-dist'. Confusingly, 'requires-dist' contains the 'install_requires' one might use with setuptools' setup() function.

> That particular name comes from PEP 345: https://www.python.org/dev/peps/pep-0345/
>
> Extending d2to1 to accept "install-requires" as meaning the same thing as the existing "requires-dist" (and complaining if a setup.cfg file contains both) would make sense to me, as it provides a more obvious migration path from setuptools, and pairs up nicely with a new "setup-requires" section for setup.py dependencies.

I would be fine with that, and other similar changes. Better documentation of the format is needed too--it used to just rely on the distutils2 documentation, but since distutils2 is dead d2to1 deserves documentation in its own right.

> (It also occurs to me that we should probably ask the d2to1 folks if they'd be interested in bringing the project under the PyPA banner as happened with setuptools, distlib, etc. It's emerged as a key piece of the transition from Turing complete build process customisation to static build metadata configuration)

As "the d2to1 folks", more or less (not counting the pbr folks, who've done their own thing and might have opinions), I would be fine with this. It has been on my agenda for over a year to release an update to d2to1 under a new name--something less tied to the failed distutils2 project (along with its own documentation, see above). The new name I've been working under is, cheekily, "setup.cfg"; that is, the actual Python package is named "setup.cfg". You import setup.cfg in your setup.py and it basically does the rest. But if that ends up being deemed too confusing/silly that would be understandable--I'm open to other ideas.
>> Since the declarative setup-requires concept also involves putting >> dependencies in setup.cfg (but setup_requires rather than >> install_requires), I followed the naming convention d2to1 had started. >> But - all the reviewers (and I agree) think this is confusing and >> non-obvious. >> >> Since d2to1 is strictly a build-time thing - it reflects the keys into >> the metadata and thus your egg-info/requires.txt is unaltered in >> output, I think its reasonable to argue that we don't need to be >> compatible with it. >> >> OTOH folk using d2to1 would not gain the benefits that declarative >> setup-requires may offer in setuptools // pip. I haven't followed this whole discussion (I started to in the beginning, but haven't kept up), but I'm not really sure what's being said here. d2to1 *does* support declaring setup-requires dependencies in setup.cfg, and those dependencies should be loaded before any hook scripts are used. Everything in d2to1 is done via various hook points, and the hook functions can be either shipped with the package, or come from external requirements installed via setup-requires. It works pretty well in most cases. Erik From robertc at robertcollins.net Tue Mar 31 01:07:08 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 31 Mar 2015 12:07:08 +1300 Subject: [Distutils] d2to1 setup.cfg schema In-Reply-To: References: Message-ID: On 31 March 2015 at 12:03, Erik Bray wrote: > I haven't followed this whole discussion (I started to in the > beginning, but haven't kept up), but I'm not really sure what's being > said here. d2to1 *does* support declaring setup-requires dependencies > in setup.cfg, and those dependencies should be loaded before any hook > scripts are used. Everything in d2to1 is done via various hook > points, and the hook functions can be either shipped with the package, > or come from external requirements installed via setup-requires. It > works pretty well in most cases. Oh, it does!? 
I was looking through the source and couldn't figure it out. What key is looked for for setup-requires? Also does it define a schema for extra-requires? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Tue Mar 31 01:14:51 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 31 Mar 2015 09:14:51 +1000 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <60A915D8-FC52-4119-9607-DB43E95A4701@stufft.io> Message-ID: On 31 Mar 2015 01:17, "Xavier Fernandez" wrote: > > Fair enough, I didn't think of compiled wheels :) > And having a clean way to run tests for the provided wheel is indeed an other good point. To elaborate on Donald's answer, one of our general requirements downstream in distro land is the ability to rebuild the entire distro from source without human intervention (for example: https://beaker-project.org/docs-develop/user-guide/beaker-provided-tasks.html#distribution-rebuild for Fedora & derivatives) We need to build from source not just to ensure our binaries match the published source code, but also because our build systems are designed to let us *patch* the packages before we build them. This is what lets us backport security updates, bug fixes, and sometimes even entire features without needing to rebase a package on a new upstream release of a project. We also like to be able to automatically *generate* package build instructions from upstream metadata. Those tools aren't perfect today (too much info is missing from current generation upstream metadata), but they're a lot better than nothing. Historically those tools have relied on "setup.py install" working on an unpacked sdist. 
These days in Fedora we aim to rely on "pip install" working instead of invoking setup.py directly, but there are over 15000 Python packages in Fedora (and a similar number in Debian), and Fedora's policy only switched to preferring indirection through pip for Python package builds last year (AFAIK Debian still favours invoking setup.py directly, hopefully Barry will correct me if I'm wrong).

But as long as those source package builds keep working downstream, we don't really care what's happening behind the scenes in the upstream build systems. It's mildly irritating when packaging a new release of an already included project turns into a yak shaving expedition to package a whole new build system as a dependency, but that's just one of the consequences of running an open source integration project rather than writing everything from scratch yourself.

It's also one of the reasons why d2to1 is such a neat hack - that presents as plain distutils at sdist build time, while allowing the use of distutils2 features at package definition time upstream.

Cheers,
Nick.

> On Mon, Mar 30, 2015 at 5:04 PM, Donald Stufft wrote:
>>
>>> On Mar 30, 2015, at 10:58 AM, Xavier Fernandez wrote:
>>>
>>>> Wheels without sdists are likely a generally bad idea, downstream redistributors are not going to like them.
>>>
>>> Why do you think that? Wheels seem way simpler/saner than all the possible things setup.py can do.
>>
>> Wheels are simpler than a setup.py, because wheels are a binary format and Wheels don't need to handle things like build software because it's already been built. However downstream redistributors will not accept a Wheel as the source for a package because it is a binary format. It doesn't matter if you can unzip it and there is pure python there, it is still a binary format. So if you release only Wheels you're essentially saying that downstream redistributors will never package your software (or any software that depends on it).
>> A few issues that Wheel only has:
>>
>> * If your project has a C extension, downstream redistributors need access to the source code, not the compiled code (as does anyone wanting to use the project on a platform you didn't release for).
>>
>> * If your project has tests that don't get installed, they should get shipped as part of the sdist so that downstream can run them as part of their packaging activities to ensure they didn't break anything in the test suite. However if you're installing from Wheel you can't do that.
>>
>> ---
>> Donald Stufft
>> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From erik.m.bray at gmail.com Tue Mar 31 01:16:45 2015
From: erik.m.bray at gmail.com (Erik Bray)
Date: Mon, 30 Mar 2015 19:16:45 -0400
Subject: [Distutils] d2to1 setup.cfg schema
In-Reply-To:
References:
Message-ID:

On Mon, Mar 30, 2015 at 7:07 PM, Robert Collins wrote:
> On 31 March 2015 at 12:03, Erik Bray wrote:
>
>> I haven't followed this whole discussion (I started to in the beginning, but haven't kept up), but I'm not really sure what's being said here. d2to1 *does* support declaring setup-requires dependencies in setup.cfg, and those dependencies should be loaded before any hook scripts are used. Everything in d2to1 is done via various hook points, and the hook functions can be either shipped with the package, or come from external requirements installed via setup-requires. It works pretty well in most cases.
>
> Oh, it does!? I was looking through the source and couldn't figure it out. What key is looked for for setup-requires? Also does it define a schema for extra-requires?

Yeah, sorry about that.
That's one of those things that was never actually supported in distutils2 by the time it went poof, and that I added later. You can use: [metadata] setup-requires-dist = foo So say, for example you have some package called "versionutils" that's used to generate the package's version number (by reading it from another file, tacking on VCS info, etc.) You can use: [metadata] setup-requires-dist = versionutils [global] setup-hooks = versionutils.version_hook or something to that effect. It will ensure versionutils is importable (this uses easy_install just like the normal setup_requires feature in setuptools; I would like to change this one day to instead use something like Daniel's setup-requires [1] trick). It will then, fairly early in the setup process, hand the package metadata over to versionutils.version_hook, and let it insert a version string. Erik [1] https://bitbucket.org/dholth/setup-requires From aclark at aclark.net Tue Mar 31 02:33:02 2015 From: aclark at aclark.net (Alex Clark) Date: Mon, 30 Mar 2015 20:33:02 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 3/30/15 11:56 AM, Donald Stufft wrote: > >> On Mar 30, 2015, at 11:52 AM, Paul Moore wrote: >> >> On 30 March 2015 at 16:45, Donald Stufft wrote: >>> Well, parts of it are turing complete, since it pulls the version number >>> out of the module itself and that?s just Python too. >> >> Sorry, I wasn't specifically looking at flit there. But I'm in the >> camp that says just put the version in your ini file and in your >> module, and don't worry that you have it in 2 places. 
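The `versionutils.version_hook` Erik sketches above could be as small as the following. This is a hypothetical illustration: the module name comes from his example, and the hook signature (a dict of dicts mirroring setup.cfg's sections) is an assumption about d2to1's hook convention, not documented API:

```python
# Hypothetical "versionutils" module: d2to1 would install it via
# setup-requires-dist, import it, and call the hook with the parsed
# setup.cfg data so it can fill in the version. Signature is assumed.
import subprocess


def version_hook(config):
    # config: dict of dicts mirroring setup.cfg's sections.
    metadata = config.setdefault("metadata", {})
    try:
        # Derive a version from VCS information when available...
        tag = subprocess.check_output(
            ["git", "describe", "--tags"],
            stderr=subprocess.DEVNULL,
        ).decode().strip()
        metadata["version"] = tag.lstrip("v")
    except (OSError, subprocess.CalledProcessError):
        # ...falling back to a static default outside a tagged checkout.
        metadata.setdefault("version", "0.0.dev0")
```

Because the hook only mutates the metadata dict before the rest of the build runs, the same mechanism can fill in any other computed field, not just the version.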
If managing
>> version numbers is the biggest showstopper in moving to declarative metadata, then we've won :-)
>>
>> Paul
>
> Honestly, I don't think that setup.py as a development interface is that bad. It gets really bad when we start sticking it inside of a sdist and using that as part of the installation metadata.
>
> It's not unusual for me to want (or need) to do something a little bit different in a project, or something that the original authors didn't quite intend to do. This is perfectly valid and fine inside of a file that only ever gets executed on a developer machine. However it *needs* to be "compiled" down to a static file when creating a sdist.

Right, that is my understanding: setup.py is fine except when it is executed on installation.

But I think there is a slight cognitive advantage to setup.ini vs. setup.py. You can never execute an ini file, even in development.
http://about.me/alex.clark From donald at stufft.io Tue Mar 31 02:49:24 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 30 Mar 2015 20:49:24 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: <89109446-BCA0-43A6-9D76-66470678C8A4@stufft.io> > On Mar 30, 2015, at 8:33 PM, Alex Clark wrote: > > On 3/30/15 11:56 AM, Donald Stufft wrote: >> >>> On Mar 30, 2015, at 11:52 AM, Paul Moore wrote: >>> >>> On 30 March 2015 at 16:45, Donald Stufft wrote: >>>> Well, parts of it are turing complete, since it pulls the version number >>>> out of the module itself and that?s just Python too. >>> >>> Sorry, I wasn't specifically looking at flit there. But I'm in the >>> camp that says just put the version in your ini file and in your >>> module, and don't worry that you have it in 2 places. If managing >>> version numbers is the biggest showstopper in moving to declarative >>> metadata, then we've won :-) >>> >>> Paul >> >> Honestly, I don?t think that setup.py as a development interface is that >> bad. It gets really bad when we start sticking it inside of a sdist and >> using that as part of the installation metadata. >> >> It?s not unusual for me to want (or need) to do something a little bit >> different in a project, or something that the original authors didn?t >> quite intend to do. This is perfectly valid and fine inside of a file >> that only ever gets executed on a developer machine. However it *needs* >> to be ?compiled? down to a static file when creating a sdist. > > > Right, that is my understanding: setup.py is fine except when it is executed on installation. > > But I think there is a slight cognitive advantage to setup.ini vs. setup.py. You can never execute an ini file, even in development. 
So the same file can (somehow) be used in development and production without "compiling down" first. In other words: maybe switching to ini is the right thing to do long term. > > However, the practicality of doing so may be so small (due to disutils/setuptools baggage and/or inability to overcome setup.py momentum) that "compiling down" (setup.py) becomes a more attractive first step, at least. > I think trying to use the same file is an attractive nuisance. Separating them lets you do a lot more for developer convenience without hurting the ability of the installation side of things. For instance, it?s somewhat common to want to import the module in order to pull the version out of it. That?s not something we?d want to do outside of the developer side of things, but it?s something that?s completely reasonable to do in the developer toolchain (either implicitly as in flit, or explicitly as in setup.py). IOW, splitting these into the ?developer side? tooling and the ?format that automated installers consume? lets you define both of them to be the best at what they are doing instead of trying to put something together that satisfies two competing use cases. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From barry at python.org Tue Mar 31 03:59:48 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 30 Mar 2015 21:59:48 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <60A915D8-FC52-4119-9607-DB43E95A4701@stufft.io> Message-ID: <20150330215948.25e7b63c@anarchist.wooz.org> On Mar 31, 2015, at 09:14 AM, Nick Coghlan wrote: >We need to build from source not just to ensure our binaries match the >published source code, but also because our build systems are designed to >let us *patch* the packages before we build them. This is what lets us >backport security updates, bug fixes, and sometimes even entire features >without needing to rebase a package on a new upstream release of a project. For Debian, all this, plus it's required by the Debian Social Contract and Debian Free Software Guidelines. >(AFAIK Debian still favours invoking setup.py directly, hopefully Barry >will correct me if I'm wrong). Correct. For any setup.py/distutils/setuptools-based project, we have one preferred, recommended, and non-deprecated "build system", and though it's fairly flexible, it mostly relies on a working setup.py. As Debian Jessie is frozen right now, I don't expect that to change for now. We have sort of a love-hate relationship with pip, but it would be interesting to sit down at Pycon and discuss how we might like to see downstream distro support improve or change as it relates to common upstream packaging standards. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From p.f.moore at gmail.com Tue Mar 31 09:04:28 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 31 Mar 2015 08:04:28 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 30 March 2015 at 16:56, Donald Stufft wrote: > Honestly, I don?t think that setup.py as a development interface is that > bad. It gets really bad when we start sticking it inside of a sdist and > using that as part of the installation metadata. > > It?s not unusual for me to want (or need) to do something a little bit > different in a project, or something that the original authors didn?t > quite intend to do. This is perfectly valid and fine inside of a file > that only ever gets executed on a developer machine. However it *needs* > to be ?compiled? down to a static file when creating a sdist. Hmm, I don't think I'd ever really understood the distinction between "development setup" and "sdist" that clearly. I take your point, it's the sdist level that we want to avoid executable metadata formats in. Paul From p.f.moore at gmail.com Tue Mar 31 09:10:58 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 31 Mar 2015 08:10:58 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 31 March 2015 at 08:04, Paul Moore wrote: > On 30 March 2015 at 16:56, Donald Stufft wrote: >> Honestly, I don?t think that setup.py as a development interface is that >> bad. 
It gets really bad when we start sticking it inside of a sdist and >> using that as part of the installation metadata. >> >> It's not unusual for me to want (or need) to do something a little bit >> different in a project, or something that the original authors didn't >> quite intend to do. This is perfectly valid and fine inside of a file >> that only ever gets executed on a developer machine. However it *needs* >> to be "compiled" down to a static file when creating a sdist. > > Hmm, I don't think I'd ever really understood the distinction between > "development setup" and "sdist" that clearly. I take your point, it's > the sdist level that we want to avoid executable metadata formats in. Thinking some more about that, my confusion is probably in part because pip doesn't distinguish between a "development directory" and a sdist at the moment. For both, it runs "setup.py bdist_wheel/install". So I guess work on a new sdist format would have to include pip learning to distinguish between a sdist and a working directory, and installing (or building wheels from) the two things differently. Paul From ncoghlan at gmail.com Tue Mar 31 14:06:58 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 31 Mar 2015 22:06:58 +1000 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 31 March 2015 at 17:10, Paul Moore wrote: > On 31 March 2015 at 08:04, Paul Moore wrote: >> On 30 March 2015 at 16:56, Donald Stufft wrote: >>> Honestly, I don't think that setup.py as a development interface is that >>> bad. It gets really bad when we start sticking it inside of a sdist and >>> using that as part of the installation metadata.
>>> >>> It's not unusual for me to want (or need) to do something a little bit >>> different in a project, or something that the original authors didn't >>> quite intend to do. This is perfectly valid and fine inside of a file >>> that only ever gets executed on a developer machine. However it *needs* >>> to be "compiled" down to a static file when creating a sdist. >> >> Hmm, I don't think I'd ever really understood the distinction between >> "development setup" and "sdist" that clearly. I take your point, it's >> the sdist level that we want to avoid executable metadata formats in. > > Thinking some more about that, my confusion is probably in part > because pip doesn't distinguish between a "development directory" and > a sdist at the moment. For both, it runs "setup.py > bdist_wheel/install". So I guess work on a new sdist format would have > to include pip learning to distinguish between a sdist and a working > directory, and installing (or building wheels from) the two things > differently. Yep, the current PEP 426 draft suggests that sdists should grow a "dist-info" directory (akin to wheel files and installed packages), while development directories would continue to lack any of the generated metadata. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Mar 31 14:20:25 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 31 Mar 2015 08:20:25 -0400 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: Most sdists already have static metadata in the form of a PKG-INFO file or an .egg-info directory. If it has dependencies it's almost guaranteed to have .egg-info because distutils does not support dependencies.
The only problem is that the static metadata is not trustworthy because the dependencies often change based on the target environment as detected in setup.py. To work reliably pip has to execute setup.py before it can download a package's install_requires. If we want to improve the dependency resolver we would need to put install_requires in a static file and promise that it was actually static. On Tue, Mar 31, 2015 at 8:06 AM, Nick Coghlan wrote: > On 31 March 2015 at 17:10, Paul Moore wrote: >> On 31 March 2015 at 08:04, Paul Moore wrote: >>> On 30 March 2015 at 16:56, Donald Stufft wrote: >>>> Honestly, I don't think that setup.py as a development interface is that >>>> bad. It gets really bad when we start sticking it inside of a sdist and >>>> using that as part of the installation metadata. >>>> >>>> It's not unusual for me to want (or need) to do something a little bit >>>> different in a project, or something that the original authors didn't >>>> quite intend to do. This is perfectly valid and fine inside of a file >>>> that only ever gets executed on a developer machine. However it *needs* >>>> to be "compiled" down to a static file when creating a sdist. >>> >>> Hmm, I don't think I'd ever really understood the distinction between >>> "development setup" and "sdist" that clearly. I take your point, it's >>> the sdist level that we want to avoid executable metadata formats in. >> >> Thinking some more about that, my confusion is probably in part >> because pip doesn't distinguish between a "development directory" and >> a sdist at the moment. For both, it runs "setup.py >> bdist_wheel/install". So I guess work on a new sdist format would have >> to include pip learning to distinguish between a sdist and a working >> directory, and installing (or building wheels from) the two things >> differently.
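Daniel's "dependencies change based on the target environment" problem can be sketched as follows (the package names are illustrative). Dependencies computed while setup.py runs bake the build machine's answer into the sdist's PKG-INFO, whereas an environment marker keeps the condition itself as data an installer can evaluate per target:

```python
import sys

# Dynamic style: the list is computed when setup.py runs, so an sdist
# built on Python 3.5 records a different dependency list than one
# built on Python 2.7 -- exactly why the static copy is untrustworthy.
def dynamic_requires():
    requires = ["requests"]
    if sys.version_info < (3, 4):
        requires.append("enum34")  # backport only needed on old Pythons
    return requires

# Static style: the condition is part of the data (an environment
# marker), so an installer can evaluate it for the *target* environment
# without executing any project code.
STATIC_REQUIRES = [
    "requests",
    'enum34; python_version < "3.4"',
]
```

With the static form, a dependency resolver can read `install_requires` for every candidate version without downloading and running each setup.py first.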
> > Yep, the current PEP 426 draft suggests that sdists should grow a > "dist-info" directory (akin to wheel files and installed packages), > while development directories would continue to lack any of the > generated metadata. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Tue Mar 31 16:53:27 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 31 Mar 2015 15:53:27 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 31 March 2015 at 13:06, Nick Coghlan wrote: >> Thinking some more about that, my confusion is probably in part >> because pip doesn't distinguish between a "development directory" and >> a sdist at the moment. For both, it runs "setup.py >> bdist_wheel/install". So I guess work on a new sdist format would have >> to include pip learning to distinguish between a sdist and a working >> directory, and installing (or building wheels from) the two things >> differently. > > Yep, the current PEP 426 draft suggests that sdists should grow a > "dist-info" directory (akin to wheel files and installed packages), > while development directories would continue to lack any of the > generated metadata. The thing is, because sdists include setup.py and the whole build layout, there's no point (at the moment) in pip looking at any sdist-specific data. We have to be able to do "pip install /my/dev/directory", and so reusing the same code for a sdist is the obvious thing to do. If the sdist metadata meant that we could somehow do things *better* for a sdist than for a dev directory, then maybe it would be worth doing that. 
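The PKG-INFO file the thread keeps referring to uses RFC 822-style `Key: value` headers, so the static metadata an sdist already carries can be read without running anything. A minimal sketch using the stdlib email parser (the field values here are invented for illustration):

```python
from email.parser import Parser

# PKG-INFO uses the same header syntax as email messages, so the
# stdlib email parser can read it directly.  Sample contents invented.
PKG_INFO = """\
Metadata-Version: 1.1
Name: example-project
Version: 1.0.2
Summary: An example distribution
"""

headers = Parser().parsestr(PKG_INFO)
name = headers["Name"]
version = headers["Version"]
```

This is the easy part; as noted above, the hard part is trusting that what PKG-INFO says matches what setup.py would actually do on the target machine.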
But at the moment, it basically gains us nothing to use the metadata from a sdist. It's not quite that simple, I know. But until we work out how to do something useful with a sdist that we can't do with a dev checkout, it's hard to justify treating sdists specially. Paul
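The distinction Paul describes could be sketched as a check for generated metadata: an unpacked sdist carries it, a development checkout does not. The function name and heuristic below are illustrative, not pip's actual logic:

```python
from pathlib import Path

def looks_like_sdist(tree):
    # An unpacked sdist carries generated metadata: a PKG-INFO file
    # today, or the dist-info directory the PEP 426 draft proposes.
    # A plain development checkout typically has neither, only
    # setup.py and the sources.
    tree = Path(tree)
    if (tree / "PKG-INFO").is_file():
        return True
    return any(tree.glob("*.dist-info"))
```

A tool that made this distinction could trust the static metadata in the sdist case and fall back to running the build interface for the development-directory case.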