From p.f.moore at gmail.com Fri Mar 1 00:00:20 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Feb 2013 23:00:20 +0000 Subject: [Distutils] time for packaging summit at pycon In-Reply-To: References: Message-ID: On 28 February 2013 03:54, Nick Coghlan wrote: > It's only a couple of hours, because my aim is mostly to encourage > information sharing through lightning talks about some of the key > projects involved in packaging and distribution, rather than making > any actual firm decisions about anything. I hope the presentations > will stimulate an ongoing discussion throughout the conference, > following on into the sprints and post-conference development. Will someone be writing up and/or recording the sessions and any discussions? Paul From pnasrat at gmail.com Fri Mar 1 00:15:09 2013 From: pnasrat at gmail.com (Paul Nasrat) Date: Thu, 28 Feb 2013 18:15:09 -0500 Subject: [Distutils] time for packaging summit at pycon In-Reply-To: References: Message-ID: I'd love for at least some IRC chatter or notes as I can't make it to pycon this year. Particularly if there is activity during the sprints on this stuff so I can contribute from home Paul On 28 February 2013 18:00, Paul Moore wrote: > On 28 February 2013 03:54, Nick Coghlan wrote: > > It's only a couple of hours, because my aim is mostly to encourage > > information sharing through lightning talks about some of the key > > projects involved in packaging and distribution, rather than making > > any actual firm decisions about anything. I hope the presentations > > will stimulate an ongoing discussion throughout the conference, > > following on into the sprints and post-conference development. > > Will someone be writing up and/or recording the sessions and any > discussions? 
> Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Fri Mar 1 10:06:51 2013 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 1 Mar 2013 01:06:51 -0800 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362065981.2370.95.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> Message-ID: <61AD2FEC-FE91-4587-BD85-5273AE0F2622@twistedmatrix.com> On Feb 28, 2013, at 7:39 AM, Mark McLoughlin wrote: > I always felt that the Python community tended more towards the former > approach, but there always exceptions to the rule - to unfairly pick one > one project, sqlalchemy seems to have an API that often changes > incompatibly. For what it's worth, Twisted takes backwards compatibility very seriously. I try to mention this frequently in the hopes that more Python projects will adopt a similar policy: . I hear that openstack has chosen to avail themselves of some different networking libraries... but you have deprived me of the opportunity for snark by apparently having no difficulty with compatibility in that layer :). Unfortunately, the tenor of the community has changed somewhat recently, with Python 3 being a clarion call for people who want to screw over their users with an incompatible release. There has been some cultural fallout from that, because it sets a bad example. Personally, I have found myself in a dozen serious conversations over the last two years or so when talking to other package maintainers (names withheld to protect the guilty) contemplating some massive incompatible change with no warning, because "everything is going to break anyway, so why not take the opportunity"; it's important that cooler heads prevail here :). 
Luckily, in most cases I was able to convince those folks that multiple simultaneous levels of breakage are a bad idea... However, many, perhaps even most, projects have made the transition gracefully without any API breakage, as we are attempting to, and to their credit, the core developers *have* said repeatedly that breaking API while porting to Python 3 is a singularly terrible idea. I see that most of your problems have been with PyParsing, though. PyParsing has also been infamously bad in terms of having no stable API, despite being well-post-1.0. When I last used it, the breakage was so bad that we just started bundling in a specific version in our code; even dot-releases were breaking things majorly. You can still see that here: . Overall though, your impression of the Python community at large *is* accurate; breakages come with deprecation warnings beforehand, and mature packages try hard to maintain some semblance of a compatible API, even if there is some disagreement over what "compatible" means. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Fri Mar 1 14:53:14 2013 From: reinout at vanrees.org (Reinout van Rees) Date: Fri, 01 Mar 2013 14:53:14 +0100 Subject: [Distutils] buildout and build-time dependencies In-Reply-To: <0C0112E3-716D-4D5B-B4E0-DEC6933BABB8@artsci.wustl.edu> References: <0C0112E3-716D-4D5B-B4E0-DEC6933BABB8@artsci.wustl.edu> Message-ID: On 28-02-13 20:31, Ben Acland wrote: > tl;dr: how to handle build time dependencies between python modules > using buildout, without looking stupid or including .tar.gz files in my > repo. numpy and scipy are a hell to install. 
I basically install them in the OS and use the "syseggrecipe" to get the globally-installed ones in my buildout: https://pypi.python.org/pypi/syseggrecipe

Alternatively, for all your "compile/make/make install" fun there's https://pypi.python.org/pypi/zc.recipe.cmmi

Problem: it isn't "compile/make/make install" but a "setup.py install".

Well, you mention that including the .tgz is a problem. You *can* download it with a download recipe: https://pypi.python.org/pypi/gocept.download/0.9.5

And another alternative is to create a custom build recipe like the one that exists for lxml: https://pypi.python.org/pypi/z3c.recipe.staticlxml/

I guess that the syseggrecipe is the handiest at the moment.

Reinout

--
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"If you're not sure what to do, make something. -- Paul Graham"

From pombredanne at nexb.com Sat Mar 2 00:40:46 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Fri, 1 Mar 2013 15:40:46 -0800 Subject: [Distutils] buildout and build-time dependencies In-Reply-To: References: <0C0112E3-716D-4D5B-B4E0-DEC6933BABB8@artsci.wustl.edu> Message-ID:

On Mar 1, 2013 5:54 AM, "Reinout van Rees" wrote:
> On 28-02-13 20:31, Ben Acland wrote:
>> tl;dr: how to handle build time dependencies between python modules
>> using buildout, without looking stupid or including .tar.gz files in my
>> repo.
> numpy and scipy are a hell to install.

Would there be something we could do to help these important projects be easier to install?
--
Philippe

From dholth at gmail.com Sat Mar 2 03:14:48 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 1 Mar 2013 21:14:48 -0500 Subject: [Distutils] wheel setup.cfg extensions Message-ID:

Where should these go / be defined? The wheel project understands some setup.cfg extensions that are important for Metadata 2.0 and wheel.
The [metadata] section allows you to override the setup(install_requires=[...]) with values containing environment markers. [metadata] also lets you specify a LICENSE.txt to be copied into the .dist-info directory (not mentioned in METADATA) just so the resulting wheel is less likely to violate the common "include the license with all copies of this software" restriction. The [wheel] section lets you instruct bdist_wheel to tag your wheel as a universal wheel.

[metadata]
provides-extra =
    tool
    signatures
    faster-signatures
requires-dist =
    distribute >= 0.6.30
    argparse; python_version == '2.6'
    keyring; extra == 'signatures'
    dirspec; sys.platform != 'win32' and extra == 'signatures'
    ed25519ll; extra == 'faster-signatures'
license-file = LICENSE.txt

[wheel]
universal=1 # use py2.py3 tag for pure-python dist

From acland at me.com Sat Mar 2 14:21:51 2013 From: acland at me.com (Ben Acland) Date: Sat, 02 Mar 2013 07:21:51 -0600 Subject: [Distutils] buildout and build-time dependencies Message-ID: <285CF175-812A-43BB-99D8-5C593360568F@me.com>

> On Mar 1, 2013 5:54 AM, "Reinout van Rees" wrote:
>> On 28-02-13 20:31, Ben Acland wrote:
>>> tl;dr: how to handle build time dependencies between python modules
>>> using buildout, without looking stupid or including .tar.gz files in my
>>> repo.
>> numpy and scipy are a hell to install.
> Would there be something we could do to help these important projects be
> easier to install?

I had a thought last night, after digging through zc.buildout for a while. In buildout's easy_install.py's _call_easy_install, around line 299 in v2.0.1, you call distribute's easy_install using this:

exit_code = subprocess.call(
    list(args),
    env=dict(os.environ, PYTHONPATH=path))

I *think* it would work to pass in a list of other egg-producing parts, then look for those eggs at this point and add them to the path. Alternatively, a list of paths would get the job done, but you'd have to lock down the egg version before hard coding the path.
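Ben's suggestion could be prototyped roughly like this - note that collect_egg_paths, the egg-directory layout and the way the extra directories get passed in are all hypothetical, not existing zc.buildout API:

```python
import os
import subprocess


def collect_egg_paths(base_path, extra_egg_dirs):
    """Return base_path plus an entry for every .egg found in the
    (hypothetical) directories where other egg-producing parts built."""
    path = list(base_path)
    for egg_dir in extra_egg_dirs:
        if not os.path.isdir(egg_dir):
            continue  # a part may not have been built yet
        for name in sorted(os.listdir(egg_dir)):
            if name.endswith('.egg'):
                path.append(os.path.join(egg_dir, name))
    return path


def call_easy_install(args, base_path, extra_egg_dirs):
    """The subprocess.call from _call_easy_install, with PYTHONPATH
    extended by the eggs built by earlier parts."""
    path = collect_egg_paths(base_path, extra_egg_dirs)
    return subprocess.call(
        list(args),
        env=dict(os.environ, PYTHONPATH=os.pathsep.join(path)))
```

The version-pinning concern Ben raises would still apply: egg file names embed the version, so a hard-coded path breaks on upgrades, whereas scanning the directory as above picks up whatever was actually built.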
Ben

From vinay_sajip at yahoo.co.uk Sun Mar 3 14:25:41 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 3 Mar 2013 13:25:41 +0000 (UTC) Subject: [Distutils] distlib 0.1.0 released on PyPI Message-ID:

The first release of distlib - 0.1.0 - is now available on PyPI. Distlib is a low-level library of packaging functionality which is intended to be used as the basis for third-party packaging tools. This release contains the following components:

1. distlib.database - this implements a database of installed distributions, as defined by PEP 376, and distribution dependency graph logic. Support is also provided for non-installed distributions (i.e. distributions registered with metadata on an index like PyPI), including the ability to scan for dependencies and the building of dependency graphs.
2. distlib.index - this implements a means of performing operations on an index, such as registering a project, uploading a distribution or uploading documentation. Support is included for verifying SSL connections (with domain matching) and signing/verifying packages using GnuPG.
3. distlib.metadata - this implements distribution metadata as defined by PEP 345, PEP 314 and PEP 241. Support for more recent initiatives (e.g. PEP 426 - Metadata 2.0) awaits their finalisation.
4. distlib.markers - this implements environment markers as defined by PEP 345.
5. distlib.manifest - this implements lists of files used in packaging source distributions.
6. distlib.locators - allows the finding of distributions, whether on PyPI (using XML-RPC or via the "simple" scraping interface), local directories or some other source. A locator using extended JSON metadata is provided which allows dependency resolution without the need to download any distribution.
7. distlib.resources - this allows access to data files stored in Python packages, both in the file system and in .zip files.
8. distlib.scripts - facilitates installation of scripts with adjustment of shebang lines and support for native Windows executable launchers.
9. distlib.version - implements version specifiers as defined by PEP 386, but also supports working with "legacy" versions (setuptools/distribute) and semantic versions. Support for the latest version numbering scheme (PEP 426) is not far off.
10. distlib.wheel - this provides support for building and installing from the Wheel format for binary distributions (see PEP 427).
11. distlib.util - this contains miscellaneous functions and classes which are useful in packaging, but which do not fit neatly into one of the other packages in distlib. The package implements enhanced globbing functionality such as the ability to use ** in patterns to specify recursing into subdirectories.

Documentation is available at [1], which will be regularly updated as development progresses, and [2], which will be updated when a release is made on PyPI. You should be able to add Disqus comments to the documentation at [1] to indicate improvements you'd like to see, or clarifications you'd like to add. Continuous integration test results are available at [3]. Issues should be raised using the BitBucket tracker at [4]. You can clone the repository at [5] and submit pull requests, if you'd like to contribute. I welcome your feedback.

Regards, Vinay Sajip

[1] http://distlib.readthedocs.org/
[2] http://pythonhosted.org/distlib/
[3] https://travis-ci.org/vsajip/distlib/
[4] https://bitbucket.org/vinay.sajip/distlib/issues/new
[5] https://bitbucket.org/vinay.sajip/distlib/

From ncoghlan at gmail.com Sun Mar 3 15:42:26 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Mar 2013 00:42:26 +1000 Subject: [Distutils] distlib 0.1.0 released on PyPI In-Reply-To: References: Message-ID:

Nice work on getting this published Vinay! Hopefully support for an accepted PEP 426 can become a highlight of 0.2.0 :)

Cheers, Nick.
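As a side note on item 11 of the announcement: the recursive ** matching described for distlib.util was not in the standard library at the time, but later Pythons (3.5+) grew the same idea in the glob module, which makes the intent easy to demonstrate. The directory layout below is invented purely for illustration:

```python
import glob
import os
import tempfile

# Invented layout, purely to illustrate '**' matching.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkg', 'sub'))
for rel in ['top.txt',
            os.path.join('pkg', 'a.txt'),
            os.path.join('pkg', 'sub', 'b.txt')]:
    open(os.path.join(root, rel), 'w').close()

# With recursive=True, '**' matches any number of directory levels,
# including zero, so files at every depth are found.
hits = sorted(os.path.relpath(p, root).replace(os.sep, '/')
              for p in glob.glob(os.path.join(root, '**', '*.txt'),
                                 recursive=True))
print(hits)  # ['pkg/a.txt', 'pkg/sub/b.txt', 'top.txt']
```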
From markmc at redhat.com Sun Mar 3 16:54:33 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Sun, 03 Mar 2013 15:39 +0000, Mark McLoughlin wrote:
Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362065981.2370.95.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> Message-ID: <1362326073.5909.12.camel@sorcha>

Hey,

On Thu, 2013-02-28 at 15:39 +0000, Mark McLoughlin wrote:
...
> However, OpenStack is starting to get burned more often and some are
> advocating taking the second approach to managing our dependencies:
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-February/thread.html#6014
> http://lists.openstack.org/pipermail/openstack-dev/2013-February/thread.html#6041
> http://lists.openstack.org/pipermail/openstack-dev/2012-November/002075.html
>
> It's probably not worthwhile for everyone to try and read the nuances of
> those threads. The tl;dr is we're hurting and hurting bad. Is this a
> problem the OpenStack and Python communities want to solve together? Or
> does the Python community fundamentally see themselves as taking the
> same approach as the Ruby and Java communities?
...

I've tried to digest PEP426 and collected my thoughts below. Apologies if you read through them and they offer you nothing useful.

I'm trying to imagine the future we're driving towards here where OpenStack is no longer suffering and how PEP426 helped get us there, e.g. where

- Many libraries use semantic versioning and OpenStack specifies x.y compatible versions in their dependency lists. Incompatible changes only get made in x+N.y and OpenStack continues using the x.y+N.z versions.
- All the libraries which don't use semantic versioning use another clear versioning scheme that allows OpenStack to avoid incompatible updates while still using compatible updates.
- Incompatible versions of the same library are routinely installed in parallel.
Does PEP426 help here, or is all the work to be done in tools like PyPI, pip, setuptools, etc.? Apps somehow specify which version they want to use since the incompatible versions of the library use the same namespace.

There are fairly large gaps in my understanding of the plan here. How quickly will we see libraries adopt PEP426? Will python 2.6/2.7 based projects be able to consume these? How do parallel installs work in practice? etc.

Thanks, Mark.

== Basic Principles ==

What we (OpenStack) need from you (python library maintainers):

1) API stability or, at least, predictable instability. We'd like to say "we require version N or any compatible later version" because just saying "we require version N" means our users can't use our application with later incompatible versions of your library.

2) Support for installing multiple incompatible versions at once[*]. If there are two applications in the same system/distribution that require incompatible versions of the same library, you need to support having both installed at once.

[*] - throws me back to Havoc Pennington's way-back-when essay on parallel installable incompatible versions of GNOME libraries: http://www106.pair.com/rhp/parallel.html

== Versioning ==

Semantic versioning is appealing here because, assuming all libraries adopt it, it becomes very easy for us to predict which versions will be incompatible.

For any API unstable library (0.x in semantic versioning), we need to pin to a very specific version and require distributions to package that exact version in order to run OpenStack. When moving to a newer version of the library, we need to move all OpenStack projects at once. Ideally, we'd just avoid such libraries.

Implied in semantic versioning, though, is that it's possible for distributions to include both version X.y.z and X+N.y.z

== PEP426 ==

=== Date Based Versioning ===

OpenStack uses date based versioning for its server components and we've begun using it for the Oslo libraries described above.
PEP426 says:

  Date based release numbers are explicitly excluded from compatibility with this scheme, as they hinder automatic translation to other versioning schemes, as well as preventing the adoption of semantic versioning without changing the name of the project. Accordingly, a leading release component greater than or equal to 1980 is an error.

This seems odd and the rationale seems weak. It looks like an arbitrary half-way house between requiring semantic versioning and allowing any scheme of increasing versions.

I'm really not sure how we'd deal with this gracefully - assuming we have releases out there which require e.g. oslo-config>=2013.1 and we switch to:

  Version: 1.2
  Private-Version: 2013.2

then is the >=2013.1 requirement satisfied by the private version? If not, should we just go ahead and take the pain now of releasing 2013.1 as 1.1?

=== Compatible Release Version Specifier ===

I know this is copied from Ruby's spermy operator, but it seems to me to be fairly useless (overly pessimistic, at least) with semantic versioning.

i.e. assuming x.y.z releases, where an increase in X means that incompatible changes have been made, then:

  ~>1.2.3

unnecessarily limits you to the 1.2.y series of releases. Now, in an ideal world, you would just say:

  ~>1.2

and you're good for the entire 1.x.y series, but often you know that your app doesn't work with 1.2.2 and there's a fix in 1.2.3 that your app absolutely needs.

A random implementation detail I don't get here ... we use 'pip install -r' a lot in OpenStack and so have files with e.g. PasteDeploy>=1.5.0 - how will you specify compatible releases in these files? What's the equivalent of PasteDeploy~>1.5.0 ?

=== Multiple Release Streams ===

If version 1.1.0 and version 1.2.0 were already published to PyPI, we'd like to be able to release a 1.1.1 bugfix update. I saw some mention that PyPI would then treat 1.1.1 as the latest release, but I can't find where I read that now. Could someone confirm this won't be an issue in future.
I guess I'm unclear how the PEP relates to problems with PyPI specifically.

=== Python 2 vs 3 ===

Apparently, part of our woes are down to library maintainers releasing a version of their library that isn't supported on Python 2 (OpenStack currently supports running on 2.6 and 2.7). This is obviously an incompatible change from our perspective.

Going back to the basic principles above, we'd like to be able to:

1) specify that we "require version N or any later compatible version that supports python 2.6 and 2.7".
2) have both the python 2 and python 3 compatible versions installed on the same system or included in the same distribution.

Environment markers in PEP426 sound tantalizingly close to (1) but AFAICT the semantics of markers are "the set of things that must be true for this field to be considered", i.e.

  Requires-Dist: PasteDeploy (>=1.5.0); python_version == '2.6' or python_version == '2.7'

means PasteDeploy is required on python 2.6/2.7 installs, but not otherwise. I really don't know what the story with (2) is.

== Oslo Libraries ==

The approach we had planned to take to versioning for the Oslo family of libraries in OpenStack is:

- APIs can be marked as deprecated in a release, kept around for another release and then removed. For example, if we deprecated an API in the H release it would not be removed until 1 year later in the J release.
- If we wanted to go down the route of a version with incompatible API changes with no deprecation period for the old APIs, we'd actually just introduce the new APIs under a new library name like oslo-config2. We could continue to support the old APIs as a separate project for a time.

This is an approach tailored to Oslo's use case as a library used by server projects on a 6 month coordinated release cycle. It allows servers from the H release to use Oslo libraries from the I release, further allowing servers from the H and I release to be in the same system/distribution.
To encode this in a requirements file, I guess we'd do:

  oslo-config>=2013.1,<2014.1

although that does mean predicting that 2014.1 will contain changes which are incompatible with 2013.1, even though it may not actually do so.

This approach doesn't work so well for "normal" Python libraries because it's impossible to predict when it will be reasonable to remove a deprecated API. Who's to say that no-one will have a 3 year old application installed on their system that they expect to still run fine even if they update to a newer version of your library?

From markmc at redhat.com Sun Mar 3 17:07:41 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Sun, 03 Mar 2013 16:07:41 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> Message-ID: <1362326861.5909.20.camel@sorcha>

Hey

On Thu, 2013-02-28 at 10:46 -0500, Daniel Holth wrote:
> Briefly in PEP 426 we are likely to copy the Ruby behavior as the
> default (without using the ~> operator itself) which is to depend on
> "the remainder of a particular release series". In Ruby gems ~> 4.2.3
> means >= 4.2.3, < 4.3.0 and the version numbers are expected to say
> something about backwards compatibility.

Thanks! Couple of questions on that in my other mail:

1) This doesn't seem to jibe with semantic versioning where you want to say "1.2.3 or any later compatible version" rather than "1.2.3 or any later version in the 1.2 series"

2) How do you do this in requires.txt type files without the operator?

> On PyPI the version numbers don't necessarily mean anything but I hope
> that will change.

Ok, and catalog-sig is the place to follow progress there?

> I consider it good form for a setup.py to declare as loose
> dependencies as possible (no version qualifier or a >= version
> qualifier) and for an application to provide a requires.txt or a
> buildout that has stricter requirements.

Interesting!
I feel like I'm missing some context on the latter part - mostly because I hadn't come across buildout, so more reading for me! - but if the idea is that a buildout/requires.txt specifies the versions that a developer should use when working on the project ... how do you avoid a situation where developers are happily working on one stack of libraries and the app either no longer works with the minimum versions specified in setup.py or the latest versions published upstream? Thanks, Mark. From markmc at redhat.com Sun Mar 3 17:17:14 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Sun, 03 Mar 2013 16:17:14 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <61AD2FEC-FE91-4587-BD85-5273AE0F2622@twistedmatrix.com> References: <1362065981.2370.95.camel@sorcha> <61AD2FEC-FE91-4587-BD85-5273AE0F2622@twistedmatrix.com> Message-ID: <1362327434.5909.25.camel@sorcha> On Fri, 2013-03-01 at 01:06 -0800, Glyph wrote: > > On Feb 28, 2013, at 7:39 AM, Mark McLoughlin > wrote: > > > I always felt that the Python community tended more towards the > > former > > approach, but there always exceptions to the rule - to unfairly pick > > one > > one project, sqlalchemy seems to have an API that often changes > > incompatibly. > > For what it's worth, Twisted takes backwards compatibility very > seriously. I try to mention this frequently in the hopes that more > Python projects will adopt a similar policy: > . Very nice! > I hear that openstack has chosen to avail themselves of some > different networking libraries... but you have deprived me of the > opportunity for snark by apparently having no difficulty with > compatibility in that layer :). 
I won't digress into discussing other difficulties in that layer :) [snip useful and interesting context] > Overall though, your impression of the Python community at large *is* > accurate; breakages come with deprecation warnings beforehand, and > mature packages try hard to maintain some semblance of a compatible > API, even if there is some disagreement over what "compatible" means. Thanks! That gives me hope that OpenStack should continue assuming by default that Python library updates will be compatible. The difficulty is thus identifying the individual versioning semantics for all the libraries we use and tailoring our requirement specifications to each library. Cheers, Mark. From reinout at vanrees.org Sun Mar 3 22:20:11 2013 From: reinout at vanrees.org (Reinout van Rees) Date: Sun, 03 Mar 2013 22:20:11 +0100 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362326861.5909.20.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326861.5909.20.camel@sorcha> Message-ID: On 03-03-13 17:07, Mark McLoughlin wrote: >> >I consider it good form for a setup.py to declare as loose >> >dependencies as possible (no version qualifier or a >= version >> >qualifier) and for an application to provide a requires.txt or a >> >buildout that has stricter requirements. > Interesting! > > I feel like I'm missing some context on the latter part - mostly because > I hadn't come across buildout, so more reading for me! - but if the idea > is that a buildout/requires.txt specifies the versions that a developer > should use when working on the project ... how do you avoid a situation > where developers are happily working on one stack of libraries and the > app either no longer works with the minimum versions specified in > setup.py or the latest versions published upstream? Unless you explicitly test for this, you cannot really know. 
- If you always unpin the versions and grab the latest one, you cannot be sure it still works with the minimum version.
- If you pin on the lowest version, you cannot know if it breaks with the latest.

You could test for this explicitly by having two different buildouts (or requirements.txt files) and testing both.

Perhaps some input from my practice can help. In reality it isn't so much of a problem.

- If I encounter a problem ("hey, I now should use djangorestframework 2.x instead of the 0.3.x I was using because some package uses the new imports"), I fix up my setup.py. A colleague switched to the latest djangorestframework with an incompatible API, so my code broke. I added the ">= 2.0" requirement to my setup.py, ensuring I'm at least getting a good explicit error message instead of an unclear ImportError when running my site.
- Sites are conservative and are pinned down solid.
- Libraries are more loose. Often certain packages aren't pinned, which makes you note errors earlier.

The 2nd and 3rd point provide a bit of certainty at both ends of the spectrum. The 1st is a handy method to ensure nothing goes terribly wrong. Ok, it is reactive instead of proactive, but you should encounter errors early enough (when developing features) so you can update your min/max versions before you do a release.

Reinout

--
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"If you're not sure what to do, make something. -- Paul Graham"

From ncoghlan at gmail.com Mon Mar 4 09:11:20 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Mar 2013 18:11:20 +1000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362326073.5909.12.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> Message-ID:

On Mon, Mar 4, 2013 at 1:54 AM, Mark McLoughlin wrote:
> I've tried to digest PEP426 and collected my thoughts below.
> Apologies if you read through them and they offer you nothing useful.
>
> I'm trying to imagine the future we're driving towards here where
> OpenStack is no longer suffering and how PEP426 helped get us there
> e.g. where
>
> - Many libraries use semantic versioning and OpenStack specifies x.y
>   compatible versions in their dependency lists. Incompatible changes
>   only get made in x+N.y and OpenStack continues using the x.y+N.z
>   versions.

Yep, PEP 426 pushes projects heavily in that direction by making it the path of least resistance. You *can* do something different if necessary, it's just less convenient than simply using the first 9 clauses of semantic versioning.

> - All the libraries which don't use semantic versioning use another
>   clear versioning scheme that allows OpenStack to avoid incompatible
>   updates while still using compatible updates.

Yes, the PEP will explicitly encourage projects to document how to define dependencies if the default "compatible release" clause isn't appropriate.

> - Incompatible versions of the same library are routinely installed
>   in parallel. Does PEP426 help here, or is all the work to be done in
>   tools like PyPI, pip, setuptools, etc.? Apps somehow specify which
>   version they want to use since the incompatible versions of the
>   library use the same namespace.

PEP 426 doesn't help here, and isn't intended to. virtualenv (and the integrated venv in 3.3+) are the main solution being offered in this space (the primary difference with simple bundling is that you still keep track of your dependencies, so you have some hope of properly rolling out security fixes). Fedora (at least) is experimenting with similar capabilities through software collections.

IMO, the success of iOS, Android (and Windows) as deployment targets means the ISVs have spoken: bundling is the preferred option once you get above the core OS level.
That means it's on us to support bundling in a way that doesn't make rolling out security updates a nightmare for system administrators, rather than trying to tell ISVs that bundling isn't supported. > There are fairly large gaps in my understanding of the plan here. How > quickly will we see libraries adopt PEP426? Will python 2.6/2.7 based > projects be able to consume these? PEP 426 is aimed primarily at the current ecosystem of tools (setuptools/distribute/pip/zc.buildout and others). There are limitations to the current PyPI API that prevent them from taking full advantage of the extended metadata, however, so full exploitation won't be possible until the server side gets sorted out (which is an ongoing discussion). > How do parallel installs work in > practice? etc. At the moment, they really don't. setuptools/distribute do allow it to some degree, but it can go wrong if you're not sufficiently careful with it (e.g. http://git.beaker-project.org/cgit/beaker/commit/?h=develop&id=d4077a118627b947a3c814cd3ff9280afeeecd73). > > Thanks, > Mark. > > == Basic Principles == > > What we (OpenStack) need from you (python library maintainers): > > 1) API stability or, at least, predictable instability. We'd like to > say "we require version N or any compatible later version" > because just saying "we require version N" means our users can't > use our application with later incompatible versions of your > library. I assume you mean s/incompatible/compatible/ in the last example. I agree, which is why PEP 426 is deliberately more opinionated than PEP 345 about what versions should mean (although I need to revise how that advocacy is currently handled). > 2) Support for installing multiple incompatible versions at > once[*]. If there are two applications in the same > system/distribution that require incompatible versions of the > same library, you need to support having both installed at > once. 
virtualenv/venv cover this (see http://www.virtualenv.org/en/1.9.X/ and http://www.python.org/dev/peps/pep-0405/) > == Versioning == > > Semantic versioning is appealing here because, assuming all libraries > adopt it, it becomes very easy for us to predict which versions will > be incompatible. > > For any API unstable library (0.x in semantic versioning), we need to > pin to a very specific version and require distributions to package > that exact version in order to run OpenStack. When moving to a newer > version of the library, we need to move all OpenStack projects at > once. Ideally, we'd just avoid such libraries. > > Implied in semantic versioning, though, is that it's possible for > distributions to include both version X.y.z and X+N.y.z My point of view is that the system Python is there primarily to run system utilities and user scripts, rather than arbitrary Python applications. Users can install alternate versions of software into their user site directories, or into virtual environments. Projects are, of course, also free to include part of their version number in the project name. The challenge of dynamic linking different on-disk versions of a module into a process is that: - the import system simply isn't set up to work that way (setuptools/distribute try to fake it by adjusting sys.path, but that can go quite wrong at times) - it's confusing for users, since it isn't always clear which version they're going to see - errors can appear arbitrarily late, since module loading is truly dynamic > == PEP426 == > > === Date Based Versioning === > > OpenStack uses date based versioning for its server components and > we've begun using it for the Oslo libraries described above. PEP426 > says: > > Date based release numbers are explicitly excluded from > compatibility with this scheme, as they hinder automatic translation > to other versioning schemes, as well as preventing the adoption > of semantic versioning without changing the name of the > project.
Accordingly, a leading release component greater than or > equal to 1980 is an error. > > This seems odd and the rationale seems weak. It looks like an > arbitrary half-way house between requiring semantic versioning and > allowing any scheme of increasing versions. The assumption is that the version field will follow semantic versioning (that's not where the PEP started, it was originally non-committal on the matter like PEP 345 - however I've come to the conclusion that this assumption needs to be made explicit, but still need to update various parts of the PEP that are still wishy-washy about it). However, we can't actually enforce semantic versioning because we can't automatically detect if a project is adhering to those semantics. What we *can* enforce is the rejection of version numbers that obviously look like years rather than a semantic version. > I'm really not sure how we'd deal with this gracefully - assuming we > have releases out there which require e.g. oslo-config>=2013.1 and we > switch to: > > Version: 1.2 > Private-Version: 2013.2 > > then is the >=2013.1 requirement satisfied by the private version? If > not, should we just go ahead and take the pain now of releasing 2013.1 > as 1.1? No version of the metadata will ever support ordered comparisons of the "Private-Version" field, since that's an arbitrary tag with no implied ordering mechanism. I expect we'll add support for == and != comparisons against private tags somewhere along the line once PyPI is publishing the enhanced metadata properly. > === Compatible Release Version Specifier === > > I know this is copied from Ruby's spermy operator, but it seems to me > to be fairly useless (overly pessimistic, at least) with semantic > versioning. > > i.e. assuming x.y.z releases, where an increase in X means that > incompatible changes have been made then: > > ~>1.2.3 > > unnecessarily limits you to the 1.2.y series of releases.
Now, in an > ideal world, you would just say: > > ~>1.2 > > and you're good for the entire 1.x.y but often you know that your app > doesn't work with 1.2.2 and there's a fix in 1.2.3 that your app > absolutely needs. Handling that kind of situation is why version specifiers allow multiple clauses that are ANDed together: mydependency (1.2, >= 1.2.3) > > A random implementation detail I don't get here ... we use 'pip install > -r' a lot in OpenStack and so have files with e.g. > > PasteDeploy>=1.5.0 > > how will you specify compatible releases in these files? What's the > equivalent of > > PasteDeploy~>1.5.0 > > ? Note that the pkg_resources specifier format (no parens allowed) is NOT the same as the PEP 345/426 format (parens required). The former is designed to work with arbitrary version schemes, while the latter is explicitly constrained to version schemes that abide by the constraints described in PEP 386. I expect the setuptools/distribute developers will choose a tilde-based operator to represent compatible release clauses in the legacy formats (likely Ruby's ~>, although it could be something like ~= or ~~ instead). > === Multiple Release Streams === > > If version 1.1.0 and version 1.2.0 were already published to PyPI, > we'd like to be able to release a 1.1.1 bugfix update. I saw some > mention that PyPI would then treat 1.1.1 as the latest release, but I > can't find where I read that now. > > Could someone confirm this won't be an issue in future. I guess I'm > unclear how the PEP relates to problems with PyPI specifically. There are major issues with the way PyPI publishes metadata - in particular, installers must currently download and *run* setup.py in order to find out the dependencies for most packages. Dev releases are also prone to being picked up as the latest release, because there's no explicit guideline for how installers should identify and exclude development releases when satisfying dependencies.
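The ANDed-clause behaviour described above, where a bare "1.2" acts as a compatible-release match and ">= 1.2.3" tightens it, can be sketched in a few lines of Python. This is a hypothetical illustration only, not code from PEP 426 or any real installer:

```python
# Hypothetical sketch of PEP 426-style version specifiers, where the
# comma-separated clauses are ANDed together and a bare version like
# "1.2" is treated as a compatible-release (prefix) match.

def _parse(version):
    # Naive numeric parse; real schemes also allow pre/post/dev suffixes.
    return tuple(int(part) for part in version.split("."))

def clause_matches(version, clause):
    clause = clause.strip()
    for op in (">=", "<=", "==", "!=", ">", "<"):
        if clause.startswith(op):
            target = _parse(clause[len(op):].strip())
            v = _parse(version)
            return {">=": v >= target, "<=": v <= target,
                    "==": v == target, "!=": v != target,
                    ">": v > target, "<": v < target}[op]
    # Bare clause: compatible release, i.e. a prefix match.
    prefix = _parse(clause)
    return _parse(version)[:len(prefix)] == prefix

def specifier_matches(version, specifier):
    # All clauses must hold, so "1.2, >= 1.2.3" pins the 1.2.x series
    # while also requiring at least 1.2.3.
    return all(clause_matches(version, c) for c in specifier.split(","))

print(specifier_matches("1.2.4", "1.2, >= 1.2.3"))  # True
print(specifier_matches("1.2.2", "1.2, >= 1.2.3"))  # False (too old)
print(specifier_matches("1.3.0", "1.2, >= 1.2.3"))  # False (wrong series)
```

The point of the sketch is that the two clauses answer different questions (which series, and which minimum within it), which is why neither ~>1.2 nor ~>1.2.3 alone can express Mark's requirement.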
> === Python 2 vs 3 === > > Apparently, part of our woes are down to library maintainers releasing > a version of their library that isn't supported on Python 2 (OpenStack > currently supports running on 2.6 and 2.7). > > This is obviously an incompatible change from our perspective. Going > back to the basic principles above, we'd like to be able to: > > 1) specify that we "require version N or any later compatible > version that supports python 2.6 and 2.7". You'll need to figure out which version broke compatibility and add a "< first-broken-version" clause to the version specifier for that dependency (pointing out to the offending upstream that it isn't cool to break backwards compatibility like that in a nominally backwards compatible release may also be worthwhile). However, once Requires-Python metadata is more readily available, installers should also automatically exclude any candidate versions with an unsatisfied "Requires-Python" constraint. Single source and 2to3 based Python 2.6+/3.2+ compatibility is unfortunately currently somewhat clumsy to declare, since version specifiers currently have no "or" notation: Requires-Python: >= 2.6, < 4.0, !=3.0.*, !=3.1.* Probably something to look at for metadata 2.1 (I won't delay metadata 2.0 over it, since single source compatibility *can* be specified accurately once I update the PEP to include the ".*" notation, it's just ugly). > 2) have both the python 2 and python 3 compatible versions installed > on the same system or included in the same distribution. > > Environment markers in PEP426 sound tantalizingly close to (1) but > AFAICT the semantics of markers are "the set of things that must be > true for this field to be considered" i.e. > > Requires-Dist: PasteDeploy (>=1.5.0); python_version == '2.6' or python_version == '2.7' > > means PasteDeploy is required on python 2.6/2.7 installs, but not > otherwise. Correct. > I really don't know what the story with (2) is. virtualenv/venv/software collections.
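For illustration, here is roughly what evaluating that Requires-Python line, including the proposed ".*" exclusion notation, might look like inside an installer. This is a hypothetical sketch, not code from pip or setuptools, and it only handles the operators that appear in the example:

```python
# Hypothetical sketch of an installer evaluating the Requires-Python
# constraint quoted above, including the proposed ".*" exclusion
# notation ("!=3.0.*" excludes the whole 3.0.x series).

def _ver(text):
    return tuple(int(part) for part in text.split("."))

def python_ok(python_version, constraint):
    for clause in constraint.split(","):
        clause = clause.strip()
        if clause.startswith("!=") and clause.endswith(".*"):
            # Prefix exclusion: reject anything in the named series.
            prefix = _ver(clause[2:-2])
            if _ver(python_version)[:len(prefix)] == prefix:
                return False
        elif clause.startswith(">="):
            if _ver(python_version) < _ver(clause[2:].strip()):
                return False
        elif clause.startswith("<"):
            if _ver(python_version) >= _ver(clause[1:].strip()):
                return False
    return True

spec = ">= 2.6, < 4.0, !=3.0.*, !=3.1.*"
print(python_ok("2.7.3", spec))  # True
print(python_ok("3.0.1", spec))  # False: excluded 3.0.x series
print(python_ok("3.2.0", spec))  # True
```

The clumsiness Nick mentions is visible here: expressing "2.6+ or 3.2+" requires one range plus a pair of prefix exclusions, because there is no "or" in the clause syntax.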
> > == Oslo Libraries == > > The approach we had planned to take to versioning for the Oslo family > of libraries in OpenStack is: > > - APIs can be marked as deprecated in a release, kept around for > another release and then removed. For example, if we deprecated > an API in the H release it would not be removed until 1 year later > in the J release. That actually matches the traditional deprecation model for CPython. However, it's hard to handle cleanly with automated dependency analysis because it's hard to answer the "is this a backwards compatible release?" question quickly and automatically, since the answer is "it should be, but may not be if the application is currently triggering deprecation warnings". Depending on how conservative people are, they'll either choose the more restrictive option of disallowing forward compatibility, or the optimistic approach of declaring forward compatibility with the rest of the 3.x series.
But you can add a leading zero to make it legal (I'll be changing the advice in the PEP to suggest this solution for date-based version compatibility): oslo-config (>= 0.2013.1, < 0.2014.1) I'll also be updating the "compatible release" clause specification to pay attention to leading zeroes so that "0.N" and "0.N+1" are considered incompatible by default (which again works well with this approach). > This approach doesn't work so well for "normal" Python libraries > because it's impossible to predict when it will be reasonable to > remove a deprecated API. Who's to say that no-one will have a 3 year > old application installed on their system that they expect to still > run fine even if they update to a newer version of your library? Yes, we've become a lot more conservative with this even in the standard library. Semantic versioning is a better approach in general. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Mon Mar 4 13:04:48 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 07:04:48 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> Message-ID: Just a short note to state the obvious - the specific pycparsing library from the example did exactly what it should have done and can do in the current system, by incrementing its major version number for a backwards-incompatible release. Dependency management is a job though.
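To make the leading-zero advice from earlier in the thread concrete, here is a hypothetical sketch of how an installer might apply the two rules together: reject a leading release component that looks like a year, while zero-prefixed date-based versions remain legal and order naturally:

```python
# Hypothetical sketch of the two rules under discussion: reject a
# leading release component that looks like a year (PEP 426's >= 1980
# check), while zero-prefixed date-based versions like "0.2013.1"
# remain legal and order naturally as tuples. Numeric versions only.

def release_tuple(version):
    parts = tuple(int(part) for part in version.split("."))
    if parts[0] >= 1980:
        raise ValueError("leading component looks like a year: %s" % version)
    return parts

print(release_tuple("0.2013.1"))                              # (0, 2013, 1)
print(release_tuple("0.2013.1") < release_tuple("0.2014.1"))  # True
# release_tuple("2013.1") would raise ValueError
```

With the leading zero, the ordinary tuple comparison gives exactly the ordering OpenStack wants between 0.2013.x and 0.2014.x, without tripping the year check.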
From barry at python.org Mon Mar 4 16:59:18 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 10:59:18 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> Message-ID: <20130304105918.56044b63@anarchist.wooz.org> On Mar 04, 2013, at 06:11 PM, Nick Coghlan wrote: >My point of view is that the system Python is there primarily to run >system utilities and user scripts, rather than arbitrary Python >applications. Users can install alternate versions of software into >their user site directories, or into virtual environments. Projects >are, of course, also free to include part of their version number in >the project name. I have the same opinion, but it doesn't make developers very happy at all. Cheers, -Barry From dholth at gmail.com Mon Mar 4 17:23:24 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 11:23:24 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <20130304105918.56044b63@anarchist.wooz.org> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <20130304105918.56044b63@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 10:59 AM, Barry Warsaw wrote: > On Mar 04, 2013, at 06:11 PM, Nick Coghlan wrote: > >>My point of view is that the system Python is there primarily to run >>system utilities and user scripts, rather than arbitrary Python >>applications. Users can install alternate versions of software into >>their user site directories, or into virtual environments. Projects >>are, of course, also free to include part of their version number in >>the project name. > > I have the same opinion, but it doesn't make developers very happy at all. 
> > Cheers, > -Barry Apple does (or did) something very useful by not including the stdlib source code in OS X's builtin Python, making the system Python so hilariously useless for development that no one would attempt it after the trick has been discovered. After a few frustrating hours figuring out what they've actually done it hits you "OH that Python is not for me", you download a working version from python.org, and you're on your way. :-P From nad at acm.org Mon Mar 4 18:12:25 2013 From: nad at acm.org (Ned Deily) Date: Mon, 04 Mar 2013 09:12:25 -0800 Subject: [Distutils] Library instability on PyPI and impact on OpenStack References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <20130304105918.56044b63@anarchist.wooz.org> Message-ID: In article , Daniel Holth wrote: > Apple does (or did) something very useful by not including the stdlib > source code in OS X's builtin Python, making the system Python so > hilariously useless for development that no one would attempt it after > the trick has been discovered. After a few frustrating hours figuring > out what they've actually done it hits you "OH that Python is not for > me", you download a working version from python.org, and you're on > your way. :-P FWIW, the stdlib py files are installed as part of the Xcode Command Line Tools component. -- Ned Deily, nad at acm.org From markmc at redhat.com Mon Mar 4 18:23:26 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 04 Mar 2013 17:23:26 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> Message-ID: <1362417806.5909.151.camel@sorcha> Hi Nick, Thanks for the detailed reply. I'll stick to PEP426 related topics in this mail. Generally speaking, PEP426 looks like good progress, but the biggest problem I see now is the lack of parallel installs for incompatible versions. 
On Mon, 2013-03-04 at 18:11 +1000, Nick Coghlan wrote: > On Mon, Mar 4, 2013 at 1:54 AM, Mark McLoughlin wrote: > > I've tried to digest PEP426 and collected my thoughts below. Apologies > > if you read through them and they offer you nothing useful. > > > > I'm trying to imagine the future we're driving towards here where > > OpenStack is no longer suffering and how PEP426 helped get us there > > > > e.g. where > > > > - Many libraries use semantic versioning and OpenStack specifies x.y > > compatible versions in their dependency lists. Incompatible changes > > only get made in x+N.y and OpenStack continues using the x.y+N.z > > versions. > > Yep, PEP 426 pushes projects heavily in that direction by making it > the path of least resistance. You *can* do something different if > necessary, it's just less convenient than simply using the first 9 > clauses of semantic versioning. > > > - All the libraries which don't use semantic versioning use another > > clear versioning scheme that allows OpenStack to avoid incompatible > > updates while still using compatible updates. > > Yes, the PEP will explicitly encourage projects to document how to > define dependencies if the default "compatible release" clause isn't > appropriate. Ok, understood. Is there a case for putting this information into the metadata? e.g. if this was version 1.2.3 of a library and the project's policy is to bump the major version if an incompatible API change is made, then you can do: Requires-Dist: foo (1.2) or: Requires-Dist: foo (1.2,>=1.2.3) or: Requires-Dist: foo (>=1.2.3,<2) but how about if foo-1.2.3 contained: Compatible-With: 1 and foo-2.0.0 contained: Compatible-With: 2 then this: Requires-Dist: foo (1.2) could reject 2.0.0 as satisfying the compatible release specification. > > == PEP426 == > > > > === Date Based Versioning === > > > > OpenStack uses date based versioning for its server components and > > we've begun using it for the Oslo libraries described above. 
PEP426 > > says: > > > > Date based release numbers are explicitly excluded from > > compatibility with this scheme, as they hinder automatic translation > > to other versioning schemes, as well as preventing the adoption > > of semantic versioning without changing the name of the > > project. Accordingly, a leading release component greater than or > > equal to 1980 is an error. > > > > This seems odd and the rationale seems weak. It looks like an > > arbitrary half-way house between requiring semantic versioning and > > allowing any scheme of increasing versions. > > The assumption is that the version field will follow semantic > versioning (that's not where the PEP started, it was originally > non-committal on the matter like PEP 345 - however I've come to the > conclusion that this assumption needs to be made explicit, but still > need to update various parts of the PEP that are still wishy-washy > about it). However, we can't actually enforce semantic versioning > because we can't automatically detect if a project is adhering to > those semantics. What we *can* enforce is the rejection of version > numbers that obviously look like years rather than a semantic version. I sympathise with wanting to do *something* here. Predictable instability was the first thing I listed above and semantic versioning would give that. However, I also described the approach I'm taking to predictable instability with the Oslo libraries. Any APIs we want to remove will be marked as deprecated for a year, then removed. We can encode that as: oslo-config>=2013.1,<2014.1 if Compatible-With existed, we could do: Compatible-With: 2013.1,2013.2 If we used semantic versioning strictly, we'd bump the major release every 6 months (since every release may see APIs which were deprecated a year ago removed). Basically, PEP426 desires to enforce some meaning around the major number and API instability, while not enforcing a meaning of API stability for the micro version.
I'm not sure it achieves a whole lot except making my life harder without actually forcing me into adopting a more predictable versioning scheme. And to be clear - this approach of removing APIs after a year is because we assume only other OpenStack projects are using Oslo libraries. If that changed, we'd probably keep deprecated APIs around for a lot longer. > > === Compatible Release Version Specifier === > > > > I know this is copied from Ruby's spermy operator, but it seems to me > > to be fairly useless (overly pessimistic, at least) with semantic > > versioning. > > > > i.e. assuming x.y.z releases, where an increase in X means that > > incompatible changes have been made then: > > > > ~>1.2.3 > > > > unnecessarily limits you to the 1.2.y series of releases. Now, in an > > ideal world, you would just say: > > > > ~>1.2 > > > > and you're good for the entire 1.x.y but often you know that your app > > doesn't work with 1.2.2 and there's a fix in 1.2.3 that your app > > absolutely needs. > > Handling that kind of situation is why version specifiers allow > multiple clauses that are ANDed together: > > mydependency (1.2, >= 1.2.3) Thanks. > > A random implementation detail I don't get here ... we use 'pip install > > -r' a lot in OpenStack and so have files with e.g. > > > > PasteDeploy>=1.5.0 > > > > how will you specify compatible releases in these files? What's the > > equivalent of > > > > PasteDeploy~>1.5.0 > > > > ? > > Note that the pkg_resources specifier format (no parens allowed) is > NOT the same as the PEP 345/426 format (parens required). The former > is designed to work with arbitrary version schemes, while the latter > is explicitly constrained to version schemes that abide by the > constraints described in PEP 386. I expect the setuptools/distribute > developers will choose a tilde-based operator to represent compatible > release clauses in the legacy formats (likely Ruby's ~>, although it > could be something like ~= or ~~ instead). Ok, figures.
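As an illustration of the tilde-based operator being discussed for the legacy requirements format, here is a hypothetical sketch (the exact spelling, ~>, ~= or ~~, was still undecided at the time) that expands a Ruby-style "~>" clause into the equivalent ANDed range:

```python
# Hypothetical sketch of a tilde operator for pkg_resources-style
# requirement lines: "~>1.5.0" expands to ">=1.5.0,<1.6", i.e. at
# least 1.5.0 but staying within the 1.5.x series. Not code from any
# real tool; the legacy-format spelling had not been settled.

def expand_tilde(requirement):
    name, floor = requirement.split("~>")
    parts = floor.split(".")
    # Drop the last component and bump the one before it to get the
    # exclusive upper bound.
    ceiling = ".".join(parts[:-2] + [str(int(parts[-2]) + 1)])
    return "%s>=%s,<%s" % (name, floor, ceiling)

print(expand_tilde("PasteDeploy~>1.5.0"))  # PasteDeploy>=1.5.0,<1.6
print(expand_tilde("oslo-config~>1.2"))    # oslo-config>=1.2,<2
```

The expansion makes the "overly pessimistic" complaint above visible: ~>1.5.0 can never admit 1.6.x, while ~>1.5 admits the whole 1.x series, with nothing in between except writing the ANDed clauses by hand.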
Cheers, Mark. From dholth at gmail.com Mon Mar 4 18:35:38 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 12:35:38 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <20130304105918.56044b63@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 12:12 PM, Ned Deily wrote: > In article > , > Daniel Holth wrote: >> Apple does (or did) something very useful by not including the stdlib >> source code in OS X's builtin Python, making the system Python so >> hilariously useless for development that no one would attempt it after >> the trick has been discovered. After a few frustrating hours figuring >> out what they've actually done it hits you "OH that Python is not for >> me", you download a working version from python.org, and you're on >> your way. :-P > > FWIW, the stdlib py files are installed as part of the Xcode Command > Line Tools component. Only half-serious; I come to the same conclusion for different reasons on RHEL or Ubuntu. Plone solves it by just bundling the Python source code to make things predictable. From markmc at redhat.com Mon Mar 4 18:36:45 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 04 Mar 2013 17:36:45 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> Message-ID: <1362418605.5909.163.camel@sorcha> Hey, On parallel installs ... On Mon, 2013-03-04 at 18:11 +1000, Nick Coghlan wrote: > On Mon, Mar 4, 2013 at 1:54 AM, Mark McLoughlin wrote: > > - Incompatible versions of the same library are routinely installed > > in parallel. Does PEP426 here, or is all the work to be done in > > tools like PyPI, pip, setuptools, etc. Apps somehow specify which > > version they want to use since the incompatible versions of the > > library use the same namespace. 
> > PEP 426 doesn't help here, and isn't intended to. virtualenv (and the > integrated venv in 3.3+) are the main solution being offered in this > space (the primary difference with simple bundling is that you still > keep track of your dependencies, so you have some hope of properly > rolling out security fixes). Fedora (at least) is experimenting with > similar capabilities through software collections. IMO, the success of > iOS, Android (and Windows) as deployment targets means the ISVs have > spoken: bundling is the preferred option once you get above the core > OS level. That means it's on us to support bundling in a way that > doesn't make rolling out security updates a nightmare for system > administrators, rather than trying to tell ISVs that bundling isn't > supported. The adoption of semantic versioning without parallel installs really worries me. It says that incompatible API changes are ok if you bump your major version, but the incompatible change is no less painful for users and distros. Your "bundling is the preferred option once you get above core OS level" is exactly the kind of clarity I hoped for from this thread. However, it does mean that OpenStack and distros who include OpenStack need to figure out how to bundle OpenStack and its required libraries as a single stack. You're right that Fedora has been experimenting with Software Collections, but that doesn't mean it's a solved problem: http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 > > How do parallel installs work in > > practice? etc. > > At the moment, they really don't. setuptools/distribute do allow it to > some degree, but it can go wrong if you're not sufficiently careful > with it (e.g. http://git.beaker-project.org/cgit/beaker/commit/?h=develop&id=d4077a118627b947a3c814cd3ff9280afeeecd73). We do something similar in Fedora right now for sqlalchemy 0.7 and migrate 0.5. It's not pretty. Is there any work going on to make this more usable? 
> > == Versioning == > > > > Semantic versioning is appealing here because, assuming all libraries > > adopt it, it becomes very easy for us to predict which versions will > > be incompatible. > > > > For any API unstable library (0.x in semantic versioning), we need to > > pin to a very specific version and require distributions to package > > that exact version in order to run OpenStack. When moving to a newer > > version of the library, we need to move all OpenStack projects at > > once. Ideally, we'd just avoid such libraries. > > > > Implied in semantic versioning, though, is that it's possible for > > distributions to include both version X.y.z and X+N.y.z > > My point of view is that the system Python is there primarily to run > system utilities and user scripts, rather than arbitrary Python > applications. Users can install alternate versions of software into > their user site directories, or into virtual environments. Projects > are, of course, also free to include part of their version number in > the project name. You mentioned Software Collections - that means bundling all OpenStack's Python requirements in e.g. /opt/openstack-grizzly/ > The challenge of dynamic linking different on-disk versions of a > module into a process is that: > - the import system simply isn't set up to work that way > (setuptools/distribute try to fake it by adjusting sys.path, but that > can go quite wrong at times) > - it's confusing for users, since it isn't always clear which version > they're going to see > - errors can appear arbitrarily late, since module loading is truly dynamic If parallel incompatible installs is a hopeless problem in Python, why the push to semantic versioning then rather than saying that incompatible API changes should mean a name change? Thanks, Mark. 
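Nick's list of problems with sys.path-based parallel installs can be demonstrated directly. The sketch below (using throwaway directories and a hypothetical module name) shows why it is not always clear which version a process will see:

```python
# Demonstration of the sys.path problem described above: two on-disk
# copies of the same (hypothetical) module, where the version a
# process sees is decided purely by path order at import time.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
for ver in ("1.0", "2.0"):
    libdir = os.path.join(root, "mylib-%s" % ver)
    os.mkdir(libdir)
    with open(os.path.join(libdir, "mylib.py"), "w") as f:
        f.write("__version__ = %r\n" % ver)

# Whichever directory ends up first on sys.path "wins" for the whole
# process; the 1.0 copy is silently unreachable after this.
sys.path.insert(0, os.path.join(root, "mylib-1.0"))
sys.path.insert(0, os.path.join(root, "mylib-2.0"))

import mylib
print(mylib.__version__)  # 2.0
```

Because the module cache is process-wide, two applications in the same interpreter cannot each get their own version this way, which is what pushes the discussion towards virtualenv-style isolation instead.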
From donald.stufft at gmail.com Mon Mar 4 18:44:17 2013 From: donald.stufft at gmail.com (Donald Stufft) Date: Mon, 4 Mar 2013 12:44:17 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362418605.5909.163.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> Message-ID: On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > Hey, > > On parallel installs ... > > On Mon, 2013-03-04 at 18:11 +1000, Nick Coghlan wrote: > > On Mon, Mar 4, 2013 at 1:54 AM, Mark McLoughlin wrote: > > > - Incompatible versions of the same library are routinely installed > > > in parallel. Does PEP426 here, or is all the work to be done in > > > tools like PyPI, pip, setuptools, etc. Apps somehow specify which > > > version they want to use since the incompatible versions of the > > > library use the same namespace. > > > > > > > > > PEP 426 doesn't help here, and isn't intended to. virtualenv (and the > > integrated venv in 3.3+) are the main solution being offered in this > > space (the primary difference with simple bundling is that you still > > keep track of your dependencies, so you have some hope of properly > > rolling out security fixes). Fedora (at least) is experimenting with > > similar capabilities through software collections. IMO, the success of > > iOS, Android (and Windows) as deployment targets means the ISVs have > > spoken: bundling is the preferred option once you get above the core > > OS level. That means it's on us to support bundling in a way that > > doesn't make rolling out security updates a nightmare for system > > administrators, rather than trying to tell ISVs that bundling isn't > > supported. > > > > > The adoption of semantic versioning without parallel installs really > worries me. It says that incompatible API changes are ok if you bump > your major version, but the incompatible change is no less painful for > users and distros. 
> > Your "bundling is the preferred option once you get above core OS level" > is exactly the kind of clarity I hoped for from this thread. However, it > does mean that OpenStack and distros who include OpenStack need to > figure out how to bundle OpenStack and its required libraries as a > single stack. > > You're right that Fedora has been experimenting with Software > Collections, but that doesn't mean it's a solved problem: > > http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 > > > > How do parallel installs work in > > > practice? etc. > > > > > > > > > At the moment, they really don't. setuptools/distribute do allow it to > > some degree, but it can go wrong if you're not sufficiently careful > > with it (e.g. http://git.beaker-project.org/cgit/beaker/commit/?h=develop&id=d4077a118627b947a3c814cd3ff9280afeeecd73). > > > > > We do something similar in Fedora right now for sqlalchemy 0.7 and > migrate 0.5. It's not pretty. > > Is there any work going on to make this more usable? > > > > == Versioning == > > > > > > Semantic versioning is appealing here because, assuming all libraries > > > adopt it, it becomes very easy for us to predict which versions will > > > be incompatible. > > > > > > For any API unstable library (0.x in semantic versioning), we need to > > > pin to a very specific version and require distributions to package > > > that exact version in order to run OpenStack. When moving to a newer > > > version of the library, we need to move all OpenStack projects at > > > once. Ideally, we'd just avoid such libraries. > > > > > > Implied in semantic versioning, though, is that it's possible for > > > distributions to include both version X.y.z and X+N.y.z > > > > > > > > > My point of view is that the system Python is there primarily to run > > system utilities and user scripts, rather than arbitrary Python > > applications. 
Users can install alternate versions of software into > > their user site directories, or into virtual environments. Projects > > are, of course, also free to include part of their version number in > > the project name. > > > > > You mentioned Software Collections - that means bundling all OpenStack's > Python requirements in e.g. > > /opt/openstack-grizzly/ > > > The challenge of dynamic linking different on-disk versions of a > > module into a process is that: > > - the import system simply isn't set up to work that way > > (setuptools/distribute try to fake it by adjusting sys.path, but that > > can go quite wrong at times) > > - it's confusing for users, since it isn't always clear which version > > they're going to see > > - errors can appear arbitrarily late, since module loading is truly dynamic > > > > > If parallel incompatible installs is a hopeless problem in Python, why > the push to semantic versioning then rather than saying that > incompatible API changes should mean a name change? > > Forcing a name change feels ugly as all hell. I don't really see what parallel installs has much to do with anything. I don't bundle anything and i'm ideologically opposed to it generally but I don't typically have a need for parallel installs because I use virtual environments. Why don't you utilize those? (Not being snarky, actually curious). > > Thanks, > Mark. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org (mailto:Distutils-SIG at python.org) > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Mon Mar 4 19:07:18 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 13:07:18 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362417806.5909.151.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362417806.5909.151.camel@sorcha> Message-ID: On Mon, Mar 4, 2013 at 12:23 PM, Mark McLoughlin wrote: > Hi Nick, > > Thanks for the detailed reply. I'll stick to PEP426 related topics in > this mail. > > Generally speaking, PEP426 looks like good progress, but the biggest > problem I see now is the lack of parallel installs for incompatible > versions. I don't know what to do with the "runtime linker" feature provided by setuptools' pkg_resources or Ruby's Gem; the approach taken is to treat it as a separate problem from the "project name, version, and list of requirements" that 426 tries to specify. PEP 376 (database of installed Python distributions; defines the .dist-info directory) has more to do with whether parallel installs happen. Oh, do you mean two incompatible versions loaded during the same invocation? Due to the way JavaScript works (no global import namespace), npm seems to allow dependencies to have their own private instances of their sub-dependencies. > On Mon, 2013-03-04 at 18:11 +1000, Nick Coghlan wrote: >> On Mon, Mar 4, 2013 at 1:54 AM, Mark McLoughlin wrote: >> > I've tried to digest PEP426 and collected my thoughts below. Apologies >> > if you read through them and they offer you nothing useful. >> > >> > I'm trying to imagine the future we're driving towards here where >> > OpenStack is no longer suffering and how PEP426 helped get us there >> > >> > e.g. where >> > >> > - Many libraries use semantic versioning and OpenStack specifies x.y >> > compatible versions in their dependency lists. Incompatible changes >> > only get made in x+N.y and OpenStack continues using the x.y+N.z >> > versions. 
> >> Yep, PEP 426 pushes projects heavily in that direction by making it >> the path of least resistance. You *can* do something different if >> necessary, it's just less convenient than simply using the first 9 >> clauses of semantic versioning. >> >> > - All the libraries which don't use semantic versioning use another >> > clear versioning scheme that allows OpenStack to avoid incompatible >> > updates while still using compatible updates. >> >> Yes, the PEP will explicitly encourage projects to document how to >> define dependencies if the default "compatible release" clause isn't >> appropriate. > > Ok, understood. > > Is there a case for putting this information into the metadata? > > e.g. if this was version 1.2.3 of a library and the project's policy is > to bump the major version if an incompatible API change is made, then > you can do: > > Requires-Dist: foo (1.2) > > or: > > Requires-Dist: foo (1.2,>=1.2.3) > > or: > > Requires-Dist: foo (>=1.2.3,<2) > > but how about if foo-1.2.3 contained: > > Compatible-With: 1 > > and foo-2.0.0 contained: > > Compatible-With: 2 > > then this: > > Requires-Dist: foo (1.2) > > could reject 2.0.0 as satisfying the compatible release specification. > >> > == PEP426 == >> > >> > === Date Based Versioning === >> > >> > OpenStack uses date based versioning for its server components and >> > we've begun using it for the Oslo libraries described above. PEP426 >> > says: >> > >> > Date based release numbers are explicitly excluded from >> > compatibility with this scheme, as they hinder automatic translation >> > to other versioning schemes, as well as preventing the adoption >> > of semantic versioning without changing the name of the >> > project. Accordingly, a leading release component greater than or >> > equal to 1980 is an error. >> > >> > This seems odd and the rationale seems weak. It looks like an >> > arbitrary half-way house between requiring semantic versioning and >> > allowing any scheme of increasing versions.
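As a concrete aside, Mark's example clauses above can be modelled in a few lines of Python. The sketch below treats a compatible-release clause like "foo (1.2)" as ">= 1.2 within the same major series", which is one semantic-versioning reading consistent with his third form ">=1.2.3,<2"; the Compatible-With field itself is a hypothetical from this thread, not part of PEP 426, and all names here are illustrative.

```python
def parse(v):
    """Parse a dotted version like '1.2.3' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def compatible_release(candidate, spec):
    """foo (1.2): at least 1.2, and in the same major series (semver reading)."""
    c, s = parse(candidate), parse(spec)
    return c >= s and c[0] == s[0]

def satisfies(candidate, minimum=None, below=None, compat=None):
    """Combine clauses the way 'foo (1.2,>=1.2.3)' does."""
    c = parse(candidate)
    if compat is not None and not compatible_release(candidate, compat):
        return False
    if minimum is not None and c < parse(minimum):
        return False
    if below is not None and c >= parse(below):
        return False
    return True

# foo (1.2,>=1.2.3) accepts 1.2.3 and 1.9.0, but not 1.2.2 or 2.0.0:
print(satisfies("1.2.3", minimum="1.2.3", compat="1.2"))  # True
print(satisfies("2.0.0", minimum="1.2.3", compat="1.2"))  # False
```

Under this reading, a hypothetical Compatible-With value would simply replace the major-series check with a declared number, so foo-2.0.0 advertising "Compatible-With: 2" is rejected by a "1.2" compatible-release clause exactly as Mark describes.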
> >> The assumption is that the version field will follow semantic >> versioning (that's not where the PEP started, it was originally >> non-committal on the matter like PEP 345 - however I've come to the >> conclusion that this assumption needs to be made explicit, but still >> need to update various parts of the PEP that are still wishy-washy >> about it). However, we can't actually enforce semantic versioning >> because we can't automatically detect if a project is adhering to >> those semantics. What we *can* enforce is the rejection of version >> numbers that obviously look like years rather than a semantic version. > > I sympathise with wanting to do *something* here. Predictable > instability was the first thing I listed above and semantic versioning > would give that. We should take it all back and derive the whole version number from a sha hash, e.g. the Mercurial revision number. A Previous-Version: tag (multiple-use of course) lets you assemble a DAG of the release history. Just kidding :-) > However, I also described the approach I'm taking to predictable > instability with the Oslo libraries. Any APIs we want to remove will be > marked as deprecated for a year, then removed. > > We can encode that as: > > oslo-config>=2013.1,<2014.1 > > if Compatible-With existed, we could do: > > Compatible-With: 2013.1,2013.2 > > If we used semantic versioning strictly, we'd bump the major release > every 6 months (since every release may see APIs which were deprecated a > year ago removed). > > Basically, PEP426 desires to enforce some meaning around the major > number and API instability, while not enforcing a meaning of API > stability for the micro version. > > I'm not sure it achieves a whole lot except making my life harder > without actually forcing me into adopting a more predictable versioning > scheme. We try :-) I never dreamed it would be possible to force package producers to not break things with their new releases.
I think you just have to know your publishers and do plenty of testing. The goal of the version specifiers section of PEP 426 is only to define a sort order and comparison operators. We threw in semantic versioning to recommend something more sane than the "mostly compat with pkg_resources" behavior that we are stuck with until the indices get smarter. I never hoped to be able to force packagers to actually do a good job of maintaining backwards compatibility. For example, zipimport._zip_directory_cache... it's there, and people started depending on it, and it's become part of the API even though it was never meant to be one. If I remove it, which version number or compatible-with tag do I have to specify? Take a look at https://crate.io/packages/apipkg/ ; it is a neat way to make it clear which parts of your program are the API. From p.f.moore at gmail.com Mon Mar 4 20:41:44 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 4 Mar 2013 19:41:44 +0000 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute Message-ID: In thinking about how virtualenv would describe the packages it wants to preload in PEP 426 metadata form, it occurred to me that there are scenarios with setuptools and distribute where it's not obvious how to state the requirement you want. Specifically, if you want to install setuptools if it is present, but if not fall back to distribute (for example, if you have a local package repository and no access to PyPI, but setuptools may or may not be present). I appreciate this is a fairly obscure case. It comes up with virtualenv because virtualenv uses locally-available distributions by default, only going to PyPI if it has to. So in that case (depending on user options) I could genuinely want to pick whichever of setuptools or distribute is present, and I don't care which, as it saves a network lookup. 
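Paul's "use whichever is already present" preference is straightforward to express outside the metadata itself. A minimal sketch, assuming a caller that already knows which project names are locally available (the function name, tuple return convention, and fallback order are invented for illustration and are not distlib's or PEP 426's API):

```python
def choose_installer(locally_available, preferred=("setuptools", "distribute")):
    """Return (name, needs_fetch): the first locally available name from an
    ordered preference list, or the first preference if none is local
    (which would then trigger a network lookup)."""
    available = set(locally_available)
    for name in preferred:
        if name in available:
            return name, False   # found locally, no network lookup needed
    return preferred[0], True    # nothing local: fall back to fetching

# A local repository that happens to contain distribute but not setuptools:
print(choose_installer({"distribute", "pip"}))  # ('distribute', False)
print(choose_installer({"pip"}))                # ('setuptools', True)
```

The point of the sketch is what the thread observes: this "A or B, whichever is cheaper" logic has no direct spelling in Requires-Dist, so today it has to live in installer code rather than metadata.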
I'm actually using the distlib locator API, not the PEP 426 fields themselves, but (a) distlib locators use the same syntax, as far as I'm aware, and (b) I think the Requires-Dist syntax makes a good language for specifying distribution requirements in any context, so I'd hate to end up with 2 slightly-different forms. If the answer is that the spec doesn't support that, then fine. I'll have to manually code for it. But I'd hate to write code I didn't need to :-) Paul. From dholth at gmail.com Mon Mar 4 21:00:30 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 15:00:30 -0500 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: > In thinking about how virtualenv would describe the packages it wants > to preload in PEP 426 metadata form, it occurred to me that there are > scenarios with setuptools and distribute where it's not obvious how to > state the requirement you want. Specifically, if you want to install > setuptools if it is present, but if not fall back to distribute (for > example, if you have a local package repository and no access to PyPI, > but setuptools may or may not be present). > > I appreciate this is a fairly obscure case. It comes up with > virtualenv because virtualenv uses locally-available distributions by > default, only going to PyPI if it has to. So in that case (depending > on user options) I could genuinely want to pick whichever of > setuptools or distribute is present, and I don't care which, as it > saves a network lookup. > > I'm actually using the distlib locator API, not the PEP 426 fields > themselves, but (a) distlib locators use the same syntax, as far as > I'm aware, and (b) I think the Requires-Dist syntax makes a good > language for specifying distribution requirements in any context, so > I'd hate to end up with 2 slightly-different forms. 
> > If the answer is that the spec doesn't support that, then fine. I'll > have to manually code for it. But I'd hate to write code I didn't need > to :-) > > Paul. We do have Provides-Dist, although the best way to implement it is an open question. From p.f.moore at gmail.com Mon Mar 4 21:20:17 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 4 Mar 2013 20:20:17 +0000 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On 4 March 2013 20:00, Daniel Holth wrote: > On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: >> In thinking about how virtualenv would describe the packages it wants >> to preload in PEP 426 metadata form, it occurred to me that there are >> scenarios with setuptools and distribute where it's not obvious how to >> state the requirement you want. Specifically, if you want to install >> setuptools if it is present, but if not fall back to distribute (for >> example, if you have a local package repository and no access to PyPI, >> but setuptools may or may not be present). [...] > > We do have Provides-Dist, although the best way to implement it is an > open question. Good point. So distribute would have "Provides-Dist: setuptools" and I could just require setuptools. Given that none of this is supported yet, I'm happy that the spec covers this case, but still need to work around it for the immediate future. Thanks, Paul From pje at telecommunity.com Mon Mar 4 21:55:18 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Mar 2013 15:55:18 -0500 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: > In thinking about how virtualenv would describe the packages it wants > to preload in PEP 426 metadata form, it occurred to me that there are > scenarios with setuptools and distribute where it's not obvious how to > state the requirement you want. 
Specifically, if you want to install > setuptools if it is present, but if not fall back to distribute (for > example, if you have a local package repository and no access to PyPI, > but setuptools may or may not be present). > > I appreciate this is a fairly obscure case. It comes up with > virtualenv because virtualenv uses locally-available distributions by > default, only going to PyPI if it has to. So in that case (depending > on user options) I could genuinely want to pick whichever of > setuptools or distribute is present, and I don't care which, as it > saves a network lookup. > > I'm actually using the distlib locator API, not the PEP 426 fields > themselves, but (a) distlib locators use the same syntax, as far as > I'm aware, and (b) I think the Requires-Dist syntax makes a good > language for specifying distribution requirements in any context, so > I'd hate to end up with 2 slightly-different forms. > > If the answer is that the spec doesn't support that, then fine. I'll > have to manually code for it. But I'd hate to write code I didn't need > to :-) This is a good point; people have been wanting setuptools to support alternate dependencies (i.e., I need this *or* that) for a long time, and not just for that particular use case. From p.f.moore at gmail.com Mon Mar 4 22:46:30 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 4 Mar 2013 21:46:30 +0000 Subject: [Distutils] Metadata files, dist-info/egg-info and migration paths Message-ID: This is peripherally related to PEP 426, to the extent that PEP 426 specifies that the distribution metadata goes in the dist-info directory defined by PEP 376. The dist-info directory conceptually replaces the old de-facto standard egg-info directory. But neither PEP 376 nor PEP 426 mention anything about what should happen to the *other* files that currently reside in egg-info. These are basically setuptools extensions, for things like namespace packages, entry points, zipped egg support, etc. 
As far as I am aware, recent versions of distribute look for the setuptools metadata files in the dist-info directory if it's present. So for projects using distribute, moving all of the metadata files to dist-info makes sense. But there's no release of setuptools that supports this, so what should happen there? The issue is with built distributions. Core python distutils still writes an egg-info directory (and that won't change till 3.4, and only distribute supports Python 3, so no issue there). Setuptools and distribute both write to egg-info, so there's no compatibility issue there either. But the wheel format uses dist-info internally, and installation is defined as only producing a dist-info directory (by unpacking the one in the wheel). There are two questions here that bear discussion. First of all, when creating a wheel, should builders put custom metadata files from the existing egg-info data into the dist-info directory. I would suggest that yes, they should, as otherwise that data is lost completely - in particular, setuptools entry points (and hence executable wrappers) fail without the entry_points.txt file. There is some support in distlib for replacement functionality in some of these areas (exports, the EXPORTS file and script wrappers) but this is at an early stage and there's no migration path defined yet that I'm aware of. The more difficult question is what should happen when a wheel is installed. At the moment, tools write out the dist-info directory and that's it. That works fine for projects using distribute, or ones that don't use setuptools-style metadata. But projects using setuptools under Python 2 won't be able to see the metadata. Should we require that in order to use wheels, distribute should be used (or a suitably patched setuptools, should that become available)? 
Or should wheel installers write a legacy egg-info directory for use by setuptools (I'd suggest that this should only happen on Python 2, and even then probably only if a specific "legacy" flag was set). I have no real knowledge of what to do here - my suggestions above are relatively uninformed, and in particular I have little knowledge of what is common among people still using Python 2. What do the experts think? Paul. From dholth at gmail.com Mon Mar 4 22:57:07 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 16:57:07 -0500 Subject: [Distutils] Metadata files, dist-info/egg-info and migration paths In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 4:46 PM, Paul Moore wrote: > This is peripherally related to PEP 426, to the extent that PEP 426 > specifies that the distribution metadata goes in the dist-info > directory defined by PEP 376. The dist-info directory conceptually > replaces the old de-facto standard egg-info directory. But neither PEP > 376 nor PEP 426 mention anything about what should happen to the > *other* files that currently reside in egg-info. These are basically > setuptools extensions, for things like namespace packages, entry > points, zipped egg support, etc. As far as I am aware, recent versions > of distribute look for the setuptools metadata files in the dist-info > directory if it's present. So for projects using distribute, moving > all of the metadata files to dist-info makes sense. But there's no > release of setuptools that supports this, so what should happen there? > > The issue is with built distributions. Core python distutils still > writes an egg-info directory (and that won't change till 3.4, and only > distribute supports Python 3, so no issue there). Setuptools and > distribute both write to egg-info, so there's no compatibility issue > there either. 
But the wheel format uses dist-info internally, and > installation is defined as only producing a dist-info directory (by > unpacking the one in the wheel). > > There are two questions here that bear discussion. First of all, when > creating a wheel, should builders put custom metadata files from the > existing egg-info data into the dist-info directory. I would suggest > that yes, they should, as otherwise that data is lost completely - in > particular, setuptools entry points (and hence executable wrappers) > fail without the entry_points.txt file. There is some support in > distlib for replacement functionality in some of these areas (exports, > the EXPORTS file and script wrappers) but this is at an early stage > and there's no migration path defined yet that I'm aware of. > > The more difficult question is what should happen when a wheel is > installed. At the moment, tools write out the dist-info directory and > that's it. That works fine for projects using distribute, or ones that > don't use setuptools-style metadata. But projects using setuptools > under Python 2 won't be able to see the metadata. Should we require > that in order to use wheels, distribute should be used (or a suitably > patched setuptools, should that become available)? Or should wheel > installers write a legacy egg-info directory for use by setuptools > (I'd suggest that this should only happen on Python 2, and even then > probably only if a specific "legacy" flag was set). > > I have no real knowledge of what to do here - my suggestions above are > relatively uninformed, and in particular I have little knowledge of > what is common among people still using Python 2. What do the experts > think? > > Paul. Hmmmmm. You have to use distribute >= 0.6.28 or a currently unavailable suitably patched setuptools. It would be fascinating or horrifying to convert the other way, from .dist-info to .egg-info. The installer is not currently smart or perverted enough to do that. 
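For reference, the entry_points.txt file whose migration is being discussed is a plain INI-style file, so the standard library can read it; a sketch with made-up contents (real files are written by setuptools/distribute, and this targets Python 3, where the module is configparser rather than Python 2's ConfigParser; note setuptools' own parser preserves key case, which configparser only does if you override optionxform):

```python
import configparser
import io

# Illustrative contents of a dist-info/egg-info entry_points.txt:
ENTRY_POINTS = """\
[console_scripts]
pip = pip:main
pip-2.7 = pip:main

[distutils.commands]
bdist_wheel = wheel.bdist_wheel:bdist_wheel
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # keep entry point names case-sensitive
parser.read_file(io.StringIO(ENTRY_POINTS))

# Each console_scripts entry maps a wrapper name to "module:callable",
# which is what executable script wrappers are generated from:
scripts = dict(parser.items("console_scripts"))
print(scripts["pip"])  # pip:main
```

This is why losing the file on install is so visible: without the [console_scripts] section there is nothing to generate the wrapper executables from.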
Wheels are currently made by converting .egg-info to .dist-info. The wheel converter should preserve any file it doesn't understand. Besides the common metadata, a few distributions register custom metadata. In distribute the pkg_resources "get distribution metadata(filename)" API works identically for dist-info and egg-info directories. HTH, Daniel From markmc at redhat.com Mon Mar 4 23:29:56 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 04 Mar 2013 22:29:56 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> Message-ID: <1362436196.5909.222.camel@sorcha> On Mon, 2013-03-04 at 12:44 -0500, Donald Stufft wrote: > On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > > If parallel incompatible installs is a hopeless problem in Python, > > why > > the push to semantic versioning then rather than saying that > > incompatible API changes should mean a name change? > Forcing a name change feels ugly as all hell. I don't really see what > parallel installs has much to do with anything. I don't bundle anything > and i'm ideologically opposed to it generally but I don't typically have > a need for parallel installs because I use virtual environments. Why > don't you utilize those? (Not being snarky, actually curious). It's a fair question. To answer it with a question, how do you imagine Linux distributions using virtual environments such that: $> yum install -y openstack-nova uses a virtual environment? How does it differ from bundling? (Not being snarky, actually curious :) The approach that some Fedora folks are trying out is called "Software Collections". It's not Python specific, but it's basically the same as a virtual environment. For OpenStack, I think we'd probably have all the Python libraries we require installed under e.g. 
/opt/rh/openstack-$version so that you could have programs from two different releases of OpenStack installed on the same system. Long-time packagers are usually horrified at this idea e.g. http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 Some of the things to think about: - Each of the Python libraries under /opt/rh/openstack-$version would come from new packages like openstack-$version-python-eventlet.rpm - how many applications in Fedora would have a big stack of "bundled" python packages like OpenStack? 5, 10, 50, 100? Let's say it's 10 and each stack has 20 packages. That's 200 new packages which need to be maintained by Fedora versus the current situation where we (painfully) make a single stack of libraries work for all applications. - How many of these 200 new packages are essentially duplicates? Once you go down the route of having applications bundle libraries like this, there's going to basically be no sharing. - What's the chance that all of these 200 packages will be kept up to date? If an application works with a given version of a library and it can stick with that version, it will. As a Python library maintainer, how do you like the idea of 10 different versions of your library included in Fedora? - The next time a security issue is found in a common Python library, does Fedora now have to rush out 10 parallel fixes for it? You can see that reaction in mails like this: http://lists.fedoraproject.org/pipermail/devel/2012-December/174944.html and the "why can't these losers just maintain compatibility" view: http://lists.fedoraproject.org/pipermail/devel/2012-December/175028.html http://lists.fedoraproject.org/pipermail/devel/2012-December/174929.html Notice folks complaining about Ruby and Java here, not Python. I can see Python embracing semantic versioning and "just use venv" shortly leading to Python being included in the list of "heretics". Thanks, Mark.
[1] - http://docs.fedoraproject.org/en-US/Fedora_Contributor_Documentation/1/html-single/Software_Collections_Guide/index.html From barry at python.org Mon Mar 4 23:49:16 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 17:49:16 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362436196.5909.222.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> Message-ID: <20130304174916.32173c8f@anarchist.wooz.org> On Mar 04, 2013, at 10:29 PM, Mark McLoughlin wrote: >The approach that some Fedora folks are trying out is called "Software >Collections". It's not Python specific, but it's basically the same as a >virtual environment. It's a serious problem, and I think it will be made more so by the incursions into mobile platforms, where app isolation is actually an important feature. The security implications of duplication are bad enough, but in many cases, it's an issue of just plain functionality. On some platforms, often a PyPI version of a package will not work out of the box, and has to be patched to deal with various platform issues. The advantage of having packages come from the distro is because there's a higher likelihood that it will both continue to be secure *and* continue to work! As you point out, this isn't necessarily Python specific. We see much hand-wringing about Go packaging in Debian, and static linking for mobile apps. It would however be nice if Python itself had a better story for concurrent multiversion libraries, even better if it could be made compatible with schemes that the distros are coming up with to deal with this issue. 
-Barry From vinay_sajip at yahoo.co.uk Tue Mar 5 00:17:40 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 4 Mar 2013 23:17:40 +0000 (UTC) Subject: [Distutils] Metadata files, dist-info/egg-info and migration paths References: Message-ID: Paul Moore gmail.com> writes: > There are two questions here that bear discussion. First of all, when > creating a wheel, should builders put custom metadata files from the > existing egg-info data into the dist-info directory. I would suggest > that yes, they should, as otherwise that data is lost completely - in > particular, setuptools entry points (and hence executable wrappers) > fail without the entry_points.txt file. There is some support in > distlib for replacement functionality in some of these areas (exports, > the EXPORTS file and script wrappers) but this is at an early stage > and there's no migration path defined yet that I'm aware of. distlib could be changed to use entry_points.txt as the filename for now. The file format is the same. > The more difficult question is what should happen when a wheel is > installed. At the moment, tools write out the dist-info directory and > that's it. That works fine for projects using distribute, or ones that > don't use setuptools-style metadata. But projects using setuptools > under Python 2 won't be able to see the metadata. Should we require > that in order to use wheels, distribute should be used (or a suitably > patched setuptools, should that become available)? Or should wheel > installers write a legacy egg-info directory for use by setuptools > (I'd suggest that this should only happen on Python 2, and even then > probably only if a specific "legacy" flag was set). > I agree with Daniel here - I don't see any point in writing to .egg-info, and if people can't use distribute, they'll have to wait for setuptools to get compatibility. 
Presumably, the only reason for not using distribute would be that there's some bug in it which is not in setuptools, and if it's important enough, surely people will contribute the fix, or if they can't, stick with the status quo? I think it makes sense to assist people in migrating from old packaging standards to new ones by reading e.g. old formats, as wheel and wheeler.py do, but if we persist in writing old formats, then it doesn't seem like any migration is actually happening. Regards, Vinay Sajip From markmc at redhat.com Mon Mar 4 23:30:10 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Mon, 04 Mar 2013 22:30:10 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362417806.5909.151.camel@sorcha> Message-ID: <1362436210.5909.223.camel@sorcha> On Mon, 2013-03-04 at 13:07 -0500, Daniel Holth wrote: > On Mon, Mar 4, 2013 at 12:23 PM, Mark McLoughlin wrote: > > Hi Nick, > > > > Thanks for the detailed reply. I'll stick to PEP426 related topics in > > this mail. > > > > Generally speaking, PEP426 looks like good progress, but the biggest > > problem I see now is the lack of parallel installs for incompatible > > versions. > > I don't know what to do with the "runtime linker" feature provided by > setuptools' pkg_resources or Ruby's Gem; the approach taken is to > treat it as a separate problem from the "project name, version, and > list of requirements" that 426 tries to specify. PEP 376 (database of > installed Python distributions; defines the .dist-info directory) has > more to do with whether parallel installs happen. Oh, do you mean two > incompatible versions loaded during the same invocation? Not sure I'm following you, but what I mean is where you have foo-1.2.3 required by program A and foo-2.0 (which isn't backwards compatible with foo-1.2.3) required by program B.
Both are installed in the standard python path and both programs explicitly say which versions they need. We seem to be pretty close to this with eggs and pkg_resources etc., but Nick seems to be concerned that it's very error prone. ... > > However, I also described the approach I'm taking to predictable > > instability with the Oslo libraries. Any APIs we want to remove will be > > marked as deprecated for a year, then removed. > > > > We can encode that as: > > > > oslo-config>=2013.1,<2014.1 > > > > if Compatible-With existed, we could do: > > > > Compatible-With: 2013.1,2013.2 > > > > If we used semantic versioning strictly, we'd bump the major release > > every 6 months (since every release may see APIs which were deprecated a > > year ago removed). > > > > Basically, PEP426 desires to enforce some meaning around the major > > number and API instability, while not enforcing a meaning of API > > stability for the micro version. > > > > I'm not sure it achieves a whole lot except making my life harder > > without actually forcing me into adopting a more predictable versioning > > scheme. > > We try :-) > > I never dreamed it would be possible to force package producers to not > break things with their new releases. I think you just have to know > your publishers and do plenty of testing. We do a lot of testing: http://status.openstack.org/zuul/ but we also depend on a fairly large bunch of libraries: https://github.com/markmc/requirements/blob/652644c/tools/pip-requires https://github.com/markmc/requirements/blob/652644c/tools/test-requires The conclusion here is that we need to go through each of those libraries, talk to upstream and figure out how to cap our version ranges to avoid unexpected incompatible updates. That's a lot of publishers to talk to! > The goal of the version specifiers section of PEP 426 is only to > define a sort order and comparison operators.
We threw in semantic > versioning to recommend something more sane than the "mostly compat > with pkg_resources" behavior that we are stuck with until the indices > get smarter. I never hoped to be able to force packagers to actually > do a good job of maintaining backwards compatibility. Rejecting date based versions looks to be trying to force packagers into a scheme whereby the major version doesn't increment needlessly. If PEP426 isn't trying to force publishers to do better, then this looks like a pretty random potshot to take :) > For example, > zipimport._zip_directory_cache... it's there, and people started > depending on it, and it's become part of the API even though it was > never meant to be one. If I remove it, which version number or > compatible-with tag do I have to specify? IMHO, it'd be fair game to just remove - it's obviously intended to be private. > Take a look at https://crate.io/packages/apipkg/ ; it is a neat way to > make it clear which parts of your program are the API. Interesting, indeed. Cheers, Mark. From ncoghlan at gmail.com Tue Mar 5 00:29:33 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 09:29:33 +1000 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On 5 Mar 2013 06:55, "PJ Eby" wrote: > > On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: > > In thinking about how virtualenv would describe the packages it wants > > to preload in PEP 426 metadata form, it occurred to me that there are > > scenarios with setuptools and distribute where it's not obvious how to > > state the requirement you want. Specifically, if you want to install > > setuptools if it is present, but if not fall back to distribute (for > > example, if you have a local package repository and no access to PyPI, > > but setuptools may or may not be present). > > > > I appreciate this is a fairly obscure case. 
It comes up with > > virtualenv because virtualenv uses locally-available distributions by > > default, only going to PyPI if it has to. So in that case (depending > > on user options) I could genuinely want to pick whichever of > > setuptools or distribute is present, and I don't care which, as it > > saves a network lookup. > > > > I'm actually using the distlib locator API, not the PEP 426 fields > > themselves, but (a) distlib locators use the same syntax, as far as > > I'm aware, and (b) I think the Requires-Dist syntax makes a good > > language for specifying distribution requirements in any context, so > > I'd hate to end up with 2 slightly-different forms. > > > > If the answer is that the spec doesn't support that, then fine. I'll > > have to manually code for it. But I'd hate to write code I didn't need > > to :-) > > This is a good point; people have been wanting setuptools to support > alternate dependencies (i.e., I need this *or* that) for a long time, > and not just for that particular use case. PEP 426 has another similar-but-not-identical problem: the current version doesn't have a particularly clean way to write "2.6+ or 3.2+" for the Requires-Python field. Cheers, Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Mar 5 00:39:49 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 4 Mar 2013 23:39:49 +0000 Subject: [Distutils] Metadata files, dist-info/egg-info and migration paths In-Reply-To: References: Message-ID: On 4 March 2013 23:17, Vinay Sajip wrote: > I agree with Daniel here - I don't see any point in writing to .egg-info, and if > people can't use distribute, they'll have to wait for setuptools to get > compatibility. 
Presumably, the only reason for not using distribute would be > that there's some bug in it which is not in setuptools, and if it's important > enough, surely people will contribute the fix, or if they can't, stick with the > status quo? That's generally my view. I'm looking at changing virtualenv to install the default packages (setuptools/distribute and pip) from wheels. In doing that, pip's console script entry points will only work if distribute is used, so in effect this means making virtualenv always use distribute. This certainly won't be for a while yet, but I want to be sure it's not going to enrage a horde of dedicated setuptools users... Paul. From donald.stufft at gmail.com Tue Mar 5 01:22:00 2013 From: donald.stufft at gmail.com (Donald Stufft) Date: Mon, 4 Mar 2013 19:22:00 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362436196.5909.222.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> Message-ID: <80190B1FCD874568A5BEE659226C5461@gmail.com> On Monday, March 4, 2013 at 5:29 PM, Mark McLoughlin wrote: > On Mon, 2013-03-04 at 12:44 -0500, Donald Stufft wrote: > > On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > > > > > If parallel incompatible installs is a hopeless problem in Python, > > > why > > > the push to semantic versioning then rather than saying that > > > incompatible API changes should mean a name change? > > > > > > > Forcing a name change feels ugly as all hell. I don't really see what > > parallel installs has much to do with anything. I don't bundle anything > > and i'm ideologically opposed to it generally but I don't typically have > > a need for parallel installs because I use virtual environments. Why > > don't you utilize those? (Not being snarky, actually curious). > > > > > It's a fair question.
> > To answer it with a question, how do you imagine Linux distributions > using virtual environments such that: > > $> yum install -y openstack-nova > > uses a virtual environment? How does it differ from bundling? (Not being > snarky, actually curious :) > > Ah, see, I don't install Python software via package managers so for that use case yes it'd probably be a lot like bundling. So the biggest problem with the setuptools style multi versioning that I can remember is that it modified the global system state via pth files. I've got an idea for a system that would work for this use case; however, it would result in app start up taking longer (meaningfully longer? I don't know for sure). Although it would require either support from the linux packagers or installation via a third party tool. > > The approach that some Fedora folks are trying out is called "Software > Collections". It's not Python specific, but it's basically the same as a > virtual environment. > > For OpenStack, I think we'd probably have all the Python libraries we > require installed under e.g. /opt/rh/openstack-$version so that you > could have programs from two different releases of OpenStack installed > on the same system. > > Long time packagers are usually horrified at this idea e.g. > > http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 > > Some of the things to think about: > > - Each of the Python libraries under /opt/rh/openstack-$version would > come from new packages like openstack-$version-python-eventlet.rpm - > how many applications in Fedora would have a big stack of "bundled" > python packages like OpenStack? 5, 10, 50, 100? Let's say it's 10 > and each stack has 20 packages. That's 200 new packages which > need to be maintained by Fedora versus the current situation where > we (painfully) make a single stack of libraries work for all > applications. > > - How many of these 200 new packages are essentially duplicates?
Once > you go down the route of having applications bundle libraries like > this, there's going to basically be no sharing. > > - What's the chance that all of these 200 packages will be kept > up to date? If an application works with a given version of a > library and it can stick with that version, it will. As a Python > library maintainer, how do you like the idea of 10 different > versions of your library included in Fedora? > > - The next time a security issue is found in a common Python library, > does Fedora now have to rush out 10 parallel fixes for it? > > You can see that reaction in mails like this: > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174944.html > > and the "why can't these losers just maintain compatibility" view: > > http://lists.fedoraproject.org/pipermail/devel/2012-December/175028.html > http://lists.fedoraproject.org/pipermail/devel/2012-December/174929.html > > Notice folks complaining about Ruby and Java here, not Python. I can see > Python embracing semantic versioning and "just use venv" shortly leading > to Python being included in the list of "heretics". > > Thanks, > Mark. > > [1] - http://docs.fedoraproject.org/en-US/Fedora_Contributor_Documentation/1/html-single/Software_Collections_Guide/index.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Tue Mar 5 08:56:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 17:56:53 +1000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362436196.5909.222.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> Message-ID: On Tue, Mar 5, 2013 at 8:29 AM, Mark McLoughlin wrote: > On Mon, 2013-03-04 at 12:44 -0500, Donald Stufft wrote: >> On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > >> > If parallel incompatible installs is a hopeless problem in Python, >> > why >> > the push to semantic versioning then rather than saying that >> > incompatible API changes should mean a name change? >> Forcing a name change feels ugly as all hell. I don't really see what >> parallel installs has much to do with anything. I don't bundle anything >> and i'm ideologically opposed to it generally but I don't typically have >> a need for parallel installs because I use virtual environments. Why >> don't you utilize those? (Not being snarky, actually curious). > > It's a fair question. > > To answer it with a question, how do you imagine Linux distributions > using virtual environments such that: > > $> yum install -y openstack-nova > > uses a virtual environment? How does it differ from bundling? (Not being > snarky, actually curious :) > > The approach that some Fedora folks are trying out is called "Software > Collections". It's not Python specific, but it's basically the same as a > virtual environment. > > For OpenStack, I think we'd probably have all the Python libraries we > require installed under e.g. /opt/rh/openstack-$version so that you > could have programs from two different releases of OpenStack installed > on the same system. > > Long time packagers are usually horrified at this idea e.g. 
> > http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 Yes, it's the eternal tension between "I only care about making a wide variety of applications as easy to maintain on platform X as possible" view of the sysadmin and the "I only care about making application Y as easy to maintain on a wide variety of platforms as possible" view of the developer. Windows, Android, Mac OS X, etc, pretty much dial their software distribution model all the way towards the developer end of the spectrum. Linux distro maintainers need to realise that the language communities are almost entirely down the developer end of this spectrum, where sustainable cross-platform support is much higher priority than making life easier for administrators for any given platform. We're willing to work with distros to make deployment of security updates easier, but any proposals that involve people voluntarily making cross-platform development harder aren't going to be accepted. > - How many of these 200 new packages are essentially duplicates? Once > you go down the route of having applications bundle libraries like > this, there's going to basically be no sharing. There's no sharing only if you *actually* bundle the dependencies into each virtualenv. While full bundling is the only mode pip currently implements, completely isolating each virtualenv, it doesn't *have* to work that way. In particular, PEP 426 offers the opportunity to add a "compatible release" mode to pip/virtualenv where the tool can maintain a shared pool of installed libraries, and use *.pth files to make an appropriate version available in each venv. Updating the shared version to a more recent release would then automatically update any venvs with a *.pth file that reference that release. For example, suppose an application requires "somedep (1.3)". This requires at least version 1.3, and won't accept 2.0. The latest available qualifying version might be "1.5.3".
At the moment, pip will install a *copy* of somedep 1.5.3 into the application's virtualenv. However, it doesn't have to do that. It could, instead, install somedep 1.5.3 into a location like "/usr/lib/shared/pip-python/somedep1/", and then add a "somedep1.pth" file to the virtualenv that references "/usr/lib/shared/pip-python/somedep1/". Now, suppose we install another app, also using a virtualenv, that requires "somedep (1.6)". The new version 1.6.0 is available now, so we install it into the shared location and *both* applications will end up using somedep 1.6.0. A security update is released for "somedep" as 1.6.1 - we install it into the shared location, and now both applications are using 1.6.1 instead of 1.6.0. Yay, that's what we wanted, just as if we had runtime version selection, only the selection happens at install time (when adding the *.pth file to the virtualenv) rather than at application startup. Finally, we install a third application that needs "somedep (2.1)". We can't overwrite the shared version, because it isn't compatible. Fortunately, what we can do instead is install it to "/usr/lib/shared/pip-python/somedep2/" and create a "somedep2.pth" file in that environment. The two virtualenvs relying on "somedep1" are blissfully unaware anything has changed because that version never appears anywhere on their sys.path. Could you use this approach for the actual system site-packages directory? No, because sys.path would become insanely long with that many *.pth files. However, you could likely symlink to releases stored in the *.pth friendly distribution store. But for application specific virtual environments, it should be fine. If any distros want that kind of thing to become a reality, though, they're going to have to step up and contribute it. As noted above, for the current tool development teams, the focus is on distributing, maintaining and deploying cross-platform applications, not making it easy to do security updates on a Linux distro. 
I believe it's possible to satisfy both parties, but it's going to be up to the distros to offer a viable plan for meeting their needs without disrupting existing upstream practices. I will note that making this kind of idea more feasible is one of the reasons I am making "compatible release" the *default* in PEP 426 version specifiers, but it still needs people to actually figure out the details and write the code. I will also note that the filesystem layout described above is *far* more amenable to safe runtime selection of packages than the current pkg_resources method. The critical failure mode in pkg_resources that can lead to a lot of weirdness is that it can end up pushing site-packages itself on to the front of sys.path which can shadow a *lot* of modules (in particular, an installed copy of the software you're currently working on may shadow the version in your source checkout - this is the bug the patch I linked earlier was needed to resolve). Runtime selection would need more work than getting virtual environments to work that way, but it's certainly feasible once the installation layout is updated. > - What's the chance that all of these 200 packages will be kept > up to date? If an application works with a given version of a > library and it can stick with that version, it will. As a Python > library maintainer, how do you like the idea of 10 different > versions of your library included in Fedora? That's a problem the distros need to manage by offering patches to how virtual environments and installation layouts work, rather than lamenting the fact that cross-platform developers and distro maintainers care about different things. > - The next time a security issue is found in a common Python library, > does Fedora now have to rush out 10 parallel fixes for it? Not if Fedora contributes the changes needed to support parallel installs without requiring changes to existing Python applications and libraries.
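The install-time selection scheme sketched above (shared versioned directories activated per-venv through *.pth files) relies on nothing more exotic than the stdlib's *.pth processing. A minimal demonstration of just that mechanism, with temporary directories standing in for the hypothetical /usr/lib/shared/pip-python/somedep1/ layout from the mail:

```python
# Sketch of the *.pth mechanism described above, stdlib only.
# "somedep1" stands in for one shared compatibility stream; "venv_site"
# stands in for a virtualenv's site-packages directory.
import os
import site
import sys
import tempfile

shared = tempfile.mkdtemp()                  # pretend shared install pool
somedep1 = os.path.join(shared, "somedep1")  # one compatibility stream
os.makedirs(somedep1)

venv_site = tempfile.mkdtemp()               # pretend venv site-packages
with open(os.path.join(venv_site, "somedep1.pth"), "w") as f:
    f.write(somedep1 + "\n")                 # a .pth file is just one path per line

site.addsitedir(venv_site)                   # processes any *.pth files found there
print(somedep1 in sys.path)                  # True: the shared dir is now importable
```

Swapping which shared directory the .pth file names is exactly the "selection happens at install time" step from the mail; nothing on the venv's own sys.path changes.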
> You can see that reaction in mails like this: > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174944.html > > and the "why can't these losers just maintain compatibility" view: > > http://lists.fedoraproject.org/pipermail/devel/2012-December/175028.html > http://lists.fedoraproject.org/pipermail/devel/2012-December/174929.html > > Notice folks complaining about Ruby and Java here, not Python. I can see > Python embracing semantic versioning and "just use venv" shortly leading > to Python being included in the list of "heretics". Unlike Java, the Python community generally sees *actual* bundling as evil - expressing constraints relative to a published package index is a different thing. Dependencies in Python are typically only brought together into a cohesive, pinned set of versions by application developers and system integrators - the frameworks and libraries often express quite loose version requirements (and receive complaints if they're overly restrictive). The distros just have a harder problem than most because the set of packages they're trying to bring together is so large, they're bound to run into many cases of packages that have mutually incompatible dependencies. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From markmc at redhat.com Tue Mar 5 13:27:47 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Tue, 05 Mar 2013 12:27:47 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> Message-ID: <1362486467.5909.310.camel@sorcha> Hi Nick, On Tue, 2013-03-05 at 17:56 +1000, Nick Coghlan wrote: > On Tue, Mar 5, 2013 at 8:29 AM, Mark McLoughlin wrote: > > On Mon, 2013-03-04 at 12:44 -0500, Donald Stufft wrote: > >> On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > > > >> > If parallel incompatible installs is a hopeless problem in Python, > >> > why > >> > the push to semantic versioning then rather than saying that > >> > incompatible API changes should mean a name change? > >> Forcing a name change feels ugly as all hell. I don't really see what > >> parallel installs has much to do with anything. I don't bundle anything > >> and i'm ideologically opposed to it generally but I don't typically have > >> a need for parallel installs because I use virtual environments. Why > >> don't you utilize those? (Not being snarky, actually curious). > > > > It's a fair question. > > > > To answer it with a question, how do you imagine Linux distributions > > using virtual environments such that: > > > > $> yum install -y openstack-nova > > > > uses a virtual environment? How does it differ from bundling? (Not being > > snarky, actually curious :) > > > > The approach that some Fedora folks are trying out is called "Software > > Collections". It's not Python specific, but it's basically the same as a > > virtual environment. > > > > For OpenStack, I think we'd probably have all the Python libraries we > > require installed under e.g. 
/opt/rh/openstack-$version so that you > > could have programs from two different releases of OpenStack installed > > on the same system. > > > > Long time packagers are usually horrified at this idea e.g. > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 > > Yes, it's the eternal tension between "I only care about making a wide > variety of applications as easy to maintain on platform X as > possible" view of the sysadmin and the "I only care about making > application Y as easy to maintain on a wide variety of platforms as > possible" view of the developer. > > Windows, Android, Mac OS X, etc, pretty much dial their software > distribution model all the way towards the developer end of the > spectrum. Linux distro maintainers need to realise that the language > communities are almost entirely down the developer end of this > spectrum, where sustainable cross-platform support is much higher > priority than making life easier for administrators for any given > platform. I'm with you up to there, but it's a bit of a vicious circle - app developers bundle to insulate themselves from platform instability and then the platform maintainers no longer see a benefit to platform stability. > We're willing to work with distros to make deployment of > security updates easier, but any proposals that involve people > voluntarily making cross-platform development harder aren't > going to be accepted. Let's take it to an extreme and say there was some way to force library maintainers to never make incompatible API changes without renaming the project. Not what I'm suggesting, of course. Doing that would not make app developers' lives any/much harder since they're bundling anyway. It does, however, make life more difficult for the platform maintainers. So, what I think you're really saying would be rejected is "any proposals which make platform maintenance harder while not providing a material benefit to app maintainers who bundle".
The concrete thing we're discussing here is that distros get screwed if the incompatible API changes are commonplace and there is no easy way for the distro to ship/install multiple versions of the same API without going down the route of every app in the distro bundling their own version of the API. I don't see why that's a problem that necessarily requires disrupting cross-platform app authors in order to address. > > - How many of these 200 new packages are essentially duplicates? Once > > you go down the route of having applications bundle libraries like > > this, there's going to basically be no sharing. > > There's no sharing only if you *actually* bundle the dependencies into > each virtualenv. While full bundling is the only mode pip currently > implements, completely isolating each virtualenv, it doesn't *have* to > work that way. In particular, PEP 426 offers the opportunity to add a > "compatible release" mode to pip/virtualenv where the tool can > maintain a shared pool of installed libraries, and use *.pth files to > make an appropriate version available in each venv. Updating the > shared version to a more recent release would then automatically > update any venvs with a *.pth file that reference that release. > > For example, suppose an application requires "somedep (1.3)". This > requires at least version 1.3, and won't accept 2.0. The latest > available qualifying version might be "1.5.3". > > At the moment, pip will install a *copy* of somedep 1.5.3 into the > application's virtualenv. However, it doesn't have to do that. Awesome! Now we're getting on to figuring out a solution to the "parallel installs of multiple incompatible versions" issue :) > It > could, instead, install somedep 1.5.3 into a location like > "/usr/lib/shared/pip-python/somedep1/", and then add a > "somedep1.pth" file to the virtualenv that references > "/usr/lib/shared/pip-python/somedep1/". 
> > Now, suppose we install another app, also using a virtualenv, that > requires "somedep (1.6)". The new version 1.6.0 is available now, so > we install it into the shared location and *both* applications will > end up using somedep 1.6.0. > > A security update is released for "somedep" as 1.6.1 - we install it > into the shared location, and now both applications are using 1.6.1 > instead of 1.6.0. Yay, that's what we wanted, just as if we had > runtime version selection, only the selection happens at install time > (when adding the *.pth file to the virtualenv) rather than at > application startup. > > Finally, we install a third application that needs "somedep (2.1)". We > can't overwrite the shared version, because it isn't compatible. > Fortunately, what we can do instead is install it to > "/usr/lib/shared/pip-python/somedep2/" and create a > "somedep2.pth" file in that environment. The two virtualenvs relying > on "somedep1" are blissfully unaware anything has changed because that > version never appears anywhere on their sys.path. > > Could you use this approach for the actual system site-packages > directory? No, because sys.path would become insanely long with that > many *.pth files. However, you could likely symlink to releases stored > in the *.pth friendly distribution store. But for application specific > virtual environments, it should be fine. Ok, there's a tonne of details there about pip, virtualenv and .pth files that are going over my head right now, but the general idea I'm taking away is: - the system has multiple versions of somedep installed under /usr somewhere - the latest version (2.1) is what you get if you naively just do 'import somedep' - most applications should instead somehow explicitly say they need ~>1.3, ~>1.6 or ~>2.0 or whatever - distros would have multiple copies of the same library, but only one copy for each incompatible stream rather than one copy for each application That's definitely workable.
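The "~>" constraints in Mark's summary are the "compatible release" idea under discussion. A minimal sketch using the packaging library's later spelling of that operator (`~=`, as standardised in PEP 440 — the library and spelling postdate this thread):

```python
# Sketch: the "compatible release" constraint ("~> 1.3" in the mail's
# notation), as later spelled `~=` by PEP 440 / the `packaging` library.
from packaging.specifiers import SpecifierSet

compat = SpecifierSet("~=1.3")  # means: >= 1.3, < 2.0

print("1.5.3" in compat)  # True: a newer release in the same stream
print("2.1" in compat)    # False: an incompatible major bump
```

This is the check an installer would run to decide whether a shared somedep install can serve a given application, or whether a second incompatible stream is needed.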
> If any distros want that kind of thing to become a reality, though, > they're going to have to step up and contribute it. As noted above, > for the current tool development teams, the focus is on distributing, > maintaining and deploying cross-platform applications, not making it > easy to do security updates on a Linux distro. I believe it's possible > to satisfy both parties, but it's going to be up to the distros to > offer a viable plan for meeting their needs without disrupting > existing upstream practices. Point well taken. However, I am surprised the pendulum has swung such that Python platform maintainers only worry about app maintainers and not distro maintainers. And also, Python embracing incompatible updates and bundling is either a new upstream practice or just one that isn't well understood on the distro side. > I will note that making this kind of idea more feasible is one of the > reasons I am making "compatible release" the *default* in PEP 426 > version specifiers, but it still needs people to actually figure out > the details and write the code. I still think that going down this road without the parallel installs issue solved is a dangerous move for Python. Leaving aside pain for distros for a moment, there was a perception (perhaps misguided) that Python was a more stable platform than e.g. Ruby or Java. If Python library maintainers see PEP426 as a license to make incompatible changes more often so long as they bump their major number, then that perception will change.
The critical failure mode in pkg_resources that > can lead to a lot of weirdness is that it can end up pushing > site-packages itself on to the front of sys.path which can shadow a > *lot* of modules (in particular, an installed copy of the software > you're currently working on may shadow the version in your source > checkout - this is the bug the patch I linked earlier was needed to > resolve). Runtime selection would need more work than getting virtual > environments to work that way, but it's certainly feasible once the > installation layout is updated. Ok. > > - What's the chance that all of these 200 packages will be kept > > up to date? If an application works with a given version of a > > library and it can stick with that version, it will. As a Python > > library maintainer, how do you like the idea of 10 different > > versions of your library included in Fedora? > > That's a problem the distros need to manage by offering patches to how > virtual environments and installation layouts work, rather than > lamenting the fact that cross-platform developers and distro > maintainers care about different things. I'm not lamenting what cross-platform developers care about. I'm lamenting that the Python platform maintainers care more about the cross-platform developers than distro maintainers :) > > - The next time a security issue is found in a common Python library, > > does Fedora now have to rush out 10 parallel fixes for it? > > Not if Fedora contributes the changes needed to support parallel > installs without requiring changes to existing Python applications and > libraries. "Patches welcome" - I get it.
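The sys.path shadowing failure mode quoted above is easy to reproduce directly: whichever directory appears earlier on sys.path wins the import. A minimal sketch (the `shadowdemo` module name is invented for illustration):

```python
# Sketch of the shadowing described above: if an "installed" directory
# ends up in front of your source checkout on sys.path, the installed
# copy shadows the one you are actually working on. The `shadowdemo`
# module is invented for this demonstration.
import os
import sys
import tempfile

checkout = tempfile.mkdtemp()   # stands in for a source checkout
installed = tempfile.mkdtemp()  # stands in for site-packages
for d, ver in ((checkout, "dev"), (installed, "1.0")):
    with open(os.path.join(d, "shadowdemo.py"), "w") as f:
        f.write("__version__ = %r\n" % ver)

sys.path.insert(0, checkout)
sys.path.insert(0, installed)   # site-packages now precedes the checkout

import shadowdemo
print(shadowdemo.__version__)   # "1.0": the installed copy won, not "dev"
```

An install layout that keeps versioned directories out of the default path (as in the shared-pool scheme discussed earlier in the thread) avoids this class of accident.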
> > You can see that reaction in mails like this: > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174944.html > > > > and the "why can't these losers just maintain compatibility" view: > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/175028.html > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174929.html > > > > Notice folks complaining about Ruby and Java here, not Python. I can see > > Python embracing semantic versioning and "just use venv" shortly leading > > to Python being included in the list of "heretics". > > Unlike Java, the Python community generally sees *actual* bundling as > evil I think what you call "*actual* bundling" is what I think of as "vendorisation" - i.e. where an app actually copies a library into its source tree? By bundling, I mean that an app sees itself as in control of the versions of its dependencies. The app developer fundamentally thinks she is delivering a specific stack of dependencies and her application code on top rather than installing just their app and running it on a stable platform. > - expressing constraints relative to a published package index is > a different thing. Dependencies in Python are typically only brought > together into a cohesive, pinned set of versions by application > developers and system integrators - the frameworks and libraries often > express quite loose version requirements (and receive complaints if > they're overly restrictive). > > The distros just have a harder problem than most because the set of > packages they're trying to bring together is so large, they're bound > to run into many cases of packages that have mutually incompatible > dependencies. It only takes two apps requiring two incompatible versions of the same library for this to become an issue. A specific example that concerns OpenStack is that you will often want server A from version N installed alongside server B from version N+1.
This is especially true while you're migrating your deployment from version N to N+1 since you probably want to upgrade a server at a time. Thus, in OpenStack's case, it only takes one of our dependencies to release an incompatible version for this to become an issue. Python can be a stable platform and OpenStack wouldn't bundle, or it can be an unstable platform without parallel installs and OpenStack will bundle, or it can be an unstable platform with parallel installs and OpenStack won't have to bundle. Anyway, sounds like we have some ideas for parallel installs we can investigate. Thanks, Mark. From donald.stufft at gmail.com Tue Mar 5 13:34:09 2013 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 5 Mar 2013 07:34:09 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362486467.5909.310.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> Message-ID: On Tuesday, March 5, 2013 at 7:27 AM, Mark McLoughlin wrote: > Hi Nick, > > On Tue, 2013-03-05 at 17:56 +1000, Nick Coghlan wrote: > > On Tue, Mar 5, 2013 at 8:29 AM, Mark McLoughlin wrote: > > > On Mon, 2013-03-04 at 12:44 -0500, Donald Stufft wrote: > > > > On Monday, March 4, 2013 at 12:36 PM, Mark McLoughlin wrote: > > > > > > > > > > > If parallel incompatible installs is a hopeless problem in Python, > > > > > why > > > > > the push to semantic versioning then rather than saying that > > > > > incompatible API changes should mean a name change? > > > > > > > > > > > > > Forcing a name change feels ugly as all hell. I don't really see what > > > > parallel installs has much to do with anything. I don't bundle anything > > > > and i'm ideologically opposed to it generally but I don't typically have > > > > a need for parallel installs because I use virtual environments. Why > > > > don't you utilize those? 
(Not being snarky, actually curious). > > > > > > > > > > > > > It's a fair question. > > > > > > To answer it with a question, how do you imagine Linux distributions > > > using virtual environments such that: > > > > > > $> yum install -y openstack-nova > > > > > > uses a virtual environment? How does it differ from bundling? (Not being > > > snarky, actually curious :) > > > > > > The approach that some Fedora folks are trying out is called "Software > > > Collections". It's not Python specific, but it's basically the same as a > > > virtual environment. > > > > > > For OpenStack, I think we'd probably have all the Python libraries we > > > require installed under e.g. /opt/rh/openstack-$version so that you > > > could have programs from two different releases of OpenStack installed > > > on the same system. > > > > > > Long time packagers are usually horrified at this idea e.g. > > > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/thread.html#174872 > > > > Yes, it's the eternal tension between "I only care about making a wide > > variety of applications on as easy to maintain on platform X as > > possible" view of the sysadmin and the "I only care about making > > application Y as easy to maintain on a wide variety of platforms as > > possible" view of the developer. > > > > Windows, Android, Mac OS X, etc, pretty much dial their software > > distribution model all the way towards the developer end of the > > spectrum. Linux distro maintainers need to realise that the language > > communities are almost entirely down the developer end of this > > spectrum, where sustainable cross-platform support is much higher > > priority than making life easier for administrators for any given > > platform. > > > > > I'm with you to there, but it's a bit of a vicious circle - app > developers bundle to insulate themselves from platform instability and > then the platform maintainers no longer see a benefit to platform > stability. 
> > > We're willing to work with distros to make deployment of > > security updates easier, but any proposals that involve people > > voluntarily making cross-platform development harder aren't > > going to be accepted. > > > > > Let's take it to an extreme and say there was some way to force library > maintainers to never make incompatible API changes without renaming the > project. Not what I'm suggesting, of course. > > Doing that would not make app developers' lives any/much harder since > they're bundling anyway. > > It does, however, make life more difficult for the platform maintainers. > > So, what I think you're really saying would be rejected is "any > proposals which make platform maintenance harder while not providing a > material benefit to app maintainers who bundle". > > The concrete thing we're discussing here is that distros get screwed if > the incompatible API changes are commonplace and there is no easy way > for the distro to ship/install multiple versions of the same API without > going down the route of every app in the distro bundling their own > version of the API. > > I don't see why that's a problem that necessarily requires disrupting > cross-platform app authors in order to address. > > > > - How many of these 200 packages are essentially duplicates? Once > > > you go down the route of having applications bundle libraries like > > > this, there's going to basically be no sharing. > > > > > > > > > There's no sharing only if you *actually* bundle the dependencies into > > each virtualenv. While full bundling is the only mode pip currently > > implements, completely isolating each virtualenv, it doesn't *have* to > > work that way. In particular, PEP 426 offers the opportunity to add a > > "compatible release" mode to pip/virtualenv where the tool can > > maintain a shared pool of installed libraries, and use *.pth files to > > make an appropriate version available in each venv.
Updating the > > shared version to a more recent release would then automatically > > update any venvs with a *.pth file that reference that release. > > > > For example, suppose an application requires "somedep (1.3)". This > > requires at least version 1.3, and won't accept 2.0. The latest > > available qualifying version might be "1.5.3". > > > > At the moment, pip will install a *copy* of somedep 1.5.3 into the > > application's virtualenv. However, it doesn't have to do that. > > > > > Awesome! Now we're getting on to figuring out a solution to the > "parallel installs of multiple incompatible versions" issue :) > > > It > > could, instead, install somedep 1.5.3 into a location like > > "/usr/lib/shared/pip-python/somedep1/", and then add a > > "somedep1.pth" file to the virtualenv that references > > "/usr/lib/shared/pip-python/somedep1/". > > > > Now, suppose we install another app, also using a virtualenv, that > > requires "somedep (1.6)". The new version 1.6.0 is available now, so > > we install it into the shared location and *both* applications will > > end up using somedep 1.6.0. > > > > A security update is released for "somedep" as 1.6.1 - we install it > > into the shared location, and now both applications are using 1.6.1 > > instead of 1.6.0. Yay, that's what we wanted, just as if we had > > runtime version selection, only the selection happens at install time > > (when adding the *.pth file to the virtualenv) rather than at > > application startup. > > > > Finally, we install a third application that needs "somedep (2.1)". We > > can't overwrite the shared version, because it isn't compatible. > > Fortunately, what we can do instead is install it to > > "/usr/lib/shared/pip-python/somedep2/" and create a > > "somedep2.pth" file in that environment. The two virtualenvs relying > > on "somedep1" are blissfully unaware anything has changed because that > > version never appears anywhere on their sys.path. 
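The shared-pool scheme quoted above can be sketched in a few lines of Python. This is a minimal illustration only, not anything pip actually implements: the temporary directories, the `somedep` package, and the pool/venv paths are all invented stand-ins for the layout Nick describes (`/usr/lib/shared/pip-python/somedep1/` plus a `somedep1.pth` file in the venv).

```python
import site
import sys
import tempfile
from pathlib import Path

# All paths below are hypothetical stand-ins for the layout quoted above:
#   /usr/lib/shared/pip-python/somedep1/  ->  pool
#   <venv>/site-packages/                 ->  venv_site
root = Path(tempfile.mkdtemp())

# The shared pool holds the single installed copy of the somedep 1.x stream.
pool = root / "shared" / "somedep1"
(pool / "somedep").mkdir(parents=True)
(pool / "somedep" / "__init__.py").write_text("__version__ = '1.5.3'\n")

# The venv gets no copy of somedep at all - just a one-line somedep1.pth
# file naming the pool directory.
venv_site = root / "venv" / "site-packages"
venv_site.mkdir(parents=True)
(venv_site / "somedep1.pth").write_text(str(pool) + "\n")

# Processing the venv's site dir reads the *.pth file and appends the
# shared pool directory to sys.path.
site.addsitedir(str(venv_site))

import somedep
print(somedep.__version__)  # the shared 1.5.3, not a per-venv copy
```

Upgrading the single copy in the pool (e.g. to 1.6.1) would then be picked up by every venv whose `somedep1.pth` points there, which is the security-update property being discussed.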
> > > > Could you use this approach for the actual system site-packages > > directory? No, because sys.path would become insanely long with that > > many *.pth files. However, you could likely symlink to releases stored > > in the *.pth friendly distribution store. But for application specific > > virtual environments, it should be fine. > > > > > Ok, there's a tonne of details there about pip, virtualenv and .pth > files that are going over my head right now, but the general idea I'm > taking away is: > > - the system has multiple versions of somedep installed under /usr > somewhere > > - the latest version (2.1) is what you get if you naively just do > 'import somedep' > > - most applications should instead somehow explicitly say they need > ~>1.3, ~>1.6 or ~>2.0 or whatever > > - distros would have multiple copies of the same library, but only > one copy for each incompatible stream rather than one copy for each > application > > That's definitely workable. > > > If any distros want that kind of thing to become a reality, though, > > they're going to have to step up and contribute it. As noted above, > > for the current tool development teams, the focus is on distributing, > > maintaining and deploying cross-platform applications, not making it > > easy to do security updates on a Linux distro. I believe it's possible > > to satisfy both parties, but it's going to be up to the distros to > > offer a viable plan for meeting their needs without disrupting > > existing upstream practices. > > > > > Point well taken. > > However, I am surprised the pendulum has swung such that Python platform > maintainers only worry about app maintainers and not distro maintainers. > > And also, Python embracing incompatible updates and bundling is either a > new upstream practice or just one that isn't well understood on the distro > side.
> > > I will note that making this kind of idea more feasible is one of the > > reasons I am making "compatible release" the *default* in PEP 426 > > version specifiers, but it still needs people to actually figure out > > the details and write the code. > > > > > I still think that going down this road without the parallel installs > issue solved is a dangerous move for Python. Leaving aside pain for > distros for a moment, there was a perception (perhaps misguided) that > Python was a more stable platform than e.g. Ruby or Java. > > If Python library maintainers see PEP426 as a license to make > incompatible changes more often so long as they bump their major number, > then that perception will change. > > I still don't really see how this is related to PEP426 unless PEP426 has gotten a lot larger since I last looked at it. Where in particular a distribution gets installed is left up to the installers to sort out. And making sure that the installed versions exist in sys.path is similarly out of scope for PEP426. > > > I will also note that the filesystem layout described above is *far* > > more amenable to safe runtime selection of packages than the current > > pkg_resources method. The critical failure mode in pkg_resources that > > can lead to a lot of weirdness is that it can end up pushing > > site-packages itself on to the front of sys.path which can shadow a > > *lot* of modules (in particular, an installed copy of the software > > you're currently working on may shadow the version in your source > > checkout - this is the bug the patch I linked earlier was needed to > > resolve). Runtime selection would need more work than getting virtual > > environments to work that way, but it's certainly feasible once the > > installation layout is updated. > > > > > Ok. > > > > - What's the chance that all of these 200 packages will be kept > > > up to date?
If an application works with a given version of a > > > library and it can stick with that version, it will. As a Python > > > library maintainer, how do you like the idea of 10 different > > > versions of your library included in Fedora? > > > > > > > > > That's a problem the distros need to manage by offering patches to how > > virtual environments and installation layouts work, rather than > > lamenting the fact that cross-platform developers and distro > > maintainers care about different things. > > > > > I'm not lamenting what cross-platform developers care about. I'm > lamenting that the Python platform maintainers care more about the > cross-platform developers than distro maintainers :) > > > > - The next time a security issue is found in a common Python library, > > > does Fedora now have to rush out 10 parallel fixes for it? > > > > > > > > > Not if Fedora contributes the changes needed to support parallel > > installs without requiring changes to existing Python applications and > > libraries. > > > > > "Patches welcome" - I get it. > > > > You can see that reaction in mails like this: > > > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174944.html > > > > > > and the "why can't these losers just maintain compatibility" view: > > > > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/175028.html > > > http://lists.fedoraproject.org/pipermail/devel/2012-December/174929.html > > > > > > Notice folks complaining about Ruby and Java here, not Python. I can see > > > Python embracing semantic versioning and "just use venv" shortly leading > > > to Python being included in the list of "heretics". > > > > > > > > > Unlike Java, the Python community generally sees *actual* bundling as > > evil > > > > > I think what you call "*actual* bundling" is what I think of as > "vendorisation" - i.e. where an app actually copies a library into its > source tree?
> > By bundling, I mean that an app sees itself as in control of the > versions of its dependencies. The app developer fundamentally thinks she > is delivering a specific stack of dependencies and her application code > on top rather than installing just their app and running it on a stable > platform. > > In this case the app would be one unit and an upgrade in one of the bundled dependencies would require a new version of the app. In my mind when an app either bundles or vendorizes they take responsibility for those dependencies and making sure they are up to date because they are now part of the logical unit that is the app. > > > - expressing constraints relative to a published package index is > > a different thing. Dependencies in Python are typically only brought > > together into a cohesive, pinned set of versions by application > > developers and system integrators - the frameworks and libraries often > > express quite loose version requirements (and receive complaints if > > they're overly restrictive). > > > > The distros just have a harder problem than most because the set of > > packages they're trying to bring together is so large, they're bound > > to run into many cases of packages that have mutually incompatible > > dependencies. > > > > > It only takes two apps requiring two incompatible versions of the same > library for this to become an issue. > > A specific example that concerns OpenStack is you will often want server > A from version N installed alongside server B from version N+1. This is > especially true while you're migrating your deployment from version N to > N+1 since you probably want to upgrade a server at a time. > > Thus, in OpenStack's case, it only takes one of our dependencies to > release an incompatible version for this to become an issue.
> > Python can be a stable platform and OpenStack wouldn't bundle, or it can > be an unstable platform without parallel installs and OpenStack will > bundle, or it can be an unstable platform with parallel installs and > OpenStack won't have to bundle. > > Anyway, sounds like we have some ideas for parallel installs we can > investigate. > > Thanks, > Mark. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markmc at redhat.com Tue Mar 5 13:50:45 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Tue, 05 Mar 2013 12:50:45 +0000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> Message-ID: <1362487845.5909.319.camel@sorcha> On Tue, 2013-03-05 at 07:34 -0500, Donald Stufft wrote: > On Tuesday, March 5, 2013 at 7:27 AM, Mark McLoughlin wrote: > > On Tue, 2013-03-05 at 17:56 +1000, Nick Coghlan wrote: ... > > > I will note that making this kind of idea more feasible is one of > > > the > > > reasons I am making "compatible release" the *default* in PEP 426 > > > version specifiers, but it still needs people to actually figure > > > out > > > the details and write the code. > > > > > > I still think that going down this road without the parallel > > installs > > issue solved is a dangerous move for Python. Leaving aside pain for > > distros for a moment, there was a perception (perhaps misguided) > > that > > Python was a more stable platform that e.g. Ruby or Java. > > > > > > If Python library maintainers will see PEP426 as a license to make > > incompatible changes more often so long as they bump their major > > number, > > then that perception will change. > I still don't really see how this is related to PEP426 unless PEP426 > has gotten > a lot larger since I last looked at it. 
Where in particular a > distribution gets > installed is left up to the installers to sort out. And making sure > that the installed > versions exist in sys.path is similarly out of scope for PEP426. Sorry, maybe I'm being obtuse. I can see people read PEP426 and thinking "oh, awesome! Python now has a mechanism for handling incompatible API changes! Now I can get rid of that crufty backwards compat code and bump my major number!". My point is that it's (potentially) damaging to send that message to library maintainers before Python has the infrastructure for sanely dealing with parallel installs. ... > > I think what you call "*actual* bundling" is what I think of as > > "vendorisation" - i.e. where an app actually copies a library into > > its > > source tree? > > > > > > By bundling, I mean that an app sees itself as in control of the > > versions of its dependencies. The app developer fundamentally thinks > > she > > is delivering a specific stack of dependencies and her application > > code > > on top rather than installing just their app and running it on a > > stable > > platform. > In this case the app would be one unit and an upgrade in one of the > bundled dependencies would require a new version of the App. In my > mind when an app either bundles or vendorizes they take responsibility > for those dependencies and making sure they are up to date because > they are now part of the logical unit that is the app. Right, exactly - the idea Nick lays out allows the dependencies to be managed by the system platform maintainer. That would be a huge improvement over bundling. Cheers, Mark. 
From donald.stufft at gmail.com Tue Mar 5 13:55:16 2013 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 5 Mar 2013 07:55:16 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362487845.5909.319.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> <1362487845.5909.319.camel@sorcha> Message-ID: On Tuesday, March 5, 2013 at 7:50 AM, Mark McLoughlin wrote: > > I still don't really see how this is related to PEP426 unless PEP426 > > has gotten > > a lot larger since I last looked at it. Where in particular a > > distribution gets > > installed is left up to the installers to sort out. And making sure > > that the installed > > versions exist in sys.path is similarly out of scope for PEP426. > > > > > Sorry, maybe I'm being obtuse. > > I can see people read PEP426 and thinking "oh, awesome! Python now has a > mechanism for handling incompatible API changes! Now I can get rid of > that crufty backwards compat code and bump my major number!". > > My point is that it's (potentially) damaging to send that message to > library maintainers before Python has the infrastructure for sanely > dealing with parallel installs. Gotcha, you think that codifying how to version with regards to breaking API compatibility will lead to more people breaking backwards compatibility. That's a fair concern, and there's not much that can be done inside of PEP426 to alleviate it. However I will say that PEP426 doesn't really contain much in the way of new ideas, but rather codifies a lot of existing practices within the Python community so that tools can get simpler and more accurate without having to resort to heuristics and guessing. 
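The convention under discussion here - a major version bump declares incompatibility, while anything else is expected to stay compatible - amounts to a small matching rule. A rough sketch of that rule, using the somedep numbers from Nick's example earlier in the thread (deliberately simplified: real PEP 426/440 specifier implementations also handle pre-releases, epochs, and other edge cases):

```python
def compatible_release(candidate: str, required: str) -> bool:
    """Does `candidate` satisfy a compatible-release constraint on `required`?

    Sketch of the semantics discussed in the thread: a requirement of
    "somedep (1.3)" means "at least 1.3, but still in the 1.x series".
    """
    cand = tuple(int(part) for part in candidate.split("."))
    floor = tuple(int(part) for part in required.split("."))
    # Must stay within the same release series: every component of the
    # required version except the last has to match exactly...
    if cand[: len(floor) - 1] != floor[:-1]:
        return False
    # ...and the candidate must be at or above the stated minimum.
    return cand >= floor


# The thread's example: an app requiring "somedep (1.3)"
print(compatible_release("1.5.3", "1.3"))  # True  - accepted
print(compatible_release("1.6.1", "1.3"))  # True  - security update, still fine
print(compatible_release("2.0", "1.3"))    # False - incompatible stream
```

Pinning, by contrast, is plain equality ("==") and is what the install-time selection between `somedep1` and `somedep2` pools would rely on this rule to avoid.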
You could also argue that this would _help_ with backwards compatibility because there is now a suggested way of declaring when 2 releases are no longer compatible by incrementing the major version number. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 5 13:55:55 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 22:55:55 +1000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> Message-ID: On Tue, Mar 5, 2013 at 10:34 PM, Donald Stufft wrote: > On Tuesday, March 5, 2013 at 7:27 AM, Mark McLoughlin wrote: >> If Python library maintainers will see PEP426 as a license to make >> incompatible changes more often so long as they bump their major number, >> then that perception will change. > > I still don't really see how this is related to PEP426 unless PEP426 has > gotten > a lot larger since I last looked at it. Where in particular a distribution > gets > installed is left up to the installers to sort out. And making sure that the > installed > versions exist in sys.path is similarly out of scope for PEP426. Mark is worried that explicitly endorsing semantic versioning in PEP 426 will encourage package developers to gleefully break backwards compatibility whenever they want to, so the situation quickly degenerates into a mess of incompatible version requirements as you move higher up the stack. If such a situation occurs, then it is a potential problem for large applications like OpenStack (with a deep-and-broad dependency stack) and especially for full Linux distributions that are trying to jam large fractions of PyPI (plus their own distro-specific code) into a single coherent dependency tree. 
That's actually the opposite of the intent, though - as I see it, while most Python developers are doing the right thing and offering reasonable deprecation periods, there are also some cases where backwards compatibility is broken without suitable advance notice, or people aren't aware of the implications of depending on 0.x releases, or, as recently happened with the requests 0.9 -> 1.0 transition, developers bump their minimum required version of a dependency in an incompatible way. I don't expect a sudden stampede of "Hey, there's an official way to indicate I'm breaking backwards compatibility, I'm going to do it when I wouldn't have before!", no matter what any PEP says. However, I also see dependency management as primarily an integrator's problem, and something to be solved primarily external to the libraries themselves, through virtual environments, *.pth files and appropriate installation layouts. The only responsibility I see as lying with the upstream library and application developers is to accurately declare their dependencies (and to make them as broad as is reasonable). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Mar 5 14:03:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 23:03:18 +1000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> <1362487845.5909.319.camel@sorcha> Message-ID: On Tue, Mar 5, 2013 at 10:55 PM, Donald Stufft wrote: > On Tuesday, March 5, 2013 at 7:50 AM, Mark McLoughlin wrote: > > I still don't really see how this is related to PEP426 unless PEP426 > has gotten > a lot larger since I last looked at it. Where in particular a > distribution gets > installed is left up to the installers to sort out. 
And making sure > that the installed > versions exist in sys.path is similarly out of scope for PEP426. > > > Sorry, maybe I'm being obtuse. > > I can see people read PEP426 and thinking "oh, awesome! Python now has a > mechanism for handling incompatible API changes! Now I can get rid of > that crufty backwards compat code and bump my major number!". > > My point is that it's (potentially) damaging to send that message to > library maintainers before Python has the infrastructure for sanely > dealing with parallel installs. > > Gotcha, you think that codifying how to version with regards to breaking > API compatibility will lead to more people breaking backwards compatibility. > > That's a fair concern, and there's not much that can be done inside of > PEP426 > to alleviate it. However I will say that PEP426 doesn't really contain much > in the way of new ideas, but rather codifies a lot of existing practices > within the > Python community so that tools can get simpler and more accurate without > having to resort to heuristics and guessing. > > You could also argue that this would _help_ with backwards compatibility > because > there is now a suggested way of declaring when 2 releases are no longer > compatible > by incrementing the major version number. Yeah, I think if we can come up with a clear plan whereby distros can create a suitable installation layout such that parallel versions can be installed and imported, even for Python libraries that have no idea parallel installation is possible, it should alleviate a lot of the concerns. My main point is that most Python software assumes there will only be one version of a given library on sys.path, so relying on the libraries themselves to specify *at runtime* which version they want isn't going to be workable. However, the install time metadata is a much better candidate for doing something useful (and is also a bit more amenable to being adjusted by the distro maintainers when it is overly restrictive). 
Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Mar 5 14:28:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 5 Mar 2013 08:28:47 -0500 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362487845.5909.319.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> <1362487845.5909.319.camel@sorcha> Message-ID: On Tue, Mar 5, 2013 at 7:50 AM, Mark McLoughlin wrote: > On Tue, 2013-03-05 at 07:34 -0500, Donald Stufft wrote: >> On Tuesday, March 5, 2013 at 7:27 AM, Mark McLoughlin wrote: > >> > On Tue, 2013-03-05 at 17:56 +1000, Nick Coghlan wrote: > > ... >> > > I will note that making this kind of idea more feasible is one of >> > > the >> > > reasons I am making "compatible release" the *default* in PEP 426 >> > > version specifiers, but it still needs people to actually figure >> > > out >> > > the details and write the code. >> > >> > >> > I still think that going down this road without the parallel >> > installs >> > issue solved is a dangerous move for Python. Leaving aside pain for >> > distros for a moment, there was a perception (perhaps misguided) >> > that >> > Python was a more stable platform that e.g. Ruby or Java. >> > >> > >> > If Python library maintainers will see PEP426 as a license to make >> > incompatible changes more often so long as they bump their major >> > number, >> > then that perception will change. >> I still don't really see how this is related to PEP426 unless PEP426 >> has gotten >> a lot larger since I last looked at it. Where in particular a >> distribution gets >> installed is left up to the installers to sort out. And making sure >> that the installed >> versions exist in sys.path is similarly out of scope for PEP426. > > Sorry, maybe I'm being obtuse. 
> > I can see people read PEP426 and thinking "oh, awesome! Python now has a > mechanism for handling incompatible API changes! Now I can get rid of > that crufty backwards compat code and bump my major number!". It will have exactly the opposite effect. Semver is awesome because it encourages the developer to first define an API at all, then think about whether each new release breaks the API, and last encode that information in the version number instead of incrementing each component by whatever random number feels right. All that extra thinking will lead to better APIs. You will be amazed at how much backwards compatible cruft some developers will keep around rather than endure the psychological anguish of bumping the major version number. From p.f.moore at gmail.com Tue Mar 5 14:36:12 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 5 Mar 2013 13:36:12 +0000 Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments Message-ID: What I'm doing is checking out distribute, then trying to install it into a temporary directory. However, when I do this, I seem to be getting the files in python 2 form, *not* having been converted using 2to3. The resulting build is therefore broken. Can anyone suggest what might be going on here? I've been looking at this for hours now, and I'm no nearer to working out what is going on :-( I'm running on Windows 7, 64-bit, using Python 3.3 in Powershell. The same happens in cmd.exe and Python 3.2. There are NO packages installed in my Python installation, it's a straight install from the python.org installer. 
PS 13:26 C:\Work\Scratch >hg clone http://bitbucket.org/tarek/distribute dtest real URL is https://bitbucket.org/tarek/distribute requesting all changes adding changesets adding manifests adding file changes added 1160 changesets with 2491 changes to 443 files updating to branch default 98 files updated, 0 files merged, 0 files removed, 0 files unresolved PS 13:26 C:\Work\Scratch >cd dtest PS 13:26 C:\Work\Scratch\dtest >mkdir a Directory: C:\Work\Scratch\dtest Mode LastWriteTime Length Name ---- ------------- ------ ---- d---- 05/03/2013 13:26 a PS 13:26 C:\Work\Scratch\dtest >py setup.py install --install-purelib=aa\purelib --install-platlib=aa\platlib --install-scripts=aa\scripts --install-headers=aa\headers --install-data=aa\data [... lots of setup.py output clipped ...] byte-compiling aa\purelib\setuptools\tests\test_sdist.py to test_sdist.cpython-33.pyc File "aa\purelib\setuptools\tests\test_sdist.py", line 152 except UnicodeDecodeError, e: ^ SyntaxError: invalid syntax byte-compiling aa\purelib\setuptools\tests\test_test.py to test_test.cpython-33.pyc byte-compiling aa\purelib\setuptools\tests\test_upload_docs.py to test_upload_docs.cpython-33.pyc byte-compiling aa\purelib\setuptools\tests\__init__.py to __init__.cpython-33.pyc byte-compiling aa\purelib\setuptools\__init__.py to __init__.cpython-33.pyc byte-compiling aa\purelib\site.py to site.cpython-33.pyc byte-compiling aa\purelib\_markerlib\markers.py to markers.cpython-33.pyc byte-compiling aa\purelib\_markerlib\__init__.py to __init__.cpython-33.pyc running install_egg_info Writing aa\purelib\distribute-0.6.36-py3.3.egg-info After install bootstrap. 
Creating aa\purelib\setuptools-0.6c11-py3.3.egg-info Creating aa\purelib\setuptools.pth PS 13:27 C:\Work\Scratch\dtest > It's that SyntaxError that confirms that the generated files are invalid :-( Paul From ncoghlan at gmail.com Tue Mar 5 14:49:39 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 23:49:39 +1000 Subject: [Distutils] Library instability on PyPI and impact on OpenStack In-Reply-To: <1362486467.5909.310.camel@sorcha> References: <1362065981.2370.95.camel@sorcha> <1362326073.5909.12.camel@sorcha> <1362418605.5909.163.camel@sorcha> <1362436196.5909.222.camel@sorcha> <1362486467.5909.310.camel@sorcha> Message-ID: On Tue, Mar 5, 2013 at 10:27 PM, Mark McLoughlin wrote: > Ok, there's a tonne of details there about pip, virtualenv and .pth > files that are going over my head right now, but the general idea I'm > taking away is: > > - the system has multiple versions of somedep installed under /usr > somewhere > > - the latest version (2.1) is what you get if you naively just do > 'import somedep' > > - most applications should instead somehow explicitly say they need > ~>1.3, ->1.6 or ~->2.0 or whatever > > - distros would have multiple copies of the same library, but only > one copy for each incompatible stream rather than one copy for each > application > > That's definitely workable. That's the general idea. There's a fair bit of work needed to get there though, and the on-disk layouts for the distros would look fairly different from the current approach of just dumping everything in Python's site-packages directory. It also won't solve *all* compatibility problems, so the distros may still be on the hook to carry some compatibility patches in order to get a common dependency that works for everyone. 
(To be honest, I expect software collections will end up needing a similar capability, for all the reasons you gave earlier in response to the suggestion of just using virtualenv with the existing simple bundling approach) >> If any distros want that kind of thing to become a reality, though, >> they're going to have to step up and contribute it. As noted above, >> for the current tool development teams, the focus is on distributing, >> maintaining and deploying cross-platform applications, not making it >> easy to do security updates on a Linux distro. I believe it's possible >> to satisfy both parties, but it's going to be up to the distros to >> offer a viable plan for meeting their needs without disrupting >> existing upstream practices. > > Point well taken. > > However, I am surprised the pendulum has swung such that Python platform > maintainers only worry about app maintainers and not distro maintainers. If you saw my rant on the Fedora python-devel list, you may have guessed that a lot of the expressed frustration with the distro approach and the lack of willingness to understand why bundling is such an attractive option for cross-platform development comes from me personally, rather than the Python community in general. However, there's also a general dearth of distro people on the upstream packaging lists to balance the web app developers - I'm one of the ones that is *most sympathetic* to the distro point of view, and that should worry you a bit. > And also, Python embracing incompatible updates and bundling is either a > new upstream practice or just that doesn't well understood on the distro > side. The popularity of bundling is primarily in devops and integrated application stacks. You can't deploy on Windows without bundling, and when you have to manage your own security updates for other platforms anyway, the distros refusing to accommodate that model is just frustrating. 
The distro packagers say "we don't trust the developers of all of the apps we ship to provide timely security fixes" and the app developers say "we don't trust the packagers of all of the distros we support to provide timely security fixes" and everybody gets annoyed with each other because their concerns and priorities are so wildly divergent. And we're definitely *not* embracing incompatible updates, we're acknowledging their inevitable existence in volunteer driven projects, and defining a way to at least communicate them clearly when they happen. >> I will note that making this kind of idea more feasible is one of the >> reasons I am making "compatible release" the *default* in PEP 426 >> version specifiers, but it still needs people to actually figure out >> the details and write the code. > > I still think that going down this road without the parallel installs > issue solved is a dangerous move for Python. Leaving aside pain for > distros for a moment, there was a perception (perhaps misguided) that > Python was a more stable platform that e.g. Ruby or Java. > > If Python library maintainers will see PEP426 as a license to make > incompatible changes more often so long as they bump their major number, > then that perception will change. This isn't a concern I had considered, but I'll review the wording in the PEP with that in mind (there's a bunch of other stuff I need to fix in the version scheme description anyway). > By bundling, I mean that an app sees itself as in control of the > versions of its dependencies. The app developer fundamentally thinks she > is delivering a specific stack of dependencies and her application code > on top rather than installing just their app and running it on a stable > platform. 
I'm actually trying to push *against* that by making the "compatible release" specifier the default in PEP 426 rather than requiring a separate operator (if you look at PEP 345, the predecessor that defines metadata 1.2, it pinned dependencies by default). Actual pinning (with "==") should be limited to devops-type situations, where an application really is explicitly controlling its entire stack, but that's not the kind of metadata anyone should be publishing on PyPI. I can see how the perception of the current PEP might be different without that background of knowing precisely what the previous version said, though. > Anyway, sounds like we have some ideas for parallel installs we can > investigate. Yes, I'm definitely not opposed to the idea of parallel installs - I'm just opposed to the idea of parallel install systems that rely on changes to PyPI packages in order for them to work properly. Cheers, Nick.
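The "compatible release" semantics described above were later standardised by PEP 440 as the `~=` operator. A simplified illustration of the difference between a compatible-release match and a hard `==` pin (this toy helper handles only plain dotted versions, not the full specifier grammar with pre-releases and epochs):

```python
def compatible_release(candidate, baseline):
    """Simplified '~=' (compatible release) check: the candidate must
    compare >= the baseline and must match it in every segment except
    the last one given. Real specifier parsing is more involved."""
    cand = [int(p) for p in candidate.split(".")]
    base = [int(p) for p in baseline.split(".")]
    prefix = base[:-1]  # everything except the last segment must match
    return cand[:len(prefix)] == prefix and cand >= base

# "~= 1.4.2": any 1.4.x with x >= 2, but never 1.5 or 2.0.
print(compatible_release("1.4.5", "1.4.2"))  # True
print(compatible_release("1.5.0", "1.4.2"))  # False
# An "==" pin, by contrast, admits exactly one version - appropriate
# for a deployed application stack, not for metadata published on PyPI.
```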
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pje at telecommunity.com Tue Mar 5 15:59:12 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 5 Mar 2013 09:59:12 -0500 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 3:20 PM, Paul Moore wrote: > On 4 March 2013 20:00, Daniel Holth wrote: >> On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: >>> In thinking about how virtualenv would describe the packages it wants >>> to preload in PEP 426 metadata form, it occurred to me that there are >>> scenarios with setuptools and distribute where it's not obvious how to >>> state the requirement you want. Specifically, if you want to install >>> setuptools if it is present, but if not fall back to distribute (for >>> example, if you have a local package repository and no access to PyPI, >>> but setuptools may or may not be present). > [...] >> >> We do have Provides-Dist, although the best way to implement it is an >> open question. > > Good point. So distribute would have "Provides-Dist: setuptools" and I > could just require setuptools. Given that none of this is supported > yet, I'm happy that the spec covers this case, but still need to work > around it for the immediate future. Provides-Dist doesn't actually work for most of the use cases for alternates, though. For example, if a package wants to require one of the various mysql adapters, it doesn't make any sense for the mysql packages to declare that they provide each other.
;-) From ncoghlan at gmail.com Tue Mar 5 23:39:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Mar 2013 08:39:09 +1000 Subject: [Distutils] PEP 426 (Metadata 2.0) - Requires-Dist and setuptools/distribute In-Reply-To: References: Message-ID: On 6 Mar 2013 00:59, "PJ Eby" wrote: > > On Mon, Mar 4, 2013 at 3:20 PM, Paul Moore wrote: > > On 4 March 2013 20:00, Daniel Holth wrote: > >> On Mon, Mar 4, 2013 at 2:41 PM, Paul Moore wrote: > >>> In thinking about how virtualenv would describe the packages it wants > >>> to preload in PEP 426 metadata form, it occurred to me that there are > >>> scenarios with setuptools and distribute where it's not obvious how to > >>> state the requirement you want. Specifically, if you want to install > >>> setuptools if it is present, but if not fall back to distribute (for > >>> example, if you have a local package repository and no access to PyPI, > >>> but setuptools may or may not be present). > > [...] > >> > >> We do have Provides-Dist, although the best way to implement it is an > >> open question. > > > > Good point. So distribute would have "Provides-Dist: setuptools" and I > > could just require setuptools. Given that none of this is supported > > yet, I'm happy that the spec covers this case, but still need to work > > around it for the immediate future. > > Provides-Dist doesn't actually work for most of the use cases for > alternates, though. For example, if a package that wants to require > one of the various mysql adapters, it doesn't make any sense for the > mysql packages to declare that they provide each other. ;-) The adapter developers just need to agree on a virtual provides they will all publish. Of course, that won't be effective without an entry points extension to get the adapters loaded in a consistent fashion. Still, one step at a time :) Cheers, Nick. 
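The "virtual provides" idea Nick suggests can be sketched with a toy index (the package names and the `provides` lists below are illustrative, not real PyPI metadata; how Provides-Dist should actually be implemented is, as Daniel notes, still an open question):

```python
# Toy index: each dist lists what it provides, so several MySQL
# adapters can all publish the same agreed virtual name without
# claiming to provide each other.
INDEX = {
    "mysqlclient": {"provides": ["mysqlclient", "mysql-adapter"]},
    "pymysql":     {"provides": ["pymysql", "mysql-adapter"]},
    "oursql":      {"provides": ["oursql", "mysql-adapter"]},
}

def candidates_for(requirement, index=INDEX):
    """Return every dist whose provides metadata satisfies `requirement`."""
    return sorted(name for name, meta in index.items()
                  if requirement in meta["provides"])

print(candidates_for("mysql-adapter"))  # ['mysqlclient', 'oursql', 'pymysql']
print(candidates_for("pymysql"))        # ['pymysql']
```

A requirement on the virtual name is satisfiable by any adapter, while a requirement on a concrete name still selects exactly one dist.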
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Mar 6 11:28:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 6 Mar 2013 10:28:56 +0000 Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments In-Reply-To: References: Message-ID: On 5 March 2013 13:36, Paul Moore wrote: > What I'm doing is checking out distribute, then trying to install it > into a temporary directory. However, when I do this, I seem to be > getting the files in python 2 form, *not* having been converted using > 2to3. The resulting build is therefore broken. > > Can anyone suggest what might be going on here? I've been looking at > this for hours now, and I'm no nearer to working out what is going on > :-( Any thoughts, anyone? Are there any distribute experts around? At the moment, the only option I can find for getting an "uninstalled" distribute is setup.py build and then grab the contents of build/lib. And that's a really ugly hack :-( Paul From vinay_sajip at yahoo.co.uk Wed Mar 6 12:22:24 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 6 Mar 2013 11:22:24 +0000 (UTC) Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments References: Message-ID: Paul Moore gmail.com> writes: > > On 5 March 2013 13:36, Paul Moore gmail.com> wrote: > > What I'm doing is checking out distribute, then trying to install it > > into a temporary directory. However, when I do this, I seem to be > > getting the files in python 2 form, *not* having been converted using > > 2to3. The resulting build is therefore broken. > > > > Can anyone suggest what might be going on here?
I've been looking at > > this for hours now, and I'm no nearer to working out what is going on > > > > Any thoughts, anyone? Are there any distribute experts around? > > At the moment, the only option I can find for getting an "uninstalled" > distribute is setup.py build and then grab the contents of build/lib. > And that's a really ugly hack > If it feels like you're yak shaving and you'd rather not be, you could look at distribute3 [1], which doesn't need 2to3. It was synchronised with distribute in Jan 2013, so it might work for you if you don't need some more recent functionality/bug-fix. Regards, Vinay Sajip [1] https://bitbucket.org/vinay.sajip/distribute3 From marius at pov.lt Wed Mar 6 12:46:29 2013 From: marius at pov.lt (Marius Gedminas) Date: Wed, 6 Mar 2013 13:46:29 +0200 Subject: [Distutils] [venv] distribute-0.6.35 fails on python 3.3? In-Reply-To: References: <512C5455.2080300@simplistix.co.uk> <20130226092006.GA9967@fridge.pov.lt> Message-ID: <20130306114629.GA4522@fridge.pov.lt> On Tue, Feb 26, 2013 at 06:52:41PM +0000, Vinay Sajip wrote: > > > How hard would it be to rewrite distribute to use a common language > > > subset instead of relying on 2to3? > > I did this months ago, when doing PEP 405 venv development was getting > old fast because of the 2to3 delay: > > https://bitbucket.org/vinay.sajip/distribute3 > > It's been mentioned on this list, but none of the distribute maintainers > appear to have picked up on it. I filed an upstream issue to bring it to their attention: https://bitbucket.org/tarek/distribute/issue/356/installation-of-distribute-is-slow-on > It's been a short while (10 Jan 2013) since I synchronised with the > main repo, but it typically doesn't take all that long to do. > > All the tests pass whenever I merge upstream changes, but I'm not sure > how much confidence the maintainers have about test coverage and > whether all tests passing means enough for them to release into the > wild. 
Marius Gedminas -- C is quirky, flawed, and an enormous success. -- Dennis M. Ritchie -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From p.f.moore at gmail.com Wed Mar 6 12:57:31 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 6 Mar 2013 11:57:31 +0000 Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments In-Reply-To: References: Message-ID: On 6 March 2013 11:22, Vinay Sajip wrote: > If it feels like you're yak shaving and you'd rather not be, you could look at > distribute3 [1], which doesn't need 2to3. It was synchronised with distribute in > Jan 2013, so it might work for you if you don't need some more recent > functionality/bug-fix. Thanks. I had tried distribute3 last night and it failed. But it works today... :-( There seem to be some annoyingly subtle environmental dependencies which affect how distribute's setup.py behaves, but I have no idea what's going on, or how to get to the bottom of it. If I do the build from a virtualenv with distribute installed, everything works fine. So I think the conclusion I have to come to is that to do anything other than install distribute from source, I need to be working in an environment (probably a virtualenv) with distribute already installed. That's a nuisance for what I'm trying to achieve, as it has bootstrapping consequences, but if that's what's needed, so be it. Thanks for the help in any case, Paul From regebro at gmail.com Wed Mar 6 13:24:18 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 6 Mar 2013 13:24:18 +0100 Subject: [Distutils] distribute-0.6.35 fails on python 3.3? In-Reply-To: <512C5455.2080300@simplistix.co.uk> References: <512C5455.2080300@simplistix.co.uk> Message-ID: On Tue, Feb 26, 2013 at 7:21 AM, Chris Withers wrote: > Hi All, > > Can anyone else reproduce this?
Essentially pip cannot upgrade distribute or setuptools on Python 3. This is annoying, and it seems the easiest way to solve it is to stop using 2to3 for distribute. This is a massive change though, and since the test coverage is dismal it's going to create loads of bugs. But perhaps we simply need to accept that. //Lennart From vinay_sajip at yahoo.co.uk Wed Mar 6 13:50:49 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 6 Mar 2013 12:50:49 +0000 (GMT) Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments In-Reply-To: References: Message-ID: <1362574249.71067.YahooMailNeo@web171402.mail.ir2.yahoo.com> > From: Paul Moore > Thanks. I had tried distribute3 last night and it failed. But it works > today... :-( I've just synchronised distribute3 with distribute again. All tests pass on 2.7 and 3.2 (I wasn't able to test with 3.3) - you might want to check the latest version. > If I do the build from a virtualenv with distribute installed, > everything works fine. So I think the conclusion I have to come to is > that to do anything other than install distribute from source, I need > to be working in an environment (probably a virtualenv) with > distribute already installed. That's a nuisance for what I'm trying to > achieve, as it has bootstrapping consequences, but if that's what's > needed, so be it. Still no joy with the uninstalled setuptools, then?
It seemed to work for me without any special work, but that was on Linux. Yeah, again it seems to be subtle environmental issues. Sometimes it works, sometimes it doesn't, never with a useful error message :-( It's mainly for setup.py - the hooks that override normal distutils functionality seem to be the flaky bit. The errors generally seem to be coming from core distutils, when I should be running setuptools (things like "unknown option, --single-version-externally-managed" even though setuptools is on sys.path). Not worth wasting any more time over, in my opinion. Paul. From dholth at gmail.com Wed Mar 6 14:28:29 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 6 Mar 2013 08:28:29 -0500 Subject: [Distutils] distribute-0.6.35 fails on python 3.3? In-Reply-To: References: <512C5455.2080300@simplistix.co.uk> Message-ID: On Wed, Mar 6, 2013 at 7:24 AM, Lennart Regebro wrote: > On Tue, Feb 26, 2013 at 7:21 AM, Chris Withers wrote: >> Hi All, >> >> Can anyone else reproduce this? > > Essentially pip can not upgrade distribute or setuptools on Python 3. > > This is annoying, and it seems the easiest way to solve it is to stop > using 2to3 for distribute. > This is a massive change though, and since the test coverage is dismal > its' going to create loads of bugs. But perhaps we simply need to > accept that. It has always been problematic to upgrade distribute (a pip dependency) with pip. Under certain circumstances pip can uninstall distribute before trying to install it again, which fails because at that point distribute isn't installed. On Windows and on Python 3 the issue is worse due to 2to3 and file locking issues. Virtualenv is a more reliable way to upgrade distribute. 
From askedrelic at gmail.com Fri Mar 8 17:08:47 2013 From: askedrelic at gmail.com (Matt Behrens) Date: Fri, 8 Mar 2013 08:08:47 -0800 Subject: [Distutils] Add optional password_command .pypirc value Message-ID: After doing some research last night on storing/accessing passwords in the OSX Keychain (http://asktherelic.com/2013/03/07/storing-command-line-passwords-in-keychain/), I was curious why the .pypirc doesn't support something like this when asking for the password during 'upload', to not have your pypi password in plaintext on your system. As far as I can see from the source, the password is read straight from the .pypirc config: https://bitbucket.org/tarek/distribute/src/188dcdb7f0873f1b382e8bde65377c5f43266f9f/setuptools/command/upload.py?at=default#cl-66 and fails if the password value doesn't exist: https://bitbucket.org/tarek/distribute/issue/291/allow-password-to-be-omitted-from-pypirc I'm curious about implementing: 1. a password_command to support integration with external password tools (1password, keychain, keyring python lib) The implementation from the program I am trying to emulate, pianobar, is here: https://github.com/PromyLOPh/pianobar/blob/master/src/main.c#L135 just a /bin/sh for nix/osx. Could run cmd.exe for windows cross-platform compatibility. 2. better notification to the user about trying to upload with an empty password or using get_pass if empty password The only other reference to something like this is from several years ago here: http://bugs.python.org/issue4394 Does this seem like it's worth making a patch? -- Matt Behrens -------------- next part -------------- An HTML attachment was scrubbed... 
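Matt's proposed `password_command` option could look roughly like this (a sketch only: `password_command` is the suggested, not-yet-existing key, and this is written against modern Python's `configparser` rather than the distutils code linked above; the command is run through the shell, pianobar-style, and its stdout used as the password):

```python
import configparser
import subprocess

def get_pypi_password(pypirc_path):
    """Read .pypirc; if a password_command is set (proposed option, not
    part of distutils today), run it via the shell and use its stdout
    as the password. Otherwise fall back to the plaintext `password`."""
    cfg = configparser.ConfigParser()
    cfg.read(pypirc_path)
    section = cfg["pypi"]
    cmd = section.get("password_command")
    if cmd:
        # e.g. password_command = security find-generic-password -w -s pypi
        out = subprocess.run(cmd, shell=True, check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()
    return section.get("password")
```

Using `shell=True` mirrors pianobar's `/bin/sh` approach on *nix/OSX; on Windows the command string would go through `cmd.exe` instead.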
URL: From regebro at gmail.com Fri Mar 8 17:32:51 2013 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 8 Mar 2013 17:32:51 +0100 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: Message-ID: On Fri, Mar 8, 2013 at 5:08 PM, Matt Behrens wrote: > Does this seem like it's worth making a patch? Personally I think it's better to go the ssh way and support uploading via ssh with uploaded ssh keys, and deprecate the password support for uploading. That way there are no problems with integrating a bunch of different platform-specific keyring systems. That said, this is just me talking, and doing counts more. :-) //Lennart From donald at stufft.io Fri Mar 8 18:01:08 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 8 Mar 2013 12:01:08 -0500 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: Message-ID: On Mar 8, 2013, at 11:32 AM, Lennart Regebro wrote: > On Fri, Mar 8, 2013 at 5:08 PM, Matt Behrens wrote: >> Does this seem like it's worth making a patch? > > Personally I think it's better to go the ssh way and support > uploading via ssh with uploaded ssh keys, and deprecate the password > support for uploading. That way there are no problems with integrating > a bunch of different platform-specific keyring systems. I dislike hijacking SSH to tunnel a HTTP protocol over and adding more reliance on SSH keys means a lost SSH key becomes _even_ worse than it already is. I don't think the long-term answer is key rings either. > > That said, this is just me talking, and doing counts more. :-) > > //Lennart > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From regebro at gmail.com Fri Mar 8 18:47:51 2013 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 8 Mar 2013 18:47:51 +0100 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: Message-ID: On Fri, Mar 8, 2013 at 6:01 PM, Donald Stufft wrote: > I dislike hijacking SSH to tunnel a HTTP protocol over I'm not sure we have to hijack or tunnel anything. :-) > and adding more reliance on SSH keys means a lost SSH key becomes _even_ worse than it already is. I don't follow that argument. You can have separate keys in separate places if you like. //Lennart From donald at stufft.io Fri Mar 8 18:57:54 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 8 Mar 2013 12:57:54 -0500 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: Message-ID: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> On Mar 8, 2013, at 12:47 PM, Lennart Regebro wrote: > On Fri, Mar 8, 2013 at 6:01 PM, Donald Stufft wrote: >> I dislike hijacking SSH to tunnel a HTTP protocol over > > I'm not sure we have to hijack or tunnel anything. :-) If you're uploading via SSH you'll open a SSH tunnel and then POST to PyPI over that tunnel. > >> and adding more reliance on SSH keys means a lost SSH key becomes _even_ worse than it already is. > > I don't follow that argument. You can have separate keys in separate > places if you like. Ideally you can sure. Security that only deals in ideal and doesn't pay attention to what people will actually do in the general case is a problem. The general case people will reuse their typical SSH keys, thus placing more reliance on a single secret across multiple services (Github, bitbucket, SSH, PyPI). Encouraging authentication token sharing is a bad practice. HTTP has a token that is functionally similar to SSH keys. Client side SSL certificates. 
They would function fine and enable similar uses as SSH keys. > > //Lennart ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From a.badger at gmail.com Fri Mar 8 19:47:08 2013 From: a.badger at gmail.com (Toshio Kuratomi) Date: Fri, 8 Mar 2013 10:47:08 -0800 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> References: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> Message-ID: <20130308184708.GW8610@unaka.lan> On Fri, Mar 08, 2013 at 12:57:54PM -0500, Donald Stufft wrote: > On Mar 8, 2013, at 12:47 PM, Lennart Regebro wrote: > > > On Fri, Mar 8, 2013 at 6:01 PM, Donald Stufft wrote: > >> I dislike hijacking SSH to tunnel a HTTP protocol over > > > > I'm not sure we have to hijack or tunnel anything. :-) > > If you're uploading via SSH you'll open a SSH tunnel and then POST to PyPI over that tunnel. > > > > >> and adding more reliance on SSH keys means a lost SSH key becomes _even_ worse than it already is. > > > > I don't follow that argument. You can have separate keys in separate > > places if you like. > > Ideally you can sure. Security that only deals in ideal and doesn't pay attention to what people will actually do in the general case is a problem. The general case people will reuse their typical SSH keys, thus placing more reliance on a single secret across multiple services (Github, bitbucket, SSH, PyPI). Encouraging authentication token sharing is a bad practice. > > HTTP has a token that is functionally similar to SSH keys. Client side SSL certificates. They would function fine and enable similar uses as SSH keys. 
> If we're choosing between SSH keys and SSL certificates the client side tools for SSH are much more mature than the ones for SSL. The numerous ssh-agents, for instance, allow the ssh key to be encrypted on disk but the user is only prompted for a password when the agent has to read the key (which could be after a timeout or once when the ssh-agent starts up). SSL certificate use for commandline usage doesn't yet have that sort of tool so SSL certificates are often unencrypted on disk if they're being used for commandline access. -Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From tseaver at palladion.com Sat Mar 9 07:00:03 2013 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 09 Mar 2013 01:00:03 -0500 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> References: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/08/2013 12:57 PM, Donald Stufft wrote: > If you're uploading via SSH you'll open a SSH tunnel and then POST to > PyPI over that tunnel. That isn't a hard requirement. The PyPI software could add a command-line script used for uploads which depended on the identity indicated by the SSH-authenticated session. Tres.
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlE6z+MACgkQ+gerLs4ltQ6DggCfZ62QIeUlx5A7A5cnuwU5jTqJ pN8AoN0T0P20qwo5r5p7aheyYi3cGL2L =SdYC -----END PGP SIGNATURE----- From regebro at gmail.com Sat Mar 9 07:25:28 2013 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 9 Mar 2013 07:25:28 +0100 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> References: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> Message-ID: On Fri, Mar 8, 2013 at 6:57 PM, Donald Stufft wrote: > If you're uploading via SSH you'll open a SSH tunnel and then POST to PyPI over that tunnel. You are not required to use HTTP, there are several other protocols you can use such as SCP or SFTP. Not that I think it matters which protocol we use. > Ideally you can sure. Security that only deals in ideal and doesn't pay attention to what people will actually do in the general case is a problem. The general case people will reuse their typical SSH keys, thus placing more reliance on a single secret across multiple services (Github, bitbucket, SSH, PyPI). Often they will reuse passwords too. > Encouraging authentication token sharing is a bad practice. So don't do that. :-) > HTTP has a token that is functionally similar to SSH keys. Client side SSL certificates. They would function fine and enable similar uses as SSH keys. Every time I've used that it has been very complicated and usually not worked well or cross-platform. Perhaps that situation has changed?
//Lennart From ncoghlan at gmail.com Sat Mar 9 08:18:12 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 9 Mar 2013 17:18:12 +1000 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: <455F9CF1-BBEE-414B-AAB7-237901564853@stufft.io> Message-ID: On Sat, Mar 9, 2013 at 4:25 PM, Lennart Regebro wrote: > On Fri, Mar 8, 2013 at 6:57 PM, Donald Stufft wrote: >> HTTP has a token that is functionally similar to SSH keys. Client side SSL certificates. They would function fine and enable similar uses as SSH keys. > > Every time I've used that it has been very complicated and usually not > worked well or cross-platform. Perhaps that situation has changed? Pulp (http://pulpproject.org) handles it fairly well IMO - the CLI includes a "pulp-admin auth login" command which just uses Basic Auth over HTTPS. This returns a server-generated cert that is saved to disk and is valid for a week. After a week, you have to log in again to refresh your cert (this is to mitigate the problem Toshio noted: the cert is stored unencrypted on disk. Without the expiry date, this approach would be just as bad as storing the password itself in the clear). There's a bit of fiddling client side to use the cached cert, and server side to check it, but the user experience is pretty smooth. (Pulp is GPL, while PyPI is now BSD, so if we do go down this path, someone that hasn't read the Pulp code will need to implement it, or else I can talk to the Pulp team about getting those parts relicensed under a more permissive license) Cheers, Nick. 
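The Pulp-style mitigation Nick describes, a server-issued credential cached on disk but only honoured until a deadline, reduces to a small amount of bookkeeping. A rough sketch (made-up JSON file layout; Pulp itself stores a server-generated SSL cert rather than a token):

```python
import json
import time

WEEK = 7 * 24 * 3600

def save_credential(path, token, lifetime=WEEK):
    """Cache a server-issued credential together with an expiry time.
    (Illustrative file format, not Pulp's actual on-disk layout.)"""
    with open(path, "w") as f:
        json.dump({"token": token, "expires": time.time() + lifetime}, f)

def load_credential(path):
    """Return the cached credential, or None if missing or expired --
    forcing the user back through the password login for a fresh one."""
    try:
        with open(path) as f:
            data = json.load(f)
    except (OSError, ValueError):
        return None
    if time.time() >= data["expires"]:
        return None
    return data["token"]
```

The expiry check is what keeps an unencrypted cached credential from being equivalent to storing the password itself in the clear: a stolen file is only useful until the deadline passes.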
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Sat Mar 9 19:28:44 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 9 Mar 2013 18:28:44 +0000 Subject: [Distutils] distlib - installer support vs runtime support Message-ID: As part of a discussion on a pip issue, it was noted that if pip depends on distlib for installer capabilities (e.g., locators, wheel installation, version and requirement parsing and matching, etc) then the user needs distlib installed as well as pip. That shouldn't be an issue for an end user, any more than having pip itself installed would be. However, there are some capabilities of distlib (notably the "exports" functionality that replaces setuptools' entry points) that are intended to be used at runtime, by the end user. (I suspect, but haven't checked, that the exe wrapper functionality depends on exports as well - it certainly does in setuptools). That means that the end user *is* affected by the fact that pip depends on distlib. For example, if the user needs a later version of distlib, it's quite likely that he won't be able to "pip install" it, as pip typically has trouble upgrading its own dependencies (for obvious reasons...) Would it be worth considering splitting distlib into two separate parts - one that is intended solely for writers of installers and similar tools, and another for "runtime support" functions that end users would use? It may not be a practical thing to achieve, but it would be worth at least understanding the trade-offs involved. Paul. PS The same issue exists in setuptools, but as far as I am aware, setuptools was deliberately designed to provide both the installation tools and runtime support in the one package. (And IIUC, even with setuptools, pkg_resources is the only component that includes runtime support, so in theory it is possible to split the two parts up).
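The "exports" runtime support discussed above is distlib's analogue of setuptools entry points: named string references to objects, resolved at run time. The resolution step can be sketched in a self-contained way (the registry dict below stands in for the installed-metadata scan that distlib/pkg_resources actually perform; the group and name are invented for illustration):

```python
import importlib

# Toy export registry: group -> name -> "module:attribute" reference.
EXPORTS = {
    "console_scripts": {
        "count": "collections:Counter",
    },
}

def resolve_export(group, name, registry=EXPORTS):
    """Load the object an export refers to, entry-point style."""
    ref = registry[group][name]
    module_name, _, attr = ref.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

counter_cls = resolve_export("console_scripts", "count")
print(counter_cls("aab"))  # a Counter built via the resolved reference
```

This is exactly the piece an end-user program needs at run time, which is why it sits awkwardly inside an installer-oriented library.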
From dholth at gmail.com Sat Mar 9 20:32:53 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 9 Mar 2013 14:32:53 -0500 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: It would be great to maintain and install pkg_resources separately. The idea has come up before, including the idea of just putting pkg_resources in the system library without the rest of setuptools (it will stay on pypi now - pip install pkg_resources) On Mar 9, 2013 1:29 PM, "Paul Moore" wrote: > As part of a discussion on a pip issue, it was noted that if pip > depends on distlib for installer capabilities (e.g., locators, wheel > installation, version and requirement parsing and matching, etc) then > the user needs distlib installed as well as pip. That shouldn't be an > issue for an end user, any more than having pip itself installed would > be. > > However, there are some capabilites of distlib (notably the "exports" > functionality that replaces setuptools' entry points) that are > intended to be used at runtime, by the end user. (I suspect, but > haven't checked, that the exe wrapper functionality depends on exports > as well - it certainly does in setuptools). That means that the end > user *is* affected by the fact that pip depends on distlib. For > example, if the user needs a later version of distlib, it's quite > likely that he won't be able to "pip install" it, as pip typically has > trouble upgrading its own dependencies (for obvious reasons...) > > Would it be worth considering splitting distlib into two separate > parts - one that is intended solely for writers of installers and > similar tools, and another for "runtime support" functions that end > users would use? It may not be a practical thing to achieve, but it > would be worth at least understanding the trade-offs involved. > > Paul. 
> > PS The same issue exists in setuptools, but as far as I am aware, > setuptools was deliberately designed to provide both the installation > tools and runtime support in the one package. (And IIUC, even with > setuptools, pkg_resources is the only component that includes runtime > support, so in theory it is possible to split the two parts up). > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sat Mar 9 22:49:37 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 9 Mar 2013 16:49:37 -0500 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On Sat, Mar 9, 2013 at 1:28 PM, Paul Moore wrote: > Would it be worth considering splitting distlib into two separate > parts - one that is intended solely for writers of installers and > similar tools, and another for "runtime support" functions that end > users would use? It may not be a practical thing to achieve, but it > would be worth at least understanding the trade-offs involved. > > Paul. > > PS The same issue exists in setuptools, but as far as I am aware, > setuptools was deliberately designed to provide both the installation > tools and runtime support in the one package. (And IIUC, even with > setuptools, pkg_resources is the only component that includes runtime > support, so in theory it is possible to split the two parts up). Yeah, the goal in setuptools case was to bootstrap dependency handling by only installing one thing. A major headache that's hitting me right now about that is that it's basically impossible to do that and provide SSL verification support at the same time, for versions of Python <2.6. 
So I'm pretty soon going to have to face this challenge also, in order to get to a "secure by default" setuptools for Pythons 2.3-2.5. Ironically, setuptools doesn't have any problems updating itself or its dependencies, though (if it had them). For example, if setuptools depended on one version of distlib, and the user requested installation of another version, this wouldn't affect setuptools at all, because the require()s baked into its scripts would still refer to the same distlib egg as before. Under the pip+virtualenv paradigm, though, there aren't any eggs, so you can't pin a single project to a dependency version, without giving pip its own virtualenv. And I've tried to think of a workaround to suggest, but the challenge is that they all boil down to something that requires you to already have distlib, in order to manage its own installation. ;-) Probably the best way to work around it is to just give pip its own virtualenv -- in the form of an executable zip file. That is, install pip and its dependencies to a single directory, add a __main__.py, and zip up the whole thing with a #!python header. Voila -- a self-contained application you can drop on your path, and update by just downloading a new copy of. Of course, that only works if pip is an application, rather than a library. Setuptools can't take this tack because it wants to be importable. But if pip only needs to be runnable, not importable, you could bundle *all* your dependencies in a single file, as long as they're all pure Python. 
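[Editorial note: the executable-zip layout PJ describes above can be sketched in a few lines. The stdlib ``zipapp`` module shown here arrived later, in Python 3.5, well after this thread; in 2013 you would prepend the ``#!`` line and zip the directory by hand, but the resulting file format is the same. The app name and printed message are made up for illustration.]

```python
# Sketch of the "application as a single executable zip" idea: a directory
# containing __main__.py, bundled into one file with a "#!" interpreter line.
# zipapp (Python 3.5+) automates exactly that layout.
import pathlib
import subprocess
import sys
import tempfile
import zipapp

workdir = pathlib.Path(tempfile.mkdtemp())
appdir = workdir / "app"
appdir.mkdir()
(appdir / "__main__.py").write_text('print("hello from a self-contained zip app")\n')

target = workdir / "myapp.pyz"
# create_archive writes the shebang line, then the zipped directory contents.
zipapp.create_archive(str(appdir), str(target), interpreter=sys.executable)

# The result is one file you can drop on your PATH and replace to upgrade;
# here we invoke it through the interpreter for portability.
out = subprocess.run([sys.executable, str(target)], capture_output=True, text=True)
print(out.stdout.strip())
```

Because zip files are indexed from the end, the prepended interpreter line does not disturb the archive, which is what makes the single-file trick work.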
From vinay_sajip at yahoo.co.uk Sun Mar 10 01:15:08 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 10 Mar 2013 00:15:08 +0000 (UTC) Subject: [Distutils] distlib - installer support vs runtime support References: Message-ID: Paul Moore gmail.com> writes: > Would it be worth considering splitting distlib into two separate > parts - one that is intended solely for writers of installers and > similar tools, and another for "runtime support" functions that end > users would use? It may not be a practical thing to achieve, but it > would be worth at least understanding the trade-offs involved. While this could be done, it would not exactly be elegant, and IMO it would be the wrong way to address the valid concerns you mention. It would make more sense for pip to *contain* a specific version of distlib for its use (e.g. as a .zip) so that it never worries about conflicts with another copy. This is the approach that Django takes and it seems to work reasonably well for that project. Re. the size of distlib - it's larger than pip, because pip relies on an external dependency (setuptools/distribute) to do a lot of its work, whereas distlib is self-contained. So, direct size comparisons can be misleading (e.g. distlib contains a _backport package for 2.6 support, which is not tiny; distlib's tests contain tests for the backports plus a complete copy of unittest2, again for 2.6 support). The concerns about stability (in terms of API stability, as well as the presence of bugs) are more valid than size. Given distlib's youth, we can expect to see some API changes, as well as to see some bugs flushed out of the woodwork. (Obviously, I would aim to keep API changes to a minimum.) A good level of stability is generally achieved after a period of usage in anger followed by feedback based on that usage. Until then, the test suite, coverage results and docs will need to be used to give an indication of quality. 
Regards, Vinay Sajip From ncoghlan at gmail.com Sun Mar 10 02:14:17 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 10 Mar 2013 11:14:17 +1000 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On 10 Mar 2013 10:16, "Vinay Sajip" wrote: > > Paul Moore gmail.com> writes: > > > Would it be worth considering splitting distlib into two separate > > parts - one that is intended solely for writers of installers and > > similar tools, and another for "runtime support" functions that end > > users would use? It may not be a practical thing to achieve, but it > > would be worth at least understanding the trade-offs involved. > > While this could be done, it would not exactly be elegant, and IMO it would be > the wrong way to address the valid concerns you mention. It would make more > sense for pip to *contain* a specific version of distlib for its use (e.g. as a > .zip) so that it never worries about conflicts with another copy. This is the > approach that Django takes and it seems to work reasonably well for that > project. > > Re. the size of distlib - it's larger than pip, because pip relies on a > external dependency (setuptools/distribute) to do a lot of its work, whereas > distlib is self-contained. So, direct size comparisons can be misleading (e.g. > distlib contains a _backport package for 2.6 support, which is not tiny; > distlib's tests contain tests for the backports plus a complete copy of > unittest2, again for 2.6 support). > > The concerns about stability (in terms of API stability, as well as the > presence of bugs) are more valid than size. Given distlib's youth, we can > expect to see some API changes, as well as to see some bugs flushed out of the > woodwork. (Obviously, I would aim to keep API changes to a minimum.) > > A good level of stability is generally achieved after a period of usage in > anger followed by feedback based on that usage. 
Until then, the test suite, > coverage results and docs will need to be used to give an indication of > quality. pip vendoring its own copy of distlib sounds like the best workaround for now, as it addresses both the bootstrapping problem and the API stability question. Longer term, something like the import engine PEP may let us implement a cleaner solution. Cheers, Nick. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From askedrelic at gmail.com Fri Mar 8 08:40:48 2013 From: askedrelic at gmail.com (Matt Behrens) Date: Thu, 7 Mar 2013 23:40:48 -0800 Subject: [Distutils] Add optional password_command .pypirc value Message-ID: <14FB91750BCB4723BE2ACD49B35F4DB8@gmail.com> After doing some research tonight on storing/accessing passwords in the OSX Keychain (http://asktherelic.com/2013/03/07/storing-command-line-passwords-in-keychain/), I was curious why the .pypirc doesn't support something like this when asking for the password during 'upload', to not have your pypi password in plaintext on your system. As far as I can see from the source, the password is read straight from the .pypirc config: https://bitbucket.org/tarek/distribute/src/188dcdb7f0873f1b382e8bde65377c5f43266f9f/setuptools/command/upload.py?at=default#cl-66 and fails if the password value doesn't exist: https://bitbucket.org/tarek/distribute/issue/291/allow-password-to-be-omitted-from-pypirc I'm curious about implementing: 1. a password_command to support integration with external password tools (1password, keychain, keyring python lib) The implementation from the program I am trying to emulate, pianobar, is here: https://github.com/PromyLOPh/pianobar/blob/master/src/main.c#L135 just a /bin/sh for nix/osx. Could run cmd.exe for windows cross-platform compatibility. 2. 
better notification to the user about trying to upload with an empty password or using get_pass if empty password The only other reference to something like this is from several years ago here: http://bugs.python.org/issue4394 Does this seem like it's worth making a patch for? -- Matt Behrens -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sun Mar 10 03:33:27 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 9 Mar 2013 21:33:27 -0500 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On Sat, Mar 9, 2013 at 8:14 PM, Nick Coghlan wrote: > > On 10 Mar 2013 10:16, "Vinay Sajip" wrote: >> >> Paul Moore gmail.com> writes: >> >> > Would it be worth considering splitting distlib into two separate >> > parts - one that is intended solely for writers of installers and >> > similar tools, and another for "runtime support" functions that end >> > users would use? It may not be a practical thing to achieve, but it >> > would be worth at least understanding the trade-offs involved. >> >> While this could be done, it would not exactly be elegant, and IMO it >> would be >> the wrong way to address the valid concerns you mention. It would make >> more >> sense for pip to *contain* a specific version of distlib for its use (e.g. >> as a >> .zip) so that it never worries about conflicts with another copy. This is >> the >> approach that Django takes and it seems to work reasonably well for that >> project. >> >> Re. the size of distlib - it's larger than pip, because pip relies on a >> external dependency (setuptools/distribute) to do a lot of its work, >> whereas >> distlib is self-contained. So, direct size comparisons can be misleading >> (e.g. >> distlib contains a _backport package for 2.6 support, which is not tiny; >> distlib's tests contain tests for the backports plus a complete copy of >> unittest2, again for 2.6 support). 
>> >> The concerns about stability (in terms of API stability, as well as the >> presence of bugs) are more valid than size. Given distlib's youth, we can >> expect to see some API changes, as well as to see some bugs flushed out of >> the >> woodwork. (Obviously, I would aim to keep API changes to a minimum.) >> >> A good level of stability is generally achieved after a period of usage in >> anger followed by feedback based on that usage. Until then, the test >> suite, >> coverage results and docs will need to be used to give an indication of >> quality. > > pip vendoring its own copy of distlib sounds like the best workaround for > now, as it addresses both the bootstrapping problem and the API stability > question. > > Longer term, something like the import engine PEP may let us implement a > cleaner solution. I've been giving the bootstrapping issue a bit more thought, though, and I think there's a way to take the pain out of multi-file bootstraps of things like pip and setuptools and whatnot. Suppose you make a .py file that contains (much like ez_setup.py) a list of files to download, along with their checksums. And suppose this .py file basically just has some code that checks a local cache directory for those files, adds them to sys.path if they're found, downloads them if they're not (validating the checksum), and then proceeds to import a specific entry point and run it? At that point, you could distribute a Python application (like pip) "in" a single .py file. The file is for a specific version and a specific set of dependencies, but if someone wants to update it, they just download the new .py file and discard the old one. (Technically, it needn't have a .py extension, and for Windows you might want to ship it as a .exe+"-script.py/pyw" wrapper pair.) 
The really interesting bit is that you could generate the .py part using nothing more than a Metadata 2.0 file for the target app, access to the /simple index, and a specification of which script/entry-point the .py file is supposed to represent. We could call this concept PyStart, as it's a little bit like Java WebStart. It would work equally well with wheels or eggs, though with wheels you'd need to unpack any "impure" ones to a subdirectory. The main idea, though, is that if properly done, the PyStart script for an application wouldn't need to understand version parsing or comparison, PyPI APIs, environment markers, or any of that crud. (It *might* need to understand platform tags to some extent, if "impure" code is involved.) Heck, it wouldn't even care about SSL cert verification, as it'll be relying on its hardcoded hashes for verification purposes. So as long as you download the PyStart script from a trusted source, you're ready to go. (This is an important requirement for setuptools going forward, because it may have to depend on more than one other project (e.g. Requests + SSL backport) in order to implement full SSL security.) This is far from a finished idea, but I think it has some merit as an approach for distributing "zero-install" Python apps in general, while working around the hairy problem of things like, "so, to install pip, you need to have setuptools installed. But you can't use setuptools to install pip, because that wouldn't be secure. So, make sure you download *both* things by hand, oh, and don't forget to compile the SSL backport..." Thoughts? 
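[Editorial note: a rough sketch of the "PyStart" script PJ describes. Every name, URL, and hash below is hypothetical; a generator tool would hard-code real entries for one specific release, and would be written to run on the old Pythons of the day (this sketch uses Python 3 for brevity).]

```python
# Minimal PyStart-style bootstrap: check a local cache for each dependency,
# download it if missing, verify its hash, add it to sys.path, then run the
# hard-coded entry point. Hash verification is what provides the security,
# so no SSL certificate checking is required here.
import hashlib
import os
import sys
import tempfile
from urllib.request import urlopen

CACHE = os.path.join(tempfile.gettempdir(), "pystart-cache")

# (filename, url, expected sha256 hex digest) -- hypothetical entries that
# the generator would fill in from the package index.
DEPENDENCIES = [
    ("example_dep-1.0-py2.py3-none-any.whl",
     "https://example.invalid/example_dep-1.0-py2.py3-none-any.whl",
     "0" * 64),
]

def fetch(filename, url, sha256):
    """Return a verified local path for one file, downloading it if absent."""
    os.makedirs(CACHE, exist_ok=True)
    path = os.path.join(CACHE, filename)
    if not os.path.exists(path):
        data = urlopen(url).read()
        with open(path, "wb") as f:
            f.write(data)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != sha256:
        raise RuntimeError("checksum mismatch for %s" % filename)
    return path

def bootstrap():
    for filename, url, sha256 in DEPENDENCIES:
        sys.path.insert(0, fetch(filename, url, sha256))
    # ...then import and run the hard-coded entry point, e.g.:
    # from pip import main; sys.exit(main())
```

Updating the application would then mean downloading a replacement for this one file, exactly as described above.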
From glyph at twistedmatrix.com Sun Mar 10 08:25:13 2013 From: glyph at twistedmatrix.com (Glyph) Date: Sat, 9 Mar 2013 23:25:13 -0800 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: <14FB91750BCB4723BE2ACD49B35F4DB8@gmail.com> References: <14FB91750BCB4723BE2ACD49B35F4DB8@gmail.com> Message-ID: On Mar 7, 2013, at 11:40 PM, Matt Behrens wrote: > After doing some research tonight on storing/accessing passwords in the OSX Keychain (http://asktherelic.com/2013/03/07/storing-command-line-passwords-in-keychain/), I was curious why the .pypirc doesn't support something like this when asking for the password during 'upload', to not have your pypi password in plaintext on your system. > > As far as I can see from the source, the password is read straight from the .pypirc config: > > https://bitbucket.org/tarek/distribute/src/188dcdb7f0873f1b382e8bde65377c5f43266f9f/setuptools/command/upload.py?at=default#cl-66 > > and fails if the password value doesn't exist: > > https://bitbucket.org/tarek/distribute/issue/291/allow-password-to-be-omitted-from-pypirc > > I'm curious about implementing: > > 1. a password_command to support integration with external password tools (1password, keychain, keyring python lib) > > The implementation from the program I am trying to emulate, pianobar, is here:https://github.com/PromyLOPh/pianobar/blob/master/src/main.c#L135 just a /bin/sh for nix/osx. Could run cmd.exe for windows cross-platform compatibility. > > 2. better notification to the user about trying to upload with an empty password or using get_pass if empty password > > The only other reference to something like this is from several years ago here: http://bugs.python.org/issue4394 > > Does this seem like it's worth making a patch for? Secure password storage is always worth working on :). Have you heard of the Keyring module? It already supports a cross-platform interface to this sort of thing, including the OS X keychain. 
-glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Mar 10 12:29:52 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 10 Mar 2013 11:29:52 +0000 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On 10 March 2013 00:15, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> Would it be worth considering splitting distlib into two separate >> parts - one that is intended solely for writers of installers and >> similar tools, and another for "runtime support" functions that end >> users would use? It may not be a practical thing to achieve, but it >> would be worth at least understanding the trade-offs involved. > > While this could be done, it would not exactly be elegant, and IMO it would be > the wrong way to address the valid concerns you mention. It would make more > sense for pip to *contain* a specific version of distlib for its use (e.g. as a > .zip) so that it never worries about conflicts with another copy. This is the > approach that Django takes and it seems to work reasonably well for that > project. Thanks for the comments. It looks like pip will indeed contain a copy of distlib, at least for now. And I think you're right, that is the best solution there. My other interest was with regard to virtualenv. Here, the particular "runtime support" issue that bothers me is the way that setuptools wrapper scripts use entry points. As a result, something like nosetests.exe will not work without setuptools being present, simply because it looks up the code to run using the entry point mechanisms (the actual code itself does not need setuptools). So virtualenv pretty much *has* to preinstall setuptools (or at least pkg_resources), as pip uses exe-wrappers, and those won't use an embedded copy. 
But looking at the code generated by distlib's script wrappers, I see that it does not use the exports functionality of distlib, and as a result distlib-generated wrappers can be used without distlib being present. So my apologies here - it looks like my concern was unfounded. Paul. From tarek at ziade.org Sun Mar 10 12:34:35 2013 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sun, 10 Mar 2013 12:34:35 +0100 Subject: [Distutils] Add optional password_command .pypirc value In-Reply-To: References: <14FB91750BCB4723BE2ACD49B35F4DB8@gmail.com> Message-ID: <513C6FCB.5030706@ziade.org> On 3/10/13 8:25 AM, Glyph wrote: > > [..] > Secure password storage is always worth working on :). > > Have you heard of the Keyring module? > It already supports a > cross-platform interface to this sort of thing, including the OS X > keychain. > > -glyph Did you know guys I have initially created this project in a GSOC for a future Distutils integration ? That was the primary use case so +1 :) Cheers Tarek -- Tarek Ziad? ? http://ziade.org ? @tarek_ziade -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Mar 10 13:13:56 2013 From: dholth at gmail.com (Daniel Holth) Date: Sun, 10 Mar 2013 08:13:56 -0400 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: The pkg_resources script entry point is there so the right eggs can be added to sys.path based on solving dependencies for the invoked package. On Mar 10, 2013 7:30 AM, "Paul Moore" wrote: > > On 10 March 2013 00:15, Vinay Sajip wrote: > > Paul Moore gmail.com> writes: > > > >> Would it be worth considering splitting distlib into two separate > >> parts - one that is intended solely for writers of installers and > >> similar tools, and another for "runtime support" functions that end > >> users would use? 
It may not be a practical thing to achieve, but it > >> would be worth at least understanding the trade-offs involved. > > > > While this could be done, it would not exactly be elegant, and IMO it would be > > the wrong way to address the valid concerns you mention. It would make more > > sense for pip to *contain* a specific version of distlib for its use (e.g. as a > > .zip) so that it never worries about conflicts with another copy. This is the > > approach that Django takes and it seems to work reasonably well for that > > project. > > Thanks for the comments. It looks like pip will indeed contain a copy > of distlib, at least for now. And I think you're right, that is the > best solution there. > > My other interest was with regard to virtualenv. Here, the particular > "runtime support" issue that bothers me is the way that setuptools > wrapper scripts use entry points. As a result, something like > nosetests.exe will not work without setuptools being present, simply > because it looks up the code to run using the entry point mechanisms > (the actual code itself does not need setuptools). So virtualenv > pretty much *has* to preinstall setuptools (or at least > pkg_resources), as pip uses exe-wrappers, and those won't use an > embedded copy. > > But looking at the code generated by distlib's script wrappers, I see > that it does not use the exports functionality of distlib, and as a > result distlib-generated wrappers can be used without distlib being > present. So my apologies here - it looks like my concern was > unfounded. > > Paul. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Sun Mar 10 19:37:35 2013 From: dholth at gmail.com (Daniel Holth) Date: Sun, 10 Mar 2013 14:37:35 -0400 Subject: [Distutils] PEP 427 (wheel) clarifications Message-ID: A few more clarifications to PEP 427 "wheel". + The ``b'#!pythonw'`` convention is allowed. ``b'#!pythonw'`` indicates + a GUI script instead of a console script. + ... { "hash": "sha256=ADD-r2urObZHcxBW3Cr-vDCu5RJwT4CaRTHiFmbcIYY" } +(The hash value is the same format used in RECORD.) + +What's the deal with "purelib" vs. "platlib"? + Wheel preserves the historic "purelib" vs. "platlib" distinction + even though both map to the same install location in any system the + author could find. + + For example, a wheel with "Root-Is-Purelib: false" with all its files + in ``{name}-{version}.data/purelib`` is equivalent to a wheel with + "Root-Is-Purelib: true" with those same files in the root, and it + is legal to have files in both the "purelib" and "platlib" categories. + + In practice a wheel should have only one of "purelib" or "platlib" + depending on whether it is pure Python or not and those files should + be at the root. From ncoghlan at gmail.com Sun Mar 10 23:56:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 11 Mar 2013 08:56:53 +1000 Subject: [Distutils] PEP 427 (wheel) clarifications In-Reply-To: References: Message-ID: On 11 Mar 2013 04:38, "Daniel Holth" wrote: > > A few more clarifications to PEP 427 "wheel". > > + The ``b'#!pythonw'`` convention is allowed. ``b'#!pythonw'`` indicates > + a GUI script instead of a console script. > + > > ... > > { "hash": "sha256=ADD-r2urObZHcxBW3Cr-vDCu5RJwT4CaRTHiFmbcIYY" } > > +(The hash value is the same format used in RECORD.) > + > > +What's the deal with "purelib" vs. "platlib"? > + Wheel preserves the historic "purelib" vs. "platlib" distinction > + even though both map to the same install location in any system the > + author could find. 
> + > + For example, a wheel with "Root-Is-Purelib: false" with all its files > + in ``{name}-{version}.data/purelib`` is equivalent to a wheel with > + "Root-Is-Purelib: true" with those same files in the root, and it > + is legal to have files in both the "purelib" and "platlib" categories. > + > + In practice a wheel should have only one of "purelib" or "platlib" > + depending on whether it is pure Python or not and those files should > + be at the root. This isn't entirely accurate - purelib vs platlib is there mainly for distro maintainers converting to distro specific layouts, and anyone that moves purelib on to a common network share. Agreed that wheels are easier to deal with when they only use one or the other, though. Cheers, Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Mon Mar 11 16:36:17 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 11 Mar 2013 11:36:17 -0400 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On Sat, Mar 9, 2013 at 9:33 PM, PJ Eby wrote: > On Sat, Mar 9, 2013 at 8:14 PM, Nick Coghlan wrote: >> Longer term, something like the import engine PEP may let us implement a >> cleaner solution. > > I've been giving the bootstrapping issue a bit more thought, though, > and I think there's a way to take the pain out of multi-file > bootstraps of things like pip and setuptools and whatnot. > > Suppose you make a .py file that contains (much like ez_setup.py) a > list of files to download, along with their checksums. 
And suppose > this .py file basically just has some code that checks a local cache > directory for those files, adds them to sys.path if they're found, > downloads them if they're not (validating the checksum), and then > proceeds to import a specific entry point and run it? I've been thinking about something like this lately too. A simple script like your proposed pystart could be used to generate these files. I've gone one further and considered a format that includes dependencies (or at the very least install-time dependencies right in the .py file itself as a base64 string. This would work fine at least for small dependencies--no downloads necessary and you're guaranteed to get the right version that actually installs the damn package. For packages using setuptools it's also possible to just include a dependency as an egg or tar.gz right in your source dist and add something like "[easy_install]\nfind_links = ." to your setup.cfg and it'll go. But I'm not sure that's exactly the use case you're after here. Erik From pje at telecommunity.com Mon Mar 11 18:29:09 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 11 Mar 2013 13:29:09 -0400 Subject: [Distutils] distlib - installer support vs runtime support In-Reply-To: References: Message-ID: On Mon, Mar 11, 2013 at 11:36 AM, Erik Bray wrote: > On Sat, Mar 9, 2013 at 9:33 PM, PJ Eby wrote: >> On Sat, Mar 9, 2013 at 8:14 PM, Nick Coghlan wrote: >>> Longer term, something like the import engine PEP may let us implement a >>> cleaner solution. >> >> I've been giving the bootstrapping issue a bit more thought, though, >> and I think there's a way to take the pain out of multi-file >> bootstraps of things like pip and setuptools and whatnot. >> >> Suppose you make a .py file that contains (much like ez_setup.py) a >> list of files to download, along with their checksums. 
And suppose >> this .py file basically just has some code that checks a local cache >> directory for those files, adds them to sys.path if they're found, >> downloads them if they're not (validating the checksum), and then >> proceeds to import a specific entry point and run it? > > I've been thinking about something like this lately too. A simple > script like your proposed pystart could be used to generate these > files. I've gone one further and considered a format that includes > dependencies (or at the very least install-time dependencies right in > the .py file itself as a base64 string. This would work fine at least > for small dependencies--no downloads necessary and you're guaranteed > to get the right version that actually installs the damn package. You could, but it makes the initial download bigger and adds more moving parts. If you have to download some of them, might as well do all of them. (Besides, the approach I outlined allows for sharing distributions and avoiding repeated downloads.) > For packages using setuptools it's also possible to just include a > dependency as an egg or tar.gz right in your source dist and add > something like "[easy_install]\nfind_links = ." to your setup.cfg and > it'll go. But I'm not sure that's exactly the use case you're after > here. No. A particular goal behind this idea is that a pystart script should be cross-platform and ideally cross-Python-version as well, using environment markers to determine what should be downloaded. So, syntactically, the script would be written so that it runs on as many Python versions as possible, which shouldn't be too hard since it will basically be just looking for a series of files, downloading and hashing them. 
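[Editorial note: the shape of such a generated script, sketched with hypothetical names; ``fetch()`` here is only a stub standing in for the real cache-check / download / hash-verify helper.]

```python
# Hypothetical generated preamble: environment markers translated ahead of
# time into plain Python conditionals, so the script itself needs no marker
# interpreter, version parsing, or index API.
import sys

fetched = []

def fetch(filename, url, sha256):
    # Stub: a real script would verify a cached copy or download the file.
    fetched.append(filename)
    return "/path/to/cache/" + filename

# name/url/hash triplets selected by pre-translated markers
if sys.version_info[0] == 2:
    sys.path.insert(0, fetch("dep-1.0-py2.egg", "https://example.invalid/py2", "<hash>"))
else:
    sys.path.insert(0, fetch("dep-1.0-py3.egg", "https://example.invalid/py3", "<hash>"))

# Bottom of the generated script: import the entry point and run it, e.g.
# from someapp.cli import main; main()
```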
You'd have one function that does a "check for this file, download and extract if needed (w/hash validation), add to sys.path", and a bunch of if/then blocks calling that function with different name/url/hash triplets according to the current platform/python version markers. Then the bottom of the script would import the script entry point and run it. (And the top would have a preamble to import stuff from the right places based on Python version.) Alternatively, for a simpler code generation model, it could just stick everything in a giant data structure at the top, and append standard code that checks the environment markers. But I'd rather not have to include full environment marker interpretation, so translating the markers to Python seems advisable, even if it's as lambdas in the data structure. From pje at telecommunity.com Thu Mar 14 01:54:49 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 13 Mar 2013 20:54:49 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement Message-ID: Jason Coombs (head of the Distribute project) and I are working on merging the bulk of the improvements distribute made into the setuptools code base. He has volunteered to take over maintenance of setuptools, and I welcome his assistance. I appreciate the contributions made by the distribute maintainers over the years, and am glad to have Jason's help in getting those contributions into setuptools as well. Continuing to keep the code bases separate isn't helping anybody, and as setuptools moves once again into active development to deal with the upcoming shifts in the Python-wide packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it makes sense to combine efforts. 
Aside from the problems experienced by people with one package that are fixed in the other, the biggest difficulties with the fork right now are faced by the maintainers of setuptools-driven projects like pip, virtualenv, and buildout, who have to either take sides in a conflict, or spend additional time and effort testing and integrating with both setuptools and distribute. We'd like to end that pain and simplify matters for end users by bringing distribute enhancements to setuptools and phasing out the distribute fork as soon as is practical. In the short term, our goal is to consolidate the projects to prevent duplication, wasted effort, and incompatibility, so that we can start moving forward. This merge will allow us to combine resources and teams, so that we may focus on a stable but actively-maintained toolset. In the longer term, the goal is for setuptools as a concept to become obsolete. For the first time, the Python packaging world has gotten to a point where there are PEPs *and implementations* for key parts of the packaging infrastructure that offer the potential to get rid of setuptools entirely. (Vinay Sajip's work on distlib, Daniel Holth's work on the "wheel" format, and Nick Coghlan's taking up the reins of the packaging PEPs and providing a clear vision for a new way of doing things -- these are just a few of the developments in recent play.) "Obsolete", however, doesn't mean unmaintained or undeveloped. In fact, for the "new way of doing things" to succeed, setuptools will need a lot of new features -- some small, some large -- to provide a migration path. At the moment, the merge is not yet complete. We are working on a common repository where the two projects' history has been spliced together, and are cleaning up the branch heads to facilitate re-merging them. We'd hoped to have this done by PyCon, but there have been a host of personal, health, and community issues consuming much of our available work time. 
But we decided to go ahead and make an announcement *now*, because with the big shifts taking place in the packaging world, there are people who need to know about the upcoming merge in order to make the best decisions about their own projects (e.g. pip, buildout, etc.) and to better support their own users. Thank you once again to all the distribute contributors, for the many fine improvements you've made to the setuptools package over the years, and I hope that you'll continue to make them in the future. (Especially as I begin to phase myself out of an active role in the project!) I now want to turn the floor over to Jason, who's put together a Roadmap/FAQ for what's going to be happening with the project going forward. We'll then both be here in the thread to address any questions or concerns you might have. From jaraco at jaraco.com Thu Mar 14 01:57:18 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Thu, 14 Mar 2013 00:57:18 +0000 Subject: [Distutils] Setuptools-Distribute merge announcement Message-ID: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> As PJE mentioned in his e-mail, he and I have been working on a merge of the code lines of Setuptools and Distribute. I'm excited about this transition and I hope you are too. In this message, I will provide some answers based on questions that he and I encountered in our discussions and subsequent merge activity. If you have further questions, please direct them to both of us and we intend to answer promptly and also update the FAQ at the wiki (https://bitbucket.org/jaraco/setuptools/wiki/Setuptools%20and%20Distribute% 20Merge%20FAQ). - Jason R. Coombs Where does the merge occur? The merge is occurring between the heads of the default branch of Distribute and the setuptools-0.6 branch of Setuptools. The Setuptools SVN repo has been converted to a Mercurial repo hosted on Bitbucket. 
The work is still underway, so the exact changesets included may change, although the anticipated merge targets are Setuptools at 0.6c12 and Distribute at 0.6.35. What happens to other branches? Distribute 0.7 was abandoned long ago and won't be included in the resulting code tree, but may be retained for posterity in the original repo. Setuptools default branch (also 0.7 development) may also be abandoned or may be incorporated into the new merged line if desirable (and as resources allow). What history is lost/changed? As setuptools was not on Mercurial when the fork occurred and as Distribute did not include the full setuptools history (prior to the creation of the setuptools-0.6 branch), the two source trees were not compatible. In order to most effectively communicate the code history, the Distribute code was grafted onto the (originally private) setuptools Mercurial repo. Although this grafting maintained the full code history with names, dates, and changes, it did lose the original hashes of those changes. Therefore, references to changes by hash (including tags) are lost. Additionally, any heads that were not actively merged into the Distribute 0.6.35 release were also omitted. As a result, the changesets included in the merge repo are those from the original setuptools repo and all changesets ancestral to the Distribute 0.6.35 release. What features will be in the merged code base? In general, all "features" added in distribute will be included in setuptools. Where there exist conflicts or undesirable features, we will be explicit about what these limitations are. Changes that are backward-incompatible from setuptools 0.6 to distribute will likely be removed, and these also will be well documented. Bootstrapping scripts (ez_setup/distribute_setup) and docs, as with distribute, will be maintained in the repository and built as part of the release process. Documentation and bootstrapping scripts will be hosted at python.org, as they are with distribute now. 
Documentation at telecommunity will be updated to refer or redirect to the new, merged docs. On the whole, the merged setuptools should be largely compatible with the latest releases of both setuptools and distribute and will be an easy transition for users of either library. Who is invited to contribute? Who is excluded? While we've worked privately to initiate this merge due to the potential sensitivity of the topic, no one is excluded from this effort. We invite all members of the community, especially those most familiar with Python packaging and its challenges to join us in the effort. We have lots of ideas for how we'd like to improve the codebase, release process, everything. Like distribute, the post-merge setuptools will have its source hosted on bitbucket. (So if you're currently a distribute contributor, about the only thing that's going to change is the URL of the repository you follow.) Also like distribute, it'll support Python 3, and hopefully we'll soon merge Vinay Sajip's patches to make it run on Python 3 without needing 2to3 to be run on the code first. Why Setuptools and not Distribute or another name? We do understand that this announcement might be unsettling for some. The setuptools name has been subjected to a lot of deprecation in recent years, so the idea that it will now be the preferred name instead of distribute might be somewhat difficult or disorienting for some. We considered use of another name (Distribute or an entirely new name), but that would serve to only complicate matters further. Instead, our goal is to simplify the packaging landscape but without losing any hard-won advancements. We hope that the people who worked to spread the first message will be equally enthusiastic about spreading the new one, and we especially look forward to seeing the new posters and slogans celebrating the new setuptools. What is the timeframe of release? 
There are no hard timeframes for any of this effort, although progress is being made and a draft merge is being tested privately. As an unfunded volunteer effort, the time we can put into it is limited, and we've both had some recent health and other challenges that have made working on this difficult, which in part explains why we haven't met our original deadline of a completed merge before PyCon. What version number can I expect for the new release? The new release will roughly follow the previous trend for setuptools and be released as 0.7. This number is somewhat arbitrary, but we wanted something other than 0.6 to distinguish it from its ancestor forks, but not 1.0, to avoid putting too much emphasis on the release itself and to focus on merging the functionality. In the future, the project will likely adopt a versioning scheme similar to semver to convey semantic meaning about the release in the version number. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6638 bytes Desc: not available URL: From qwcode at gmail.com Thu Mar 14 02:01:22 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 13 Mar 2013 18:01:22 -0700 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: woo hoo!!! awesome!!! Marcus On Wed, Mar 13, 2013 at 5:54 PM, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. He has volunteered to take over maintenance of > setuptools, and I welcome his assistance. I appreciate the > contributions made by the distribute maintainers over the years, and > am glad to have Jason's help in getting those contributions into > setuptools as well. 
Continuing to keep the code bases separate isn't > helping anybody, and as setuptools moves once again into active > development to deal with the upcoming shifts in the Python-wide > packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it > makes sense to combine efforts. > > Aside from the problems experienced by people with one package that > are fixed in the other, the biggest difficulties with the fork right > now are faced by the maintainers of setuptools-driven projects like > pip, virtualenv, and buildout, who have to either take sides in a > conflict, or spend additional time and effort testing and integrating > with both setuptools and distribute. We'd like to end that pain and > simplify matters for end users by bringing distribute enhancements to > setuptools and phasing out the distribute fork as soon as is > practical. > > In the short term, our goal is to consolidate the projects to prevent > duplication, wasted effort, and incompatibility, so that we can start > moving forward. This merge will allow us to combine resources and > teams, so that we may focus on a stable but actively-maintained > toolset. In the longer term, the goal is for setuptools as a concept > to become obsolete. For the first time, the Python packaging world > has gotten to a point where there are PEPs *and implementations* for > key parts of the packaging infrastructure that offer the potential to > get rid of setuptools entirely. (Vinay Sajip's work on distlib, > Daniel Holth's work on the "wheel" format, and Nick Coghlan's taking > up the reins of the packaging PEPs and providing a clear vision for a > new way of doing things -- these are just a few of the developments in > recent play.) > > "Obsolete", however, doesn't mean unmaintained or undeveloped. In > fact, for the "new way of doing things" to succeed, setuptools will > need a lot of new features -- some small, some large -- to provide a > migration path. > > At the moment, the merge is not yet complete. 
We are working on a > common repository where the two projects' history has been spliced > together, and are cleaning up the branch heads to facilitate > re-merging them. We'd hoped to have this done by PyCon, but there > have been a host of personal, health, and community issues consuming > much of our available work time. But we decided to go ahead and make > an announcement *now*, because with the big shifts taking place in the > packaging world, there are people who need to know about the upcoming > merge in order to make the best decisions about their own projects > (e.g. pip, buildout, etc.) and to better support their own users. > > Thank you once again to all the distribute contributors, for the many > fine improvements you've made to the setuptools package over the > years, and I hope that you'll continue to make them in the future. > (Especially as I begin to phase myself out of an active role in the > project!) > > I now want to turn the floor over to Jason, who's put together a > Roadmap/FAQ for what's going to be happening with the project going > forward. We'll then both be here in the thread to address any > questions or concerns you might have. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Thu Mar 14 02:09:17 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 13 Mar 2013 21:09:17 -0400 Subject: [Distutils] Distribute: will not build on Python 3 with --install-XXX arguments In-Reply-To: References: <1362574249.71067.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Wed, Mar 6, 2013 at 8:20 AM, Paul Moore wrote: > On 6 March 2013 12:50, Vinay Sajip wrote: >> Still no joy with the uninstalled setuptools, then? It seemed to work for me without any special work, but that was on Linux. 
> > Yeah, again it seems to be subtle environmental issues. Sometimes it > works, sometimes it doesn't, never with a useful error message :-( > > It's mainly for setup.py - the hooks that override normal distutils > functionality seem to be the flaky bit. The errors generally seem to > be coming from core distutils, when I should be running setuptools > (things like "unknown option, --single-version-externally-managed" > even though setuptools is on sys.path). > > Not worth wasting any more time over, in my opinion. > Paul. FWIW, from a discussion w/Jason earlier this week, I believe this is probably because distribute doesn't ship a built setuptools.egg-info/entry_points.txt, unlike setuptools, which includes this file in revision control specifically to prevent such bootstrapping issues. So, the problem should be going away at some point. From me at rpatterson.net Thu Mar 14 02:52:24 2013 From: me at rpatterson.net (Ross Patterson) Date: Wed, 13 Mar 2013 18:52:24 -0700 Subject: [Distutils] Setuptools-Distribute merge announcement References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> Message-ID: <87li9qvjc7.fsf@rpatterson.net> "Jason R. Coombs" writes: > > Who is invited to contribute? Who is excluded? As long as the merged project, whatever it's called, doesn't become as closed off to community contributions or input as what led to the fork in the first place, a merge seems like a great idea. Personally, if I have the option, I'd probably choose to stick to "distribute" until I have some experience indicating that the merged project won't have the same problems that led to the original fork. Meantime, I raise my glass to the merge and to the Distribute developers for keeping things moving! 
Ross From dholth at gmail.com Thu Mar 14 02:56:19 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 13 Mar 2013 21:56:19 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <87li9qvjc7.fsf@rpatterson.net> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> <87li9qvjc7.fsf@rpatterson.net> Message-ID: On Wed, Mar 13, 2013 at 9:52 PM, Ross Patterson wrote: > "Jason R. Coombs" writes: >> >> Who is invited to contribute? Who is excluded? > > As long as the merged project, whatever it's called, doesn't become as > closed off to community contributions or input as what led to the fork in the > first place, a merge seems like a great idea. Personally, if I have the > option, I'd probably choose to stick to "distribute" until I have some > experience indicating that the merged project won't have the same > problems that led to the original fork. > > Meantime, I raise my glass to the merge and to the Distribute developers > for keeping things moving! Distribeaut! I welcome our new setuptools overlords, and it will be so much easier to just call it setuptools always, and eventually put it on a shelf. Stunning news. :-) Daniel From carl at oddbird.net Thu Mar 14 06:34:39 2013 From: carl at oddbird.net (Carl Meyer) Date: Wed, 13 Mar 2013 23:34:39 -0600 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: <5141616F.50104@oddbird.net> On 03/13/2013 06:54 PM, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. He has volunteered to take over maintenance of > setuptools, and I welcome his assistance. This merge is very good news for Python packaging. Kudos and thanks to you and Jason. 
This also seems like an appropriate opportunity to say: thank you for building the first working packaging system for Python (the one that all popular packaging tools are still based on), and for all the work you've put into it over the years. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Thu Mar 14 07:08:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Mar 2013 23:08:42 -0700 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> Message-ID: On Wed, Mar 13, 2013 at 5:57 PM, Jason R. Coombs wrote: > As PJE mentioned in his e-mail, he and I have been working on a merge of the > code lines of Setuptools and Distribute. I'm excited about this transition > and I hope you are too. Yay, thank you both for working hard to make this happen, and also for announcing it now for the sake of my sanity (and probably Jason's too) at the packaging and distribution mini-summit on Friday night :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Thu Mar 14 09:53:07 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 14 Mar 2013 08:53:07 +0000 (GMT) Subject: [Distutils] Setuptools-Distribute merge announcement Message-ID: <1363251187.152.YahooMailNeo@web171404.mail.ir2.yahoo.com> Carl Meyer <carl at oddbird.net> writes: > > On 03/13/2013 06:54 PM, PJ Eby wrote: > > Jason Coombs (head of the Distribute project) and I are working on > > merging the bulk of the improvements distribute made into the > > setuptools code base. He has volunteered to take over maintenance of > > setuptools, and I welcome his assistance. 
> > This merge is very good news for Python packaging. Kudos and thanks to > you and Jason. > > This also seems like an appropriate opportunity to say: thank you for > building the first working packaging system for Python (the one that all > popular packaging tools are still based on), and for all the work you've > put into it over the years. > Emphatic +1! Regards, Vinay Sajip From agroszer.ll at gmail.com Thu Mar 14 10:05:39 2013 From: agroszer.ll at gmail.com (Adam GROSZER) Date: Thu, 14 Mar 2013 10:05:39 +0100 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> Message-ID: <514192E3.6060001@gmail.com> Hello, On 03/14/2013 01:57 AM, Jason R. Coombs wrote: > As PJE mentioned in his e-mail, he and I have been working on a merge of > the code lines of Setuptools and Distribute. I'm excited about this > transition and I hope you are too. I think I can offer you some help by providing some Windows support, in the form of testing with various Python versions and building binary packages/installers. Note, current tests are failing... http://winbot.zope.org/builders -- Best regards, Adam GROSZER -- Quote of the day: Men show their character in nothing more clearly than by what they think laughable. - Goethe From p.f.moore at gmail.com Thu Mar 14 10:33:09 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 14 Mar 2013 09:33:09 +0000 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: On 14 March 2013 00:54, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. Absolutely fantastic news!!! Thanks to both Jason and PJE for making this happen. 
Paul From jim at zope.com Thu Mar 14 12:36:37 2013 From: jim at zope.com (Jim Fulton) Date: Thu, 14 Mar 2013 07:36:37 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 8:54 PM, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. He has volunteered to take over maintenance of > setuptools, and I welcome his assistance. I appreciate the > contributions made by the distribute maintainers over the years, and > am glad to have Jason's help in getting those contributions into > setuptools as well. Continuing to keep the code bases separate isn't > helping anybody, and as setuptools moves once again into active > development to deal with the upcoming shifts in the Python-wide > packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it > makes sense to combine efforts. That's awesome news. Thanks Phillip and Jason! Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton Jerky is better than bacon! http://zo.pe/Kqm From jim at zope.com Thu Mar 14 12:37:44 2013 From: jim at zope.com (Jim Fulton) Date: Thu, 14 Mar 2013 07:37:44 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <5141616F.50104@oddbird.net> References: <5141616F.50104@oddbird.net> Message-ID: On Thu, Mar 14, 2013 at 1:34 AM, Carl Meyer wrote: > On 03/13/2013 06:54 PM, PJ Eby wrote: >> Jason Coombs (head of the Distribute project) and I are working on >> merging the bulk of the improvements distribute made into the >> setuptools code base. He has volunteered to take over maintenance of >> setuptools, and I welcome his assistance. > > This merge is very good news for Python packaging. Kudos and thanks to > you and Jason. 
> > This also seems like an appropriate opportunity to say: thank you for > building the first working packaging system for Python (the one that all > popular packaging tools are still based on), and for all the work you've > put into it over the years. Yup. Well said. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton Jerky is better than bacon! http://zo.pe/Kqm From pnasrat at gmail.com Thu Mar 14 13:08:21 2013 From: pnasrat at gmail.com (Paul Nasrat) Date: Thu, 14 Mar 2013 08:08:21 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: This is great news from the pip/virtualenv front. Thanks to both you for setuptools and the distribute contributors for all your work so far. Paul On 13 March 2013 20:54, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. He has volunteered to take over maintenance of > setuptools, and I welcome his assistance. I appreciate the > contributions made by the distribute maintainers over the years, and > am glad to have Jason's help in getting those contributions into > setuptools as well. Continuing to keep the code bases separate isn't > helping anybody, and as setuptools moves once again into active > development to deal with the upcoming shifts in the Python-wide > packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it > makes sense to combine efforts. > > Aside from the problems experienced by people with one package that > are fixed in the other, the biggest difficulties with the fork right > now are faced by the maintainers of setuptools-driven projects like > pip, virtualenv, and buildout, who have to either take sides in a > conflict, or spend additional time and effort testing and integrating > with both setuptools and distribute. 
> [...] 
> > Thank you once again to all the distribute contributors, for the many > fine improvements you've made to the setuptools package over the > years, and I hope that you'll continue to make them in the future. > (Especially as I begin to phase myself out of an active role in the > project!) > > I now want to turn the floor over to Jason, who's put together a > Roadmap/FAQ for what's going to be happening with the project going > forward. We'll then both be here in the thread to address any > questions or concerns you might have. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at zope.com Thu Mar 14 13:26:07 2013 From: jim at zope.com (Jim Fulton) Date: Thu, 14 Mar 2013 08:26:07 -0400 Subject: [Distutils] [Catalog-sig] Packaging & Distribution Mini-Summit at PyCon US In-Reply-To: References: Message-ID: On Thu, Feb 7, 2013 at 10:19 AM, Jim Fulton wrote: > On Wed, Feb 6, 2013 at 3:15 AM, Nick Coghlan wrote: >> As folks may be aware, I am moderating a panel called "Directions in >> Packaging" on the Saturday afternoon at PyCon US. >> >> Before that though, I am also organising what I am calling a >> "Packaging & Distribution Mini-Summit" as an open space on the Friday >> night (we have one of the larger open space rooms reserved, so we >> should have a fair bit of space if a decent crowd turns up). > > I wasn't going to be at PyCon, but I changed my plans specifically to > participate in this. Thanks for setting this up. > >> An overview of what I'm hoping we can achieve at the session is at >> https://us.pycon.org/2013/community/openspaces/packaginganddistributionminisummit/ >> (that page should be editable by anyone that has registered for PyCon >> US). > > Cool. 
A major difficulty in these sorts of discussions is that people > have different problems they want to solve and argue about solutions > without clearly stating their problems. > > If you don't mind, I'll try to find some time in the next few days to > add a section > to that page to list goals/problems. OK, well, hopefully better late than never. I took a stab at adding this to the end of: https://us.pycon.org/2013/community/openspaces/packaginganddistributionminisummit/ Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From pje at telecommunity.com Thu Mar 14 17:25:00 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 14 Mar 2013 12:25:00 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <514192E3.6060001@gmail.com> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> <514192E3.6060001@gmail.com> Message-ID: On Thu, Mar 14, 2013 at 5:05 AM, Adam GROSZER wrote: > I think I can offer you some help, by providing some windows support in the > means of testing with various python versions and building binary > packages/installers. > > Note, current tests are failing... > > http://winbot.zope.org/builders That looks like the same problem other people are seeing; the problem is that the source you're building from lacks a proper setuptools.egg-info/entry_points.txt. Are you building from revision control directly, or from an sdist? A possible workaround is to build with a working version of setuptools or distribute on sys.path when you run setup.py. The distribute modules will get imported, but the egg-info will get picked up from elsewhere, enabling the proper functionality. (Setuptools doesn't have this problem because it includes the entry_points.txt and other critical .egg-info files in its revision control.) 
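The failure mode described in the previous paragraph can be probed directly. Setuptools extends distutils by registering its extra commands (develop, egg_info, and so on) as "distutils.commands" entry points in its egg-info metadata; when entry_points.txt is missing from the copy being imported, those commands are invisible and plain distutils processes the command line instead. A minimal diagnostic sketch, offered here only as an illustration and not code from either project:

```python
# Sketch: list the distutils command extensions registered by the
# active setuptools/distribute installation via entry points. If
# setuptools-specific commands such as "develop" or "egg_info" are
# absent, the egg-info metadata (entry_points.txt) was not found, and
# setup.py falls back to plain distutils -- producing errors such as
# "error: option --single-version-externally-managed not recognized".
import pkg_resources

commands = sorted(
    ep.name for ep in pkg_resources.iter_entry_points("distutils.commands")
)
print(commands)

if {"develop", "egg_info"}.issubset(commands):
    print("setuptools command entry points are visible")
else:
    print("entry points missing: plain distutils fallback is likely")
```

Running this with a working setuptools or distribute on sys.path (the workaround described above) should show the setuptools-provided commands; in the broken bootstrapping situation, the list is limited to the stock distutils commands.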
It would appear that either the entry_points.txt was recently removed, or there was some other workaround for its absence which has recently been removed; I have not had time to investigate, and in any case am not that familiar with the distribute side of things. From tarek at ziade.org Thu Mar 14 17:49:07 2013 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 14 Mar 2013 09:49:07 -0700 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> Message-ID: <5141FF83.1060107@ziade.org> Congrats, This is a good move for packaging. I am very glad the merge is happening, knowing that it's now managed by a community of contributors. Cheers Tarek -- Tarek Ziadé - http://ziade.org - @tarek_ziade From tseaver at palladion.com Thu Mar 14 18:36:11 2013 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 14 Mar 2013 13:36:11 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Thanks for the announcement, and especially thanks to all those who have worked on setuptools and distribute. Particular thanks to PJE for having both devised the thing and worked out how to get the community to adopt it. I'm looking forward to the packaging BoF now, instead of dreading it. Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFCCosACgkQ+gerLs4ltQ7qNgCgx+PDk8GYdmVIq1fbGdvqFt6K EaEAniahF//OJkZQ/LVnJx6m1DqS0r+D =0P09 -----END PGP SIGNATURE----- From tarek at ziade.org Fri Mar 15 07:20:38 2013 From: tarek at ziade.org (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 14 Mar 2013 23:20:38 -0700 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <5141FF83.1060107@ziade.org> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> <5141FF83.1060107@ziade.org> Message-ID: <5142BDB6.10802@ziade.org> On 3/14/13 9:49 AM, Tarek Ziadé wrote: > Congrats, > > This is a good move for packaging. I am very glad the merge is > happening, knowing that it's now managed by a community of contributors. > > Cheers > Tarek > Oh btw, I was told Phillip said in private that he agreed to do the merge as long as I was not involved in it! :) I am totally fine with this, as I am not involved in packaging anymore. But please make sure he doesn't end up being the *only* one maintaining it, because you would end up back at square one: having a project locked up by a single guy. Good luck! Tarek -- Tarek Ziadé - http://ziade.org - @tarek_ziade From giulio.genovese at gmail.com Fri Mar 15 22:46:53 2013 From: giulio.genovese at gmail.com (Giulio Genovese) Date: Fri, 15 Mar 2013 17:46:53 -0400 Subject: [Distutils] distribute does not install with python3 on Ubuntu machine Message-ID: If I run (on an Ubuntu machine) the command: sudo pip install --upgrade distribute Everything goes well and the package installs. 
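For reference on the Python 3 syntax error reported below: `except ValueError, e:` is Python 2-only syntax; the spelling accepted by both Python 2.6+ and Python 3 is `except ValueError as e:`, which is the rewrite 2to3 performs. A minimal illustration, using a hypothetical `parse_int` helper rather than actual distribute code:

```python
# Python 3 rejects the old comma form:   except ValueError, e:
# The form valid on both Python 2.6+ and Python 3 uses "as":
def parse_int(text):
    try:
        return int(text)
    except ValueError as e:  # not "except ValueError, e:" (Py2-only)
        return "bad input: %s" % e

print(parse_int("42"))    # 42
print(parse_int("oops"))  # bad input: invalid literal for int() ...
```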
But if I run the same with Python 3 with: sudo pip-3.2 install --upgrade distribute I get this: "File "setuptools/dist.py", line 103 except ValueError, e: ^ SyntaxError: invalid syntax" I think the problem is that this file uses Python 2-only exception syntax (i.e. "except ValueError, e:" needs to be "except ValueError as e:" for Python 3) and therefore this does not work anymore with Python 3. It should be easy to fix. Furthermore, this problem seems to be widespread within the package. Giulio -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.horn at gmail.com Sun Mar 17 00:03:20 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Sat, 16 Mar 2013 18:03:20 -0500 Subject: [Distutils] python-meta-packaging resource hub Message-ID: Howdy! I've been lurking on the list for a while, but have been pretty quiet so far. I was watching the live stream of the PyCon packaging panel today and the python-meta-packaging resource hub idea was mentioned, which I hadn't heard of before, and was spurred to action. So I created a skeleton Sphinx project in a fork of this project here: https://bitbucket.org/khorn/python-meta-packaging Sadly, bitbucket won't let me do a pull request for some reason. If anyone knows why that is, feel free to let me know. (FYI I think it's because the main repo has no commits in it, which makes the "Create a pull request" page not load properly.) At any rate, somebody somewhere should feel free to pull it into the main repo, so we can get things moving on that front. I'm happy to help out where I can. -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Mar 17 01:06:37 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 16 Mar 2013 17:06:37 -0700 Subject: [Distutils] pip merges wheel Message-ID: Earlier today we merged the existing wheel branch into mainline pip. 
This adds opt-in wheel install support (built into pip, "pip install --use-wheel ...") and the convenient "pip wheel ..." command for creating the wheels you need. "pip wheel ..." uses the wheel reference implementation ("pip install wheel") to compile a dependency tree as .whl archives. Used together with "pip install --use-wheel ...", it provides a powerful way to speed up repeated installs and reap other good packaging benefits. I've been using this code in production for months and it works well. I am now a pip maintainer. We are committed to offering excellent wheel support in pip, including a good way to produce and consume the format. In the future we will likely refactor the code to offer the same features with more distlib and less setuptools, but this change will be mostly transparent to the end user. Once everyone is comfortable with the format we will move towards installing wheels by default when they are available, instead of requiring the --use-wheel flag. Enjoy! Please share your experiences with the new feature. The most common issue is that you must install distribute >= 0.6.34 for everything to work well. Daniel Holth From qwcode at gmail.com Sun Mar 17 01:30:05 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 16 Mar 2013 17:30:05 -0700 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: Hello Kevin: I have admin access. I'll look at this in a bit. I'm trying to find a sphinx project that was posted recently that offered an index for compiler tools. It seemed to have a good template we could kickstart with. Can someone point us to where that was? Marcus On Sat, Mar 16, 2013 at 4:03 PM, Kevin Horn wrote: > Howdy! > > I've been lurking on the list for a while, but have been pretty quiet so > far. > > I was watching the live stream of the PyCon packaging panel today and the > python-meta-packaging resource hub idea was mentioned, which I hadn't heard > of before, and was spurred to action. 
> > So I created a skeleton Sphinx project in a fork of this project here: > > https://bitbucket.org/khorn/python-meta-packaging > > Sadly, bitbucket won't let me do a pull request for some reason. If > anyone knows why that is, feel free to let me know. > (FYI I think it's because the main repo has no commits in it, which makes > the "Create a pull request not load properly.) > > At any rate, somebody somewhere should feel free pull it into the main > repo, so we can get things moving on that front. I'm happy to help out > where I can. > > -- > Kevin Horn > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sun Mar 17 19:50:28 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 17 Mar 2013 11:50:28 -0700 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: Kevin: I added an initial readme, so pull requests work now. as for the compiler page I was talking about, here's the email that announced the page, and the github project for the src. http://mail.python.org/pipermail/python-announce-list/2013-February/009777.html it was just a thought, the tag icons and categories being the main reason it popped to mind. If you're into sphinx and want to kickstart something for us like that, then you can now actually submit a pull. barring that, I'd likely just post a simple TOC structure later today or tomorrow that PEP people and project owners can start filling in. Marcus On Sat, Mar 16, 2013 at 5:30 PM, Marcus Smith wrote: > Hello Kevin: > I have admin access. I'll look at this in a bit. > I'm trying to find a sphinx project that was posted recently that offered > an index for compiler tools. > that seemed to have a good template we could kickstart with > can someone point us to where that was?
> Marcus > > On Sat, Mar 16, 2013 at 4:03 PM, Kevin Horn wrote: > >> Howdy! >> >> I've been lurking on the list for a while, but have been pretty quiet so >> far. >> >> I was watching the live stream of the PyCon packaging panel today and the >> pytjon-meta-packaging resource hub idea was mentioned, which I hadn't heard >> of before, and was spurred to action. >> >> So I created a skeleton Sphinx project in a fork of this project here: >> >> https://bitbucket.org/khorn/python-meta-packaging >> >> Sadly, bitbucket won't let me do a pull request for some reason. If >> anyone knows why that is, feel free to let me know. >> (FYI I think it's because the main repo has no commits in it, which makes >> the "Create a pull request not load properly.) >> >> At any rate, somebody somewhere should feel free pull it into the main >> repo, so we can get things moving on that front. I'm happy to help out >> where I can. >> >> -- >> Kevin Horn >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.horn at gmail.com Mon Mar 18 01:48:54 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Sun, 17 Mar 2013 19:48:54 -0500 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: On Mar 17, 2013 1:50 PM, "Marcus Smith" wrote: > > Kevin: > I added an initial readme, so pull requests work now. Good news. I'll send a pull req in a little while. > as for the compiler page I was talking about, here's the email that announced the page, and the github project for the src. > > http://mail.python.org/pipermail/python-announce-list/2013-February/009777.html > > it was just a thought. that tag icons and categories being the main reason it popped to mind. 
> If you're into sphinx and want to kickstart something for us like that, then you can now actually submit a pull. I've worked with Sphinx quite a bit. I'll check it out and see what I can manage. > barring that, I'll would likely just post a simple TOC structure later today or tomorrow that PEP people and project owners can start filling in. > Marcus > > > On Sat, Mar 16, 2013 at 5:30 PM, Marcus Smith wrote: >> >> Hello Kevin: >> I have admin access. I'll look at this in a bit. >> I'm trying to find a sphinx project that was posted recently that offered an index for compiler tools. >> that seemed to have a good template we could kickstart with >> can someone point us to where that was? >> Marcus >> >> On Sat, Mar 16, 2013 at 4:03 PM, Kevin Horn wrote: >>> >>> Howdy! >>> >>> I've been lurking on the list for a while, but have been pretty quiet so far. >>> >>> I was watching the live stream of the PyCon packaging panel today and the pytjon-meta-packaging resource hub idea was mentioned, which I hadn't heard of before, and was spurred to action. >>> >>> So I created a skeleton Sphinx project in a fork of this project here: >>> >>> https://bitbucket.org/khorn/python-meta-packaging >>> >>> Sadly, bitbucket won't let me do a pull request for some reason. If anyone knows why that is, feel free to let me know. >>> (FYI I think it's because the main repo has no commits in it, which makes the "Create a pull request not load properly.) >>> >>> At any rate, somebody somewhere should feel free pull it into the main repo, so we can get things moving on that front. I'm happy to help out where I can. >>> >>> -- >>> Kevin Horn >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> http://mail.python.org/mailman/listinfo/distutils-sig >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin.horn at gmail.com Mon Mar 18 03:10:10 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Sun, 17 Mar 2013 21:10:10 -0500 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: On Sun, Mar 17, 2013 at 7:48 PM, Kevin Horn wrote: > > On Mar 17, 2013 1:50 PM, "Marcus Smith" wrote: > > > > Kevin: > > I added an initial readme, so pull requests work now. > > Good news. I'll send a pull req in a little while. > OK, so initially I ran into "repository is unrelated", apparently because I forked before there were any commits in the upstream repo. So I re-forked, moved over my changes, and tried to make another pull request, and now I get a big "Access Denied" message. Is there some permission I need just to submit a pull request? I've never run into that before... -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.horn at gmail.com Mon Mar 18 03:13:22 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Sun, 17 Mar 2013 21:13:22 -0500 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: On Sun, Mar 17, 2013 at 7:48 PM, Kevin Horn wrote: > > On Mar 17, 2013 1:50 PM, "Marcus Smith" wrote: > > > > Kevin: > > I added an initial readme, so pull requests work now. > > Good news. I'll send a pull req in a little while. > > > as for the compiler page I was talking about, here's the email that > announced the page, and the github project for the src. > > > > > http://mail.python.org/pipermail/python-announce-list/2013-February/009777.html > > > > it was just a thought. that tag icons and categories being the main > reason it popped to mind. > > If you're into sphinx and want to kickstart something for us like that, > then you can now actually submit a pull. > > I've worked with Sphinx quite a bit. I'll check it out and see what I can > manage. 
> > > barring that, I'll would likely just post a simple TOC structure later > today or tomorrow that PEP people and project owners can start filling in. > > Marcus > > > > I checked out the compiler page a bit, and it looks like they're using markdown and pandoc to build that site. We can probably put something together that is somewhat similar, if we like, though it would be helpful to know what features of that site were the ones that people liked. I can put together a Sphinx theme if we want a custom one, and maybe some custom ReST directives as Sphinx extensions if we want to go that far. What is it that people want to see with this site? -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.horn at gmail.com Mon Mar 18 03:33:05 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Sun, 17 Mar 2013 21:33:05 -0500 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: On Sun, Mar 17, 2013 at 9:10 PM, Kevin Horn wrote: > On Sun, Mar 17, 2013 at 7:48 PM, Kevin Horn wrote: > >> >> On Mar 17, 2013 1:50 PM, "Marcus Smith" wrote: >> > >> > Kevin: >> > I added an initial readme, so pull requests work now. >> >> Good news. I'll send a pull req in a little while. >> > > OK, so initially I ran into "repository is unrelated", apparently because > I forked before there were any commits in the upstream repo. > > So I re-forked, moved over my changes, and tried to make another pull > request, and now I get a big "Access Denied" message. > > Is there some permission I need just to submit a pull request? I've never > run into that before... > > Ugh, ignore this. I was clicking the pull request button in the wrong window... Pull request sent. (finally) -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Mon Mar 18 04:38:53 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 17 Mar 2013 20:38:53 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: the pip docs have a cookbook entry now for the wheel support http://www.pip-installer.org/en/latest/cookbook.html#building-and-installing-wheels the usage reference is up to date as well http://www.pip-installer.org/en/latest/usage.html Marcus On Sat, Mar 16, 2013 at 5:06 PM, Daniel Holth wrote: > Earlier today we merged the existing wheel branch into mainline pip. > This adds opt-in wheel install support (built into pip, "pip install > --use-wheel ...") and the convenient "pip wheel ..." command for > creating the wheels you need. > > "pip wheel ..." uses the wheel reference implementation ("pip install > wheel") to compile a dependency tree as .whl archives. Used together > with "pip install --use-wheel ..." it provides a powerful way to speed > up repeated installs and reap other good packaging benefits. I've been > using this code in production for months and it works well. > > I am now a pip maintainer. We are committed to offering excellent > wheel support in pip including a good way to produce and consume the > format. In the future we will likely refactor the code to offer the > same features with more distlib and less setuptools but this change > will be mostly transparent to the end user. Once everyone is > comfortable with the format we will move towards installing wheels by > default when they are available instead of requiring the --use-wheel > flag. > > Enjoy! Please share your experiences with the new feature. The most > common issue is that you must install distribute >= 0.6.34 for > everything to work well. 
> > Daniel Holth > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Mon Mar 18 07:04:51 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 17 Mar 2013 23:04:51 -0700 Subject: [Distutils] python-meta-packaging resource hub In-Reply-To: References: Message-ID: ok, project is initialized (thanks kevin) and a tentative TOC is in place, enough at least to get the various players to start making pull requests for the portions they own or have expertise in. see here: https://python-meta-packaging.readthedocs.org/en/latest/# P.S. Kevin, I guess follow the updates, and if you have ideas to better organize or present the content, feel free to submit pulls. -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Mon Mar 18 10:08:51 2013 From: glyph at twistedmatrix.com (Glyph) Date: Mon, 18 Mar 2013 02:08:51 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Mar 16, 2013, at 5:06 PM, Daniel Holth wrote: > Earlier today we merged the existing wheel branch into mainline pip. > This adds opt-in wheel install support (built into pip, "pip install > --use-wheel ...") and the convenient "pip wheel ..." command for > creating the wheels you need. Hi! I don't really understand all the issues that lead to the creation of the Wheel format, and I don't care to. But, I _very_ much would like some of the advertised benefits, such as being able to install binary packages on Windows. Well, _I_ don't care about installing binary packages on Windows. But I want users of Twisted on Windows to be able to 'pip install twisted' and just get a working install without having to learn about MSVCRT versions and the various miseries thereof. 
I am quite excited about the possibility that such a situation might be near our grasp. My understanding is that in order to achieve this nirvana, what we must do is: A twisted developer, on each supported Windows configuration, must 'pip install wheel; pip wheel ./Twisted' and place that build artifact on PyPI. Make our dependencies do the same thing. Tell our users to do 'pip install --use-wheel twisted' I have two questions: first, is this sequence of steps accurate, and if it is, why is '--use-wheel' not just the default? Does this option just mean '--please-work --no-really' or is there some functional change in behavior in using wheels that might cause problems? Thanks a lot! -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Mar 18 11:22:25 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 18 Mar 2013 10:22:25 +0000 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On 18 March 2013 09:08, Glyph wrote: > My understanding is that in order to achieve this nirvana, what we must do > is: (Daniel may wish to chime in with more details) > A twisted developer, on each supported Windows configuration, must 'pip > install wheel; pip wheel ./Twisted' and place that build artifact on PyPI. To create the wheel, just run pip wheel , as you say on each supported Windows configuration. And it doesn't have to just be Windows - if you care to, builds for other platforms can also be created the same way. This may save on the "you need the following dev packages installed" type of FAQs. > Make our dependencies do the same thing. If the dependencies don't, install from source will still work, so it's probably not crucial for pure Python dependencies. > Tell our users to do 'pip install --use-wheel twisted' > > I have two questions: first, is this sequence of steps accurate, See above, but basically yes. > and if it > is, why is '--use-wheel' not just the default? 
Does this option just mean > '--please-work --no-really' or is there some functional change in behavior > in using wheels that might cause problems? It simply reflects that the wheel format and pip's support of it is relatively new, and so somewhat experimental. There's no functional change other than the fact that you get a wheel install if one's available - or at least there shouldn't be :-) In due course, I expect --use-wheel to become the default. We may offer a --no-wheel option at that stage, if it seems useful. Paul From ncoghlan at gmail.com Mon Mar 18 15:46:47 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 07:46:47 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Sun, Mar 17, 2013 at 8:38 PM, Marcus Smith wrote: > the pip docs have a cookbook entry now for the wheel support > http://www.pip-installer.org/en/latest/cookbook.html#building-and-installing-wheels > > the usage reference is up to date as well > http://www.pip-installer.org/en/latest/usage.html Great news! Given Glyph's questions, It may be a good idea to add a "publishing wheels" entry to the cookbook, explaining: 1. After building the wheel, publishing it to PyPI means users on a matching system can download and install it without building it 2. The current caveats on handling C extensions that are *wrappers* around shared libraries/DLLs that are expected to be installed on the target system, rather than CPython-only C extensions. (Specifically, bundled DLLs on Windows should work just fine, but depending on system binaries on *nix systems can cause problems if other users have the right SO version installed, and bundling on *nix systems may require tweaking of the environment to find the bundled SO instead of only looking in the system paths) 3. Users currently need to add a "--use-wheel" flag to enable the wheel support. 
It is expected that using wheels will become the default behaviour in the future, but we want it to be opt in until it has seen some more widespread real world usage. Cheers, Nick. > > Marcus > > > On Sat, Mar 16, 2013 at 5:06 PM, Daniel Holth wrote: >> >> Earlier today we merged the existing wheel branch into mainline pip. >> This adds opt-in wheel install support (built into pip, "pip install >> --use-wheel ...") and the convenient "pip wheel ..." command for >> creating the wheels you need. >> >> "pip wheel ..." uses the wheel reference implementation ("pip install >> wheel") to compile a dependency tree as .whl archives. Used together >> with "pip install --use-wheel ..." it provides a powerful way to speed >> up repeated installs and reap other good packaging benefits. I've been >> using this code in production for months and it works well. >> >> I am now a pip maintainer. We are committed to offering excellent >> wheel support in pip including a good way to produce and consume the >> format. In the future we will likely refactor the code to offer the >> same features with more distlib and less setuptools but this change >> will be mostly transparent to the end user. Once everyone is >> comfortable with the format we will move towards installing wheels by >> default when they are available instead of requiring the --use-wheel >> flag. >> >> Enjoy! Please share your experiences with the new feature. The most >> common issue is that you must install distribute >= 0.6.34 for >> everything to work well.
>> >> Daniel Holth >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From Steve.Dower at microsoft.com Mon Mar 18 17:34:25 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Mon, 18 Mar 2013 16:34:25 +0000 Subject: [Distutils] self.introduce(distutils-sig) Message-ID: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Hi all I just joined up after the various discussions at PyCon and wanted to say hi. (If you were also there and want to put a face/voice to the name, I did the Visual Studio demo at one of the lightning talks.) The main reason I want to get involved is the openly acknowledged lack of Windows expertise that's available. I work at Microsoft and part of my job description is to contribute code/testing/time/documentation/help/etc. to CPython. (I can also do testing/time/help for other projects, but copyrightable artifacts are more complicated and, for now, not okay with our lawyers.) I expect I'll mainly be lurking until I can be useful, which is why I wanted to start with this post. I'm pretty good with Windows, and I have direct access to all the experts and internal mailing lists. So just shout out when something comes up and I'll be happy to clarify or research an answer. Cheers, Steve -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From regebro at gmail.com Mon Mar 18 22:14:12 2013 From: regebro at gmail.com (Lennart Regebro) Date: Mon, 18 Mar 2013 14:14:12 -0700 Subject: [Distutils] distribute does not install with python3 on Ubuntu machine In-Reply-To: References: Message-ID: On Fri, Mar 15, 2013 at 2:46 PM, Giulio Genovese wrote: > sudo pip-3.2 install --upgrade distribute > I get this: > "File "setuptools/dist.py", line 103 > except ValueError, e: You can't upgrade distribute with pip under Python 3. This is a known problem. https://github.com/pypa/pip/issues/650 > I think the problem is that someone recently forgot to put the > parentheses (i.e. "except ValueError, e:" should be "except (ValueError, > e):") and therefore this does not work anymore with python3. No, the syntax under Python 3 is except ValueError as e, but that doesn't work with Python 2.4 and Python 2.5 and we are still supporting them. > It should be easy to fix. It isn't. The solution is to not try to upgrade distribute with pip under Python 3. Uninstall it and install it again instead. //Lennart From regebro at gmail.com Mon Mar 18 22:16:50 2013 From: regebro at gmail.com (Lennart Regebro) Date: Mon, 18 Mar 2013 14:16:50 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Sat, Mar 16, 2013 at 5:06 PM, Daniel Holth wrote: > Earlier today we merged the existing wheel branch into mainline pip. > This adds opt-in wheel install support (built into pip, "pip install > --use-wheel ...") and the convenient "pip wheel ..." command for > creating the wheels you need. I still think it is unfortunate that we are starting to extend pip to be a tool for developers to create distributions. It would be better if pip was kept as an install tool, and we added the utilities for creating distributions separate. The other option is of course that we start adding all sorts of development commands to pip, such as build, test, sdist etc. But I do think it's the wrong place.
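As a footnote to the except-clause discussion above: code that has to parse on Python 2.4 through 3.x can avoid both the 2.x-only `except ValueError, e:` spelling and the `except ValueError as e:` spelling (a syntax error before 2.6) by retrieving the active exception with `sys.exc_info()`. A minimal sketch; the function name here is just for illustration:

```python
import sys

def to_int(text):
    """Parse text as an int, reporting failures in a 2.4-through-3.x way."""
    try:
        return int(text)
    except ValueError:
        # sys.exc_info()[1] is the exception instance currently being
        # handled; this works identically on old Python 2 and Python 3.
        e = sys.exc_info()[1]
        return "invalid literal: %s" % (e,)
```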
//Lennart From aclark at aclark.net Mon Mar 18 22:26:07 2013 From: aclark at aclark.net (Alex Clark) Date: Mon, 18 Mar 2013 17:26:07 -0400 Subject: [Distutils] self.introduce(distutils-sig) References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On 2013-03-18 16:34:25 +0000, Steve Dower said: > Hi all > > I just joined up after the various discussions at PyCon and wanted to > say hi. (If you were also there and want to put a face/voice to the > name, I did the Visual Studio demo at one of the lightning talks.) > > The main reason I want to get involved is the openly acknowledged lack > of Windows expertise that's available. I work at Microsoft and part of > my job description is to contribute > code/testing/time/documentation/help/etc. to CPython. (I can also do > testing/time/help for other projects, but copyrightable artifacts are > more complicated and, for now, not okay with our lawyers.) > > I expect I'll mainly be lurking until I can be useful, which is why I > wanted to start with this post. I'm pretty good with Windows, and I > have direct access to all the experts and internal mailing lists. So > just shout out when something comes up and I'll be happy to clarify or > research an answer. Welcome! > > Cheers, > Steve > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Alex Clark - http://about.me/alex.clark From ncoghlan at gmail.com Mon Mar 18 23:04:06 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 15:04:06 -0700 Subject: [Distutils] Parallel installation of incompatible versions Message-ID: pkg_resources.requires() is our only current solution for parallel installation of incompatible versions.
This can be made to work and is a lot better than the nothing we had before it was created, but also has quite a few issues (and it can be a nightmare to debug when it goes wrong). Based on the exchanges with Mark McLoughlin the other week, and chatting to Matthias Klose here at the PyCon US sprints, I think I have a design that will let us support parallel installs in a way that builds on existing standards, while behaving more consistently in edge cases and without making sys.path ridiculously long even in systems with large numbers of potentially incompatible dependencies. The core of this proposal is to create an updated version of the installation database format that defines semantics for *.pth files inside .dist-info directories. Specifically, whereas *.pth files directly in site-packages are processed automatically when Python starts up, those inside dist-info directories would be processed only when explicitly requested (probably through a new distlib API). The processing of the *.pth file would insert it into the path immediately before the path entry containing the .dist-info directory (this is to avoid an issue with the pkg_resources insert-at-the-front-of-sys.path behaviour where system packages can end up shadowing those from a local source checkout, without running into the issue with append-to-the-end-of-sys.path where a specifically requested version is shadowed by a globally installed version) To use CherryPy2 and CherryPy3 on Fedora as an example, what this would allow is for CherryPy3 to be installed normally (i.e. directly in site-packages), while CherryPy2 would be installed as a split install, with the .dist-info going into site-packages and the actual package going somewhere else (more on that below). A cherrypy2.pth file inside the dist-info directory would reference the external location where cherrypy 2.x can be found. 
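The path-insertion rule described above (splice the .pth entries in just before the sys.path entry that holds the .dist-info directory, so they beat a globally installed version without shadowing a local checkout) can be sketched in a few lines. Note that `activate_pth` and its arguments are hypothetical names for illustration, not a real distlib API:

```python
def activate_pth(path_entries, dist_info_parent, pth_lines):
    """Return a copy of path_entries (e.g. sys.path) with the entries
    named in a dist-info *.pth file inserted immediately before
    dist_info_parent, the path entry containing the .dist-info
    directory.  Blank lines and comments in the .pth file are skipped."""
    new_path = list(path_entries)
    where = new_path.index(dist_info_parent)
    for line in pth_lines:
        entry = line.strip()
        if entry and not entry.startswith('#'):
            new_path.insert(where, entry)
            where += 1
    return new_path
```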
To use this at runtime, you would do something like: distlib.some_new_requires_api("CherryPy (2.2)") import cherrypy The other part of this question is how to avoid the potential explosion of one sys.path entry per dependency. The first part of that is that for cases where there is no incompatible version installed, there won't be a *.pth file, and hence no extra sys.path entry (the module/package will just be installed directly into site-packages as usual). The second part has to do with a possible way to organise the versioned installs: group them by the initial fragment of the version number according to semantic versioning. For example, define a "versioned-packages" directory that sits adjacent to "site-packages". When doing the parallel install of CherryPy2 the actual *code* would be installed into "versioned-packages/2/", with the cherrypy2.pth file pointing to that directory. For 0.x releases, there would be a directory per minor version, while for higher releases, there would only be a directory per major version. The nice thing though is that Python wouldn't actually care about the actual layout of the installed versions, so long as the *.pth files in the dist-info directories described the mapping correctly. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Mon Mar 18 23:13:29 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 18 Mar 2013 15:13:29 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: > I still think it is unfortunate that we are starting to extend pip to > be a tool for developers to create distributions. It would be better > of pip was kept as an install tool, and we added the utilities for > creating distributions separate. 
> I understand where you're coming from, but a few thoughts: 1) pip is *currently* very much a build tool in that it builds/installs from source archives, but I understand the new model is for pip to eventually be working with pre-built wheels much of the time, with no build system required. 2) the motivation for "pip wheel" is *not* really for building single wheels for the project you're developing. For that use case, I agree it makes more sense conceptually to install the "wheel" package and use its "bdist_wheel" setuptools extension (i.e. "python setup.py bdist_wheel" ) 3) the real motivation for "pip wheel" (which is a builder convenience tool) is to help people *install* from wheels *now* given that pypi won't be full of wheels for a time to come. This allows people to quickly and easily get all their source archive dependencies converted with a single command (if you're using requirements files) and start gaining the benefits of wheel with very little fiddling. 4) even when pypi is full of wheels, I can imagine people wanting to build certain dependencies with different build options, and "pip wheel" could help with that. 5) I can imagine "pip wheel" disappearing at some point down the road. We'll have to see, but until then, "pip wheel" is going to be very critical IMO in getting people familiar with and actually using wheel-based installs. > The other option is of course that we start adding all sorts of > development commands to pip, such as build, test, sdist etc. But I do > think it's the wrong place. > I've thought of that too, but that's a discussion for another day or parallel universe. Marcus -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Mon Mar 18 23:25:52 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 15:25:52 -0700 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Mon, Mar 18, 2013 at 9:34 AM, Steve Dower wrote: > I expect I?ll mainly be lurking until I can be useful, which is why I wanted > to start with this post. I?m pretty good with Windows, and I have direct > access to all the experts and internal mailing lists. So just shout out when > something comes up and I?ll be happy to clarify or research an answer. Great to hear and thanks :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Mar 18 23:31:55 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 15:31:55 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 3:13 PM, Marcus Smith wrote: >> The other option is of course that we start adding all sorts of >> development commands to pip, such as build, test, sdist etc. But I do >> think it's the wrong place. > > > I've thought of that too, but that's a discussion for another day or > parallel universe. The meta-build hooks are definitely a topic for post-metadata-2.0 :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From eric at trueblade.com Mon Mar 18 23:51:00 2013 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 18 Mar 2013 18:51:00 -0400 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: <51479A54.7020606@trueblade.com> On 3/18/2013 5:16 PM, Lennart Regebro wrote: > On Sat, Mar 16, 2013 at 5:06 PM, Daniel Holth wrote: >> Earlier today we merged the existing wheel branch into mainline pip. 
>> This adds opt-in wheel install support (built into pip, "pip install >> --use-wheel ...") and the convenient "pip wheel ..." command for >> creating the wheels you need. > > I still think it is unfortunate that we are starting to extend pip to > be a tool for developers to create distributions. It would be better > of pip was kept as an install tool, and we added the utilities for > creating distributions separate. I completely agree. And those tools to build distributions could be downloaded separately. -- Eric. From ncoghlan at gmail.com Tue Mar 19 00:13:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 16:13:46 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: <51479A54.7020606@trueblade.com> References: <51479A54.7020606@trueblade.com> Message-ID: On Mon, Mar 18, 2013 at 3:51 PM, Eric V. Smith wrote: > On 3/18/2013 5:16 PM, Lennart Regebro wrote: >> On Sat, Mar 16, 2013 at 5:06 PM, Daniel Holth wrote: >>> Earlier today we merged the existing wheel branch into mainline pip. >>> This adds opt-in wheel install support (built into pip, "pip install >>> --use-wheel ...") and the convenient "pip wheel ..." command for >>> creating the wheels you need. >> >> I still think it is unfortunate that we are starting to extend pip to >> be a tool for developers to create distributions. It would be better >> of pip was kept as an install tool, and we added the utilities for >> creating distributions separate. > > I completely agree. And those tools to build distributions could be > downloaded separately. As Marcus already noted, we cannot achieve this nirvana until the primary distribution format for Python software is something other than source archives. pip's support of the wheel format is a necessary step on that path - it's already a build system *because* our current "installation" command is "./setup.py install", and handling that command invokes the build system. 
Eventually I expect pip will grow a "--wheel-only" option to run it in strict "installer only" mode, but the ecosystem is a long way from supporting that being a useful option (especially since there are some cases which will still require falling back to the "build from source" model). Cheers, Nick. > > -- > Eric. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at python.org Tue Mar 19 00:37:43 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 18 Mar 2013 16:37:43 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: <20130318163743.20621c6c@anarchist> On Mar 18, 2013, at 02:16 PM, Lennart Regebro wrote: >I still think it is unfortunate that we are starting to extend pip to >be a tool for developers to create distributions. It would be better >if pip was kept as an install tool, and we added the utilities for >creating distributions separately. +1. Doesn't this violate Nick's mission for killing `setup.py install` which IIUC is motivated by wanting to separate building and installing? I really want pip to just be about installing and use other tools to build source and binary distributions. -Barry From barry at python.org Tue Mar 19 00:39:05 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 18 Mar 2013 16:39:05 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: <51479A54.7020606@trueblade.com> Message-ID: <20130318163905.31e8a643@anarchist> On Mar 18, 2013, at 04:13 PM, Nick Coghlan wrote: >Eventually I expect pip will grow a "--wheel-only" option to run it in >strict "installer only" mode, but the ecosystem is a long way from >supporting that being a useful option (especially since there are some >cases which will still require falling back to the "build from source" >model).
If that's the end goal, then it should be the default now. -Barry From donald at stufft.io Tue Mar 19 00:41:12 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 18 Mar 2013 19:41:12 -0400 Subject: [Distutils] pip merges wheel In-Reply-To: <20130318163905.31e8a643@anarchist> References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> Message-ID: <88CAEB24-80B5-45F5-ADC0-2F112CB12D5E@stufft.io> On Mar 18, 2013, at 7:39 PM, Barry Warsaw wrote: > On Mar 18, 2013, at 04:13 PM, Nick Coghlan wrote: > >> Eventually I expect pip will grow a "--wheel-only" option to run it in >> strict "installer only" mode, but the ecosystem is a long way from >> supporting that being a useful option (especially since there are some >> cases which will still require falling back to the "build from source" >> model). > > If that's the end goal, then it should be the default now. > > -Barry > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig --wheel-only as the default now would make approximately 3 things installable from PyPI, one of which is wheel itself. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From regebro at gmail.com Tue Mar 19 00:48:04 2013 From: regebro at gmail.com (Lennart Regebro) Date: Mon, 18 Mar 2013 16:48:04 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 3:13 PM, Marcus Smith wrote: > 1) pip is *currently* very much a build tool in that it build/installs from > source archives, but I understand the new model is for pip to eventually be > working with pre-built wheels much of the time, with no build system > required. 
I used the word "build tool" in the packaging summit, that was the wrong word. My point is that it is currently a tool to *install* distributions as opposed to *create* distributions. I think it's best if it is kept that way. I'd rather see the wheel command go into another tool used to build distributions (and perhaps upload them to the cheeseshop). [ I overheard the proposal to call such a tool "pup". ;-) ] //Lennart From ncoghlan at gmail.com Tue Mar 19 00:52:05 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 16:52:05 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: <20130318163905.31e8a643@anarchist> References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> Message-ID: On Mon, Mar 18, 2013 at 4:39 PM, Barry Warsaw wrote: > On Mar 18, 2013, at 04:13 PM, Nick Coghlan wrote: > >>Eventually I expect pip will grow a "--wheel-only" option to run it in >>strict "installer only" mode, but the ecosystem is a long way from >>supporting that being a useful option (especially since there are some >>cases which will still require falling back to the "build from source" >>model). > > If that's the end goal, then it should be the default now. No, user experience is king. Right now, defaulting to wheel-only would be an awful user experience (because you wouldn't be able to install anything), as well as being completely backwards incompatible with the current behaviour (because everything would break). Regards, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Mar 19 00:59:27 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 16:59:27 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 4:48 PM, Lennart Regebro wrote: > On Mon, Mar 18, 2013 at 3:13 PM, Marcus Smith wrote: >> 1) pip is *currently* very much a build tool in that it build/installs from >> source archives, but I understand the new model is for pip to eventually be >> working with pre-built wheels much of the time, with no build system >> required. > > I used the word "build tool" in the packaging summit, that was the wrong word. > My point is that it is currently a tool to *install* distributions as > opposed to *create* distributions. I think it's best if it is kept > that way. I'd rather see the wheel command go into another tool used > to build distributions (and perhaps upload them to the cheeseshop). No, that's not the intended use case of "pip wheel". It's intended for use as a multi-stage installation tool. Stage 1: install from sdist on a staging server which has development dependencies installed Stage 2: upload to a private PyPI index you control (perhaps just a directory published internally over HTTP) Stage 3: install from wheel on production servers without development dependencies For actual development use, it makes a lot more sense to install the wheel project and just use ./setup.py bdist_wheel. Cheers, Nick. > > [ I overheard the proposal to call such a tool "pup".
;-) ] > > //Lennart > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Mar 19 01:28:00 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 18 Mar 2013 20:28:00 -0400 Subject: [Distutils] pip merges wheel In-Reply-To: <51479A54.7020606@trueblade.com> References: <51479A54.7020606@trueblade.com> Message-ID: I do understand the confusion. Binary package formats have more than one use. Coincidentally we have implemented the slightly different "cache compiles" and "distribute software" features using the same format. It might help if you can imagine that "pip wheel" produces a different format than "python setup.py bdist_wheel upload". Daniel From ncoghlan at gmail.com Tue Mar 19 02:15:05 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 18:15:05 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: <51479A54.7020606@trueblade.com> Message-ID: On Mon, Mar 18, 2013 at 5:28 PM, Daniel Holth wrote: > I do understand the confusion. Binary package formats have more than > one use. Coincidentally we have implemented the slightly different > "cache compiles" and "distribute software" features using the same > format. It might help if you can imagine that "pip wheel" produces a > different format than "python setup.py bdist_wheel upload". I think it helps more to just imagine that it produces the same format, but for different reasons. That particular imagining has the virtue of being accurate :) We do need to do some work on better explaining that wheel has two major use cases, and that these two use cases correspond to "pip wheel" (for caching your own local builds of an sdist) and "./setup.py bdist_wheel upload" for publication of pre-built binaries via PyPI. Cheers, Nick. 
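Both paths described above emit files using the same PEP 427 naming convention, {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl, which is how an installer decides whether a given wheel is compatible. A minimal sketch of pulling those components back out (the helper name is hypothetical):

```python
import re

# PEP 427 wheel filename components:
# {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl
WHEEL_RE = re.compile(
    r"^(?P<name>.+?)-(?P<version>[^-]+)"
    r"(?:-(?P<build>\d[^-]*))?"          # optional build tag starts with a digit
    r"-(?P<pyver>[^-]+)-(?P<abi>[^-]+)-(?P<plat>[^-]+)\.whl$"
)

def parse_wheel_filename(filename):
    """Split a wheel filename into its PEP 427 components (sketch)."""
    m = WHEEL_RE.match(filename)
    if m is None:
        raise ValueError("not a valid wheel filename: %r" % filename)
    return m.groupdict()
```

For example, parsing "pip-1.3.1-py2.py3-none-any.whl" yields the "py2.py3" Python tag, the "none" ABI tag and the "any" platform tag — the same file serves both the local-cache and the publish-to-PyPI use case.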
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From marius at pov.lt Tue Mar 19 07:25:52 2013 From: marius at pov.lt (Marius Gedminas) Date: Tue, 19 Mar 2013 08:25:52 +0200 Subject: [Distutils] distribute does not install with python3 on Ubuntu machine In-Reply-To: References: Message-ID: <20130319062551.GA25562@fridge.pov.lt> On Mon, Mar 18, 2013 at 02:14:12PM -0700, Lennart Regebro wrote: > On Fri, Mar 15, 2013 at 2:46 PM, Giulio Genovese > wrote: > > sudo pip-3.2 install --upgrade distribute > > I get this: > > "File "setuptools/dist.py", line 103 > > except ValueError, e: > > You can't upgrade distribute with pip under Python 3. This is a known problem. > > https://github.com/pypa/pip/issues/650 > > > I think the problem is that someone recently forgot to put the > > parentheses (i.e. "except ValueError, e:" should be "except (ValueError, > > e):") and therefore this does not work anymore with python3. > > No, the syntax under Python 3 is except ValueError as e, but that > doesn't work with Python 2.4 and Python 2.5 and we are still > supporting them. > > > It should be easy to fix. > > It isn't. The solution is to not try to upgrade distribute with pip > under Python 3. Uninstall it and install it again instead. Wouldn't merging Vinay's distribute3 branch fix this? http://mail.python.org/pipermail/distutils-sig/2013-February/019990.html Marius Gedminas -- * philiKON wonders what niemeyer is committing :) *** benji_york is now known as benji murder? -- #zope3-dev -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From agroszer.ll at gmail.com Tue Mar 19 10:06:09 2013 From: agroszer.ll at gmail.com (Adam GROSZER) Date: Tue, 19 Mar 2013 10:06:09 +0100 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> <514192E3.6060001@gmail.com> Message-ID: <51482A81.9060904@gmail.com> Hello, On 03/14/2013 05:25 PM, PJ Eby wrote: > On Thu, Mar 14, 2013 at 5:05 AM, Adam GROSZER wrote: >> I think I can offer you some help, by providing some windows support in the >> means of testing with various python versions and building binary >> packages/installers. >> >> Note, current tests are failing... >> >> http://winbot.zope.org/builders > http://winbot.zope.org/builders/distribute_dev%20py_265_win32/builds/202 > That looks like the same problem other people are seeing; the problem > is that the source you're building from lacks a proper > setuptools.egg-info/entry_points.txt. Are you building from revision > control directly, or from an sdist? It's built from the bitbucket repo to provide the earliest possible warnings. Whatever you push there gets tested. http://winbot.zope.org/builders/distribute_dev%20py_265_win32/builds/202/steps/hg/logs/stdio > > A possible workaround is to build with a working version of setuptools > or distribute on sys.path when you run setup.py. The distribute > modules will get imported, but the egg-info will get picked up from > elsewhere, enabling the proper functionality. (Setuptools doesn't > have this problem because it includes the entry_points.txt and other > critical .egg-info files in its revision control.) It's built with an almost pristine python, which has just pywin32 installed. 
> It would appear that either the entry_points.txt was recently removed, > or there was some other workaround for its absence which has recently > been removed; I have not had time to investigate, and in any case am > not that familiar with the distribute side of things. > Looks like running python.exe setup.py test is the right command to run the tests, isn't it? -- Best regards, Adam GROSZER -- Quote of the day: The more times you run over a dead cat, the flatter it gets. From jim at zope.com Tue Mar 19 16:57:19 2013 From: jim at zope.com (Jim Fulton) Date: Tue, 19 Mar 2013 11:57:19 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Mon, Mar 18, 2013 at 12:34 PM, Steve Dower wrote: > I just joined up after the various discussions at PyCon and wanted to say > hi. (If you were also there and want to put a face/voice to the name, I did > the Visual Studio demo at one of the lightning talks.) That was a very cool demo. > The main reason I want to get involved is the openly acknowledged lack of > Windows expertise that's available. I work at Microsoft and part of my job > description is to contribute code/testing/time/documentation/help/etc. to > CPython. (I can also do testing/time/help for other projects, but > copyrightable artifacts are more complicated and, for now, not okay with our > lawyers.) > > > > I expect I'll mainly be lurking until I can be useful, which is why I wanted > to start with this post. I'm pretty good with Windows, and I have direct > access to all the experts and internal mailing lists. So just shout out when > something comes up and I'll be happy to clarify or research an answer. At the packaging panel, an issue was raised regarding issues with 32-bit and > 64-bit windows packages. I don't remember the details. Were you there?
If not, maybe someone can describe the issue here. Also an idea, fwiw: it would be awesome if MS provided something like travis-ci that executed tests on windows for open-source projects hosted in github (and other places like bitbucket, which I prefer). Maybe projects would start sporting "Windows: passing" buttons. :) Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From donald at stufft.io Tue Mar 19 17:10:36 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 19 Mar 2013 12:10:36 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Mar 19, 2013, at 11:57 AM, Jim Fulton wrote: > On Mon, Mar 18, 2013 at 12:34 PM, Steve Dower wrote: >> I just joined up after the various discussions at PyCon and wanted to say >> hi. (If you were also there and want to put a face/voice to the name, I did >> the Visual Studio demo at one of the lightning talks.) > > That was a very cool demo. > >> The main reason I want to get involved is the openly acknowledged lack of >> Windows expertise that's available. I work at Microsoft and part of my job >> description is to contribute code/testing/time/documentation/help/etc. to >> CPython. (I can also do testing/time/help for other projects, but >> copyrightable artifacts are more complicated and, for now, not okay with our >> lawyers.) >> >> >> >> I expect I'll mainly be lurking until I can be useful, which is why I wanted >> to start with this post. I'm pretty good with Windows, and I have direct >> access to all the experts and internal mailing lists. So just shout out when >> something comes up and I'll be happy to clarify or research an answer. > > At the packaging panel, an issue was raised regarding issues with 32-bit and > 64-bit windows packages. I don't remember the details. Were you there? > If not, maybe someone can describe the issue here.
IIRC the issue is that the installers generated by distutils and friends are specific to either 32bit or 64bit windows. If I recall from my windows days 64bit windows puts its "I'm installed here" info in the registry under a different location than 32bit installers. So when you attempt to use a 32bit installer it looks in the 32bit location, finds nothing and claims Python isn't installed. This is coupled with the fact that if people publish installers at all for Windows it's typically 32bit only. I could of course be remembering wrong :) > > Also an idea, fwiw: it would be awesome if MS provided something like > travis-ci that executed tests on windows for open-source projects > hosted in github (and other places like bitbucket, which I prefer). > Maybe projects would start sporting "Windows: passing" buttons. :) Especially if this worked _with_ travis :) > > Jim > > -- > Jim Fulton > http://www.linkedin.com/in/jimfulton > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Steve.Dower at microsoft.com Tue Mar 19 17:21:23 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 19 Mar 2013 16:21:23 +0000 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> > From: Jim Fulton > On Mon, Mar 18, 2013 at 12:34 PM, Steve Dower > wrote: > > I just joined up after the various discussions at PyCon and wanted to
(If you were also there and want to put a face/voice to the > > name, I did the Visual Studio demo at one of the lightning talks.) > > That was a very cool demo. Thanks! > > The main reason I want to get involved is the openly acknowledged lack > > of Windows expertise that's available. I work at Microsoft and part of > > my job description is to contribute > > code/testing/time/documentation/help/etc. to CPython. (I can also do > > testing/time/help for other projects, but copyrightable artifacts are > > more complicated and, for now, not okay with our > > lawyers.) > > > > I expect I'll mainly be lurking until I can be useful, which is why I > > wanted to start with this post. I'm pretty good with Windows, and I > > have direct access to all the experts and internal mailing lists. So > > just shout out when something comes up and I'll be happy to clarify or > research an answer. > > At the packaging panel, an issue was raised regarding issues with 32-bit and > 64-bit windows packages. I don't remember the details. Were you there? > If not, maybe someone can describe the issue here. As I understand, the issue is the same as between different versions of Python and comes down to not being able to assume a compiler on Windows machines. It's easy to make a source file that will compile for any ABI and platform, but distributing binaries requires each one to be built separately. This doesn't have to be an onerous task - it can be scripted quite easily once you have all the required compilers - but it does take more effort than simply sharing a source file. Another issue is the CPython installer itself is very biased towards only having one of 32/64 and not both, despite having 4 possible configurations (excluding virtualenv and xcopy installs). The default installer settings will try and put 32 and 64 in the same folder, which is easily solved, but the registration information also goes into the same location. 
On Windows Vista and later (possibly XP, but I'm not going to promise that) there is automatic redirection for 32/64 that separates it, so that an installer will find the right path, but the per-user installs don't get this and installing both 32 and 64-bit versions will simply overwrite each other. As a result, it's hard for an MSI installer to find the right target version, and because it contains specific binaries finding the right version is essential. (I know so much about this because an IDE also has to find the versions, and there are simply some situations where it is impossible.) > Also an idea, fwiw: it would be awesome if MS provided something like > travis-ci that executed tests on windows for open-source projects hosted in > github (and other places like bitbucket, which I prefer). > Maybe projects would start sporting "Windows: passing" buttons. :) Bitbucket is starting to get some love here, and we've been pushing to get Mercurial on equal standing with Git internally. Right now, our small Python team isn't influential enough to get a commitment to a testing service, but there's absolutely no reason why one can't be set up with Windows VMs on any of the cloud services out there (pip is already using AWS for this). Funding from the PSF may be easier than funding from MS, though I've never tried to get funding from the PSF before so I could be wrong :) > Jim > > -- > Jim Fulton > http://www.linkedin.com/in/jimfulton > From ncoghlan at gmail.com Tue Mar 19 18:02:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Mar 2013 10:02:42 -0700 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 3:04 PM, Nick Coghlan wrote: > The second part has to do with a possible way to organise the > versioned installs: group them by the initial fragment of the version > number according to semantic versioning. 
For example, define a > "versioned-packages" directory that sits adjacent to "site-packages". > When doing the parallel install of CherryPy2 the actual *code* would > be installed into "versioned-packages/2/", with the cherrypy2.pth file > pointing to that directory. For 0.x releases, there would be a > directory per minor version, while for higher releases, there would > only be a directory per major version. Jason pointed out this wouldn't actually work, since you might have spurious version conflicts in this model (e.g. if you require v2.x of one dependency, but v3.x of another). So it would need to be 1 directory per parallel installed versioned package. The "but what about long sys.paths?" problem can be dealt with as a performance issue for the import system. Cheers, Nick. > > The nice thing though is that Python wouldn't actually care about the > actual layout of the installed versions, so long as the *.pth files in > the dist-info directories described the mapping correctly. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jim at zope.com Tue Mar 19 18:22:20 2013 From: jim at zope.com (Jim Fulton) Date: Tue, 19 Mar 2013 13:22:20 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 6:04 PM, Nick Coghlan wrote: > pkg_resources.requires() is our only current solution for parallel > installation of incompatible versions. Well, one of them. Buildout is another. At a lower level, self-contained things (eggs, wheels, jars, Mac .app directories) is the solution. requires() and buildout are just higher-level applications. 
Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From pje at telecommunity.com Tue Mar 19 18:54:37 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 19 Mar 2013 13:54:37 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: <51482A81.9060904@gmail.com> References: <7E79234E600438479EC119BD241B48D63FD1FAF8@CH1PRD0611MB432.namprd06.prod.outlook.com> <514192E3.6060001@gmail.com> <51482A81.9060904@gmail.com> Message-ID: On Tue, Mar 19, 2013 at 5:06 AM, Adam GROSZER wrote: > Looks like running > python.exe setup.py test > is the right command to run the tests, isn't it? It certainly should be. As I said, I'm not familiar with the distribute side, so maybe Jason or somebody else can lend a hand here. I know the problem doesn't exist in setuptools now, and it won't be present post-merge either, because we're reverting all changes on the distribute side that introduced bugs relative to setuptools. But short of somebody just sticking a valid setuptools.egg-info/entry_points.txt (and PKG-INFO) into distribute as a stopgap, I've got nothing to suggest for fixing it on the distribute side. From richard at python.org Tue Mar 19 19:04:25 2013 From: richard at python.org (Richard Jones) Date: Tue, 19 Mar 2013 11:04:25 -0700 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation Message-ID: Hi all, I present for your deliberation a draft PEP for the inclusion of a pip bootstrap program in Python 3.4. Discussion of this PEP should remain here on the distutils SIG. The PEP is revision controlled in my bitbucket account https://bitbucket.org/r1chardj0n3s/pypi-pep (this is also where I'm intending to develop the implementation.) 
Richard PEP: XXXX Title: Inclusion of pip bootstrap in Python installation Version: Last-Modified: Author: Richard Jones BDFL-Delegate: Nick Coghlan Discussions-To: Status: Draft Type: Standards Track Created: 18-Mar-2013 Python-Version: 3.4 Post-History: 19-Mar-2013 Abstract ======== This PEP proposes the inclusion of a pip bootstrap executable in the Python installation to simplify the use of 3rd-party modules by Python users. This PEP does not propose to include the pip implementation in the Python standard library. Nor does it propose to implement any package management or installation mechanisms beyond those provided by PEP 427 ("The Wheel Binary Package Format 1.0") and TODO distlib PEP. Rationale ========= Currently the user story for installing 3rd-party Python modules is not as simple as it could be. It requires that all 3rd-party modules inform the user of how to install the installer, typically via a link to the installer. That link may be out of date or the steps required to perform the install of the installer may be enough of a roadblock to prevent the user from further progress. Large Python projects which emphasise a low barrier to entry have shied away from depending on third party packages because of the introduction of this potential stumbling block for new users. With the inclusion of the package installer command in the standard Python installation the barrier to installing additional software is considerably reduced. It is hoped that this will therefore increase the likelihood that Python projects will reuse third party software. It is also hoped that this reduces the number of proposals to include more and more software in the Python standard library, and therefore that more popular Python software is more easily upgradeable beyond requiring Python installation upgrades. Proposal ======== The Python install includes an executable called "pip" that attempts to import pip machinery. If it can then the pip command proceeds as normal.
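The try-import-then-bootstrap behaviour of the proposed "pip" executable could be sketched as follows. The helper names are hypothetical and the real download-and-verify step is elided; only the control flow is taken from the proposal:

```python
def main(argv):
    """Sketch of the proposed "pip" bootstrap command."""
    try:
        import pip  # full implementation already installed?
    except ImportError:
        print("upgrading pip")  # notification wording from the PEP
        _bootstrap_pip()        # fetch the pip wheel, install per-user (PEP 370)
        import pip              # the real implementation is now importable
        print("pip has been upgraded")
    # Either way, hand the command line to the real pip implementation.
    return pip.main(argv)

def _bootstrap_pip():
    """Placeholder for the real step: download the pip wheel over HTTPS
    and install it with the distlib wheel machinery, verifying the
    wheel's embedded signature."""
    raise NotImplementedError("network bootstrap not part of this sketch")
```

Once the import succeeds, the bootstrap adds no overhead beyond the initial try/except.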
If it cannot it will bootstrap pip by downloading the pip implementation wheel file. Once installed, the pip command proceeds as normal. A bootstrap is used in place of the full pip code so that we don't have to bundle pip, and so that the install tool is upgradeable outside of the regular Python upgrade timeframe and processes. To avoid issues with sudo we will have the bootstrap default to installing the pip implementation to the per-user site-packages directory defined in PEP 370 and implemented in Python 2.6/3.0. Since we avoid installing to the system Python we also avoid conflicting with any other packaging system (on Linux systems, for example.) If the user is inside a virtual environment (TODO PEP ref) then the pip implementation will be installed into that virtual environment. The bootstrapping process will proceed as follows: 1. The user system has Python (3.4+) installed. In the "scripts" directory of the Python installation there is the bootstrap script called "pip". 2. The user will invoke a pip command, typically "pip install <package>", for example "pip install Django". 3. The bootstrap script will attempt to import the pip implementation. If this succeeds, the pip command is processed normally. 4. On failing to import the pip implementation the bootstrap notifies the user that it is "upgrading pip" and contacts PyPI to obtain the latest pip wheel file (see PEP 427.) 5. Upon downloading the file it is installed using the distlib installation machinery for wheel packages. Upon completing the installation the user is notified that "pip has been upgraded." TODO how is it verified? 6. The pip tool may now import the pip implementation and continues to process the requested user command normally. Users may be running in an environment which cannot access the public Internet and are relying solely on a local package repository. They would use the "-i" (Base URL of Python Package Index) argument to the "pip install" command. This use case will be handled by: 1.
Recognising the command-line arguments that specify alternative or additional locations to discover packages and attempting to download the package from those locations. 2. If the package is not found there then we attempt to download it using the standard "https://pypi.python.org/pypi/simple/pip" index. 3. If that also fails, for any reason, we indicate to the user the operation we were attempting, the reason for failure (if we know it) and display further instructions for downloading and installing the file manually. Manual installation of the pip implementation will be supported through the manual download of the wheel file and "pip install <downloaded wheel file>". This installation will not perform standard pip installation steps of saving the file to a cache directory or updating any local database of installed files. The download of the pip implementation install file should be performed securely. The transport from pypi.python.org will be done over HTTPS but the CA certificate check will most likely not be performed. Therefore we will utilise the embedded signature support in the wheel format to validate the downloaded file. Beyond those arguments controlling index location and download options, the "pip" bootstrap command may support further standard pip options for verbosity, quietness and logging. The "--no-install" option to the "pip" command will not affect the bootstrapping process. An additional new Python package will be proposed, "pypublish", which will be a tool for publishing packages to PyPI. It would replace the current "python setup.py register" and "python setup.py upload" distutils commands. Again because of the measured Python release cycle and extensive existing Python installations these commands are difficult to bugfix and extend. Additionally it is desired that the "register" and "upload" commands be able to be performed over HTTPS with certificate validation.
Since shipping CA certificate keychains with Python is not really feasible (updating the keychain is quite difficult to manage) it is desirable that those commands, and the accompanying keychain, be made installable and upgradeable outside of Python itself. Implementation ============== TBD Risks ===== The Fedora variant of Linux has had a separate program called "pip" (a Perl package installer) available for install for some time. The current Python "pip" program is installed as "pip-python". It is hoped that the Fedora community will resolve this issue by renaming the Perl installer. Currently pip depends upon setuptools functionality. It is intended that before Python 3.4 is shipped the required functionality will be present in Python's standard library as the distlib module, and that pip would be modified to use that functionality when present. TODO PEP reference for distlib References ========== None, so far, beyond the PEPs. Acknowledgments =============== Nick Coghlan for his thoughts on the proposal and dealing with the Red Hat issue. Jannis Leidel and Carl Meyer for their thoughts. Copyright ========= This document has been placed in the public domain. From pje at telecommunity.com Tue Mar 19 19:06:42 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 19 Mar 2013 14:06:42 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 6:04 PM, Nick Coghlan wrote: > pkg_resources.requires() is our only current solution for parallel > installation of incompatible versions. This can be made to work and is > a lot better than the nothing we had before it was created, but also > has quite a few issues (and it can be a nightmare to debug when it > goes wrong).
> > Based on the exchanges with Mark McLoughlin the other week, and > chatting to Matthias Klose here at the PyCon US sprints, I think I > have a design that will let us support parallel installs in a way that > builds on existing standards, while behaving more consistently in edge > cases and without making sys.path ridiculously long even in systems > with large numbers of potentially incompatible dependencies. > > The core of this proposal is to create an updated version of the > installation database format that defines semantics for *.pth files > inside .dist-info directories. > > Specifically, whereas *.pth files directly in site-packages are > processed automatically when Python starts up, those inside dist-info > directories would be processed only when explicitly requested > (probably through a new distlib API). The processing of the *.pth file > would insert it into the path immediately before the path entry > containing the .dist-info directory (this is to avoid an issue with > the pkg_resources insert-at-the-front-of-sys.path behaviour where > system packages can end up shadowing those from a local source > checkout, without running into the issue with > append-to-the-end-of-sys.path where a specifically requested version > is shadowed by a globally installed version) > > To use CherryPy2 and CherryPy3 on Fedora as an example, what this > would allow is for CherryPy3 to be installed normally (i.e. directly > in site-packages), while CherryPy2 would be installed as a split > install, with the .dist-info going into site-packages and the actual > package going somewhere else (more on that below). A cherrypy2.pth > file inside the dist-info directory would reference the external > location where cherrypy 2.x can be found. 
> > To use this at runtime, you would do something like: > > distlib.some_new_requires_api("CherryPy (2.2)") > import cherrypy > > The other part of this question is how to avoid the potential > explosion of one sys.path entry per dependency. The first part of that > is that for cases where there is no incompatible version installed, > there won't be a *.pth file, and hence no extra sys.path entry (the > module/package will just be installed directly into site-packages as > usual). > > The second part has to do with a possible way to organise the > versioned installs: group them by the initial fragment of the version > number according to semantic versioning. For example, define a > "versioned-packages" directory that sits adjacent to "site-packages". > When doing the parallel install of CherryPy2 the actual *code* would > be installed into "versioned-packages/2/", with the cherrypy2.pth file > pointing to that directory. For 0.x releases, there would be a > directory per minor version, while for higher releases, there would > only be a directory per major version. > > The nice thing though is that Python wouldn't actually care about the > actual layout of the installed versions, so long as the *.pth files in > the dist-info directories described the mapping correctly. Could you perhaps spell out why this is better than just dropping .whl files (or unpacked directories) into site-packages or equivalent? Also, one thing that actually confuses me about this proposal is that it sounds like you are saying you'd have two CherryPy.dist-info directories in site-packages, which sounds broken to me; the whole point of the existing protocol for .dist-info was that it allowed you to determine the importable versions from a single listdir(). Your approach would break that feature, because you'd have to: 1. Read each .dist-info directory to find .pth files 2. Open and read all the .pth files 3. 
Compare the .pth file contents with sys.path to find out what is actually *on* sys.path This is a lot more complexity and I/O overhead than PEP 376 and its antecedents in pkg_resources et al. In contrast, if you use .whl files or directories, you can both determine the available versions *and* the active versions from a single directory read. And on everything but Windows, those could be symlinks to the target location rather than an actual file or directory, thus giving you the same kind of layout flexibility as what you've proposed. (Or, if you want a solution that works the same across platforms, just re-invent .egg-link files, which are basically a super-symlink anyway.) From dholth at gmail.com Tue Mar 19 19:07:52 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 14:07:52 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: FWIW I always thought the long sys.path problem was a bug that could be solved by improving sys.path.__repr__() On Tue, Mar 19, 2013 at 2:06 PM, PJ Eby wrote: > On Mon, Mar 18, 2013 at 6:04 PM, Nick Coghlan wrote: >> pkg_resources.requires() is our only current solution for parallel >> installation of incompatible versions. This can be made to work and is >> a lot better than the nothing we had before it was created, but also >> has quite a few issues (and it can be a nightmare to debug when it >> goes wrong). >> >> Based on the exchanges with Mark McLoughlin the other week, and >> chatting to Matthias Klose here at the PyCon US sprints, I think I >> have a design that will let us support parallel installs in a way that >> builds on existing standards, while behaving more consistently in edge >> cases and without making sys.path ridiculously long even in systems >> with large numbers of potentially incompatible dependencies. 
>> >> The core of this proposal is to create an updated version of the >> installation database format that defines semantics for *.pth files >> inside .dist-info directories. >> >> Specifically, whereas *.pth files directly in site-packages are >> processed automatically when Python starts up, those inside dist-info >> directories would be processed only when explicitly requested >> (probably through a new distlib API). The processing of the *.pth file >> would insert it into the path immediately before the path entry >> containing the .dist-info directory (this is to avoid an issue with >> the pkg_resources insert-at-the-front-of-sys.path behaviour where >> system packages can end up shadowing those from a local source >> checkout, without running into the issue with >> append-to-the-end-of-sys.path where a specifically requested version >> is shadowed by a globally installed version) >> >> To use CherryPy2 and CherryPy3 on Fedora as an example, what this >> would allow is for CherryPy3 to be installed normally (i.e. directly >> in site-packages), while CherryPy2 would be installed as a split >> install, with the .dist-info going into site-packages and the actual >> package going somewhere else (more on that below). A cherrypy2.pth >> file inside the dist-info directory would reference the external >> location where cherrypy 2.x can be found. >> >> To use this at runtime, you would do something like: >> >> distlib.some_new_requires_api("CherryPy (2.2)") >> import cherrypy >> >> The other part of this question is how to avoid the potential >> explosion of one sys.path entry per dependency. The first part of that >> is that for cases where there is no incompatible version installed, >> there won't be a *.pth file, and hence no extra sys.path entry (the >> module/package will just be installed directly into site-packages as >> usual). 
>> >> The second part has to do with a possible way to organise the >> versioned installs: group them by the initial fragment of the version >> number according to semantic versioning. For example, define a >> "versioned-packages" directory that sits adjacent to "site-packages". >> When doing the parallel install of CherryPy2 the actual *code* would >> be installed into "versioned-packages/2/", with the cherrypy2.pth file >> pointing to that directory. For 0.x releases, there would be a >> directory per minor version, while for higher releases, there would >> only be a directory per major version. >> >> The nice thing though is that Python wouldn't actually care about the >> actual layout of the installed versions, so long as the *.pth files in >> the dist-info directories described the mapping correctly. > > Could you perhaps spell out why this is better than just dropping .whl > files (or unpacked directories) into site-packages or equivalent? > > Also, one thing that actually confuses me about this proposal is that > it sounds like you are saying you'd have two CherryPy.dist-info > directories in site-packages, which sounds broken to me; the whole > point of the existing protocol for .dist-info was that it allowed you > to determine the importable versions from a single listdir(). Your > approach would break that feature, because you'd have to: > > 1. Read each .dist-info directory to find .pth files > 2. Open and read all the .pth files > 3. Compare the .pth file contents with sys.path to find out what is > actually *on* sys.path > > This is a lot more complexity and I/O overhead than PEP 376 and its > antecedents in pkg_resources et al. > > In contrast, if you use .whl files or directories, you can both > determine the available versions *and* the active versions from a > single directory read. 
And on everything but Windows, those could be > symlinks to the target location rather than an actual file or > directory, thus giving you the same kind of layout flexibility as what > you've proposed. > > (Or, if you want a solution that works the same across platforms, just > re-invent .egg-link files, which are basically a super-symlink > anyway.) > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From pje at telecommunity.com Tue Mar 19 19:16:27 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 19 Mar 2013 14:16:27 -0400 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 2:04 PM, Richard Jones wrote: > The Fedora variant of Linux has had a separate program called "pip" (a Perl > package installer) available for install for some time. The current Python "pip" > program is installed as "pip-python". It is hoped that the Fedora community will > resolve this issue by renaming the Perl installer. A modest suggestion: renaming pip to "pypi" (Python Package Installer) will address this and other issues, especially if the 'pypi' command grows register/publish functions as well. Yes, it puts pip in a privileged position, but really it's just going to be acknowledging the status quo. As soon as pip can handle multi-version installs, binaries, and plugin scenarios as well as easy_install can, there will be no reason to keep easy_install around or bother upgrading it to do TUF or whatever else comes down the pike. And I'm not aware of any other competition (buildout isn't really aimed at the same space), so I don't think there's any reason not to just bless "pip" as *the* "pypi" tool. (I suppose there is a small possibility of confusion between the tool and the site, but then again, if you look at e.g. 
PHP's pear, the command line tools and repository have the same name. And Perl has a "cpan shell" for accessing CPAN, etc. I don't recall anybody in those communities being confused by those distinctions.) From pje at telecommunity.com Tue Mar 19 19:30:36 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 19 Mar 2013 14:30:36 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 1:02 PM, Nick Coghlan wrote: > The "but what > about long sys.paths?" problem can be dealt with as a performance > issue for the import system. And already has been, actually. ;-) In addition to the changes made in the import system for 3.3, there's another improvement possible, relative to today's easy_install-based path lengths. Currently, easy_install dumps everything into subdirectories, so sys.path is *always* long for the default case, and *just* as long for non-default apps. However, if instead of just listing default versions in easy-install.pth, those versions were actually installed PEP 376-style, and only the non-default versions had to be added to sys.path for the apps that need them, then you'd see a dramatic shortening of sys.path for *all* apps and scenarios. Today, if you have N default libraries and an average of M non-default libraries per app, plus C as the constant minimum sys.path length, then sys.path is N+C by default, and N+C+M for each app that uses non-default libraries. But, under a hybrid scheme of PEP 376 for defaults plus an extension for non-defaults, the default sys.path length is C, and C+M for the apps needing non-default versions. In other words, N disappears -- and "N" is usually a lot bigger than C or M. TBH, if I had access to the time machine right now, easy_install would have worked this way from the start, instead of using easy-install.pth as a version-switching mechanism. 
(The main reason it didn't work out that way to begin with, is because the .egg-info concept wasn't invented until much later in easy_install's development.) From ncoghlan at gmail.com Tue Mar 19 19:58:29 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Mar 2013 11:58:29 -0700 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Tue, Mar 19, 2013 at 9:21 AM, Steve Dower wrote: > Bitbucket is starting to get some love here, and we've been pushing to get Mercurial on equal standing with Git internally. Right now, our small Python team isn't influential enough to get a commitment to a testing service, but there's absolutely no reason why one can't be set up with Windows VMs on any of the cloud services out there (pip is already using AWS for this). Funding from the PSF may be easier than funding from MS, though I've never tried to get funding from the PSF before so I could be wrong :) I use Shining Panda CI for my open source package testing, and they do offer Windows testing under Jenkins. However the free accounts aren't able to access the Windows buildbots - you need to pay for the Windows server time (which seems fair enough to me, given the difference in licensing costs between Debian and Windows, which are the two kinds of test environment they offer). The PSF offers funding for various things like Meetup.com fees for user groups, as well as funding for development sprints, so they could be interested in a program offering automated Windows testing through one of the CI services. The way to do that would be to put together a proposal for the board to consider, with a suggested budget and a mechanism for projects to apply for funding. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Mar 19 20:05:00 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 15:05:00 -0400 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 2:04 PM, Richard Jones wrote: > Hi all, > > I present for your deliberation a draft PEP for the inclusion of a pip > bootstrap program in Python 3.4. Discussion of this PEP should remain > here on the distutils SIG. > > The PEP is revision controlled in my bitbucket account > https://bitbucket.org/r1chardj0n3s/pypi-pep (this is also where I'm > intending to develop the implementation.) > > > Richard > > PEP: XXXX > Title: Inclusion of pip bootstrap in Python installation > Version: > Last-Modified: > Author: Richard Jones > BDFL-Delegate: Nick Coghlan > Discussions-To: > Status: Draft > Type: Standards Track > Created: 18-Mar-2013 > Python-Version: 3.4 > Post-History: 19-Mar-2013 > > Abstract > ======== > > This PEP proposes the inclusion of a pip boostrap executable in the Python > installation to simplify the use of 3rd-party modules by Python users. > > This PEP does not propose to include the pip implementation in the Python > standard library. Nor does it propose to implement any package management or > installation mechanisms beyond those provided by PEPs 470 ("The Wheel Binary > Package Format 1.0") and TODO distlib PEP. > > > Rationale > ========= > > Currently the user story for installing 3rd-party Python modules is > not as simple as it could be. It requires that all 3rd-party modules inform > the user of how to install the installer, typically via a link to the > installer. That link may be out of date or the steps required to perform the > install of the installer may be enough of a roadblock to prevent the user from > further progress. 
> > Large Python projects which emphasise a low barrier to entry have shied away > from depending on third party packages because of the introduction of this > potential stumbling block for new users. > > With the inclusion of the package installer command in the standard Python > installation the barrier to installing additional software is considerably > reduced. It is hoped that this will therefore increase the likelihood that > Python projects will reuse third party software. > > It is also hoped that this is reduces the number of proposals to include > more and more software in the Python standard library, and therefore that > more popular Python software is more easily upgradeable beyond requiring > Python installation upgrades. > > > Proposal > ======== > > Python install includes an executable called "pip" that attempts to import pip > machinery. If it can then the pip command proceeds as normal. If it cannot it > will bootstrap pip by downloading the pip implementation wheel file. > Once installed, the pip command proceeds as normal. > > A boostrap is used in the place of a the full pip code so that we don't have > to bundle pip and also the install tool is upgradeable outside of the regular > Python upgrade timeframe and processes. > > To avoid issues with sudo we will have the bootstrap default to installing the > pip implementation to the per-user site-packages directory defined in PEP 370 > and implemented in Python 2.6/3.0. Since we avoid installing to the system > Python we also avoid conflicting with any other packaging system (on Linux > systems, for example.) If the user is inside a virtual environment (TODO PEP > ref) then the pip implementation will be installed into that virtual > environment. > > The bootstrapping process will proceed as follows: > > 1. The user system has Python (3.4+) installed. In the "scripts" directory of > the Python installation there is the bootstrap script called "pip". > 2. 
The user will invoke a pip command, typically "pip install ", for > example "pip install Django". > 3. The boostrap script will attempt to import the pip implementation. If this > succeeds, the pip command is processed normally. > 4. On failing to import the pip implementation the bootstrap notifies the user > that it is "upgrading pip" and contacts PyPI to obtain the latest download > wheel file (see PEP 427.) > 5. Upon downloading the file it is installed using the distlib installation > machinery for wheel packages. Upon completing the installation the user > is notified that "pip has been upgraded." TODO how is it verified? > 6. The pip tool may now import the pip implementation and continues to process > the requested user command normally. > > Users may be running in an environment which cannot access the public Internet > and are relying solely on a local package repository. They would use the "-i" > (Base URL of Python Package Index) argument to the "pip install" command. This > use case will be handled by: > > 1. Recognising the command-line arguments that specify alternative or additional > locations to discover packages and attempting to download the package > from those locations. > 2. If the package is not found there then we attempt to donwload it using > the standard "https://pypi.python.org/pypi/simple/pip" index. > 3. If that also fails, for any reason, we indicate to the user the operation > we were attempting, the reason for failure (if we know it) and display > further instructions for downloading and installing the file manually. > > Manual installation of the pip implementation will be supported through the > manual download of the wheel file and "pip install ". > > This installation will not perform standard pip installation steps of saving the > file to a cache directory or updating any local database of installed files. > > The download of the pip implementation install file should be performed > securely. 
The transport from pypi.python.org will be done over HTTPS but the CA > certificate check will most likely not be performed. Therefore we will > utilise the embedded signature support in the wheel format to validate the > downloaded file. > > Beyond those arguments controlling index location and download options, the > "pip" boostrap command may support further standard pip options for verbosity, > quietness and logging. > > The "--no-install" option to the "pip" command will not affect the bootstrapping > process. > > An additional new Python package will be proposed, "pypublish", which will be a > tool for publishing packages to PyPI. It would replace the current "python > setup.py register" and "python setup.py upload" distutils commands. Again > because of the measured Python release cycle and extensive existing Python > installations these commands are difficult to bugfix and extend. Additionally > it is desired that the "register" and "upload" commands be able to be performed > over HTTPS with certificate validation. Since shipping CA certificate keychains > with Python is not really feasible (updating the keychain is quite difficult to > manage) it is desirable that those commands, and the accompanying keychain, be > made installable and upgradeable outside of Python itself. > > > Implementation > ============== > > TBD > > > Risks > ===== > > The Fedora variant of Linux has had a separate program called "pip" (a Perl > package installer) available for install for some time. The current Python "pip" > program is installed as "pip-python". It is hoped that the Fedora community will > resolve this issue by renaming the Perl installer. > > Currently pip depends upon setuptools functionality. It is intended that before > Python 3.4 is shipped that the required functionlity will be present in Python's > standard library as the distlib module, and that pip would be modified to use > that functionality when present. 
TODO PEP reference for distlib > > > References > ========== > > None, so far, beyond the PEPs. > > > Acknowledgments > =============== > > Nick Coghlan for his thoughts on the proposal and dealing with the Red Hat > issue. > > Jannis Leidel and Carl Meyer for their thoughts. > > > Copyright > ========= > > This document has been placed in the public domain. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ++1 on including a bootstrap wheel-only installer for getting pip -1 on giving it the same name -1 on scripts that start with "py". "python -m something"? Wheel's builtin signature checking is both controversial and awesome. Its most important properties are that it is key centric (signatures ONLY mean the signer possessed the private signing key and do not by themselves assert any other hard-to-verify information like e-mail, timestamps etc.); keys are short and referenced literally; it has a simple ~500 line pure-Python implementation including the crypto math with optional C speedups. The reference uses a highly convenient elliptic curve scheme called Ed25519 developed by respected cryptographers Bernstein et al. I would be OK with trusting the cheeseshop to make all decisions needed for "get me the newest version of pip" to simplify the bootstrap installer and to trust that version based only on the integrity of the SSL connection. Would it be feasible/helpful to include a short root CA list "the 3 that pypi is allowed to buy certificates from" for this purpose? It's also possible to use the downloaded archive to initiate a more complete install. Wheel's own installer can do this but it may be too circular for some: "python wheel-1.0.0-py2.py3-none-any.whl/wheel install wheel-1.0.0-py2.py3-none-any.whl". With some form of distlib in the standard library this might not be needed. 
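For concreteness, the "try import, else fetch" flow that the draft PEP and the bootstrap discussion above describe reduces to roughly the following sketch. This is not the PEP's actual implementation: the `_download_wheel` and `_install_wheel` helpers are hypothetical placeholders for the network and distlib-based installation steps, and the verification question is deliberately left out.

```python
# A minimal sketch of the proposed bootstrap: run the real pip if it is
# importable, otherwise fetch and install it first. The _download_wheel
# and _install_wheel helpers are hypothetical, not part of any shipped tool.
import importlib
import importlib.util


def needs_bootstrap(module_name="pip"):
    """True when the named implementation is absent and must be fetched."""
    return importlib.util.find_spec(module_name) is None


def bootstrap(argv, module_name="pip"):
    if needs_bootstrap(module_name):
        print("upgrading pip")
        # Network steps elided in this sketch; per the draft PEP, any
        # user-specified index is tried first, then the default simple
        # index, installing into the per-user site-packages (PEP 370)
        # to avoid sudo and system package managers:
        #   wheel = _download_wheel("https://pypi.python.org/pypi/simple/pip")
        #   _install_wheel(wheel, target=site.getusersitepackages())
        raise RuntimeError("download not implemented in this sketch")
    pip = importlib.import_module(module_name)
    return pip.main(argv)
```

Here `pip.main(argv)` stands in for whatever entry point the installed pip exposes; a real bootstrap would also need the signature or SSL verification step being debated in this thread.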
It would be a weekend project to include a vendorized version of pkg_resources with pip. This version of pip would be able to do everything pip does now, except install from sdist. The hardest part would be complaining in all the right places when setuptools wasn't installed. We will want something to replace setup.py register / upload / ... and pip definitely isn't the place for it. I'm not too worried about this since you will be able to download it once we have the bootstrap install feature in place. From donald at stufft.io Tue Mar 19 20:11:49 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 19 Mar 2013 15:11:49 -0400 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: Message-ID: <055558DA-0520-43CE-B691-0EA3B40EC0C7@stufft.io> On Mar 19, 2013, at 2:04 PM, Richard Jones wrote: > Hi all, > > I present for your deliberation a draft PEP for the inclusion of a pip > bootstrap program in Python 3.4. Discussion of this PEP should remain > here on the distutils SIG. > > The PEP is revision controlled in my bitbucket account > https://bitbucket.org/r1chardj0n3s/pypi-pep (this is also where I'm > intending to develop the implementation.) > > > Richard > > PEP: XXXX > Title: Inclusion of pip bootstrap in Python installation > Version: > Last-Modified: > Author: Richard Jones > BDFL-Delegate: Nick Coghlan > Discussions-To: > Status: Draft > Type: Standards Track > Created: 18-Mar-2013 > Python-Version: 3.4 > Post-History: 19-Mar-2013 > > Abstract > ======== > > This PEP proposes the inclusion of a pip boostrap executable in the Python > installation to simplify the use of 3rd-party modules by Python users. > > This PEP does not propose to include the pip implementation in the Python > standard library. Nor does it propose to implement any package management or > installation mechanisms beyond those provided by PEPs 470 ("The Wheel Binary > Package Format 1.0") and TODO distlib PEP. 
> > > Rationale > ========= > > Currently the user story for installing 3rd-party Python modules is > not as simple as it could be. It requires that all 3rd-party modules inform > the user of how to install the installer, typically via a link to the > installer. That link may be out of date or the steps required to perform the > install of the installer may be enough of a roadblock to prevent the user from > further progress. > > Large Python projects which emphasise a low barrier to entry have shied away > from depending on third party packages because of the introduction of this > potential stumbling block for new users. > > With the inclusion of the package installer command in the standard Python > installation the barrier to installing additional software is considerably > reduced. It is hoped that this will therefore increase the likelihood that > Python projects will reuse third party software. > > It is also hoped that this is reduces the number of proposals to include > more and more software in the Python standard library, and therefore that > more popular Python software is more easily upgradeable beyond requiring > Python installation upgrades. > > > Proposal > ======== > > Python install includes an executable called "pip" that attempts to import pip > machinery. If it can then the pip command proceeds as normal. If it cannot it > will bootstrap pip by downloading the pip implementation wheel file. > Once installed, the pip command proceeds as normal. > > A boostrap is used in the place of a the full pip code so that we don't have > to bundle pip and also the install tool is upgradeable outside of the regular > Python upgrade timeframe and processes. > > To avoid issues with sudo we will have the bootstrap default to installing the > pip implementation to the per-user site-packages directory defined in PEP 370 > and implemented in Python 2.6/3.0. 
Since we avoid installing to the system > Python we also avoid conflicting with any other packaging system (on Linux > systems, for example.) If the user is inside a virtual environment (TODO PEP > ref) then the pip implementation will be installed into that virtual > environment. > > The bootstrapping process will proceed as follows: > > 1. The user system has Python (3.4+) installed. In the "scripts" directory of > the Python installation there is the bootstrap script called "pip". > 2. The user will invoke a pip command, typically "pip install ", for > example "pip install Django". > 3. The boostrap script will attempt to import the pip implementation. If this > succeeds, the pip command is processed normally. > 4. On failing to import the pip implementation the bootstrap notifies the user > that it is "upgrading pip" and contacts PyPI to obtain the latest download > wheel file (see PEP 427.) > 5. Upon downloading the file it is installed using the distlib installation > machinery for wheel packages. Upon completing the installation the user > is notified that "pip has been upgraded." TODO how is it verified? > 6. The pip tool may now import the pip implementation and continues to process > the requested user command normally. > > Users may be running in an environment which cannot access the public Internet > and are relying solely on a local package repository. They would use the "-i" > (Base URL of Python Package Index) argument to the "pip install" command. This > use case will be handled by: > > 1. Recognising the command-line arguments that specify alternative or additional > locations to discover packages and attempting to download the package > from those locations. > 2. If the package is not found there then we attempt to donwload it using > the standard "https://pypi.python.org/pypi/simple/pip" index. > 3. 
If that also fails, for any reason, we indicate to the user the operation
> we were attempting, the reason for failure (if we know it) and display
> further instructions for downloading and installing the file manually.
>
> Manual installation of the pip implementation will be supported through the
> manual download of the wheel file and "pip install ".
>
> This installation will not perform standard pip installation steps of saving the
> file to a cache directory or updating any local database of installed files.
>
> The download of the pip implementation install file should be performed
> securely. The transport from pypi.python.org will be done over HTTPS but the CA
> certificate check will most likely not be performed. Therefore we will
> utilise the embedded signature support in the wheel format to validate the
> downloaded file.

A major concern is how this will deal with key revocation: if the key used to sign the pip wheels is compromised, what is the path for this tool to revoke the key?

On the HTTPS side, I've been looking into this recently as far as Python goes. If OpenSSL is correctly configured (this is the case on Linux, and any Python compiled against the OS X OpenSSL) then `urllib.request.urlopen('https://pypi.python.org/', cadefault=True)` will do the right thing. This gets stickier in cases where OpenSSL _isn't_ configured with a default set of certificates (Windows, I'm assuming; Homebrew on OS X for sure); there this will cause a certificate error. It's possible a workable solution can be designed via SSL.

> Beyond those arguments controlling index location and download options, the
> "pip" boostrap command may support further standard pip options for verbosity,
> quietness and logging.
>
> The "--no-install" option to the "pip" command will not affect the bootstrapping
> process.
>
> An additional new Python package will be proposed, "pypublish", which will be a
> tool for publishing packages to PyPI.
It would replace the current "python > setup.py register" and "python setup.py upload" distutils commands. Again > because of the measured Python release cycle and extensive existing Python > installations these commands are difficult to bugfix and extend. Additionally > it is desired that the "register" and "upload" commands be able to be performed > over HTTPS with certificate validation. Since shipping CA certificate keychains > with Python is not really feasible (updating the keychain is quite difficult to > manage) it is desirable that those commands, and the accompanying keychain, be > made installable and upgradeable outside of Python itself. > > > Implementation > ============== > > TBD > > > Risks > ===== > > The Fedora variant of Linux has had a separate program called "pip" (a Perl > package installer) available for install for some time. The current Python "pip" > program is installed as "pip-python". It is hoped that the Fedora community will > resolve this issue by renaming the Perl installer. > > Currently pip depends upon setuptools functionality. It is intended that before > Python 3.4 is shipped that the required functionality will be present in Python's > standard library as the distlib module, and that pip would be modified to use > that functionality when present. TODO PEP reference for distlib > > > References > ========== > > None, so far, beyond the PEPs. > > > Acknowledgments > =============== > > Nick Coghlan for his thoughts on the proposal and dealing with the Red Hat > issue. > > Jannis Leidel and Carl Meyer for their thoughts. > > > Copyright > ========= > > This document has been placed in the public domain. 
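[Editor's note: the six bootstrap steps described in the draft reduce to a small piece of control flow. A rough sketch follows — the helper names `fetch_pip_wheel` and `install_wheel` are illustrative stand-ins for the distlib wheel machinery the draft mentions, not part of the proposal.]

```python
import importlib.util
import sys

def pip_available():
    # Step 3: can the real pip implementation be imported?
    return importlib.util.find_spec("pip") is not None

def bootstrap_and_run(argv):
    if not pip_available():
        print("upgrading pip", file=sys.stderr)   # step 4: notify the user
        wheel = fetch_pip_wheel()                 # hypothetical: download from PyPI
        install_wheel(wheel, user=True)           # step 5: per-user site dir (PEP 370)
        print("pip has been upgraded", file=sys.stderr)
    import pip                                    # step 6: now importable
    return pip.main(argv)
```

In a virtual environment the `user=True` default would instead target that environment, per the draft.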
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Mar 19 20:14:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Mar 2013 12:14:42 -0700 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 11:16 AM, PJ Eby wrote: > On Tue, Mar 19, 2013 at 2:04 PM, Richard Jones wrote: >> The Fedora variant of Linux has had a separate program called "pip" (a Perl >> package installer) available for install for some time. The current Python "pip" >> program is installed as "pip-python". It is hoped that the Fedora community will >> resolve this issue by renaming the Perl installer. > > A modest suggestion: renaming pip to "pypi" (Python Package Installer) > will address this and other issues, especially if the 'pypi' command > grows register/publish functions as well. Unfortunately, this would just make the confusion with pypy worse, as well as put the community through yet another name change. Persisting with the "pip" name seems to be the best of the available options (the only wrinkle is that Perl tool sitting in the Fedora repos, but as far as we can tell that's just an old package that even Perl people don't use) > Yes, it puts pip in a privileged position, but really it's just going > to be acknowledging the status quo. 
As soon as pip can handle > multi-version installs, binaries, and plugin scenarios as well as > easy_install can, there will be no reason to keep easy_install around > or bother upgrading it to do TUF or whatever else comes down the pike. > And I'm not aware of any other competition (buildout isn't really > aimed at the same space), so I don't think there's any reason not to > just bless "pip" as *the* "pypi" tool. Yep, that's where all this is going (except we'll be keeping the pip name). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From richard at python.org Tue Mar 19 20:36:09 2013 From: richard at python.org (Richard Jones) Date: Tue, 19 Mar 2013 12:36:09 -0700 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: Message-ID: On 19 March 2013 11:16, PJ Eby wrote: > On Tue, Mar 19, 2013 at 2:04 PM, Richard Jones wrote: >> The Fedora variant of Linux has had a separate program called "pip" (a Perl >> package installer) available for install for some time. The current Python "pip" >> program is installed as "pip-python". It is hoped that the Fedora community will >> resolve this issue by renaming the Perl installer. > > A modest suggestion: renaming pip to "pypi" (Python Package Installer) > will address this and other issues, especially if the 'pypi' command > grows register/publish functions as well. We did discuss using a name other than pip - the main reason being that Fedora Linux has the Perl "pip" tool. We are hoping that will be resolved in our favour. There's just too much momentum in the community behind the "pip" name. If it does not work out then we may have to revisit the issue. If we do then it'd probably be "pyinstall" or something equally elegant <0.5 wink> > (I suppose there is a small possibility of confusion between the tool > and the site, but then again, if you look at e.g. PHP's pear, the > command line tools and repository have the same name. 
And Perl has a > "cpan shell" for accessing CPAN, etc. I don't recall anybody in those > communities being confused by those distinctions.) Interestingly when I asked my Perl-using friends about "pip" the Perl tool, they indicated that these days they use "cpanm" by preference. Well, first they said "pip - isn't that the Python installer?" :-) Richard From dholth at gmail.com Tue Mar 19 20:37:03 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 15:37:03 -0400 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: <055558DA-0520-43CE-B691-0EA3B40EC0C7@stufft.io> References: <055558DA-0520-43CE-B691-0EA3B40EC0C7@stufft.io> Message-ID: On Tue, Mar 19, 2013 at 3:11 PM, Donald Stufft wrote: > > On Mar 19, 2013, at 2:04 PM, Richard Jones wrote: > >> Hi all, >> >> I present for your deliberation a draft PEP for the inclusion of a pip >> bootstrap program in Python 3.4. Discussion of this PEP should remain >> here on the distutils SIG. >> >> The PEP is revision controlled in my bitbucket account >> https://bitbucket.org/r1chardj0n3s/pypi-pep (this is also where I'm >> intending to develop the implementation.) >> >> >> Richard >> >> PEP: XXXX >> Title: Inclusion of pip bootstrap in Python installation >> Version: >> Last-Modified: >> Author: Richard Jones >> BDFL-Delegate: Nick Coghlan >> Discussions-To: >> Status: Draft >> Type: Standards Track >> Created: 18-Mar-2013 >> Python-Version: 3.4 >> Post-History: 19-Mar-2013 >> >> Abstract >> ======== >> >> This PEP proposes the inclusion of a pip bootstrap executable in the Python >> installation to simplify the use of 3rd-party modules by Python users. >> >> This PEP does not propose to include the pip implementation in the Python >> standard library. Nor does it propose to implement any package management or >> installation mechanisms beyond those provided by PEP 427 ("The Wheel Binary >> Package Format 1.0") and TODO distlib PEP. 
>> >> Rationale >> ========= >> >> Currently the user story for installing 3rd-party Python modules is >> not as simple as it could be. It requires that all 3rd-party modules inform >> the user of how to install the installer, typically via a link to the >> installer. That link may be out of date or the steps required to perform the >> install of the installer may be enough of a roadblock to prevent the user from >> further progress. >> >> Large Python projects which emphasise a low barrier to entry have shied away >> from depending on third party packages because of the introduction of this >> potential stumbling block for new users. >> >> With the inclusion of the package installer command in the standard Python >> installation the barrier to installing additional software is considerably >> reduced. It is hoped that this will therefore increase the likelihood that >> Python projects will reuse third party software. >> >> It is also hoped that this reduces the number of proposals to include >> more and more software in the Python standard library, and therefore that >> more popular Python software is more easily upgradeable beyond requiring >> Python installation upgrades. >> >> >> Proposal >> ======== >> >> Python install includes an executable called "pip" that attempts to import pip >> machinery. If it can then the pip command proceeds as normal. If it cannot it >> will bootstrap pip by downloading the pip implementation wheel file. >> Once installed, the pip command proceeds as normal. >> >> A bootstrap is used in place of the full pip code so that we don't have >> to bundle pip and also the install tool is upgradeable outside of the regular >> Python upgrade timeframe and processes. >> >> To avoid issues with sudo we will have the bootstrap default to installing the >> pip implementation to the per-user site-packages directory defined in PEP 370 >> and implemented in Python 2.6/3.0. 
Since we avoid installing to the system >> Python we also avoid conflicting with any other packaging system (on Linux >> systems, for example.) If the user is inside a virtual environment (TODO PEP >> ref) then the pip implementation will be installed into that virtual >> environment. >> >> The bootstrapping process will proceed as follows: >> >> 1. The user system has Python (3.4+) installed. In the "scripts" directory of >> the Python installation there is the bootstrap script called "pip". >> 2. The user will invoke a pip command, typically "pip install <package>", for >> example "pip install Django". >> 3. The bootstrap script will attempt to import the pip implementation. If this >> succeeds, the pip command is processed normally. >> 4. On failing to import the pip implementation the bootstrap notifies the user >> that it is "upgrading pip" and contacts PyPI to obtain the latest download >> wheel file (see PEP 427.) >> 5. Upon downloading the file it is installed using the distlib installation >> machinery for wheel packages. Upon completing the installation the user >> is notified that "pip has been upgraded." TODO how is it verified? >> 6. The pip tool may now import the pip implementation and continues to process >> the requested user command normally. >> >> Users may be running in an environment which cannot access the public Internet >> and are relying solely on a local package repository. They would use the "-i" >> (Base URL of Python Package Index) argument to the "pip install" command. This >> use case will be handled by: >> >> 1. Recognising the command-line arguments that specify alternative or additional >> locations to discover packages and attempting to download the package >> from those locations. >> 2. If the package is not found there then we attempt to download it using >> the standard "https://pypi.python.org/pypi/simple/pip" index. >> 3. 
If that also fails, for any reason, we indicate to the user the operation >> we were attempting, the reason for failure (if we know it) and display >> further instructions for downloading and installing the file manually. >> >> Manual installation of the pip implementation will be supported through the >> manual download of the wheel file and "pip install ". >> >> This installation will not perform standard pip installation steps of saving the >> file to a cache directory or updating any local database of installed files. >> >> The download of the pip implementation install file should be performed >> securely. The transport from pypi.python.org will be done over HTTPS but the CA >> certificate check will most likely not be performed. Therefore we will >> utilise the embedded signature support in the wheel format to validate the >> downloaded file. > > Major concern is how will this deal with key revocation? If the key used to sign the pip wheels gets compromised what's the path for this tool to revoke the key? The wheel scheme skips revocation to simplify the implementation. You would be hard pressed to argue that it's not better than the current pypi security model ;-) A proper revocation model would look like TUF, a simple one would consist of grabbing the author keys over HTTPS at time of use. The scheme is flipped from the revocation model: require an up-to-date assertion that the key is current in order to trust the key, instead of trusting a key until a revocation happens. > On the side of the HTTPS I've been looking into this recently as far as Python goes. If openssl is correctly configured (this is the case on Linux, and any Python compiled against the OSX OpenSSL) then `urllib.request.urlopen('https://pypi.python.org/', cadefault=True) will do the right thing. This gets sticker on cases where openssl _isn't_ configured with a default set of certificates (Windows i'm assuming, Homebrew on OSX for sure) this will cause a certificate error. 
It's possible a workable solution can be designed via SSL. From dholth at gmail.com Tue Mar 19 22:41:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 17:41:47 -0400 Subject: [Distutils] PEP DRAFT - Inclusion of pip bootstrap in Python installation In-Reply-To: References: <055558DA-0520-43CE-B691-0EA3B40EC0C7@stufft.io> Message-ID: On Tue, Mar 19, 2013 at 3:37 PM, Daniel Holth wrote: > On Tue, Mar 19, 2013 at 3:11 PM, Donald Stufft wrote: >> >> On Mar 19, 2013, at 2:04 PM, Richard Jones wrote: >> >>> Hi all, >>> >>> I present for your deliberation a draft PEP for the inclusion of a pip >>> bootstrap program in Python 3.4. Discussion of this PEP should remain >>> here on the distutils SIG. >>> >>> The PEP is revision controlled in my bitbucket account >>> https://bitbucket.org/r1chardj0n3s/pypi-pep (this is also where I'm >>> intending to develop the implementation.) >>> >>> >>> Richard >>> >>> PEP: XXXX >>> Title: Inclusion of pip bootstrap in Python installation >>> Version: >>> Last-Modified: >>> Author: Richard Jones >>> BDFL-Delegate: Nick Coghlan >>> Discussions-To: >>> Status: Draft >>> Type: Standards Track >>> Created: 18-Mar-2013 >>> Python-Version: 3.4 >>> Post-History: 19-Mar-2013 >>> >>> Abstract >>> ======== >>> >>> This PEP proposes the inclusion of a pip boostrap executable in the Python >>> installation to simplify the use of 3rd-party modules by Python users. >>> >>> This PEP does not propose to include the pip implementation in the Python >>> standard library. Nor does it propose to implement any package management or >>> installation mechanisms beyond those provided by PEPs 470 ("The Wheel Binary >>> Package Format 1.0") and TODO distlib PEP. >>> >>> >>> Rationale >>> ========= >>> >>> Currently the user story for installing 3rd-party Python modules is >>> not as simple as it could be. It requires that all 3rd-party modules inform >>> the user of how to install the installer, typically via a link to the >>> installer. 
>>> [... remainder of the quoted draft, identical to the quote in the previous message, trimmed ...] 
>>> >>> This installation will not perform standard pip installation steps of saving the >>> file to a cache directory or updating any local database of installed files. >>> >>> The download of the pip implementation install file should be performed >>> securely. The transport from pypi.python.org will be done over HTTPS but the CA >>> certificate check will most likely not be performed. Therefore we will >>> utilise the embedded signature support in the wheel format to validate the >>> downloaded file. >> >> Major concern is how will this deal with key revocation? If the key used to sign the pip wheels gets compromised what's the path for this tool to revoke the key? > > The wheel scheme skips revocation to simplify the implementation. You > would be hard pressed to argue that it's not better than the current > pypi security model ;-) > > A proper revocation model would look like TUF, a simple one would > consist of grabbing the author keys over HTTPS at time of use. The > scheme is flipped from the revocation model: require an up-to-date > assertion that the key is current in order to trust the key, instead > of trusting a key until a revocation happens. > >> On the side of the HTTPS I've been looking into this recently as far as Python goes. If openssl is correctly configured (this is the case on Linux, and any Python compiled against the OSX OpenSSL) then `urllib.request.urlopen('https://pypi.python.org/', cadefault=True) will do the right thing. This gets sticker on cases where openssl _isn't_ configured with a default set of certificates (Windows i'm assuming, Homebrew on OSX for sure) this will cause a certificate error. It's possible a workable solution can be designed via SSL. It would also be incredibly easy to require n signatures on the wheel instead of just 1. The signature format used, JWS-JS, already holds a list of signatures. 
This would make it much harder to compromise the system by pwning a single pip developer's machine and may help to put your mind at ease about revocation. From glyph at twistedmatrix.com Wed Mar 20 00:40:05 2013 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 19 Mar 2013 16:40:05 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> Message-ID: On Mar 18, 2013, at 4:52 PM, Nick Coghlan wrote: > On Mon, Mar 18, 2013 at 4:39 PM, Barry Warsaw wrote: >> On Mar 18, 2013, at 04:13 PM, Nick Coghlan wrote: >> >>> Eventually I expect pip will grow a "--wheel-only" option to run it in >>> strict "installer only" mode, but the ecosystem is a long way from >>> supporting that being a useful option (especially since there are some >>> cases which will still require falling back to the "build from source" >>> model). >> >> If that's the end goal, then it should be the default now. > > No, user experience is king. Right now, defaulting to wheel-only would > be an awful user experience (because you wouldn't be able to install > anything), as well as being completely backwards incompatible with the > current behaviour (because everything would break). But it could default to wheels-and-other-things right now without breaking anything, right? What's the rationale for not just preferring wheels if they're available? -glyph From dholth at gmail.com Wed Mar 20 01:00:36 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 20:00:36 -0400 Subject: [Distutils] pip merges wheel In-Reply-To: References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> Message-ID: It's possible to upload broken wheels. I don't want "I had to find the disable flag" to be anyone's first impression. 
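[Editor's note: returning to Daniel's multiple-signature suggestion above — at its core, "require n signatures" is just a threshold check over the set of trusted keys whose signatures verified. Schematically (purely illustrative; this is not the actual JWS-JS layout):]

```python
def enough_signatures(verified_signers, trusted_keys, threshold):
    """True when at least `threshold` distinct trusted keys produced
    a valid signature over the wheel."""
    return len(set(verified_signers) & set(trusted_keys)) >= threshold

# Two of three trusted pip developers signed: accept.
assert enough_signatures({"key-a", "key-b"}, {"key-a", "key-b", "key-c"}, 2)
# A single (possibly compromised) key is no longer sufficient: reject.
assert not enough_signatures({"key-a"}, {"key-a", "key-b", "key-c"}, 2)
```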
On Mar 19, 2013 7:40 PM, "Glyph" wrote: > > On Mar 18, 2013, at 4:52 PM, Nick Coghlan wrote: > > > On Mon, Mar 18, 2013 at 4:39 PM, Barry Warsaw wrote: > >> On Mar 18, 2013, at 04:13 PM, Nick Coghlan wrote: > >> > >>> Eventually I expect pip will grow a "--wheel-only" option to run it in > >>> strict "installer only" mode, but the ecosystem is a long way from > >>> supporting that being a useful option (especially since there are some > >>> cases which will still require falling back to the "build from source" > >>> model). > >> > >> If that's the end goal, then it should be the default now. > > > > No, user experience is king. Right now, defaulting to wheel-only would > > be an awful user experience (because you wouldn't be able to install > > anything), as well as being completely backwards incompatible with the > > current behaviour (because everything would break). > > But it could default to wheels-and-other-things right now without breaking > anything, right? What's the rationale for not just preferring wheels if > they're available? > > -glyph > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalke at dalkescientific.com Wed Mar 20 00:54:43 2013 From: dalke at dalkescientific.com (Andrew Dalke) Date: Wed, 20 Mar 2013 00:54:43 +0100 Subject: [Distutils] special compiler options for only one file Message-ID: <3B33BD0A-34A7-486C-83E4-71660775AF09@dalkescientific.com> Hi all, I have a Python extension which uses CPU-specific features, if available. This is done through a run-time check. If the hardware supports the POPCNT instruction then it selects one implementation of my inner loop, if SSSE3 is available then it selects another, otherwise it falls back to generic versions of my performance critical kernel. (Some 95%+ of the time is spent in this kernel.) 
Unfortunately, there's a failure mode I didn't expect. I use -mssse3 and -O3 to compile all of the C code, even though only one file needs that -mssse3 option. As a result, the other files are compiled with the expectation that SSSE3 will exist. This causes a segfault for the line start_target_popcount = (int)(query_popcount * threshold); because the compiler used fisttpl, which is an SSSE-3 instruction. After all, I told it to assume that ssse3 exists. The Debian packager for my package recently ran into this problem, because the test machine has a gcc which understands -mssse3 but the machine itself has an older CPU without those instructions. I'm trying to come up with a solution that can be automated for the Debian distribution. I want a solution where the same binary can work on older machines and on newer ones. Ideally I would like to say that only one file is compiled with the -mssse3 option. Since my selector code isn't part of this file, SSSE-3 code will never be executed unless the CPU supports it. However, I can't figure out any way to tell distutils that a set of compiler options are specific to a single file. Is that even possible? Cheers, Andrew dalke at dalkescientific.com From glyph at twistedmatrix.com Wed Mar 20 01:33:49 2013 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 19 Mar 2013 17:33:49 -0700 Subject: [Distutils] pip merges wheel In-Reply-To: References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> Message-ID: <50F225FA-09FD-4403-93B7-F6A1F4C8630B@twistedmatrix.com> On Mar 19, 2013, at 5:00 PM, Daniel Holth wrote: > It's possible to upload broken wheels. I don't want "I had to find the disable flag" to be anyone's first impression. > It's possible to upload broken sdists, too. Trust me, Windows users' (who do not have C compilers) impression of pip _could not_ get any worse. -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Wed Mar 20 01:43:05 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 19 Mar 2013 20:43:05 -0400 Subject: [Distutils] pip merges wheel In-Reply-To: <50F225FA-09FD-4403-93B7-F6A1F4C8630B@twistedmatrix.com> References: <51479A54.7020606@trueblade.com> <20130318163905.31e8a643@anarchist> <50F225FA-09FD-4403-93B7-F6A1F4C8630B@twistedmatrix.com> Message-ID: we might do different defaults for each platform On Mar 19, 2013 8:33 PM, "Glyph" wrote: > On Mar 19, 2013, at 5:00 PM, Daniel Holth wrote: > > It's possible to upload broken wheels. I don't want "I had to find the > disable flag" to be anyone's first impression. > > It's possible to upload broken sdists, too. > > Trust me, Windows' users (who do not have C compilers) impression of pip > _could not_ get any worse. > > -glyph > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis.coldwell at gmail.com Wed Mar 20 02:14:58 2013 From: dennis.coldwell at gmail.com (Dennis Coldwell) Date: Tue, 19 Mar 2013 18:14:58 -0700 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: Welcome Steve! I really enjoyed your lightning talk (and would like to apologize for my audible "WHAT!?!" reaction when I heard the title as "Debugging Python with Microsoft Visual Studio" :) Welcome to the community, I've also been lurking here from some time, hoping to see where I can lend a hand in the packaging sub-community. --Dennis On Mon, Mar 18, 2013 at 2:26 PM, Alex Clark wrote: > On 2013-03-18 16:34:25 +0000, Steve Dower said: > > Hi all >> >> I just joined up after the various discussions at PyCon and wanted to say >> hi. (If you were also there and want to put a face/voice to the name, I did >> the Visual Studio demo at one of the lightning talks.) 
>> >> The main reason I want to get involved is the openly acknowledged lack of >> Windows expertise that?s available. I work at Microsoft and part of my job >> description is to contribute code/testing/time/**documentation/help/etc. >> to CPython. (I can also do testing/time/help for other projects, but >> copyrightable artifacts are more complicated and, for now, not okay with >> our lawyers.) >> >> I expect I?ll mainly be lurking until I can be useful, which is why I >> wanted to start with this post. I?m pretty good with Windows, and I have >> direct access to all the experts and internal mailing lists. So just shout >> out when something comes up and I?ll be happy to clarify or research an >> answer. >> > > > Welcome! > > >> Cheers, >> Steve >> >> ______________________________**_________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/**mailman/listinfo/distutils-sig >> > > > -- > Alex Clark ? http://about.me/alex.clark > > > ______________________________**_________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/**mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.svetlov at gmail.com Wed Mar 20 05:04:33 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 19 Mar 2013 21:04:33 -0700 Subject: [Distutils] special compiler options for only one file In-Reply-To: <3B33BD0A-34A7-486C-83E4-71660775AF09@dalkescientific.com> References: <3B33BD0A-34A7-486C-83E4-71660775AF09@dalkescientific.com> Message-ID: I guess utilize build_clib command to create static library with your settings just for your file than build your extensions with linking that library. On Tue, Mar 19, 2013 at 4:54 PM, Andrew Dalke wrote: > Hi all, > > I have a Python extension which uses CPU-specific features, > if available. This is done through a run-time check. 
If the > hardware supports the POPCNT instruction then it selects one > implementation of my inner loop, if SSSE3 is available then > it selects another, otherwise it falls back to generic versions > of my performance critical kernel. (Some 95%+ of the time is > spent in this kernel.) > > Unfortunately, there's a failure mode I didn't expect. I > use -mssse3 and -O3 to compile all of the C code, even though > only one file needs that -mssse3 option. > > As a result, the other files are compiled with the expectation > that SSSE3 will exist. This causes a segfault for the line > > start_target_popcount = (int)(query_popcount * threshold); > > because the compiler used fisttpl, which is an SSSE-3 instruction. > After all, I told it to assume that ssse3 exists. > > The Debian packager for my package recently ran into this problem, > because the test machine has a gcc which understands -mssse3 but > the machine itself has an older CPU without those instructions. > > I'm trying to come up with a solution that can be automated for > the Debian distribution. I want a solution where the same binary > can work on older machines and on newer ones > > Ideally I would like to say that only one file is compiled > with the -mssse3 option. Since my selector code isn't part of this > file, SSSE-3 code will never be executed unless the CPU supports is. > > However, I can't figure out any way to tell distutils that > a set of compiler options are specific to a single file. > > Is that even possible? 
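[Editor's note: distutils has no documented per-file flag support, but a widely used workaround is to hook the compiler's per-file compile step from a `build_ext` subclass. A sketch follows — the file name is hypothetical, and `CCompiler._compile` is a private API (its signature here matches `UnixCCompiler`), so this relies on an implementation detail rather than a documented interface.]

```python
import os.path

try:
    from distutils.command.build_ext import build_ext
except ImportError:  # Python 3.12+ removed distutils; setuptools provides one
    from setuptools.command.build_ext import build_ext

# Only the SSSE3 kernel gets the extra flag; every other source file
# is compiled with the plain, portable options.
PER_FILE_FLAGS = {"kernel_ssse3.c": ["-mssse3"]}

def extra_flags_for(src):
    """Extra compiler flags for one source file (empty for most files)."""
    return PER_FILE_FLAGS.get(os.path.basename(src), [])

class per_file_build_ext(build_ext):
    def build_extensions(self):
        original_compile = self.compiler._compile  # private per-file hook
        def compile_one(obj, src, ext, cc_args, extra_postargs, pp_opts):
            original_compile(obj, src, ext, cc_args,
                             extra_postargs + extra_flags_for(src), pp_opts)
        self.compiler._compile = compile_one
        build_ext.build_extensions(self)
```

setup() would then pass cmdclass={'build_ext': per_file_build_ext}, leaving the Extension definition itself unchanged.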
> > Cheers, > > Andrew > dalke at dalkescientific.com > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Thanks, Andrew Svetlov From bkabrda at redhat.com Wed Mar 20 09:01:47 2013 From: bkabrda at redhat.com (Bohuslav Kabrda) Date: Wed, 20 Mar 2013 04:01:47 -0400 (EDT) Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: Message-ID: <1119152700.11415014.1363766507176.JavaMail.root@redhat.com> ----- Original Message ----- > pkg_resources.requires() is our only current solution for parallel > installation of incompatible versions. This can be made to work and > is > a lot better than the nothing we had before it was created, but also > has quite a few issues (and it can be a nightmare to debug when it > goes wrong). > > Based on the exchanges with Mark McLoughlin the other week, and > chatting to Matthias Klose here at the PyCon US sprints, I think I > have a design that will let us support parallel installs in a way > that > builds on existing standards, while behaving more consistently in > edge > cases and without making sys.path ridiculously long even in systems > with large numbers of potentially incompatible dependencies. > > The core of this proposal is to create an updated version of the > installation database format that defines semantics for *.pth files > inside .dist-info directories. > > Specifically, whereas *.pth files directly in site-packages are > processed automatically when Python starts up, those inside dist-info > directories would be processed only when explicitly requested > (probably through a new distlib API). 
The processing of the *.pth > file > would insert it into the path immediately before the path entry > containing the .dist-info directory (this is to avoid an issue with > the pkg_resources insert-at-the-front-of-sys.path behaviour where > system packages can end up shadowing those from a local source > checkout, without running into the issue with > append-to-the-end-of-sys.path where a specifically requested version > is shadowed by a globally installed version) > > To use CherryPy2 and CherryPy3 on Fedora as an example, what this > would allow is for CherryPy3 to be installed normally (i.e. directly > in site-packages), while CherryPy2 would be installed as a split > install, with the .dist-info going into site-packages and the actual > package going somewhere else (more on that below). A cherrypy2.pth > file inside the dist-info directory would reference the external > location where cherrypy 2.x can be found. > > To use this at runtime, you would do something like: > > distlib.some_new_requires_api("CherryPy (2.2)") > import cherrypy > So what would be done when CherryPy 4 came? CherryPy 3 is installed directly in site-packages, so version 2 and 4 would be treated with split-install? It seems to me that this type of special casing is not what we want. If you develop on one machine and deploy on another machine, you have no guarantee that the standard installation of CherryPy is the same as on your system. That would force developers to actually always install their used versions by "split-install", so that they could make sure they always import the correct version. At this point, I will go to the Ruby world for example (please don't shout at me :). If you look at how RubyGems work, they put _every_ Gem in a versioned directory (therefore no special casing). When just "require 'foo'" is used, newest "foo" is imported, otherwise a specific version is imported if specified. 
I believe that we should head a similar way here, making the "split-install" the default (and the only way). Then if the user uses the standard

>>> import cherrypy

Python would import the newest version. When using

>>> distlib.some_new_requires_api("CherryPy (2.2)")
>>> import cherrypy

Python would import the specific version. This may actually turn out to be very useful, as you could place all the distlib calls into __init__.py of your package which would nicely separate this from the actual code (and we wouldn't need anything like Ruby Gemfiles). So am I completely wrong here or does this make sense to you? Slavek. > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -- Regards, Bohuslav "Slavek" Kabrda. From p.f.moore at gmail.com Wed Mar 20 11:13:19 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 10:13:19 +0000 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On 19 March 2013 16:21, Steve Dower wrote: > As I understand, the issue is the same as between different versions of Python and comes down to not being able to assume a compiler on Windows machines. It's easy to make a source file that will compile for any ABI and platform, but distributing binaries requires each one to be built separately. This doesn't have to be an onerous task - it can be scripted quite easily once you have all the required compilers - but it does take more effort than simply sharing a source file.
Another nice tool would be some sort of Windows build farm, where projects could submit a sdist and it would build wheels for a list of supported Python versions and architectures. That wouldn't work for projects with complex dependencies, obviously, but could cover a reasonable-sized chunk of PyPI (especially if dependencies could be added to the farm on request). And can I have a pony as well, of course... :-) Paul From p.f.moore at gmail.com Wed Mar 20 13:27:30 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 12:27:30 +0000 Subject: [Distutils] Building wheels - project metadata, and specifying compatibility Message-ID: When building wheels, it is necessary to know details of the compatibility requirements of the code. The most common case is for pure Python code, where the code could in theory be valid for a single Python version, but in reality is more likely to be valid either for all Pythons, or sometimes for just Python 2 or Python 3 (where separate code bases or 2to3 are involved). The wheel project supports a "universal" flag in setup.cfg, which sets the compatibility flags to 'py2.py3', but that is only one case. Ultimately, we need a means (probably in metadata) for (pure Python) projects to specify any of the following: 1. The built code works on any version of Python (that the project supports) 2. The built code is specific to the major version of Python that it was built with 3. The built code is only usable for the precise Python version it was built with The default is currently (3), but this is arguably the least common case. Nearly all code will support at least (2) and more and more is supporting (1). Note that this is separate from the question of what versions the project supports. It's about how the code is written. Specifically, there's no point in marking code that uses new features in Python 3.3 as .py33 - it's still .py3 as it will work with Python 3.4. 
The fact that it won't work on Python 3.2 is just because the project doesn't support Python 3.2. Installing a .py3 wheel into Python 3.2 is no different from installing a sdist there. So overspecifying the wheel compatibility so that a sdist gets picked up for earlier versions isn't helpful. In addition to a means for projects to specify this themselves, tools (bdist_wheel, pip wheel) should probably have a means to override the default at the command line, as it will be some time before projects specify this information, even once it is standard. There's always the option to rename the generated file, but that feels like a hack... Where C extensions are involved, there are other questions. Mostly, compiled code is implementation, architecture, and minor version specific, so there's little to do here. The stable ABI is relevant, but I have no real experience of using it to know how that would work. There is also the case of projects with C accelerators - it would be good to be able to easily build both the accelerated version and a fallback pure-python wheel. I don't believe this is easy as things stand - distutils uses a compiler if it's present, so forcing a pure-python build when you have a compiler is harder work than it needs to be when building binary distributions. Comments? Should the default in bdist_wheel and pip wheel be changed or should it remain "as safe as possible" (practicality vs purity)? If the latter, should override flags be added, or is renaming the wheel in the absence of project metadata the recommended approach? And does anyone have any experience of how this might all work with C extensions? 
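For reference, the compatibility tags under discussion are encoded in the wheel filename itself, with `.` inside a tag field meaning "compatible with any of these". A rough parser for the simple five-field case (no optional build tag) might look like this; `parse_wheel_tags` is an invented helper, not a real API:

```python
# Rough sketch: split a wheel filename into its three tag sets.
# Assumes the simple {name}-{version}-{python}-{abi}-{platform}.whl form
# without a build tag; compressed tag sets are '.'-separated.
def parse_wheel_tags(filename):
    name, version, py_tag, abi_tag, plat_tag = filename[:-len(".whl")].split("-")
    return py_tag.split("."), abi_tag.split("."), plat_tag.split(".")

py, abi, plat = parse_wheel_tags("example-1.0-py2.py3-none-any.whl")
```

So the "universal" flag in setup.cfg simply produces the `py2.py3` compressed Python tag, while the current default produces a single interpreter-and-version-specific tag.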
Paul From ncoghlan at gmail.com Wed Mar 20 13:29:00 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 05:29:00 -0700 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 11:06 AM, PJ Eby wrote: > Could you perhaps spell out why this is better than just dropping .whl > files (or unpacked directories) into site-packages or equivalent? I need a solution that will also work for packages installed by the system installer - in fact, that's the primary use case. For self-contained installation independent of the system Python, people should be using venv/virtualenv, zc.buildout, software collections (a Fedora/RHEL tool in the same space), or a similar "isolated application" solution. System packages will be spread out according to the FHS, and need to work relatively consistently for every language the OS supports (i.e. all of them), so long term solutions that assume the use of Python-specific bundling formats for the actual installation are not sufficient in my view. I also want to create a database of parallel installed versions that can be used to avoid duplication across virtual environments and software collections by using .pth files to reference a common installed version rather than having to use symlinks or copies of the files. I'm not wedded to using *actual* pth files as a cross-platform linking solution - a more limited format that only supported path additions, without the extra powers of pth files would be fine. The key point is to use the .dist-info directories to bridge between "unversioned installs in site packages" and "finding parallel versions at runtime without side effects on all Python applications executed on that system" (which is the problem with using a pth file in site packages to bootstrap the parallel versioning system as easy_install does). 
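The "path additions only" format Nick mentions, combined with the insert-immediately-before-the-owning-path-entry rule from earlier in the thread, could be sketched as a pure function over a path list. All names here are invented, and real pth files also allow `import` lines, which this limited format would drop:

```python
import os

def add_versioned_paths(path_list, dist_info_dir, extra_paths):
    """Return a new path list with extra_paths spliced in just before
    the entry that contains the .dist-info directory (so local source
    checkouts earlier on the path still win, but globally installed
    defaults do not shadow the requested version)."""
    owner = os.path.dirname(dist_info_dir)  # e.g. .../site-packages
    where = path_list.index(owner)
    return path_list[:where] + list(extra_paths) + path_list[where:]
```

Working on a plain list rather than mutating `sys.path` directly keeps the sketch side-effect free; a real implementation would apply the result to `sys.path`.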
> Also, one thing that actually confuses me about this proposal is that > it sounds like you are saying you'd have two CherryPy.dist-info > directories in site-packages, which sounds broken to me; the whole > point of the existing protocol for .dist-info was that it allowed you > to determine the importable versions from a single listdir(). Your > approach would break that feature, because you'd have to: > > 1. Read each .dist-info directory to find .pth files > 2. Open and read all the .pth files > 3. Compare the .pth file contents with sys.path to find out what is > actually *on* sys.path If a distribution has been installed in site-packages (or has an appropriate *.pth file there), there won't be any *.pth file in the .dist-info directory. The *.pth file will only be present if the package has been installed *somewhere else*. However, it occurs to me that we can do this differently, by explicitly involving a separate directory that *isn't* on sys.path by default, and use a path hook to indicate when it should be accessed. Under this version of the proposal, PEP 376 would remain unchanged, and would effectively become the "database of installed distributions available on sys.path by default". These files would all remain available by default, preserving backwards compatibility for the vast majority of existing software that doesn't use any kind of parallel install system. We could then introduce a separate "database of all installed distributions". Let's use the "versioned-packages" name, and assume it lives adjacent to the existing "site-packages". 
The difference between this versioned-packages directory and site-packages would be that: - it would never be added to sys.path itself - multiple .dist-info directories for different versions of the same distribution may be present - distributions are installed into named-and-versioned subdirectories rather than directly into versioned-packages - rather than the contents being processed directly from sys.path, we would add a "" entry to sys.path with a path hook that maps to a custom module finder that handles the extra import locations without the same issues as the current approach to modifying sys.path in pkg_resources (which allows shadowing development versions with installed versions by inserting at the front), or the opposite problem that would be created by appending to the end (allowing default versions to shadow explicitly requested versions) We would then add some new version constraint API in distlib to: 1. Check the PEP 376 db. If the version identified there satisfies the constraint, fine, we leave the import state unmodified. 2. If no suitable version is found, check the new versioned-packages directory. 3. If a suitable parallel installed version is found, we check its dist-info directory for the details needed to update the set of paths processed by the versioned import hook. The versioned import hook would work just like normal sys.path based import (i.e. maintaining a sequence of path entries, using sys.modules, sys.path_hooks, sys.path_importer_cache), the only difference is that the set of paths it checks would initially be empty. Calls to the new API in distlib would modify the *versioned* path, effectively inserting all those paths at the point in sys.path where the "" marker is placed, rather than appending them to the beginning or end. The API that updates the paths handled by the versioned import hook would also take care of detecting and complaining about incompatible version constraints. 
It may even be possible to update pkg_resources.requires() to work this way, potentially avoiding the need for the easy_install.pth file that has side effects on applications that don't even use pkg_resources. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Mar 20 13:36:49 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 05:36:49 -0700 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: <1119152700.11415014.1363766507176.JavaMail.root@redhat.com> References: <1119152700.11415014.1363766507176.JavaMail.root@redhat.com> Message-ID: On Wed, Mar 20, 2013 at 1:01 AM, Bohuslav Kabrda wrote: > So what would be done when CherryPy 4 came? CherryPy 3 is installed directly in site-packages, so version 2 and 4 would be treated with split-install? > It seems to me that this type of special casing is not what we want. If you develop on one machine and deploy on another machine, you have no guarantee that the standard installation of CherryPy is the same as on your system. That would force developers to actually always install their used versions by "split-install", so that they could make sure they always import the correct version. This approach isn't viable, as it is both backwards incompatible with the expectations of current Python software and incompatible with the requirements of Linux distros and other system integrators (who need to be able to add new backwards incompatible versions of software without changing the default version). And I definitely won't shout at people for mentioning what other languages do - learning from what works and what doesn't for other groups is exactly what we *should* be doing. Many of the features in the forthcoming metadata 2.0 specification are driven by stealing things that are known to work from Node.js, Perl, Ruby, PHP, RPM, DEB, etc. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Wed Mar 20 13:44:07 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 08:44:07 -0400 Subject: [Distutils] Building wheels - project metadata, and specifying compatibility In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 8:27 AM, Paul Moore wrote: > When building wheels, it is necessary to know details of the > compatibility requirements of the code. The most common case is for > pure Python code, where the code could in theory be valid for a single > Python version, but in reality is more likely to be valid either for > all Pythons, or sometimes for just Python 2 or Python 3 (where > separate code bases or 2to3 are involved). The wheel project supports > a "universal" flag in setup.cfg, which sets the compatibility flags to > 'py2.py3', but that is only one case. > > Ultimately, we need a means (probably in metadata) for (pure Python) > projects to specify any of the following: > > 1. The built code works on any version of Python (that the project supports) > 2. The built code is specific to the major version of Python that it > was built with > 3. The built code is only usable for the precise Python version it was > built with > > The default is currently (3), but this is arguably the least common > case. Nearly all code will support at least (2) and more and more is > supporting (1). > > Note that this is separate from the question of what versions the > project supports. It's about how the code is written. Specifically, > there's no point in marking code that uses new features in Python 3.3 > as .py33 - it's still .py3 as it will work with Python 3.4. The fact > that it won't work on Python 3.2 is just because the project doesn't > support Python 3.2. Installing a .py3 wheel into Python 3.2 is no > different from installing a sdist there. 
So overspecifying the wheel > compatibility so that a sdist gets picked up for earlier versions > isn't helpful. On the other hand Python 3.4 knows it is compatible with "py33" and will pick up that wheel too. It is designed this way to provide a (small) distinction between the safe default and intentional cross-Python-compatible publishing. > In addition to a means for projects to specify this themselves, tools > (bdist_wheel, pip wheel) should probably have a means to override the > default at the command line, as it will be some time before projects > specify this information, even once it is standard. There's always the > option to rename the generated file, but that feels like a hack... I need to do a "wheel retag" tool instead of a simple "rename" because now the WHEEL metadata is supposed to contain all the information in the filename through the Tag and Build keys. This lets us effectively sign the filename. > Where C extensions are involved, there are other questions. Mostly, > compiled code is implementation, architecture, and minor version > specific, so there's little to do here. The stable ABI is relevant, > but I have no real experience of using it to know how that would work. > There is also the case of projects with C accelerators - it would be > good to be able to easily build both the accelerated version and a > fallback pure-python wheel. I don't believe this is easy as things > stand - distutils uses a compiler if it's present, so forcing a > pure-python build when you have a compiler is harder work than it > needs to be when building binary distributions. This is an open problem, for example in pypy they might be C decelerators. There should be a better way to have optional or conditional C extensions. > Comments? Should the default in bdist_wheel and pip wheel be changed > or should it remain "as safe as possible" (practicality vs purity)? 
If > the latter, should override flags be added, or is renaming the wheel > in the absence of project metadata the recommended approach? And does > anyone have any experience of how this might all work with C > extensions? I would like to see the setup.cfg metadata used by bdist_wheel expanded and standardized. The command line override would also be good. Does anyone have the stomach to put some of that into distutils or setuptools itself? Daniel Holth From bkabrda at redhat.com Wed Mar 20 13:57:36 2013 From: bkabrda at redhat.com (Bohuslav Kabrda) Date: Wed, 20 Mar 2013 08:57:36 -0400 (EDT) Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: Message-ID: <972174539.11573192.1363784256835.JavaMail.root@redhat.com> ----- Original Message ----- > On Wed, Mar 20, 2013 at 1:01 AM, Bohuslav Kabrda > wrote: > > So what would be done when CherryPy 4 came? CherryPy 3 is installed > > directly in site-packages, so version 2 and 4 would be treated > > with split-install? > > It seems to me that this type of special casing is not what we > > want. If you develop on one machine and deploy on another machine, > > you have no guarantee that the standard installation of CherryPy > > is the same as on your system. That would force developers to > > actually always install their used versions by "split-install", so > > that they could make sure they always import the correct version. > > This approach isn't viable, as it is both backwards incompatible with > the expectations of current Python software and incompatible with the > requirements of Linux distros and other system integrators (who need > to be able to add new backwards incompatible versions of software > without changing the default version). > Yep, it's backwards incompatible, sure. I think your proposal is a step in the right direction. 
My proposal is where I think we should be heading in the long term (and do the big step of breaking the backward compatibility as a part of some other huge step, like Python2->Python3 transition was). As for Linux distros, that's not an issue AFAICS. We've been doing the same with Ruby for quite some time and it works (yes, with some patching here and there, but generally it does). Fact is that this system brings lots of benefits to developers. I'm actually quite schizophrenic in this regard, as I'm both packager and developer :) and I see how these worlds collide in these matters. From the packager point of view I see your point, from the developer point of view I install CherryPy 4, import CherryPy and then find out that I'm still using version 3, which breaks my developer expectations. > And I definitely won't shout at people for mentioning what other > languages do - learning from what works and what doesn't for other > groups is exactly what we *should* be doing. Many of the features in > the forthcoming metadata 2.0 specification are driven by stealing > things that are known to work from Node.js, Perl, Ruby, PHP, RPM, > DEB, > etc. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- Regards, Bohuslav "Slavek" Kabrda. From dholth at gmail.com Wed Mar 20 14:02:46 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 09:02:46 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: <1119152700.11415014.1363766507176.JavaMail.root@redhat.com> Message-ID: Not sure how you could do a good job having one version of a package available by default, and a different one available by requires(). Eggs list the top level packages provided and you could shadow them but it seems like it would be really messy. Ruby Gems appear to have a directory full of gems: ~/.gem/ruby/1.8/gems/. 
Each subdirectory is {name}-{version} and doesn't need any suffix - we know what they are because of where they are.

bundler-1.2.1
json-1.7.5
sinatra-1.3.3
tilt-1.3.3
tzinfo-0.3.33

Each subdirectory contains metadata, and a lib/ directory that would actually be added to the Ruby module path. Like with pkg_resources, developers are warned to only "require Gems" on things that are *not* imported (preferably in the equivalent of our console_scripts wrappers). Otherwise you get an unwanted Gem dependency if you ever tried to use the same gem outside of the gem system. From p.f.moore at gmail.com Wed Mar 20 14:11:35 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 13:11:35 +0000 Subject: [Distutils] Building wheels - project metadata, and specifying compatibility In-Reply-To: References: Message-ID: On 20 March 2013 12:44, Daniel Holth wrote: > On the other hand Python 3.4 knows it is compatible with "py33" and > will pick up that wheel too. > > It is designed this way to provide a (small) distinction between the > safe default and intentional cross-Python-compatible publishing. Good point. I keep forgetting it does that. (I still maintain that behaviour is very non-intuitive, but I'm willing to accept that unless someone else pipes up, it's probably just me :-)) >> In addition to a means for projects to specify this themselves, tools >> (bdist_wheel, pip wheel) should probably have a means to override the >> default at the command line, as it will be some time before projects >> specify this information, even once it is standard. There's always the >> option to rename the generated file, but that feels like a hack... > > I need to do a "wheel retag" tool instead of a simple "rename" because > now the WHEEL metadata is supposed to contain all the information in > the filename through the Tag and Build keys. This lets us effectively > sign the filename. Again good point. If I get some free time, I might take a stab at that if you'd like...
>> >> Where C extensions are involved, there are other questions. Mostly, >> compiled code is implementation, architecture, and minor version >> specific, so there's little to do here. The stable ABI is relevant, >> but I have no real experience of using it to know how that would work. >> There is also the case of projects with C accelerators - it would be >> good to be able to easily build both the accelerated version and a >> fallback pure-python wheel. I don't believe this is easy as things >> stand - distutils uses a compiler if it's present, so forcing a >> pure-python build when you have a compiler is harder work than it >> needs to be when building binary distributions. > > This is an open problem, for example in pypy they might be C > decelerators. There should be a better way to have optional or > conditional C extensions. Agreed. These are definitely hard issues, and a proper solution won't be quickly achieved. What we have now is a good 80% solution, but let's keep the remaining 20% in mind. >> Comments? Should the default in bdist_wheel and pip wheel be changed >> or should it remain "as safe as possible" (practicality vs purity)? If >> the latter, should override flags be added, or is renaming the wheel >> in the absence of project metadata the recommended approach? And does >> anyone have any experience of how this might all work with C >> extensions? > > I would like to see the setup.cfg metadata used by bdist_wheel > expanded and standardized. The command line override would also be > good. Does anyone have the stomach to put some of that into distutils > or setuptools itself? Agreed. My question would be, should this be exposed anywhere in the project metadata? (For example, for other tools that use distlib to build wheels and need to know programmatically what tags to use). By the way, one point I dislike with the bdist_wheel solution is that it explicitly strips #-comments from the end of the universal= line.
I can see why you want to be able to use end-of-line comments, but it's not part of the standard configparser format, and you don't support ';' style comments (which could confuse people). Paul From dholth at gmail.com Wed Mar 20 14:17:39 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 09:17:39 -0400 Subject: [Distutils] Building wheels - project metadata, and specifying compatibility In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 9:11 AM, Paul Moore wrote: > On 20 March 2013 12:44, Daniel Holth wrote: >> On the other hand Python 3.4 knows it is compatible with "py33" and >> will pick up that wheel too. >> >> It is designed this way to provide a (small) distinction between the >> safe default and intentional cross-Python-compatible publishing. > > Good point. I keep forgetting it does that. (I still maintain that > behaviour is very non-intuitive, but I'm willing to accept that unless > someone else pipes up,. it's probably just me :-)) > >>> In addition to a means for projects to specify this themselves, tools >>> (bdist_wheel, pip wheel) should probably have a means to override the >>> default at the command line, as it will be some time before projects >>> specify this information, even once it is standard. There's always the >>> option to rename the generated file, but that feels like a hack... >> >> I need to do a "wheel retag" tool instead of a simple "rename" because >> now the WHEEL metadata is supposed to contain all the information in >> the filename through the Tag and Build keys. This lets us effectively >> sign the filename. > > Again good point. If I get some free time, I might take a stab at that > if you'd like... > >>> Where C extensions are involved, there are other questions. Mostly, >>> compiled code is implementation, architecture, and minor version >>> specific, so there's little to do here. The stable ABI is relevant, >>> but I have no real experience of using it to know how that would work. 
>>> There is also the case of projects with C accelerators - it would be >>> good to be able to easily build both the accelerated version and a >>> fallback pure-python wheel. I don't believe this is easy as things >>> stand - distutils uses a compiler if it's present, so forcing a >>> pure-python build when you have a compiler is harder work than it >>> needs to be when building binary distributions. >> >> This is an open problem, for example in pypy they might be C >> decelerators. There should be a better way to have optional or >> conditional C extensions. > > Agreed. These are definitely hard issues, and a proper solution won't > be quickly achieved. What we have now is a good 80% solution, but > let's keep the remaining 20% in mind. > >>> Comments? Should the default in bdist_wheel and pip wheel be changed >>> or should it remain "as safe as possible" (practicality vs purity)? If >>> the latter, should override flags be added, or is renaming the wheel >>> in the absence of project metadata the recommended approach? And does >>> anyone have any experience of how this might all work with C >>> extensions? >> >> I would like to see the setup.cfg metadata used by bdist_wheel >> expanded and standardized. The command line override would also be >> good. Does anyone have the stomach to put some of that into distutils >> or setuptools itself? > > Agreed. My question would be, should this be exposed anywhere in the > project metadata? (For example, for other tools that use distlib to > build wheels and need to know programatically what tags to use). I think setup.cfg counts as far as build metadata is concerned. > By the way, one point I dislike with the bdist_wheel solution is that > it explicitly strips #-comments from the end of the universal= line. I > can see why you want to be able to use end-of-line comments, but it's > not part of the standard configparser format, and you don't support > ';' style comments (which could confuse people). 
That wasn't really intentional and it probably doesn't need to do that. It's piggybacking on top of the distutils config parsing system which may do what's needed already. From bkabrda at redhat.com Wed Mar 20 14:24:46 2013 From: bkabrda at redhat.com (Bohuslav Kabrda) Date: Wed, 20 Mar 2013 09:24:46 -0400 (EDT) Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: Message-ID: <1111917148.11618373.1363785886706.JavaMail.root@redhat.com> ----- Original Message ----- > Not sure how you could do a good job having one version of a package > available by default, and a different one available by requires(). > Eggs list the top level packages provided and you could shadow them > but it seems like it would be really messy. > Yup, it'd require a decent amount of changes and probably break some backwards compatibility as mentioned. > Ruby Gems appear to have a directory full of gems: > ~/.gem/ruby/1.8/gems/. Each subdirectory is {name}-{version} and > doesn't need any suffix - we know what they are because of where they > are. > > bundler-1.2.1 > json-1.7.5 > sinatra-1.3.3 > tilt-1.3.3 > tzinfo-0.3.33 > > Each subdirectory contains metadata, and a lib/ directory that would > actually be added to the Ruby module path. > Not exactly. The 1.8 directory contains gems/ and specifications/. The specifications/ directory contains {name}-{version}.gemspec, which is a meta-information holder for the specific gem. Among other things, it contains require_paths, which are concatenated with gems/{name}-{version} to get the load path. So the RubyGems require first looks at the list of specs, then chooses the proper one (newest when no version is specified or the specified one) and then computes the load path from it. > Like with pkg_resources, developers are warned to only "require Gems" > on things that are *not* imported (preferably in the equivalent of > our > console_scripts wrappers).
Otherwise you get an unwanted Gem > dependency if you ever tried to use the same gem outside of the gem > system. > I don't really know what you mean by this - could you please reword it? -- Regards, Bohuslav "Slavek" Kabrda. From ncoghlan at gmail.com Wed Mar 20 14:39:21 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 06:39:21 -0700 Subject: [Distutils] The pypa account on BitBucket Message-ID: Hey pip/virtualenv folks, does one of you control the pypa placeholder account on BitBucket? (it seems possible, given it was created shortly after the Github account). I've been pondering the communicating-with-the-broader-community issue (especially in relation to http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/) and I'm thinking that the PSF account is the wrong home on BitBucket for the meta-packaging documentation repo. The PSF has traditionally been hands off relative to the actual development activities, and I don't want to change that. Instead, I'd prefer to have a separate team account, and also talk to Vinay about moving pylauncher and distlib under that account. I can create a different account if need be, but if one of you controls pypa, then it would be good to use that and parallel the pip/virtualenv team account on GitHub. If you don't already control it, then I'll write to BitBucket support to see if the account is actually being used for anything, and if not, if there's a way to request control over it. Failing that, I'll settle for a similar-but-different name, but "pypa" is definitely my preferred option. Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Wed Mar 20 14:58:18 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 09:58:18 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: <1111917148.11618373.1363785886706.JavaMail.root@redhat.com> References: <1111917148.11618373.1363785886706.JavaMail.root@redhat.com> Message-ID: >> Like with pkg_resources, developers are warned to only "require Gems" >> on things that are *not* imported (preferably in the equivalent of >> our >> console_scripts wrappers). Otherwise you get an unwanted Gem >> dependency if you ever tried to use the same gem outside of the gem >> system. >> > > I don't really know what you mean by this - could you please reword it? There should be only one call to the linker, at the very top of execution. Otherwise in this pseudo-language example you can't use foobar without also using the requires system: myscript: requires(a, b, c) import foobar run() foobar: requires(c, d) # No! From ncoghlan at gmail.com Wed Mar 20 15:35:04 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 07:35:04 -0700 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: <1111917148.11618373.1363785886706.JavaMail.root@redhat.com> Message-ID: On Wed, Mar 20, 2013 at 6:58 AM, Daniel Holth wrote: >>> Like with pkg_resources, developers are warned to only "require Gems" >>> on things that are *not* imported (preferably in the equivalent of >>> our >>> console_scripts wrappers). Otherwise you get an unwanted Gem >>> dependency if you ever tried to use the same gem outside of the gem >>> system. >>> >> >> I don't really know what you mean by this - could you please reword it? > > There should be only one call to the linker, at the very top of > execution. 
Otherwise in this pseudo-language example you can't use > foobar without also using the requires system: > > myscript: > requires(a, b, c) > import foobar > run() > > foobar: > requires(c, d) # No! Right, version control and runtime access should be separate steps. In a virtual environment, you shouldn't need runtime checks at all - all the version compatibility checks should be carried out when creating the environment. Similarly, when a distro defines their site-packages contents, they're creating an integrated set of interlocking requirements, all designed to work together. Only when they need multiple mutually incompatible versions installed should the versioning system be needed. Assuming we go this way, distros will presumably install system Python packages into the versioned layout and then symlink them appropriately from the "available by default" layout in site-packages. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Mar 20 16:42:11 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 08:42:11 -0700 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Wed, Mar 20, 2013 at 3:13 AM, Paul Moore wrote: > On 19 March 2013 16:21, Steve Dower wrote: >> As I understand, the issue is the same as between different versions of Python and comes down to not being able to assume a compiler on Windows machines. It's easy to make a source file that will compile for any ABI and platform, but distributing binaries requires each one to be built separately. This doesn't have to be an onerous task - it can be scripted quite easily once you have all the required compilers - but it does take more effort than simply sharing a source file.
> > Another nice tool would be some sort of Windows build farm, where > projects could submit a sdist and it would build wheels for a list of > supported Python versions and architectures. That wouldn't work for > projects with complex dependencies, obviously, but could cover a > reasonable-sized chunk of PyPI (especially if dependencies could be > added to the farm on request). > > And can I have a pony as well, of course... :-) This also came up in the discussion over on http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/ I was pointed to an interesting resource: http://www.lfd.uci.edu/~gohlke/pythonlibs/ (The security issues with that arrangement are non-trivial, but the convenience factor is huge) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From Steve.Dower at microsoft.com Wed Mar 20 17:03:42 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 20 Mar 2013 16:03:42 +0000 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> > From: Nick Coghlan [mailto:ncoghlan at gmail.com] > [snip] > > I was pointed to an interesting resource: > http://www.lfd.uci.edu/~gohlke/pythonlibs/ > > (The security issues with that arrangement are non-trivial, but the > convenience factor is huge) FWIW, one of the guys on our team has met with Christoph and considers him trustworthy. > Cheers, > Nick. 
> From ncoghlan at gmail.com Wed Mar 20 17:31:38 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 09:31:38 -0700 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Wed, Mar 20, 2013 at 9:03 AM, Steve Dower wrote: >> From: Nick Coghlan [mailto:ncoghlan at gmail.com] >> [snip] >> >> I was pointed to an interesting resource: >> http://www.lfd.uci.edu/~gohlke/pythonlibs/ >> >> (The security issues with that arrangement are non-trivial, but the >> convenience factor is huge) > > FWIW, one of the guys on our team has met with Christoph and considers him trustworthy. Thanks, that's great to know, and ties into an idea that I just had. In addition to whether or not the build is trusted, there's also the risk of MITM attacks against the download site (less so when automated installers aren't involved, but still a risk). We just switched PyPI over to HTTPS for that very reason. The idle thought I had was that it may be useful if PyPI users could designate other users as "repackagers" for their project, and PyPI offered an interface that was *just* file uploads for an existing release. Then the pip developers, for example, could say "we trust Christoph to make our Windows installers", and grant him repackager access so he could upload the binaries for secure redistribution from PyPI rather than needing to host them himself. We'd probably want something like this for an effective build farm system anyway, this way it could work regardless of whether it was a human or an automated system converting the released sdists to platform specific binaries. Cheers, Nick. 
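[Editor's note: independent of who is trusted to upload, the MITM risk mentioned above is conventionally reduced by publishing a file digest over the trusted (HTTPS) channel and checking it after download. A minimal sketch of that check, assuming SHA-256 digests; the helper names are invented for illustration and are not part of any PyPI or pip API.]

```python
import hashlib

def sha256_of(path):
    """Stream a downloaded file through SHA-256 in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    # Compare against the digest published over the trusted channel
    # (e.g. the HTTPS project page); reject the file on mismatch.
    return sha256_of(path) == expected_hex
```

This protects the download hop only; it says nothing about whether the builder was trustworthy in the first place, which is the separate problem the "repackagers" idea addresses.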
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From agroszer.ll at gmail.com Wed Mar 20 17:03:31 2013 From: agroszer.ll at gmail.com (Adam GROSZER) Date: Wed, 20 Mar 2013 17:03:31 +0100 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <5149DDD3.1020604@gmail.com> On 03/20/2013 04:42 PM, Nick Coghlan wrote: > On Wed, Mar 20, 2013 at 3:13 AM, Paul Moore wrote: >> On 19 March 2013 16:21, Steve Dower wrote: >>> As I understand, the issue is the same as between different versions of Python and comes down to not being able to assume a compiler on Windows machines. It's easy to make a source file that will compile for any ABI and platform, but distributing binaries requires each one to be built separately. This doesn't have to be an onerous task - it can be scripted quite easily once you have all the required compilers - but it does take more effort than simply sharing a source file. >> >> Another nice tool would be some sort of Windows build farm, where >> projects could submit a sdist and it would build wheels for a list of >> supported Python versions and architectures. That wouldn't work for >> projects with complex dependencies, obviously, but could cover a >> reasonable-sized chunk of PyPI (especially if dependencies could be >> added to the farm on request). >> >> And can I have a pony as well, of course... :-) > > This also came up in the discussion over on > http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/ > > I was pointed to an interesting resource: > http://www.lfd.uci.edu/~gohlke/pythonlibs/ > > (The security issues with that arrangement are non-trivial, but the > convenience factor is huge) > > Cheers, > Nick. 
> Well a few other links: http://winbot.zope.org https://github.com/zopefoundation/zope.wineggbuilder https://github.com/zopefoundation/zope.winbot I can tell you getting such a beast to work takes quite some time. -- Best regards, Adam GROSZER -- Quote of the day: Each time you are honest and conduct yourself with honesty, a success force will drive you toward greater success. Each time you lie, even with a little white lie, there are strong forces pushing you toward failure. (Joseph Sugarman) From erik.m.bray at gmail.com Wed Mar 20 17:42:55 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 20 Mar 2013 12:42:55 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 8:54 PM, PJ Eby wrote: > Jason Coombs (head of the Distribute project) and I are working on > merging the bulk of the improvements distribute made into the > setuptools code base. He has volunteered to take over maintenance of > setuptools, and I welcome his assistance. I appreciate the > contributions made by the distribute maintainers over the years, and > am glad to have Jason's help in getting those contributions into > setuptools as well. Continuing to keep the code bases separate isn't > helping anybody, and as setuptools moves once again into active > development to deal with the upcoming shifts in the Python-wide > packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it > makes sense to combine efforts. > > Aside from the problems experienced by people with one package that > are fixed in the other, the biggest difficulties with the fork right > now are faced by the maintainers of setuptools-driven projects like > pip, virtualenv, and buildout, who have to either take sides in a > conflict, or spend additional time and effort testing and integrating > with both setuptools and distribute. 
We'd like to end that pain and > simplify matters for end users by bringing distribute enhancements to > setuptools and phasing out the distribute fork as soon as is > practical. > > In the short term, our goal is to consolidate the projects to prevent > duplication, wasted effort, and incompatibility, so that we can start > moving forward. This merge will allow us to combine resources and > teams, so that we may focus on a stable but actively-maintained > toolset. In the longer term, the goal is for setuptools as a concept > to become obsolete. For the first time, the Python packaging world > has gotten to a point where there are PEPs *and implementations* for > key parts of the packaging infrastructure that offer the potential to > get rid of setuptools entirely. (Vinay Sajip's work on distlib, > Daniel Holth's work on the "wheel" format, and Nick Coghlan's taking > up the reins of the packaging PEPs and providing a clear vision for a > new way of doing things -- these are just a few of the developments in > recent play.) > > "Obsolete", however, doesn't mean unmaintained or undeveloped. In > fact, for the "new way of doing things" to succeed, setuptools will > need a lot of new features -- some small, some large -- to provide a > migration path. > > At the moment, the merge is not yet complete. We are working on a > common repository where the two projects' history has been spliced > together, and are cleaning up the branch heads to facilitate > re-merging them. We'd hoped to have this done by PyCon, but there > have been a host of personal, health, and community issues consuming > much of our available work time. But we decided to go ahead and make > an announcement *now*, because with the big shifts taking place in the > packaging world, there are people who need to know about the upcoming > merge in order to make the best decisions about their own projects > (e.g. pip, buildout, etc.) and to better support their own users. 
> > Thank you once again to all the distribute contributors, for the many > fine improvements you've made to the setuptools package over the > years, and I hope that you'll continue to make them in the future. > (Especially as I begin to phase myself out of an active role in the > project!) > > I now want to turn the floor over to Jason, who's put together a > Roadmap/FAQ for what's going to be happening with the project going > forward. We'll then both be here in the thread to address any > questions or concerns you might have. Quick question regarding open issues on Distribute (of which I have a handful assigned to me, and of which I intend to tackle a few others): Would it make sense to just hold off on those until the merge is completed? Also, is there anything I can do to help with the merge? How is that coming along? Erik From p.f.moore at gmail.com Wed Mar 20 17:45:13 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 16:45:13 +0000 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On 20 March 2013 16:31, Nick Coghlan wrote: > Then the pip developers, for example, could say "we trust Christoph to > make our Windows installers", and grant him repackager access so he > could upload the binaries for secure redistribution from PyPI rather > than needing to host them himself. Another axis of the same idea would be to allow people to upload "unofficial" binaries. The individual would not need to be confirmed as trusted by the project, but his uploads would *not* be visible by default on PyPI. Users would be able to "opt in" to builds by that individual, and if they did, those builds would be merged in with what's on PyPI.
That model is much closer to how Christoph is actually working at the moment - people can choose whether to trust him, but if they do they can get his builds and the upstream projects don't get involved. Paul From qwcode at gmail.com Wed Mar 20 18:39:51 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 20 Mar 2013 10:39:51 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: Nick: I'm not sure who owns it yet. If it is one of us, then it would need to be a group vote to use the pypa "brand name" like this. I'll try to get all the pypa people to come here and register their opinion. here's my personal thoughts: I understand the motivation to reuse our name, but probably less political to start a new nifty short name. "pypack" or something. "pack" as in a group of people, but also short for "packaging" In the spirit of the blog post, here's the 2 doc projects I'd like to see exist under this new ~"pypack" group account, and be linked to from the main python docs. 1) "Python Packaging User Guide": to replace the unmaintained Hitchhiker's guide, or just get permission to copy that in here and get it up to date and more complete. 2) "Python Packaging Dev Hub": a simpler name to replace "python-meta-packaging" give the ~10-15 people that are actively involved in the various packaging projects and PEPs admin/merge access to help maintain these docs. and then announce this on python-announce as real and supported indirectly by the PSF. people will flock IMO to follow it and contribute with pulls and issues Marcus On Wed, Mar 20, 2013 at 6:39 AM, Nick Coghlan wrote: > Hey pip/virtualenv folks, does one of you control the pypa placeholder > account on BitBucket? (it seems possible, given it was created shortly > after the Github account). 
> > I've been pondering the communicating-with-the-broader-community issue > (especially in relation to > http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/) and > I'm thinking that the PSF account is the wrong home on BitBucket for > the meta-packaging documentation repo. The PSF has traditionally been > hands off relative to the actual development activities, and I don't > want to change that. > > Instead, I'd prefer to have a separate team account, and also talk to > Vinay about moving pylauncher and distlib under that account. > > I can create a different account if need be, but if one of you > controls pypa, then it would be good to use that and parallel the > pip/virtualenv team account on GitHub. If you don't already control > it, then I'll write to BitBucket support to see if the account is > actually being used for anything, and if not, if there's a way to > request control over it. Failing that, I'll settle for a > similar-but-different name, but "pypa" is definitely my preferred > option. > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Mar 20 18:43:14 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 13:43:14 -0400 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 1:39 PM, Marcus Smith wrote: > Nick: > > I'm not sure who owns it yet. > If it is one of us, then it would need to be a group vote to use the pypa > "brand name" like this. > I'll try to get all the pypa people to come here and register their opinion. 
> > here's my personal thoughts: > > I understand the motivation to reuse our name, but probably less political > to start a new nifty short name. > "pypack" or something. "pack" as in a group of people, but also short for > "packaging" > > In the spirit of the blog post, here's the 2 doc projects I'd like to see > exist under this new ~"pypack" group account, and be linked to from the main > python docs. > > 1) "Python Packaging User Guide": to replace the unmaintained Hitchhiker's > guide, or just get permission to copy that in here and get it up to date > and more complete. > 2) "Python Packaging Dev Hub": a simpler name to replace > "python-meta-packaging" > > give the ~10-15 people that are actively involved in the various packaging > projects and PEPs admin/merge access to help maintain these docs. > > and then announce this on python-announce as real and supported indirectly > by the PSF. > > people will flock IMO to follow it and contribute with pulls and issues > > Marcus I like the python packaging authority brand and think it would be great to put some renewed authority behind it. From kevin.horn at gmail.com Wed Mar 20 18:59:53 2013 From: kevin.horn at gmail.com (Kevin Horn) Date: Wed, 20 Mar 2013 12:59:53 -0500 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 12:39 PM, Marcus Smith wrote: > Nick: > > I'm not sure who owns it yet. > If it is one of us, then it would need to be a group vote to use the pypa > "brand name" like this. > I'll try to get all the pypa people to come here and register their > opinion. > > here's my personal thoughts: > > I understand the motivation to reuse our name, but probably less political > to start a new nifty short name. > "pypack" or something. "pack" as in a group of people, but also short for > "packaging" > > I like the "pypack" name. 
> In the spirit of the blog post, here's the 2 doc projects I'd like to see > exist under this new ~"pypack" group account, and be linked to from the > main python docs. > > 1) "Python Packaging User Guide": to replace the unmaintained > Hitchhiker's guide, or just get permission to copy that in here and get it > up to date and more complete. > 2) "Python Packaging Dev Hub": a simpler name to replace > "python-meta-packaging" > > give the ~10-15 people that are actively involved in the various packaging > projects and PEPs admin/merge access to help maintain these docs. > > and then announce this on python-announce as real and supported indirectly > by the PSF. > > people will flock IMO to follow it and contribute with pulls and issues > > Marcus > This sounds like a reasonable plan to me. There definitely need to be a user-centric bunch of docs being maintained someplace. -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Mar 20 19:01:49 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 11:01:49 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 10:39 AM, Marcus Smith wrote: > Nick: > > I'm not sure who owns it yet. I ran into Jannis before he left this morning, and he was fairly sure someone decided it would also be a good idea to register it on BitBucket after the GitHub group was set up. > If it is one of us, then it would need to be a group vote to use the pypa > "brand name" like this. > I'll try to get all the pypa people to come here and register their opinion. > > here's my personal thoughts: > > I understand the motivation to reuse our name, but probably less political > to start a new nifty short name. 
A big part of my role at this point is to take the heat for any potentially political or otherwise controversial issues (similar to the way Guido takes the heat for deciding what colour various bikesheds are going to be painted in the core language design - the "BDFL-Delegate" title was chosen advisedly). While we certainly won't do it if you're not amenable as a group, I'll be trying my best to persuade you that it's a good idea to turn your self-chosen name into official reality :) > "pypack" or something. "pack" as in a group of people, but also short for > "packaging" The reason I'd like permission to re-use the name is because I want to be crystal clear that pip *is* the official installer, and virtualenv is the official way to get venv support in versions prior to 3.3, and similar for distlib and pylauncher (of course, I also need to make sure Vinay is OK with that, since those projects currently live under his personal repo). I don't want to ask the pypa to change its name, and I absolutely *do not* want to have people asking whether or not pypa and some other group are the ones to listen to in terms of how to do software distribution "the Python way". I want to have one group that the core Python docs can reference and say "if you need to distribute Python software with and for older Python versions, here's where to go for the latest and greatest tools and advice". If we have two distinct names on GitHub and PyPI, it becomes that little bit harder to convey that pylauncher, pip, virtualenv, distlib are backwards compatible versions of features of Python 3.4+ and officially endorsed by the core development team. > In the spirit of the blog post, here's the 2 doc projects I'd like to see > exist under this new ~"pypack" group account, and be linked to from the main > python docs. > > 1) "Python Packaging User Guide": to replace the unmaintained Hitchhiker's > guide, or just get permission to copy that in here and get it up to date > and more complete. 
> 2) "Python Packaging Dev Hub": a simpler name to replace > "python-meta-packaging" > > give the ~10-15 people that are actively involved in the various packaging > projects and PEPs admin/merge access to help maintain these docs. Yes, that sounds like a good structure. > and then announce this on python-announce as real and supported indirectly > by the PSF. It's not PSF backing that matters, it's the python-dev backing to add links from the 2.7 and 3.3 versions of the docs on python.org to the user guide on the new site (and probably from the CPython dev guide to the packaging developer hub). That's a fair bit easier for me to sell if it's one group rather than two. > people will flock IMO to follow it and contribute with pulls and issues Yes, a large part of my goal here is similar to that of the PSF board when Brett Cannon was funded for a couple of months to write the initial version of the CPython developer guide. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Wed Mar 20 19:06:20 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 14:06:20 -0400 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 2:01 PM, Nick Coghlan wrote: > On Wed, Mar 20, 2013 at 10:39 AM, Marcus Smith wrote: >> Nick: >> >> I'm not sure who owns it yet. > > I ran into Jannis before he left this morning, and he was fairly sure > someone decided it would also be a good idea to register it on > BitBucket after the GitHub group was set up. > >> If it is one of us, then it would need to be a group vote to use the pypa >> "brand name" like this. >> I'll try to get all the pypa people to come here and register their opinion. >> >> here's my personal thoughts: >> >> I understand the motivation to reuse our name, but probably less political >> to start a new nifty short name. 
> > A big part of my role at this point is to take the heat for any > potentially political or otherwise controversial issues (similar to > the way Guido takes the heat for deciding what colour various > bikesheds are going to be painted in the core language design - the > "BDFL-Delegate" title was chosen advisedly). > > While we certainly won't do it if you're not amenable as a group, I'll > be trying my best to persuade you that it's a good idea to turn your > self-chosen name into official reality :) > >> "pypack" or something. "pack" as in a group of people, but also short for >> "packaging" > > The reason I'd like permission to re-use the name is because I want to > be crystal clear that pip *is* the official installer, and virtualenv > is the official way to get venv support in versions prior to 3.3, and > similar for distlib and pylauncher (of course, I also need to make > sure Vinay is OK with that, since those projects currently live under > his personal repo). > > I don't want to ask the pypa to change its name, and I absolutely *do > not* want to have people asking whether or not pypa and some other > group are the ones to listen to in terms of how to do software > distribution "the Python way". I want to have one group that the core > Python docs can reference and say "if you need to distribute Python > software with and for older Python versions, here's where to go for > the latest and greatest tools and advice". If we have two distinct > names on GitHub and PyPI, it becomes that little bit harder to convey > that pylauncher, pip, virtualenv, distlib are backwards compatible > versions of features of Python 3.4+ and officially endorsed by the > core development team. > >> In the spirit of the blog post, here's the 2 doc projects I'd like to see >> exist under this new ~"pypack" group account, and be linked to from the main >> python docs. 
>> >> 1) "Python Packaging User Guide": to replace the unmaintained Hitchhiker's >> guide, or just get permission to copy that in here and get it up to date >> and more complete. >> 2) "Python Packaging Dev Hub": a simpler name to replace >> "python-meta-packaging" >> >> give the ~10-15 people that are actively involved in the various packaging >> projects and PEPs admin/merge access to help maintain these docs. > > Yes, that sounds like a good structure. > >> and then announce this on python-announce as real and supported indirectly >> by the PSF. > > It's not PSF backing that matters, it's the python-dev backing to add > links from the 2.7 and 3.3 versions of the docs on python.org to the > user guide on the new site (and probably from the CPython dev guide to > the packaging developer hub). That's a fair bit easier for me to sell > if it's one group rather than two. > >> people will flock IMO to follow it and contribute with pulls and issues > > Yes, a large part of my goal here is similar to that of the PSF board > when Brett Cannon was funded for a couple of months to write the > initial version of the CPython developer guide. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig And we really need to double down on this kind of pseudo-totalitarian propaganda: http://s3.pixane.com/lenin_packaging.png (only now with more setuptools!) 
From donald at stufft.io Wed Mar 20 19:29:32 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 20 Mar 2013 14:29:32 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <71EBAF24-4F06-4DD5-A5B2-2FB7CC8816E5@stufft.io> On Mar 20, 2013, at 12:45 PM, Paul Moore wrote: > On 20 March 2013 16:31, Nick Coghlan wrote: >> Then the pip developers, for example, could say "we trust Christoph to >> make our Windows installers", and grant him repackager access so he >> could upload the binaries for secure redistribution from PyPI rather >> than needing to host them himself. > > Another axis of the same idea would be to allow people to upload > "unofficial" binaries. The individual would not need to be confirmed > as trusted by the project, but his uploads would *not* be visible by > default on PyPI. Users would be able to "opt in" to builds by that > individual, and if they did, those builds would be merged in with > what's on PyPI. > > That model is much closer to how Christoph is actually working at the > moment - people can choose whether to trust him, but if they do they > can get his builds and the upstream projects don't get involved. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Why can't unofficial binaries just use a separate index? e.g. Christoph can just make an index with his binaries. This solution also works well if someone wants to maintain a curated PyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Wed Mar 20 19:30:53 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 20 Mar 2013 14:30:53 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <050BF00F-5D9C-45D1-831D-5EAD8F792B49@stufft.io> On Mar 20, 2013, at 12:31 PM, Nick Coghlan wrote: > On Wed, Mar 20, 2013 at 9:03 AM, Steve Dower wrote: >>> From: Nick Coghlan [mailto:ncoghlan at gmail.com] >>> [snip] >>> >>> I was pointed to an interesting resource: >>> http://www.lfd.uci.edu/~gohlke/pythonlibs/ >>> >>> (The security issues with that arrangement are non-trivial, but the >>> convenience factor is huge) >> >> FWIW, one of the guys on our team has met with Christoph and considers him trustworthy. > > Thanks, that's great to know, and ties into an idea that I just had. > In addition to whether or not the build is trusted, there's also the > risk of MITM attacks against the download site (less so when automated > installers aren't involved, but still a risk). We just switched PyPI > over to HTTPS for that very reason. > > The idle thought I had was that it may be useful if PyPI users could > designate other users as "repackagers" for their project, and PyPI > offered an interface that was *just* file uploads for an existing > release. I *think* if done properly a TUF-secured API can be set up so that the role for signing certain files can be delegated, but I'm not sure.
> > Then the pip developers, for example, could say "we trust Christoph to > make our Windows installers", and grant him repackager access so he > could upload the binaries for secure redistribution from PyPI rather > than needing to host them himself. > > We'd probably want something like this for an effective build farm > system anyway, this way it could work regardless of whether it was a > human or an automated system converting the released sdists to > platform specific binaries. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tseaver at palladion.com Wed Mar 20 19:33:33 2013 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 20 Mar 2013 14:33:33 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/20/2013 06:13 AM, Paul Moore wrote: > Another nice tool would be some sort of Windows build farm, where > projects could submit a sdist and it would build wheels for a list of > supported Python versions and architectures. That wouldn't work for > projects with complex dependencies, obviously, but could cover a > reasonable-sized chunk of PyPI (especially if dependencies could be > added to the farm on request). 
The Zope Foundation pays hosting charges for a box which runs Windows tests for ZF projects, and also builds and uploads Windows binaries (eggs and MSIs) for them when they are released. http://winbot.zope.org/ As an example, look at any recent zope.interface release, e.g.: https://pypi.python.org/pypi/zope.interface/4.0.5#downloads Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFKAP0ACgkQ+gerLs4ltQ6UZwCgkxfOtrArGn/F5dKPk6+QepWV 7jYAniBreYijRKhevNS6rDUteePNzfZW =m0LP -----END PGP SIGNATURE----- From p.f.moore at gmail.com Wed Mar 20 19:59:09 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 18:59:09 +0000 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On 20 March 2013 18:01, Nick Coghlan wrote: >> If it is one of us, then it would need to be a group vote to use the pypa >> "brand name" like this. >> I'll try to get all the pypa people to come here and register their opinion. >> >> here's my personal thoughts: >> >> I understand the motivation to reuse our name, but probably less political >> to start a new nifty short name. > > A big part of my role at this point is to take the heat for any > potentially political or otherwise controversial issues (similar to > the way Guido takes the heat for deciding what colour various > bikesheds are going to be painted in the core language design - the > "BDFL-Delegate" title was chosen advisedly). 
> > While we certainly won't do it if you're not amenable as a group, I'll > be trying my best to persuade you that it's a good idea to turn your > self-chosen name into official reality :) I don't have a problem with the extension of the pypa "brand name" to cover this, and I'm all in favour of pip and virtualenv being sanctioned as the "official" answers in this space. I'd be a little cautious over some of the administrative aspects of such a move, though - consider if there's a sudden rush of people who want to contribute to packaging documents - do we want them to have commit rights on pip? Do we have different people as committers on the github and bitbucket repos? Not insurmountable issues, but worth considering. Paul. From p.f.moore at gmail.com Wed Mar 20 20:10:55 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 20 Mar 2013 19:10:55 +0000 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <71EBAF24-4F06-4DD5-A5B2-2FB7CC8816E5@stufft.io> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> <71EBAF24-4F06-4DD5-A5B2-2FB7CC8816E5@stufft.io> Message-ID: On 20 March 2013 18:29, Donald Stufft wrote: > Why can't unofficial binaries just use a separate index? e.g. Christoph can just make an index with his binaries. > > This solution also works well if someone wants to maintain a curated PyPI. The only real issue I know of is hosting. I've thought about doing this myself, but don't have (free) hosting space I could use, and I don't really feel like paying for and setting something up on spec. I could host the files somewhere like bitbucket, but that feels like an abuse for any substantial number of packages.
I presume Christoph doesn't publish his binaries as an index because wininst installers are typically downloaded and installed manually. Although AIUI, easy_install could use them if they were in index format. But you're right, people *can* do that. Paul. From pje at telecommunity.com Wed Mar 20 21:17:54 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 20 Mar 2013 16:17:54 -0400 Subject: [Distutils] Setuptools-Distribute merge announcement In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 12:42 PM, Erik Bray wrote: > Quick question regarding open issues on Distribute (of which I have a > handful assigned to me, and of which I intend to tackle a few others): > Would it make sense to just hold off on those until the merge is > completed? I'd personally say no, go ahead and do the work now, except that it might be making more work for Jason later at the repository-munging level. ;-) So, hopefully he'll chime in here with a yea or nay. > Also is there anything I can do to help with the merge? > How is that coming along? It's... somewhat of a mess, actually. As Jason mentioned, distribute didn't import setuptools' version history at the start, so it's being a bit of a challenge to merge in a way that maintains history. My original suggestion for merging was to just cherrypick patches and apply them to setuptools (w/appropriate credits), because apart from the added tests and new features, there's at most about 5% difference between setuptools and distribute by line count. (And the added tests and features are mostly in separate files, so can be added without worrying about conflicts. And a lot of the remaining added stuff is being taken out, anyway, because it's the stuff that distribute uses to pretend it's setuptools.) Some challenges that have arisen since are that the more changes Jason makes to the distribute branch in our merged repo, the less an "hg annot" is actually going to show the real authors of stuff anyway when we get done.
(For example, putting back in the missing entry_points.txt whose absence has been causing problems w/distribute lately.) And we're getting huge and (mostly meaningless) conflicts during attempted merges, too. So, if you have any thoughts on what can be done to fix that, by all means, suggest away. ;-) From dholth at gmail.com Wed Mar 20 21:25:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 16:25:47 -0400 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <9877bd4e6e274ec790944804a69e4dde@BLUPR03MB035.namprd03.prod.outlook.com> <71EBAF24-4F06-4DD5-A5B2-2FB7CC8816E5@stufft.io> Message-ID: On Wed, Mar 20, 2013 at 3:10 PM, Paul Moore wrote: > On 20 March 2013 18:29, Donald Stufft wrote: >> Why can't unofficial binaries just use a separate index? e.g. Christoph can just make an index with his binaries. >> >> This solution also works well if someone wants to maintain a curated PyPI. > > The only real issue I know of is hosting. I've thought about doing > this myself, but don't have (free) hosting space I could use, and I > don't really feel like paying for and setting something up on spec. I > could host the files somewhere like bitbucket, but that feels like an > abuse for any substantial number of packages. > > I presume Christoph doesn't publish his binaries as an index because > wininst installers are typically downloaded and installed > manually.Although AIUI, easy_install could use them if they were in > index format. > > But you're right, people *can* do that. > Paul. If we know who to ask we can get hosting (not my area of expertise). 
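The "separate index" Donald, Paul, and Daniel are discussing is just the "simple" link-page layout that easy_install (and pip) already crawl: one directory per project, each containing an HTML page of links to the distribution files. A minimal, hypothetical sketch of generating such a static index (the project and file names are made up for illustration):

```python
import os

def write_simple_index(root, projects):
    """Write a static "simple"-style index: one directory per project,
    each holding an index.html that links to the distribution files.
    `projects` maps a project name to a list of distribution file names."""
    for name, files in projects.items():
        proj_dir = os.path.join(root, name)
        if not os.path.isdir(proj_dir):
            os.makedirs(proj_dir)
        # Each anchor tag is one downloadable distribution file.
        links = "\n".join('<a href="%s">%s</a><br/>' % (f, f) for f in files)
        with open(os.path.join(proj_dir, "index.html"), "w") as fh:
            fh.write("<html><body>\n%s\n</body></html>\n" % links)

# Hypothetical example: one project offering a single Windows installer.
write_simple_index("simple", {
    "somepackage": ["somepackage-1.0.win32-py2.7.exe"],
})
```

Once hosted (or even served from a local directory), an installer can be pointed at it with something like `pip install --index-url <index-url> somepackage`, or alongside PyPI via `--extra-index-url`, so users opt in explicitly while everyone else is unaffected.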
From pje at telecommunity.com Wed Mar 20 22:22:44 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 20 Mar 2013 17:22:44 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 8:29 AM, Nick Coghlan wrote: > I'm not wedded to using *actual* pth files as a cross-platform linking > solution - a more limited format that only supported path additions, > without the extra powers of pth files would be fine. The key point is > to use the .dist-info directories to bridge between "unversioned > installs in site packages" and "finding parallel versions at runtime > without side effects on all Python applications executed on that > system" (which is the problem with using a pth file in site packages > to bootstrap the parallel versioning system as easy_install does). So why not just make a new '.pth-info' file or directory dropped into a sys.path directory for this purpose? Reusing .dist-info as an available package (vs. an *importable* package) looks like a bad idea from a compatibility point of view. (For example, it's immediately incompatible with Distribute, which would interpret the redundant .dist-info as being importable from that directory.) > If a distribution has been installed in site-packages (or has an > appropriate *.pth file there), there won't be any *.pth file in the > .dist-info directory. Right, but if this were the protocol, you wouldn't tell what's *already on sys.path* without reading all those .dist-info directories to see if they *had* .pth files. You'd have to look for the ones that were missing a .pth file, in other words, in order to know which of those .dist-info's represented a package that was actually importable from that directory. > The *.pth file will only be present if the package has been installed *somewhere else*. ...which is precisely the thing that makes it incompatible with PEP 376 (and Distribute ATM). 
;-) > However, it occurs to me that we can do this differently, by > explicitly involving a separate directory that *isn't* on sys.path by > default, and use a path hook to indicate when it should be accessed. Why not just put a .pth-info file that points to the other location, or whatever? Then it's still discoverable, but you don't have to open it unless you intend to add it to sys.path (or an import hook or whatever). If it needs to list a bunch of different directories in it, or whatever, doesn't matter. The point is, using a file in the *same* sys.path directory saves a metric tonne of complexity in sys.path management. Plus, you get the available packages in a single directory read, and you can open whatever files you need in order to pick up additional information in the case of needing a non-default package. > Under this version of the proposal, PEP 376 would remain unchanged, > and would effectively become the "database of installed distributions > available on sys.path by default". That's what it is *now*. Or more precisely, it's a directory of packages that would be importable if a given directory is present on sys.path. It doesn't say anything about sys.path as a whole. > - rather than the contents being processed directly from sys.path, we > would add a "" entry to sys.path with a path hook > that maps to a custom module finder that handles the extra import > locations without the same issues as the current approach to modifying > sys.path in pkg_resources (which allows shadowing development versions > with installed versions by inserting at the front), or the opposite > problem that would be created by appending to the end (allowing > default versions to shadow explicitly requested versions) Note that you can do this without needing a separate sys.path entry. You can give alternate versions whatever precedence they *would* have had, by replacing the finder for the relevant directory. 
But it would be better if you could be clearer about what precedence you want these other packages to have, relative to the matching sys.path entries. You seem to be speaking in terms of a single site-packages and single versioned-packages directory, but applications and users can have more complicated paths than that. For example, how do PYTHONPATH directories factor into this? User-site packages? Application plugin directories? Will all of these need their own markers? That's why I think we should focus on *individual* directories (the way PEP 376 does), rather than trying to define an overall precedence system. While there are some challenges with easy_install.pth, the basic precedence concept it uses is sound: an encapsulated package discovered in a given directory takes precedence over unencapsulated packages in the same directory. The place where easy_install falls down is in the implementation: not only does it have to munge sys.path in order to insert those non-defaults, it also installs *everything* in an encapsulated form, making a huge sys.path. But you can take the same basic idea and apply it to an import hook; I just think that rather than having the extra directory, it's less coupling and complexity if we look at the level of directories rather than sys.path as a whole. This still lets a system installer put stuff wherever it wants; it just has to also write a .pth-info (or whatever you want to call it) file in site-packages, telling Python where to find it. It also lets plugin-oriented systems use the same approach, and PYTHONPATH, and user-site packages, etc., venvs, etc. all work in exactly the same way, without needing to reinvent wheels or share a single (and privileged) hook. > The versioned import hook would work just like normal sys.path based > import (i.e. maintaining a sequence of path entries, using > sys.modules, sys.path_hooks, sys.path_importer_cache), the only > difference is that the set of paths it checks would initially be > empty. 
Calls to the new API in distlib would modify the *versioned* > path, effectively inserting all those paths at the point in sys.path > where the "" marker is placed, rather than > appending them to the beginning or end. The API that updates the paths > handled by the versioned import hook would also take care of detecting > and complaining about incompatible version constraints. How does this interact with an application that uses both system-installed packages and a user-supplied plugin directory? (This also sounds like a recipe for new breakage and debug issues caused by putting the marker in the wrong place.) From pje at telecommunity.com Wed Mar 20 22:26:05 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 20 Mar 2013 17:26:05 -0400 Subject: [Distutils] Parallel installation of incompatible versions In-Reply-To: References: <1111917148.11618373.1363785886706.JavaMail.root@redhat.com> Message-ID: On Wed, Mar 20, 2013 at 10:35 AM, Nick Coghlan wrote: > Assuming we go this way, distros will presumably install system Python packages > into the versioned layout and then symlink them appropriately from the > "available by default" layout in site-packages. If they're going to do that, then why not put the versioned layout directly into site-packages in the first place? From carl at oddbird.net Wed Mar 20 23:05:01 2013 From: carl at oddbird.net (Carl Meyer) Date: Wed, 20 Mar 2013 15:05:01 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: <514A328D.4010502@oddbird.net> FWIW I think if pip and virtualenv are being elevated to a new level of "official", I have no problem with the "pypa" name being used as the umbrella for the next few years' "improve python packaging" efforts. I know I've talked to some people who don't follow packaging closely who thought this was already the case and were surprised to learn that e.g. distribute was not "part of the PyPA." 
Python packaging already suffers from a "too many similar but slightly different names" problem; let's consolidate rather than exacerbate this problem. I just checked and my Bitbucket account does not have admin control over bitbucket.org/pypa - must be Jannis? Regarding other administrative issues: On 03/20/2013 11:59 AM, Paul Moore wrote: > I don't have a problem with the extension of the pypa "brand name" to > cover this, and I'm all in favour of pip and virtualenv being > sanctioned as the "official" answers in this space, I'd be a little > cautious over some of the administrative aspects of such a move, > though - consider if there's a sudden rush of people who want to > contribute to packaging documents - do we want them to have commit > rights on pip? Do we have different people committers on the github > and bitbucket repos? Not insurmountable issues, but worth considering. We already have multiple "teams" on the github PyPA to allow for different committers on pip vs virtualenv. AFAIK bitbucket also supports per-repo access control. So I don't see any reason this should be a problem: using the name "PyPA" as an umbrella does not imply that there must be a single list of people with equal access to all PyPA repositories. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From qwcode at gmail.com Wed Mar 20 23:22:40 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 20 Mar 2013 15:22:40 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: <514A328D.4010502@oddbird.net> References: <514A328D.4010502@oddbird.net> Message-ID: so, counting the beans... : ) we have 8 "active" pypa people in my count. I think 5 yea votes would make it official I see 3 yea votes so far. I'm willing to change my vote "for the good of the whole" if needed, but I'm still curious to hear how non-pypa feel about this. 
Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Mar 20 23:24:57 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 20 Mar 2013 18:24:57 -0400 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: <514A328D.4010502@oddbird.net> Message-ID: On Mar 20, 2013, at 6:22 PM, Marcus Smith wrote: > so, counting the beans... : ) > we have 8 "active" pypa people in my count. > I think 5 yea votes would make it official > I see 3 yea votes so far. > I'm willing to change my vote "for the good of the whole" if needed, but I'm still curious to hear how non-pypa feel about this. > Marcus > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig +0 ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Mar 20 23:38:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 15:38:53 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: <514A328D.4010502@oddbird.net> References: <514A328D.4010502@oddbird.net> Message-ID: On Wed, Mar 20, 2013 at 3:05 PM, Carl Meyer wrote: > We already have multiple "teams" on the github PyPA to allow for > different committers on pip vs virtualenv. AFAIK bitbucket also supports > per-repo access control. So I don't see any reason this should be a > problem: using the name "PyPA" as an umbrella does not imply that there > must be a single list of people with equal access to all PyPA repositories. 
Indeed, we use this on the PSF BitBucket repos - you can define groups to make it easy to give the same set of people access to multiple repos, but there's no requirement that the access controls to a team's repos all be the same. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From aclark at aclark.net Thu Mar 21 01:22:19 2013 From: aclark at aclark.net (Alex Clark) Date: Wed, 20 Mar 2013 20:22:19 -0400 Subject: [Distutils] The pypa account on BitBucket References: <514A328D.4010502@oddbird.net> Message-ID: On 2013-03-20 22:22:40 +0000, Marcus Smith said: > so, counting the beans... : ) > we have 8 "active" pypa people in my count. > I think 5 yea votes would make it official > I see 3 yea votes so far. > I'm willing to change my vote "for the good of the whole" if needed, > but I'm still curious to hear how non-pypa feel about this. It's shorter than "The Fellowship of the Packaging" (And FOTP is not as attractive an acronym) :-). IIUC, Nick plans to do some "official pimping" of pip and venv and wants to use the PyPA brand/organization to do it? I would say +0 in general, and +1 to using PyPA instead of a new name. Seems like a good fit. Alex > Marcus > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Alex Clark · http://about.me/alex.clark From jannis at leidel.info Thu Mar 21 22:31:23 2013 From: jannis at leidel.info (Jannis Leidel) Date: Thu, 21 Mar 2013 14:31:23 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 11:01 AM, Nick Coghlan wrote: > On Wed, Mar 20, 2013 at 10:39 AM, Marcus Smith wrote: > > Nick: > > > > I'm not sure who owns it yet. > > I ran into Jannis before he left this morning, and he was fairly sure > someone decided it would also be a good idea to register it on > BitBucket after the GitHub group was set up.
Yep, and my memory was correct this time, I did indeed register it at the time. I've given the current PyPA team access. Let me know who else needs access. > > If it is one of us, then it would need to be a group vote to use the pypa > > "brand name" like this. > > I'll try to get all the pypa people to come here and register their > opinion. > > > > here's my personal thoughts: > > > > I understand the motivation to reuse our name, but probably less > political > > to start a new nifty short name. > > A big part of my role at this point is to take the heat for any > potentially political or otherwise controversial issues (similar to > the way Guido takes the heat for deciding what colour various > bikesheds are going to be painted in the core language design - the > "BDFL-Delegate" title was chosen advisedly). > > While we certainly won't do it if you're not amenable as a group, I'll > be trying my best to persuade you that it's a good idea to turn your > self-chosen name into official reality :) > > > "pypack" or something. "pack" as in a group of people, but also short for > > "packaging" > > The reason I'd like permission to re-use the name is because I want to > be crystal clear that pip *is* the official installer, and virtualenv > is the official way to get venv support in versions prior to 3.3, and > similar for distlib and pylauncher (of course, I also need to make > sure Vinay is OK with that, since those projects currently live under > his personal repo). > > I don't want to ask the pypa to change its name, and I absolutely *do > not* want to have people asking whether or not pypa and some other > group are the ones to listen to in terms of how to do software > distribution "the Python way". I want to have one group that the core > Python docs can reference and say "if you need to distribute Python > software with and for older Python versions, here's where to go for > the latest and greatest tools and advice". 
If we have two distinct > names on GitHub and PyPI, it becomes that little bit harder to convey > that pylauncher, pip, virtualenv, distlib are backwards compatible > versions of features of Python 3.4+ and officially endorsed by the > core development team. > > > In the spirit of the blog post, here's the 2 doc projects I'd like to > see > > exist under this new ~"pypack" group account, and be linked to from the > main > > python docs. > > > > 1) "Python Packaging User Guide": to replace the unmaintained > Hitchhiker's > > guide, or just get permission to copy that in here and get it up to date > > and more complete. > > 2) "Python Packaging Dev Hub": a simpler name to replace > > "python-meta-packaging" > > > > give the ~10-15 people that are actively involved in the various > packaging > > projects and PEPs admin/merge access to help maintain these docs. > > Yes, that sounds like a good structure. > > > and then announce this on python-announce as real and supported > indirectly > > by the PSF. > > It's not PSF backing that matters, it's the python-dev backing to add > links from the 2.7 and 3.3 versions of the docs on python.org to the > user guide on the new site (and probably from the CPython dev guide to > the packaging developer hub). That's a fair bit easier for me to sell > if it's one group rather than two. > > > people will flock IMO to follow it and contribute with pulls and issues > > Yes, a large part of my goal here is similar to that of the PSF board > when Brett Cannon was funded for a couple of months to write the > initial version of the CPython developer guide. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Thu Mar 21 23:13:57 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 21 Mar 2013 15:13:57 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: as it is now, I can create qwcode repos and give access to the Pypa group, but not seeing how to create repos managed by pypa and have them show under that account. I'm less familiar with bitbucket teams? anyone? create them and then transfer to Pypa maybe? or I just don't have the permissions to create? Marcus P.S. closing the loop on bean counting. +1: daniel, paulm, carl, jannis, marcus +0: donald -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnasrat at gmail.com Thu Mar 21 23:17:08 2013 From: pnasrat at gmail.com (Paul Nasrat) Date: Thu, 21 Mar 2013 18:17:08 -0400 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: I'm in favour of using the pypa brand. Paul On 21 March 2013 18:13, Marcus Smith wrote: > as it is now, I can create qwcode repos and give access to the Pypa > group, but not seeing how to create repos managed by pypa and have them > show under that account. > I'm less familiar with bitbucket teams? anyone? create them and then > transfer to Pypa maybe? > or I just don't have the permissions to create? > > Marcus > > P.S. closing the loop on bean counting. > +1: daniel, paulm, carl, jannis, marcus > +0: donald > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis.coldwell at gmail.com Thu Mar 21 23:20:20 2013 From: dennis.coldwell at gmail.com (Dennis Coldwell) Date: Thu, 21 Mar 2013 15:20:20 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: +1 for pypa as well.
On Thu, Mar 21, 2013 at 3:17 PM, Paul Nasrat wrote: > I'm in favour of using the pypa brand. > > Paul > > > On 21 March 2013 18:13, Marcus Smith wrote: > >> as it as now, I can create qwcode repos and give access to the Pypa >> group, but not seeing how to create repos managed by pypa and have them >> show under that account. >> I'm less familiar with bitbucket teams? anyone? create them and then >> transfer to Pypa maybe? >> or I just don't have the permissions to create? >> >> Marcus >> >> P.S. closing the loop on bean counting. >> +1: daniel, paulm, carl, jannis, marcus >> +0: donald >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig >> >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Fri Mar 22 00:55:09 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 21 Mar 2013 16:55:09 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: and pypa needs a cool logo now... On Thu, Mar 21, 2013 at 3:17 PM, Paul Nasrat wrote: > I'm in favour of using the pypa brand. > > Paul > > > On 21 March 2013 18:13, Marcus Smith wrote: > >> as it as now, I can create qwcode repos and give access to the Pypa >> group, but not seeing how to create repos managed by pypa and have them >> show under that account. >> I'm less familiar with bitbucket teams? anyone? create them and then >> transfer to Pypa maybe? >> or I just don't have the permissions to create? >> >> Marcus >> >> P.S. closing the loop on bean counting. 
>> +1: daniel, paulm, carl, jannis, marcus >> +0: donald >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Mar 23 02:27:02 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 22 Mar 2013 18:27:02 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: On Thu, Mar 21, 2013 at 4:55 PM, Marcus Smith wrote: > and pypa needs a cool logo now... If we were happy with something simple, a wheel of cheese bearing the Python logo would be entirely appropriate :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From eu at doxos.eu Sat Mar 23 08:40:29 2013 From: eu at doxos.eu (Václav Šmilauer) Date: Sat, 23 Mar 2013 08:40:29 +0100 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <514D5C6D.5090201@doxos.eu> > I was pointed to an interesting resource: > http://www.lfd.uci.edu/~gohlke/pythonlibs/ (The security issues with > that arrangement are non-trivial, but the convenience factor is huge) That webpage saved me a lot of headache with packages I was not able to build under Windows (with mingw64). I asked Christoph whether he could share his build scripts or somesuch, but got no reply from him (he might just be too busy) - that build infrastructure would make it much easier to experiment with separate repositories, for personal/company/community needs, and perhaps evolve into some global repository of binaries for windows, with better security. 
Cheers, Vaclav From regebro at gmail.com Sat Mar 23 09:48:46 2013 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 23 Mar 2013 09:48:46 +0100 Subject: [Distutils] distribute does not install with python3 on Ubuntu machine In-Reply-To: <20130319062551.GA25562@fridge.pov.lt> References: <20130319062551.GA25562@fridge.pov.lt> Message-ID: On Tue, Mar 19, 2013 at 7:25 AM, Marius Gedminas wrote: > On Mon, Mar 18, 2013 at 02:14:12PM -0700, Lennart Regebro wrote: >> On Fri, Mar 15, 2013 at 2:46 PM, Giulio Genovese >> wrote: >> > sudo pip-3.2 install --upgrade distribute >> > I get this: >> > "File "setuptools/dist.py", line 103 >> > except ValueError, e: >> >> You can't upgrade distribute with pip under Python 3. This is a known problem. >> >> https://github.com/pypa/pip/issues/650 >> >> > I think the problem is that someone recently forgot to put the >> > parentheses (i.e. "except ValueError, e:" should be "except (ValueError, >> > e):") and therefore this does not work anymore with python3. >> >> No, the syntax under Python 3 is except ValueError as e, but that >> doesn't work with Python 2.4 and Python 2.5 and we are still >> supporting them. >> >> > It should be easy to fix. >> >> It isn't. The solution is to not try to upgrade distribute with pip >> under Python 3. Uninstall it and install it again instead. > > Wouldn't merging Vinay's distribute3 branch fix this? I don't think we should stop using 2to3 until we drop support for 2.5, which we haven't done yet. The codebase is very brittle, so conservatism is appropriate. I think the new setuptools + distribute branch could be the right place to do this, but that's up to the setuptools maintainers. 
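The syntax clash Lennart describes is the crux: neither spelling of the except clause parses on both Python 2.4/2.5 and 3.x. A minimal sketch of the workaround idiom such single-codebase projects used (a hypothetical helper, not code from distribute):

```python
import sys

def parse_port(text):
    # "except ValueError, e:" is a syntax error on Python 3, and
    # "except ValueError as e:" is a syntax error on Python 2.4/2.5,
    # so code that must run unmodified on both fetches the current
    # exception object via sys.exc_info() instead of naming it in
    # the except clause.
    try:
        return int(text)
    except ValueError:
        e = sys.exc_info()[1]
        raise RuntimeError("invalid port %r (%s)" % (text, e))

print(parse_port("8080"))  # 8080
```

This is also why 2to3 stayed attractive for distribute: its `except` fixer rewrites the old comma form into the `as` form mechanically, at the cost of a brittle translation step.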
//Lennart From francois.chenais at gmail.com Sat Mar 23 09:51:48 2013 From: francois.chenais at gmail.com (Francois Chenais) Date: Sat, 23 Mar 2013 09:51:48 +0100 Subject: [Distutils] self.introduce(distutils-sig) In-Reply-To: <514D5C6D.5090201@doxos.eu> References: <7e6c08c26720462e881ca295b41ea0a7@BLUPR03MB035.namprd03.prod.outlook.com> <56c7ff822d27466285b44d200d68b5d0@BLUPR03MB035.namprd03.prod.outlook.com> <514D5C6D.5090201@doxos.eu> Message-ID: <2679F7F5-B29E-4C72-9DE9-E37B6A723638@gmail.com> On 23 March 2013 at 08:40, Václav Šmilauer wrote: >> I was pointed to an interesting resource: http://www.lfd.uci.edu/~gohlke/pythonlibs/ (The security issues with that arrangement are non-trivial, but the convenience factor is huge) > That webpage saved me a lot of headache with packages I was not able to build under Windows (with mingw64). I contacted Christoph if he could share buildscripts or somesuch, but got no reply from him (might be just too busy) - that build infrastructure would make it much easier to experiment with separate repositories, for personal/company/community needs, and perhaps evolve into some global repository of binaries for windows, with better security. > Did you try www.pythonxy.com? > Cheers, Vaclav François > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From qwcode at gmail.com Sat Mar 23 18:51:49 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 23 Mar 2013 10:51:49 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: Everybody: Jannis sorted out the permissions. All the existing pypa people have admin access and Nick. (except paul moore, not sure what your account is on bitbucket?) 
I've created the 2 projects we talked about here: https://bitbucket.org/pypa https://bitbucket.org/pypa/python-packaging-developer-hub - has the initial content from the PSF meta-packaging project - RTD link: https://python-packaging-developer-hub.readthedocs.org/en/latest/ https://bitbucket.org/pypa/python-packaging-user-guide - just has an initial README to make pulls possible - need to decide if we're seeding from the hitchhiker's guide? Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at zope.com Sat Mar 23 22:24:58 2013 From: jim at zope.com (Jim Fulton) Date: Sat, 23 Mar 2013 17:24:58 -0400 Subject: [Distutils] zc.buildout 2.1.0 released Message-ID: The new release has 2 major features: - Conditional sections allow you to have buildout options that depend on the environment buildout runs in (OS, Python version, etc.) https://pypi.python.org/pypi/zc.buildout/2.1.0#conditional-sections - Meta-recipes allow you to scale the scope of buildouts sanely. https://pypi.python.org/pypi/zc.buildout/2.1.0#meta-recipe-support As well as a number of bug fixes. https://pypi.python.org/pypi/zc.buildout/2.1.0#id3 Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From holger at merlinux.eu Sun Mar 24 10:48:40 2013 From: holger at merlinux.eu (holger krekel) Date: Sun, 24 Mar 2013 09:48:40 +0000 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib Message-ID: <20130324094840.GX9677@merlinux.eu> Hi Richard, all, two first notes on PEP439. backward compat with present-day release files: the PEP should state it as a goal or at least discuss it in some depth. In that context, the choice of providing a bootstrap for pip rather than easy_install needs reasoning. One problem with pip, compared to easy_install, is that it doesn't support eggs which is a problem particularly on Windows machines where often no fitting C compiler is available. 
If the remedy here is to support wheels and recommend its use, it is still a backward compatibility problem: many users will not be able to use the builtin-supported installer to install today's existing egg release files. setuptools and distlib: Even if Python3.4+ had a mature distlib providing minimal setuptools functionality, how would it work for the typical "python setup.py install" which is invoked by pip? Often those setup.py scripts depend on a setuptools package. I am highlighting these two backward-compat aspects because otherwise we might run into this problem: http://xkcd.com/927/ and I understood that most people involved in improving the packaging ecology want to avoid that :) best, holger From dholth at gmail.com Sun Mar 24 14:49:19 2013 From: dholth at gmail.com (Daniel Holth) Date: Sun, 24 Mar 2013 09:49:19 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <20130324094840.GX9677@merlinux.eu> References: <20130324094840.GX9677@merlinux.eu> Message-ID: Did you know that the "wheel convert" command makes wheels from eggs? On Mar 24, 2013 5:48 AM, "holger krekel" wrote: > Hi Richard, all, > > two first notes on PEP439. > > backward compat with present-day release files: the PEP should state > it as a goal or at least discuss it in some depth. In that context, the > choice of providing a bootstrap for pip rather than easy_install needs > reasoning. One problem with pip, compared to easy_install, is > that it doesn't support eggs which is a problem particularly on > Windows machines where often no fitting C compiler is available. 
> > setuptools and distlib: Even if Python3.4+ had a mature distlib > providing minimal setuptools functionality, how would it work for the > typical "python setup.py install" which is invoked by pip? Often those > setup.py scripts depend on a setuptools package. > > I am highlighting these two backward-compat aspects because otherwise > we might run into this problem: http://xkcd.com/927/ and i understood > that most people involved in improving the packaging ecology want > to avoid that :) > > best, > holger > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Sun Mar 24 15:20:53 2013 From: holger at merlinux.eu (holger krekel) Date: Sun, 24 Mar 2013 14:20:53 +0000 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: <20130324142053.GY9677@merlinux.eu> On Sun, Mar 24, 2013 at 09:49 -0400, Daniel Holth wrote: > Did you know that the "wheel convert" command makes wheels from eggs? Wasn't aware, thanks for the pointer. That should certainly be part of the PEP439 discussion I asked for in the first paragraph. holger > On Mar 24, 2013 5:48 AM, "holger krekel" wrote: > > > Hi Richard, all, > > > > two first notes on PEP439. > > > > backward compat with present-day release files: the PEP should state > > it as a goal or at least discuss it in some depth. In that context, the > > choice of providing a bootstrap for pip rather than easy_install needs > > reasoning. One problem with pip, compared to easy_install, is > > that it doesn't support eggs which is a problem particularly on > > Windows machines where often no fitting C compiler is available. 
If the > > remedy here is to support wheels and recommend it's use, it is still a > > backward compatibility problem: many users will not be able to use the > > builtin-supported installer to install todays existing egg release files. > > > > setuptools and distlib: Even if Python3.4+ had a mature distlib > > providing minimal setuptools functionality, how would it work for the > > typical "python setup.py install" which is invoked by pip? Often those > > setup.py scripts depend on a setuptools package. > > > > I am highlighting these two backward-compat aspects because otherwise > > we might run into this problem: http://xkcd.com/927/ and i understood > > that most people involved in improving the packaging ecology want > > to avoid that :) > > > > best, > > holger > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > From ncoghlan at gmail.com Sun Mar 24 17:04:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 24 Mar 2013 09:04:19 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <20130324094840.GX9677@merlinux.eu> References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Sun, Mar 24, 2013 at 2:48 AM, holger krekel wrote: > Hi Richard, all, > > two first notes on PEP439. PEP 439 is just one small piece of a much larger puzzle, and the entire puzzle won't be explained in this PEP. I realise this makes it hard to evaluate in isolation, but the beta freeze for 3.4 is still several months away and all the other pieces will be in place well before then. Please be patient with us as we get all the pieces documented and published over the coming weeks. However, I do have some comments on your specific questions: - while easy_install has provided good service for many years, it is not a viable choice as the officially supported installer. 
Its default behaviour is completely broken from a system administrator's point of view, as using it to install things has side effects on every Python application run on that system, and the lack of uninstall support is not acceptable either. By contrast, pip has tried from the beginning to accommodate the interests of system administrators *as well as* developers, making it much easier to justify its being blessed as the official installer. The approach taken to this over the coming months will be to identify the reasons that people still use easy_install for some tasks and add support for them to pip. - eggs are too closely associated with easy_install to easily rehabilitate their image with system administrators and platform packagers, and also lack some of the necessary metadata to interoperate correctly with platform packaging tools. The new wheel format builds on a combination of eggs and the sysconfig installation path concept to create a format that can be more readily mapped to FHS compliant platform specific packages. Wheel also introduces the enhanced "compatibility tag" format for filenames, which covers more details of the Python and platform dependencies of the built distribution. - We could potentially provide server side support for automatically generating wheels from eggs uploaded to PyPI, but that would be a question for catalog-sig (since it is purely about PyPI's feature set and behaviour, and independent of the packaging and distribution standards themselves) - metadata 2.0 specifically includes the "Setup-Requires-Dist" field so that projects that require additional dependencies when building from source will be correctly supported by pip and other metadata 2.0 compliant installers. This will be supported transparently for users, so long as they update to metadata 2.0 compliant versions of their build tools (setuptools/distribute, distutils2, d2to1, hashdist, etc). 
Python 3.4 will also continue to provide distutils, so that metadata 1.0 and 1.1 packages generated with older versions of distutils will continue to work correctly. - once we can bootstrap pip, then bootstrapping easy_install if it is still needed for some edge cases will be as easy as installing anything else that is either pure Python or publishes an appropriate wheel for the platform: "pip install setuptools" Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Sun Mar 24 20:06:57 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 24 Mar 2013 12:06:57 -0700 Subject: [Distutils] The pypa account on BitBucket In-Reply-To: References: Message-ID: > > > If we were happy with something simple, a wheel of cheese bearing the > Python logo would be entirely appropriate :) > > I second that motion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Sun Mar 24 21:21:54 2013 From: holger at merlinux.eu (holger krekel) Date: Sun, 24 Mar 2013 20:21:54 +0000 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: <20130324202154.GA9677@merlinux.eu> On Sun, Mar 24, 2013 at 09:04 -0700, Nick Coghlan wrote: > On Sun, Mar 24, 2013 at 2:48 AM, holger krekel wrote: > > Hi Richard, all, > > > > two first notes on PEP439. > > PEP 439 is just one small piece of a much larger puzzle, and the > entire puzzle won't be explained in this PEP. I realise this makes it > hard to evaluate in isolation, but the beta freeze for 3.4 is still > several months away and all the other pieces will be in place well > before then. Please be patient with us as we get all the pieces > documented and published over the coming weeks. Looking forward to more details. It's indeed a bit hard to get a grasp on what's going on currently, even though I've just written the related PEP 438 myself. 
In any case, each PEP should make enough sense on its own or refer to other PEPs or documents for details. > However, I do have some comments on your specific questions: > > - while easy_install has provided good service for many years, it is > not a viable choice as the officially supported installer. It's > default behaviour is completely broken from a system administrator's > point of view, as using it to install things has side effects on every > Python application run on that system, and the lack of uninstall > support is not acceptable either. By contrast, pip has tried from the > beginning to accommodate the interests of system administrators *as > well as* developers, making it much easier to justify its being > blessed as the official installer. The approach taken to this over the > coming months will be to identify the reasons that people still use > easy_install for some tasks and add support for them to pip. I understand your high level motivation and views here. Can you point to a more technically detailed comparison of easy_install and pip? FWIW I've heard from many people that they have to use easy_install because of egg support and the many packages in that format out there. This is why I asked about a "backward compatibility" strategy specifically. > - eggs are too closely associated with easy_install to easily > rehabilitate their image with system administrators and platform > packagers, and also lack some of the necessary metadata to > interoperate correctly with platform packaging tools. The new wheel > format builds on a combination of eggs and the sysconfig installation > path concept to create a format that can be more readily mapped to FHS > compliant platform specific packages. Wheel also introduces the > enhanced "compatibility tag" format for filenames, which covers more > details of the Python and platform dependencies of the built > distribution. I also have the impression that wheels are a very good development. 
My last mail didn't question the merits of wheels over eggs, though. > - We could potentially provide server side support for automatically > generating wheels from eggs uploaded to PyPI, but that would be a > question for catalog-sig (since it is purely about PyPI's feature set > and behaviour, and independent of the packaging and distribution > standards themselves) Maybe. In any case, I see the issue of automated egg->wheel conversion as on topic for PEP 439. It should be part of a focus on an evolutionary strategy (rather than a pure replacement one) which helps to get as many people on board and benefiting from the PEP as possible. > - metadata 2.0 specifically includes the "Setup-Requires-Dist" field > so that projects that require additional dependencies when building > from source will be correctly supported by pip and other metadata 2.0 > compliant installers. This will be supported transparently for users, > so long as they update to metadata 2.0 compliant versions of their > build tools (setuptools/distribute, distutils2, d2to1, hashdist, etc). Sounds good. The present-day packages which will not be updated (soon or ever) may be a problem, though. 
To avoid the latter the PEP439 should include offering installer-bootstraps from python.org for older Python versions, particularly Python2. This way we could see the benefits of PEP439 and related developments much much earlier than if we all need to wait until everyone uses Python3.4 :) FWIW I am open to do a hangout over all this some time. best, holger > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > From reinout at vanrees.org Sun Mar 24 21:42:18 2013 From: reinout at vanrees.org (Reinout van Rees) Date: Sun, 24 Mar 2013 21:42:18 +0100 Subject: [Distutils] zc.buildout 2.1.0 released In-Reply-To: References: Message-ID: On 23-03-13 22:24, Jim Fulton wrote: > - Conditional sections allow you have buildout options that depend on > the environment buildout runs in (OS, Python version, etc.) > https://pypi.python.org/pypi/zc.buildout/2.1.0#conditional-sections > > - Meta-recipes allow you to scale the scope of buildouts sanely. > https://pypi.python.org/pypi/zc.buildout/2.1.0#meta-recipe-support Nice! I'll try to use the combination of conditional sections and meta-recipes in my work somewhere in the next weeks. I think that'll make our setup easier. Conditional sections could help a lot in "hard-to-install-on-OSX versus easy-on-ubuntu" scenarios, I guess, for instance. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "If you're not sure what to do, make something. 
-- Paul Graham" From richardjones at optushome.com.au Mon Mar 25 00:07:32 2013 From: richardjones at optushome.com.au (Richard Jones) Date: Mon, 25 Mar 2013 10:07:32 +1100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <20130324094840.GX9677@merlinux.eu> References: <20130324094840.GX9677@merlinux.eu> Message-ID: On 24 March 2013 20:48, holger krekel wrote: > backward compat with present-day release files: the PEP should state > it as a goal or at least discuss it in some depth. In that context, the > choice of providing a bootstrap for pip rather than easy_install needs > reasoning. One problem with pip, compared to easy_install, is > that it doesn't support eggs which is a problem particularly on > Windows machines where often no fitting C compiler is available. If the > remedy here is to support wheels and recommend it's use, it is still a > backward compatibility problem: many users will not be able to use the > builtin-supported installer to install todays existing egg release files. This is a valid concern. Obviously "pip install easy_install" is not a solution - especially since the general intention is to deprecate easy_install eventually (as explained in Nick's response). I did not discuss eggs with the pip developers while at PyCon which is quite unfortunate. I would appreciate any insights from those devs on the matter. It may be that wheel convert can solve this issue for some eggs. Unless it can be fully automated it's not going to solve it for all. > setuptools and distlib: Even if Python3.4+ had a mature distlib > providing minimal setuptools functionality, how would it work for the > typical "python setup.py install" which is invoked by pip? Often those > setup.py scripts depend on a setuptools package. This is not the bootstrap's problem (and hence not the PEP's) since the bootstrap exists *solely* to install the pip implementation. If that's not clear enough in the PEP then I can attempt to make it clearer. 
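As an aside on the "wheel convert" route mentioned above: eggs and wheels are both zip archives, and part of the conversion is a mechanical filename change to the wheel compatibility-tag scheme. A toy sketch of just that filename step (assuming a simple "name-version-pyX.Y.egg" filename; the real command also unpacks the archive, rewrites the metadata and generates a RECORD):

```python
def egg_to_wheel_name(egg_name):
    # Illustrative only: maps an egg filename such as
    # "demo-1.0-py2.7.egg" onto the wheel naming scheme
    # "name-version-pythontag-abitag-platformtag.whl".
    # Real egg names can be messier (dashes in the project name,
    # platform suffixes), which this toy version does not handle.
    stem = egg_name[:-len(".egg")]
    name, version, pyver = stem.split("-")
    tag = pyver.replace(".", "")          # "py2.7" -> "py27"
    return "%s-%s-%s-none-any.whl" % (name, version, tag)

print(egg_to_wheel_name("demo-1.0-py2.7.egg"))  # demo-1.0-py27-none-any.whl
```

The metadata rewriting is the part that cannot always be fully automated, which is the caveat Richard raises above.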
On 25 March 2013 03:04, Nick Coghlan wrote: > - once we can bootstrap pip, then bootstrapping easy_install if it > still needed for some edge cases will be as easy as installing > anything else that is either pure Python or publishes an appropriate > wheel for the platform: "pip install setuptools" I'm -0 on the idea of also including an easy_install bootstrap in the Python install, since I personally would prefer not to require users to have to deal with two install tools which behave slightly differently. On 25 March 2013 07:21, holger krekel wrote: > If you have to explain pip-bootstrapping, then > install setuptools, then the actual package, it's not much of an > improvement anymore. The point of this PEP is to remove the first "explain pip bootstrapping" step from this equation. I had thrown around the idea of the pip bootstrap installing both pip implementation *and* setuptools. At the time my justification was that pip depended on it. The pip devs have indicated that they could remove the setuptools dependency when distlib and wheel support are in the Python standard library. ISTM however that there is still quite a good justification for installing setuptools during the bootstrapping, for the reasons you state. Richard From ncoghlan at gmail.com Mon Mar 25 04:11:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 24 Mar 2013 20:11:30 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On 3/24/13, Richard Jones wrote: > This is a valid concern. Obviously "pip install easy_install" is not a > solution - especially since the general intention is to deprecate > easy_install eventually (as explained in Nick's response). I did not > discuss eggs with the pip developers while at PyCon which is quite > unfortunate. I would appreciate any insights from those devs on the > matter. Why is "pip install setuptools" not a solution? 
It's easier than getting setuptools installed is today. > It may be that wheel convert can solve this issue for some eggs. > Unless it can be fully automated it's not going to solve it for all. The simplest solution is likely for pip to gain support for installing from eggs, despite their known issues. >> setuptools and distlib: Even if Python3.4+ had a mature distlib >> providing minimal setuptools functionality, how would it work for the >> typical "python setup.py install" which is invoked by pip? Often those >> setup.py scripts depend on a setuptools package. > > This is not the bootstrap's problem (and hence not the PEP's) since > the bootstrap exists *solely* to install the pip implementation. If > that's not clear enough in the PEP then I can attempt to make it > clearer. Right, in every PEP we should probably make the builder vs installer distinction clear, and be explicit that the PEP only covers the installer side. "install from sdist" unfortunately blurs that boundary, and we may need an egregious hack like doing a substring search for "import setuptools" in setup.py when installing from an sdist with metadata 1.x in PKG-INFO. One of my key long term goals is to eventually allow system administrators to control whether or not a build system ends up on their production servers. At the moment that's not possible (and it likely won't be for 3.4 either), but that's where I would like us to get to somewhere along the line. > On 25 March 2013 03:04, Nick Coghlan wrote: >> - once we can bootstrap pip, then bootstrapping easy_install if it >> still needed for some edge cases will be as easy as installing >> anything else that is either pure Python or publishes an appropriate >> wheel for the platform: "pip install setuptools" > > I'm -0 on the idea of also including an easy_install bootstrap in the > Python install, since I personally would prefer not to require users > to have to deal with two install tools which behave slightly > differently. 
I only meant "pip install setuptools && easy_install other_project", not a separate bootstrap command. > On 25 March 2013 07:21, holger krekel wrote: >> If you have to explain pip-bootstrapping, then >> install setuptools, then the actual package, it's not much of an >> improvement anymore. > > The point of this PEP is to remove the first "explain pip > bootstrapping" step from this equation. > > I had thrown around the idea of the pip bootstrap installing both pip > implementation *and* setuptools. At the time my justification was that > pip depended on it. The pip devs have indicated that they could remove > the setuptools dependency when distlib and wheel support are in the > Python standard library. > > ISTM however that there is still quite a good justification for > installing setuptools during the bootstrapping, for the reasons you > state. I can make this part simple: I won't accept a PEP that proposes automatically installing setuptools even if you never install a package from source, and never install anything that needs pkg_resources :) We will make it easy for people to install setuptools if they need it. Projects built with newer versions of setuptools will have "Setup-Requires-Dist: setuptools" and "Requires-Dist: setuptools" configured appropriately, while pip will also correctly pick up a runtime dependency identified in a requires.txt file. But the idea is to eventually deprecate setuptools/pkg_resources/easy_install as components that get deployed to production systems, and leave setuptools as a build system only. It's going to take us a while to get there (especially since we still need a path hook to replace pkg_resources.require), but even when we do, they will always remain only a "pip install" away for projects that still need them. Cheers, Nick. 
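Nick's distinction between build-time and run-time dependencies can be made concrete with a small metadata sketch (a hypothetical project; the field names follow his description, and the RFC 822-style format is parsed here with the stdlib email parser, the way PKG-INFO files are traditionally read):

```python
from email.parser import Parser

# Hypothetical metadata for a project that needs setuptools only to
# build: "Setup-Requires-Dist" names build-time dependencies,
# "Requires-Dist" names run-time ones.
PKG_INFO = """\
Metadata-Version: 2.0
Name: demo
Version: 1.0
Setup-Requires-Dist: setuptools
Requires-Dist: requests
"""

meta = Parser().parsestr(PKG_INFO)
# An installer can now fetch setuptools before running the build step,
# while deploying only "requests" to the target system.
print(meta.get_all("Setup-Requires-Dist"))  # ['setuptools']
print(meta.get_all("Requires-Dist"))        # ['requests']
```

This split is what lets a system administrator keep the build system off production machines while runtime dependencies are still installed normally.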
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From richardjones at optushome.com.au Mon Mar 25 04:49:40 2013 From: richardjones at optushome.com.au (Richard Jones) Date: Mon, 25 Mar 2013 14:49:40 +1100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On 25 March 2013 14:11, Nick Coghlan wrote: > On 3/24/13, Richard Jones wrote: >> This is a valid concern. Obviously "pip install easy_install" is not a >> solution - especially since the general intention is to deprecate >> easy_install eventually (as explained in Nick's response). I did not >> discuss eggs with the pip developers while at PyCon which is quite >> unfortunate. I would appreciate any insights from those devs on the >> matter. > > Why is "pip install setuptools" not a solution? It's easier than > getting setuptools installed is today. Because of the reason I stated later; it's a second hurdle that users have to get over before installing the actual thing they wish to install. All packages that depend on setuptools must include the instructions "but first install setuptools." >> It may be that wheel convert can solve this issue for some eggs. >> Unless it can be fully automated it's not going to solve it for all. > > The simplest solution is likely for pip to gain support for installing > from eggs, despite their known issues. This would indeed be simpler. >>> setuptools and distlib: Even if Python3.4+ had a mature distlib >>> providing minimal setuptools functionality, how would it work for the >>> typical "python setup.py install" which is invoked by pip? Often those >>> setup.py scripts depend on a setuptools package. >> >> This is not the bootstrap's problem (and hence not the PEP's) since >> the bootstrap exists *solely* to install the pip implementation. If >> that's not clear enough in the PEP then I can attempt to make it >> clearer. 
> > Right, in every PEP we should probably make the builder vs installer > distinction clear, and be explicit that the PEP only covers the > installer side. Yep. > "install from sdist" unfortunately blurs that boundary, and we may > need an egregious hack like doing a substring search for "import > setuptools" in setup.py when installing from an sdist with metadata > 1.x in PKG-INFO. Hm. I'm not sure I get this point. The intention in the PEP is to install the pip implementation from a wheel, not sdist. So maybe I'm missing the relevance of sdist being mentioned here. >> On 25 March 2013 03:04, Nick Coghlan wrote: >>> - once we can bootstrap pip, then bootstrapping easy_install if it >>> still needed for some edge cases will be as easy as installing >>> anything else that is either pure Python or publishes an appropriate >>> wheel for the platform: "pip install setuptools" >> >> I'm -0 on the idea of also including an easy_install bootstrap in the >> Python install, since I personally would prefer not to require users >> to have to deal with two install tools which behave slightly >> differently. > > I only meant "pip install setuptools && easy_install other_project", > not a separate bootstrap command. OK. So I'm -1 on that ;-) >> On 25 March 2013 07:21, holger krekel wrote: >>> If you have to explain pip-bootstrapping, then >>> install setuptools, then the actual package, it's not much of an >>> improvement anymore. >> >> The point of this PEP is to remove the first "explain pip >> bootstrapping" step from this equation. >> >> I had thrown around the idea of the pip bootstrap installing both pip >> implementation *and* setuptools. At the time my justification was that >> pip depended on it. The pip devs have indicated that they could remove >> the setuptools dependency when distlib and wheel support are in the >> Python standard library. 
>> >> ISTM however that there is still quite a good justification for >> installing setuptools during the bootstrapping, for the reasons you >> state. > > I can make this part simple: I won't accept a PEP that proposes > automatically installing setuptools even if you never install a > package from source, and never install anything that needs > pkg_resources :) > > We will make it easy for people to install setuptools if they need it. > Projects built with newer versions of setuptools will have > "Setup-Requires-Dist: setuptools" and "Requires-Dist: setuptools" > configured appropriately, while pip will also correctly pick up a > runtime dependency identified in a requires.txt file. > > But the idea is to eventually deprecate > setuptools/pkg_resources/easy_install as components that get deployed > to production systems, and leave setuptools as a build system only. > It's going to take us a while to get there (especially since we still > need a path hook to replace pkg_resources.require()), but even when we > do, they will always remain only a "pip install" away for projects > that still need them. I think we have too much legacy to support here. Sure it'd be nice if everyone just switched over to PEP 426 style overnight, but it ain't gonna happen. The intent of the automatic setuptools installation is to mirror the *current* situation which people rely on: when pip's installed you also have setuptools (/distribute) installed. Packages may then depend on setuptools in their setup.py with fair confidence that it'll be there. Having this PEP support pip without setuptools will make packaging more complex, which is antithetical to my goal with the PEP. 
I can't support a PEP that will make things more complex :-) Richard From dholth at gmail.com Mon Mar 25 05:29:21 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 25 Mar 2013 00:29:21 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: Wheels are very similar to eggs. The initial wheel implementation converts *every* newly built wheel from .egg-info to .dist-info since setuptools doesn't know how to generate the new metadata. I think you will find the conversion to be reliable, though it can't download the eggs for you right now. The hypothetical no-setuptools pip will probably add an implicit setuptools dependency to every sdist, installing it automatically as an additional dependency, until we invent a way for sdists to say that they don't need setuptools. "pip install" always forces its setup.py subprocesses to use setuptools. On Sun, Mar 24, 2013 at 11:49 PM, Richard Jones wrote: > On 25 March 2013 14:11, Nick Coghlan wrote: >> On 3/24/13, Richard Jones wrote: >>> This is a valid concern. Obviously "pip install easy_install" is not a >>> solution - especially since the general intention is to deprecate >>> easy_install eventually (as explained in Nick's response). I did not >>> discuss eggs with the pip developers while at PyCon which is quite >>> unfortunate. I would appreciate any insights from those devs on the >>> matter. >> >> Why is "pip install setuptools" not a solution? It's easier than >> getting setuptools installed is today. > > Because of the reason I stated later; it's a second hurdle that users > have to get over before installing the actual thing they wish to > install. All packages that depend on setuptools must include the > instructions "but first install setuptools." > > >>> It may be that wheel convert can solve this issue for some eggs. >>> Unless it can be fully automated it's not going to solve it for all. 
>> >> The simplest solution is likely for pip to gain support for installing >> from eggs, despite their known issues. > > This would indeed be simpler. > > >>>> setuptools and distlib: Even if Python3.4+ had a mature distlib >>>> providing minimal setuptools functionality, how would it work for the >>>> typical "python setup.py install" which is invoked by pip? Often those >>>> setup.py scripts depend on a setuptools package. >>> >>> This is not the bootstrap's problem (and hence not the PEP's) since >>> the bootstrap exists *solely* to install the pip implementation. If >>> that's not clear enough in the PEP then I can attempt to make it >>> clearer. >> >> Right, in every PEP we should probably make the builder vs installer >> distinction clear, and be explicit that the PEP only covers the >> installer side. > > Yep. > > >> "install from sdist" unfortunately blurs that boundary, and we may >> need an egregious hack like doing a substring search for "import >> setuptools" in setup.py when installing from an sdist with metadata >> 1.x in PKG-INFO. > > Hm. I'm not sure I get this point. The intention in the PEP is to > install the pip implementation from a wheel, not sdist. So maybe I'm > missing the relevance of sdist being mentioned here. > > >>> On 25 March 2013 03:04, Nick Coghlan wrote: >>>> - once we can bootstrap pip, then bootstrapping easy_install if it >>>> still needed for some edge cases will be as easy as installing >>>> anything else that is either pure Python or publishes an appropriate >>>> wheel for the platform: "pip install setuptools" >>> >>> I'm -0 on the idea of also including an easy_install bootstrap in the >>> Python install, since I personally would prefer not to require users >>> to have to deal with two install tools which behave slightly >>> differently. >> >> I only meant "pip install setuptools && easy_install other_project", >> not a separate bootstrap command. > > OK. 
So I'm -1 on that ;-) > > >>> On 25 March 2013 07:21, holger krekel wrote: >>>> If you have to explain pip-bootstrapping, then >>>> install setuptools, then the actual package, it's not much of an >>>> improvement anymore. >>> >>> The point of this PEP is to remove the first "explain pip >>> bootstrapping" step from this equation. >>> >>> I had thrown around the idea of the pip bootstrap installing both pip >>> implementation *and* setuptools. At the time my justification was that >>> pip depended on it. The pip devs have indicated that they could remove >>> the setuptools dependency when distlib and wheel support are in the >>> Python standard library. >>> >>> ISTM however that there is still quite a good justification for >>> installing setuptools during the bootstrapping, for the reasons you >>> state. >> >> I can make this part simple: I won't accept a PEP that proposes >> automatically installing setuptools even if you never install a >> package from source, and never install anything that needs >> pkg_resources :) >> >> We will make it easy for people to install setuptools if they need it. >> Projects built with newer versions of setuptools will have >> "Setup-Requires-Dist: setuptools" and "Requires-Dist: setuptools" >> configured appropriately, while pip will also correctly pick up a >> runtime dependency identified in a requires.txt file. >> >> But the idea is to eventually deprecate >> setuptools/pkg_resources/easy_install as components that get deployed >> to production systems, and leave setuptools as a build system only. >> It's going to take us a while to get there (especially since we still >> need a path hook to replace pkg_resources.require()), but even when we >> do, they will always remain only a "pip install" away for projects >> that still need them. > > I think we have too much legacy to support here. Sure it'd be nice if > everyone just switched over to PEP 426 style overnight, but it ain't > gonna happen. 
The intent of the automatic setuptools installation is > to mirror the *current* situation which people rely on: when pip's > installed you also have setuptools (/distribute) installed. Packages > may then depend on setuptools in their setup.py with fair confidence > that it'll be there. Having this PEP support pip without setuptools > will make packaging more complex, which is antithetical to my goal with > the PEP. I can't support a PEP that will make things more complex :-) > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From qwcode at gmail.com Mon Mar 25 07:20:13 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 24 Mar 2013 23:20:13 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: > I had thrown around the idea of the pip bootstrap installing both pip > implementation *and* setuptools. At the time my justification was that > pip depended on it. The pip devs have indicated that they could remove > the setuptools dependency when distlib and wheel support are in the > Python standard library. > pip's wheel install support is native to pip right now, but it does use pkg_resources. (pip's wheel build support is done via the external wheel package as a setuptools extension) if pip "vendorized" pkg_resources, then it could install a Setuptools wheel. (pip will ultimately refactor its pkg_resources use over to distlib, but that seems less likely in the short term) I have a hard time imagining implementing the "MEBs" idea (i.e. removing the "setuptools dependency") by python 3.4, if that's what people are considering? 
(possibly only in the simplest way, where pip just automatically installs Setuptools from wheel if it's not installed when it encounters an sdist, because it knows it's the only build option right now) but that really reinforces that we need to get these plans posted up to the packaging developer hub so we can really grok all this and talk phases and timelines. https://python-packaging-developer-hub.readthedocs.org/en/latest/ Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Mar 25 17:23:36 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 25 Mar 2013 09:23:36 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Sun, Mar 24, 2013 at 8:49 PM, Richard Jones wrote: > I think we have too much legacy to support here. Sure it'd be nice if > everyone just switched over to PEP 426 style overnight, but it ain't > gonna happen. The intent of the automatic setuptools installation is > to mirror the *current* situation which people rely on: when pip's > installed you also have setuptools (/distribute) installed. Packages > may then depend on setuptools in their setup.py with fair confidence > that it'll be there. Having this PEP support pip without setuptools > will make packaging more complex, which is antithetical to my goal with > the PEP. I can't support a PEP that will make things more complex :-) I am not adding setuptools to the standard library, which is effectively what we're doing for anything installed automatically via the pip bootstrap script. Things have moved on since Guido approved setuptools for inclusion in 2.5, and I'm not adding an entire legacy module only to immediately deprecate it in favour of the updated toolchains. That means anyone assuming setuptools will be present as part of all Python installations is just plain wrong, both now and in the future. 
Therefore, any installation of setuptools on a system *must* be requested explicitly, either directly (via "pip install setuptools") or indirectly (by installing something that explicitly depends on it through a setuptools requirements file, Setup-Requires-Dist or Requires-Dist). The *only* case this approach doesn't immediately cover is a project that: 1. Doesn't publish a pre-built wheel for the current platform (or egg, assuming pip gains support for those, perhaps by implicitly converting them to wheels) 2. Doesn't publish 2.0 metadata with "Setup-Requires-Dist: setuptools" 3. imports setuptools in its setup.py file This can be handled in pip, by using the AST module to scan for setuptools imports in setup.py (or else by checking for a setuptools related ImportError after trying to run it). Yes, it's a hack, but I am *not* going to approve a PEP that further entrenches something even its creator would like to see waved off into the sunset, giving thanks for its good service :) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Mon Mar 25 17:42:05 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 25 Mar 2013 09:42:05 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: > > The *only* case this approach doesn't immediately cover is a project that: > 1. Doesn't publish a pre-built wheel for the current platform (or egg, > assuming pip gains support for those, perhaps by implicitly converting > them to wheels) > 2. Doesn't publish 2.0 metadata with "Setup-Requires-Dist: setuptools" > 3. imports setuptools in its setup.py file > > so that's most everything on pypi right now in the short and medium term. > This can be handled in pip, by using the AST module to scan for > setuptools imports in setup.py (or else by checking for a setuptools > related ImportError after trying to run it). 
so you're asking pip to get this working soon, right? like before python3.4, so this PEP can go in? Marcus > Yes, it's a hack, but I > am *not* going to approve a PEP that further entrenches something even > its creator would like to see waved off into the sunset, giving thanks > for its good service :) > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Mon Mar 25 17:42:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 25 Mar 2013 09:42:18 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Mon, Mar 25, 2013 at 9:23 AM, Nick Coghlan wrote: > This can be handled in pip, by using the AST module to scan for > setuptools imports in setup.py (or else by checking for a setuptools > related ImportError after trying to run it). Yes, it's a hack, but I > am *not* going to approve a PEP that further entrenches something even > its creator would like to see waved off into the sunset, giving thanks > for its good service :) I would also be fine with a simpler version of this approach, which works the way pip does now: if pip encounters a metadata 1.0 or 1.1 sdist, then it *assumes* "Setup-Requires-Dist: setuptools". That way, if you give it an index with no setuptools and fully populated with pre-built wheels, you can avoid deploying setuptools to the target environment. The bootstrap script itself should not install setuptools though - it's up to pip to do that before running a setup.py file without the explicit Setup-Requires-Dist support in metadata 2.0+ Cheers, Nick. 
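[Editor's note: the two heuristics discussed in this message -- assume setuptools for sdists with 1.x metadata, or scan setup.py with the ast module for setuptools imports -- could be sketched roughly as follows. The function names and structure are illustrative only, not pip's actual implementation.]

```python
# Rough sketch of the two heuristics discussed above; names are
# illustrative, not pip's real code.
import ast
import email


def needs_setuptools_from_metadata(pkg_info_text):
    """The simpler rule: an sdist with a 1.x Metadata-Version is assumed
    to build-require setuptools; 2.0+ metadata must declare it."""
    headers = email.message_from_string(pkg_info_text)
    version = headers.get("Metadata-Version", "1.0")
    if int(version.split(".")[0]) < 2:
        return True  # legacy metadata: assume setuptools is needed
    return "setuptools" in (headers.get_all("Setup-Requires-Dist") or [])


def imports_setuptools(setup_py_source):
    """The AST-based check: does this setup.py import setuptools?"""
    tree = ast.parse(setup_py_source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # catches "import setuptools" and "import setuptools.xyz"
            if any(alias.name.split(".")[0] == "setuptools"
                   for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            # catches "from setuptools import setup" etc.
            if node.module and node.module.split(".")[0] == "setuptools":
                return True
    return False
```

As Nick notes, the AST scan is a hack (it misses dynamic imports, which is why the fallback of catching an ImportError is also mentioned), while the metadata-version rule errs on the side of installing setuptools for every legacy sdist.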
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Mar 25 17:50:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 25 Mar 2013 09:50:45 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Mon, Mar 25, 2013 at 9:42 AM, Marcus Smith wrote: > >> >> The *only* case this approach doesn't immediately cover is a project that: >> 1. Doesn't publish a pre-built wheel for the current platform (or egg, >> assuming pip gains support for those, perhaps by implicitly converting >> them to wheels) >> 2. Doesn't publish 2.0 metadata with "Setup-Requires-Dist: setuptools" >> 3. imports setuptools in its setup.py file >> > > so that's most everything on pypi right now in the short and medium term. > >> >> This can be handled in pip, by using the AST module to scan for >> setuptools imports in setup.py (or else by checking for a setuptools >> related ImportError after trying to run it). > > > so you're asking pip to get this working soon, right? like before > python3.4, so this PEP can go in? Our messages crossed in flight - I realised I'm fine with assuming a setuptools dependency for eggs and all sdist's without 2.0+ metadata, it's only the idea of installing setuptools as a dependency of bootstrapping pip itself that I'm not happy with. That means pip only needs to support two configurations: 1. No setuptools, can only install from wheel files and sdists with 2.0+ metadata 2. Has setuptools, can also install from sdists with legacy metadata and eggs By default, installing setuptools when necessary should be automatic, but advanced users should be able to ask that it instead be treated as an error if no wheel is available to satisfy an installation request or dependency (so they don't inadvertently install setuptools on their production systems if they don't want to). 
To make this work, we'll need to get wheels published on PyPI for setuptools before 3.4, as well as ensuring pip doesn't require setuptools to install from wheel files. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Mon Mar 25 18:05:37 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 25 Mar 2013 17:05:37 +0000 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: 25 March 2013 03:49, Richard Jones wrote: > On 25 March 2013 14:11, Nick Coghlan wrote: >> On 3/24/13, Richard Jones wrote: >>> This is a valid concern. Obviously "pip install easy_install" is not a >>> solution - especially since the general intention is to deprecate >>> easy_install eventually (as explained in Nick's response). I did not >>> discuss eggs with the pip developers while at PyCon which is quite >>> unfortunate. I would appreciate any insights from those devs on the >>> matter. >> >> Why is "pip install setuptools" not a solution? It's easier than >> getting setuptools installed is today. > > Because of the reason I stated later; it's a second hurdle that users > have to get over before installing the actual thing they wish to > install. All packages that depend on setuptools must include the > instructions "but first install setuptools." People have suggested pip offer egg (or wininst) installation for a long time now. But nobody has ever come up with actual code to do this. I don't think anyone should assume that such support will "just happen". Paul. 
From dholth at gmail.com Mon Mar 25 18:23:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 25 Mar 2013 13:23:47 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Mon, Mar 25, 2013 at 12:50 PM, Nick Coghlan wrote: > On Mon, Mar 25, 2013 at 9:42 AM, Marcus Smith wrote: >> >>> >>> The *only* case this approach doesn't immediately cover is a project that: >>> 1. Doesn't publish a pre-built wheel for the current platform (or egg, >>> assuming pip gains support for those, perhaps by implicitly converting >>> them to wheels) >>> 2. Doesn't publish 2.0 metadata with "Setup-Requires-Dist: setuptools" >>> 3. imports setuptools in its setup.py file >>> >> >> so that's most everything on pypi right now in the short and medium term. >> >>> >>> This can be handled in pip, by using the AST module to scan for >>> setuptools imports in setup.py (or else by checking for a setuptools >>> related ImportError after trying to run it). >> >> >> so you're asking pip to get this working soon, right? like before >> python3.4, so this PEP can go in? > > Our messages crossed in flight - I realised I'm fine with assuming a > setuptools dependency for eggs and all sdist's without 2.0+ metadata, > it's only the idea of installing setuptools as a dependency of > bootstrapping pip itself that I'm not happy with. That means pip only > needs to support two configurations: > > 1. No setuptools, can only install from wheel files and sdists with > 2.0+ metadata > 2. Has setuptools, can also install from sdists with legacy metadata and eggs > > By default, installing setuptools when necessary should be automatic, > but advanced users should be able to ask that it instead be treated as > an error if no wheel is available to satisfy an installation request > or dependency (so they don't inadvertently install setuptools on their > production systems if they don't want to). 
> > To make this work, we'll need to get wheels published on PyPI for > setuptools before 3.4, as well as ensuring pip doesn't require > setuptools to install from wheel files. > > Cheers, > Nick. My vision for the setuptools deprecation process is that distutils rides into the sunset with it. In this future eventually bugs in setuptools will be solved by porting setup.py to one of (X, Y, Z) which haven't necessarily been invented yet. Like Marcus said, pip itself only uses pkg_resources which we can bundle and eventually replace with a shim backed by distlib (Vinay has tried this already). Only the individual distributions' "setup.py" subprocesses (are forced to) import setuptools. The --only-wheel option should be easy. It is going to be problematic to try to find and download legacy eggs correctly and automatically since they don't for example tell you whether they are wide Unicode. Those problems and more would apply to automatically downloading and converting bdist_wininst .exe installers which aren't even named consistently. From pje at telecommunity.com Mon Mar 25 19:16:50 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 25 Mar 2013 14:16:50 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Mon, Mar 25, 2013 at 1:23 PM, Daniel Holth wrote: > It is going to be problematic to try to find and download legacy eggs > correctly and automatically since they don't for example tell you > whether they are wide Unicode. Those problems and more would apply to > automatically downloading and converting bdist_wininst .exe installers > which aren't even named consistently. However, given that this problem exists now, and that this is strictly a transitional feature, all you really need to do is make reasonably similar choices to easy_install. 
It might be worth looking at the actual stats, but my impression is that most eggs on PyPI are either for Windows or aren't platform-specific at all. Also, as far as detecting the need for setuptools, I think that can be done just by noticing whether the PKG-INFO included in an sdist is metadata 2.0 or not. If it is, then setuptools should be explicitly declared as a build-time dependency, otherwise it's not needed. If it's an older metadata version, then you probably need setuptools. If for political reasons you want to not provide setuptools even in that case (since there are packages that don't use it), you could just throw in an import hook that notices when you're trying to import something from setuptools and handles the install. But ISTM the simplest case is just to look for (the absence of) 2.0 metadata. (Of course, that kind of assumes that the 2.0 standard includes build metadata and procedures, and that we know what a new-style sdist looks like in this scheme.) From qwcode at gmail.com Mon Mar 25 21:07:32 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 25 Mar 2013 13:07:32 -0700 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: I created a pip issue for this https://github.com/pypa/pip/issues/863 On Mon, Mar 25, 2013 at 9:42 AM, Nick Coghlan wrote: > On Mon, Mar 25, 2013 at 9:23 AM, Nick Coghlan wrote: > > This can be handled in pip, by using the AST module to scan for > > setuptools imports in setup.py (or else by checking for a setuptools > > related ImportError after trying to run it). 
Yes, it's a hack, but I > > am *not* going to approve a PEP that further entrenches something even > > its creator would like to see waved off into the sunset, giving thanks > > for its good service :) > > I would also be fine with a simpler version of this approach, which > works the way pip does now: if pip encounters a metadata 1.0 or 1.1 > sdist, then it *assumes* "Setup-Requires-Dist: setuptools". > > That way, if you give it an index with no setuptools and fully > populated with pre-built wheels, you can avoid deploying setuptools to > the target environment. > > The bootstrap script itself should not install setuptools though - > it's up to pip to do that before running a setup.py file without the > explicit Setup-Requires-Dist support in metadata 2.0+ > > Cheers, > Nick. > > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Mon Mar 25 22:08:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 25 Mar 2013 21:08:59 +0000 Subject: [Distutils] Builders vs Installers Message-ID: There's a longer-term issue that occurred to me when thinking about pip's role as a "builder" or an "installer" (to use Nick's terminology). As I understand Nick's vision for the future, installers (like pip) will locate built wheels and download and install them, and builders (like distutils and bento) will be responsible for building wheels. But there's an intermediate role which shouldn't get forgotten in the transition - the role that pip currently handles with the "pip wheel" command. This is where I specify a list of distributions, and pip locates sdists, downloads them, checks dependencies, and ultimately builds all of the wheels. 
I'm not sure whether the current idea of builders includes this "locate, download and resolve dependencies" function (distutils and bento certainly don't have that capability). I imagine that pip will retain some form of the current "pip wheel" capability that covers this requirement, but maybe as the overall picture of the new design gets clarified, this role should be captured. Paul From vinay_sajip at yahoo.co.uk Mon Mar 25 23:46:12 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 25 Mar 2013 22:46:12 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: Message-ID: Paul Moore gmail.com> writes: > I imagine that pip will retain some form of the current "pip wheel" > capability that covers this requirement, but maybe as the overall > picture of the new design gets clarified, this role should be > captured. Strictly speaking I would have thought "pip wheel" was a builder function which is only in pip as a transitional step, to get wheels more exposure. Is that an incorrect assumption on my part? Regards, Vinay Sajip From p.f.moore at gmail.com Tue Mar 26 00:00:28 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 25 Mar 2013 23:00:28 +0000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On 25 March 2013 22:46, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> I imagine that pip will retain some form of the current "pip wheel" >> capability that covers this requirement, but maybe as the overall >> picture of the new design gets clarified, this role should be >> captured. > > Strictly speaking I would have thought "pip wheel" was a builder function which > is only in pip as a transitional step, to get wheels more exposure. Is that an > incorrect assumption on my part? If some other tool provides the same functionality, I can see the possibility that pip will drop it (assuming that pip takes the route of becoming a "pure installer"). 
But I can't imagine pip dropping that functionality *until* some other tool is available which does the equivalent. Paul From dholth at gmail.com Tue Mar 26 00:07:55 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 25 Mar 2013 19:07:55 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: Unix users will always want to compile their own. Pip wheel is not going away, but we will definitely evolve the implementation. On Mar 25, 2013 7:00 PM, "Paul Moore" wrote: > On 25 March 2013 22:46, Vinay Sajip wrote: > > Paul Moore gmail.com> writes: > > > >> I imagine that pip will retain some form of the current "pip wheel" > >> capability that covers this requirement, but maybe as the overall > >> picture of the new design gets clarified, this role should be > >> captured. > > > > Strictly speaking I would have thought "pip wheel" was a builder > function which > > is only in pip as a transitional step, to get wheels more exposure. Is > that an > > incorrect assumption on my part? > > If some other tool provides the same functionality, I can see the > possibility that pip will drop it (assuming that pip takes the route > of becoming a "pure installer"). But I can't imagine pip dropping that > functionality *until* some other tool is available which does the > equivalent. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From pje at telecommunity.com Tue Mar 26 00:15:56 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 25 Mar 2013 19:15:56 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 5:08 PM, Paul Moore wrote: > There's a longer-term issue that occurred to me when thinking about > pip's role as a "builder" or an "installer" (to use Nick's > terminology). 
> > As I understand Nick's vision for the future, installers (like pip) > will locate built wheels and download and install them, and builders > (like distutils and bento) will be responsible for building wheels. > But there's an intermediate role which shouldn't get forgotten in the > transition - the role that pip currently handles with the "pip wheel" > command. This is where I specify a list of distributions, and pip > locates sdists, downloads them, checks dependencies, and ultimately > builds all of the wheels. I'm not sure whether the current idea of > builders includes this "locate, download and resolve dependencies" > function (distutils and bento certainly don't have that capability). Yes, and to make things even more interesting, consider the cases where there are build-time dependencies. ;-) I would guess that installing from sdists (and revision control) is probably here to stay, along with the inherent coupling between "build" and "fetch" functions. Right now, the "build" side of setuptools fetches build-time dependencies, but in the New World, ISTM that a top-level build tool would just be something that reads the package metadata, finds build-time dependencies, and then runs some entry points to ask for a wheel to be spit out (or to get back a data structure describing the wheel, anyway). This part could be standardized and indeed could be just something that pip does. Another piece that hasn't seemed well-specified so far is what Nick called "archiving" - creating the sdist. Or perhaps more precisely, generating an sdist PKG-INFO and putting the stuff together. IMO, a good install tool needs to be able to run the archiving and building steps as well as installing directly from wheels. However, although metadata 2.0 provides us with a good basis for running build and install steps, there really isn't anything yet in the way of a standard for sdist generation. 
Of course, we have "setup.py sdist", and the previous work on setup.cfg by the distutils2 team. We could either build on setup.cfg, or perhaps start over with a simple spec to say how the package is to be built: i.e., just a simple set of entry points saying what archiver, builder, etc. are used. Such a file wouldn't change much over the life of the package, and would avoid the need for all the dynamic hooks provided by distutils2's setup.cfg. In the degenerate case, I suppose, it could just be "pyarchiver.cfg" and contain a couple lines saying what tool is used to generate PKG-INFO. On the other hand, we could draw the line at saying, pip only ever installs from sdists, no source checkouts or tarballs. I'm not sure that's a reasonable limitation, though. On the *other* other hand, perhaps it would be best to just use the setup.cfg work, updated to handle the full metadata 2.0 spec. As I recall, the setup.cfg format handled a ridiculously large number of use cases in a very static format, and IIRC still included the possibility for dynamic hooks to affect the metadata-generation and archive content selection processes, let alone the build and other stages. But then, is that biasing against e.g. bento.info? Argh. Packaging is hard, let's go shopping. ;-) On balance, I think I lean towards just having a simple way to specify your chosen archiver, so that installing from source checkouts and dumps is possible. I just find it annoying that you have to have *two* files in your checkout, one to say what tool you're using, and another one to configure it. (What'd be nice is if you could just somehow detect files like bento.info and setup.cfg and thereby detect what archiver to use. But that would have limited extensibility unless there was a standard naming convention for the files, or a standardized format for at least the first line in the file or something like that, so you could identify the needed tool.) 
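[Editor's note: the marker-file detection idea floated in the parenthetical above could look something like the sketch below. No such convention exists; the file-to-tool mapping is invented purely for illustration.]

```python
# Hypothetical sketch: guess which "archiver" tool a source checkout
# expects, based on well-known marker files. The registry below is
# invented for illustration -- no such convention exists.
import os

# Checked in order, so more specific markers come first.
ARCHIVER_MARKERS = [
    ("bento.info", "bento"),
    ("setup.cfg", "distutils2"),
    ("setup.py", "setuptools/distutils"),
]


def detect_archiver(checkout_dir):
    """Return the name of the tool expected to generate PKG-INFO for
    this checkout, or None if no known marker file is present."""
    for marker, tool in ARCHIVER_MARKERS:
        if os.path.exists(os.path.join(checkout_dir, marker)):
            return tool
    return None
```

As the message notes, this only scales if marker file names (or their first line) are standardized, so new build tools can be discovered without hard-coding a registry like the one above.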
From dholth at gmail.com Tue Mar 26 04:35:32 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 25 Mar 2013 23:35:32 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 7:15 PM, PJ Eby wrote: > On Mon, Mar 25, 2013 at 5:08 PM, Paul Moore wrote: >> There's a longer-term issue that occurred to me when thinking about >> pip's role as a "builder" or an "installer" (to use Nick's >> terminology). >> >> As I understand Nick's vision for the future, installers (like pip) >> will locate built wheels and download and install them, and builders >> (like distutils and bento) will be responsible for building wheels. >> But there's an intermediate role which shouldn't get forgotten in the >> transition - the role that pip currently handles with the "pip wheel" >> command. This is where I specify a list of distributions, and pip >> locates sdists, downloads them, checks dependencies, and ultimately >> builds all of the wheels. I'm not sure whether the current idea of >> builders includes this "locate, download and resolve dependencies" >> function (distutils and bento certainly don't have that capability). > > Yes, and to make things even more interesting, consider the cases > where there are build-time dependencies. ;-) > > I would guess that installing from sdists (and revision control) is > probably here to stay, along with the inherent coupling between > "build" and "fetch" functions. > > Right now, the "build" side of setuptools fetches build-time > dependencies, but in the New World, ISTM that a top-level build tool > would just be something that reads the package metadata, finds > build-time dependencies, and then runs some entry points to ask for a > wheel to be spit out (or to get back a data structure describing the > wheel, anyway). > > This part could be standardized and indeed could be just something > that pip does. 
> > Another piece that hasn't seemed well-specified so far is what Nick > called "archiving" - creating the sdist. Or perhaps more precisely, > generating an sdist PKG-INFO and putting the stuff together. > > IMO, a good install tool needs to be able to run the archiving and > building steps as well as installing directly from wheels. However, > although metadata 2.0 provides us with a good basis for running build > and install steps, there really isn't anything yet in the way of a > standard for sdist generation. > > Of course, we have "setup.py sdist", and the previous work on > setup.cfg by the distutil2 team. We could either build on setup.cfg, > or perhaps start over with a simple spec to say how the package is to > be built: i.e., just a simple set of entry points saying what > archiver, builder, etc. are used. Such a file wouldn't change much > over the life of the package, and would avoid the need for all the > dynamic hooks provided by distutils2's setup.cfg. > > In the degenerate case, I suppose, it could just be "pyarchiver.cfg" > and contain a couple lines saying what tool is used to generate > PKG-INFO. > > On the other hand, we could draw the line at saying, pip only ever > installs from sdists, no source checkouts or tarballs. I'm not sure > that's a reasonable limitation, though. > > On the *other* other hand, Perhaps it would be best to just use the > setup.cfg work, updated to handle the full metadata 2.0 spec. As I > recall, the setup.cfg format handled a ridiculously large number of > use cases in a very static format, and IIRC still included the > possibility for dynamic hooks to affect the metadata-generation and > archive content selection processes, let alone the build and other > stages. > > But then, is that biasing against e.g. bento.info? Argh. Packaging > is hard, let's go shopping. 
;-) > > On balance, I think I lean towards just having a simple way to specify > your chosen archiver, so that installing from source checkouts and > dumps is possible. I just find it annoying that you have to have > *two* files in your checkout, one to say what tool you're using, and > another one to configure it. > > (What'd be nice is if you could just somehow detect files like > bento.info and setup.cfg and thereby detect what archiver to use. But > that would have limited extensibility unless there was a standard > naming convention for the files, or a standardized format for at least > the first line in the file or something like that, so you could > identify the needed tool.) The problem we are solving first is not "setuptools", it is simply that right now Python programs are usually installed by downloading and then running a program that performs the actual install. Instead, we're going to let the installer do the actual install. The funny thing is that all the same things happen. We just move the responsibilities around a little bit, things like Bento or distutils2 don't have to implement their own installer code, and we don't have to worry about that code doing bad things like messing up our systems or accessing the Internet. The installer might do new clever things now that it really controls the install. There are a tremendous number of things you can do from there, mostly undeveloped, including decoupling the rest of the packaging pipeline all the way down to the humble sdist, but we don't really need to change the pip user interface. All the same things will continue to happen, just rearranged in a more modular way. The MEBS "ultimate punt" design is that you would iterate over build plugins to recognize an sdist. The sdist would be defined as anything recognized by a plugin. It's probably more practical to at least name the preferred build system in a very minimal setup.cfg. This is still a little bit ugly. 
In a normal "new style" sdist, you may be able to trust the PKG-INFO file, but when building from source control you can't; you would need to inspect setup.cfg and ask the build system to refresh the metadata. From pje at telecommunity.com Tue Mar 26 06:15:38 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 26 Mar 2013 01:15:38 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 11:35 PM, Daniel Holth wrote: > The problem we are solving first is not "setuptools", it is simply > that right now Python programs are usually installed by downloading > and then running a program that performs the actual install. Instead, > we're going to let the installer do the actual install. > > The funny thing is that all the same things happen. We just move the > responsibilities around a little bit, things like Bento or distutils2 > don't have to implement their own installer code, and we don't have to > worry about that code doing bad things like messing up our systems or > accessing the Internet. The installer might do new clever things now > that it really controls the install. > > There are a tremendous number of things you can do from there, mostly > undeveloped, including decoupling the rest of the packaging pipeline > all the way down to the humble sdist, but we don't really need to > change the pip user interface. All the same things will continue to > happen, just rearranged in a more modular way. I'm not sure what you're trying to say here; all the above was assumed in my post, as I assumed was implied in Paul's post. I was talking about the *how* we're going to accomplish all this while still supporting building from pure source (vs. an sdist with a 2.0 PKG-INFO). More specifically, I was hoping to move the discussion forward on nailing down some of the details that still need specifying in a PEP somewhere, to finish out what the "new world" of packaging will look like.
> The MEBS "ultimate punt" design is that you would iterate over build > plugins to recognize an sdist. The sdist would be defined as anything > recognized by a plugin. I'm only calling it an "sdist" if it includes a PKG-INFO, since right now that's the only real difference between a source checkout and an sdist. (That, and the sdist might have fewer files.) So, source checkouts and github tarballs aren't "sdists" in this sense. (Perhaps we should call them "raw sources" to distinguish them from sdists.) Anyway, the downside to a plugin approach like this is that it would be a chicken-and-egg problem: you would have to install the plugin before you could build a package from a raw source, and there'd be no way to know that you needed the plugin until *after* you obtained the raw source, if it was using a freshly-invented build tool. > It's probably more practical to at least name > the preferred build system in a very minimal setup.cfg. This is still > a little bit ugly. In a normal "new style" sdist, you may be able to > trust the PKG-INFO file, but when building from source control you > can't, would need to inspect setup.cfg, and ask the build system to > refresh the metadata. Right. Or technically, the "archiver" in Nick's terminology. Probably we'll need to standardize on a config file, even if it makes life a little more difficult for users of other archiving tools. OTOH, I suppose a case could be made for checking PKG-INFO into source control along with the rest of your code, in which case the problem disappears entirely: there'd be no such thing as a "raw" source in that case. The downside, though, is that there's a small but vocal contingent that believes checking generated files into source control is a sign of ultimate evil, so it probably won't be a *popular* choice. But, if we support "either you have a setup.cfg specifying your archiver, or a PKG-INFO so an archiver isn't needed", then that would probably cover all the bases, actually. 
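As a sketch, the "either you have a setup.cfg specifying your archiver, or a PKG-INFO so an archiver isn't needed" rule might look like the following — the [archiver] section and its tool key are hypothetical spellings of the idea, since no such setup.cfg convention has actually been standardized:

```python
import os
from configparser import ConfigParser

def archiver_for(source_dir):
    """Apply the proposed rule to a source tree.

    Returns None when PKG-INFO is already present (no archiving step
    needed), the declared tool name when setup.cfg names one, and
    raises ValueError for a raw source satisfying neither rule.
    """
    if os.path.exists(os.path.join(source_dir, "PKG-INFO")):
        return None  # effectively an sdist: metadata is ready to use
    cfg_path = os.path.join(source_dir, "setup.cfg")
    if os.path.exists(cfg_path):
        cfg = ConfigParser()
        cfg.read(cfg_path)
        # "[archiver] tool = ..." is an invented spelling of the idea
        if cfg.has_option("archiver", "tool"):
            return cfg.get("archiver", "tool")
    raise ValueError("raw source: no PKG-INFO and no archiver declared")
```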
From vinay_sajip at yahoo.co.uk Tue Mar 26 09:54:20 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 08:54:20 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil Message-ID: I've created a new tool called distil which I'm using to experiment with packaging functionality. Overview -------- It's based on distlib and has IMO some interesting features. With it, one can: * Install projects from PyPI and wheels (see PEP 427). Distil does not invoke setup.py, so projects that do significant computation in setup.py may not be installable by distil. However, a large number of projects on PyPI *can* be installed, and dependencies are detected, downloaded and installed. For those distributions that absolutely *have* to run setup.py, distil can create wheels using pip as a helper, and then install from those wheels. * Optionally upgrade installed distributions, whether installed by distil or installed by pip. * Uninstall distributions installed by distil or pip. * Build source distributions in .tar.gz, .tar.bz2, and .zip formats. * Build binary distributions in wheel format. These can be pure-Python, or have C libraries and extensions. Support for Cython and Fortran (using f2py) is possible, though currently distil cannot install Cython or Numpy directly because of how they use setup.py. * Run tests on built distributions. * Register projects on PyPI. * Upload distributions to PyPI. * Upload documentation to http://pythonhosted.org/. * Display dependencies of a distribution - either as a list of what would be downloaded (and a suggested download order), or in Graphviz format suitable for conversion to an image. Getting started is simple (documentation is at [2]): * Very simple deployment - just copy distil.py[1] to a location on your path, optionally renaming it to distil on POSIX platforms. There's no need to install distlib - it's all included.
* Uses either a system Python or one in a virtual environment, but by default installs to the user site rather than system Python library locations. * Offers tab-completion and abbreviation of commands and parameters on Bash-compatible shells. Logically, packaging activities can be divided into a number of categories or roles: * Archiver - builds source distributions from a source tree * Builder - builds binary distributions from source * Installer - installs source or binary distributions This version of distil incorporates (for convenience) all of the above roles. There is a school of thought which says that these roles should be fulfilled by separate programs, and that's fine for production quality tools - it's just more convenient for now to have everything in one package for an experimental tool like distil. Actual Improvements ------------------- Despite the fact that distil is in an alpha stage of development and has received no real-world exposure like the existing go-to packaging tools, it does offer some improvements over them: * Dependency resolution can be performed without downloading any distributions. Unlike e.g. pip, you are told which additional dependencies will be downloaded and installed, before any download occurs. * Better information is stored for uninstallation. This allows better feedback to be given to users during uninstallation. * Dependency checking is done during uninstallation. Say you've installed a distribution A, which pulled in dependencies B and C. If you request an uninstallation of B (or C), distil will complain that you can't do this because A needs it. When you uninstall A, you are offered the option to uninstall B and C as well (assuming you didn't install something else that depends on B or C, after installing A).
* By default, installation is to the user site and not to the system Python, so you shouldn't need to invoke sudo to install distributions for personal use which are not for specific projects/virtual environments. * There's no need to "install" distil - the exact same script will run with any system Python or any venv (subject to Python version constraints of 2.6, 2.7, 3.2 or greater). Bootstrapping pip ----------------- I've used distil to bootstrap pip, then used that pip to install other stuff. I created a fresh PEP 405 venv with nothing in it, used distil to install a wheel[3] for my distribute fork which runs on Python 2.x and 3.x, then used distil to install pip from PyPI. Finally, to test pip, I installed SQLAlchemy (using pip) from PyPI. See [4] for the transcript. I would welcome any feedback you could give regarding distil/distlib. There is of course a lot more testing to do, but I consider these initial findings to be promising, and worth sharing. If you find any problems, you can raise issues at [5]. Regards, Vinay Sajip [1] https://bitbucket.org/vinay.sajip/docs-distil/downloads/distil.py [2] https://pythonhosted.org/distil/ [3] https://bitbucket.org/vinay.sajip/distribute3/downloads/ [4] https://gist.github.com/vsajip/5243936 [5] https://bitbucket.org/vinay.sajip/distlib/issues/new From p.f.moore at gmail.com Tue Mar 26 10:06:35 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 26 Mar 2013 09:06:35 +0000 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On 26 March 2013 08:54, Vinay Sajip wrote: > I've created a new tool called distil which I'm using to experiment with > packaging functionality. Interesting! Is the source available anywhere? (Not distil.py, but the source of the big chunk of embedded zipfile that appears to contain the bulk of the functionality...) 
No big deal if not, I can easily enough unpack the data from distil.py Paul From ronaldoussoren at mac.com Tue Mar 26 10:28:25 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 26 Mar 2013 10:28:25 +0100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> On 25 Mar, 2013, at 19:16, PJ Eby wrote: > > > Also, as far as detecting the need for setuptools, I think that can be > done just by noticing whether the PKG-INFO included in an sdist is > metadata 2.0 or not. If it is, then setuptools should be explicitly > declared as a build-time dependency, otherwise it's not needed. If > it's an older metadata version, then you probably need setuptools. Is it even necessary to automatically install setuptools? Setuptools-using packages are supposed to use ez_setup.py, or distribute_setup.py for distribute, to ensure that the setuptools package is available during setup. Although I must admit that I have no idea how many packages still do this instead of assuming that users will have installed setuptools anyway. Ronald From vinay_sajip at yahoo.co.uk Tue Mar 26 10:39:28 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 09:39:28 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Paul Moore gmail.com> writes: > Interesting! Is the source available anywhere? (Not distil.py, but the Not in a public VCS repo - I'm not soliciting contributions, as I'm still experimenting with a few features. But it'd be nice if the packaging-savvy readers here played with it just as users, and gave some feedback from that perspective. > source of the big chunk of embedded zipfile that appears to contain > the bulk of the functionality...) That zipfile contains distlib, some CLI support code and a packager.py, which contains the distil core functionality.
Regards, Vinay Sajip From pombredanne at nexb.com Tue Mar 26 10:49:18 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Tue, 26 Mar 2013 10:49:18 +0100 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 9:54 AM, Vinay Sajip wrote: > I've created a new tool called distil which I'm using to experiment with > packaging functionality. Nice! > * Very simple deployment - just copy distil.py[1] to a location on your path, > optionally naming it to distil on POSIX platforms. There's no need to install > distlib - it's all included. I see that you are using a pattern similar to the virtualenv.py script, embedding other code as a compressed byte array. See how virtualenv.py is turning out to be lately: https://github.com/pypa/virtualenv/blob/11ccab2698274f0e10b72da863f9efb73cf1a9aa/virtualenv.py#L1937 I am in general fine with the approach, though I feel a bit uncomfy with this approach creeping in as "the" way to bootstrap things with one single file for core distribution-related tools. Would anyone know of a better way to package things in a single python-executable bootstrapping script file without obfuscating the source contents in compressed/encoded/obfuscated byte arrays? Also, in your code calling this binary payload STUFF feels a tad scary: this is arbitrary code that I cannot see nor inspect before running. I would not want to run unknown STUFFs on my machine ... and even more so since the corresponding sources are not available publicly yet in a source repo. At the minimum, getting some comments or explicit variable names the virtualenv way on what this payload is would help IMHO: "STUFF = """ eJyEm1OMLlC3Zb+ybbtO1Snbtm3brjpl27Zt27Zt27b6Tye3b9J9k37YK9kv+2FmPMxkrC0vBQKKCgA AIAGIUeqCCqFgEh/4AACRBQCAD8AFGFs4OVtbGNLpGRoYWdnbOTrTObk7GdnZmlqY0dq7qyhDAUDqZP ......" 
-- Cordially Philippe Ombredanne From p.f.moore at gmail.com Tue Mar 26 11:08:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 26 Mar 2013 10:08:39 +0000 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On 26 March 2013 09:49, Philippe Ombredanne wrote: > Would anyone know of a better way to package things in a single > python-executable bootstrapping script file without obfuscating the > source contents in compressed/encoded/obfuscated byte arrays? Packaging as a zip file is a good way - but on Windows the file needs to be named xxx.py (which is surprising, to say the least :-)) for the relevant file association to be triggered (and on Unix, a #! line needs to be prepended). Windows users could define additional associations (pyz and pywz) for "zipped Python applications". Maybe the Python installer should include these in 3.4+, to improve the visibility of this approach. Paul. From regebro at gmail.com Tue Mar 26 11:20:24 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 26 Mar 2013 11:20:24 +0100 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 9:54 AM, Vinay Sajip wrote: > I've created a new tool called distil which I'm using to experiment with > packaging functionality. Thanks for doing this, I think it's a good way forward. //Lennart From p.f.moore at gmail.com Tue Mar 26 11:29:25 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 26 Mar 2013 10:29:25 +0000 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On 26 March 2013 08:54, Vinay Sajip wrote: > I would welcome any feedback you could give regarding distil/distlib. There is > of course a lot more testing to do, but I consider these initial findings to be > promising, and worth sharing. If you find any problems, you can raise issues > at [5]. A couple of immediate points. 
I tried "distil install distribute pip wheel" which failed, because distribute requires 2to3 to be run as part of setup.py (no real surprise there). But distil *did* partially install wheel, leaving a broken installation (there was no METADATA file in wheel's dist-info directory). I had to manually delete what had been installed of the 3 projects. I'd suggest that distil needs to roll back anything it did after a failed install. Secondly, when there is a C extension in the distribution (on Windows) the install fails even though I have Visual C installed. This is because cl.exe is not on my PATH - distil should do the same detection of the location of Visual C as distutils does. The install does work if cl.exe is on PATH - presumably, though, it doesn't check that it is the *right* cl.exe (2010 for Python 3.3, 2008 for 2.7, etc). Also distil doesn't deal with packages with optional C extensions - but again, that's a case of a "too complex" setup.py (and I'm glad it picks the option of installing the C extension in that case, and not just the pure Python version). But other than these niggles, it's impressively effective so far :-) Paul. PS I'm not entirely happy with the default of installing to the user packages directory. 99.9% of my time, I'm installing into a virtualenv, and this default is very wrong - as the installed packages will "infect" all of my virtualenvs. From vinay_sajip at yahoo.co.uk Tue Mar 26 11:34:33 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 10:34:33 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Philippe Ombredanne nexb.com> writes: > I see that you are using a pattern similar to the virtualenv.py > script, embedding other code as a compressed byte array. > > I am in general fine with the approach, though I feel a bit uncomfy > with this approach creeping in as "the" way to bootstrap things with > one single file for core distribution-related tools.
It's not a particularly new approach - it's just that way because it makes things easier for the user. If I had used a more conventional approach, I'm not sure as many people would be willing to try it. Like virtualenv, it's a tool that cannot rely on the presence of existing installation tools. > Would anyone know of a better way to package things in a single > python-executable bootstrapping script file without obfuscating the > source contents in compressed/encoded/obfuscated byte arrays? It's only obfuscated as a side-effect - the other way would be to put all your code in a single module - not much fun to maintain, that way. But if someone has a better way, that would certainly be of interest. > Also, in your code calling this binary payload STUFF feels a tad > scary: this is arbitrary code that I cannot see nor inspect before > running. Would you find it more trustworthy if it was called TRUST_ME_ITS_SAFE? ;-) Remember, it's just a Python script running without system privileges. > I would not want to run unknown STUFFs on my machine ... and even more > so since the corresponding sources are not available publicly yet in a > source repo. The absence of a public source repo is a red herring. If you want to inspect the code, the time taken to add a pdb breakpoint after the .zip write, and to unzip the file to a folder of your choice (or to add code to distil.py to do this), is trivial compared to the time you would spend doing the actual code inspection. The code is open to inspection, but I'd hope that most users focus on whether the tool has useful qualities, how it could be used to move packaging forwards, what it demonstrates about distlib etc. Have you inspected setuptools or pip code to verify that they are safe? As well as everything you've ever downloaded from PyPI, which might or might not be exactly the same as what's shown in a project's public VCS repo? 
> At the minimum, getting some comments or explicit variable names the > virtualenv way on what this payload is would help IMHO: > > "STUFF = """ > eJyEm1OMLlC3Zb+ybbtO1Snbtm3brjpl27Zt27Zt27b6Tye3b9J9k37YK9kv+2FmPMxkrC0vBQKKCgA > AIAGIUeqCCqFgEh/4AACRBQCAD8AFGFs4OVtbGNLpGRoYWdnbOTrTObk7GdnZmlqY0dq7qyhDAUDqZP > ......" > While virtualenv has a number of discrete files, I have just one zip file containing distlib, CLI support code and distil code - that's a lot of files, so I'm not sure a comment would be all that helpful. What would it really tell you? What "STUFF" is really saying, to most users, is "stuff you don't need to care about the details of". For the security-conscious, a mere comment from a potentially untrusted source is no substitute for that unzip + time-consuming code inspection. Regards, Vinay Sajip From p.f.moore at gmail.com Tue Mar 26 11:42:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 26 Mar 2013 10:42:22 +0000 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On 26 March 2013 08:54, Vinay Sajip wrote: > I would welcome any feedback you could give regarding distil/distlib. There is > of course a lot more testing to do, but I consider these initial findings to be > promising, and worth sharing. If you find any problems, you can raise issues > at [5]. One other (slight) oddity. I installed coverage into an empty virtualenv based on Python 3.3. It installed coverage-2.7 and coverage2 executables into Scripts. Why 2.7? Where did it get the idea that this was a Python 2.7 installation? I ran distil with the python 3.3 that is installed in the virtualenv. Paul From vinay_sajip at yahoo.co.uk Tue Mar 26 11:48:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 10:48:43 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Paul Moore gmail.com> writes: > A couple of immediate points.
I tried "distil install distribute pip > wheel" which failed, because distribute requires 2to3 to be run as > part of setup.py (no real surprise there). But distil *did* partially > install wheel, leaving a broken installation (there was no METADATA > filein wheel's dist-info directory). I had to manually delete what had > been installed of the 3 projects. I'd suggest that distil needs to > roll back anything it did after a failed install. There is code in distil to roll back when installation fails, but there could be a bug which prevents it kicking in. I'll investigate. Distil does invoke 2to3 automatically if the metadata indicates it, but the metadata for distribute might be wrong if it was built on a 2.x system. I'll investigate this. > Secondly, when there is a C extension in the distribution (on Windows) > the install fails even though I have Visual C installed. This is > because cl.exe is not on my PATH - distil should do the same detection > of the location of Visual C as distutils does. The install does work > if cl.exe is on PATH - presumably, though, it doesn't check that it is > the *right* cl.exe (2010 for Python 3.3, 2008 for 2007, etc). Also > distil doesn't deal with packages with optional C extensions - but > again, that's a case of a "too complex" setup.py (and I'm glad it > picks the option of installing the C extension in that case, and not > just the pure Python version). distil could certainly be improved in this area, but the documentation [1] mentions that C builds should be run in a Visual Studio command window. The checking for the right version of Visual Studio is for a little later, but it's on my list. > PS I'm not entirely happy with the default of installing to the user > packages directory. 99.9% of my time, I'm installing into a > virtualenv, and this default is very wrong - as the installed packages > will "infect" all of my virtualenvs. In what way does "distil -e install distname" fall short of your expectations? 
If you have a venv activated, it should install in there - does it not do this? Regards, Vinay Sajip [1] http://pythonhosted.org/distil/installing.html#distributions-which-include-c-extensions From vinay_sajip at yahoo.co.uk Tue Mar 26 11:57:16 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 10:57:16 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Paul Moore gmail.com> writes: > I installed coverage into an empty virtuwlenv based on Python 3.3. > > It installed coverage-2.7 and coverage2 executables into Scripts. Why > 2.7? Where did it get the idea that this was a Python 2.7 > installation? I ran distil with the python 3.3 that is installed in > the virtualenv. Ah. The metadata (see [1] for an example) mentions "coverage-2.7" as a script, as it was built on 2.7. That shouldn't really be in the metadata - there should be a single declaration, which is used by distlib/distil to create version-specific aliases. I've now removed it from the metadata from 3.6, you could try again using distil install "coverage (3.6)" to make sure you pick up the version I changed. Regards, Vinay Sajip [1] http://www.red-dove.com/pypi/projects/C/coverage/package-3.6b3.json From dholth at gmail.com Tue Mar 26 12:06:42 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 07:06:42 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: On Mar 26, 2013 5:28 AM, "Ronald Oussoren" wrote: > > > On 25 Mar, 2013, at 19:16, PJ Eby wrote: > > > > > > Also, as far as detecting the need for setuptools, I think that can be > > done just by noticing whether the PKG-INFO included in an sdist is > > metadata 2.0 or not. 
If it is, then setuptools should be explicitly > > declared as a build-time dependency, otherwise it's not needed. If > > it's an older metadata version, then you probably need setuptools. > > Is it even necessary to automaticly install setuptools? Setuptools-using package are supposed to use ez_setup.py, or distribute_setup.py for distribute, to ensure that the setuptools package is available during setup. Although I must admit that I have no idea how many packages still do this instead of assuming that users will have installed setuptools anyway. > > Ronald > We really really really want to get rid of ez_setup. It is considered by many to be the example of something that should not happen as a side effect of running a build script. When packages no longer have to install themselves, they can just mention setup-requires and the installer grabs the necessary setuptools. From p.f.moore at gmail.com Tue Mar 26 12:07:08 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 26 Mar 2013 11:07:08 +0000 Subject: [Distutils] A new, experimental packaging tool: distil In-Reply-To: References: Message-ID: On 26 March 2013 10:57, Vinay Sajip wrote: > Ah. The metadata (see [1] for an example) mentions "coverage-2.7" as a script, as > it was built on 2.7. That shouldn't really be in the metadata - there should be a > single declaration, which is used by distlib/distil to create version-specific > aliases. > > I've now removed it from the metadata from 3.6, you could try again using > > distil install "coverage (3.6)" > > to make sure you pick up the version I changed. Yes, now it just installs coverage.exe. (No coverage-3.3.exe? Not that it bothers me, I don't use the version-specific script wrappers anyway). > [1] http://www.red-dove.com/pypi/projects/C/coverage/package-3.6b3.json So distil (or is it distlib?) uses metadata from www.red-dove.com as well as PyPI? That's a bit of a surprise.
I presume this is a short-term fix, what's the longer-term plan for getting such metadata onto PyPI? Paul. From ronaldoussoren at mac.com Tue Mar 26 12:26:51 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 26 Mar 2013 12:26:51 +0100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: <0FF42779-F280-4B9D-B614-DC860160A22B@mac.com> On 26 Mar, 2013, at 12:06, Daniel Holth wrote: > > On Mar 26, 2013 5:28 AM, "Ronald Oussoren" wrote: > > > > > > On 25 Mar, 2013, at 19:16, PJ Eby wrote: > > > > > > > > > Also, as far as detecting the need for setuptools, I think that can be > > > done just by noticing whether the PKG-INFO included in an sdist is > > > metadata 2.0 or not. If it is, then setuptools should be explicitly > > > declared as a build-time dependency, otherwise it's not needed. If > > > it's an older metadata version, then you probably need setuptools. > > > > Is it even necessary to automaticly install setuptools? Setuptools-using package are supposed to use ez_setup.py, or distribute_setup.py for distribute, to ensure that the setuptools package is available during setup. Although I must admit that I have no idea how many packages still do this instead of assuming that users will have installed setuptools anyway. > > > > Ronald > > > > We really really really want to get rid of ez_setup. It is considered by many to be the example of something that should not happen as a side effect of running a build script. That can't be helped with the current tool versions, distutils in current release of python doesn't support setup-requires and hence the only way to use setuptools is by using ez_setup.py. Ez_setup.py will still have to be present for python 2.7 users that want to use "python setup.py ...", and hence can be assumed to be present for now. 
I'm all for adding support for metadata 2.0 to the stdlib for Python 3.4, that way ez_setup.py can be phased out in the long run. > > When packages no longer have to install themselves, they can just mention setup-requires and the installer grabs the necessary setuptools. > That can only be done for sdists with 2.0 metadata, sdists for older versions don't have a setup-requires in their metadata. This is not just for installing, if you want to use setuptools in your setup.py you'll have to make sure it is installed in your setup.py, and with the current version of the packaging tools this means you have to use something like ez_setup.py or tell users to install setuptools themselves. And with some luck a large subset of packages will ship wheels in the future, that way the installer doesn't even have to look at sdists. Ronald From vinay_sajip at yahoo.co.uk Tue Mar 26 12:29:29 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 11:29:29 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Paul Moore gmail.com> writes: > Yes, now it just installs coverage.exe. (No coverage-3.3.exe? Not that > it bothers me, I don't use the version-specific script wrappers > anyway). > So distil (or is it distlib?) uses metadata from www.red-dove.com as > well as PyPI? That's a bit of a surprise. I presume this is a > short-term fix, what's the longer-term plan for getting such metadata > onto PyPI? Yes, it's a short-term fix because otherwise it would be no better than pip in the dependency resolution department: download each dist, run egg_info, look for dependencies, download them, rinse and repeat. I'd love to get this metadata onto PyPI, but that depends on the PyPI folks, and on the metadata being accepted as a format. For it to be proven as a useful format, it needs wider exposure ... so that's what I'm hoping for. No doubt the wider exposure will lead to improvements.
You can think of the red-dove.com location as just a sort of unofficial early version of what could be on PyPI, if the relevant people agree it's useful. Regards, Vinay Sajip From r1chardj0n3s at gmail.com Tue Mar 26 04:55:44 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Tue, 26 Mar 2013 14:55:44 +1100 Subject: [Distutils] PEP 439 updated Message-ID: Hi all, I've updated PEP 439 to note the outcome of the recent discussion regarding setuptools dependencies and a couple of other minor things. The changes are viewable here: http://hg.python.org/peps/diff/0d57c70eff91/pep-0439.txt Richard From dholth at gmail.com Tue Mar 26 13:27:17 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 08:27:17 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <0FF42779-F280-4B9D-B614-DC860160A22B@mac.com> References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> <0FF42779-F280-4B9D-B614-DC860160A22B@mac.com> Message-ID: On Mar 26, 2013 7:26 AM, "Ronald Oussoren" wrote: > > > On 26 Mar, 2013, at 12:06, Daniel Holth wrote: > > > > > On Mar 26, 2013 5:28 AM, "Ronald Oussoren" wrote: > > > > > > > > > On 25 Mar, 2013, at 19:16, PJ Eby wrote: > > > > > > > > > > > > Also, as far as detecting the need for setuptools, I think that can be > > > > done just by noticing whether the PKG-INFO included in an sdist is > > > > metadata 2.0 or not. If it is, then setuptools should be explicitly > > > > declared as a build-time dependency, otherwise it's not needed. If > > > > it's an older metadata version, then you probably need setuptools. > > > > > > Is it even necessary to automaticly install setuptools? Setuptools-using package are supposed to use ez_setup.py, or distribute_setup.py for distribute, to ensure that the setuptools package is available during setup. 
Although I must admit that I have no idea how many packages still do this instead of assuming that users will have installed setuptools anyway. > > > > > > Ronald > > > > > > > We really really really want to get rid of ez_setup. It is considered by many to be the example of something that should not happen as a side effect of running a build script. > > That can't be helped with the current tool versions, distutils in current release of python doesn't support setup-requires and hence the only way to use setuptools is by using ez_setup.py. Ez_setup.py will still have to be present for python 2.7 users that want to use "python setup.py ...", and hence can be assumed to be present for now. I'm all for adding support for metadata 2.0 to the stdlib for Python 3.4, that way ez_setup.py can be phased out in the long run. > > > > > When packages no longer have to install themselves, they can just mention setup-requires and the installer grabs the necessary setuptools. > > > That can only be done for sdists with 2.0 metadata, sdists for older versions don't have a setup-requires in their metadata. This is not just for installing, if you want to use setuptools in your setup.py you'll have to make sure it is installed in your setup.py, and with the current version of the packaging tools this means you have to use something like ez_setup.py or tell users to install setuptools themselves. Yes, which is why we propose to assume Setup-Requires-Dist: setuptools if Metadata-Version < 2.0. Then a no-op ez_setup.py can be added to sys.modules before setup.py runs and the installer will have a lot more control over that side effect. These improved installers will target both 2.7 and 3.4. I do understand that some people feel it is harder to say "manually download the installer and then install what you want" rather than "manually download and install the package you want". 
> And with some luck a large subset of packages will ship wheels in the future, that way the installer doesn't even have to look at sdists. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Tue Mar 26 14:16:54 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 26 Mar 2013 14:16:54 +0100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> <0FF42779-F280-4B9D-B614-DC860160A22B@mac.com> Message-ID: On 26 Mar, 2013, at 13:27, Daniel Holth wrote: > stall themselves, they can just mention setup-requires and the installer grabs the necessary setuptools. > > > > > That can only be done for sdists with 2.0 metadata, sdists for older versions don't have a setup-requires in their metadata. This is not just for installing, if you want to use setuptools in your setup.py you'll have to make sure it is installed in your setup.py, and with the current version of the packaging tools this means you have to use something like ez_setup.py or tell users to install setuptools themselves. > > Yes, which is why we propose to assume Setup-Requires-Dist: setuptools if Metadata-Version < 2.0. Then a no-op ez_setup.py can be added to sys.modules before setup.py runs and the installer will have a lot more control over that side effect. Just because I'm curious, is that control needed to make sure that a new enough version of setuptools gets used (e.g. one that supports modern features, instead of the 2 year old version that is mentioned in ez_setup.py for $SOME_OLD_PACKAGE)? Just assuming that every sdist with old metadata requires setuptools would work, although it will be strange to see that some packages at work that use plain distutils suddenly seem to require setuptools :-) > > These improved installers will target both 2.7 and 3.4.
I do understand that some people feel it is harder to say "manually download the installer and then install what you want" rather than "manually download and install the package you want". Not me. I'm glad to see that the hard work by everyone working in the packaging space is coming to fruition. Infrastructure work is almost never glamorous, and work on Python's packaging system appears to be more stressful than average. Ronald From dholth at gmail.com Tue Mar 26 14:20:14 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 09:20:14 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> <0FF42779-F280-4B9D-B614-DC860160A22B@mac.com> Message-ID: > Just assuming that every sdist with old metadata requires setuptools would work, although it will be strange to see that some packages at work that use plain distutils suddenly seem to require setuptools :-) pip does this already, importing setuptools before running any setup.py >> These improved installers will target both 2.7 and 3.4. I do understand that some people feel it is harder to say "manually download the installer and then install what you want" rather than "manually download and install the package you want". > > Not me. I'm glad to see that the hard work by everyone working in the packaging space is coming to fruition. Infrastructure work is almost never glamorous, and work on Python's packaging system appears to be more stressful than average. You said it. From dholth at gmail.com Tue Mar 26 14:56:33 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 09:56:33 -0400 Subject: [Distutils] PEP 439 updated In-Reply-To: References: Message-ID: Made some progress on the wheel signature system that fills my design requirements of being key-centric and emphatically not GPG.
It turns out RSA signature verification is just pow(signature, pubkey.e, pubkey.n) and some hashing. You would be able to use "openssl genrsa -out private.pem 2048" to generate the private key, "openssl dgst -sha256 -sign private.pem -binary < partial_jws_blob" to do the actual signature, and use key fingerprints (the same 32-byte length as literal Ed25519 public keys) when asking for "something signed with a particular key or keys". RSA, while producing slower and bigger signatures than the elliptic curve Ed25519, would be more palatable to some by being a more conservative choice and you would be able to use openssl for key management. The idea of "multiple signatures / no key revocation" would be limited to "we don't have tuf yet" installs of things like pip or tuf itself, once tuf was available more complex trust delegation would be available and more subtle attacks could be detected. The idea is to have a security system with a tiny implementation when you do not have, want or need something more complex. On Mon, Mar 25, 2013 at 11:55 PM, Richard Jones wrote: > Hi all, > > I've updated PEP 439 to note the outcome of the recent discussion > regarding setuptools dependencies and a couple of other minor things. 
> > The changes are viewable here: > http://hg.python.org/peps/diff/0d57c70eff91/pep-0439.txt > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From tseaver at palladion.com Tue Mar 26 16:21:20 2013 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 26 Mar 2013 11:21:20 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/26/2013 05:28 AM, Ronald Oussoren wrote: > > On 25 Mar, 2013, at 19:16, PJ Eby wrote: >> >> >> Also, as far as detecting the need for setuptools, I think that can >> be done just by noticing whether the PKG-INFO included in an sdist >> is metadata 2.0 or not. If it is, then setuptools should be >> explicitly declared as a build-time dependency, otherwise it's not >> needed. If it's an older metadata version, then you probably need >> setuptools. > > Is it even necessary to automatically install setuptools? > Setuptools-using packages are supposed to use ez_setup.py, or > distribute_setup.py for distribute, to ensure that the setuptools > package is available during setup. No, they are not. That usage was for bootstrapping in an era when setuptools was not widely present. Most packages have *removed* those files today. Tres.
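As a side note on Daniel's "PEP 439 updated" message above: the verification he describes really is just modular exponentiation plus hashing. The sketch below uses toy textbook-RSA parameters purely for illustration (a real key would be 2048 bits, e.g. from `openssl genrsa`, and a real scheme pads the digest rather than reducing it):

```python
import hashlib

# Toy textbook-RSA parameters, illustrative only. Real keys are 2048+ bits,
# generated with e.g. "openssl genrsa -out private.pem 2048".
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 413          # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def _digest(message):
    # Hash the message and reduce it into the modulus. Real signature
    # schemes pad the digest (PKCS#1 v1.5 / PSS) instead of truncating.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    # Signing: exponentiate the digest by the private exponent.
    return pow(_digest(message), d, n)

def verify(message, signature):
    # Verification, as described above: pow(signature, e, n) plus hashing.
    return pow(signature, e, n) == _digest(message)

sig = sign(b"wheel RECORD contents")
print(verify(b"wheel RECORD contents", sig))  # True
```

The point of the sketch is how small the verifier is: an installer that only ever needs to *check* signatures carries no key-generation or signing machinery at all.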
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFRvPAACgkQ+gerLs4ltQ6XhgCgknMlM9drnL5KJKSvoEcuoKqw 60gAn1QyyUersaUdKXbJrpnJuu3AXkzz =i63/ -----END PGP SIGNATURE----- From erik.m.bray at gmail.com Tue Mar 26 16:45:25 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 11:45:25 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Mon, Mar 25, 2013 at 1:23 PM, Daniel Holth wrote: > My vision for the setuptools deprecation process is that distutils > rides into the sunset with it. In this future eventually bugs in > setuptools will be solved by porting setup.py to one of (X, Y, Z) > which haven't necessarily been invented yet. That would be nice (really!) but what are you proposing to replace it with for building packages with heavy reliance on C extensions? Because for that one use case (and perhaps that alone) it works "pretty okay" for most cases. I don't want to start seeing an infinite number of ways to configure and build extension modules. The great thing about using distutils (or some variant) for this is that if I had the source for a package I could just `./setup.py build` and it would "just work" for all but the most complex cases (SciPy for example). I don't want to have a situation where some projects are using bento and others are using scons and some are using waf and others are using autoconf, etc, etc. It's fine if a few projects have their own special needs for build toolchains and I've been saying all along that building should be separate from installing, and it should be easier to drop in one's own build system.
Another thing that setuptools provides that currently works "pretty well" with extension modules is `./setup.py develop`. It calls `setup.py build_ext --inplace` to make extension modules importable. Any build system for extension modules needs to be able to do something similar to support in-place install functionality like `setup.py develop`, `pip install -e`, etc. Erik From ronaldoussoren at mac.com Tue Mar 26 16:46:53 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 26 Mar 2013 16:46:53 +0100 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: On 26 Mar, 2013, at 16:21, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 03/26/2013 05:28 AM, Ronald Oussoren wrote: >> >> On 25 Mar, 2013, at 19:16, PJ Eby wrote: >>> >>> >>> Also, as far as detecting the need for setuptools, I think that can >>> be done just by noticing whether the PKG-INFO included in an sdist >>> is metadata 2.0 or not. If it is, then setuptools should be >>> explicitly declared as a build-time dependency, otherwise it's not >>> needed. If it's an older metadata version, then you probably need >>> setuptools. >> >> Is it even necessary to automatically install setuptools? >> Setuptools-using packages are supposed to use ez_setup.py, or >> distribute_setup.py for distribute, to ensure that the setuptools >> package is available during setup. > > No, they are not. That usage was for bootstrapping in an era when > setuptools was not widely present. Most packages have *removed* those > files today. I didn't know that; all my projects still include the bootstrap code to make it easier to install them in a fresh build of Python. The distribute docs still mention that you should use distribute_setup.py (their version of ez_setup.py) in your project.
Ronald From erik.m.bray at gmail.com Tue Mar 26 17:00:20 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 12:00:20 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: On Tue, Mar 26, 2013 at 11:54 AM, Erik Bray wrote: > I don't think "downloading the installer" should be a side effect of > running the installation either, but until this mess is cleaned up > it's a necessary evil. Yes, making things easier for users who don't > know what they're doing is a legitimate use case. I should clarify--when I write "until this mess is cleaned up" what I really mean is, "as soon as most packages have wheels built for them for a wide range of platforms". Then I don't really see it as an issue :) Erik From erik.m.bray at gmail.com Tue Mar 26 16:54:30 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 11:54:30 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> <344799D2-5904-43D1-8C7D-BE16F083D14A@mac.com> Message-ID: On Tue, Mar 26, 2013 at 11:21 AM, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 03/26/2013 05:28 AM, Ronald Oussoren wrote: >> >> On 25 Mar, 2013, at 19:16, PJ Eby wrote: >>> >>> >>> Also, as far as detecting the need for setuptools, I think that can >>> be done just by noticing whether the PKG-INFO included in an sdist >>> is metadata 2.0 or not. If it is, then setuptools should be >>> explicitly declared as a build-time dependency, otherwise it's not >>> needed. If it's an older metadata version, then you probably need >>> setuptools. >> >> Is it even necessary to automaticly install setuptools? 
>> Setuptools-using package are supposed to use ez_setup.py, or >> distribute_setup.py for distribute, to ensure that the setuptools >> package is available during setup. > > No, they are not. That usage was for bootstrapping in an era when > setuptools was not widely presetn. Most packages have *removed* those > files today. I still use distribute_setup.py very regularly. I'm dealing with scientific users, mostly on Macs or Windows who barely even know what version of Python they have installed (or even what distribution of Python--python.org/macports/homebrew/etc.) much less that they need some variant of setuptools to install a large percentage of packages out there. Sometimes they do have setuptools installed but it's an outdated version, or they didn't install it properly, or something to that effect. I don't think "downloading the installer" should be a side effect of running the installation either, but until this mess is cleaned up it's a necessary evil. Yes, making things easier for users who don't know what they're doing is a legitimate use case. Erik From dholth at gmail.com Tue Mar 26 17:46:48 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 12:46:48 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Tue, Mar 26, 2013 at 11:45 AM, Erik Bray wrote: > On Mon, Mar 25, 2013 at 1:23 PM, Daniel Holth wrote: >> My vision for the setuptools deprecation process is that distutils >> rides into the sunset with it. In this future eventually bugs in >> setuptools will be solved by porting setup.py to one of (X, Y, Z) >> which haven't necessarily been invented yet. > > That would be nice (really!) but what are you proposing replace it for > building packages with heavy reliance on C extensions? Because for > that one use case (and perhaps that alone) it works "pretty okay" for > most cases. 
I don't want to start seeing an infinite number of ways > to configure and build extension modules. The great thing about using > distutils (or some variant) for this is that if I had the source for a > package I could just `./setup.py build` and it would "just work" for > all but the most complex cases (SciPy for example). > > I don't want to have a situation where some projects are using bento > and others are using scons and some are using waf and others are using > autoconf, etc, etc. It's fine if a few projects have their own > special needs for build toolchains and I've been saying all along that > building should be separate from installing, and it should be easier > to drop in one's own build system. > > Another thing that setuptools provides that currently works "pretty > well" with extension modules is `./setup.py develop`. It calls > `setup.py build_ext --inplace` to make extension modules importable. > Any build system for extension modules needs to be able to do > something similar to support in-place install functionality like > `setup.py develop`, `pip install -e`, etc. It's true that de-standardization of the build process has its own problems. As a consolation you don't have to do it as often because we have binary packages. It's hard to say how much fragmentation will happen, but distutils Extension() is an awful way to compile things! It has very little to recommend it apart from that it's there. For example, no parallel builds and no partial re-compiles based on what's changed. The trouble is that we know the packages on the cheeseshop mostly work but it's harder to count the stuff that avoids the cheeseshop because setuptools and setup.py wasn't a good solution. The example I've had to deal with recently is pycairo. They already use waf to compile their Python extension and don't use setup.py at all, so I argue that de-standardization has already happened. 
Whatever very easy (for pycairo) thing we can do to make them "pip install"-able again is a plus. Even my own simple bcrypt wrapper "cryptacular" would compile an assembler file if I knew how to make that work with distutils. It doesn't, and it's a little slower than it would have been if I had a decent build tool. I suspect at least 80% of packages will use some simple thing that comes with Python, two third-party build tools will dominate, and we will discover interesting things that just weren't possible before. At least if someone wants to improve packaging we can make it easy for them to try without having to ask distutils-sig. From pje at telecommunity.com Tue Mar 26 19:15:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 26 Mar 2013 14:15:08 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Tue, Mar 26, 2013 at 12:46 PM, Daniel Holth wrote: > I suspect at least 80% of packages will use some simple thing that > comes with Python, two third-party build tools will dominate, and we > will discover interesting things that just weren't possible before. At > least if someone wants to improve packaging we can make it easy for > them to try without having to ask distutils-sig. ...and for that matter, without them having to monkeypatch distutils, scrape HTML from the PyPI UI, and sandbox-execute other people's setup.py files. ;-) From ncoghlan at gmail.com Tue Mar 26 19:25:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Mar 2013 04:25:48 +1000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 9:15 AM, PJ Eby wrote: > On balance, I think I lean towards just having a simple way to specify > your chosen archiver, so that installing from source checkouts and > dumps is possible.
I just find it annoying that you have to have > *two* files in your checkout, one to say what tool you're using, and > another one to configure it. Ah, you have uncovered part of the cunning plan behind Setup-Requires-Dist and the metadata extension system in 2.0+: once we have the basic hooks in place, then we should be able to embed the config settings for the archivers and builders in the main metadata, without the installer needing to understand the details. Allowing embedded json also supports almost arbitrarily complex config options. For metadata 2.0, however, I'm thinking we should retain the distutils-based status quo for the archive hook and the build hook: Archive hook: python setup.py sdist Environment: current working directory = root dir of source checkout/unpacked tarball Build hook: python setup.py bdist_wheel Environment: current working directory = root dir of unpacked sdist The install tool would then pick up the files from their default output locations. Installing from a checkout/tarball would go through the full daisy chain (make sdist, make wheel from sdist, install the wheel), and installing from sdist would also build the intermediate wheel file. The only entry points inspired hook in 2.0 would be the post-install hook I have discussed previously (and will write up properly in PEP 426 later this week). In theory, we could have separate dependencies for the "make sdist" and "make wheel" parts of the chain, but that seems to add complexity without adequate justification to me. The runtime vs setup split is necessary so that you don't need a build chain on your deployment targets, but it seems comparatively harmless to install the archiver onto a dedicated build system even if you're only building from sdists (particularly when I expect most Python-specific tools to continue to follow the model of handling both archiving and building, rather than having completely separate tools for the two steps). 
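The daisy chain above can be written out as the sequence of commands an install tool would run at each stage. A sketch that only assembles the command lines (the project wheel filename is illustrative, and pip stands in as one possible wheel installer):

```python
# Commands an installer would run for the full daisy chain described
# above: checkout -> sdist -> wheel -> install. This only builds the
# argument lists; actually running them requires a real source tree.

def archive_cmd():
    # Archive hook: cwd = root dir of source checkout / unpacked tarball.
    return ["python", "setup.py", "sdist"]

def build_cmd():
    # Build hook: cwd = root dir of the unpacked sdist.
    return ["python", "setup.py", "bdist_wheel"]

def install_cmd(wheel):
    # Final step: install the wheel the build hook produced.
    return ["pip", "install", wheel]

chain = [archive_cmd(), build_cmd(),
         install_cmd("dist/example-1.0-py2.py3-none-any.whl")]
for cmd in chain:
    print(" ".join(cmd))
```

Installing from a checkout runs all three stages; installing from an sdist skips the first; installing from a wheel runs only the last, which is why build tools never need to be present on deployment targets.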
> (What'd be nice is if you could just somehow detect files like > bento.info and setup.cfg and thereby detect what archiver to use. But > that would have limited extensibility unless there was a standard > naming convention for the files, or a standardized format for at least > the first line in the file or something like that, so you could > identify the needed tool.) Yeah, I plan to use future releases of the 2.x metadata to define hooks for this. We can also start experimenting in 2.0 through entry points and the structured metadata format I will be defining for the post install hook. (Daniel has an entry points extension PEP mostly written, we just haven't got around to publishing it yet...) In the meantime, formalising the "setup.py sdist" and "setup.py bdist_wheel" invocations should provide a useful stepping stone to a setup.py-is-optional future. Cheers, Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Tue Mar 26 19:32:50 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 26 Mar 2013 18:32:50 +0000 (UTC) Subject: [Distutils] A new, experimental packaging tool: distil References: Message-ID: Paul Moore gmail.com> writes: > A couple of immediate points. I tried "distil install distribute pip > wheel" which failed, because distribute requires 2to3 to be run as > part of setup.py (no real surprise there). But distil *did* partially > install wheel, leaving a broken installation (there was no METADATA > filein wheel's dist-info directory). I had to manually delete what had > been installed of the 3 projects. I'd suggest that distil needs to > roll back anything it did after a failed install. 
Another problem with distribute is that you can't install it directly off PyPI with distil, because it does stuff in setup.py in a post-installation step. You will need to use the special wheel I created [1], as I mentioned in my initial post where I showed how to bootstrap pip (or rather, linked to a Gist that shows it being done). Regards, Vinay Sajip [1] https://bitbucket.org/vinay.sajip/distribute3/downloads/ From ncoghlan at gmail.com Tue Mar 26 19:48:15 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Mar 2013 04:48:15 +1000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 3:15 PM, PJ Eby wrote: > More specifically, I was hoping to move the discussion forward on > nailing down some of the details that still need specifying in a PEP > somewhere, to finish out what the "new world" of packaging will look > like. I'm deliberately trying to postpone some of those decisions - one of the reasons distutils2 foundered is because it tried to solve everything at once, and that's just too big a topic. So, *right now*, my focus is on making it possible to systematically decouple building from installing, so that running "setup.py install" on a production system becomes as bizarre an idea as running "make install". As we move further back in the tool chain, I want to follow the lead of the most widely deployed package management systems (i.e. Debian control files and RPM SPEC files) and provide appropriate configurable hooks for *invoking* archivers and builders, allowing developers to choose their own tools, so long as those tools correctly emit standardised formats understood by the rest of the Python packaging ecosystem. In the near term, however, these hooks will still be based on setup.py (specifically setuptools rather than raw distutils, so we can update older versions of Python). 
> OTOH, I suppose a case could be made for checking PKG-INFO into source > control along with the rest of your code, in which case the problem > disappears entirely: there'd be no such thing as a "raw" source in > that case. > > The downside, though, is that there's a small but vocal contingent > that believes checking generated files into source control is a sign > of ultimate evil, so it probably won't be a *popular* choice. > > But, if we support "either you have a setup.cfg specifying your > archiver, or a PKG-INFO so an archiver isn't needed", then that would > probably cover all the bases, actually. Yeah, you're probably right that we will need to support something else in addition to the PKG-INFO file. A PKG-INFO.in could work, though, rather than a completely independent format like setup.cfg. That way we could easily separate a source checkout/tarball (with PKG-INFO.in) from an sdist (with PKG-INFO) from a wheel (with a named .dist-info directory). (For consistency, we may want to rename PKG-INFO to DIST-INFO in sdist 2.0, though) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Mar 26 20:03:51 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 15:03:51 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 2:48 PM, Nick Coghlan wrote: > On Tue, Mar 26, 2013 at 3:15 PM, PJ Eby wrote: >> More specifically, I was hoping to move the discussion forward on >> nailing down some of the details that still need specifying in a PEP >> somewhere, to finish out what the "new world" of packaging will look >> like. > > I'm deliberately trying to postpone some of those decisions - one of > the reasons distutils2 foundered is because it tried to solve > everything at once, and that's just too big a topic. 
> > So, *right now*, my focus is on making it possible to systematically > decouple building from installing, so that running "setup.py install" > on a production system becomes as bizarre an idea as running "make > install". > > As we move further back in the tool chain, I want to follow the lead > of the most widely deployed package management systems (i.e. Debian > control files and RPM SPEC files) and provide appropriate configurable > hooks for *invoking* archivers and builders, allowing developers to > choose their own tools, so long as those tools correctly emit > standardised formats understood by the rest of the Python packaging > ecosystem. > > In the near term, however, these hooks will still be based on setup.py > (specifically setuptools rather than raw distutils, so we can update > older versions of Python). > >> OTOH, I suppose a case could be made for checking PKG-INFO into source >> control along with the rest of your code, in which case the problem >> disappears entirely: there'd be no such thing as a "raw" source in >> that case. >> >> The downside, though, is that there's a small but vocal contingent >> that believes checking generated files into source control is a sign >> of ultimate evil, so it probably won't be a *popular* choice. >> >> But, if we support "either you have a setup.cfg specifying your >> archiver, or a PKG-INFO so an archiver isn't needed", then that would >> probably cover all the bases, actually. > > Yeah, you're probably right that we will need to support something > else in addition to the PKG-INFO file. A PKG-INFO.in could work, > though, rather than a completely independent format like setup.cfg. > That way we could easily separate a source checkout/tarball (with > PKG-INFO.in) from an sdist (with PKG-INFO) from a wheel (with a named > .dist-info directory). > > (For consistency, we may want to rename PKG-INFO to DIST-INFO in sdist > 2.0, though) > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia With Metadata 2.0 it's pretty feasible that sdists will have a trustworthy PKG-INFO at their root since there can be a lot less changing of requires-dist based on whether you are on win32 (perhaps occasionally still changing based on things we forgot to put into environment marker variables). It would not be surprising to see them also grow a full .dist-info directory (with an unfortunate copy of PKG-INFO, named METADATA) just like sdists tend to contain .egg-info directories. You might always regenerate the file anyway as long as you're running the package's build system. I think PKG-INFO is a highly human-editable format. My hypothetical sdist archiver would validate PKG-INFO instead of regenerating it. It should be clear that I am also in the deliberately postpone as much as possible camp. Daniel From pje at telecommunity.com Tue Mar 26 21:08:50 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 26 Mar 2013 16:08:50 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: > I think PKG-INFO is a highly human-editable format. That doesn't mean you necessarily want to edit it yourself; notably, there will likely be some redundancy between the description in the file and other files like the README. Also, today one of the key use cases people have for custom code in setup.py is to pull the package version from a __version__ attribute in a module. (Which is evil, of course, but people do it anyway.) But it might be worth adding a setuptools feature to pull metadata from PKG-INFO (or DIST-INFO) instead of generating a new one, to see what people think of using PKG-INFO first, other files second. In principle, one could reduce a setup.py to just "from setuptools import setup_distinfo; setup_distinfo()" or some such. No matter what, though, there's going to be some redundancy with the rest of the project. 
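PKG-INFO's RFC 822-style header syntax means the standard library can already read it, so a tool that pulls metadata from an existing PKG-INFO rather than regenerating one needs very little code. A sketch (the file contents here are invented for illustration):

```python
# PKG-INFO uses the same header syntax as email messages, so
# email.parser handles the key/value fields directly. The PKG_INFO
# text below is an invented example, not a real project's metadata.
from email.parser import Parser

PKG_INFO = """\
Metadata-Version: 1.2
Name: example
Version: 1.0
Summary: An example distribution
"""

def read_metadata(text):
    msg = Parser().parsestr(text)
    return {"name": msg["Name"], "version": msg["Version"]}

meta = read_metadata(PKG_INFO)
print("%s %s" % (meta["name"], meta["version"]))  # example 1.0
```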
Some people use revision control tags or other automated tags in their versioning, and it's *precisely* these projects that most need raw source builds. Maybe DIST-INFO shouldn't strictly be a PEP 426-conformant file, but rather, a file that allows some additional metadata to be specified via hooks. That way, you could list your version hook, your readme-generation hook, etc. in it, and then the output gets used to generate the final PKG-INFO. So, call it PKG-INFO.in (as Nick said), or BUILD-INFO, or something like that, add a list of "metadata hooks", and presto: no redundancy in the file, so people can check it into source control, and minimal duplication with your build tool. (Presumably, if you use Bento, your BUILD-INFO file would just list the Bento hook and nothing else, if all the other data comes from Bento's .info file.) Heck, in the minimalist case, you could pretend that a missing BUILD-INFO was there and contained a hook that runs setup.py to troll for the metadata, stopping once setup() is called. ;-) And now it's (mostly) backward compatible. From erik.m.bray at gmail.com Tue Mar 26 21:42:11 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 16:42:11 -0400 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Tue, Mar 26, 2013 at 12:46 PM, Daniel Holth wrote: > On Tue, Mar 26, 2013 at 11:45 AM, Erik Bray wrote: >> On Mon, Mar 25, 2013 at 1:23 PM, Daniel Holth wrote: >>> My vision for the setuptools deprecation process is that distutils >>> rides into the sunset with it. In this future eventually bugs in >>> setuptools will be solved by porting setup.py to one of (X, Y, Z) >>> which haven't necessarily been invented yet. >> >> That would be nice (really!) but what are you proposing replace it for >> building packages with heavy reliance on C extensions? 
Because for >> that one use case (and perhaps that alone) it works "pretty okay" for >> most cases. I don't want to start seeing an infinite number of ways >> to configure and build extension modules. The great thing about using >> distutils (or some variant) for this is that if I had the source for a >> package I could just `./setup.py build` and it would "just work" for >> all but the most complex cases (SciPy for example). >> >> I don't want to have a situation where some projects are using bento >> and others are using scons and some are using waf and others are using >> autoconf, etc, etc. It's fine if a few projects have their own >> special needs for build toolchains and I've been saying all along that >> building should be separate from installing, and it should be easier >> to drop in one's own build system. >> >> Another thing that setuptools provides that currently works "pretty >> well" with extension modules is `./setup.py develop`. It calls >> `setup.py build_ext --inplace` to make extension modules importable. >> Any build system for extension modules needs to be able to do >> something similar to support in-place install functionality like >> `setup.py develop`, `pip install -e`, etc. > > It's true that de-standardization of the build process has its own > problems. As a consolation you don't have to do it as often because we > have binary packages. It's hard to say how much fragmentation will > happen, but distutils Extension() is an awful way to compile things! > It has very little to recommend it apart from that it's there. For > example, no parallel builds and no partial re-compiles based on what's > changed. The trouble is that we know the packages on the cheeseshop > mostly work but it's harder to count the stuff that avoids the > cheeseshop because setuptools and setup.py wasn't a good solution. > > The example I've had to deal with recently is pycairo. 
They already > use waf to compile their Python extension and don't use setup.py at > all, so I argue that de-standardization has already happened. Whatever > very easy (for pycairo) thing we can do make them "pip install"-able > again is a plus. > > Even my own simple bcrypt wrapper "cryptacular" would compile an > assembler file if I knew how to make that work with distutils. It > doesn't and it's a little slower than it would have been if I had a > decent build tool. > > I suspect at least 80% of packages will use some simple thing that > comes with Python, two third party build tools will dominate, and we > will discover interesting things that just weren't possible before. At > least if someone wants to improve packaging we can make it easy for > them to try without having to ask distutils-sig. I pretty much agree with you on all of this, but I don't think the question should be ignored either--avoiding this question is one of the things that got previous packaging reform efforts into trouble. Though the agreement to treat "build" and "installation" as two different stories mitigates the issue this time around. In any case it's sort of off topic for this thread so I'll bring it up again elsewhen. One thing I see as a possible short-term solution is to still rely on some version of distutils as a build tool *only*. But it would still be nice to have some easy way to standardize "in-place" installation regardless of how extension modules get built. Erik P.S. pycairo does have a setup.py which worked for me, but the installation instructions say it's "unsupported", though I don't see the waf script doing anything enormously different from it. 
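For context, the in-place installs mentioned above (`setup.py develop`, `pip install -e`) mostly come down to making the source checkout importable. One minimal sketch of that mechanism uses a .pth file in a site directory; real tools also handle metadata, scripts, and uninstall, all of which this skips:

```python
# A minimal stand-in for an in-place ("develop") install: write a .pth
# file into a site directory so the checkout lands on sys.path. Real
# tools also install metadata and scripts; this sketch skips all that.
import os
import site
import sys
import tempfile

def develop_install(source_dir, site_dir):
    # site.py adds each line of a *.pth file found in a site directory
    # to sys.path, which is the mechanism this sketch leans on.
    pth_path = os.path.join(site_dir, "example-dev.pth")
    with open(pth_path, "w") as f:
        f.write(source_dir + "\n")
    return pth_path

source_dir = tempfile.mkdtemp()   # stands in for a source checkout
site_dir = tempfile.mkdtemp()     # stands in for site-packages
develop_install(source_dir, site_dir)
site.addsitedir(site_dir)         # normally happens at interpreter startup
print(source_dir in sys.path)     # True
```

Any replacement build system that can do the equivalent of `build_ext --inplace` can slot into this without caring how the .pth side is managed.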
From erik.m.bray at gmail.com Tue Mar 26 21:58:35 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 16:58:35 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 2:48 PM, Nick Coghlan wrote: > On Tue, Mar 26, 2013 at 3:15 PM, PJ Eby wrote: >> More specifically, I was hoping to move the discussion forward on >> nailing down some of the details that still need specifying in a PEP >> somewhere, to finish out what the "new world" of packaging will look >> like. > > I'm deliberately trying to postpone some of those decisions - one of > the reasons distutils2 foundered is because it tried to solve > everything at once, and that's just too big a topic. > > So, *right now*, my focus is on making it possible to systematically > decouple building from installing, so that running "setup.py install" > on a production system becomes as bizarre an idea as running "make > install". > > As we move further back in the tool chain, I want to follow the lead > of the most widely deployed package management systems (i.e. Debian > control files and RPM SPEC files) and provide appropriate configurable > hooks for *invoking* archivers and builders, allowing developers to > choose their own tools, so long as those tools correctly emit > standardised formats understood by the rest of the Python packaging > ecosystem. Right--what we really need here is something akin to the debian/rules file, only not a shell script :) I like the hook idea. It's the "so long as those tools correctly emit standardised formats" that's the problem. > In the near term, however, these hooks will still be based on setup.py > (specifically setuptools rather than raw distutils, so we can update > older versions of Python). That pretty much eases the concerns I brought up in the "backwards compat" thread. 
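A rough declarative analogue of the debian/rules idea might be a per-project file naming the command for each packaging stage, which the host tool reads and invokes without caring what build system sits underneath. Everything below (the file layout, the stage names) is invented for illustration:

```python
# Hypothetical declarative "rules" file: stage -> command. The host
# tool only knows stage names; the commands can point at setuptools,
# Bento, waf, or anything else that emits the standard formats.
import configparser  # Python 3 spelling of the module

RULES = """\
[stages]
archive = python setup.py sdist
build = python setup.py bdist_wheel
"""

def load_rules(text):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return dict(parser["stages"])

stages = load_rules(RULES)
print(stages["build"])  # python setup.py bdist_wheel
```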
>> But, if we support "either you have a setup.cfg specifying your >> archiver, or a PKG-INFO so an archiver isn't needed", then that would >> probably cover all the bases, actually. > > Yeah, you're probably right that we will need to support something > else in addition to the PKG-INFO file. A PKG-INFO.in could work, > though, rather than a completely independent format like setup.cfg. > That way we could easily separate a source checkout/tarball (with > PKG-INFO.in) from an sdist (with PKG-INFO) from a wheel (with a named > .dist-info directory). I'm partly in favor of just saying, "there should be a PKG-INFO in your version control to be considered a valid python distribution". Intermediate formats like a setup.cfg or Nick's JSON format seem kind of unnecessary to me--why have two different formats to describe the same thing? In cases where the metadata needs to be mutated somehow--such as attaching revision numbers to the version--some sort of PKG-INFO.in like you suggest would be great. But I don't see why it should have a different format from PKG-INFO itself. I'd think it would just be a superset of the metadata format but with support for hooks. Basically akin to what d2to1 does with setup.cfg, but without the unnecessarily different-looking intermediate format (I do agree that a JSON format would allow a much greater degree of expression and flexibility, but I'm not entirely sure how I feel about having one file format that generates an entirely different file format). > (For consistency, we may want to rename PKG-INFO to DIST-INFO in sdist > 2.0, though) +1 Thanks, Erik From erik.m.bray at gmail.com Tue Mar 26 22:01:00 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 17:01:00 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: > On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: >> I think PKG-INFO is a highly human-editable format. 
> > That doesn't mean you necessarily want to edit it yourself; notably, > there will likely be some redundancy between the description in the > file and other files like the README. > > Also, today one of the key use cases people have for custom code in > setup.py is to pull the package version from a __version__ attribute > in a module. (Which is evil, of course, but people do it anyway.) > > But it might be worth adding a setuptools feature to pull metadata > from PKG-INFO (or DIST-INFO) instead of generating a new one, to see > what people think of using PKG-INFO first, other files second. In > principle, one could reduce a setup.py to just "from setuptools import > setup_distinfo; setup_distinfo()" or some such. In other words, using d2to1 and only for `setup.py egg_info` (only not egg_info but whatever we're doing instead to generate the metadata ;) Erik From dholth at gmail.com Tue Mar 26 22:12:53 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 17:12:53 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: I am -1 on renaming anything unless it solves a technical problem. Forever after we will have to explain "well, it used to be called X, now it's called Y..." On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray wrote: > On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: >> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: >>> I think PKG-INFO is a highly human-editable format. >> >> That doesn't mean you necessarily want to edit it yourself; notably, >> there will likely be some redundancy between the description in the >> file and other files like the README. >> >> Also, today one of the key use cases people have for custom code in >> setup.py is to pull the package version from a __version__ attribute >> in a module. (Which is evil, of course, but people do it anyway.) 
>> >> But it might be worth adding a setuptools feature to pull metadata >> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see >> what people think of using PKG-INFO first, other files second. In >> principle, one could reduce a setup.py to just "from setuptools import >> setup_distinfo; setup_distinfo()" or some such. > > In other words, using d2to1 and only for `setup.py egg_info` (only not > egg_info but whatever we're doing instead to generate the metadata ;) > > Erik From donald at stufft.io Tue Mar 26 22:16:59 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 17:16:59 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: > I am -1 on renaming anything unless it solves a technical problem. > Forever after we will have to explain "well, it used to be called X, > now it's called Y..." > > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray wrote: >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: >>>> I think PKG-INFO is a highly human-editable format. >>> >>> That doesn't mean you necessarily want to edit it yourself; notably, >>> there will likely be some redundancy between the description in the >>> file and other files like the README. >>> >>> Also, today one of the key use cases people have for custom code in >>> setup.py is to pull the package version from a __version__ attribute >>> in a module. (Which is evil, of course, but people do it anyway.) >>> >>> But it might be worth adding a setuptools feature to pull metadata >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see >>> what people think of using PKG-INFO first, other files second. In >>> principle, one could reduce a setup.py to just "from setuptools import >>> setup_distinfo; setup_distinfo()" or some such. 
>> >> In other words, using d2to1 and only for `setup.py egg_info` (only not >> egg_info but whatever we're doing instead to generate the metadata ;) >> >> Erik > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Rename it and make it JSON instead of the homebrew* format! * Yes technically it's based on a real format, but that format doesn't support all the things it needs so there are extensions hackishly added to it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Tue Mar 26 22:21:14 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 17:21:14 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> Message-ID: I want to poke myself in the eye every time I have to edit json by hand. Especially the description field. On Mar 26, 2013 5:17 PM, "Donald Stufft" wrote: > > On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: > > > I am -1 on renaming anything unless it solves a technical problem. > > Forever after we will have to explain "well, it used to be called X, > > now it's called Y..." > > > > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray > wrote: > >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: > >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth > wrote: > >>>> I think PKG-INFO is a highly human-editable format. > >>> > >>> That doesn't mean you necessarily want to edit it yourself; notably, > >>> there will likely be some redundancy between the description in the > >>> file and other files like the README. 
> >>> > >>> Also, today one of the key use cases people have for custom code in > >>> setup.py is to pull the package version from a __version__ attribute > >>> in a module. (Which is evil, of course, but people do it anyway.) > >>> > >>> But it might be worth adding a setuptools feature to pull metadata > >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see > >>> what people think of using PKG-INFO first, other files second. In > >>> principle, one could reduce a setup.py to just "from setuptools import > >>> setup_distinfo; setup_distinfo()" or some such. > >> > >> In other words, using d2to1 and only for `setup.py egg_info` (only not > >> egg_info but whatever we're doing instead to generate the metadata ;) > >> > >> Erik > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > > Rename it and make it JSON instead of the homebrew* format! > > * Yes techincally it's based on a real format, but that format doesn't > support all the things it needs so there are hackishly added extensions > added to it. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Mar 26 22:24:53 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 17:24:53 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> Message-ID: <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> On Mar 26, 2013, at 5:21 PM, Daniel Holth wrote: > I want to poke myself in the eye every time I have to edit json by hand. Especially the description field. > > On Mar 26, 2013 5:17 PM, "Donald Stufft" wrote: > > On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: > > > I am -1 on renaming anything unless it solves a technical problem. 
> > Forever after we will have to explain "well, it used to be called X, > > now it's called Y..." > > > > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray wrote: > >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: > >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: > >>>> I think PKG-INFO is a highly human-editable format. > >>> > >>> That doesn't mean you necessarily want to edit it yourself; notably, > >>> there will likely be some redundancy between the description in the > >>> file and other files like the README. > >>> > >>> Also, today one of the key use cases people have for custom code in > >>> setup.py is to pull the package version from a __version__ attribute > >>> in a module. (Which is evil, of course, but people do it anyway.) > >>> > >>> But it might be worth adding a setuptools feature to pull metadata > >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see > >>> what people think of using PKG-INFO first, other files second. In > >>> principle, one could reduce a setup.py to just "from setuptools import > >>> setup_distinfo; setup_distinfo()" or some such. > >> > >> In other words, using d2to1 and only for `setup.py egg_info` (only not > >> egg_info but whatever we're doing instead to generate the metadata ;) > >> > >> Erik > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > > Rename it and make it JSON instead of the homebrew* format! > > * Yes techincally it's based on a real format, but that format doesn't support all the things it needs so there are hackishly added extensions added to it. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > So don't edit it by hand, nobody edits PKG-INFO by hand. PKG-INFO (and the would be replacement) are for tools. 
Archiver can create it however the package author wants to, could be setup.py sdist, could be bentomaker sdist, could be totallyradpackagemaker create. It's a data exchange format not the API for developers or end users. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Tue Mar 26 22:56:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 17:56:47 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> Message-ID: We will have a standard json version of metadata 2. On Mar 26, 2013 5:24 PM, "Donald Stufft" wrote: > > On Mar 26, 2013, at 5:21 PM, Daniel Holth wrote: > > I want to poke myself in the eye every time I have to edit json by hand. > Especially the description field. > On Mar 26, 2013 5:17 PM, "Donald Stufft" wrote: > >> >> On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: >> >> > I am -1 on renaming anything unless it solves a technical problem. >> > Forever after we will have to explain "well, it used to be called X, >> > now it's called Y..." >> > >> > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray >> wrote: >> >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: >> >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth >> wrote: >> >>>> I think PKG-INFO is a highly human-editable format. >> >>> >> >>> That doesn't mean you necessarily want to edit it yourself; notably, >> >>> there will likely be some redundancy between the description in the >> >>> file and other files like the README. 
>> >>> >> >>> Also, today one of the key use cases people have for custom code in >> >>> setup.py is to pull the package version from a __version__ attribute >> >>> in a module. (Which is evil, of course, but people do it anyway.) >> >>> >> >>> But it might be worth adding a setuptools feature to pull metadata >> >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see >> >>> what people think of using PKG-INFO first, other files second. In >> >>> principle, one could reduce a setup.py to just "from setuptools import >> >>> setup_distinfo; setup_distinfo()" or some such. >> >> >> >> In other words, using d2to1 and only for `setup.py egg_info` (only not >> >> egg_info but whatever we're doing instead to generate the metadata ;) >> >> >> >> Erik >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > http://mail.python.org/mailman/listinfo/distutils-sig >> >> >> Rename it and make it JSON instead of the homebrew* format! >> >> * Yes techincally it's based on a real format, but that format doesn't >> support all the things it needs so there are hackishly added extensions >> added to it. >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >> DCFA >> >> > > So don't edit it by hand, nobody edits PKG-INFO by hand. PKG-INFO (and the > would be replacement) are for tools. Archiver can create it however the > package author wants to, could be setup.py sdist, could be bentomaker > sdist, could be totallyradpackagemaker create. It's a data exchange format > not the API for developers or end users. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Tue Mar 26 23:01:28 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 18:01:28 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> Message-ID: <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> On Mar 26, 2013, at 5:56 PM, Daniel Holth wrote: > We will have a standard json version of metadata 2. > > On Mar 26, 2013 5:24 PM, "Donald Stufft" wrote: > > On Mar 26, 2013, at 5:21 PM, Daniel Holth wrote: > >> I want to poke myself in the eye every time I have to edit json by hand. Especially the description field. >> >> On Mar 26, 2013 5:17 PM, "Donald Stufft" wrote: >> >> On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: >> >> > I am -1 on renaming anything unless it solves a technical problem. >> > Forever after we will have to explain "well, it used to be called X, >> > now it's called Y..." >> > >> > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray wrote: >> >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: >> >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth wrote: >> >>>> I think PKG-INFO is a highly human-editable format. >> >>> >> >>> That doesn't mean you necessarily want to edit it yourself; notably, >> >>> there will likely be some redundancy between the description in the >> >>> file and other files like the README. >> >>> >> >>> Also, today one of the key use cases people have for custom code in >> >>> setup.py is to pull the package version from a __version__ attribute >> >>> in a module. (Which is evil, of course, but people do it anyway.) >> >>> >> >>> But it might be worth adding a setuptools feature to pull metadata >> >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see >> >>> what people think of using PKG-INFO first, other files second. In >> >>> principle, one could reduce a setup.py to just "from setuptools import >> >>> setup_distinfo; setup_distinfo()" or some such. 
>> >> >> >> In other words, using d2to1 and only for `setup.py egg_info` (only not >> >> egg_info but whatever we're doing instead to generate the metadata ;) >> >> >> >> Erik >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > http://mail.python.org/mailman/listinfo/distutils-sig >> >> >> Rename it and make it JSON instead of the homebrew* format! >> >> * Yes techincally it's based on a real format, but that format doesn't support all the things it needs so there are hackishly added extensions added to it. >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > > > So don't edit it by hand, nobody edits PKG-INFO by hand. PKG-INFO (and the would be replacement) are for tools. Archiver can create it however the package author wants to, could be setup.py sdist, could be bentomaker sdist, could be totallyradpackagemaker create. It's a data exchange format not the API for developers or end users. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > Hopefully this will be included in .dist-info and in every package so we* can pretend PKG-INFO doesn't exist ;) * The proverbial we. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From erik.m.bray at gmail.com Tue Mar 26 23:47:24 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 26 Mar 2013 18:47:24 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> Message-ID: On Tue, Mar 26, 2013 at 5:21 PM, Daniel Holth wrote: > I want to poke myself in the eye every time I have to edit json by hand. > Especially the description field. I'm with you on that--I much prefer YAML (which is a superset of JSON!) but we don't even have that in the stdlib and it's not worth bikeshedding over to me. > On Mar 26, 2013 5:17 PM, "Donald Stufft" wrote: >> >> >> On Mar 26, 2013, at 5:12 PM, Daniel Holth wrote: >> >> > I am -1 on renaming anything unless it solves a technical problem. >> > Forever after we will have to explain "well, it used to be called X, >> > now it's called Y..." >> > >> > On Tue, Mar 26, 2013 at 5:01 PM, Erik Bray >> > wrote: >> >> On Tue, Mar 26, 2013 at 4:08 PM, PJ Eby wrote: >> >>> On Tue, Mar 26, 2013 at 3:03 PM, Daniel Holth >> >>> wrote: >> >>>> I think PKG-INFO is a highly human-editable format. >> >>> >> >>> That doesn't mean you necessarily want to edit it yourself; notably, >> >>> there will likely be some redundancy between the description in the >> >>> file and other files like the README. >> >>> >> >>> Also, today one of the key use cases people have for custom code in >> >>> setup.py is to pull the package version from a __version__ attribute >> >>> in a module. (Which is evil, of course, but people do it anyway.) >> >>> >> >>> But it might be worth adding a setuptools feature to pull metadata >> >>> from PKG-INFO (or DIST-INFO) instead of generating a new one, to see >> >>> what people think of using PKG-INFO first, other files second. 
In >> >>> principle, one could reduce a setup.py to just "from setuptools import >> >>> setup_distinfo; setup_distinfo()" or some such. >> >> >> >> In other words, using d2to1 and only for `setup.py egg_info` (only not >> >> egg_info but whatever we're doing instead to generate the metadata ;) >> >> >> >> Erik >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > http://mail.python.org/mailman/listinfo/distutils-sig >> >> >> Rename it and make it JSON instead of the homebrew* format! >> >> * Yes techincally it's based on a real format, but that format doesn't >> support all the things it needs so there are hackishly added extensions >> added to it. >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >> DCFA >> > From donald at stufft.io Wed Mar 27 00:01:48 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 19:01:48 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> Message-ID: <223B1B7B-265F-4E83-A8DC-FAC0B0945D93@stufft.io> On Mar 26, 2013, at 6:47 PM, Erik Bray wrote: > I'm with you on that--I much prefer YAML (which is a superset of > JSON!) but we don't even have that in the stdlib and it's not worth > bikeshedding over to me. YAML is great for human editable. I don't think is has much value over JSON for machine oriented data. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Mar 27 00:42:13 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Mar 2013 09:42:13 +1000 Subject: [Distutils] PEP439 and backward compat / easy_install / distlib In-Reply-To: References: <20130324094840.GX9677@merlinux.eu> Message-ID: On Wed, Mar 27, 2013 at 6:42 AM, Erik Bray wrote: > I pretty much agree with you on all of this, but I don't think the > question should be ignored either--avoiding this question is one of > the things that got previous packaging reform efforts into trouble. > Though the agreement to treat "build" and "installation" as two > different stories mitigates the issue this time around. In any case > it's sort of off topic for this thread so I'll bring it up again > elsewhen. One thing I see as a possible short-term solution is to > still rely on some version of distutils as a build tool *only*. But > it would still be nice to have some easy way to standardize "in-place" > installation regardless of how extension modules get built. That's exactly the interim solution I have in mind: for the moment, the "archive system" will be "python setup.py sdist" in an appropriate location and the "build system" will be "python setup.py bdist_wheel". Both will be modelled on pip's current behaviour when installing from sdists - the difference will be in the explicit invocation of the separate steps, rather than handling the whole chain with "setup.py install". Longer term I want to make setup.py optional even for source installs, but that requires further enhancements to the metadata. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Mar 27 00:50:39 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Mar 2013 09:50:39 +1000 Subject: [Distutils] Builders vs Installers In-Reply-To: <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 8:01 AM, Donald Stufft wrote: > Hopefully this will be included in .dist-info and in every package so we* > can pretend PKG-INFO doesn't exist ;) The key-value format is actually easier for hand editing and covers most cases. The extension format allows embedded JSON for more complex cases. As an on-disk format, it's isomorphic to JSON, so I don't actually plan to propose changing it. Where we *do* need JSON-compatible metadata, though, is as an easy-to-pass-around in-memory data structure for use in APIs. In particular, metadata 2.0 will be defining this format (and how to convert it to/from the key/value format) so that the signature of the post-install hook can be: def post_install_hook(installed, previous=None): ... "installed" will be a string-keyed metadata dictionary for the distribution that was just installed, containing only dicts, lists and strings as values. "previous" will be the metadata for the version of the distribution that was previously installed, if any. Cheers, Nick. P.S.
And now I'm leaving for the airport to fly home to Australia - no more replies from me for a couple of days :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Wed Mar 27 01:33:15 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 20:33:15 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 26, 2013, at 7:50 PM, Nick Coghlan wrote: > On Wed, Mar 27, 2013 at 8:01 AM, Donald Stufft wrote: >> Hopefully this will be included in .dist-info and in every package so we* >> can pretend PKG-INFO doesn't exist ;) > > The key-value format is actually easier for hand editing and covers > most cases. The extension format allows embedded JSON for more complex > cases. As an on-disk format, it's isomorphic to JSON, so I don't > actually plan to propose changing it. I disagree.

- These files are used for tools to exchange data, so "hand editable" shouldn't be a primary concern.
- There are a number of current fields where the current format is *not* enough and one-off pseudo-formats have had to be added:
- `Keywords: dog puppy voting election` - A list masquerading as a string; this one needs field.split() to actually parse it
- `Project-URL: Bug, Issue Tracker, http://bitbucket.org/tarek/distribute/issues/` - A dictionary masquerading as a list of strings; this one needs {key.strip(): value.strip() for key, value in [x.rsplit(", ", 1) for x in field]}
- Any of the fields can contain arbitrary content; previously Description had specialized handling for this, which has now been moved to the payload section, but all the same issues there affect other fields.
- The Extension field name using ExtensionName/ActualKey to kludge a nested dictionary
- The ExtensionName/json is a horrible kludge; why are we nesting a format inside of a format instead of just using a format that supports everything we could want?

As far as I can tell the only thing that even uses PKG-INFO is setuptools/distribute, and we want to phase them out of existence anyways. The only other thing I can think of is Wheel, which can either a) be updated to a different format (it's new enough that there's not much need to worry about legacy support) or b) generate the METADATA file just for Wheels. TBH I'd like it if my name was removed as author of the PEP; I only briefly touched the versioning section, and I do not agree with the decision to continue using PKG-INFO and do not want my name attached to a PEP that advocates it. > > Where we *do* need JSON-compatible metadata, though, is as an easy-to-pass-around in-memory data structure for use in APIs. In particular, > metadata 2.0 will be defining this format (and how to convert it > to/from the key/value format) so that the signature of the > post-install hook can be: > > def post_install_hook(installed, previous=None): > ... > > "installed" will be a string-keyed metadata dictionary for the > distribution that was just installed, containing only dicts, lists and > strings as values. > "previous" will be the metadata for the version of the distribution > that was previously installed, if any. > > Cheers, > Nick. > > P.S. And now I'm leaving for the airport to fly home to Australia - no > more replies from me for a couple of days :) > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Wed Mar 27 02:12:02 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 26 Mar 2013 21:12:02 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Tue, Mar 26, 2013 at 8:33 PM, Donald Stufft wrote: > As far as I can tell the only things that even use PKG-INFO is setuptools/distribute and we want to phase them out of existence anyways. The only thing setuptools uses it for is to find out the version of a package in the case where an .egg-info directory or filename doesn't have a version in its filename... which normally only happens in the "setup.py develop" case. So no need to keep it around on my account. ;-) (Some tools do check for the *existence* of a PKG-INFO, like PyPI's sdist upload validation, and the various egg formats require a file *named* PKG-INFO, but AFAIK nothing commonly used out there actually *reads* PKG-INFO or gives a darn about its contents, except for that version usecase mentioned above.) From dholth at gmail.com Wed Mar 27 03:49:35 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 26 Mar 2013 22:49:35 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Tue, Mar 26, 2013 at 9:12 PM, PJ Eby wrote: > On Tue, Mar 26, 2013 at 8:33 PM, Donald Stufft wrote: >> As far as I can tell the only things that even use PKG-INFO is setuptools/distribute and we want to phase them out of existence anyways. 
> > The only thing setuptools uses it for is to find out the version of a > package in the case where an .egg-info directory or filename doesn't > have a version in its filename... which normally only happens in the > "setup.py develop" case. So no need to keep it around on my account. > ;-) > > (Some tools do check for the *existence* of a PKG-INFO, like PyPI's > sdist upload validation, and the various egg formats require a file > *named* PKG-INFO, but AFAIK nothing commonly used out there actually > *reads* PKG-INFO or gives a darn about its contents, except for that > version usecase mentioned above.) > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig It will be OK. Take a deep breath and laugh at the idea that string.rsplit(', ', 1) on a useless field that's probably already posted as a dict to pypi should be considered a serious threat to the future of packaging. If you didn't laugh you can write Metadata 3.0 / define the JSON serialization and we'll write metadata.json into the .dist-info directory. It's not the end of the world, it is the beginning. From donald at stufft.io Wed Mar 27 04:40:02 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 23:40:02 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 26, 2013, at 10:49 PM, Daniel Holth wrote: > On Tue, Mar 26, 2013 at 9:12 PM, PJ Eby wrote: >> On Tue, Mar 26, 2013 at 8:33 PM, Donald Stufft wrote: >>> As far as I can tell the only things that even use PKG-INFO is setuptools/distribute and we want to phase them out of existence anyways. 
>> >> The only thing setuptools uses it for is to find out the version of a >> package in the case where an .egg-info directory or filename doesn't >> have a version in its filename... which normally only happens in the >> "setup.py develop" case. So no need to keep it around on my account. >> ;-) >> >> (Some tools do check for the *existence* of a PKG-INFO, like PyPI's >> sdist upload validation, and the various egg formats require a file >> *named* PKG-INFO, but AFAIK nothing commonly used out there actually >> *reads* PKG-INFO or gives a darn about its contents, except for that >> version usecase mentioned above.) >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > It will be OK. Take a deep breath and laugh at the idea that > string.rsplit(', ', 1) on a useless field that's probably already > posted as a dict to pypi should be considered a serious threat to the > future of packaging. If you didn't laugh you can write Metadata 3.0 / > define the JSON serialization and we'll write metadata.json into the > .dist-info directory. It's not the end of the world, it is the > beginning. Yea, it's totally about keywords and that's just not an example of a larger problem (like embedding little mini json documents) and what we need is another competing standard all because of a legacy file format for a file that barely anything uses right now (which makes it the ideal time _to_ replace it, before it starts being actively used in a widespread fashion). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed Mar 27 05:08:02 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 00:08:02 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: > Yea, it's totally about keywords and that's just not an example of a larger problem (like embedding little mini json documents) and what we need is another competing standard all because of a legacy file format for a file that barely anything uses right now (which makes it the ideal time _to_ replace it, before it starts being actively used in a widespread fashion). We need approximately five fields:

Name
Version
Provides-Extra
Requires-Dist
Setup-Requires-Dist

the rest are useless, never need to be parsed by anyone, or are already sent to pypi as a dict. We need the environment markers language. We need the requirements specifiers >= 4.0.0, < 9. Define the JSON serialization and we'll have this format converted in 50 lines of code or less. It's that easy.
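[Editor's note: a JSON shape for just those five fields might look like the following sketch. The key names and layout are purely illustrative guesses, not anything the thread or any PEP had settled on; the project name and dependencies are made up.]

```python
import json

# Hypothetical JSON keys for the five fields Daniel lists, plus a
# requirement specifier and an environment marker kept as plain strings.
metadata = {
    "name": "example-dist",  # made-up project
    "version": "0.1",
    "provides_extra": ["test"],
    "requires_dist": [
        "somedep (>=4.0.0, <9)",              # requirement specifier
        "argparse; python_version == '2.6'",  # environment marker
    ],
    "setup_requires_dist": ["setuptools"],
}

serialized = json.dumps(metadata, indent=2, sort_keys=True)
roundtripped = json.loads(serialized)
assert roundtripped == metadata  # lossless round trip, no ad hoc parsing
```

The point of the sketch: once the data is structured, serialization needs no field-specific parsing rules at all.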
From ncoghlan at gmail.com Wed Mar 27 05:55:50 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 27 Mar 2013 14:55:50 +1000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On 26 Mar 2013 21:08, "Daniel Holth" wrote: > > > Yea, it's totally about keywords and that's just not an example of a larger problem (like embedding little mini json documents) and what we need is another competing standard all because of a legacy file format for a file that barely anything uses right now (which makes it the ideal time _to_ replace it, before it starts being actively used in a widespread fashion). > > We need approximately five fields: > > Name > Version > Provides-Extra > Requires-Dist > Setup-Requires-Dist > > the rest are useless, never need to be parsed by anyone, or are > already sent to pypi as a dict. > > We need the environment markers language. > > We need the requirements specifiers >= 4.0.0, < 9. > > Define the JSON serialization and we'll have this format converted in > 50 lines of code or less. It's that easy. I've already defined it for the post install hook design, and will now be rewriting the PEP to use that as the base format. As added bonuses, it will allow 2.0 metadata to live alongside 1.1 metadata (due to a different file name), be easier to explain to readers of the PEP and allow us to fix some clumsy legacy naming. When we last considered this question, we were still trying to keep the metadata 1.3 changes minimal to avoid delaying the addition of wheel support to pip. That issue has since been solved more expediently by allowing metadata 1.1 in wheel files. The addition of the post install hook is the other major relevant change, and that's the one which means we need to define a structured metadata format regardless of the on-disk format. 
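[Editor's note: the post-install hook signature Nick sketched earlier in the thread could be exercised as below. Only the signature comes from the thread; the function body and the sample metadata dictionaries are invented for illustration.]

```python
# Signature from Nick's earlier message; everything else here is invented.
def post_install_hook(installed, previous=None):
    """Receive string-keyed metadata dicts (only dicts, lists, strings)."""
    name = installed["name"]
    if previous is None:
        return "installed %s %s" % (name, installed["version"])
    return "upgraded %s from %s to %s" % (
        name, previous["version"], installed["version"])

# Hypothetical metadata dictionaries an installer might pass in.
old = {"name": "example-dist", "version": "0.1"}
new = {"name": "example-dist", "version": "0.2"}

print(post_install_hook(new))       # fresh install case
print(post_install_hook(new, old))  # upgrade case
```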
It all adds up to it making far more sense to just switch the format to JSON for 2.0 rather than persisting with ad hoc attempts to use a key-value multidict for structured data storage. Cheers, Nick. P.S. I forgot LAX has free wi-fi now :) > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Mar 27 13:02:30 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 12:02:30 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I disagree. > > - These files are used for tools to exchange data, so "hand editable" shouldn't be a primary concern. Right. Nobody hand-edits PKG-INFO now, do they? > - There are a number of current fields where the current format is *not* enough and one-off pseudo-formats > have had to be added > - `Keywords: dog puppy voting election` - A list masquerading as a string, this one needs field.split() to > actually parse it > - `Project-URL: Bug, Issue Tracker, http://bitbucket.org/tarek/distribute/issues/` - A dictionary > masquerading as a list of strings, this one needs {key.strip(): value.strip() for key, value in > [x.rsplit(", ", 1) for x in field]} > - Any of the fields can contain arbitrary content, previously Description had specialized handling for > this, which has now been moved to the payload section, but all the same issues there affect other fields.
> - The Extension field name using ExtensionName/ActualKey to kludge a nested dictionary > - The ExtensionName/json is a horrible kludge; why are we nesting a format inside of a format instead of just > using a format that supports everything we could want? > > As far as I can tell the only thing that even uses PKG-INFO is setuptools/distribute and we want to phase them > out of existence anyways. The only other thing I can think of is Wheel which can either a) be updated to a > different format it's new enough there's not much need to worry about legacy support or b) generate the > METADATA file just for Wheels. Please note that:

* I already have a system working fairly well *now* (though it's early days, and needs more testing) where JSON is used for metadata.
* The metadata covers not just the index metadata (PKG-INFO) but also metadata covering how to build, install and test distributions.
* The metadata already exists for the vast bulk of distributions on PyPI and is derived from the setup.py in those distributions. So migration is not much of an issue.
* The "distil" tool demonstrates each of the Archiver, Builder and Installer roles reasonably well for its stage of development.

Donald's above analysis resonates with me - it seems pretty kludgy trying to shoe-horn stuff into a key-value format which doesn't fit it well. There don't seem to be any valid technical arguments for keeping the key-value format, other than "please let's not try to change too many things at once". If that's really going to be accepted as the reason, it strikes me as being a little timid (given what "distil" shows is possible). And that would be enough of a shame as it is, without making things worse by introducing something like ExtensionName/json. To those people who would balk at editing JSON by hand - who's asking you to? Why not just get the data into an appropriate dict, using any tools you like, and then serialise it to JSON? That approach seems to be what JSON was designed for.
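[Editor's note: as a concrete illustration of that suggestion, here are the ad hoc parsing rules quoted above, applied to sample field values and then serialised once the data is in a plain dict. The field contents are examples from the thread, not from a real project's metadata.]

```python
import json

# The one-off parsing rules quoted in the thread, spelled out.
keywords_field = "dog puppy voting election"
keywords = keywords_field.split()  # a list masquerading as a string

project_url_field = [
    "Bug, Issue Tracker, http://bitbucket.org/tarek/distribute/issues/",
]
# a dict masquerading as a list of "label, URL" strings
project_urls = {
    key.strip(): value.strip()
    for key, value in (entry.rsplit(", ", 1) for entry in project_url_field)
}

# Once parsed into a dict, serialising is trivial and unambiguous.
as_json = json.dumps({"keywords": keywords, "project_urls": project_urls})
```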
If any tools need PKG-INFO style metadata, that's easy enough to generate from a JSON format, as distil's wheel building support demonstrates. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Mar 27 13:18:36 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 12:18:36 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: Daniel Holth gmail.com> writes: > We need approximately five fields: > > Name > Version > Provides-Extra > Requires-Dist > Setup-Requires-Dist > > the rest are useless, never need to be parsed by anyone, or are > already sent to pypi as a dict. > > We need the environment markers language. > > We need the requirements specifiers >= 4.0.0, < 9. > You're taking a disappointingly narrow view of the metadata, it seems to me. If you look at the totality of metadata which describes distributions in the here-and-now world of setuptools, it's a lot more than that - just look at any of my metadata files which I've pointed to in the "distil" documentation. You're only talking about *installation* metadata, but even there your coverage is incomplete. I won't go into any more details now, but suffice to say that as I am working on "distil", I am coming across decisions about installation which either I hard-code into distil (thus making it quite likely that another tool will give different results), or enshrine in installation metadata (constraining all compliant tools to adhere to the developer's and/or user's wishes in that area).
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Mar 27 13:22:01 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 12:22:01 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > It all adds up to it making far more sense to just switch the format to JSON for 2.0 rather than persisting with ad hoc attempts to use a key-value multidict for structured data storage. +1, and I sincerely hope you will take a look at the JSON metadata used in distlib/distil to good advantage in dependency resolution, and installing, archiving and building distributions. Regards, Vinay Sajip From dholth at gmail.com Wed Mar 27 13:37:26 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 08:37:26 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 8:18 AM, Vinay Sajip wrote: > Daniel Holth gmail.com> writes: > >> We need approximately five fields: >> >> Name >> Version >> Provides-Extra >> Requires-Dist >> Setup-Requires-Dist >> >> the rest are useless, never need to be parsed by anyone, or are >> already sent to pypi as a dict. >> >> We need the environment markers language. >> >> We need the requirements specifiers >= 4.0.0, < 9. >> > > You're taking a disappointingly narrow view of the metadata, it seems to me. If > you look at the totality of metadata which describes distributions in the > here-and-now world of setuptools, it's a lot more than that - just look at any of > my metadata files which I've pointed to in the "distil" documentation.
> > You're only talking about *installation* metadata, but even there your coverage > is incomplete. I won't go into any more details now, but suffice to say that as > I am working on "distil", I am coming across decisions about installation which > either I hard-code into distil (thus making it quite likely that another tool > will give different results), or enshrine in installation metadata (constraining > all compliant tools to adhere to the developer's and/or user's wishes in that > area). Hooray for JSON. I actually liked the separation and viewed it as a de-coupling feature, but that will probably be less important as we avoid setup.py generating different metadata for each execution. From scarolan at gmail.com Wed Mar 27 14:06:09 2013 From: scarolan at gmail.com (Sean Carolan) Date: Wed, 27 Mar 2013 08:06:09 -0500 Subject: [Distutils] Fwd: python setup.py bdist_rpm fails on RHEL5 x86_64 In-Reply-To: References: Message-ID: My apologies if this comes through twice; I think I sent the first copy before I was approved to send to this list! ********************************************** Hello everyone: I'm attempting to use the bdist_rpm flag to build a plain vanilla, Python 2.7 RPM for RHEL 5 x86_64, but the build command fails. Since you all are the distutils experts I figured you might have seen this before. I also submitted a bug to bugs.python.org: http://bugs.python.org/issue17553 Anyone have some pointers on how to make this build work? thanks Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From regebro at gmail.com Wed Mar 27 14:39:09 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 14:39:09 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 10:08 PM, Paul Moore wrote: > There's a longer-term issue that occurred to me when thinking about > pip's role as a "builder" or an "installer" (to use Nick's > terminology). > > As I understand Nick's vision for the future, installers (like pip) > will locate built wheels and download and install them, and builders > (like distutils and bento) will be responsible for building wheels. > But there's an intermediate role which shouldn't get forgotten in the > transition - the role that pip currently handles with the "pip wheel" > command. This is where I specify a list of distributions, and pip > locates sdists, downloads them, checks dependencies, and ultimately > builds all of the wheels. I'm not sure whether the current idea of > builders includes this "locate, download and resolve dependencies" > function (distutils and bento certainly don't have that capability). Personally I don't see that as an intermediate role at all. That for me is a builder. > I imagine that pip will retain some form of the current "pip wheel" I hope it will not. //Lennart From regebro at gmail.com Wed Mar 27 14:39:45 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 14:39:45 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 12:07 AM, Daniel Holth wrote: > Unix users will always want to compile their own. Yup.
> Pip wheel is not going away I don't see how that follows. //Lennart From dholth at gmail.com Wed Mar 27 14:57:39 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 09:57:39 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 9:39 AM, Lennart Regebro wrote: > On Tue, Mar 26, 2013 at 12:07 AM, Daniel Holth wrote: >> Unix users will always want to compile their own. > > Yup. > >> Pip wheel is not going away > > I don't see how that follows. > > //Lennart Is it too convenient? The tool knows how to find sources, compile them, and install them. It will delegate all the work to the actual build system. If pip was a pure installer without a way to invoke a build system then it wouldn't be able to install from sdist at all. It would help if you'd describe the alternative workflow again. The proposed workflow is "pip wheel package"; "pip install --find-links wheelhouse --no-index package". We don't suggest uploading the wheels used to cache compilation to pypi especially since most of them are probably other people's packages. From regebro at gmail.com Wed Mar 27 15:04:22 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 15:04:22 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 2:57 PM, Daniel Holth wrote: > Is it too convenient? The tool knows how to find sources, compile > them, and install them. It will delegate all the work to the actual > build system. If pip was a pure installer without a way to invoke a > build system then it wouldn't be able to install from sdist at all. All of that should be implemented in a library that pip can use. So this is only a question of a conceptual difference between different tools. 
It makes no sense to have a tool for developers that does everything including building, running tests and packaging, and another tool that does nothing but installs, and creates wheel packages. Making wheels should be a part of the tool used for packaging, not the tool used for installing. //Lennart From p.f.moore at gmail.com Wed Mar 27 15:16:04 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 27 Mar 2013 14:16:04 +0000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On 27 March 2013 14:04, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 2:57 PM, Daniel Holth wrote: >> Is it too convenient? The tool knows how to find sources, compile >> them, and install them. It will delegate all the work to the actual >> build system. If pip was a pure installer without a way to invoke a >> build system then it wouldn't be able to install from sdist at all. > > All of that should be implemented in a library that pip can use. So > this is only a question of a conceptual difference between different > tools. > > It makes no sense to have a tool for developers that does everything > including building, running tests and packaging, and another > tool that does nothing but installs, and creates wheel packages. > > Making wheels should be a part of the tool used for packaging, not > the tool used for installing. But sometimes practicality beats purity. As an end user who wants to just install packages, but who knows that not everything will be available as wheels, I need to be able to build my own wheels. But I don't want a full development tool. Having the install tool able to do a download and build from sdist is a huge convenience to me. Of course if someone builds a "wheelmaker" tool that did precisely what "pip wheel" did, I would have no objections to using that. But even then, the mere existence of another tool doesn't seem to me to be enough justification for removing functionality from pip.
If pip wheel didn't exist, and someone had written wheelmaker, I would not be arguing to *add* pip wheel. But it's there already and there's a much higher bar for removing useful functionality. Paul From dholth at gmail.com Wed Mar 27 15:16:52 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 10:16:52 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 10:04 AM, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 2:57 PM, Daniel Holth wrote: >> Is it too convenient? The tool knows how to find sources, compile >> them, and install them. It will delegate all the work to the actual >> build system. If pip was a pure installer without a way to invoke a >> build system then it wouldn't be able to install from sdist at all. > > All of that should be implemented in a library that pip can use. So > this is only a question of a conceptual difference between different > tools. > > It makes no sense to have a tools for developers that does everything > including running building, running tests and packaging, and another > tool that does nothing but installs, and creates wheel packages. > > Making wheels should be a part of the tool using for packaging, not > the tool used for installing. It kind of works this way already. Pip doesn't include any of the actual wheel building logic, it just collects the necessary sources then calls out to the "build one wheel" tool for each downloaded source archive. The developer's tool has some overlap in functionality but is focused on dealing with one first-party package at a time for upload to the index rather than many packages at a time for download and install. What will change is that pip will include the install logic itself instead of delegating it to "setup.py install", the worst shortcoming of packaging.
From regebro at gmail.com Wed Mar 27 15:41:15 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 15:41:15 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 3:16 PM, Paul Moore wrote: > But sometimes practicality beats purity. As an end user who wants to > just install packages, but who knows that not everything will be > available as wheels, I need to be able to build my own wheels. Can you explain to me why you as an end user can not just install the packages? Why do you need to first build wheels? //Lennart From vinay_sajip at yahoo.co.uk Wed Mar 27 15:44:30 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 14:44:30 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: Message-ID: Paul Moore gmail.com> writes: > Of course if someone builds a "wheelmaker" tool that did precisely > what "pip wheel" did, I would have no objections to using that. But I already have made one, it's called wheeler.py [1]. It uses vanilla pip (not the variant which provides pip wheel) to build wheels from sdists on PyPI. The distil tool builds wheels with or without using vanilla pip as a helper; the vanilla pip helper is needed where you *have* to run setup.py to get a correct build (not always the case). With wheeler.py you need to install distlib, while with distil it's included. > even then, the mere existence of another tool doesn't seem to me to be > enough justification for removing functionality from pip. If pip wheel > didn't exist, and someone had written wheelmaker, I would not be > arguing to *add* pip wheel. But it's there already and there's a much > higher bar for removing useful functionality. I personally have no problem with "pip wheel" staying, but it does muddy pip's original intent as denoted by pip standing for "pip installs packages". 
While "pip wheel" was added as a pragmatic way of getting wheels out there for people to work with, pip's wheel functionality has only recently been added and is unlikely to be widespread, e.g. in distro packages for pip. So it could be reverted (since there are alternatives) and ISTM that the likely impact would only be on a few early adopters. Note that I'm not arguing for reversion at all - it makes sense for there to be multiple implementations of wheel building and usage so that interoperability wrinkles can be ironed out. Regards, Vinay Sajip [1] https://gist.github.com/vsajip/4988471 From vinay_sajip at yahoo.co.uk Wed Mar 27 15:47:19 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 14:47:19 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: Message-ID: Lennart Regebro gmail.com> writes: > > Can you explain to me why you as an end user can not just install the packages? > Why do you need to first build wheels? > One likely scenario on Windows is that you have a compiler and can install from sdists or wheels, but want to distribute packages to people who don't have a compiler, so can only install from wheels. Regards, Vinay Sajip From dholth at gmail.com Wed Mar 27 15:51:42 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 10:51:42 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 10:41 AM, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 3:16 PM, Paul Moore wrote: >> But sometimes practicality beats purity. As an end user who wants to >> just install packages, but who knows that not everything will be >> available as wheels, I need to be able to build my own wheels. > > Can you explain to me why you as an end user can not just install the packages? > Why do you need to first build wheels? > > //Lennart It's because when you install lots of the same packages repeatedly you might want it to be lightning fast the second time.
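That repeat-install speedup can be sketched with a toy wheelhouse cache: build only when no matching wheel is already on disk. The file naming and the "build" step below are stand-ins for illustration, not pip's actual logic:

```python
import os
import tempfile

BUILDS = []  # record how often the expensive build step actually runs

def get_wheel(name, version, wheelhouse):
    """Return a cached wheel if present; otherwise 'build' one into the cache."""
    fname = "%s-%s-py2.py3-none-any.whl" % (name, version)
    path = os.path.join(wheelhouse, fname)
    if not os.path.exists(path):
        BUILDS.append(fname)        # the slow compile step would happen here
        with open(path, "wb") as f:
            f.write(b"fake wheel contents")
    return path

house = tempfile.mkdtemp()
first = get_wheel("lxml", "3.1.0", house)   # slow: nothing cached yet
second = get_wheel("lxml", "3.1.0", house)  # fast: cache hit, no build
```

The second call skips the build entirely, which is the whole point of keeping a local wheelhouse around between virtualenvs.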
The pip wheel workflow also gives you a useful local copy of all the packages you need, insulating yourself from pypi outages. This is the practical side. The long term / bigger picture use case is that the wheel format or an equivalent manifest serves as a sort of packaging WSGI analogue -- a static interface between builds and installs. We would remove the "setup.py install" command entirely. In that world pip would have to build the wheel because it couldn't "just install" the package. The first convenient wheel tool was more like wheeler.py. It was just a shell script that called "pip install --no-install", and then "setup.py bdist_wheel" for each subdirectory in the build directory. From vinay_sajip at yahoo.co.uk Wed Mar 27 15:57:01 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 14:57:01 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: Message-ID: Lennart Regebro gmail.com> writes: > It makes no sense to have a tools for developers that does everything > including running building, running tests and packaging, and another > tool that does nothing but installs, and creates wheel packages. > > Making wheels should be a part of the tool using for packaging, not > the tool used for installing. Don't forget that developers are users too - they consume packages as well as developing them. I see no *conceptual* harm in a tool that can do archive/build/install, as long as it can do them well (harder to do than to say, I know). And I see that there is a place for just-installation functionality which does not require the presence of a build environment. But a single tool could have multiple guises, just as some Unix tools of old behaved differently according to which link they were invoked from (the linked-to executable being the same). Isn't our present antagonism to the idea of having one ring to bind them all due to the qualities specific to that ring (setup.py, calls to setup())?
Regards, Vinay Sajip From dholth at gmail.com Wed Mar 27 16:03:36 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 11:03:36 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 10:57 AM, Vinay Sajip wrote: > Lennart Regebro gmail.com> writes: > >> It makes no sense to have a tools for developers that does everything >> including running building, running tests and packaging, and another >> tool that does nothing but installs, and creates wheel packages. >> >> Making wheels should be a part of the tool using for packaging, not >> the tool used for installing. > > Don't forget that developers are users too - they consume packages as well as > developing them. I see no *conceptual* harm in a tool that can do > archive/build/install, as long as it can do them well (harder to do than to say, > I know). And > I see that there is a place for just-installation functionality which does not > require the presence of a build environment. But a single tool could have > multiple guises, just as some Unix tools of old behaved differently according to > which link they were invoked from (the linked-to executable being the same). > > Isn't our present antagonism to the idea of having one ring to bind them all > due to the qualities specific to that ring (setup.py, calls to setup())? I really think so. distutils is a bad implementation. This has a lot more to do with how it works internally than how its command line interface looks. We can have new tools that do everything with a single command but really delegate the work out to separate decoupled and hopefully pluggable pieces underneath. 
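Those "decoupled and hopefully pluggable pieces" might take the shape of a registry mapping a declared build system to a backend callable, so a single front-end command only delegates rather than owning the build. The backend names and signatures here are invented for the sketch, not an existing interface:

```python
# A single front-end (the "one command" UI) dispatching to pluggable
# build backends. Everything named here is hypothetical.

BACKENDS = {}

def register(name):
    """Decorator that records a build backend under a declared name."""
    def deco(fn):
        BACKENDS[name] = fn
        return fn
    return deco

@register("setuptools")
def build_with_setuptools(src_dir):
    return "%s -> wheel (via setup.py bdist_wheel)" % src_dir

@register("mebs-demo")
def build_with_mebs(src_dir):
    return "%s -> wheel (via demo backend)" % src_dir

def build(src_dir, build_system):
    """Front-end command: look up the declared backend and delegate."""
    try:
        backend = BACKENDS[build_system]
    except KeyError:
        raise ValueError("no backend registered for %r" % build_system)
    return backend(src_dir)

result = build("mypkg/", "mebs-demo")
```

The front-end never needs to know how a backend builds; it only needs the name the project declares.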
From p.f.moore at gmail.com Wed Mar 27 16:11:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 27 Mar 2013 15:11:38 +0000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On 27 March 2013 14:41, Lennart Regebro wrote: > Can you explain to me why you as an end user can not just install the packages? > Why do you need to first build wheels? Mainly just as Daniel said for convenience of repeat installs (in virtualenvs). But also I think there are a *lot* of different workflows out there and we need to avoid focusing on any one exclusively (the strict builder/installer split is more focused on production installs than on developers installing into virtualenvs, for instance). On 27 March 2013 14:44, Vinay Sajip wrote: > I personally have no problem with "pip wheel" staying, but it does muddy pip's > original intent as denoted by pip standing for "pip installs packages". I think we have to remember that pip is a reasonably mature tool with a large existing user base. I don't want the idea that pip is now the "official Python installer" to be at odds with continued support of those users and their backward compatibility needs. Refactoring pip's internals and moving towards support of the new standards and workflow models is one thing, and I'm 100% in favour of that, but I don't see major changes in fundamentals like "pip install foo" (working seamlessly even if there are no wheels available for foo) being on the cards. Having a "pip wheel" command fits into that model simply as a way of saying "stop the pip install process just before the actual final install, so that I can run that final step over and over without redoing the first part". 
Think of it as "pip install --all-but-install" if you like :-) Paul From regebro at gmail.com Wed Mar 27 16:22:38 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 16:22:38 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 3:47 PM, Vinay Sajip wrote: > One likely scenario on Windows is that you have a compiler and can install from > sdists or wheels, but want to distribute packages to people who don't have a > compiler, so can only install from wheels. Which means you are actually not just a simple end user, but ops or devops who want to build packages. And then the question arises why we can't have documentation explaining how to build packages with the packaging tools that is usable for that user. Fine, as a stop-gap measure pip wheel might be useful, as this mythical packaging tool doesn't really exist yet (except as bdist_wheel, but I suspect pip wheel does more than that?) But in the long run I don't see the point, and I think it muddles what pip is and does. //Lennart From dholth at gmail.com Wed Mar 27 16:49:10 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 11:49:10 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 11:22 AM, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 3:47 PM, Vinay Sajip wrote: >> One likely scenario on Windows is that you have a compiler and can install from >> sdists or wheels, but want to distribute packages to people who don't have a >> compiler, so can only install from wheels. > > Which means you are actually not just a simple end user, but ops or > devops who want to build packages. And then the question arises why we > can't have documentation explaining how to build packages with the > packaging tools that is usable for that user. 
> > Fine, as a stop-gap measure pip wheel might be useful, as this > mythical packaging tool doesn't really exist yet (except as > bdist_wheel, but I suspect pip wheel does more than that?) > But in the long run I don't see the point, and I think it muddles what > pip is and does. Then you are also in favor of removing sdist support from the "pip install" command, in the same way that rpm doesn't automatically compile srpm. Pip wheel does nothing more than run bdist_wheel on each package in a requirements set. It's kindof a stopgap measure but it's also a firm foundation for the more decoupled way packaging should work. From donald at stufft.io Wed Mar 27 16:58:15 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 11:58:15 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Mar 27, 2013, at 10:57 AM, Vinay Sajip wrote: > Lennart Regebro gmail.com> writes: > >> It makes no sense to have a tools for developers that does everything >> including running building, running tests and packaging, and another >> tool that does nothing but installs, and creates wheel packages. >> >> Making wheels should be a part of the tool using for packaging, not >> the tool used for installing. > > Don't forget that developers are users too - they consume packages as well as > developing them. I see no *conceptual* harm in a tool that can do > archive/build/install, as long as it can do them well (harder to do than to say, > I know). And > I see that there is a place for just-installation functionality which does not > require the presence of a build environment. But a single tool could have > multiple guises, just as some Unix tools of old behaved differently according to > which link they were invoked from (the linked-to executable being the same). > > Isn't our present antagonism to the idea of having one ring to bind them all > due to the qualities specific to that ring (setup.py, calls to setup())? 
> > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Basically this. There no need to *enforce* that the toolchain each be separate pieces, but rather ensure that it *can*. The current status quo means setuptools (or distutils) are the only name in the game, if you want to do anything else you have to pretend you are setuptools. In short setuptools owns the entire process. The goal here is to break it up so no one tool owns the entire process, but still allow tools to act as more then one part of the process when it makes sense. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From regebro at gmail.com Wed Mar 27 17:18:40 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 17:18:40 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 4:49 PM, Daniel Holth wrote: > Then you are also in favor of removing sdist support from the "pip > install" command, in the same way that rpm doesn't automatically > compile srpm. I was not aware that pip could create sdists. //Lennart From dholth at gmail.com Wed Mar 27 18:09:42 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 13:09:42 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 12:18 PM, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 4:49 PM, Daniel Holth wrote: >> Then you are also in favor of removing sdist support from the "pip >> install" command, in the same way that rpm doesn't automatically >> compile srpm. > > I was not aware that pip could create sdists. 
In my view the fact that pip creates an installation as an artifact of installing from a source package is equivalent to creating a wheel, given that wheel is a format defined as a zip file containing one installation of a distribution. Both operations equally ruin pip's reputation as being an installer instead of a build tool. Instead all installation should have an intermediate, static, documented binary representation created by the build tool that is later moved into place by the install tool. I would be pleased if "pip install" lost the ability to natively install sdists without that intermediate step. From pje at telecommunity.com Wed Mar 27 18:12:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 27 Mar 2013 13:12:08 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 8:02 AM, Vinay Sajip wrote: > To those people who would balk at editing JSON by hand - who's asking you > to? Why not just get the data into an appropriate dict, using any tools you > like, and then serialise it to JSON? The challenge here is again the distinction between raw source and sdist, and the interaction with revision control. Either there has to be some way to tell MEBS (i.e. the overall build system) what tool you're using to generate that JSON, or you have to check a generated file into revision control, and make sure you've updated it. (Which is error prone, even if you don't mind checking generated files into revision control.) This strongly suggests there needs to be *some* human-editable way to at *least* specify what tool you're using to generate the JSON with. 
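One minimal form of that human-editable specification would be a tiny checked-in stub whose only job is to name the tool that generates the JSON, with everything else generated. The file layout, section, key, and tool name below are all hypothetical:

```python
import configparser  # Python 3 module name; ConfigParser on 2.x
import io

# A tiny, hand-editable, VCS-friendly stub. It carries no metadata itself;
# it only says which tool produces the real (generated, not checked-in) JSON.
STUB = u"""\
[metadata]
generator = bento
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(STUB))
generator = cfg.get("metadata", "generator")
# A build front-end would now invoke the named generator to emit the JSON.
```

The point of the sketch is only that the checked-in file stays small enough to edit by hand without ever touching JSON.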
From tseaver at palladion.com Wed Mar 27 18:10:47 2013 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 27 Mar 2013 13:10:47 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/26/2013 09:12 PM, PJ Eby wrote: > (Some tools do check for the *existence* of a PKG-INFO, like PyPI's > sdist upload validation, and the various egg formats require a file > *named* PKG-INFO, but AFAIK nothing commonly used out there actually > *reads* PKG-INFO or gives a darn about its contents, except for that > version usecase mentioned above.) I do have a tool (named 'pkginfo', funnily enough) which does parse them: https://pypi.python.org/pypi/pkginfo http://pythonhosted.org/pkginfo/ I use it in another tool, 'compoze', which allows me to build "cureated" indexes from versions installed locally (e.g., after testing in a virtualenv): https://pypi.python.org/pypi/compoze/ http://docs.repoze.org/compoze/ Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFTKBcACgkQ+gerLs4ltQ4x/wCfZsp/p60ELrQvTCXfdPMhuK1E qJQAoJXvTlTSo1iy/KxylnuizPodbr25 =IJ6t -----END PGP SIGNATURE----- From donald at stufft.io Wed Mar 27 18:30:18 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 13:30:18 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 27, 2013, at 1:12 PM, PJ Eby wrote: > On Wed, Mar 27, 2013 at 8:02 AM, Vinay Sajip wrote: >> To those people who would balk at editing JSON by hand - who's asking you >> to? Why not just get the data into an appropriate dict, using any tools you >> like, and then serialise it to JSON? > > The challenge here is again the distinction between raw source and > sdist, and the interaction with revision control. Either there has to > be some way to tell MEBS (i.e. the overall build system) what tool > you're using to generate that JSON, or you have to check a generated > file into revision control, and make sure you've updated it. (Which > is error prone, even if you don't mind checking generated files into > revision control.) > > This strongly suggests there needs to be *some* human-editable way to > at *least* specify what tool you're using to generate the JSON with. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I don't actually think packaging needs to solve this. 
But there are a number of solutions that come to mind (mostly either expecting a standard command ala setup.py develop to work). If I want to install a development version of say libsodium (just an example C lib) I download it and run ./autogen.sh && make make install but once it's packaged I can install it using the packaging tools. So this issue is really sort of parallel to builders, archivers and even the JSON and it comes down to how does an unpackaged directory of code (the VCS checkout portion isn't really that important here) signal to an installer how to install a development version of it. Personally I think a common entrypoint (ala make install) is the way forward for this. When you leave the realm of package formats (ala sdist, wheel, etc) you start needing to get much more freeform. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed Mar 27 18:41:00 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 13:41:00 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 1:30 PM, Donald Stufft wrote: > > On Mar 27, 2013, at 1:12 PM, PJ Eby wrote: > >> On Wed, Mar 27, 2013 at 8:02 AM, Vinay Sajip wrote: >>> To those people who would balk at editing JSON by hand - who's asking you >>> to? Why not just get the data into an appropriate dict, using any tools you >>> like, and then serialise it to JSON? >> >> The challenge here is again the distinction between raw source and >> sdist, and the interaction with revision control. 
Either there has to >> be some way to tell MEBS (i.e. the overall build system) what tool >> you're using to generate that JSON, or you have to check a generated >> file into revision control, and make sure you've updated it. (Which >> is error prone, even if you don't mind checking generated files into >> revision control.) >> >> This strongly suggests there needs to be *some* human-editable way to >> at *least* specify what tool you're using to generate the JSON with. >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > I don't actually think packaging needs to solve this. But there are a number of solutions that come to mind (mostly either expecting a standard command ala setup.py develop to work). > > If I want to install a development version of say libsodium (just an example C lib) I download it and run ./autogen.sh && make make install but once it's packaged I can install it using the packaging tools. > > So this issue is really sort of parallel to builders, archivers and even the JSON and it comes down to how does an unpackaged directory of code (the VCS checkout portion isn't really that important here) signal to an installer how to install a development version of it. Personally I think a common entrypoint (ala make install) is the way forward for this. When you leave the realm of package formats (ala sdist, wheel, etc) you start needing to get much more freeform. It does get a little murky:

nothing - the file in a source checkout
PKG-INFO - the file in an sdist
PKG-INFO - the re-generated file
PKG-INFO - the installed file (we will probably call it metadata.json soon but the confusion is the same)

I think it might make sense to expect only a stub PKG-INFO[.in] at the root of a VCS checkout, have a 100% generated and hopefully trustworthy .dist-info directory in an sdist, and don't bother regenerating the root PKG-INFO.
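For context on the files being discussed: PKG-INFO (Metadata 1.x) is a set of RFC 822-style headers, which is why a tool like pkginfo can read it with ordinary header parsing. The standard library's email parser is enough for a sketch (the metadata values here are made up):

```python
from email.parser import Parser

# A minimal PKG-INFO as found at the root of an sdist. Metadata-Version 1.x
# files are plain RFC 822 headers, parseable with the stdlib email package.
PKG_INFO = """\
Metadata-Version: 1.1
Name: example-dist
Version: 0.9
Summary: An example distribution
"""

msg = Parser().parsestr(PKG_INFO)
name, version = msg["Name"], msg["Version"]
```

The same parsing works whether the file came from a checkout stub, an sdist, or an installed egg-info directory; only the surrounding conventions differ.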
From vinay_sajip at yahoo.co.uk Wed Mar 27 18:41:00 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 17:41:00 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: PJ Eby telecommunity.com> writes: > The challenge here is again the distinction between raw source and > sdist, and the interaction with revision control. Either there has to > be some way to tell MEBS (i.e. the overall build system) what tool > you're using to generate that JSON, or you have to check a generated > file into revision control, and make sure you've updated it. (Which > is error prone, even if you don't mind checking generated files into > revision control.) > > This strongly suggests there needs to be *some* human-editable way to > at *least* specify what tool you're using to generate the JSON with. There are no doubt many possible workflows, but one such is: metadata input files - any number, hand-edited, checked into source control metadata merge tool - creates JSON metadata from input files JSON metadata - produced by tool, so not checked in If the "merge tool" (which could be a simple Python script) is custom to a project, it can be checked into source control in that project. If it is used across multiple projects, it is maintained as a separate tool in its own repo and, if you are just using it but not maintaining it, it becomes part of your build toolset (like sphinx-build). Actually, the doc tools seem to be a good analogy - create a useful format which is a pain to edit by hand (HTML that looks nice in a browser) from some checked in sources which are reasonable to edit by hand (.rst) + a merge tool (Sphinx). 
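The pipeline just described (hand-edited inputs in, generated JSON out) fits in a few lines; the field names and merge rule below are illustrative, not the actual metadata schema:

```python
import json

# Hand-edited, checked-in input fragments (any number, any format that
# loads into a dict)...
base = {"name": "example-dist", "version": "0.9"}
platform_extra = {"requires": ["requests (>=1.0)"]}

# ...merged by the tool into the generated, not-checked-in JSON metadata.
merged = dict(base)
merged.update(platform_extra)
metadata_json = json.dumps(merged, indent=2, sort_keys=True)

# Consumers never hand-edit this; they just load it back.
regenerated = json.loads(metadata_json)
```

Nobody in this workflow edits JSON by hand; the stable, sorted serialization is purely a machine interchange artifact.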
The merge tool seems similar in kind to the release.py script that many projects have, which creates release distribution files, bumps version numbers, registers and uploads to PyPI. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Mar 27 18:44:00 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 17:44:00 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: Message-ID: Lennart Regebro gmail.com> writes: > Fine, as a stop-gap measure pip wheel might be useful, as this > mythical packaging tool doesn't really exist yet (except as > bdist_wheel, but I suspect pip wheel does more than that?) Well the distil tool does exist, and though I'm not claiming that it's ready for prime-time yet, it seems well on the way to being useful for this and other purposes. Regards, Vinay Sajip From pje at telecommunity.com Wed Mar 27 18:46:52 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 27 Mar 2013 13:46:52 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 1:09 PM, Daniel Holth wrote: > I would be pleased if "pip install" lost > the ability to natively install sdists without that intermediate step. At that point, it would be giving easy_install (or any other tool that did) a comparative advantage. So that's probably not going to fly. (Unless of course you meant that the intermediate step remains transparent to the user.) easy_install (and pip) became popular because they get code from developers to users with the fewest possible steps for people on either end of the distribution channel. Adding pointless steps is both bad UI design and poor marketing. 
From donald at stufft.io Wed Mar 27 18:51:57 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 13:51:57 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 27, 2013, at 1:41 PM, Vinay Sajip wrote: > PJ Eby telecommunity.com> writes: > >> The challenge here is again the distinction between raw source and >> sdist, and the interaction with revision control. Either there has to >> be some way to tell MEBS (i.e. the overall build system) what tool >> you're using to generate that JSON, or you have to check a generated >> file into revision control, and make sure you've updated it. (Which >> is error prone, even if you don't mind checking generated files into >> revision control.) >> >> This strongly suggests there needs to be *some* human-editable way to >> at *least* specify what tool you're using to generate the JSON with. > > There are no doubt many possible workflows, but one such is: > > metadata input files - any number, hand-edited, checked into source control > metadata merge tool - creates JSON metadata from input files > JSON metadata - produced by tool, so not checked in > > If the "merge tool" (which could be a simple Python script) is custom to a > project, it can be checked into source control in that project. If it is used > across multiple projects, it is maintained as a separate tool in its own repo > and, if you are just using it but not maintaining it, it becomes part of your > build toolset (like sphinx-build). Actually, the doc tools seem to be a good > analogy - create a useful format which is a pain to edit by hand (HTML that > looks nice in a browser) from some checked in sources which are reasonable > to edit by hand (.rst) + a merge tool (Sphinx). 
> > The merge tool seems similar in kind to the release.py script that many > projects have, which creates release distribution files, bumps version numbers, > registers and uploads to PyPI. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I don't think the packaging formats should dictate the development flow at all. .IN files and such all dictate how that should be. To me this is an installer issue, not a packaging issue, and it's best solved in the installers. Obviously there is some benefit to a "standard" way for installers to treat these but I don't think it should be defined in terms of the packaging formats. Hence my off-the-cuff suggestion of keeping setup.py develop, or develop.py or some such script whose express purpose is for use with development checkouts, but that development checkouts should be discouraged unless you're actively working on that project. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed Mar 27 18:53:51 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 13:53:51 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: Agreed that python is a fine language for build scripts.
On Wed, Mar 27, 2013 at 1:51 PM, Donald Stufft wrote: > > On Mar 27, 2013, at 1:41 PM, Vinay Sajip wrote: > >> PJ Eby telecommunity.com> writes: >> >>> The challenge here is again the distinction between raw source and >>> sdist, and the interaction with revision control. Either there has to >>> be some way to tell MEBS (i.e. the overall build system) what tool >>> you're using to generate that JSON, or you have to check a generated >>> file into revision control, and make sure you've updated it. (Which >>> is error prone, even if you don't mind checking generated files into >>> revision control.) >>> >>> This strongly suggests there needs to be *some* human-editable way to >>> at *least* specify what tool you're using to generate the JSON with. >> >> There are no doubt many possible workflows, but one such is: >> >> metadata input files - any number, hand-edited, checked into source control >> metadata merge tool - creates JSON metadata from input files >> JSON metadata - produced by tool, so not checked in >> >> If the "merge tool" (which could be a simple Python script) is custom to a >> project, it can be checked into source control in that project. If it is used >> across multiple projects, it is maintained as a separate tool in its own repo >> and, if you are just using it but not maintaining it, it becomes part of your >> build toolset (like sphinx-build). Actually, the doc tools seem to be a good >> analogy - create a useful format which is a pain to edit by hand (HTML that >> looks nice in a browser) from some checked in sources which are reasonable >> to edit by hand (.rst) + a merge tool (Sphinx). >> >> The merge tool seems similar in kind to the release.py script that many >> projects have, which creates release distribution files, bumps version numbers, >> registers and uploads to PyPI. 
>> >> Regards, >> >> Vinay Sajip >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > I don't think the packaging formats should dictate the development flow at all. .IN files and such all dictate how that should be. To me this is an installer issue not a packaging issue and it's best solved in the installers. Obviously there is some benefit to a "standard" way for installers to treat these but I don't think it should be defined in terms of the packaging formats. Hence my off the cuff suggestion of keeping setup.py develop, or develop.py or some such script that express purpose is in use for development checkouts, but that development checkouts should be discouraged unless you're actively working on that project. > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From vinay_sajip at yahoo.co.uk Wed Mar 27 19:04:48 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 18:04:48 +0000 (UTC) Subject: [Distutils] Builders vs Installers References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I don't think the packaging formats should dictate the development flow at all. We might be at cross purposes here. If we posit that packaging metadata is in JSON format (which I think we both favour), I was addressing Daniel's objection to it on the grounds that he doesn't like editing JSON, to suggest an alternative for people with that objection. 
It doesn't follow that they *have* to use any particular workflow or tool, or that packaging formats are dictating it (other than the bare fact that they are JSON). Regards, Vinay Sajip From donald at stufft.io Wed Mar 27 19:09:31 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 14:09:31 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: <67E9EBEF-CADD-4407-A3D6-D6D930D7251D@stufft.io> On Mar 27, 2013, at 2:04 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > > >> I don't think the packaging formats should dictate the development flow at all. > > We might be at cross purposes here. If we posit that packaging metadata is in > JSON format (which I think we both favour), I was addressing Daniel's objection > to it on the grounds that he doesn't like editing JSON, to suggest an alternative > for people with that objection. It doesn't follow that they *have* to use any > particular workflow or tool, or that packaging formats are dictating it (other > than the bare fact that they are JSON). > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Gotcha, yeah, in my mind the JSON is generated by the archiver tool and added to the various types of dists, wheels, etc. What the users actually edit/use is totally up to the archiver tool. It could be .in files, it could be a Python file, it could be YAML, it could pull from a SQLite database. Packaging shouldn't care as long as it gets its sdists, bdists, wheels, etc. in the proper format with the proper metadata files. 
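To make the archiver idea concrete, here is a minimal sketch of a tool that merges a hand-edited input file into the JSON metadata shipped inside an sdist or wheel. All file names and field names below are hypothetical, not part of any proposed standard.

```python
import configparser
import json

# Hypothetical hand-edited input file (the ".in file" a project would
# check into source control); the field names are illustrative only.
METADATA_IN = """
[metadata]
name = example-dist
version = 1.0
requires = requests; sqlalchemy
"""

def merge_metadata(text):
    """Produce the JSON-ready metadata dict an archiver would embed in
    the sdists, bdists and wheels it builds."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    meta = cfg["metadata"]
    return {
        "name": meta["name"],
        "version": meta["version"],
        "run_requires": [r.strip() for r in meta["requires"].split(";")],
    }

if __name__ == "__main__":
    # The generated JSON is a build product, so it is not checked in.
    print(json.dumps(merge_metadata(METADATA_IN), indent=2))
```

Whether the input is an .in file, YAML, or a SQLite database stays the archiver's business on this model; only the emitted JSON would be standardized.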
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From regebro at gmail.com Wed Mar 27 19:50:09 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 19:50:09 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 6:09 PM, Daniel Holth wrote: > In my view the fact that pip creates an installation as an artifact of > installing from a source package is equivalent to creating a wheel, > given that wheel is a format defined as a zip file containing one > installation of a distribution. Both operations equally ruin pip's > reputation as being an installer instead of a build tool. How installing something can ruin pip's reputation as an installer is beyond me. > Instead all > installation should have an intermediate, static, documented binary > representation created by the build tool that is later moved into > place by the install tool. I would be pleased if "pip install" lost > the ability to natively install sdists without that intermediate step. That's a separate issue, but I disagree with that as well. //Lennart From regebro at gmail.com Wed Mar 27 19:50:53 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 19:50:53 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 6:44 PM, Vinay Sajip wrote: > Lennart Regebro gmail.com> writes: > > >> Fine, as a stop-gap measure pip wheel might be useful, as this >> mythical packaging tool doesn't really exist yet (except as >> bdist_wheel, but I suspect pip wheel does more than that?) 
> > Well the distil tool does exist, and though I'm not claiming that it's ready for > prime-time yet, it seems well on the way to being useful for this and other > purposes. Exactly. //Lennart From dholth at gmail.com Wed Mar 27 20:08:52 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 15:08:52 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 2:50 PM, Lennart Regebro wrote: > On Wed, Mar 27, 2013 at 6:09 PM, Daniel Holth wrote: >> In my view the fact that pip creates an installation as an artifact of >> installing from a source package is equivalent to creating a wheel, >> given that wheel is a format defined as a zip file containing one >> installation of a distribution. Both operations equally ruin pip's >> reputation as being an installer instead of a build tool. > > How installing something can ruin the reputation as an installer is beyond me. > >> Instead all >> installation should have an intermediate, static, documented binary >> representation created by the build tool that is later moved into >> place by the install tool. I would be pleased if "pip install" lost >> the ability to natively install sdists without that intermediate step. > > That's a separate issue, but I disagree with that as well. > > //Lennart We have a different definition of build tools if installing an sdist that has a C extension doesn't make pip a build tool already. Clearly we're just going to disagree on this one. From regebro at gmail.com Wed Mar 27 20:20:06 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 27 Mar 2013 20:20:06 +0100 Subject: [Distutils] Builders vs Installers In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 8:08 PM, Daniel Holth wrote: > We have a different definition of build tools if installing an sdist > that has a C extension doesn't make pip a build tool already. 
Then the word "build tool" is irrelevant, and the whole discussion of builders vs installers is pointless, since installers, in the sense most people use the term within Python, by necessity are also builders. The point is still that pip IMO should be a tool to *install* distributions, not *make* distributions. That's what is relevant. If the word "builder" does not describe the tool that builds distributions, then let's not use that word. //Lennart From vinay_sajip at yahoo.co.uk Wed Mar 27 20:21:52 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 19:21:52 +0000 (UTC) Subject: [Distutils] Importable wheels using distlib/distil Message-ID: > I'm not top-posting, but trying to keep GMane happy ... Since wheels are .zip files, they can sometimes be used to provide functionality without needing to be installed. Whereas .zip files contain no convention for indicating compatibility with a particular Python, wheels do contain this compatibility information. Thus, it is possible to check if a wheel can be directly imported from, and the wheel support in distlib allows you to take advantage of this using the mount() and unmount() methods. When you mount a wheel, its absolute path name is added to sys.path, allowing the Python code in it to be imported. (A DistlibException is raised if the wheel isn't compatible with the Python which calls the mount() method.) You don't need mount() just to add the wheel's name to sys.path, or to import pure-Python wheels. The mount() method goes further than just enabling Python imports - any C extensions in the wheel are also made available for import. For this to be possible, the wheel has to be built with additional metadata about extensions - a JSON file called EXTENSIONS which serialises an extension mapping dictionary. This maps extension module names to the names in the wheel of the shared libraries which implement those modules. 
Running unmount() on the wheel removes its absolute pathname from sys.path and makes its C extensions, if any, also unavailable for import. Wheels built with distil contain the EXTENSIONS metadata, so can be mounted complete with C extensions: $ distil download -d /tmp simplejson Downloading simplejson-3.1.2.tar.gz to /tmp/simplejson-3.1.2 63KB @ 73 KB/s 100 % Done: 00:00:00 Unpacking ... done. $ distil package --fo=wh -d /tmp /tmp/simplejson-3.1.2/ The following packages were built: /tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl $ python Python 2.7.2+ (default, Jul 20 2012, 22:15:08) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from distlib.wheel import Wheel >>> w = Wheel('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') >>> w.mount() >>> import simplejson._speedups >>> dir(simplejson._speedups) ['__doc__', '__file__', '__loader__', '__name__', '__package__', 'encode_basestring_ascii', 'make_encoder', 'make_scanner', 'scanstring'] >>> simplejson._speedups.__file__ '/home/vinay/.distlib/dylib-cache/simplejson/_speedups.so' >>> This, IMO, makes the wheel format more useful than it already is :-) Regards, Vinay Sajip From jim at zope.com Wed Mar 27 21:01:20 2013 From: jim at zope.com (Jim Fulton) Date: Wed, 27 Mar 2013 16:01:20 -0400 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 3:21 PM, Vinay Sajip wrote: >> I'm not top-posting, but trying to keep GMane happy ... > > Since wheels are .zip files, they can sometimes be used to provide > functionality without needing to be installed. Whereas .zip files contain no > convention for indicating compatibility with a particular Python, wheels do > contain this compatibility information. Thus, it is possible to check if a > wheel can be directly imported from, and the wheel support in distlib allows > you to take advantage of this using the mount() and unmount() methods. 
When you > mount a wheel, its absolute path name is added to sys.path, allowing the Python > code in it to be imported. (A DistlibException is raised if the wheel isn't > compatible with the Python which calls the mount() method.) > > You don't need mount() just to add the wheel's name to sys.path, or to import > pure-Python wheels. The mount() method goes further than just enabling Python > imports - any C extensions in the wheel are also made available for import. For > this to be possible, the wheel has to be built with additional metadata about > extensions - a JSON file called EXTENSIONS which serialises an extension > mapping dictionary. This maps extension module names to the names in the wheel > of the shared libraries which implement those modules. > > Running unmount() on the wheel removes its absolute pathname from sys.path and > makes its C extensions, if any, also unavailable for import. > > Wheels built with distil contain the EXTENSIONS metadata, so can be mounted > complete with C extensions: > > $ distil download -d /tmp simplejson > Downloading simplejson-3.1.2.tar.gz to /tmp/simplejson-3.1.2 > 63KB @ 73 KB/s 100 % Done: 00:00:00 > Unpacking ... done. > $ distil package --fo=wh -d /tmp /tmp/simplejson-3.1.2/ > The following packages were built: > /tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl > $ python > Python 2.7.2+ (default, Jul 20 2012, 22:15:08) > [GCC 4.6.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
>>>> from distlib.wheel import Wheel >>>> w = Wheel('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') >>>> w.mount() >>>> import simplejson._speedups >>>> dir(simplejson._speedups) > ['__doc__', '__file__', '__loader__', '__name__', '__package__', > 'encode_basestring_ascii', 'make_encoder', 'make_scanner', 'scanstring'] >>>> simplejson._speedups.__file__ > '/home/vinay/.distlib/dylib-cache/simplejson/_speedups.so' >>>> > > This, IMO, makes the wheel format more useful than it already is :-) It's a trap! At least on Unix systems: - Extensions in zip files that get magically extracted to a user's home directory lead to tragic deployment failures for services that run as special users. - Zip files are a pain in the ass during development or debugging. - Zip files are slower to import from (at least in my experience) It would be far better IMO to just unzip the wheel and put that in your path. (I'm hoping that wheels used this way are a suitable replacement for eggs.) Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From vinay_sajip at yahoo.co.uk Wed Mar 27 21:50:51 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 20:50:51 +0000 (UTC) Subject: [Distutils] Importable wheels using distlib/distil References: Message-ID: Jim Fulton zope.com> writes: > It's a trap! > > At least on Unix systems: > > - Extensions in zip files that get magically extracted to a user's > home directory lead to tragic deployment failures for services that > run as special users. I can see how it would lead to problems, but the home directory location is just a proof of concept - the cache doesn't need to be in any private place. > - Zip files are a pain in the ass during development or debugging. Of course, but wheels are for deployment, not development, and this is one possibility for deployment (several people have mentioned wanting to sometimes just add wheels to sys.path rather than installing them, which got me thinking about this functionality). 
> - Zip files are slower to import from (at least in my experience) It's just another option for a user of wheels. Caveat emptor, and all that. > It would be far better IMO to just unzip the wheel and put that in > your path. (I'm hoping that wheels used this way are a suitable > replacement for eggs.) Well, that's tantamount to installing the wheel, which is fine. I was thinking along the lines of egg replacement - AFAIK eggs allow you to import extensions from zip in a similar fashion. Regards, Vinay Sajip From jim at zope.com Wed Mar 27 22:06:38 2013 From: jim at zope.com (Jim Fulton) Date: Wed, 27 Mar 2013 17:06:38 -0400 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 4:50 PM, Vinay Sajip wrote: > Jim Fulton zope.com> writes: > >> It's a trap! >> >> At least on Unix systems: >> >> - Extensions in zip files that get magically extracted to a user's >> home directory lead to tragic deployment failures for services that >> run as special users. > > I can see how it would lead to problems, but the home directory location is > just as a proof of concept - the cache doesn't need to be in any private place. Anywhere you extract them is likely going to lead to access control or security issues and generally cause pain, IMO. > >> - Zip files are a pain in the ass during development or debugging. > > Of course, but wheels are for deployment, not development, and this is one > possibility for deployment (several people have mentioned wanting to sometimes > just add wheels to sys.path rather than installing them, which got me thinking > about this functionality). I expect to use wheels during development just like I use eggs now. Not for development of the wheel/egg, but for development of something that uses it. You're in pdb and you land in a zipped egg/wheel that the package under development invoked, and now you're screwed. 
> >> - Zip files are slower to import from (at least in my experience) > > It's just another option for a user of wheels. Caveat emptor, and all that. It's been tried with eggs. This is not new ground. Encouraging people to do this is going to cause pain and resentment. I think one of the reasons there's so much (IMO mostly irrational) hate for eggs is that people think you can only used zipped eggs, and zipped eggs cause pain and agony. > >> It would be far better IMO to just unzip the wheel and put that in >> your path. (I'm hoping that wheels used this way are a suitable >> replacement for eggs.) > > Well that's tantamount to installing the wheel, Not really. If you just unzip the wheel and add it to your path, you can stop using it by just removing from your path. If you install the wheel, it's contents will be poured into site-packages (and other places). It's much heavier than just adding the wheel (zipped or unzipped) to your path. > which is fine. I was thinking > along the line of egg replacement Me too. > - AFAIK eggs allow you to import extensions > from zip in a similar fashion. Importing from zipped eggs has proved itself to be an anti pattern. Buildout (as of buildout 2) always unzips eggs. It can then generate scripts with just the packages they need by adding (unzipped) eggs to sys.path. Various plugin systems (including buildout itself with extensions and recipes) do this dynamically at run time. It's very useful. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From ralf at systemexit.de Wed Mar 27 23:27:56 2013 From: ralf at systemexit.de (Ralf Schmitt) Date: Wed, 27 Mar 2013 23:27:56 +0100 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: (Jim Fulton's message of "Wed, 27 Mar 2013 17:06:38 -0400") References: Message-ID: <87r4j0eavn.fsf@myhost.lan> Jim Fulton writes: > Anywhere you extract them is likely going to lead to access control > or security issues and generally cause pain, IMO. right! 
search the web for PYTHON_EGG_CACHE. From pje at telecommunity.com Wed Mar 27 23:41:45 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 27 Mar 2013 18:41:45 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 1:51 PM, Donald Stufft wrote: > I don't think the packaging formats should dictate the development flow at all. .IN files and such all dictate how that should be. You can't *not* dictate the flow. If you don't have something to generate the file with, then you're dictating that developers must ship an sdist and users must manually run build tools to make wheels. (Or, more likely, you're creating the motivation for somebody to create a meta-meta-build system that solves the problem, probably by having a bunch of plugins to detect what build system a raw source directory is using.) > To me this is an installer issue not a packaging issue and it's best solved in the installers. Obviously there is some benefit to a "standard" way for installers to treat these but I don't think it should be defined in terms of the packaging formats. It definitely doesn't have to be. distutils2's setup.cfg isn't actually a bad human-readable format, but it's not a *packaging* format. In any case, the only thing an installer needs is a way to get the setup-requires-dist, or the portion of it that pertains to identifying the metadata hooks. The rest could be handled with entry points registered for configuration file names. For example, Bento could expose an entry point like: [mebs.metadata.generators] bento.info = some.module.in.bento:hookfunction And then an installer builds a list of these hooks that are in the setup-requires-dists, and runs them based on the filenames found in the project directory. All done. 
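A rough sketch of the hook dispatch described above, with a plain dict standing in for hooks loaded from the mebs.metadata.generators entry-point group (the group name, hook signatures, and return values are all hypothetical):

```python
import os

# Stand-in for the hypothetical "mebs.metadata.generators" entry-point
# group; a real installer would populate this from the hooks registered
# by the distributions named in setup-requires-dist.
GENERATORS = {
    "bento.info": lambda projdir: {"generated-by": "bento"},
    "setup.cfg": lambda projdir: {"generated-by": "distutils2-style"},
}

def generate_metadata(projdir):
    """Run the hook matching the first recognized configuration file
    found in the project directory."""
    for filename, hook in GENERATORS.items():
        if os.path.exists(os.path.join(projdir, filename)):
            return hook(projdir)
    raise RuntimeError("no recognized build configuration in %r" % projdir)
```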
Basically, the only thing we need is a way to avoid having to either: 1. Make every user install Bento/etc. before trying to install a package from source that uses it, or 2. Embed a registry of every possible build configuration file name into every installer. And this can be done in any of several ways: * Have a standardized naming pattern like pybuild.*, and make the .* part indicate the build tool * Have a standardized single name (like pybuild.cfg), and encode the build tool in a string that can be read regardless of file format, so it can be embedded in whatever format the build tool itself uses * Have a separate file that *only* lists the build tool or setup-requires-dists (and maybe can be extended to contain other information for use with a stdlib-supplied build tool) I personally lean towards the last one, especially if it reuses setup.cfg, because setup.cfg already exists and is fairly standardized. There are even tools that work today to let you do a metadata-free setup.py and specify everything needed in setup.cfg, with environment markers and everything. Heck, IIUC, there's at least one library you can use today with *setuptools* to do that -- it doesn't need distutils2 or any of that, it just translates setup.cfg to setup.py arguments. But an even more important reason to standardize is that there should be one, and preferably only one, obvious way to do it. AFAIK, the distutils2 effort didn't fail because of setup.cfg -- heck, setup.cfg was the main *benefit* I saw in the distutils2 work, everything else about it AFAIK was just setuptools warmed over -- it failed because of trying to boil the ocean and *implement* everything, rather than just standardizing on interfaces. A minimal boilerplate setup.cfg could be something like [build] builder = bento >1.6 And leave it at that. Disadvantage is that it's a dumb boilerplate file for tools that don't use setup.cfg for their configuration -- i.e., it's a minor disadvantage to users of those tools. 
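Reading that boilerplate would be trivial for an installer. A sketch, assuming the [build] section shown above (the builder key is an example from this thread, not an existing standard):

```python
import configparser

# The minimal boilerplate, as it might appear in a project's setup.cfg.
SETUP_CFG = """
[build]
builder = bento >1.6
"""

def read_builder(text):
    """Return the build-tool requirement an installer must satisfy
    before it can invoke the project's build hooks."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return cfg.get("build", "builder", fallback=None)
```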
However, if your preferred build tool generates the file for you, it's no big deal, as long as the generated file doesn't change all the time and you check it into source control. Such a usage pattern is teachable and provides what's needed, without dictating anything about the development workflow, other than that you need to tell installers how to make an sdist if you want people to install stuff you shipped without an sdist or a wheel, or if you want to use any generic build-running tools that need to know your build hook(s). > development checkouts should be discouraged unless you're actively working on that project. Perhaps Jim can chime in on this point, but when you work with a whole bunch of people developing a whole bunch of libraries making up a larger project (e.g. Zope), it doesn't seem very sensible to expect that everybody manually check out and manage all the dependencies they're using. Maybe you could mitigate that somewhat with some sort automated continuous build/release system, but that's not always a practical option. From donald at stufft.io Wed Mar 27 23:55:35 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 18:55:35 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 27, 2013, at 6:41 PM, PJ Eby wrote: > On Wed, Mar 27, 2013 at 1:51 PM, Donald Stufft wrote: >> I don't think the packaging formats should dictate the development flow at all. .IN files and such all dictate how that should be. > > You can't *not* dictate the flow. If you don't have something to > generate the file with, then you're dictating that developers must > ship an sdist and users must manually run build tools to make wheels. 
> > (Or, more likely, you're creating the motivation for somebody to > create a meta-meta-build system that solves the problem, probably by > having a bunch of plugins to detect what build system a raw source > directory is using.) > > >> To me this is an installer issue not a packaging issue and it's best solved in the installers. Obviously there is some benefit to a "standard" way for installers to treat these but I don't think it should be defined in terms of the packaging formats. > > It definitely doesn't have to be. distutils2's setup.cfg isn't > actually a bad human-readable format, but it's not a *packaging* > format. > > In any case, the only thing an installer needs is a way to get the > setup-requires-dist, or the portion of it that pertains to identifying > the metadata hooks. The rest could be handled with entry points > registered for configuration file names. For example, Bento could > expose an entry point like: > > [mebs.metadata.generators] > bento.info = some.module.in.bento:hookfunction > > And then an installer builds a list of these hooks that are in the > setup-requires-dists, and runs them based on the filenames found in > the project directory. All done. > > Basically, the only thing we need is a way to avoid having to either: > > 1. Make every user install Bento/etc. before trying to install a > package from source that uses it, or > 2. Embed a registry of every possible build configuration file name > into every installer. 
> > And this can be done in any of several ways: > > * Have a standardized naming pattern like pybuild.*, and make the .* > part indicate the build tool > > * Have a standardized single name (like pybuild.cfg), and encode the > build tool in a string that can be read regardless of file format, so > it can be embedded in whatever format the build tool itself uses > > * Have a separate file that *only* lists the build tool or > setup-requires-dists (and maybe can be extended to contain other > information for use with a stdlib-supplied build tool) > > I personally lean towards the last one, especially if it reuses > setup.cfg, because setup.cfg already exists and is fairly > standardized. There are even tools that work today to let you do a > metadata-free setup.py and specify everything needed in setup.cfg, > with environment markers and everything. > > Heck, IIUC, there's at least one library you can use today with > *setuptools* to do that -- it doesn't need distutils2 or any of that, > it just translates setup.cfg to setup.py arguments. > > But an even more important reason to standardize is that there should > be one, and preferably only one, obvious way to do it. AFAIK, the > distutils2 effort didn't fail because of setup.cfg -- heck, setup.cfg > was the main *benefit* I saw in the distutils2 work, everything else > about it AFAIK was just setuptools warmed over -- it failed because of > trying to boil the ocean and *implement* everything, rather than just > standardizing on interfaces. A minimal boilerplate setup.cfg could be > something like > > [build] > builder = bento >1.6 > > And leave it at that. Disadvantage is that it's a dumb boilerplate > file for tools that don't use setup.cfg for their configuration -- > i.e., it's a minor disadvantage to users of those tools. However, if > your preferred build tool generates the file for you, it's no big > deal, as long as the generated file doesn't change all the time and > you check it into source control. 
> Such a usage pattern is teachable and provides what's needed, without > dictating anything about the development workflow, other than that you > need to tell installers how to make an sdist if you want people to > install stuff you shipped without an sdist or a wheel, or if you want > to use any generic build-running tools that need to know your build > hook(s). Repurposing a single line of setup.cfg for this use case wouldn't be the worst thing in the world. I don't like setup.cfg and I especially don't like it as the format to exchange the _actual_ metadata, but as a configuration format (configuring which build system to use) it's ok. I still think I prefer a setup.py develop or develop.py to invoke the build system for development builds, but atm the difference between something like echo "[build]\nbuilder = bento > 1.6" > setup.cfg and develop.py is not a hill I care to die on. Maybe Nick has different ideas for how installs from a VCS or an unpacked directory (i.e. explicitly not a package) should look, I don't know. > > >> development checkouts should be discouraged unless you're actively working on that project. > > Perhaps Jim can chime in on this point, but when you work with a whole > bunch of people developing a whole bunch of libraries making up a > larger project (e.g. Zope), it doesn't seem very sensible to expect > that everybody manually check out and manage all the dependencies > they're using. Maybe you could mitigate that somewhat with some sort > automated continuous build/release system, but that's not always a > practical option. Sorry, my statement was a bit unclear; those people would all fall under actively working on that project (Zope in this case). I mean installs from VCSs should be discouraged for end users. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Mar 28 00:12:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 27 Mar 2013 23:12:59 +0000 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On 27 March 2013 21:06, Jim Fulton wrote: >> - AFAIK eggs allow you to import extensions >> from zip in a similar fashion. > > Importing from zipped eggs has proved itself to be an > anti pattern. I don't like the idea of making wheels work like eggs in this respect. As Jim said, (zipped) eggs have a very bad reputation and associating wheels with that type of functionality would be a very bad idea. Wheels are, and should remain, a binary installer format. On the other hand, zipimport is a very cool feature, and seriously under-used. But it has specific benefits and limitations, and in particular zipimport does not support binary extensions for very good reasons. Zip files on sys.path are practical for pure Python code only, IMO. Having said all that, the fact that wheels are zipfiles, and can be used on sys.path, *can* be useful. But it's an incidental benefit and *not* a core feature. Paul. From pje at telecommunity.com Thu Mar 28 02:03:47 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 27 Mar 2013 21:03:47 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 6:55 PM, Donald Stufft wrote: > I still think I prefer a setup.py develop or develop.py to invoke the build system for development builds It's possible we're not talking about the same thing -- I notice you keep mentioning "setup.py develop", which is entirely unrelated to the scenarios I'm talking about. 
"setup.py develop" is for installing something *you* are working on/developing. But depending on raw source doesn't imply that you would be editing or developing that source; it just means that you have a bleeding-edge dependency (which might in turn have others), adding to your management overhead if you have to know how to build an sdist for each of those dependencies whenever you need a refresh. So, I'm not talking about scenarios where a user obtains a source checkout and does something with it, I'm talking about scenarios where the developer of a package wants to declare a *dependency* on *another* package that currently has to be fetched from revision control. So, in order to install *their* package (e.g. to their staging/test server), the install system has to be able to fetch and build from raw sources. > Sorry my statement was a bit unclear, those people would all fall under actively working on that project (Zope in this case). I mean installs from VCS's should be discouraged for end users. Define "end users". ;-) Here's a different example: there was a point at which I was actively developing PEAK-Rules and somebody else was actively developing something that used it. That person wasn't developing PEAK-Rules, and I wasn't part of their project, but they wanted up-to-the-minute versions because I was making changes based on their use cases, which they needed right away. Are they an "end user"? ;-) You could argue that, well, that's just one project, except that what if somebody *else* depends on *their* project, because they're also doing bleeding edge development? Well, that happened, too, because the consumer of PEAK-Rules was doing a bleeding-edge library that *other* people were doing bleeding-edge development against. So now there were two levels of dependency on raw sources. If you don't support these kinds of scenarios, you slow the community's development velocity. 
Not too long ago, Richard Jones posted a graph on r/Python showing how package registration took off exponentially around the time easy_install was released. I think that this is in large part due to the increased development velocity afforded by being able to depend on other packages at both development *and* deployment time. Even though most packages don't depend on the bleeding edge (because they're not themselves the bleeding edge), for individual development it's a godsend to be able to depend on your *own* packages from revision control, without needing all kinds of manual rigamarole to use them. (This is also really relevant for private and corporate-internal development scenarios.) From donald at stufft.io Thu Mar 28 02:43:08 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 21:43:08 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Mar 27, 2013, at 9:03 PM, PJ Eby wrote: > On Wed, Mar 27, 2013 at 6:55 PM, Donald Stufft wrote: >> I still think I prefer a setup.py develop or develop.py to invoke the build system for development builds > > It's possible we're not talking about the same thing -- I notice you > keep mentioning "setup.py develop", which is entirely unrelated to the > scenarios I'm talking about. "setup.py develop" is for installing > something *you* are working on/developing. But depending on raw > source doesn't imply that you would be editing or developing that > source; it just means that you have a bleeding-edge dependency (which > might in turn have others), adding to your management overhead if you > have to know how to build an sdist for each of those dependencies > whenever you need a refresh. 
> > So, I'm not talking about scenarios where a user obtains a source > checkout and does something with it, I'm talking about scenarios where > the developer of a package wants to declare a *dependency* on > *another* package that currently has to be fetched from revision > control. So, in order to install *their* package (e.g. to their > staging/test server), the install system has to be able to fetch and > build from raw sources. I don't think you can, nor should you be able to, explicitly depend on something that is a VCS checkout. Declared dependencies in the metadata should be "abstract", they are a name, possibly a version specifier but they are explicitly _not_ where you get that dependency from. They only become "concrete" when you resolve the abstract dependencies via an index (this index could be PyPI, it could be a directory on your machine, etc). This fits in very well with the idea of "Provides" as well, I do not depend on https://pypi.python.org/packages/source/s/setuptools/setuptools-0.6c11.tar.gz I depend on something that claims to be setuptools it could be https://bitbucket.org/tarek/distribute. The point being me as the theoretical author of setuptools package author can't dictate where to install my package from. > > >> Sorry my statement was a bit unclear, those people would all fall under actively working on that project (Zope in this case). I mean installs from VCS's should be discouraged for end users. > > Define "end users". ;-) > > Here's a different example: there was a point at which I was actively > developing PEAK-Rules and somebody else was actively developing > something that used it. That person wasn't developing PEAK-Rules, and > I wasn't part of their project, but they wanted up-to-the-minute > versions because I was making changes based on their use cases, which > they needed right away. Are they an "end user"? 
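The abstract/concrete distinction above can be sketched in a few lines: an abstract requirement is just a name plus an optional version specifier, and it only becomes concrete when an index maps the name to a download location. The index contents, URLs, and parsing below are invented for illustration, not any real metadata format.

```python
import re

# A toy index: project name -> location of a concrete artifact.
# Names and URLs are purely illustrative.
INDEX = {
    "setuptools": "https://example.invalid/setuptools-0.6c11.tar.gz",
    "peak-rules": "https://example.invalid/PEAK-Rules-0.5a1.tar.gz",
}

def parse_requirement(req):
    """Split an abstract requirement like 'setuptools>=0.6' into
    (name, specifier); the specifier may be empty."""
    m = re.match(r"^\s*([A-Za-z0-9._-]+)\s*(.*)$", req)
    if not m:
        raise ValueError("not a requirement: %r" % req)
    return m.group(1), m.group(2).strip()

def resolve(req, index=INDEX):
    """Turn an abstract requirement into a concrete URL via an index.
    The requirement itself never names a download location."""
    name, spec = parse_requirement(req)
    try:
        return index[name.lower()]
    except KeyError:
        raise LookupError("no distribution for %r in this index" % name)
```

The same abstract requirement resolves differently against a different index (say, a private mirror) — which is exactly the point of keeping "where from" out of the metadata.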
;-) > > You could argue that, well, that's just one project, except that what > if somebody *else* depends on *their* project, because they're also > doing bleeding edge development? > > Well, that happened, too, because the consumer of PEAK-Rules was doing > a bleeding-edge library that *other* people were doing bleeding-edge > development against. So now there were two levels of dependency on > raw sources. I see, I was misunderstanding the use case. Like I said above though I don't think that a package author should dictate where you install X from. I don't know easy_install or buildout very well, but when I need a bleeding-edge release of something in my dependency graph I use pip's requirements.txt file and add a ``-e `` (which pip internally translates to checking the repo out and running setup.py develop). This is where, in my mind, this belongs because requirements.txt lists "concrete" dependencies (it is paired with a index url, defaulting to PyPI) and so I'm just listing another "concrete" dependency. This does mean dependency graphs involving pre-released unpackaged dependencies are less friendly, but I think that's ok because: * Users should opt into development releases - This thought is reflected in the PEP426 where it instructs installers to default to stable releases only unless the end user requests it either via flag, or by explicitly including it in the version spec. * This is already outside of the packaging infrastructure/ecosystem. Sometimes I need system libraries installed too and I need to manually make sure they are installed. Python packaging can't solve every problem. * I think this is an edge case (one I have hit myself) and I don't think it's a large enough use case to break the "abstract"-ness of the PKG-INFO/JSON/whatever metadata. > > If you don't support these kinds of scenarios, you slow the > community's development velocity. 
Not too long ago, Richard Jones > posted a graph on r/Python showing how package registration took off > exponentially around the time easy_install was released. I think that > this is in large part due to the increased development velocity > afforded by being able to depend on other packages at both development > *and* deployment time. Even though most packages don't depend on the > bleeding edge (because they're not themselves the bleeding edge), for > individual development it's a godsend to be able to depend on your > *own* packages from revision control, without needing all kinds of > manual rigamarole to use them. > > (This is also really relevant for private and corporate-internal > development scenarios.) This was a hard email to write because I totally understand the motivation behind it, and I think it's a very attractive mis-feature. It sounds really good but I do not think it's a benefit is large enough to include it inside of "packaging" (the formats and minimum toolchain) with the negative qualities behind it. I do think this is a great value add on for an *installer* but that it should remain in the realms of installer specific (ala requirements.txt). * I can't think of better terms than "abstract" and "concrete" and they don't perfectly describe the difference. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Mar 28 03:05:38 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 28 Mar 2013 12:05:38 +1000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On Thu, Mar 28, 2013 at 11:43 AM, Donald Stufft wrote: > I don't think you can, nor should you be able to, explicitly depend on something that is a VCS checkout. I find it more useful to think of the issue as whether or not you allow publication of source tarballs to satisfy a dependency, or *require* publication of a fully populated sdist. If you allow raw source tarballs, then you effectively allow VCS checkouts as well. I prefer requiring an explicit publication step, but we also need to acknowledge that the installer ecosystem we're trying to replace allows them, and some people are relying on that feature. However, as I've said elsewhere, for metadata 2.0, I *do not* plan to migrate the archiving or build steps away from setup.py. So "give me an sdist" will be spelled "python setup.py sdist" and "give me a wheel file" will be spelled "python setup.py bdist_wheel". There's also an interesting migration problem for pre-2.0 sdists, where we can't assume that "python setup.py bdist_wheel && pip install " is equivalent to "python setup.py install": projects like Twisted that run a post-install hook won't install properly if you build a wheel first, since the existing post-install hook won't run. It's an interesting problem, but one where my near term plans amount to "document the status quo". Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Thu Mar 28 03:09:55 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 22:09:55 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: <9611711E-F8F7-424A-B053-89743C56C86B@stufft.io> On Mar 27, 2013, at 10:05 PM, Nick Coghlan wrote: > On Thu, Mar 28, 2013 at 11:43 AM, Donald Stufft wrote: >> I don't think you can, nor should you be able to, explicitly depend on something that is a VCS checkout. > > I find it more useful to think of the issue as whether or not you > allow publication of source tarballs to satisfy a dependency, or > *require* publication of a fully populated sdist. If you allow raw > source tarballs, then you effectively allow VCS checkouts as well. I > prefer requiring an explicit publication step, but we also need to > acknowledge that the installer ecosystem we're trying to replace > allows them, and some people are relying on that feature. Right, which is why I think the ability to install from a raw source is a good feature for an installer, but not for the dependency metadata. Following that we just need a standard way for a raw source tarball to declare what it's builder is, either via some sort of file that tells you that, or a build script , or something along those lines. > > However, as I've said elsewhere, for metadata 2.0, I *do not* plan to > migrate the archiving or build steps away from setup.py. So "give me > an sdist" will be spelled "python setup.py sdist" and "give me a wheel > file" will be spelled "python setup.py bdist_wheel". 
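The "standard way for a raw source tarball to declare what its builder is" mentioned above could be as small as one statically-parseable section in a file such as setup.cfg. The section and key names below are hypothetical — nothing here is an agreed standard — but the sketch shows the key property: the declaration can be read without executing any code from the source tree.

```python
from configparser import ConfigParser

# A hypothetical static declaration shipped at the root of a raw
# source tree; the [build]/builder names are invented.
EXAMPLE = """\
[build]
builder = bento >= 1.6
"""

def read_builder(text):
    """Return the declared build requirement, or None if the tree
    declares none.  No code from the tree is executed -- that is
    the whole point of a static declaration."""
    cp = ConfigParser()
    cp.read_string(text)
    if cp.has_section("build") and cp.has_option("build", "builder"):
        return cp.get("build", "builder").strip()
    return None
```

An installer would resolve the returned requirement like any other dependency, then hand the tree to that builder.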
> > There's also an interesting migration problem for pre-2.0 sdists, > where we can't assume that "python setup.py bdist_wheel && pip install > " is equivalent to "python setup.py install": projects > like Twisted that run a post-install hook won't install properly if > you build a wheel first, since the existing post-install hook won't > run. > > It's an interesting problem, but one where my near term plans amount > to "document the status quo". > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Thu Mar 28 04:44:10 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 27 Mar 2013 23:44:10 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: <9611711E-F8F7-424A-B053-89743C56C86B@stufft.io> References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> <9611711E-F8F7-424A-B053-89743C56C86B@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 10:09 PM, Donald Stufft wrote: > Right, which is why I think the ability to install from a raw source is a good feature for an installer, but not for the dependency metadata. Sure - I never said the dependency metadata had to be able to say *where* you get the raw source from, just that the tool for resolving dependencies needed to be able to process raw source into something installable. > Following that we just need a standard way for a raw source tarball to declare what it's builder is, either via some sort of file that tells you that, or a build script , or something along those lines. Yep. 
Static configuration is a *must*, here, though, as we want to move away from arbitrary setup script writing by package authors: in general they are really bad at it. A lot of setuptools' odd build-time features (like sandboxing) exist specifically because people write whatever crap they want in setup.py and have zero idea how to actually use/integrate with distutils. One interesting feature that would be possible under a configuration-based system is that you could actually have an installer with a whitelist or blacklist for build tools and setup-requires, in order to prevent or limit untrusted code execution by the overall build system. This would make it slightly more practical to have, say, servers that build wheels, such that only tools the servers' owners know won't import or run arbitrary code are allowed to do the compiling. (Not that that should be the only security involved, but it'd be a cool first-tier sanity check.) (Interestingly, this is also an argument for having a separate "tests-require-dist" in metadata 2.0, since testing tools *have* to run arbitrary code from the package, but archivers and builders do not.) Nick wrote: >> However, as I've said elsewhere, for metadata 2.0, I *do not* plan to >> migrate the archiving or build steps away from setup.py. So "give me >> an sdist" will be spelled "python setup.py sdist" and "give me a wheel >> file" will be spelled "python setup.py bdist_wheel". Works for me. Well, sort of. In principle, it means you can grow next generation build systems that use a dummy setup.py. In practice, it means you're still gonna be relying on setuptools. (Presumably 0.7 post-merge, w/bdist_wheel support baked in.) At some point, there has to be a new way to do it, because the pain of creating a functional dummy setup.py is a really high barrier to entry for a build tool to meet, until all the current tools that run setup.py files go away. 
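The whitelist idea above is straightforward once build requirements are declared statically: a wheel-building server can refuse any tree whose declared build tools fall outside a known-safe set, before running anything. The tool names and the loose requirement parsing below are invented for illustration.

```python
# Sketch of a build server's first-tier sanity check.  The trusted
# tool names here are invented.
TRUSTED_BUILDERS = {"bento", "plainbuild"}

def builder_name(requirement):
    """Extract the project name from a build requirement such as
    'bento >= 1.6' (deliberately loose parsing, illustration only)."""
    for i, ch in enumerate(requirement):
        if ch in "<>=! (":
            return requirement[:i].strip()
    return requirement.strip()

def may_build(build_requires):
    """True only if every declared build tool is whitelisted."""
    return all(builder_name(r) in TRUSTED_BUILDERS for r in build_requires)
```

As noted, this is a sanity check, not a security boundary — but it is only possible at all when the declaration is data rather than a setup script.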
IMO it'd be better to standardize this bit *now*, so that it'd be practical to start shipping projects without a setup.py, or perhaps make a "one dummy setup.py to rule them all" implementation that delegates everything to the new build interface. I can certainly understand that there are more urgent priorities in the short run; I just hope that a standard for this part lands concurrent with, say, PEP 439 and distlib ending up in the stdlib, so we don't have to wait another couple years to begin phasing out setuptools/distutils as the only build game in town. I mean, it basically amounts to defining some parameters to programmatically call a pair of sdist() and bdist_wheel() functions with, and a configuration syntax to say what distributions and modules to import those functions from. So it's not like it's going to be a huge time drain. (Maybe not even as much has already been consumed by this thread so far. ;-) ) Nick also wrote: >> There's also an interesting migration problem for pre-2.0 sdists, >> where we can't assume that "python setup.py bdist_wheel && pip install >> " is equivalent to "python setup.py install": projects >> like Twisted that run a post-install hook won't install properly if >> you build a wheel first, since the existing post-install hook won't >> run. >> >> It's an interesting problem, but one where my near term plans amount >> to "document the status quo". Yeah, it's already broken and the new world order isn't going to break it any further. Same goes for allowing pip to convert eggs; the ones that don't work right due to bad platform tags, etc. *already* don't work, so documenting the status quo as a transitional measure is sufficient. Heck, in general, supporting backward compatible stuff that suffers from the same problems as the stuff it's being backward compatible with is a no-brainer if it lets people get on the new so we can phase out the old. 
(Which is why I love that Vinay is looking into how to make wheels more usable for some of eggs less-frequent but still important use cases: it makes it that much easier to tell someone they don't need to stay on setuptools to do the same stuff.) From donald at stufft.io Thu Mar 28 04:46:50 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 27 Mar 2013 23:46:50 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> <9611711E-F8F7-424A-B053-89743C56C86B@stufft.io> Message-ID: <142AAAB9-F44B-4450-8C5C-3291E61E388C@stufft.io> On Mar 27, 2013, at 11:44 PM, PJ Eby wrote: > On Wed, Mar 27, 2013 at 10:09 PM, Donald Stufft wrote: >> Right, which is why I think the ability to install from a raw source is a good feature for an installer, but not for the dependency metadata. > > Sure - I never said the dependency metadata had to be able to say > *where* you get the raw source from, just that the tool for resolving > dependencies needed to be able to process raw source into something > installable. > >> Following that we just need a standard way for a raw source tarball to declare what it's builder is, either via some sort of file that tells you that, or a build script , or something along those lines. > > Yep. Static configuration is a *must*, here, though, as we want to > move away from arbitrary setup script writing by package authors: in > general they are really bad at it. A lot of setuptools' odd > build-time features (like sandboxing) exist specifically because > people write whatever crap they want in setup.py and have zero idea > how to actually use/integrate with distutils. 
> > One interesting feature that would be possible under a > configuration-based system is that you could actually have an > installer with a whitelist or blacklist for build tools and > setup-requires, in order to prevent or limit untrusted code execution > by the overall build system. This would make it slightly more > practical to have, say, servers that build wheels, such that only > tools the servers' owners know won't import or run arbitrary code are > allowed to do the compiling. (Not that that should be the only > security involved, but it'd be a cool first-tier sanity check.) > > (Interestingly, this is also an argument for having a separate > "tests-require-dist" in metadata 2.0, since testing tools *have* to > run arbitrary code from the package, but archivers and builders do > not.) catalog-sig without an argument? Is this a first? ;) > > > Nick wrote: >>> However, as I've said elsewhere, for metadata 2.0, I *do not* plan to >>> migrate the archiving or build steps away from setup.py. So "give me >>> an sdist" will be spelled "python setup.py sdist" and "give me a wheel >>> file" will be spelled "python setup.py bdist_wheel". > > Works for me. Well, sort of. In principle, it means you can grow > next generation build systems that use a dummy setup.py. > > In practice, it means you're still gonna be relying on setuptools. > (Presumably 0.7 post-merge, w/bdist_wheel support baked in.) At some > point, there has to be a new way to do it, because the pain of > creating a functional dummy setup.py is a really high barrier to entry > for a build tool to meet, until all the current tools that run > setup.py files go away. > > IMO it'd be better to standardize this bit *now*, so that it'd be > practical to start shipping projects without a setup.py, or perhaps > make a "one dummy setup.py to rule them all" implementation that > delegates everything to the new build interface. 
> > I can certainly understand that there are more urgent priorities in > the short run; I just hope that a standard for this part lands > concurrent with, say, PEP 439 and distlib ending up in the stdlib, so > we don't have to wait another couple years to begin phasing out > setuptools/distutils as the only build game in town. > > I mean, it basically amounts to defining some parameters to > programmatically call a pair of sdist() and bdist_wheel() functions > with, and a configuration syntax to say what distributions and modules > to import those functions from. So it's not like it's going to be a > huge time drain. (Maybe not even as much has already been consumed by > this thread so far. ;-) ) > > > Nick also wrote: >>> There's also an interesting migration problem for pre-2.0 sdists, >>> where we can't assume that "python setup.py bdist_wheel && pip install >>> " is equivalent to "python setup.py install": projects >>> like Twisted that run a post-install hook won't install properly if >>> you build a wheel first, since the existing post-install hook won't >>> run. >>> >>> It's an interesting problem, but one where my near term plans amount >>> to "document the status quo". > > Yeah, it's already broken and the new world order isn't going to break > it any further. Same goes for allowing pip to convert eggs; the ones > that don't work right due to bad platform tags, etc. *already* don't > work, so documenting the status quo as a transitional measure is > sufficient. Heck, in general, supporting backward compatible stuff > that suffers from the same problems as the stuff it's being backward > compatible with is a no-brainer if it lets people get on the new so we > can phase out the old. > > (Which is why I love that Vinay is looking into how to make wheels > more usable for some of eggs less-frequent but still important use > cases: it makes it that much easier to tell someone they don't need to > stay on setuptools to do the same stuff.) 
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Thu Mar 28 06:31:36 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 01:31:36 -0400 Subject: [Distutils] Builders vs Installers In-Reply-To: <142AAAB9-F44B-4450-8C5C-3291E61E388C@stufft.io> References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> <9611711E-F8F7-424A-B053-89743C56C86B@stufft.io> <142AAAB9-F44B-4450-8C5C-3291E61E388C@stufft.io> Message-ID: On Wed, Mar 27, 2013 at 11:46 PM, Donald Stufft wrote: > catalog-sig without an argument? Is this a first? ;) No. This! Is! Spart... uh, I mean, DISTUTILS-SIG! ;-) From p.f.moore at gmail.com Thu Mar 28 09:08:00 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 08:08:00 +0000 Subject: [Distutils] Builders vs Installers In-Reply-To: References: <68CC49DB-9A35-44C0-BC4F-F0FE0AB7A625@stufft.io> <78BD213F-0E9F-4C7B-A256-2D60594B6576@stufft.io> <2859FE22-4074-4C68-8153-529D352B286E@stufft.io> Message-ID: On 28 March 2013 02:05, Nick Coghlan wrote: > On Thu, Mar 28, 2013 at 11:43 AM, Donald Stufft wrote: >> I don't think you can, nor should you be able to, explicitly depend on something that is a VCS checkout. > > I find it more useful to think of the issue as whether or not you > allow publication of source tarballs to satisfy a dependency, or > *require* publication of a fully populated sdist. If you allow raw > source tarballs, then you effectively allow VCS checkouts as well. 
I > prefer requiring an explicit publication step, but we also need to > acknowledge that the installer ecosystem we're trying to replace > allows them, and some people are relying on that feature. To give a real-life example of this issue, on Windows IPython depends on PyReadline. But the released version (1.7.x) of PyReadline is Python 2 only. So if you are using IPython on Python 3, you have to also depend on PyReadline from git. Now IPython doesn't declare a dependency on the VCS version (it just depends on "pyreadline"). And pyreadline is sufficiently stagnant that it hasn't declared anything much. But as an *end user* I have to make sure I force pip to install pyreadline from VCS if I want a working system. Paul. From pombredanne at nexb.com Thu Mar 28 12:40:36 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Thu, 28 Mar 2013 12:40:36 +0100 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] Message-ID: On Tue, Mar 26, 2013 at 11:08 AM, Paul Moore wrote: > On 26 March 2013 09:49, Philippe Ombredanne wrote: >> Would anyone know of a better way to package things in a single >> python-executable bootstrapping script file without obfuscating the >> source contents in compressed/encoded/obfuscated byte arrays? > > Packaging as a zip file is a good way - but on Windows the file needs > to be named xxx.py (which is surprising, to say the least :-)) for the > relevant file association to be triggered (and on Unix, a #! line > needs to be prepended). Paul: I was not talking about this type of zips, but rather the same used in virtualenv, i.e. a string in a .py file that contains an encoded zip. That string is then decoded and unzipped at runtime as in here: https://github.com/pypa/virtualenv/blob/develop/virtualenv.py#L1933 This is not a zip, not an egg, not a wheel but some egg-in-py, zip-in-py or wheel-in-py and is similar to a shar shell archive. 
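The virtualenv-style pattern described above — an encoded zip embedded as a string inside a single .py file that decodes and extracts itself at runtime — looks like this in miniature. The archive contents and file names are invented; virtualenv's real support-file handling is more involved.

```python
import base64
import io
import os
import tempfile
import zipfile

# Build a small zip in memory -- a stand-in for embedded support files.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("support/data.txt", "payload\n")

# Embed the zip in a single .py file as an encoded string, shar-style.
blob = base64.b64encode(buf.getvalue()).decode("ascii")
bootstrap_src = (
    "import base64, io, zipfile\n"
    "BLOB = %r\n"
    "def extract(dest):\n"
    "    data = base64.b64decode(BLOB)\n"
    "    with zipfile.ZipFile(io.BytesIO(data)) as zf:\n"
    "        zf.extractall(dest)\n"
) % blob

work = tempfile.mkdtemp()
script = os.path.join(work, "bootstrap.py")
with open(script, "w") as f:
    f.write(bootstrap_src)

# Executing the single file reproduces the embedded tree.
ns = {}
with open(script) as f:
    exec(compile(f.read(), script, "exec"), ns)
ns["extract"](work)
```

The self-contained property is real — one .py file, no network — at the cost of an opaque blob in the source, which is exactly the discomfort being described.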
My point was that on the one hand, I like the fact that everything is self contained in one single .py file that you can execute right away. On the other hand, I find it somewhat discomforting as an emerging best way to package and distribute self-contained bootstrap scripts. Yet I cannot think of a better way atm: for instance splitting things in non-encoded non-binary plain strings would be quite weird too. Virtualenv does it, distil is doing it now, pip tried some of it here https://github.com/pypa/pip/blob/develop/contrib/get-pip.py In contrast, buildout, distribute and setuptools bootstrap scripts do not embed their dependencies and either try to get them satisfied locally or attempt to download the requirements. Having some support to do self-contained bootstrap scripts (as in requiring no network access and embedding all their dependencies) using this shar style could be something to consider normalizing? -- Philippe Ombredanne +1 650 799 0949 | pombredanne at nexB.com DejaCode Enterprise at http://www.dejacode.com nexB Inc. at http://www.nexb.com From dholth at gmail.com Thu Mar 28 13:26:26 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 08:26:26 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: PEP XXX Yet Another Python .ZIP File Extension One reason Python ZIP applications have not been more widely adopted is because they have no Windows file association and would have to be confusingly named .py to be invoked by the interpreter. Henceforth, files with the extension .pyz shall be known as Python ZIP files. They consist of a ZIP format archive containing at minimum __main__.py, concatenated to two lines #!python or #!pythonw (or the full path to the interpreter), and an explanation # This is a ZIP format archive executable by the Python interpreter. 
The launcher will be updated to understand this format and Python will register this filename association when it is installed. On Thu, Mar 28, 2013 at 7:40 AM, Philippe Ombredanne wrote: > On Tue, Mar 26, 2013 at 11:08 AM, Paul Moore wrote: >> On 26 March 2013 09:49, Philippe Ombredanne wrote: >>> Would anyone know of a better way to package things in a single >>> python-executable bootstrapping script file without obfuscating the >>> source contents in compressed/encoded/obfuscated byte arrays? >> >> Packaging as a zip file is a good way - but on Windows the file needs >> to be named xxx.py (which is surprising, to say the least :-)) for the >> relevant file association to be triggered (and on Unix, a #! line >> needs to be prepended). > Paul: > I was not talking about this type of zips, but rather the same used in > virtualenv, i.e. a string in a .py file that contains an encoded zip. > That string is then decoded and unzipped at runtime as in here: > https://github.com/pypa/virtualenv/blob/develop/virtualenv.py#L1933 > > This is not a zip, not an egg, not a wheel but some egg-in-py, > zip-in-py or wheel-in-py and is similar to a shar shell archive. > > My point was that on the one hand, I like the fact that everything is > self contained in one single .py file that you can execute right away. > On the other hand, I find it somewhat discomforting as an emerging > best way to package and distribute self-contained bootstrap scripts. > Yet I cannot think of a better way atm: for instance splitting things > in non-encoded non-binary plain strings would be quite weird too. > > Virtualenv does it, distil is doing it now, pip tried some of it here > https://github.com/pypa/pip/blob/develop/contrib/get-pip.py > In contrast, buildout, distribute and setuptools bootstrap scripts do > not embed their dependencies and either try to get them satisfied > locally or attempt to download the requirements. 
> Having some support to do self-contained bootstrap scripts (as in > requiring no network access and embedding all their dependencies) > using this shar style could be something to consider normalizing? > > -- > Philippe Ombredanne > > +1 650 799 0949 | pombredanne at nexB.com > DejaCode Enterprise at http://www.dejacode.com > nexB Inc. at http://www.nexb.com > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Thu Mar 28 13:32:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 12:32:39 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: On 28 March 2013 11:40, Philippe Ombredanne wrote: > This is not a zip, not an egg, not a wheel but some egg-in-py, > zip-in-py or wheel-in-py and is similar to a shar shell archive. > > My point was that on the one hand, I like the fact that everything is > self contained in one single .py file that you can execute right away. > On the other hand, I find it somewhat discomforting as an emerging > best way to package and distribute self-contained bootstrap scripts. > Yet I cannot think of a better way atm: for instance splitting things > in non-encoded non-binary plain strings would be quite weird too. Yes, my point was that Vinay's usage could be covered by distributing distil as a zip file. All it is doing is decoding it's blob of data (which is an encoded zip file) and then adding the resulting zip to sys.path. The virtualenv situation is different, as there we are trying to ensure that we remain single-file while embedding things that are *not* modules to add to sys.path. And we don't want to download our dependencies because we need to be able to run with no internet connection. But you are right, the embedded script approach is not ideal. 
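The "decode the blob, put the resulting zip on sys.path" half of the point above rests on ordinary zipimport, which is easy to demonstrate for pure-Python code; the module name below is invented for the demonstration.

```python
import os
import sys
import tempfile
import zipfile

# Put a pure-Python module inside a zip and add the zip to sys.path;
# the stdlib zipimport machinery serves the import from the archive.
zip_path = os.path.join(tempfile.mkdtemp(), "support.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("embedded_helper.py", "ANSWER = 42\n")

sys.path.insert(0, zip_path)
import embedded_helper  # imported straight from the zip

# The limitation noted earlier in the thread still applies: C
# extensions cannot be loaded this way, so zips on sys.path are
# for pure Python code only.
```

This is the mechanism that makes a zip-distributed bootstrap script viable without any embedded-blob tricks, as long as everything it carries is pure Python.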
I hope that "embedded binary blobs" does not become a common approach. I'd much rather that "runnable zip files" became the norm. It's certainly possible now, but I don't think it's well enough known (and there are administrative issues like the file extension question on Windows that make it more awkward than it should be). Hence my comments, trying to raise awareness a bit. Thanks for the feedback, and in particular the reminder that virtualenv could do with looking at this... I've added a virtualenv issue to remind me to think some more about it. Paul From dholth at gmail.com Thu Mar 28 13:36:13 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 08:36:13 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: If it were distributed as such, virtualenv could read blobs out of its own zip file, use the get_data() API to read non-module data, or add subdirectories inside its zip file to sys.path. On Thu, Mar 28, 2013 at 8:32 AM, Paul Moore wrote: > On 28 March 2013 11:40, Philippe Ombredanne wrote: >> This is not a zip, not an egg, not a wheel but some egg-in-py, >> zip-in-py or wheel-in-py and is similar to a shar shell archive. >> >> My point was that on the one hand, I like the fact that everything is >> self contained in one single .py file that you can execute right away. >> On the other hand, I find it somewhat discomforting as an emerging >> best way to package and distribute self-contained bootstrap scripts. >> Yet I cannot think of a better way atm: for instance splitting things >> in non-encoded non-binary plain strings would be quite weird too. > > Yes, my point was that Vinay's usage could be covered by distributing > distil as a zip file. All it is doing is decoding its blob of data > (which is an encoded zip file) and then adding the resulting zip to > sys.path.
> > The virtualenv situation is different, as there we are trying to > ensure that we remain single-file while embedding things that are > *not* modules to add to sys.path. And we don't want to download our > dependencies because we need to be able to run with no internet > connection. But you are right, the embedded script approach is not > ideal. > > I hope that "embedded binary blobs" does not become a common approach. > I'd much rather that "runnable zip files" became the norm. It's > certainly possible now, but I don't think it's well enough known (and > there are administrative issues like the file extension question on > Windows that make it more awkward than it should be). Hence my > comments, trying to raise awareness a bit. > > Thanks for the feedback, and in particular the reminder that > virtualenv could do with looking at this... I've added a virtualenv > issue to remind me to think some more about it. > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Thu Mar 28 13:43:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 12:43:46 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: On 28 March 2013 12:26, Daniel Holth wrote: > The launcher will be updated to understand this format and Python will > register this filename association when it is installed. The launcher should need no changes. The Python msi installer would need a change to register the new extension, though. And *creating* such zips is mildly annoying on Windows, due to a general lack of tool support for manipulating binary files in text editors. Oh, and wouldn't "#!/usr/bin/env python(w)" be a better header? That would work on Unix, and the launcher recognises that format. But +1 on the idea in general. 
Paul From dholth at gmail.com Thu Mar 28 13:45:24 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 08:45:24 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 8:43 AM, Paul Moore wrote: > On 28 March 2013 12:26, Daniel Holth wrote: >> The launcher will be updated to understand this format and Python will >> register this filename association when it is installed. > > The launcher should need no changes. The Python msi installer would > need a change to register the new extension, though. > > And *creating* such zips is mildly annoying on Windows, due to a > general lack of tool support for manipulating binary files in text > editors. > > Oh, and wouldn't "#!/usr/bin/env python(w)" be a better header? That > would work on Unix, and the launcher recognises that format. > > But +1 on the idea in general. > Paul There is no 'cat header zip > newzip'? From p.f.moore at gmail.com Thu Mar 28 14:05:00 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 13:05:00 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: On 28 March 2013 12:45, Daniel Holth wrote: > On Thu, Mar 28, 2013 at 8:43 AM, Paul Moore wrote: >> On 28 March 2013 12:26, Daniel Holth wrote: >>> The launcher will be updated to understand this format and Python will >>> register this filename association when it is installed. >> >> The launcher should need no changes. The Python msi installer would >> need a change to register the new extension, though. >> >> And *creating* such zips is mildly annoying on Windows, due to a >> general lack of tool support for manipulating binary files in text >> editors. >> >> Oh, and wouldn't "#!/usr/bin/env python(w)" be a better header? That >> would work on Unix, and the launcher recognises that format. 
>> >> But +1 on the idea in general. >> Paul > > There is no 'cat header zip > newzip'? There are multiple options. And text file vs binary file issues to cover.

CMD.EXE:    copy /b header+zip newzip
Powershell: get-content header,zip -enc Byte | set-content newzip -enc Byte
Powershell: cmd /c copy /b header+zip newzip (because the previous version is so ugly...)

Or write a Python script, which is what I did. Yes, I know :-( Paul From vinay_sajip at yahoo.co.uk Thu Mar 28 14:11:53 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 13:11:53 +0000 (GMT) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> > From: Paul Moore > > Yes, my point was that Vinay's usage could be covered by distributing > distil as a zip file. All it is doing is decoding its blob of data > (which is an encoded zip file) and then adding the resulting zip to > sys.path. [snip] > I hope that "embedded binary blobs" does not become a common approach. > I'd much rather that "runnable zip files" became the norm. I don't know if it's that important to distinguish between the two. I found the approach I'm using with distil to be a tad more flexible in my case. A runnable zip has the advantage that it's harder to tinker with, but with the way distil.py is at the moment, you can tweak e.g. its logging just by changing distil.py. (At some point soon it will have an optional configuration file to control some aspects of its behaviour, but that's by the by.) It also does a bit of processing to handle -e and -p and relaunches with a new Python interpreter if needed - developing this logic was quicker because I didn't have to add my changes to the .zip each time I tweaked something.
Regards, Vinay Sajip From dholth at gmail.com Thu Mar 28 14:19:31 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 09:19:31 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> References: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> Message-ID: On Thu, Mar 28, 2013 at 9:11 AM, Vinay Sajip wrote: >> From: Paul Moore > >> >> Yes, my point was that Vinay's usage could be covered by distributing >> distil as a zip file. All it is doing is decoding it's blob of data >> (which is an encoded zip file) and then adding the resulting zip to >> sys.path. > [snip] >> I hope that "embedded binary blobs" does not become a common approach. >> I'd much rather that "runnable zip files" became the norm. > > I don't know if it's that important to distinguish between the two. I found the approach I'm using with distil to be a tad more flexible in my case. A runnable zip has the advantage that it's harder to tinker with, but with the way distil.py is at the moment, you can tweak e.g. its logging just by changing distil.py. (At some point soon it will have an optional configuration file to control some aspects of its behaviour, but that's by the by.) It also does a bit of processing to process -e and -p and relaunches with a new Python interpreter if needed - developing this logic was quicker because I didn't have to add my changes to the .zip each time I tweaked something. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Also if you are looking for tweakability you can run a directory with the same contents of the .zip exactly the same as if it was a zip. 
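The recipe the thread converges on — a zip whose root holds `__main__.py`, with a `#!` line prepended — can be demonstrated with the standard library. Because zip readers locate the archive via the central directory at the *end* of the file, the prepended header does not invalidate it, which is exactly why a plain `cat`/`copy /b` concatenation suffices. A sketch (file names are illustrative):

```python
import os
import subprocess
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
plain = os.path.join(workdir, "app.zip")

# A "runnable zip": Python executes __main__.py from the archive root
# when the archive is passed as the first argument.  An extracted
# directory with the same contents behaves identically.
with zipfile.ZipFile(plain, "w") as zf:
    zf.writestr("__main__.py", "print('hello from a runnable zip')\n")

# Prepend a shebang so Unix shells can exec the file directly; the
# archive stays valid because its central directory sits at the end.
runnable = os.path.join(workdir, "app")
with open(runnable, "wb") as dst:
    dst.write(b"#!/usr/bin/env python\n")
    with open(plain, "rb") as src:
        dst.write(src.read())
os.chmod(runnable, 0o755)

# Python happily runs the header-plus-zip file.
output = subprocess.check_output([sys.executable, runnable])
```

Note the binary-mode writes throughout — the "text file vs binary file issues" Paul mentions are precisely what `copy` (without `/b`) would get wrong.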
From p.f.moore at gmail.com Thu Mar 28 14:31:17 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 13:31:17 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> References: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> Message-ID: On 28 March 2013 13:11, Vinay Sajip wrote: > I don't know if it's that important to distinguish between the two. I found the approach I'm using with distil to be a tad more flexible in my case. A runnable zip has the advantage that it's harder to tinker with, but with the way distil.py is at the moment, you can tweak e.g. its logging just by changing distil.py. (At some point soon it will have an optional configuration file to control some aspects of its behaviour, but that's by the by.) It also does a bit of processing to process -e and -p and relaunches with a new Python interpreter if needed - developing this logic was quicker because I didn't have to add my changes to the .zip each time I tweaked something. Good point. Paul From vinay_sajip at yahoo.co.uk Thu Mar 28 14:33:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 13:33:43 +0000 (GMT) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: Message-ID: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> > From: Philippe Ombredanne > On the other hand, I find it somewhat discomforting as an emerging > best way to package and distribute self-contained bootstrap scripts. But what is the root cause of that discomfort? The distil approach is slightly more discoverable than a pure zip would be, but for the security conscious all the code is there and available for inspection (unlike installing a distribution directly from PyPI, which will pull you-know-not-what from the network). 
> Virtualenv does it, distil is doing it now, pip tried some of it here > https://github.com/pypa/pip/blob/develop/contrib/get-pip.py > In contrast, buildout, distribute and setuptools bootstrap scripts do > not embed their dependencies and either try to get them satisfied > locally or attempt to download the requirements. And all this time, they would have been vulnerable to a MITM attack on PyPI because PyPI didn't support verifiable SSL connections until recently. It's good to be cautious, but Bruce Schneier has plenty of stories about caution directed in the wrong directions. > Having some support to do self-contained bootstrap scripts (as in > requiring no network access and embedding all their dependencies) > using this shar style could be something to consider normalizing? It seems like a decision for individual developers or developer teams to make on a case-by-case basis - it doesn't seem like something that needs to be "officially" encouraged or discouraged. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Mar 28 14:39:04 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 13:39:04 +0000 (GMT) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364476313.70667.YahooMailNeo@web171406.mail.ir2.yahoo.com> Message-ID: <1364477944.49125.YahooMailNeo@web171405.mail.ir2.yahoo.com> > From: Daniel Holth > Also if you are looking for tweakability you can run a directory with > the same contents of the .zip exactly the same as if it was a zip.
Sure, but my smoke testing involved copying the tweaked distil.py to a network share, then running that file from other Windows, Linux and OS X machines - of course I could have copied whole directory trees, but doing it the way I've done works well enough for me :-) Regards, Vinay Sajip From pombredanne at nexb.com Thu Mar 28 15:44:50 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Thu, 28 Mar 2013 15:44:50 +0100 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Thu, Mar 28, 2013 at 2:33 PM, Vinay Sajip wrote: >> From: Philippe Ombredanne >> On the other hand, I find it somewhat discomforting as an emerging >> best way to package and distribute self-contained bootstrap scripts. >> Virtualenv does it, distil is doing it now, pip tried some of it here >> https://github.com/pypa/pip/blob/develop/contrib/get-pip.py >> In contrast, buildout, distribute and setuptools bootstrap scripts do >> not embed their dependencies and either try to get them satisfied >> locally or attempt to download the requirements. > > And all this time, they would have been vulnerable to a MITM attack > on PyPI because PyPI didn't support verifiable SSL connections > until recently. It's good to be cautious, but Bruce Schneier has > plenty of stories about caution directed in the wrong directions. I am not so worried about security... I brought the point here because this is the packaging and distribution list, and I see this as an emerging pattern for the packaging and distribution of bootstrap scripts and this is something that has not been discussed much before. Conceptually I find these no different from setup.py scripts, and these have been mostly normalized (or at the minimum have a conventional name and a conventional if not specified interface.) 
Yet today, for the all important core package and environment management tools, we have bootstrap scripts each with different interfaces and different approaches to self containment or no containment. I feel this is worth discussing as bootstrapping is where everything begins :) -- Philippe Ombredanne +1 650 799 0949 | pombredanne at nexB.com DejaCode Enterprise at http://www.dejacode.com nexB Inc. at http://www.nexb.com From dholth at gmail.com Thu Mar 28 15:50:11 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 10:50:11 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Not really trying to tell Vinay to rewrite his script, but IMHO if you expect it unzip is a lot easier than file.write(module.random_attribute.decode('base64')). The runnable zip feature is awesome, not well enough known, and totally worth promoting over the shar pattern; with some minimal tooling you'd be good to go. On Thu, Mar 28, 2013 at 10:44 AM, Philippe Ombredanne wrote: > On Thu, Mar 28, 2013 at 2:33 PM, Vinay Sajip wrote: >>> From: Philippe Ombredanne >>> On the other hand, I find it somewhat discomforting as an emerging >>> best way to package and distribute self-contained bootstrap scripts. > >>> Virtualenv does it, distil is doing it now, pip tried some of it here >>> https://github.com/pypa/pip/blob/develop/contrib/get-pip.py >>> In contrast, buildout, distribute and setuptools bootstrap scripts do >>> not embed their dependencies and either try to get them satisfied >>> locally or attempt to download the requirements. >> >> And all this time, they would have been vulnerable to a MITM attack >> on PyPI because PyPI didn't support verifiable SSL connections >> until recently. It's good to be cautious, but Bruce Schneier has >> plenty of stories about caution directed in the wrong directions. 
> > I am not so worried about security... I brought the point here because > this is the packaging and distribution list, and I see this as an > emerging pattern for the packaging and distribution of bootstrap > scripts and this is something that has not been discussed much before. > > Conceptually I find these no different from setup.py scripts, and > these have been mostly normalized (or at the minimum have a > conventional name and a conventional if not specified interface.) > > Yet today, for the all important core package and environment > management tools, we have bootstrap scripts each with different > interfaces and different approaches to self containment or no > containment. > > I feel this is worth discussing as bootstrapping is where everything begins :) > > -- > Philippe Ombredanne > > +1 650 799 0949 | pombredanne at nexB.com > DejaCode Enterprise at http://www.dejacode.com > nexB Inc. at http://www.nexb.com > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From vinay_sajip at yahoo.co.uk Thu Mar 28 16:39:18 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 15:39:18 +0000 (UTC) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Philippe Ombredanne nexb.com> writes: > Conceptually I find these no different from setup.py scripts, and > these have been mostly normalized (or at the minimum have a > conventional name and a conventional if not specified interface.) Except that you programmatically interface (to distutils or setuptools) with setup.py, which is not the case with virtualenv or distil. > I feel this is worth discussing as bootstrapping is where everything begins :) Oh, certainly it's worthy of discussion - I wasn't meaning to imply otherwise. 
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Mar 28 16:46:22 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 15:46:22 +0000 (UTC) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Daniel Holth gmail.com> writes: > file.write(module.random_attribute.decode('base64')). The runnable zip > feature is awesome, not well enough known, and totally worth promoting > over the shar pattern; with some minimal tooling you'd be good to go. Well, maybe the promoting would be better done by actually shipping something this way that shows how well it works, rather than just talking about it. I don't see that the user experience is any better with runnable zips, though I'm not saying it's any worse. After that, it just comes down to individual developer taste, and there's no accounting for that :-) Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Mar 28 17:02:36 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 16:02:36 +0000 (UTC) Subject: [Distutils] Importable wheels using distlib/distil References: Message-ID: Jim Fulton zope.com> writes: > >> It would be far better IMO to just unzip the wheel and put that in > >> your path. (I'm hoping that wheels used this way are a suitable > >> replacement for eggs.) > > > > Well that's tantamount to installing the wheel, > > Not really. If you just unzip the wheel and add it to your path, > you can stop using it by just removing from your path. If you > install the wheel, it's contents will be poured into site-packages > (and other places). It's much heavier than just adding the > wheel (zipped or unzipped) to your path. > [snip] > by adding (unzipped) eggs to sys.path. Various plugin > systems (including buildout itself with extensions and recipes) > do this dynamically at run time. It's very useful. Thanks for the feedback. 
How about if I change mount()/unmount() to: def mount(self, append=False, destdir=None): """ Unzip the wheel's contents to the specified directory, or to a temporary directory if destdir is None. Add this directory to sys.path, either appending or prepending according to whether append is True or False. Before doing this, check that the wheel is compatible with the Python making the call to mount(). If successful, this makes the contents of the wheel's root directory - both Python packages and C extensions - importable via normal Python import mechanisms. """ def unmount(self): """ Remove the directory that was used for mounting from sys.path, thus making the wheel's code no longer importable. Return this directory. Note that the caller is responsible for deleting this directory and its contents, which might not be possible - e.g. in Windows, if a shared library has been imported and is linked to the running Python process, there will be an open handle to the shared library which will prevent its deletion. """ Regards, Vinay Sajip From p.f.moore at gmail.com Thu Mar 28 17:42:34 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 16:42:34 +0000 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On 28 March 2013 16:02, Vinay Sajip wrote: > Return this directory. Note that the caller is responsible for > deleting this directory and its contents, which might not be > possible - e.g. in Windows, if a shared library has been > imported and is linked to the running Python process, there will be > an open handle to the shared library which will prevent its deletion. > """ That's the big issue I have with *any* approach like this. It's entirely possible that the directory cannot be deleted, and as a result the user ends up with the problem of managing clutter caused by this mechanism. Even if the directory is in %TEMP% the user still has the issue of clearing up. 
Consider a buildslave that continually runs tests - temp directory clutter is a definite issue in a situation like that. And of course, if an application user chooses to use this mechanism, I don't have an option to opt out unless we start getting into complex "if the package is installed use it, otherwise mount our internal wheel" logic. I'd like to hold off on this feature until there are actual requests for the functionality. It's not easy to argue against the idea purely on a "it might go wrong" basis without actual use cases to look at and see if/how they would handle the problem situations. Paul. From theller at ctypes.org Thu Mar 28 17:59:45 2013 From: theller at ctypes.org (Thomas Heller) Date: Thu, 28 Mar 2013 17:59:45 +0100 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: Am 28.03.2013 17:42, schrieb Paul Moore: > On 28 March 2013 16:02, Vinay Sajip wrote: >> Return this directory. Note that the caller is responsible for >> deleting this directory and its contents, which might not be >> possible - e.g. in Windows, if a shared library has been >> imported and is linked to the running Python process, there will be >> an open handle to the shared library which will prevent its deletion. >> """ > > That's the big issue I have with *any* approach like this. It's > entirely possible that the directory cannot be deleted, and as a > result the user ends up with the problem of managing clutter caused by > this mechanism. Even if the directory is in %TEMP% the user still has > the issue of clearing up. Consider a buildslave that continually runs > tests - temp directory clutter is a definite issue in a situation like > that. I made an experiment some time ago: It is possible to delete shared libs containing extension modules imported by Python if the Python process (after Py_Finalize()) calls FreeLibrary(hmod) in a loop for every extension until FreeLibrary returns zero; then the shared lib file can be deleted. 
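Thomas's experiment can be rendered in Python via ctypes rather than C. This is a sketch only: the function name `force_unload` is invented here, it assumes Windows (elsewhere it simply does nothing), and Thomas ran the loop after Py_Finalize(), which a pure-Python version cannot literally do:

```python
import ctypes
import sys

def force_unload(dll_name):
    """Best-effort release of every reference the process holds on an
    already-loaded DLL, so the file can be deleted afterwards.
    Returns True if the unload loop ran, False if the DLL was not
    loaded (or we are not on Windows)."""
    if not sys.platform.startswith("win"):
        return False
    kernel32 = ctypes.windll.kernel32
    kernel32.GetModuleHandleW.restype = ctypes.c_void_p
    # GetModuleHandleW does not itself add a reference.
    handle = kernel32.GetModuleHandleW(dll_name)
    if not handle:
        return False
    # Each import of the extension bumped the DLL's load count;
    # FreeLibrary returns nonzero until the count reaches zero,
    # after which the file on disk is no longer locked.
    while kernel32.FreeLibrary(handle):
        pass
    return True
```

Forcibly unloading a DLL that live Python objects still depend on would crash the process, hence Thomas's caveat that this belongs after interpreter shutdown.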
Thomas From jim at zope.com Thu Mar 28 18:02:15 2013 From: jim at zope.com (Jim Fulton) Date: Thu, 28 Mar 2013 13:02:15 -0400 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 12:02 PM, Vinay Sajip wrote: > Jim Fulton zope.com> writes: > > >> >> It would be far better IMO to just unzip the wheel and put that in >> >> your path. (I'm hoping that wheels used this way are a suitable >> >> replacement for eggs.) >> > >> > Well that's tantamount to installing the wheel, >> >> Not really. If you just unzip the wheel and add it to your path, >> you can stop using it by just removing from your path. If you >> install the wheel, it's contents will be poured into site-packages >> (and other places). It's much heavier than just adding the >> wheel (zipped or unzipped) to your path. >> > [snip] >> by adding (unzipped) eggs to sys.path. Various plugin >> systems (including buildout itself with extensions and recipes) >> do this dynamically at run time. It's very useful. > > Thanks for the feedback. Thanks for trying to provide a useful feature. I hope my comments aren't too much of a downer. > How about if I change mount()/unmount() to: > > def mount(self, append=False, destdir=None): > """ > Unzip the wheel's contents to the specified directory, or to > a temporary directory if destdir is None. Add this directory to > sys.path, either appending or prepending according to whether > append is True or False. > > Before doing this, check that the wheel is compatible with the > Python making the call to mount(). > > If successful, this makes the contents of the wheel's root directory > - both Python packages and C extensions - importable via normal Python > import mechanisms. > """ > > def unmount(self): > """ > Remove the directory that was used for mounting from sys.path, > thus making the wheel's code no longer importable. > > Return this directory. 
Note that the caller is responsible for > deleting this directory and its contents, which might not be > possible - e.g. in Windows, if a shared library has been > imported and is linked to the running Python process, there will be > an open handle to the shared library which will prevent its deletion. > """ I'm not sure which users or use cases you're trying to serve here, so I'm not sure what to think of this. For buildout users, buildout would download and extract the wheel the first time it's used and keep it in a cache and then add it to a path at script generation time. For buildout's own uses (extensions and recipes) it would simply add the extracted wheel's location to sys.path at run time (downloading and extracting it first if necessary). So the win for buildout and its users is to be able to have extracted (but not "installed") wheels around to be mixed and matched either for script generation or run-time use. If I wasn't using buildout, I kinda doubt I'd want to use something like this rather than just installing wheels with pip. Jim P.S. I'm happy to see all the work you've done on distlib. I'm sorry to say I haven't had time to dig into it yet. I assume that buildout 3 will be based on it at some point. -- Jim Fulton http://www.linkedin.com/in/jimfulton From vinay_sajip at yahoo.co.uk Thu Mar 28 18:22:01 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 17:22:01 +0000 (UTC) Subject: [Distutils] Importable wheels using distlib/distil References: Message-ID: Paul Moore gmail.com> writes: > That's the big issue I have with *any* approach like this. It's > entirely possible that the directory cannot be deleted, and as a > result the user ends up with the problem of managing clutter caused by > this mechanism. Even if the directory is in %TEMP% the user still has > the issue of clearing up. Consider a buildslave that continually runs > tests - temp directory clutter is a definite issue in a situation like > that.
> > And of course, if an application user chooses to use this mechanism, I > don't have an option to opt out unless we start getting into complex > "if the package is installed use it, otherwise mount our internal > wheel" logic. Well, if you use the feature because it has its uses, you have to work around any costs that it has. At least the problem isn't being ducked. Plus, given that the wheel format is open, it's not a lot of work for an application developer to do a zipfile.extractall() followed by a sys.path.append(), whether distlib's Wheel has a mount() or not. Having mount() might be facilitating a useful feature in a (slightly more) controlled fashion. > I'd like to hold off on this feature until there are actual requests > for the functionality. It's not easy to argue against the idea purely > on a "it might go wrong" basis without actual use cases to look at and > see if/how they would handle the problem situations. Didn't Jim Fulton say in a post in this thread that it was a useful feature? I'm presuming he based this on real-world experience, but perhaps he would care to clarify. In terms of "actual requests" - there haven't been any actual requests for anything, other than suggestions to improve features that I've unilaterally introduced. This is another instance of the same thing, it seems to me. This is a feature that eggs have but nothing else does, so it seems reasonable to see if we can have alternatives. AFAICT, your worries would apply to eggs too, it's nothing to do with wheels or distlib in particular ... 
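As Vinay says, the essence of mount() is little more than zipfile.extractall() plus a sys.path entry. A minimal sketch of that claim — `mount_wheel`/`unmount_wheel` are illustrative names, not distlib API, and a real implementation would first check the wheel's compatibility tags and handle the data/scripts categories:

```python
import os
import sys
import tempfile
import zipfile

def mount_wheel(path, destdir=None, append=False):
    """Extract a wheel and make its top-level code importable.
    Sketch only: no compatibility checking, no record keeping."""
    if destdir is None:
        destdir = tempfile.mkdtemp()
    with zipfile.ZipFile(path) as zf:
        zf.extractall(destdir)
    if append:
        sys.path.append(destdir)
    else:
        sys.path.insert(0, destdir)
    return destdir

def unmount_wheel(destdir):
    """Undo mount_wheel().  Deleting destdir is the caller's problem;
    on Windows a loaded extension DLL keeps the file locked."""
    sys.path.remove(destdir)
    return destdir

# Demo with a throwaway one-module "wheel" built on the spot.
tmp = tempfile.mkdtemp()
whl = os.path.join(tmp, "demo-1.0-py2.py3-none-any.whl")
with zipfile.ZipFile(whl, "w") as zf:
    zf.writestr("demo_mod.py", "VALUE = 42\n")

mounted_at = mount_wheel(whl)
import demo_mod
```

This also makes Paul's clutter objection concrete: nothing here ever deletes the extraction directory, so each mount of an unnamed destdir leaves another orphan under the temp area.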
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Mar 28 18:24:22 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 28 Mar 2013 17:24:22 +0000 (UTC) Subject: [Distutils] Importable wheels using distlib/distil References: Message-ID: Jim Fulton zope.com> writes: > So the win for buildout and it's users is to be able to have extracted > (but not "installed" wheels) around to be mixed and matched either for > script generation or run-time use. > > If I wasn't using buildout, I kinda doubt I'd want to use something > like this rather than just installing wheels with pip. Ok, thanks for the clarification. Regards, Vinay Sajip From Steve.Dower at microsoft.com Thu Mar 28 18:54:26 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Thu, 28 Mar 2013 17:54:26 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> Daniel Holth gmail.com> writes: > file.write(module.random_attribute.decode('base64')). The runnable zip > feature is awesome, not well enough known, and totally worth promoting > over the shar pattern; with some minimal tooling you'd be good to go. Runnable zips sound great - I certainly haven't come across them before (or if I have, I didn't see the potential at the time). That said, from a Windows perspective, shebangs and mixed text/binary files worry me. The better approach on Windows would be to take a new extension (.pyz? .pyp[ackage]?) and associate that with the launcher. (File extensions on Windows are the moral equivalent of shebang lines.) Changing .zip in any way will upset anyone who has a utility for opening ZIP files (i.e. everyone) and there's no way to launch files differently based on content without changing that association. 
And, I'm almost certain that most if not all existing ZIP tools on Windows will fail to open files with a shebang, since they've never had to deal with them. I also think that a runnable zip may be a better package installation option than MSIs, but that's another issue :) Cheers, Steve From donald at stufft.io Thu Mar 28 19:22:59 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 14:22:59 -0400 Subject: [Distutils] Merge catalog-sig and distutils-sig Message-ID: Is there much point in keeping catalog-sig and distutils-sig separate? It seems to me that most of the same people are on both lists, and the topics almost always have consequences to both sides of the coin. So much so that it's often hard to pick *which* of the two (or both) lists you post too. Further confused by the fact that distutils is hopefully someday going to go away :) Not sure if there's some official process for requesting it or not, but I think we should merge the two lists and just make packaging-sig to umbrella the entire packaging topics. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jim at zope.com Thu Mar 28 19:28:35 2013 From: jim at zope.com (Jim Fulton) Date: Thu, 28 Mar 2013 14:28:35 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: > Is there much point in keeping catalog-sig and distutils-sig separate? Not IMO. > It seems to me that most of the same people are on both lists, and the topics almost always have consequences to both sides of the coin. So much so that it's often hard to pick *which* of the two (or both) lists you post too. 
Further confused by the fact that distutils is hopefully someday going to go away :) > > Not sure if there's some official process for requesting it or not, but I think we should merge the two lists and just make packaging-sig to umbrella the entire packaging topics. +1 Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From holger at merlinux.eu Thu Mar 28 20:11:44 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 28 Mar 2013 19:11:44 +0000 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: <20130328191144.GL9677@merlinux.eu> On Thu, Mar 28, 2013 at 14:22 -0400, Donald Stufft wrote: > Is there much point in keeping catalog-sig and distutils-sig separate? > > It seems to me that most of the same people are on both lists, and the topics almost always have consequences to both sides of the coin. So much so that it's often hard to pick *which* of the two (or both) lists you post too. Further confused by the fact that distutils is hopefully someday going to go away :) +1 > Not sure if there's some official process for requesting it or not, but I think we should merge the two lists and just make packaging-sig to umbrella the entire packaging topics. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Catalog-SIG mailing list > Catalog-SIG at python.org > http://mail.python.org/mailman/listinfo/catalog-sig From fred at fdrake.net Thu Mar 28 20:14:24 2013 From: fred at fdrake.net (Fred Drake) Date: Thu, 28 Mar 2013 15:14:24 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: > Is there much point in keeping catalog-sig and distutils-sig separate? No. The last time this was brought up, there were objections, but I don't remember what they were. 
I'll let people who think there's a point worry about that. > Not sure if there's some official process for requesting it or not, but > I think we should merge the two lists and just make packaging-sig to > umbrella the entire packaging topics. There is the meta-sig, but the description is out-dated: http://mail.python.org/mailman/listinfo/meta-sig and the last message in the archives is dated 2011, and sparked no discussion: http://mail.python.org/pipermail/meta-sig/2011-June.txt +1 on merging the lists. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From qwcode at gmail.com Thu Mar 28 20:25:59 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 28 Mar 2013 12:25:59 -0700 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Thu Mar 28 20:39:38 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 15:39:38 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: > On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: >> Is there much point in keeping catalog-sig and distutils-sig separate? > > No. > > The last time this was brought up, there were objections, but I don't > remember what they were. I'll let people who think there's a point > worry about that. > >> Not sure if there's some official process for requesting it or not, but >> I think we should merge the two lists and just make packaging-sig to >> umbrella the entire packaging topics. > > There is the meta-sig, but the description is out-dated: > > http://mail.python.org/mailman/listinfo/meta-sig > > and the last message in the archives is dated 2011, and sparked no > discussion: > > http://mail.python.org/pipermail/meta-sig/2011-June.txt > > +1 on merging the lists. 
Can we do it by just dropping catalog-sig and keeping distutils-sig? I'm afraid we might lose some important distutils-sig population if the process involves renaming the list, resubscribing, etc. I also *really* don't want to invalidate archive links to the distutils-sig archive. All in all, +1 on not having two lists, but I'm really worried about "breaking" distutils-sig. We're still going to be talking about "distribution utilities", after all. From donald at stufft.io Thu Mar 28 20:42:07 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 15:42:07 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: > On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: >> On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: >>> Is there much point in keeping catalog-sig and distutils-sig separate? >> >> No. >> >> The last time this was brought up, there were objections, but I don't >> remember what they were. I'll let people who think there's a point >> worry about that. >> >>> Not sure if there's some official process for requesting it or not, but >>> I think we should merge the two lists and just make packaging-sig to >>> umbrella the entire packaging topics. >> >> There is the meta-sig, but the description is out-dated: >> >> http://mail.python.org/mailman/listinfo/meta-sig >> >> and the last message in the archives is dated 2011, and sparked no >> discussion: >> >> http://mail.python.org/pipermail/meta-sig/2011-June.txt >> >> +1 on merging the lists. > > Can we do it by just dropping catalog-sig and keeping distutils-sig? > I'm afraid we might lose some important distutils-sig population if > the process involves renaming the list, resubscribing, etc. I also > *really* don't want to invalidate archive links to the distutils-sig > archive. 
> > All in all, +1 on not having two lists, but I'm really worried about > "breaking" distutils-sig. We're still going to be talking about > "distribution utilities", after all. Don't care how it's done. I don't know Mailman enough to know what is possible or how easy things are. I thought packaging-sig sounded nice but if you can't rename + redirect or merge or something in mailman I'm down for whatever. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu Mar 28 20:43:07 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 15:43:07 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: > On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: >> On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: >>> Is there much point in keeping catalog-sig and distutils-sig separate? >> >> No. >> >> The last time this was brought up, there were objections, but I don't >> remember what they were. I'll let people who think there's a point >> worry about that. >> >>> Not sure if there's some official process for requesting it or not, but >>> I think we should merge the two lists and just make packaging-sig to >>> umbrella the entire packaging topics. >> >> There is the meta-sig, but the description is out-dated: >> >> http://mail.python.org/mailman/listinfo/meta-sig >> >> and the last message in the archives is dated 2011, and sparked no >> discussion: >> >> http://mail.python.org/pipermail/meta-sig/2011-June.txt >> >> +1 on merging the lists. > > Can we do it by just dropping catalog-sig and keeping distutils-sig? 
> I'm afraid we might lose some important distutils-sig population if > the process involves renaming the list, resubscribing, etc. I also > *really* don't want to invalidate archive links to the distutils-sig > archive. > > All in all, +1 on not having two lists, but I'm really worried about > "breaking" distutils-sig. We're still going to be talking about > "distribution utilities", after all. Worst case I'm sure subscribers can be transferred and the existing archive kept intact. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Thu Mar 28 20:49:07 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 15:49:07 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Thu, Mar 28, 2013 at 1:54 PM, Steve Dower wrote: > And, I'm almost certain that most if not all existing ZIP tools on Windows will fail to open files with a shebang, since they've never had to deal with them. Actually, the opposite is true, at least for 3rd-party (non-Microsoft) archiving tools: they work even when there's a whole .exe file stuck on the front. ;-) Some of them require you to rename from .exe to .zip first, but some actually detect that an .exe is a stub in front of a zip file and give you extraction options in an Explorer right-click. So, no worries on the prepended data front, even if the extension is .zip. What you probably can't safely do is *modify* a .zip with prepended data... 
and there I'm just guessing, because I've never actually tried. From dholth at gmail.com Thu Mar 28 20:54:00 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 15:54:00 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Thu, Mar 28, 2013 at 3:49 PM, PJ Eby wrote: > On Thu, Mar 28, 2013 at 1:54 PM, Steve Dower wrote: >> And, I'm almost certain that most if not all existing ZIP tools on Windows will fail to open files with a shebang, since they've never had to deal with them. > > Actually, the opposite is true, at least for 3rd-party (non-Microsoft) > archiving tools: they work even when there's a whole .exe file stuck > on the front. ;-) > > Some of them require you to rename from .exe to .zip first, but some > actually detect that an .exe is a stub in front of a zip file and give > you extraction options in an Explorer right-click. > > So, no worries on the prepended data front, even if the extension is > .zip. What you probably can't safely do is *modify* a .zip with > prepended data... and there I'm just guessing, because I've never > actually tried. It will all work. ZIP is cool! Every offset is relative from the end of the file. The wheel distribution even has a ZipFile subclass that lets you pop() files off the end by truncating the file and rewriting the index. This will work on any ordinary zip file that is just the concatenation of the compressed files in zip directory order, without data or extra space between the compressed files. 
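[Editorial note: Daniel's point is easy to verify with the stdlib alone — prepend arbitrary bytes (a shebang line, say) to an ordinary zip and `zipfile` still reads it, because the end-of-central-directory record is located by scanning backwards from the end of the file. A minimal in-memory sketch:]

```python
# Prepended data does not break zip reading: the central directory is
# found from the end of the file, not from offset zero.
import io
import zipfile

# Build an ordinary zip in memory...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "hello")

# ...then stick a shebang line on the front, as a runnable zip would.
data = b"#!/usr/bin/env python\n" + buf.getvalue()

# zipfile transparently adjusts for the prepended bytes when reading.
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    names = zf.namelist()
    text = zf.read("hello.txt")
```

The same end-relative layout is what makes self-extracting `.exe` stubs and truncate-and-rewrite tricks like the `pop()` subclass Daniel mentions possible.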
From holger at merlinux.eu Thu Mar 28 21:04:19 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 28 Mar 2013 20:04:19 +0000 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> References: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> Message-ID: <20130328200419.GM9677@merlinux.eu> On Thu, Mar 28, 2013 at 15:42 -0400, Donald Stufft wrote: > On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: > > > On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: > >> On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: > >>> Is there much point in keeping catalog-sig and distutils-sig separate? > >> > >> No. > >> > >> The last time this was brought up, there were objections, but I don't > >> remember what they were. I'll let people who think there's a point > >> worry about that. > >> > >>> Not sure if there's some official process for requesting it or not, but > >>> I think we should merge the two lists and just make packaging-sig to > >>> umbrella the entire packaging topics. > >> > >> There is the meta-sig, but the description is out-dated: > >> > >> http://mail.python.org/mailman/listinfo/meta-sig > >> > >> and the last message in the archives is dated 2011, and sparked no > >> discussion: > >> > >> http://mail.python.org/pipermail/meta-sig/2011-June.txt > >> > >> +1 on merging the lists. > > > > Can we do it by just dropping catalog-sig and keeping distutils-sig? > > I'm afraid we might lose some important distutils-sig population if > > the process involves renaming the list, resubscribing, etc. I also > > *really* don't want to invalidate archive links to the distutils-sig > > archive. > > > > All in all, +1 on not having two lists, but I'm really worried about > > "breaking" distutils-sig. We're still going to be talking about > > "distribution utilities", after all. > > Don't care how it's done. I don't know Mailman enough to know what is possible or how easy things are. 
I thought packaging-sig sounded nice but if you can't rename + redirect or merge or something in mailman I'm down for whatever. I've moved lists even from external sites to python.org and renamed them (latest was pytest-dev). That part works nicely and people can continue to use the old ML address. Merging two lists however makes it harder to get redirects for the old archives. But why not just keep distutils-sig and catalog-sig archives, but have all their mail arrive at a new packaging-sig and begin a new archive for the latter? holger > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Catalog-SIG mailing list > Catalog-SIG at python.org > http://mail.python.org/mailman/listinfo/catalog-sig From dholth at gmail.com Thu Mar 28 21:08:44 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 16:08:44 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <20130328200419.GM9677@merlinux.eu> References: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> <20130328200419.GM9677@merlinux.eu> Message-ID: That should work. Sounds like a plan. On Thu, Mar 28, 2013 at 4:04 PM, holger krekel wrote: > On Thu, Mar 28, 2013 at 15:42 -0400, Donald Stufft wrote: >> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: >> >> > On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: >> >> On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: >> >>> Is there much point in keeping catalog-sig and distutils-sig separate? >> >> >> >> No. >> >> >> >> The last time this was brought up, there were objections, but I don't >> >> remember what they were. I'll let people who think there's a point >> >> worry about that. >> >> >> >>> Not sure if there's some official process for requesting it or not, but >> >>> I think we should merge the two lists and just make packaging-sig to >> >>> umbrella the entire packaging topics. 
>> >> >> >> There is the meta-sig, but the description is out-dated: >> >> >> >> http://mail.python.org/mailman/listinfo/meta-sig >> >> >> >> and the last message in the archives is dated 2011, and sparked no >> >> discussion: >> >> >> >> http://mail.python.org/pipermail/meta-sig/2011-June.txt >> >> >> >> +1 on merging the lists. >> > >> > Can we do it by just dropping catalog-sig and keeping distutils-sig? >> > I'm afraid we might lose some important distutils-sig population if >> > the process involves renaming the list, resubscribing, etc. I also >> > *really* don't want to invalidate archive links to the distutils-sig >> > archive. >> > >> > All in all, +1 on not having two lists, but I'm really worried about >> > "breaking" distutils-sig. We're still going to be talking about >> > "distribution utilities", after all. >> >> Don't care how it's done. I don't know Mailman enough to know what is possible or how easy things are. I thought packaging-sig sounded nice but if you can't rename + redirect or merge or something in mailman I'm down for whatever. > > I've moved lists even from external sites to python.org and renamed them > (latest was pytest-dev). That part works nicely and people can continue > to use the old ML address. Merging two lists however makes it harder > to get redirects for the old archives. But why not just keep distutils-sig > and catalog-sig archives, but have all their mail arrive at > a new packaging-sig and begin a new archive for the latter? 
> > holger > > >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > > >> _______________________________________________ >> Catalog-SIG mailing list >> Catalog-SIG at python.org >> http://mail.python.org/mailman/listinfo/catalog-sig > > _______________________________________________ > Catalog-SIG mailing list > Catalog-SIG at python.org > http://mail.python.org/mailman/listinfo/catalog-sig From pje at telecommunity.com Thu Mar 28 21:32:26 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 16:32:26 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On Thu, Mar 28, 2013 at 3:43 PM, Donald Stufft wrote: > On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: >> Can we do it by just dropping catalog-sig and keeping distutils-sig? >> I'm afraid we might lose some important distutils-sig population if >> the process involves renaming the list, resubscribing, etc. I also >> *really* don't want to invalidate archive links to the distutils-sig >> archive. >> >> All in all, +1 on not having two lists, but I'm really worried about >> "breaking" distutils-sig. We're still going to be talking about >> "distribution utilities", after all. > > Worst case I'm sure subscribers can be transferred and the existing archive kept intact. That's a great way to have a bunch of people complaining that they never subscribed to packaging-sig, not to mention the part where it breaks everyone's mail filters. I really don't see any gains for renaming the list. It's not like we can go and scrub the entire internet of references to distutils-sig. 
From donald at stufft.io Thu Mar 28 21:32:16 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 16:32:16 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <20130328200419.GM9677@merlinux.eu> References: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> <20130328200419.GM9677@merlinux.eu> Message-ID: <4BDF3B12-B394-4823-9186-73D1E742E78F@stufft.io> On Mar 28, 2013, at 4:04 PM, holger krekel wrote: > On Thu, Mar 28, 2013 at 15:42 -0400, Donald Stufft wrote: >> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: >> >>> On Thu, Mar 28, 2013 at 3:14 PM, Fred Drake wrote: >>>> On Thu, Mar 28, 2013 at 2:22 PM, Donald Stufft wrote: >>>>> Is there much point in keeping catalog-sig and distutils-sig separate? >>>> >>>> No. >>>> >>>> The last time this was brought up, there were objections, but I don't >>>> remember what they were. I'll let people who think there's a point >>>> worry about that. >>>> >>>>> Not sure if there's some official process for requesting it or not, but >>>>> I think we should merge the two lists and just make packaging-sig to >>>>> umbrella the entire packaging topics. >>>> >>>> There is the meta-sig, but the description is out-dated: >>>> >>>> http://mail.python.org/mailman/listinfo/meta-sig >>>> >>>> and the last message in the archives is dated 2011, and sparked no >>>> discussion: >>>> >>>> http://mail.python.org/pipermail/meta-sig/2011-June.txt >>>> >>>> +1 on merging the lists. >>> >>> Can we do it by just dropping catalog-sig and keeping distutils-sig? >>> I'm afraid we might lose some important distutils-sig population if >>> the process involves renaming the list, resubscribing, etc. I also >>> *really* don't want to invalidate archive links to the distutils-sig >>> archive. >>> >>> All in all, +1 on not having two lists, but I'm really worried about >>> "breaking" distutils-sig. We're still going to be talking about >>> "distribution utilities", after all. >> >> Don't care how it's done. 
I don't know Mailman enough to know what is possible or how easy things are. I thought packaging-sig sounded nice but if you can't rename + redirect or merge or something in mailman I'm down for whatever. > > I've moved lists even from external sites to python.org and renamed them > (latest was pytest-dev). That part works nicely and people can continue > to use the old ML address. Merging two lists however makes it harder > to get redirects for the old archives. But why not just keep distutils-sig > and catalog-sig archives, but have all their mail arrive at > a new packaging-sig and begin a new archive for the latter? > > holger > > >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > > >> _______________________________________________ >> Catalog-SIG mailing list >> Catalog-SIG at python.org >> http://mail.python.org/mailman/listinfo/catalog-sig > sounds good to me. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jacob at jacobian.org Thu Mar 28 19:26:05 2013 From: jacob at jacobian.org (Jacob Kaplan-Moss) Date: Thu, 28 Mar 2013 13:26:05 -0500 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: As a mostly-lurker on both who would love to cut down on the number of lists I have to follow: a hearty +1! Jacob On Thu, Mar 28, 2013 at 1:22 PM, Donald Stufft wrote: > Is there much point in keeping catalog-sig and distutils-sig separate? > > It seems to me that most of the same people are on both lists, and the topics almost always have consequences to both sides of the coin. So much so that it's often hard to pick *which* of the two (or both) lists you post too. 
Further confused by the fact that distutils is hopefully someday going to go away :) > > Not sure if there's some official process for requesting it or not, but I think we should merge the two lists and just make packaging-sig to umbrella the entire packaging topics. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Catalog-SIG mailing list > Catalog-SIG at python.org > http://mail.python.org/mailman/listinfo/catalog-sig > From jacob at jacobian.org Thu Mar 28 22:15:56 2013 From: jacob at jacobian.org (Jacob Kaplan-Moss) Date: Thu, 28 Mar 2013 16:15:56 -0500 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: C'mon, folks, we're arguing about a name. That's about as close to literal bikeshedding as we could get. How about we just let whoever has the keys make the change in whatever way's easiest and most logical for them? Jacob From pombredanne at nexb.com Thu Mar 28 22:40:22 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Thu, 28 Mar 2013 22:40:22 +0100 Subject: [Distutils] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 7:22 PM, Donald Stufft wrote: > Is there much point in keeping catalog-sig and distutils-sig separate? > > It seems to me that most of the same people are on both lists, and the topics almost always have consequences to both sides of the coin. So much so that it's often hard to pick *which* of the two (or both) lists you post too. Further confused by the fact that distutils is hopefully someday going to go away :) > > Not sure if there's some official process for requesting it or not, but I think we should merge the two lists and just make packaging-sig to umbrella the entire packaging topics. 
+1 -- Philippe Ombredanne +1 650 799 0949 | pombredanne at nexB.com DejaCode Enterprise at http://www.dejacode.com nexB Inc. at http://www.nexb.com From richard at python.org Thu Mar 28 22:42:06 2013 From: richard at python.org (Richard Jones) Date: Fri, 29 Mar 2013 08:42:06 +1100 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: I think I'm the only one on the list who probably would have objected but I'm on both now so whatever :-) Richard On 29 March 2013 07:32, PJ Eby wrote: > On Thu, Mar 28, 2013 at 3:43 PM, Donald Stufft wrote: >> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: >>> Can we do it by just dropping catalog-sig and keeping distutils-sig? >>> I'm afraid we might lose some important distutils-sig population if >>> the process involves renaming the list, resubscribing, etc. I also >>> *really* don't want to invalidate archive links to the distutils-sig >>> archive. >>> >>> All in all, +1 on not having two lists, but I'm really worried about >>> "breaking" distutils-sig. We're still going to be talking about >>> "distribution utilities", after all. >> >> Worst case I'm sure subscribers can be transferred and the existing archive kept intact. > > That's a great way to have a bunch of people complaining that they > never subscribed to packaging-sig, not to mention the part where it > breaks everyone's mail filters. > > I really don't see any gains for renaming the list. It's not like we > can go and scrub the entire internet of references to distutils-sig. 
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From tseaver at palladion.com Thu Mar 28 22:42:06 2013 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 28 Mar 2013 17:42:06 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/28/2013 04:32 PM, PJ Eby wrote: > On Thu, Mar 28, 2013 at 3:43 PM, Donald Stufft > wrote: >> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: >>> Can we do it by just dropping catalog-sig and keeping >>> distutils-sig? I'm afraid we might lose some important >>> distutils-sig population if the process involves renaming the >>> list, resubscribing, etc. I also *really* don't want to >>> invalidate archive links to the distutils-sig archive. >>> >>> All in all, +1 on not having two lists, but I'm really worried >>> about "breaking" distutils-sig. We're still going to be talking >>> about "distribution utilities", after all. >> >> Worst case I'm sure subscribers can be transferred and the existing >> archive kept intact. > > That's a great way to have a bunch of people complaining that they > never subscribed to packaging-sig, not to mention the part where it > breaks everyone's mail filters. > > I really don't see any gains for renaming the list. It's not like we > can go and scrub the entire internet of references to distutils-sig. Not to mention breaking the gmane.org gateway, and those of us who sip the firehose there instead of trying to swallow it via e-mail. Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFUuS4ACgkQ+gerLs4ltQ4zXACguC0D2F3EEE7GT4DGXRa08hy7 FdYAoM56YpHef9J0ScKOdY2OHv/48LOv =3UtH -----END PGP SIGNATURE----- From donald at stufft.io Thu Mar 28 22:57:11 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 17:57:11 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On Mar 28, 2013, at 5:42 PM, Tres Seaver wrote: > Signed PGP part > On 03/28/2013 04:32 PM, PJ Eby wrote: > > On Thu, Mar 28, 2013 at 3:43 PM, Donald Stufft > > wrote: > >> On Mar 28, 2013, at 3:39 PM, PJ Eby wrote: > >>> Can we do it by just dropping catalog-sig and keeping > >>> distutils-sig? I'm afraid we might lose some important > >>> distutils-sig population if the process involves renaming the > >>> list, resubscribing, etc. I also *really* don't want to > >>> invalidate archive links to the distutils-sig archive. > >>> > >>> All in all, +1 on not having two lists, but I'm really worried > >>> about "breaking" distutils-sig. We're still going to be talking > >>> about "distribution utilities", after all. > >> > >> Worst case I'm sure subscribers can be transferred and the existing > >> archive kept intact. > > > > That's a great way to have a bunch of people complaining that they > > never subscribed to packaging-sig, not to mention the part where it > > breaks everyone's mail filters. > > > > I really don't see any gains for renaming the list. It's not like we > > can go and scrub the entire internet of references to distutils-sig. 
> > Not to mention breaking the gmane.org gateway, and those of us who sip > the firehose there instead of trying to swallow it via e-mail. > > > Tres. > - -- > =================================================================== > Tres Seaver +1 540-429-0999 tseaver at palladion.com > Palladion Software "Excellence by Design" http://palladion.com > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig This problem is inherent no matter what name is picked. GMane will need updated and some messages will need sent to tell people about the new name. No matter what at least one name isn't going to be used anymore. It's not that big of a deal. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Steve.Dower at microsoft.com Thu Mar 28 23:19:07 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Thu, 28 Mar 2013 22:19:07 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Daniel Holth wrote: > On Thu, Mar 28, 2013 at 3:49 PM, PJ Eby wrote: >> On Thu, Mar 28, 2013 at 1:54 PM, Steve Dower wrote: >>> And, I'm almost certain that most if not all existing ZIP tools on >>> Windows will fail to open files with a shebang, since they've >>> never had to deal with them. 
>> >> Actually, the opposite is true, at least for 3rd-party (non-Microsoft) >> archiving tools: they work even when there's a whole .exe file stuck >> on the front. ;-) >> >> Some of them require you to rename from .exe to .zip first, but some >> actually detect that an .exe is a stub in front of a zip file and give >> you extraction options in an Explorer right-click. >> >> So, no worries on the prepended data front, even if the extension is >> .zip. What you probably can't safely do is *modify* a .zip with >> prepended data... and there I'm just guessing, because I've never >> actually tried. > > It will all work. ZIP is cool! Every offset is relative from the end of the file. > > The wheel distribution even has a ZipFile subclass that lets you pop() files > off the end by truncating the file and rewriting the index. This will work > on any ordinary zip file that is just the concatenation of the compressed > files in zip directory order, without data or extra space between the > compressed files. Ah of course, I totally forgot that it works from the end of the file. I'd still rather they got a new extension though, since nobody is going to teach WinZip what "#! python" means. That's probably an issue for the handful of Windows devs on python-dev rather than here, though. 
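[Editorial note: the behaviour described above is easy to demonstrate with the standard library. Because a zip's central directory is located from the end of the file, a shebang line (or an .exe stub) prepended to the archive doesn't bother readers. A minimal sketch, with illustrative file and module names — this is essentially what the later `zipapp` module automates:]

```python
import os
import stat
import tempfile
import zipfile

# Build a tiny "executable zip": a shebang line followed by a normal
# zip archive containing __main__.py. Readers locate the central
# directory from the end of the file, so the prepended line is ignored.
archive = os.path.join(tempfile.mkdtemp(), "hello.pyz")

with open(archive, "wb") as f:
    f.write(b"#!/usr/bin/env python\n")          # prepended shebang
    with zipfile.ZipFile(f, "w") as zf:
        zf.writestr("__main__.py", "print('hello from a pyz')\n")

# Mark it executable so the shebang is honoured on Unix.
os.chmod(archive, os.stat(archive).st_mode | stat.S_IEXEC)

# Despite the prepended data, it is still a perfectly valid zip file.
print(zipfile.is_zipfile(archive))               # True
with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())                         # ['__main__.py']
```

[On Unix the resulting file can then be run directly (`./hello.pyz`), and the interpreter executes `__main__.py` from inside the archive.]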
Cheers, Steve From dholth at gmail.com Thu Mar 28 23:43:22 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 28 Mar 2013 18:43:22 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Thu, Mar 28, 2013 at 6:19 PM, Steve Dower wrote: > Daniel Holth wrote: >> On Thu, Mar 28, 2013 at 3:49 PM, PJ Eby wrote: >>> On Thu, Mar 28, 2013 at 1:54 PM, Steve Dower wrote: >>>> And, I'm almost certain that most if not all existing ZIP tools on >>>> Windows will fail to open files with a shebang, since they've >>>> never had to deal with them. >>> >>> Actually, the opposite is true, at least for 3rd-party (non-Microsoft) >>> archiving tools: they work even when there's a whole .exe file stuck >>> on the front. ;-) >>> >>> Some of them require you to rename from .exe to .zip first, but some >>> actually detect that an .exe is a stub in front of a zip file and give >>> you extraction options in an Explorer right-click. >>> >>> So, no worries on the prepended data front, even if the extension is >>> .zip. What you probably can't safely do is *modify* a .zip with >>> prepended data... and there I'm just guessing, because I've never >>> actually tried. >> >> It will all work. ZIP is cool! Every offset is relative from the end of the file. >> >> The wheel distribution even has a ZipFile subclass that lets you pop() files >> off the end by truncating the file and rewriting the index. This will work >> on any ordinary zip file that is just the concatenation of the compressed >> files in zip directory order, without data or extra space between the >> compressed files. 
> > Ah of course, I totally forgot that it works from the end of the file. > > I'd still rather they got a new extension though, since nobody is going to teach WinZip what "#! python" means. That's probably an issue for the handful of Windows devs on python-dev rather than here, though. > > > Cheers, > Steve WinZip will ignore anything in the front of the file since the zip directory doesn't reference it. The #! shebang is for Unix, would point to the correct Python, and the +x flag would make it executable. The mini PEP is for the .pyz registration and for publicity. From pje at telecommunity.com Fri Mar 29 00:28:14 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 19:28:14 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On Thu, Mar 28, 2013 at 5:15 PM, Jacob Kaplan-Moss wrote: > C'mon, folks, we're arguing about a name. That's about as close to > literal bikeshedding as we could get. I'm not arguing about the *name*. I just don't see the point in making everybody subscribe to a new list and change their mail filters (and update every book and webpage out there that mentions the distutils-sig), because a few people want to *change* the name -- a change that AFAICT doesn't actually provide any tangible benefit to anybody whatsoever. > How about we just let whoever has the keys make the change in whatever way's easiest and most logical for them? Because it's not up to just the person with the keys. Neither SIG is a mere mailing list, it's a Python special interest group, and SIGs have their own formation and termination processes. In particular, if you're going to start a new SIG, one of the requirements to be met is "in particular, no other SIG nor the general Python newsgroup is already more suitable" (per the Python SIG Creation Guidelines). 
It's hard to argue that distutils-sig isn't already more suitable than whatever is being proposed to take its place. From donald at stufft.io Fri Mar 29 00:45:55 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 28 Mar 2013 19:45:55 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On Mar 28, 2013, at 7:28 PM, PJ Eby wrote: > On Thu, Mar 28, 2013 at 5:15 PM, Jacob Kaplan-Moss wrote: >> C'mon, folks, we're arguing about a name. That's about as close to >> literal bikeshedding as we could get. > > I'm not arguing about the *name*. I just don't see the point in > making everybody subscribe to a new list and change their mail filters > (and update every book and webpage out there that mentions the > distutils-sig), because a few people want to *change* the name -- a > change that AFAICT doesn't actually provide any tangible benefit to > anybody whatsoever. > > >> How about we just let whoever has the keys make the change in whatever way's easiest and most logical for them? > > Because it's not up to just the person with the keys. Neither SIG is > a mere mailing list, it's a Python special interest group, and SIGs > have their own formation and termination processes. > > In particular, if you're going to start a new SIG, one of the > requirements to be met is "in particular, no other SIG nor the general > Python newsgroup is already more suitable" (per the Python SIG > Creation Guidelines). It's hard to argue that distutils-sig isn't > already more suitable than whatever is being proposed to take its > place. A requirement for a SIG is also that it has a clear goal and a start and end date. distutils-sig's goal is the distutils module. And the "end date" requirement seems to be completely ignored these days, so arguing strict adherence to the rules seems to be a wash.
I suggested packaging-sig because discussion jumps back and forth between distutils-sig and catalog-sig and neither name nor stated goal really reflected what the sig was actually about which was packaging in python in general. I also suggested packaging because it matched the other current sigs which are generic topics and not about a single module. But whatever, I hate the pointless duplication and just want to kill the overlap. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dennis.coldwell at gmail.com Fri Mar 29 01:19:54 2013 From: dennis.coldwell at gmail.com (Dennis Coldwell) Date: Thu, 28 Mar 2013 17:19:54 -0700 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: > But whatever, I hate the pointless duplication and just want to kill the overlap. Agree, +1 to merging into one list. On Thu, Mar 28, 2013 at 4:45 PM, Donald Stufft wrote: > > On Mar 28, 2013, at 7:28 PM, PJ Eby wrote: > > > On Thu, Mar 28, 2013 at 5:15 PM, Jacob Kaplan-Moss > wrote: > >> C'mon, folks, we're arguing about a name. That's about as close to > >> literal bikeshedding as we could get. > > > > I'm not arguing about the *name*. I just don't see the point in > > making everybody subscribe to a new list and change their mail filters > > (and update every book and webpage out there that mentions the > > distutils-sig), because a few people want to *change* the name -- a > > change that AFAICT doesn't actually provide any tangible benefit to > > anybody whatsoever. > > > > > >> How about we just let whoever has the keys make the change in whatever > way's easiest and most logical for them? 
> > > > Because it's not up to just the person with the keys. Neither SIG is > > a mere mailing list, it's a Python special interest group, and SIGs > > have their own formation and termination processes. > > > > In particular, if you're going to start a new SIG, one of the > > requirements to be met is "in particular, no other SIG nor the general > > Python newsgroup is already more suitable" (per the Python SIG > > Creation Guidelines). It's hard to argue that distutils-sig isn't > > already more suitable than whatever is being proposed to take its > > place. > > A requirement for a SIG is also that it has a clear goal and a start and > end date. distutils-sig's goal is the distutils module. And the "end date" > requirements seems to be completely ignored anymore so arguing strict > adherence to the rules seems to be a wash. > > I suggested packaging-sig because discussion jumps back and forth between > distutils-sig and catalog-sig and neither name nor stated goal really > reflected what the sig was actually about which was packaging in python in > general. I also suggested packaging because it matched the other current > sigs which are generic topics and not about a single module. But whatever, > I hate the pointless duplication and just want to kill the overlap. > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From barry at python.org Fri Mar 29 03:59:11 2013 From: barry at python.org (Barry Warsaw) Date: Thu, 28 Mar 2013 22:59:11 -0400 Subject: [Distutils] Merge catalog-sig and distutils-sig In-Reply-To: References: Message-ID: <20130328225911.513250fe@anarchist> On Mar 28, 2013, at 02:22 PM, Donald Stufft wrote: >Is there much point in keeping catalog-sig and distutils-sig separate? Without yet reading the whole thread, I'll just mention that it's probably easier to just retire one or the other mailing lists and divert all discussion to the other one. Of course, the archives for the retired list would be retained for historical purposes. In fact, sigs are *supposed* to be periodically reviewed for renewal or retirement, though I think practically speaking we haven't done that in a very long time. If there's consensus on what you want to do, please contact postmaster@ and let them know. Let's say you just want to retire catalog-sig: we can set up forwards to distutils-sig and let the former be an "acceptable alias" to the latter so postings will be accepted when addressed to either. Of course, folks on the defunct list should manually subscribe to the good list (i.e. opt-in). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From barry at python.org Fri Mar 29 04:00:38 2013 From: barry at python.org (Barry Warsaw) Date: Thu, 28 Mar 2013 23:00:38 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> References: <3BF298C9-293D-40FF-A86F-76206A88D162@stufft.io> Message-ID: <20130328230038.1cf3ccc3@anarchist> On Mar 28, 2013, at 03:42 PM, Donald Stufft wrote: >Don't care how it's done. I don't know Mailman enough to know what is >possible or how easy things are. 
I thought packaging-sig sounded nice but if >you can't rename + redirect or merge or something in mailman I'm down for >whatever. Renaming can be done, but it's a bit of a pain. Of course, we can keep the archives for any retired list, so urls don't need to break. OTOH, it's definitely easier just to keep distutils-sig and retire catalog-sig. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From richard at python.org Fri Mar 29 10:47:48 2013 From: richard at python.org (Richard Jones) Date: Fri, 29 Mar 2013 20:47:48 +1100 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On 29 March 2013 14:45, Tres Seaver wrote: > If we leave the main list the 'distutils-sig', and just announce that > 'catalog-sig' is retired, folks who want to follow the new list just > switch over. All the archives (mailman / gmane / etc.) stay valid, but > the list goes into moderated mode. Whoever has the power to do this, do it please. Richard From ncoghlan at gmail.com Fri Mar 29 20:40:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 05:40:58 +1000 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: On Fri, Mar 29, 2013 at 7:47 PM, Richard Jones wrote: > On 29 March 2013 14:45, Tres Seaver wrote: >> If we leave the main list the 'distutils-sig', and just announce that >> 'catalog-sig' is retired, folks who want to follow the new list just >> switch over. All the archives (mailman / gmane / etc.) stay valid, but >> the list goes into moderated mode. > > Whoever has the power to do this, do it please. +1 distutils-sig it is. 
We're expanding the charter to "the distutils standard library module, the Python Package Index and associated interoperability standards", but that's a lot easier than forcing everyone to rewrite their mail filters. Besides, it's gonna be a *long* time before the default build system in the standard library is anything other than distutils. Coupling the build system to the language release cycle has proven to be a *bad idea*, because the addition of new platform support needs to happen in a more timely fashion than language releases. The incorporation of pip bootstrapping into 3.4 will also make it a lot easier to recommend more readily upgraded alternatives. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Mar 29 20:43:06 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 29 Mar 2013 15:43:06 -0400 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> Message-ID: <7C8C91B8-46D7-4E74-9FEE-0E52F30F1132@stufft.io> On Mar 29, 2013, at 3:40 PM, Nick Coghlan wrote: > On Fri, Mar 29, 2013 at 7:47 PM, Richard Jones wrote: >> On 29 March 2013 14:45, Tres Seaver wrote: >>> If we leave the main list the 'distutils-sig', and just announce that >>> 'catalog-sig' is retired, folks who want to follow the new list just >>> switch over. All the archives (mailman / gmane / etc.) stay valid, but >>> the list goes into moderated mode. >> >> Whoever has the power to do this, do it please. > > +1 > > distutils-sig it is. We're expanding the charter to "the distutils > standard library module, the Python Package Index and associated > interoperability standards", but that's a lot easier than forcing > everyone to rewrite their mail filters. > > Besides, it's gonna be a *long* time before the default build system > in the standard library is anything other than distutils.
Coupling the > build system to the language release cycle has proven to be a *bad > idea*, because the addition of new platform support needs to happen in > a more timely fashion than language releases. The incorporation of pip > bootstrapping into 3.4 will also make it a lot easier to recommend > more readily upgraded alternatives. > > Cheers, > Nick. > > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Sounds good to me; whoever has the keys, please do the needful. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Fri Mar 29 20:55:32 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 05:55:32 +1000 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 2:02 AM, Vinay Sajip wrote: > Thanks for the feedback. How about if I change mount()/unmount() to: > > def mount(self, append=False, destdir=None): > """ > Unzip the wheel's contents to the specified directory, or to > a temporary directory if destdir is None. Add this directory to > sys.path, either appending or prepending according to whether > append is True or False. No, mutating sys.path for versioned imports is a broken design. You end up with two possibilities: * If you append, then you can't override modules that have a default version available on sys.path. This is not an acceptable restriction, which is why pkg_resources doesn't do it that way * If you prepend, then you have the existing pkg_resources failure mode where it can accidentally shadow more modules than intended.
This is a nightmare to debug when it goes wrong (it took me months to realise this was why a system install of the main application I work on was shadowing the version in source checkout when running the test suite or building the documentation). The correct way to do it is with a path hook that processes a special "" marker label in sys.path (probably placed after the standard library but before site-packages by default). Any mounted directories would be tracked by that path hook, but never included directly in sys.path itself. See http://mail.python.org/pipermail/distutils-sig/2013-March/020207.html for more on how this could be handled (consider mount/unmount as the lower level API for actually adding new path entries directly to the dynamic importer). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Mar 29 21:11:54 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 06:11:54 +1000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Fri, Mar 29, 2013 at 8:43 AM, Daniel Holth wrote: > WinZip will ignore anything in the front of the file since the zip > directory doesn't reference it. The #! shebang is for Unix, would > point to the correct Python, and the +x flag would make it executable. > The mini PEP is for the .pyz registration and for publicity. The two big reasons almost nobody knows about the executable zip files and directories is we forgot to mention it in the original 2.6 What's New (it's there now, but was added much later), and it was done in a tracker issue [1] (with Guido participating) rather than as a PEP. 
A new PEP to: * register the .pyz and .pyzw extensions in the 3.4 Windows installer * ship a tool for creating an executable pyz or pyzw file from a directory of pure-Python files (warning if any extension files are noticed, and with the option of bytecode precompilation) Would be great. Cheers, Nick. [1] http://bugs.python.org/issue1739468 -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pje at telecommunity.com Fri Mar 29 21:42:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 29 Mar 2013 16:42:08 -0400 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 3:55 PM, Nick Coghlan wrote: > No, mutating sys.path for versioned imports is a broken design. You > end up with two possibilities: > > * If you append, then you can't override modules that have a default > version available on sys.path. This is not an acceptable restriction, > which is why pkg_resources doesn't do it that way > * If you prepend, then you have the existing pkg_resources failure > mode where it can accidentally shadow more modules than intended. This > is a nightmare to debug when it goes wrong (it took me months to > realise this was why a system install of the main application I work > on was shadowing the version in source checkout when running the test > suite or building the documentation). > > The correct way to do it is with a path hook that processes a special > "" marker label in sys.path (probably placed after > the standard library but before site-packages by default). Any mounted > directories would be tracked by that path hook, but never included > directly in sys.path itself. How is that different from replacing "" with the path of the versioned package being added? 
From ncoghlan at gmail.com Fri Mar 29 21:50:26 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 06:50:26 +1000 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 6:42 AM, PJ Eby wrote: > On Fri, Mar 29, 2013 at 3:55 PM, Nick Coghlan wrote: >> No, mutating sys.path for versioned imports is a broken design. You >> end up with two possibilities: >> >> * If you append, then you can't override modules that have a default >> version available on sys.path. This is not an acceptable restriction, >> which is why pkg_resources doesn't do it that way >> * If you prepend, then you have the existing pkg_resources failure >> mode where it can accidentally shadow more modules than intended. This >> is a nightmare to debug when it goes wrong (it took me months to >> realise this was why a system install of the main application I work >> on was shadowing the version in source checkout when running the test >> suite or building the documentation). >> >> The correct way to do it is with a path hook that processes a special >> "" marker label in sys.path (probably placed after >> the standard library but before site-packages by default). Any mounted >> directories would be tracked by that path hook, but never included >> directly in sys.path itself. > > How is that different from replacing "" with the > path of the versioned package being added? You don't lose the place where you want the inserts to happen. Without the marker, you end up having to come up with a heuristic for "make insertions here" and that gets messy as you modify the path (particularly since other code may also modify the path without your involvement). Using a path hook to say "process these additional path entries here" cleans all that up and lets you precisely control the relative precedence of the additions as a group, without needing to care about their contents, or the other contents of sys.path. Cheers, Nick. 
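[Editorial note: the marker-and-hook scheme Nick describes can be sketched with today's `importlib` machinery. The marker string below is a made-up stand-in — the actual label used in the thread was lost to the archive's HTML scrubbing:]

```python
import sys
import importlib.machinery

MARKER = "<mounted-wheels>"   # hypothetical marker label; the real name
                              # from the thread was scrubbed by the archive
mounted_dirs = []             # directories added by mount(), in priority order

class _MarkerFinder:
    """Path entry finder returned for the MARKER entry on sys.path.

    It searches the mounted directories in order, so where MARKER sits
    in sys.path controls the precedence of all mounts as a group.
    """
    def find_spec(self, fullname, target=None):
        for directory in mounted_dirs:
            spec = importlib.machinery.PathFinder.find_spec(fullname, [directory])
            if spec is not None:
                return spec
        return None

def _marker_hook(entry):
    if entry == MARKER:
        return _MarkerFinder()
    raise ImportError              # not our entry; let the next hook try

sys.path_hooks.insert(0, _marker_hook)
sys.path.insert(1, MARKER)         # e.g. after the script directory
```

[With this in place, mount() just appends a directory to `mounted_dirs` and unmount() removes it; `sys.path` itself is never touched again, and moving MARKER moves the precedence of every mounted wheel as a group.]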
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Fri Mar 29 22:28:00 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 29 Mar 2013 14:28:00 -0700 Subject: [Distutils] "packaging-user" mailing list? Message-ID: Some of the pypa people have been discussing beginning a "packaging-user" mailing list. - It would be open to *any* packaging or install user issues. - It would be on python.org - pip/virtualenv would use it instead of our "virtualenv" list - Other projects could (would) use it too: Setuptools (old and new), Distribute, wheel, buildout, bento? etc... I think Python users would appreciate the simplicity of this, and I think the support would be better too. distutils-sig would maintain its current focus as a place for development discussions. thoughts? Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Mar 29 23:15:39 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 29 Mar 2013 18:15:39 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: Would pyzw be much better than reading the shebang line for Windows? On Mar 29, 2013 4:11 PM, "Nick Coghlan" wrote: > On Fri, Mar 29, 2013 at 8:43 AM, Daniel Holth wrote: > > WinZip will ignore anything in the front of the file since the zip > > directory doesn't reference it. The #! shebang is for Unix, would > > point to the correct Python, and the +x flag would make it executable. > > The mini PEP is for the .pyz registration and for publicity.
> > The two big reasons almost nobody knows about the executable zip files > and directories is we forgot to mention it in the original 2.6 What's > New (it's there now, but was added much later), and it was done in a > tracker issue [1] (with Guido participating) rather than as a PEP. > > A new PEP to: > > * register the .pyz and .pyzw extensions in the 3.4 Windows installer > * ship a tool for creating an executable pyz or pyzw file from a > directory of pure-Python files (warning if any extension files are > noticed, and with the option of bytecode precompilation) > > Would be great. > > Cheers, > Nick. > > [1] http://bugs.python.org/issue1739468 > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Mar 29 23:31:05 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 29 Mar 2013 22:31:05 +0000 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On 29 March 2013 22:15, Daniel Holth wrote: > Would pyzw be much better than reading the shebang line for Windows? Yes. A different executable has to be run (console or windows). A .pyz file with pythonw in the shebang would run py.exe and flash up a console window before starting pythonw.exe. 
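[Editorial note: Paul's point is that the console-vs-GUI choice must be made before any file contents are read. With distinct .pyz/.pyzw extensions the file association decides up front, whereas dispatching on the shebang means a launcher has to open each archive first. A sketch of that extra step — `read_shebang` is a hypothetical helper, not part of any real launcher:]

```python
import zipfile

def read_shebang(path):
    """Return the shebang line of an executable archive, or None.

    This is the extra work a Windows launcher would have to do if it
    dispatched on the shebang instead of on the file extension.
    """
    with open(path, "rb") as f:
        first_line = f.readline()
    if first_line.startswith(b"#!") and zipfile.is_zipfile(path):
        return first_line.rstrip().decode("utf-8")
    return None
```

[A `#!/usr/bin/env pythonw` shebang could then select pythonw.exe, but only after a console window has already been created for the console-mode launcher itself — exactly the flash Paul describes.]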
Paul From pje at telecommunity.com Fri Mar 29 23:52:06 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 29 Mar 2013 18:52:06 -0400 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 4:50 PM, Nick Coghlan wrote: > On Sat, Mar 30, 2013 at 6:42 AM, PJ Eby wrote: >> On Fri, Mar 29, 2013 at 3:55 PM, Nick Coghlan wrote: >>> No, mutating sys.path for versioned imports is a broken design. You >>> end up with two possibilities: >>> >>> * If you append, then you can't override modules that have a default >>> version available on sys.path. This is not an acceptable restriction, >>> which is why pkg_resources doesn't do it that way >>> * If you prepend, then you have the existing pkg_resources failure >>> mode where it can accidentally shadow more modules than intended. This >>> is a nightmare to debug when it goes wrong (it took me months to >>> realise this was why a system install of the main application I work >>> on was shadowing the version in source checkout when running the test >>> suite or building the documentation). >>> >>> The correct way to do it is with a path hook that processes a special >>> "" marker label in sys.path (probably placed after >>> the standard library but before site-packages by default). Any mounted >>> directories would be tracked by that path hook, but never included >>> directly in sys.path itself. >> >> How is that different from replacing "" with the >> path of the versioned package being added? > > You don't lose the place where you want the inserts to happen. Without > the marker, you end up having to come up with a heuristic for "make > insertions here" and that gets messy as you modify the path > (particularly since other code may also modify the path without your > involvement). But at least you can tell exactly what the order is by inspecting sys.path. 
> Using a path hook to say "process these additional path > entries here" cleans all that up and lets you precisely control the > relative precedence of the additions as a group, without needing to > care about their contents, or the other contents of sys.path. Then can't you just bracket the entries with "" and "" then? ;-) (That would actually work right now without a path hook, since strings that refer to non-existent paths are optimized away even in Python 2.5+.) Also, btw, pkg_resources's sys.path munging is far more aggressive than anything with wheels needs to be, because pkg_resources is *always* working with individual eggs, not plain install directories. If you are only using standalone wheels to handle alternate versions, then you *definitely* want those standalone wheels to override other things, so a strategy of always placing them immediately before their containing directory is actually safe. (In effect, treating the containing directory as the insertion marker.) So, if the standalone wheel is in site-packages, then activating it would place it just ahead of site-packages -- i.e., the same place you're saying it should go. And as I believe I mentioned before, a single marker for insertion points doesn't address the user site-packages, app directory or app plugin directories, etc. AFAICT your proposal only addresses the needs of system packages, and punts on everything else. At the least, may I suggest that instead of using a marker, if you must use a path hook, simply install the path hook as a wrapper for a specific directory (e.g. site-packages), and let it process insertions for *that directory only*, rather than having a single global notion of "all the versioned packages". 
From ncoghlan at gmail.com Sat Mar 30 00:53:39 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 09:53:39 +1000 Subject: [Distutils] Importable wheels using distlib/distil In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 8:52 AM, PJ Eby wrote: > On Fri, Mar 29, 2013 at 4:50 PM, Nick Coghlan wrote: >> You don't lose the place where you want the inserts to happen. Without >> the marker, you end up having to come up with a heuristic for "make >> insertions here" and that gets messy as you modify the path >> (particularly since other code may also modify the path without your >> involvement). > > But at least you can tell exactly what the order is by inspecting sys.path. Agreed that introspection support for metapath importers and path hooks is currently lacking. >> Using a path hook to say "process these additional path >> entries here" cleans all that up and lets you precisely control the >> relative precedence of the additions as a group, without needing to >> care about their contents, or the other contents of sys.path. > > Then can't you just bracket the entries with "" and > "" then? ;-) > > (That would actually work right now without a path hook, since strings > that refer to non-existent paths are optimized away even in Python > 2.5+.) Sure, you *could*, but then you're effectively embedding a list inside another list and it would be a lot cleaner to just say "at this point, go consult that other dynamically modified list over there". > Also, btw, pkg_resources's sys.path munging is far more aggressive > than anything with wheels needs to be, because pkg_resources is > *always* working with individual eggs, not plain install directories. > If you are only using standalone wheels to handle alternate versions, > then you *definitely* want those standalone wheels to override other > things, so a strategy of always placing them immediately before their > containing directory is actually safe. 
(In effect, treating the > containing directory as the insertion marker.) Yes, that was the other notion I had - insert the extra directory immediately before the directory where the metadata was located. > So, if the standalone wheel is in site-packages, then activating it > would place it just ahead of site-packages -- i.e., the same place > you're saying it should go. > > And as I believe I mentioned before, a single marker for insertion > points doesn't address the user site-packages, app directory or app > plugin directories, etc. AFAICT your proposal only addresses the > needs of system packages, and punts on everything else. No, it doesn't - all it does is provide a clear demarcation between the default system path provided by the interpreter and the ad hoc runtime modifications used to gain access to additional packages that aren't available by default. At the moment, if you print out sys.path, you have *no idea* what it originally looked like before the application started modifying it (process global shared state is always fun that way). *What* directories can be added is then entirely up to the manipulation API. I guess we could easily enough snapshot the path during the interpreter initialisation to help diagnose issues with post-startup modifications. > At the least, may I suggest that instead of using a marker, if you > must use a path hook, simply install the path hook as a wrapper for a > specific directory (e.g. site-packages), and let it process insertions > for *that directory only*, rather than having a single global notion > of "all the versioned packages". It's not really "all the versioned packages", it's "all the packages found through this particular path manipulation API". Cheers, Nick. 
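[Editor's note] For illustration, the "bracket the entries with marker strings" idea being debated could look like the sketch below. The marker names are hypothetical; the trick relies only on the fact that sys.path entries naming non-existent paths are skipped by the import system:

```python
import sys

# Hypothetical marker names -- any strings that are not real paths
# work, since non-existent entries are ignored during import.
EXTRAS_BEGIN = "<extra-paths-begin>"
EXTRAS_END = "<extra-paths-end>"

def add_extra_paths(paths, position=0):
    """Insert path entries between the two markers, so the group's
    precedence relative to the rest of sys.path stays fixed no
    matter how often new entries are added."""
    if EXTRAS_BEGIN not in sys.path:
        # Place the marker pair once, at the requested position.
        sys.path[position:position] = [EXTRAS_BEGIN, EXTRAS_END]
    index = sys.path.index(EXTRAS_END)
    sys.path[index:index] = list(paths)
```

Nick's objection above is that this embeds a list inside another list; a path hook consulting a separate, dynamically modified list would express the same grouping without the markers.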
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sat Mar 30 15:01:16 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 30 Mar 2013 14:01:16 +0000 (UTC) Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: Nick Coghlan gmail.com> writes: > * ship a tool for creating an executable pyz or pyzw file from a > directory of pure-Python files This could be a variant of my pyzzer.py tool: https://gist.github.com/vsajip/5276787 It doesn't print warnings for extensions or byte-compile anything, but otherwise does more or less what you mentioned. Regards, Vinay Sajip From chris at python.org Sat Mar 30 17:01:14 2013 From: chris at python.org (Chris Withers) Date: Sat, 30 Mar 2013 16:01:14 +0000 Subject: [Distutils] "packaging-user" mailing list? In-Reply-To: References: Message-ID: <51570C4A.9050103@python.org> On 29/03/2013 21:28, Marcus Smith wrote: > distutils-sig would maintain it's current focus as a place for > development discussions. > > thoughts? -1. We want less lists, not more. I'd say just roll it all into distutils and be done with it. cheers, Chris -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From pje at telecommunity.com Sat Mar 30 17:51:03 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 30 Mar 2013 12:51:03 -0400 Subject: [Distutils] "packaging-user" mailing list? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 5:28 PM, Marcus Smith wrote: > Some of the pypa people have been discussing beginning a "packaging-user" > mailing list. > > - It would be open to *any* packaging or install user issues. 
> - It would be on python.org > - pip/virtualenv would use it instead of our "virtualenv" list > - Other projects could(would) use it too: Setuptools (old and new), > Distribute, wheel, buildout, bento? etc... > > I think python users would appreciate the simplicity of this, and the > support would be better too I think. But all the developers who actually give support are here, aren't they? Certainly I do setuptools support here, and Jim does buildout support. And all the people already being sent here from other sources aren't going to stop being sent here. > distutils-sig would maintain it's current focus as a place for development > discussions. It's actually mostly a support forum that has periodic surges of development discussion. Granted, the current development surge is bigger and longer than any that have happened in a while. ;-) From brett at yvrsfo.ca Sat Mar 30 16:39:23 2013 From: brett at yvrsfo.ca (Brett Cannon) Date: Sat, 30 Mar 2013 11:39:23 -0400 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On Mar 29, 2013 4:12 PM, "Nick Coghlan" wrote: > > On Fri, Mar 29, 2013 at 8:43 AM, Daniel Holth wrote: > > WinZip will ignore anything in the front of the file since the zip > > directory doesn't reference it. The #! shebang is for Unix, would > > point to the correct Python, and the +x flag would make it executable. > > The mini PEP is for the .pyz registration and for publicity. 
> > The two big reasons almost nobody knows about the executable zip files > and directories is we forgot to mention it in the original 2.6 What's > New (it's there now, but was added much later), and it was done in a > tracker issue [1] (with Guido participating) rather than as a PEP. > > A new PEP to: > > * register the .pyz and .pyzw extensions in the 3.4 Windows installer > * ship a tool for creating an executable pyz or pyzw file from a > directory of pure-Python files (warning if any extension files are > noticed, and with the option of bytecode precompilation) > > Would be great. And that pre-compilation could even do it for multiple versions of Python thanks to __pycache__. I've actually contemplated creating a distutils command to do this exact thing. Been thinking about this since 2010: http://sayspy.blogspot.ca/2010/03/various-ways-of-distributing-python.html > > Cheers, > Nick. > > [1] http://bugs.python.org/issue1739468 > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sat Mar 30 20:22:54 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 30 Mar 2013 15:22:54 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support Message-ID: Python ZIP Application Support - https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing PEP: 4XX Title: Improving Python ZIP Application Support Author: Daniel Holth Status: Draft Type: Standards Track Python-Version: 3.4 Created: 30 March 2013 Post-History: 30 March 2013 Improving Python ZIP Application Support Python has had the ability to execute directories or ZIP-format archives as scripts since version 2.6. 
When invoked with a zip file or directory as its first argument the interpreter adds that directory to sys.path and executes the __main__ module. These archives provide a great way to publish software that needs to be distributed as a single file script but is complex enough to need to be written as a collection of modules. This feature is not as popular as it should be for two reasons: a) users haven't heard of it because it wasn't mentioned in earlier versions of Python 2.6 "What's New" document, and b) Windows users don't have a file extension (other than .py) to associate with the launcher. This PEP proposes to fix these problems by re-publicising the feature, defining the .pyz and .pyzw extensions as "Python ZIP applications" and "Windowed Python Zip Applications", and providing some simple tooling to manage the format. A New Python ZIP Application Extension The Python 3.4 installer will associate .pyz and .pyzw "Python ZIP Applications" with itself so they can be executed by the Windows launcher. A .pyz archive is a console application and a .pyzw archive is a windowed application. This indicates whether the console should appear when running the app. Why not use .zip or .py? Users expect a .zip file would be opened with an archive tool, and users expect .py to be opened with a text editor. Both would be confusing for this use case. For UNIX users, .pyz applications should be prefixed with a #! line pointing to the correct Python interpreter and an optional explanation. #!/usr/bin/env python # This is a Python application stored in a ZIP archive. (binary contents of archive) As background, ZIP archives are defined with a footer containing relative offsets from the end of the file. They remain valid when concatenated to the end of any other file. This feature is completely standard and is how self-extracting ZIP archives and the bdist_wininst installer format work.
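[Editor's note] The shebang-plus-archive layout described above can be produced with nothing but the standard zipfile module. A minimal sketch — make_pyz is a hypothetical helper for illustration, not the tooling the PEP proposes:

```python
import io
import os
import stat
import zipfile

def make_pyz(source_dir, target, interpreter="/usr/bin/env python"):
    """Write a #! line followed by a ZIP archive of source_dir.
    The result stays a valid ZIP because the ZIP directory at the
    end of the file uses offsets relative to the end of the file."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, source_dir))
    with open(target, "wb") as f:
        f.write(b"#!" + interpreter.encode("ascii") + b"\n")
        f.write(buf.getvalue())
    # Set the executable bits so Unix shells will run it directly.
    mode = os.stat(target).st_mode
    os.chmod(target, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```

Given a directory containing a __main__.py, the resulting app.pyz runs with `python app.pyz` (or `./app.pyz` on Unix, thanks to the shebang).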
Minimal Tooling: The pyzaa Module This PEP also proposes including a simple application for working with Python ZIP Archives: The Python Zip Application Archiver "pyzaa" (rhymes with "huzzah" or "pizza"). "pyzaa" can archive or extract these files, compile bytecode, and can write the __main__ module if it is not present. Usage python -m pyzaa (pack | unpack | compile) python -m pyzaa pack [-o path/name] [-m module.submodule:callable] [-c] [-w] [-p interpreter] directory: ZIP the contents of directory as directory.pyz or [-w] directory.pyzw. Adds the executable flag to the archive. -c compile .pyc files and add them to the archive -p interpreter include #!interpreter as the first line of the archive -o path/name archive is written to path/name.pyz[w] instead of dirname. The extension is added if not specified. -m module.submodule:callable __main__.py is written as "import module.submodule; module.submodule.callable()" pyzaa pack will warn if the directory contains C extensions or if it doesn't contain __main__.py. python -m pyzaa unpack arcname.pyz[w] The archive is unpacked into a directory [arcname] python -m pyzaa compile arcname.pyz[w] The Python files in arcname.pyz[w] are compiled and appended to the ZIP file. References [1] http://bugs.python.org/issue1739468 "Allow interpreter to execute a zip file" Copyright This document has been placed into the public domain. From qwcode at gmail.com Sat Mar 30 21:32:39 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 30 Mar 2013 13:32:39 -0700 Subject: [Distutils] "packaging-user" mailing list? In-Reply-To: References: Message-ID: >> We want less lists, not more. the basic math would be adding "packaging-user" and dropping "virtualenv", so no more lists on the whole. if you're not a virtualenv subscriber, then yes, it's one more list, but honestly, this isn't about the active *-sig people, but the users. see below.
>> it's actually mostly a support forum that has periodic surges of development discussion the charter sounds like a dev list, and I think the surge is here to stay until the packaging house is in order. I think distutils-sig only kinda works now as both, because most of the user traffic doesn't end up here, but rather goes to virtualenv, SO, Python-list, etc... But the problem with those lists is the lack of certainty about answers across the whole space. If we: - properly described the distutils-sig list in its charter as a user list too - *and* announced it as such, on "Python-announce" - *and* made it prominent in the new "Python Packaging User Guide" (which is in the works; another thread for that later) Then: I think the reality of having distutils-sig serve both would sink in as not ideal. possibly not? The dev/user list distinction is pretty common, so my instinct is to follow that rut here too. my motivation is to have a better "joe user" story. **** currently (slightly exaggerated for fun) **** - joe encounters confusing pip error (due to pip being built on shaky packaging ground) - joe: "well, there's the pip user list called 'virtualenv'." - co-worker: "that error could actually be due to Setuptools, which pip uses I think; pip's support for Setuptools is best effort I hear." - joe: "hmm, Setuptools hasn't been released for a while. its website says to use 'distutils-sig', but that sounds like a dev list. maybe that info is outdated." - co-worker: "oh, you're on python3, so I think you're using Distribute. there's some way to tell, but I'm not sure" - joe: "Distribute lists distutils-sig for 'Feedback and getting involved', hmm." - joe: "the distutils-sig archive sure looks like an active dev list. I might be interfering with silly user questions." - co-worker: "maybe try the 'Fellowship of the Packaging' list; that sounds friendly. but wait, it looks dead."
- joe: "let me just post to Stack overflow" - co-worker: "hey, check it out, somebody just responded to your post" - joe: "I guess I'll wait a few days, and see if the answer gets votes, so I can tell if it's right or not" **** what I want **** - joe encounters error... - joe: "I'll just post to 'packaging-user'. everybody uses that now and gets answers they can usually trust." - joe: "If I want to follow dev discussions, I can join 'distutils-sig', but that's more pain than I want to see" Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Sat Mar 30 23:20:15 2013 From: richard at python.org (Richard Jones) Date: Sun, 31 Mar 2013 09:20:15 +1100 Subject: [Distutils] [Catalog-sig] Merge catalog-sig and distutils-sig In-Reply-To: <7C8C91B8-46D7-4E74-9FEE-0E52F30F1132@stufft.io> References: <3280C8A6-FF28-4AE5-B509-B6C543371538@stufft.io> <7C8C91B8-46D7-4E74-9FEE-0E52F30F1132@stufft.io> Message-ID: I've set the wheels in motion. I just need a little help from the pydotorg volunteers (and some hits from the mailman cluebat). Richard On 30 March 2013 06:43, Donald Stufft wrote: > > On Mar 29, 2013, at 3:40 PM, Nick Coghlan wrote: > > > On Fri, Mar 29, 2013 at 7:47 PM, Richard Jones > wrote: > >> On 29 March 2013 14:45, Tres Seaver wrote: > >>> If we leave the main list the 'distutils-sig', and just announce that > >>> 'catalog-sig' is retired, folks who want to follow the new list just > >>> switch over. All the archives (mailman / gmane / etc.) stay valid, but > >>> the list goes into moderated mode. > >> > >> Whoever has the power to do this, do it please. > > > > +1 > > > > distutils-sig it is. We're expanding the charter to "the distutils > > standard library module, the Python Package Index and associated > > interoperabilty standards", but that's a lot easier than forcing > > everyone to rewrite their mail filters. 
> > > > Besides, it's gonna be a *long* time before the default build system > > in the standard library is anything other than distutils. Coupling the > > build system to the language release cycle has proven to be a *bad > > idea*, because the addition of new platform support needs to happen in > > a more timely fashion than language releases. The incorporation of pip > > bootstrapping into 3.4 will also make it a lot easier to recommend > > more readily upgraded alternatives. > > > > Cheers, > > Nick. > > > > > > -- > > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > Sounds good to me, whoever please to doing the needful. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Sat Mar 30 23:56:52 2013 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 30 Mar 2013 18:56:52 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/30/2013 03:22 PM, Daniel Holth wrote: > Python ZIP Application Support - > https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing +1, > but I think this actually belongs on python-dev (AFAICT it is unrelated to distutils, setuptools, etc.) Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlFXbbQACgkQ+gerLs4ltQ5+OgCgqP75xYWG8TGxiT3efvjZWe5U YT4AoKTQJMhkbkU4KBoqmhOCDUl+5/Xo =ERp8 -----END PGP SIGNATURE----- From brett at yvrsfo.ca Sun Mar 31 00:07:30 2013 From: brett at yvrsfo.ca (Brett Cannon) Date: Sat, 30 Mar 2013 19:07:30 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Mar 30, 2013 3:23 PM, "Daniel Holth" wrote: > > Python ZIP Application Support - > https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing > > > PEP: 4XX > > Title: Improving Python ZIP Application Support > > Author: Daniel Holth > > Status: Draft > > Type: Standards Track > > Python-Version: 3.4 > > Created: 30 March 2013 > > Post-History: 30 March 2013 > > > Improving Python ZIP Application Support > > > Python has had the ability to execute directories or ZIP-format > archives as scripts since version 2.6. When invoked with a zip file or > directory as its first argument the interpreter adds that directory to > sys.path and executes the __main__ module. These archives provide a > great way to publish software that needs to be distributed as a single > file script but is complex enough to need to be written as a > collection of modules. > > > This feature is not as popular as it should be for two reasons: a) > users haven't heard of it because it wasn't mentioned in earlier > versions of Python 2.6 "What's New" document, and b) Windows users > don't have a file extension (other than .py) to associate with the > launcher.
> > This PEP proposes to fix these problems by re-publicising the feature, > defining the .pyz and .pyzw extensions as "Python ZIP applications" > and "Windowed Python Zip Applications", and providing some simple > tooling to manage the format. > > A New Python ZIP Application Extension > > > The Python 3.4 installer will associate .pyz and .pyzw "Python ZIP > Applications" with itself so they can be executed by the Windows > launcher. A .pyz archive is a console application and a .pyzw archive > is a windowed application. This indicates whether the console should > appear when running the app. > > > Why not use .zip or .py? Users expect a .zip file would be opened with > an archive tool, and users expect .py to be opened with a text editor. > Both would be confusing for this use case. > > > For UNIX users, .pyz applications should be prefixed with a #! line > pointing to the correct Python interpreter and an optional > explanation. > > > #!/usr/bin/env python > > # This is a Python application stored in a ZIP archive. ... built using pyzaa. > > (binary contents of archive) > > > As background, ZIP archives are defined with a footer containing > relative offsets from the end of the file. They remain valid when > concatenated to the end of any other file. This feature is completely > standard and is how self-extracting ZIP archives and the bdist_wininst > installer format work. > > Minimal Tooling: The pyzaa Module > > This PEP also proposes including a simple application for working with > Python ZIP Archives: The Python Zip Application Archiver "pyzaa" > (rhymes with "huzzah" or "pizza"). "pyzaa" can archive or extract > these files, compile bytecode, and can write the __main__ module if it > is not present. > > Usage > > python -m pyzaa (pack | unpack | compile) > > > python -m pyzaa pack [-o path/name] [-m module.submodule:callable] > [-c] [-w] [-p interpreter] directory: > > ZIP the contents of directory as directory.pyz or [-w] > directory.pyzw.
Adds the executable flag to the archive. > > -c compile .pyc files and add them to the archive > > -p interpreter include #!interpreter as the first line of the archive Would `/usr/bin/env python` (or python3 depending on interpreter used to compile) be set otherwise? Or how about the specific python version to prevent possible future-compatibility issues (e.g. specifying python3.3)? > > -o path/name archive is written to path/name.pyz[w] instead of > dirname. The extension is added if not specified. > > -m module.submodule:callable __main__.py is written as "import > module.submodule; module.submodule.callable()" > > > pyzaa pack will warn if the directory contains C extensions or if > it doesn't contain __main__.py. > > > python -m pyzaa unpack arcname.pyz[w] > > The archive is unpacked into a directory [arcname] Is this truly necessary? If it's a zip file any archiving tool can unzip it. Heck, we can add a CLI to the zipfile module if it doesn't already have one. > > > python -m pyzaa compile arcname.pyz[w] > > The Python files in arcname.pyz[w] are compiled and appended to > the ZIP file. I would suggest allowing multiple versions for compilation (when specified). There should also be a check that people don't specify multiple versions that can't exist in the same directory (e.g. 2.6 and 2.7). Otherwise the compileall module's CLI handles this and people can call it with different interpreters as necessary. IOW I'm advocating KISS for what the tool does. Since making the zip file is the only really tricky bit it should only handle that case. Heck you can make it part of the zipfile module if there is resistance to adding yet another script that python installs (obviously will need a Cheeseshop package for older versions). Otherwise I like everything else. -Brett > > References > > [1] http://bugs.python.org/issue1739468 "Allow interpreter to execute > a zip file" > > Copyright > > This document has been placed into the public domain.
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Mar 31 00:30:02 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 30 Mar 2013 19:30:02 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 7:07 PM, Brett Cannon wrote: > > On Mar 30, 2013 3:23 PM, "Daniel Holth" wrote: >> >> Python ZIP Application Support - >> >> https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing >> >> >> PEP: 4XX >> >> Title: Improving Python ZIP Application Support >> >> Author: Daniel Holth >> >> Status: Draft >> >> Type: Standards Track >> >> Python-Version: 3.4 >> >> Created: 30 March 2013 >> >> Post-History: 30 March 2013 >> >> >> Improving Python ZIP Application Support >> >> >> Python has had the ability to execute directories or ZIP-format >> archives as scripts since version 2.6. When invoked with a zip file or >> directory as its first argument the interpreter adds that directory to >> sys.path and executes the __main__ module. These archives provide a >> great way to publish software that needs to be distributed as a single >> file script but is complex enough to need to be written as a >> collection of modules. >> >> >> This feature is not as popular as it should be for two reasons: a) >> users haven't heard of it because it wasn't mentioned in earlier >> versions of Python 2.6 "What's New" document, and b) Windows users >> don't have a file extension (other than .py) to associate with the >> launcher. >> >> >> This PEP proposes to fix these problems by re-publicising the feature, >> defining the .pyz and .pyzw extensions as "Python ZIP applications"
>> and "Windowed Python Zip Applications", and providing some simple >> tooling to manage the format. >> >> A New Python ZIP Application Extension >> >> >> The Python 3.4 installer will associate .pyz and .pyzw "Python ZIP >> Applications" with itself so they can be executed by the Windows >> launcher. A .pyz archive is a console application and a .pyzw archive >> is a windowed application. This indicates whether the console should >> appear when running the app. >> >> >> Why not use .zip or .py? Users expect a .zip file would be opened with >> an archive tool, and users expect .py to be opened with a text editor. >> Both would be confusing for this use case. >> >> >> For UNIX users, .pyz applications should be prefixed with a #! line >> pointing to the correct Python interpreter and an optional >> explanation. >> >> >> #!/usr/bin/env python >> >> # This is a Python application stored in a ZIP archive. > > ... built using pyzaa. > >> >> (binary contents of archive) >> >> >> As background, ZIP archives are defined with a footer containing >> relative offsets from the end of the file. They remain valid when >> concatenated to the end of any other file. This feature is completely >> standard and is how self-extracting ZIP archives and the bdist_wininst >> installer format work. >> >> Minimal Tooling: The pyzaa Module >> >> This PEP also proposes including a simple application for working with >> Python ZIP Archives: The Python Zip Application Archiver "pyzaa" >> (rhymes with "huzzah" or "pizza"). "pyzaa" can archive or extract >> these files, compile bytecode, and can write the __main__ module if it >> is not present. >> >> Usage >> >> python -m pyzaa (pack | unpack | compile) >> >> >> python -m pyzaa pack [-o path/name] [-m module.submodule:callable] >> [-c] [-w] [-p interpreter] directory: >> >> ZIP the contents of directory as directory.pyz or [-w] >> directory.pyzw. Adds the executable flag to the archive.
>> >> -c compile .pyc files and add them to the archive >> >> -p interpreter include #!interpreter as the first line of the archive > > Would `/usr/bin/env python` (or python3 depending on interpreter used to > compile) be set otherwise? Or how about the specific python version to > prevent possible future-compatibility issues (e.g. specifying python3.3)? > >> >> -o path/name archive is written to path/name.pyz[w] instead of >> dirname. The extension is added if not specified. >> >> -m module.submodule:callable __main__.py is written as "import >> module.submodule; module.submodule.callable()" >> >> >> pyzaa pack will warn if the directory contains C extensions or if >> it doesn't contain __main__.py. >> >> >> python -m pyzaa unpack arcname.pyz[w] >> >> The archive is unpacked into a directory [arcname] > > Is this truly necessary? If it's a zip file any archiving tool can unzip it. > Heck, we can add a CLI to the zipfile module if it doesn't already have one. It is a convenience so that the contents don't wind up in $PWD, I like the idea of dropping it though. I'll be sure to emphasize that these are completely standard ZIP archives with a new extension. I'm pretty sure zipfile has a CLI. The pack command is a convenience on top of zip & cat. Hope we don't need ignore / MANIFEST.in type features... >> >> >> python -m pyzaa compile arcname.pyz[w] >> >> The Python files in arcname.pyz[w] are compiled and appended to >> the ZIP file. > > I would suggest allowing multiple versions for compilation (when specified). > There should also be a check that people don't specify multiple versions > that can't exist in the same directory (e.g. 2.6 and 2.7). Otherwise the > compileall module's CLI handles this and people can call it with different > interpreters as necessary. If that's easy with compileall then great. > IOW I'm advocating KISS for what the tool does. Since making the zip file is > the only really tricky bit it should only handle that case.
Heck you can make it > part of the zipfile module if there is resistance to adding yet another > script that python installs (obviously will need a Cheeseshop package for > older versions). > > Otherwise I like everything else. Thanks! > -Brett > >> >> References >> >> [1] http://bugs.python.org/issue1739468 "Allow interpreter to execute >> a zip file" >> >> Copyright >> >> This document has been placed into the public domain. >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig From sontek at gmail.com Sun Mar 31 00:32:28 2013 From: sontek at gmail.com (John Anderson) Date: Sat, 30 Mar 2013 16:32:28 -0700 Subject: [Distutils] Help with buildout Message-ID: I'm trying to use buildout to pull in 4 github repositories and combine their wsgi applications together using paste's urlmap. A few things aren't working the way I expect and so I'm looking for some more information: This is my buildout.cfg: https://github.com/sontek/notaliens.com/blob/master/buildout.cfg 1. I had to declare a [pyramid] section that pulls in Pyramid with the zc.recipe.egg recipe so that it would drop the entry_point scripts in the generated bin/ folder. Is there a way for it to do this from dependencies declared in my install_requires in setup.py rather than being declared in the buildout? 2. The 2nd problem is after I run buildout, I'm not able to import paste.deploy: I see the egg in my sys.path: bin/py >>> import sys >>> sys.path '/home/sontek/code/test_notaliens/ notaliens.com/eggs/PasteDeploy-1.5.0-py2.7.egg', but if I try to import from it: >>> from paste.deploy import loadserver Traceback (most recent call last): File "", line 1, in ImportError: No module named deploy PasteDeploy is defined as a dependency and gets pulled in properly but for some reason the interpreter isn't seeing it. any ideas? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Sun Mar 31 08:20:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 31 Mar 2013 16:20:48 +1000 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 9:07 AM, Brett Cannon wrote: > I would suggest allowing multiple versions for compilation (when > specified). There should also be a check that people don't specify multiple > versions that can't exist on the same directory (e.g 2.6 and 2.7). > Otherwise the compileall module's CLI handles this and people can call it > with different interpreters as necessary. > > IOW I'm advocating KISS for what the tool does. Since making the zip file > is only really tricky bit it should only handle that case. Heck you can > make it part of the zipfile module if there is resistance to adding yet > another script that python installs (obviously will need a Cheeseshop > package for older versions). > > Otherwise I like everything else. > Agreed. However, Tres is right that this is a python-dev PEP rather than a distutils-sig one :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Sun Mar 31 13:40:25 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sun, 31 Mar 2013 13:40:25 +0200 Subject: [Distutils] Self-contained boostrap scripts [was: Re: A new, experimental packaging tool: distil] In-Reply-To: References: <1364477623.9277.YahooMailNeo@web171402.mail.ir2.yahoo.com> <2598f6be282d41578756120ffa1c4eef@BLUPR03MB035.namprd03.prod.outlook.com> <4113f43794c0475d8dbdf3d7381cd483@BLUPR03MB035.namprd03.prod.outlook.com> Message-ID: On 29 Mar, 2013, at 21:11, Nick Coghlan wrote: > On Fri, Mar 29, 2013 at 8:43 AM, Daniel Holth wrote: >> WinZip will ignore anything in the front of the file since the zip >> directory doesn't reference it. The #! 
shebang is for Unix, would >> point to the correct Python, and the +x flag would make it executable. >> The mini PEP is for the .pyz registration and for publicity. > > The two big reasons almost nobody knows about the executable zip files > and directories is we forgot to mention it in the original 2.6 What's > New (it's there now, but was added much later), and it was done in a > tracker issue [1] (with Guido participating) rather than as a PEP. > > A new PEP to: > > * register the .pyz and .pyzw extensions in the 3.4 Windows installer Also: * add support for .pyz and .pyzw support to Python Launcher on OSX > * ship a tool for creating an executable pyz or pyzw file from a > directory of pure-Python files (warning if any extension files are > noticed, and with the option of bytecode precompilation) Ronald From pombredanne at nexb.com Sun Mar 31 18:55:18 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Sun, 31 Mar 2013 18:55:18 +0200 Subject: [Distutils] Help with buildout In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 12:32 AM, John Anderson wrote: > I'm trying to use buildout to pull in 4 github repositories and combine > their wsgi applications together using paste's urlmap. A few things aren't > working the way I expect and so I'm looking for some more information: > This is my buildout.cfg: > https://github.com/sontek/notaliens.com/blob/master/buildout.cfg There are issues with your buildout (git urls, colander, etc) and it does not work as-is and errors out. This is probably why you could not get to import paste. This works and has no issue https://gist.github.com/pombredanne/5281269 -- Philippe Ombredanne +1 650 799 0949 | pombredanne at nexB.com DejaCode Enterprise at http://www.dejacode.com nexB Inc. 
at http://www.nexb.com From pombredanne at nexb.com Sun Mar 31 19:09:09 2013 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Sun, 31 Mar 2013 19:09:09 +0200 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 8:22 PM, Daniel Holth wrote: > Python ZIP Application Support - > https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing > PEP: 4XX > Title: Improving Python ZIP Application Support So I guess that this already-available-yet-hidden-or-little-known feature we had since Python 2.6 will be getting a little light. Let me ask a few silly questions: Does this means that any zip with a __main__.py is de-facto already executable? What about a wheel with a __main__ ? or an egg? Or a source archive where the __main__ calls setup.py install or buildout bootstrap? Is this something to promote? How is this overlapping with other packaging approaches? or possibly replacing them all? -- Philippe Ombredanne +1 650 799 0949 | pombredanne at nexB.com DejaCode Enterprise at http://www.dejacode.com nexB Inc. at http://www.nexb.com From aclark at aclark.net Sun Mar 31 19:34:07 2013 From: aclark at aclark.net (Alex Clark) Date: Sun, 31 Mar 2013 13:34:07 -0400 Subject: [Distutils] Help with buildout References: Message-ID: On 2013-03-30 23:32:28 +0000, John Anderson said: > I'm trying to use buildout to pull in 4 github repositories and combine > their wsgi applications together using paste's urlmap. ?A few things > aren't working the way I expect and so I'm looking for some more > information: > > This is my buildout.cfg: > > https://github.com/sontek/notaliens.com/blob/master/buildout.cfg > > 1. I had to declare a [pyramid] section that pulls in Pyramid with the > zc.recipe.egg recipe so that it would drop the entry_point scripts in > the generated bin/ folder. ? 
Is there a way for it to do this from > dependencies declared in my install_requires in setup.py rather than > being declared in the buildout? AFAIK, and assuming I understand your question, no. The recipes (or extensions) do all the work. > > 2. The 2nd problem is after I run buildout, I'm not able to import > paste.deploy: > > I see the egg in my sys.path: > > bin/py > >>> import sys > >>> sys.path > > > '/home/sontek/code/test_notaliens/notaliens.com/eggs/PasteDeploy-1.5.0-py2.7.egg', > > > > but if I try to import from it: > > >>> from paste.deploy import loadserver > Traceback (most recent call last): > ? File "", line 1, in > ImportError: No module named deploy > > PasteDeploy is defined as a dependency and gets pulled in properly but > for some reason the interpreter isn't seeing it. > > > any ideas? Remove .installed.cfg and try again, works for me with this buildout: https://gist.github.com/aclark4life/5281288 > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Alex Clark ? http://about.me/alex.clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Mar 31 21:09:52 2013 From: dholth at gmail.com (Daniel Holth) Date: Sun, 31 Mar 2013 15:09:52 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Mar 31, 2013 1:09 PM, "Philippe Ombredanne" wrote: > > On Sat, Mar 30, 2013 at 8:22 PM, Daniel Holth wrote: > > Python ZIP Application Support - > > https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing > > PEP: 4XX > > Title: Improving Python ZIP Application Support > So I guess that this already-available-yet-hidden-or-little-known > feature we had since Python 2.6 will be getting a little light. 
> > Let me ask a few silly questions: > > Does this means that any zip with a __main__.py is de-facto already executable? > What about a wheel with a __main__ ? or an egg? > Or a source archive where the __main__ calls setup.py install or > buildout bootstrap? > Is this something to promote? > How is this overlapping with other packaging approaches? or possibly > replacing them all? Yes regardless of the extension, yes, yes, not really, doesn't overlap much. A __main__ at the root of a wheel or egg would be a problem since it would wind up in site packages, shadowing other mains. Wheel itself uses a __main__ in a sub path of the archive. Python app.zip/subpath - that works too. Generally if you can have an installer you don't want the zip application strategy. It is best for medium complexity scripts or pure python applications where the user just wants to use the software and not build on top of it. We don't want packages to self-install except in special cases. It doesn't leave enough control to the end user. > -- > Philippe Ombredanne > > +1 650 799 0949 | pombredanne at nexB.com > DejaCode Enterprise at http://www.dejacode.com > nexB Inc. at http://www.nexb.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sun Mar 31 23:43:07 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 31 Mar 2013 17:43:07 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 1:09 PM, Philippe Ombredanne wrote: > On Sat, Mar 30, 2013 at 8:22 PM, Daniel Holth wrote: >> Python ZIP Application Support - >> https://docs.google.com/document/d/1MKXgPzhWD5wIUpoSQX7dxmqgTZVO6l9iZZis8dnri78/edit?usp=sharing >> PEP: 4XX >> Title: Improving Python ZIP Application Support > So I guess that this already-available-yet-hidden-or-little-known > feature we had since Python 2.6 will be getting a little light. 
> > Let me ask a few silly questions: > > Does this means that any zip with a __main__.py is de-facto already executable? Yes. > What about a wheel with a __main__ ? or an egg? Yes. > Or a source archive where the __main__ calls setup.py install or > buildout bootstrap? Yep. > Is this something to promote? Why not? > How is this overlapping with other packaging approaches? or possibly > replacing them all? It helps make things easier to install that are themselves installation tools. By the way, after some experimenting this morning, I have figured out how to make this work with Python 2.3 and up in a fairly simple way. It turns out that if you stick a .py file on the front of a zipfile, and you end it with a '#' and a few special characters, then when you run the zipfile with 2.6+, the __main__ is executed, but for 2.3-2.5, the .py header is executed. If this header does sys.path.insert(0, __file__), it then can import things from the zipfile. So, with an appropriate header, you can make a one-size-fits-all executable zipfile. In practice, there are a couple of wrinkles. The magic terminator string, at least for the Windows and Linux boxes I've tested so far, is '\n#\x1a\n\x00\n'. But there are more platforms and builds I *haven't* tested on than those I have. From pje at telecommunity.com Sun Mar 31 23:44:57 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 31 Mar 2013 17:44:57 -0400 Subject: [Distutils] PEP: Improving Python ZIP Application Support In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 5:43 PM, PJ Eby wrote: > In practice, there are a couple of wrinkles. The magic terminator > string, at least for the Windows and Linux boxes I've tested so far, > is '\n#\x1a\n\x00\n'. But there are more platforms and builds I > *haven't* tested on than those I have. Oops. 
That's supposed to be '\n\x00\n#\x00\x04\x1a' -- the other string was from an earlier series of tests, before I realized that an x04 in the zipfile itself was doing part of the work for me.
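[Editor's illustration, not part of the thread] The prepended-header trick discussed above works because a zip archive's central directory is located relative to the end of the file, so zip readers ignore anything stuck on the front — a #! line, or PJ Eby's .py stub. A minimal sketch of building a self-running archive this way, using only the stdlib; the helper name make_executable_zip and the interpreter line are illustrative assumptions:

```python
import io
import os
import stat
import subprocess
import sys
import tempfile
import zipfile


def make_executable_zip(target, main_source, interpreter="/usr/bin/env python"):
    """Write a runnable zip application: a shebang line followed by a zip
    archive whose root contains __main__.py (hypothetical helper)."""
    # Build the archive in memory; __main__.py at the archive root is
    # what "python app.zip" executes (the 2.6+ behaviour in this thread).
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("__main__.py", main_source)
    with open(target, "wb") as out:
        # Prepend the shebang; zip readers tolerate leading data because
        # the central directory is found from the end of the file.
        out.write(("#!%s\n" % interpreter).encode("ascii"))
        out.write(buf.getvalue())
    # +x so the shebang takes effect on Unix.
    os.chmod(target, os.stat(target).st_mode | stat.S_IEXEC)


if __name__ == "__main__":
    app = os.path.join(tempfile.mkdtemp(), "hello.pyz")
    make_executable_zip(app, "print('hello from a zip app')\n")
    # The interpreter runs the archive's __main__.py directly.
    out = subprocess.check_output([sys.executable, app])
    print(out.decode().strip())  # -> hello from a zip app
```

On Unix, ./hello.pyz then also works via the shebang, which is the combination (prefix data ignored by the zip directory, #! for the shell, +x flag) Daniel Holth describes earlier in the thread.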