From ncoghlan at gmail.com Wed Apr 1 01:03:48 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 1 Apr 2015 09:03:48 +1000 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 1 Apr 2015 00:53, "Paul Moore" wrote: > > It's not quite that simple, I know. But until we work out how to do > something useful with a sdist that we can't do with a dev checkout, > it's hard to justify treating sdists specially. I see it as more a matter of eventually migrating to a "devdir -> sdist -> wheel -> installed" build & deployment pipeline, where the tools used at each stage are only required to support the transition to the next stage rather than having to support the whole pipeline as setuptools does. (setup.py support would necessarily remain as a backwards compatibility requirement) The 3 transitions (devdir -> sdist, sdist -> wheel, wheel -> installed) may then not only be executed separately, but can also reduce the combinatorial complexity of what needs to be tested from a tooling perspective. (Whether there's a standard "devdir -> local dev build" command can be considered independently of the design of the distribution & deployment pipeline itself) Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Wed Apr 1 02:41:45 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 01 Apr 2015 11:41:45 +1100 Subject: [Distutils] it's happened - wheels without sdists (flit) References: <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: <857ftw3cja.fsf@benfinney.id.au> Nick Coghlan writes: > I see it as more a matter of eventually migrating to a "devdir -> sdist -> > wheel -> installed" build & deployment pipeline Yes. 
Increasingly often these days, that first distinction (dev working tree is not the same as source for distribution) gets ignored, and actively obliterated. Getting many GitHub or Bitbucket projects to distribute a source release that doesn't have a mess of build artifacts in it is an exercise in great frustration, largely because those platforms provide no convenient means to distribute tarballs that are *not* an undifferentiated bundling of everything in the repository. > where the tools used at each stage are only required to support the > transition to the next stage For the "dev tree -> sdist" transition, part of the battle is going to be educating developers that such a distinction actually exists. The common bad practice of including build artifacts in VCS obliterates the separate "package the source for distribution" step, which means that step is rarely reproducible or reliable. -- \ "Simplicity and elegance are unpopular because they require | `\ hard work and discipline to achieve and education to be | _o__) appreciated." --Edsger W. Dijkstra | Ben Finney From robertc at robertcollins.net Wed Apr 1 05:08:07 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 1 Apr 2015 16:08:07 +1300 Subject: [Distutils] d2to1 setup.cfg schema In-Reply-To: References: Message-ID: On 31 March 2015 at 12:16, Erik Bray wrote: > On Mon, Mar 30, 2015 at 7:07 PM, Robert Collins > wrote: >> On 31 March 2015 at 12:03, Erik Bray wrote: >> >>> I haven't followed this whole discussion (I started to in the >>> beginning, but haven't kept up), but I'm not really sure what's being >>> said here. d2to1 *does* support declaring setup-requires dependencies >>> in setup.cfg, and those dependencies should be loaded before any hook >>> scripts are used. Everything in d2to1 is done via various hook >>> points, and the hook functions can be either shipped with the package, >>> or come from external requirements installed via setup-requires.
It >>> works pretty well in most cases. >> >> Oh, it does!? I was looking through the source and couldn't figure it >> out. What key is looked for for setup-requires? Also does it define a >> schema for extra-requires? > > Yeah, sorry about that. That's one of those things that was never > actually supported in distutils2 by the time it went poof, and that I > added later. > > You can use: > > [metadata] > setup-requires-dist = foo > > So say, for example you have some package called "versionutils" that's > used to generate the package's version number (by reading it from > another file, tacking on VCS info, etc.) You can use: > > [metadata] > setup-requires-dist = versionutils > > [global] > setup-hooks = versionutils.version_hook > > or something to that effect. It will ensure versionutils is > importable (this uses easy_install just like the normal setup_requires > feature in setuptools; I would like to change this one day to instead > use something like Daniel's setup-requires [1] trick). > It will then, fairly early in the setup process, hand the package > metadata over to versionutils.version_hook, and let it insert a > version string. > > Erik > > [1] https://bitbucket.org/dholth/setup-requires I'll add setup-requires-dist as an alias for setup-requires in the declarative work I'm doing. Did distutils2 support declarative extras? -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Wed Apr 1 09:03:31 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 1 Apr 2015 08:03:31 +0100 Subject: [Distutils] it's happened - wheels without sdists (flit) In-Reply-To: References: <7C6CF20C-BDA3-47B0-8AC4-138563E7439A@stufft.io> <27147812-9816-4C67-BFD8-BF4C0A71DE6F@stufft.io> <3E7F843C-F25B-420D-BCEF-E518A3BD2F03@stufft.io> Message-ID: On 1 April 2015 at 00:03, Nick Coghlan wrote: >> It's not quite that simple, I know. 
But until we work out how to do >> something useful with a sdist that we can't do with a dev checkout, >> it's hard to justify treating sdists specially. > > I see it as more a matter of eventually migrating to a "devdir -> sdist -> > wheel -> installed" build & deployment pipeline, where the tools used at > each stage are only required to support the transition to the next stage > rather than having to support the whole pipeline as setuptools does. > (setup.py support would necessarily remain as a backwards compatibility > requirement) I wasn't particularly clear - my apologies. By "we" I meant "pip" in this context. Users expect to be able to do "pip install /dev/directory" and have it "just work". Internally, pip can do the dev->sdist->wheel->install dance, certainly, and that's the ultimate goal, I agree. What I was trying to say is that there's no obvious benefit to pip from splitting out the dev->sdist and sdist->wheel steps. We can just as easily run setup.py bdist_wheel in a dev directory directly. I know we can *also* just run "setup.py install" from a dev directory (we do now), but there are benefits in not doing so - splitting out the step of building a wheel lets us cache the wheels and speed up the process, because installing a wheel is quicker than building from source. So what I'm saying is that we need similar motivating benefits to justify splitting out the "build a sdist" step, and it's not yet clear to me what those would be. Paul From guettliml at thomas-guettler.de Wed Apr 1 16:14:12 2015 From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=) Date: Wed, 01 Apr 2015 16:14:12 +0200 Subject: [Distutils] Get dependencies of a package without full download Message-ID: <551BFD34.1000503@thomas-guettler.de> Hi, just out of curiosity: Is it possible to get the dependencies of a package without full download from pypi? If you want to build a graph of dependencies this would be nice, since it would reduce the network traffic a lot.
Regards, Thomas Güttler From barry at python.org Wed Apr 1 20:17:03 2015 From: barry at python.org (Barry Warsaw) Date: Wed, 1 Apr 2015 14:17:03 -0400 Subject: [Distutils] Get dependencies of a package without full download References: <551BFD34.1000503@thomas-guettler.de> Message-ID: <20150401141703.310cfc6c@anarchist.wooz.org> On Apr 01, 2015, at 04:14 PM, Thomas Güttler wrote: >Is it possible to get the dependencies of a package without full download >from pypi? It would be kind of nice if you could get the package's metadata (e.g. egg-info/entry_points.txt) out of its PyPI JSON blob: https://pypi.python.org/pypi/flufl.i18n/json Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Wed Apr 1 20:20:37 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 1 Apr 2015 14:20:37 -0400 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: <20150401141703.310cfc6c@anarchist.wooz.org> References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> Message-ID: <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> The answer to this is technically yes, but realistically no. If you build Wheels and you upload a Wheel *first* and you use twine to do so, then you will register the dependency information with PyPI and that will be available in the JSON API. If you upload a sdist first (or you type setup.py register) or you don't use twine then they will not get registered. Look for "requires_dist" in https://pypi.python.org/pypi/twine/json If anyone has specific things they'd like to see in an API I urge you to open issues on the Warehouse issue tracker (https://github.com/pypa/warehouse) so that we can make sure we consider them for inclusion into Warehouse.
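As a rough sketch, reading that field out of the JSON API looks something like the following (the helper name and the canned payload are invented for illustration; a real lookup would fetch and parse the full response, and requires_dist comes back null for projects that never registered wheel metadata):

```python
import json

def requires_dist(pypi_json):
    """Pull the requires_dist list out of a PyPI JSON API response.

    Returns an empty list when the field is missing or null, which is
    what you get for projects whose dependency info was never registered.
    """
    return pypi_json.get("info", {}).get("requires_dist") or []

# A real lookup would GET https://pypi.python.org/pypi/<project>/json;
# this sample payload is invented for illustration only.
sample = json.loads("""
{"info": {"name": "example", "version": "1.0",
          "requires_dist": ["requests (>=2.5.0)", "pkginfo (>=1.0)"]}}
""")
print(requires_dist(sample))   # -> ['requests (>=2.5.0)', 'pkginfo (>=1.0)']
print(requires_dist({"info": {"name": "sdist-only", "requires_dist": None}}))  # -> []
```

The "or []" is doing the real work there: it papers over both the missing-field and the null cases, so callers building a dependency graph can always iterate.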
> On Apr 1, 2015, at 2:17 PM, Barry Warsaw wrote: > > On Apr 01, 2015, at 04:14 PM, Thomas Güttler wrote: > >> Is it possible to get the dependencies of a package without full download >> from pypi? > > It would be kind of nice if you could get the package's metadata (e.g. > egg-info/entry_points.txt) out of its PyPI JSON blob: > > https://pypi.python.org/pypi/flufl.i18n/json > > Cheers, > -Barry > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed Apr 1 22:21:52 2015 From: dholth at gmail.com (Daniel Holth) Date: Wed, 1 Apr 2015 16:21:52 -0400 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> Message-ID: Vinay Sajip was maintaining metadata as described here, I'm sure there are functions in distil to help fetch it. http://distil.readthedocs.org/en/latest/packaging.html#packaging-metadata. The most severe problem with this data is of course that it is not always correct, because the environment he evaluated setup.py in to get the data may be different from yours. On Wed, Apr 1, 2015 at 2:20 PM, Donald Stufft wrote: > The answer to this is technically yes, but realistically no. > > If you build Wheels and you upload a Wheel *first* and you use twine > to do so, then you will register the dependency information with > PyPI and that will be available in the JSON API.
If you upload a sdist > first (or you type setup.py register) or you don't use twine then > they will not get registered. > > Look for "requires_dist" in https://pypi.python.org/pypi/twine/json > > If anyone has specific things they'd like to see in an API I urge you > to open issues on the Warehouse issue tracker > (https://github.com/pypa/warehouse) so that we can make sure we > consider them for inclusion into Warehouse. > > >> On Apr 1, 2015, at 2:17 PM, Barry Warsaw wrote: >> >> On Apr 01, 2015, at 04:14 PM, Thomas Güttler wrote: >> >>> Is it possible to get the dependencies of a package without full download >>> from pypi? >> >> It would be kind of nice if you could get the package's metadata (e.g. >> egg-info/entry_points.txt) out of its PyPI JSON blob: >> >> https://pypi.python.org/pypi/flufl.i18n/json >> >> Cheers, >> -Barry >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From ncoghlan at gmail.com Thu Apr 2 10:29:18 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 2 Apr 2015 18:29:18 +1000 Subject: [Distutils] PSF blog posts regarding membership and other matters Message-ID: Hi folks, A conversation with Donald today made me realise I should explicitly pass along links to some recent PSF blog posts regarding PSF membership and other matters, as they're likely to be relevant to distutils-sig members that may not be paying attention to the PSF specific information distribution channels.
The most immediately relevant one is the instructions for registering as a Contributing Member: http://pyfound.blogspot.com.au/2015/02/enroll-as-psf-voting-member.html For folks that aren't already aware, the Python Software Foundation switched to an open membership model last year, where anyone is free to register as a Basic Member on python.org, and active contributors to the Python community are invited to participate more directly in the operation of the organisation (which, amongst other things, runs pypi.python.org). Many of the folks here will qualify for self-certification as PSF Contributing Members if you choose to do so :) Historically, that higher level of engagement has consisted primarily of electing the Board of Directors each year and voting on whether or not to approve new Sponsor Members, but we're looking to change that as well, firstly by proposing the adoption a more open strategic decision making model akin to the PEP process (Let's Make Decisions Together: http://pyfound.blogspot.com.au/2015/03/personal-opinion-i-think-its-always.html), and secondly by revamping the old nominated membership system into a new public recognition program for folks that have made significant contributions to the Python community(The PSF Fellow Recognition Program: http://pyfound.blogspot.com.au/2015/03/for-shes-jolly-good-psf-fellow.html). Generally speaking, in the absence of legal or community Code of Conduct concerns, the PSF will keep its nose out of distutils-sig's business, but in this case, I'm issuing an invitation for distutils-sig members that choose to do so to come participate more. If there are any other groups you think might benefit from a more explicit invitation, please feel free to forward this message on. If you're curious about what the PSF does in general (aside from hosting pypi.python.org and the various other *.python.org services), I also suggest checking out some of the other posts on the blog. Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Thu Apr 2 11:48:50 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 2 Apr 2015 09:48:50 +0000 (UTC) Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: References: Message-ID: <22812008.5042386.1427968130414.JavaMail.yahoo@mail.yahoo.com> From: Daniel Holth > Vinay Sajip was maintaining metadata as described here, I'm sure there are functions > in distil to help fetch it. Yes, though there is no need for any special API to access it - the metadata is in JSON files served statically. You just make a standard HTTP request, using the client library of your choice, to get a specific URL. The files are under http://www.red-dove.com/pypi/projects/ And you can just use a browser to browse the metadata from this starting point. > The most severe problem with this data is of course that it is not always correct because > the environment he evaluated setup.py in to get the data may be different from yours. Indeed, which is why declarative metadata is so important. Down with setup.py! Regards, Vinay Sajip From guettliml at thomas-guettler.de Thu Apr 2 12:27:18 2015 From: guettliml at thomas-guettler.de (=?windows-1252?Q?Thomas_G=FCttler?=) Date: Thu, 02 Apr 2015 12:27:18 +0200 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> Message-ID: <551D1986.4050400@thomas-guettler.de> On 01.04.2015 at 20:20, Donald Stufft wrote: > The answer to this is technically yes, but realistically no. > > If you build Wheels and you upload a Wheel *first* and you use twine > to do so, then you will register the dependency information with > PyPI and that will be available in the JSON API.
If you upload a sdist > first (or you type setup.py register) or you don't use twine then > they will not get registered. > > Look for "requires_dist" in https://pypi.python.org/pypi/twine/json > > If anyone has specific things they'd like to see in an API I urge you > to open issues on the Warehouse issue tracker > (https://github.com/pypa/warehouse) so that we can make sure we > consider them for inclusion into Warehouse. Just my feelings, without technical/logical background: I hate the "ORs" and "IFs" in the python packaging world. Can't it be done "condition less"? I want a stupid straightforward step by step way. Regards, Thomas Güttler From ncoghlan at gmail.com Thu Apr 2 13:02:48 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 2 Apr 2015 21:02:48 +1000 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: <551D1986.4050400@thomas-guettler.de> References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> <551D1986.4050400@thomas-guettler.de> Message-ID: On 2 April 2015 at 20:27, Thomas Güttler wrote: > I hate the "ORs" and "IFs" in the python packaging world. > > Can't it be done "condition less"? Unfortunately, that's currently only possible for programming languages tailored primarily for a specific usage domain and with relatively young packaging systems that got to benefit (or not) from everyone else's software distribution experience without backwards compatibility concerns. > I want a stupid straightforward step by step way. Aye, so do we, the 17 years of history since distutils was first released just makes life a little interesting trying to get there :) Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Thu Apr 2 13:05:46 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 2 Apr 2015 07:05:46 -0400 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> <551D1986.4050400@thomas-guettler.de> Message-ID: We should do a mode where dependencies come from setup.cfg statically and everything else (setup.py build script) works the same. On Apr 2, 2015 7:02 AM, "Nick Coghlan" wrote: > On 2 April 2015 at 20:27, Thomas Güttler > wrote: > > I hate the "ORs" and "IFs" in the python packaging world. > > > > Can't it be done "condition less"? > > Unfortunately, that's currently only possible for programming > languages tailored primarily for a specific usage domain and with > relatively young packaging systems that got to benefit (or not) from > everyone else's software distribution experience without backwards > compatibility concerns. > > > I want a stupid straightforward step by step way. > > Aye, so do we, the 17 years of history since distutils was first > released just makes life a little interesting trying to get there :) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Apr 2 15:52:00 2015 From: cournape at gmail.com (David Cournapeau) Date: Thu, 2 Apr 2015 14:52:00 +0100 Subject: [Distutils] [pycon] Packaging-related discussions Message-ID: Hi there, I would like to know what are the events/discussions related to packaging happening this year at PyCon ?
My employer is sponsoring my trip to the conference this year, and I would like to make sure I am not missing an important event. I will also be there for most of the sprints, cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Thu Apr 2 23:02:50 2015 From: richard at python.org (Richard Jones) Date: Thu, 02 Apr 2015 21:02:50 +0000 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: I can't speak for any plans others active in the PyPA might have, but I'll be using the sprint time to work on Warehouse and hopefully help others work on it also. I'm almost certain that my hallway track time will involve many packaging-related discussions, as it always does :) Richard On Fri, 3 Apr 2015 at 00:52 David Cournapeau wrote: > Hi there, > > I would like to know what are the events/discussions related to packaging > happening this year at PyCon ? > > My employer is sponsoring my trip to the conference this year, and I would > like to make sure I am not missing an important event. I will also be there > for most of the sprints, > > cheers, > David > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Apr 2 23:42:41 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Apr 2015 07:42:41 +1000 Subject: [Distutils] Get dependencies of a package without full download In-Reply-To: References: <551BFD34.1000503@thomas-guettler.de> <20150401141703.310cfc6c@anarchist.wooz.org> <41D3DCB2-76B7-4D78-96DB-C5B578D0595E@stufft.io> <551D1986.4050400@thomas-guettler.de> Message-ID: On 2 Apr 2015 21:05, "Daniel Holth" wrote: > > We should do a mode where dependencies come from setup.cfg statically and everything else (setup.py build script) works the same. 
I believe that's what Robert Collins pip PR is aimed at providing. It's still quite limited, as the vast majority of projects aren't going to switch to a declarative devdir, hence PEP 426's dependence on generated declarative metadata in the sdist (that way most folks will just be upgrading to a newer setuptools rather than having to switch to a new build toolchain and change their own development practices) Cheers, Nick. > > On Apr 2, 2015 7:02 AM, "Nick Coghlan" wrote: >> >> On 2 April 2015 at 20:27, Thomas G?ttler wrote: >> > I hate the "ORs" and "IFs" in the python packaging world. >> > >> > Can't it be done "condition less"? >> >> Unfortunately, that's currently only possible for programming >> languages tailored primarily for a specific usage domain and with >> relatively young packaging systems that got to benefit (or not) from >> everyone else's software distribution experience without backwards >> compatibility concerns. >> >> > I want a stupid straight forward step by step way. >> >> Aye, so do we, the 17 years of history since distutils was first >> released just makes life a little interesting trying to get there :) >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Apr 2 23:46:31 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Apr 2015 07:46:31 +1000 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: On 3 Apr 2015 07:03, "Richard Jones" wrote: > > I can't speak for any plans others active in the PyPA might have, but I'll be using the sprint time to work on Warehouse and hopefully help others work on it also. 
> > I'm almost certain that my hallway track time will involve many packaging-related discussions, as it always does :) A Packaging BoF would be good, it just needs a volunteer to arrange a room with the open space organisers. Some time Saturday evening, perhaps? Cheers, Nick. > > > Richard > > On Fri, 3 Apr 2015 at 00:52 David Cournapeau wrote: >> >> Hi there, >> >> I would like to know what are the events/discussions related to packaging happening this year at PyCon ? >> >> My employer is sponsoring my trip to the conference this year, and I would like to make sure I am not missing an important event. I will also be there for most of the sprints, >> >> cheers, >> David >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Fri Apr 3 00:09:32 2015 From: richard at python.org (Richard Jones) Date: Thu, 02 Apr 2015 22:09:32 +0000 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: Could the BoF be Friday instead please? Saturday is International Tabletop Day, and there's a bunch of us will be celebrating that :) On Fri, 3 Apr 2015 at 08:46 Nick Coghlan wrote: > > On 3 Apr 2015 07:03, "Richard Jones" wrote: > > > > I can't speak for any plans others active in the PyPA might have, but > I'll be using the sprint time to work on Warehouse and hopefully help > others work on it also. > > > > I'm almost certain that my hallway track time will involve many > packaging-related discussions, as it always does :) > > A Packaging BoF would be good, it just needs a volunteer to arrange a room > with the open space organisers. 
Some time Saturday evening, perhaps? > > Cheers, > Nick. > > > > > > > Richard > > > > On Fri, 3 Apr 2015 at 00:52 David Cournapeau wrote: > >> > >> Hi there, > >> > >> I would like to know what are the events/discussions related to > packaging happening this year at PyCon ? > >> > >> My employer is sponsoring my trip to the conference this year, and I > would like to make sure I am not missing an important event. I will also be > there for most of the sprints, > >> > >> cheers, > >> David > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Apr 3 10:44:17 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 3 Apr 2015 09:44:17 +0100 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: Both Friday and Saturday would work for me. What does it mean concretely to arrange a room ? Just finding out a room and communicate the location, or are there other responsibilities ? David On Thu, Apr 2, 2015 at 10:46 PM, Nick Coghlan wrote: > > On 3 Apr 2015 07:03, "Richard Jones" wrote: > > > > I can't speak for any plans others active in the PyPA might have, but > I'll be using the sprint time to work on Warehouse and hopefully help > others work on it also. > > > > I'm almost certain that my hallway track time will involve many > packaging-related discussions, as it always does :) > > A Packaging BoF would be good, it just needs a volunteer to arrange a room > with the open space organisers. Some time Saturday evening, perhaps? > > Cheers, > Nick. 
> > > > > > > Richard > > > > On Fri, 3 Apr 2015 at 00:52 David Cournapeau wrote: > >> > >> Hi there, > >> > >> I would like to know what are the events/discussions related to > packaging happening this year at PyCon ? > >> > >> My employer is sponsoring my trip to the conference this year, and I > would like to make sure I am not missing an important event. I will also be > there for most of the sprints, > >> > >> cheers, > >> David > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Fri Apr 3 11:31:05 2015 From: richard at python.org (Richard Jones) Date: Fri, 03 Apr 2015 09:31:05 +0000 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: There should be BoF boards up at the venue and we'll grab a slot on one of those at the earliest opportunity, and post the time here. If we're lucky the boards will be up during the tutorial time (both Nick and myself will be around then). Saturday is also the PyLadies auction. On Fri, 3 Apr 2015 at 19:44 David Cournapeau wrote: > Both Friday and Saturday would work for me. > > What does it mean concretely to arrange a room ? Just finding out a room > and communicate the location, or are there other responsibilities ? > > David > > On Thu, Apr 2, 2015 at 10:46 PM, Nick Coghlan wrote: > >> >> On 3 Apr 2015 07:03, "Richard Jones" wrote: >> > >> > I can't speak for any plans others active in the PyPA might have, but >> I'll be using the sprint time to work on Warehouse and hopefully help >> others work on it also. 
>> > >> > I'm almost certain that my hallway track time will involve many >> packaging-related discussions, as it always does :) >> >> A Packaging BoF would be good, it just needs a volunteer to arrange a >> room with the open space organisers. Some time Saturday evening, perhaps? >> >> Cheers, >> Nick. >> >> > >> > >> > Richard >> > >> > On Fri, 3 Apr 2015 at 00:52 David Cournapeau >> wrote: >> >> >> >> Hi there, >> >> >> >> I would like to know what are the events/discussions related to >> packaging happening this year at PyCon ? >> >> >> >> My employer is sponsoring my trip to the conference this year, and I >> would like to make sure I am not missing an important event. I will also be >> there for most of the sprints, >> >> >> >> cheers, >> >> David >> >> _______________________________________________ >> >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> > >> > >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Apr 3 12:16:55 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Apr 2015 20:16:55 +1000 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: On 3 April 2015 at 08:09, Richard Jones wrote: > Could the BoF be Friday instead please? Saturday is International Tabletop > Day, and there's a bunch of us will be celebrating that :) Sure, Friday works for me, too. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From contact at ionelmc.ro Fri Apr 3 18:40:42 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Fri, 3 Apr 2015 19:40:42 +0300
Subject: [Distutils] Question about wheels and python-dbg 2.7
Message-ID: 

Hey,

It appears that using a debug build of python 2.7 doesn't mark the wheels
built using it in any special way. Pip would install them on a regular
python 2.7 (if they were on a package index) and then later on imports for
C extensions would fail (not sure why, though I suspect the different
"_d.so" naming scheme).

Is there something more complicated preventing this from being fixed, or
has there just not been enough interest? Why does the import fail anyway?

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dholth at gmail.com Fri Apr 3 19:26:43 2015
From: dholth at gmail.com (Daniel Holth)
Date: Fri, 3 Apr 2015 13:26:43 -0400
Subject: [Distutils] Question about wheels and python-dbg 2.7
In-Reply-To: 
References: 
Message-ID: 

This is a longstanding issue in that the ABI tag (the middle tag in
py27-none-linux_x86_64) is not implemented for Python 2.7 in bdist_wheel
or in pip. The design calls for a "cp27d" tag for your wheel; instead it
is always none. It would be pretty easy to fix by writing some Python 2
code that would detect debug / m / unicode and adding that to both the
tagging scheme in bdist_wheel, and the supported tags in pip.

On Fri, Apr 3, 2015 at 12:40 PM, Ionel Cristian Mărieș wrote:
> Hey,
>
> It appears that using a debug build of python 2.7 doesn't mark the wheels
> built using it in any special way. Pip would install them on a regular
> python 2.7 (if they were on a package index) and then later on imports
> for C extensions would fail (not sure why, though I suspect the different
> "_d.so" naming scheme).
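The flag detection Daniel describes could be sketched roughly as follows. This is only an illustration under stated assumptions: `abi_flags` and `abi_tag` are hypothetical helper names, not actual bdist_wheel or pip code, and the flag letters follow the CPython 2.7 convention (d = debug, m = pymalloc, u = wide unicode).

```python
import sys
import sysconfig

def abi_flags():
    """Collect CPython-style ABI flag letters from the build configuration."""
    flags = ""
    if sysconfig.get_config_var("Py_DEBUG"):
        flags += "d"  # debug build (the "_d.so" naming mentioned above)
    if sysconfig.get_config_var("WITH_PYMALLOC"):
        flags += "m"  # pymalloc allocator
    if sys.version_info[0] == 2 and sys.maxunicode > 0xFFFF:
        flags += "u"  # wide-unicode build (Python 2 only)
    return flags

def abi_tag():
    # e.g. "cp27dmu" for a wide-unicode debug build of CPython 2.7
    return "cp{0}{1}{2}".format(sys.version_info[0], sys.version_info[1],
                                abi_flags())
```

With something like this wired into bdist_wheel's tagging and pip's supported-tag list, a wheel built on a debug interpreter would carry a distinct ABI tag rather than "none", so a regular interpreter would no longer pick it up.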
> Is there something more complicated preventing this from being fixed, or
> has there just not been enough interest? Why does the import fail anyway?
>
> Thanks,
> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro

From randy at thesyrings.us Tue Apr 7 16:14:16 2015
From: randy at thesyrings.us (Randy Syring)
Date: Tue, 07 Apr 2015 10:14:16 -0400
Subject: [Distutils] Method for calculating virtualenv site-packages directory
Message-ID: <5523E638.8000200@thesyrings.us>

If I'm running a python script in a virtualenv, what is the best method
for calculating the path of that virtualenv's site-packages directory?

I'm only running this command for development environments and as a
shortcut for developers, so something that works most of the time is fine.
This code isn't going to end up in a production system anywhere.

*Randy Syring*
Husband | Father | Redeemed Sinner
/"For what does it profit a man to gain the whole world and forfeit his
soul?" (Mark 8:36 ESV)/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at ionelmc.ro Tue Apr 7 16:18:21 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Tue, 7 Apr 2015 17:18:21 +0300
Subject: [Distutils] Method for calculating virtualenv site-packages directory
In-Reply-To: <5523E638.8000200@thesyrings.us>
References: <5523E638.8000200@thesyrings.us>
Message-ID: 

As far as I know, distutils.sysconfig.get_python_lib() is the standard way
of doing it (and that's what tools use).

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Tue, Apr 7, 2015 at 5:14 PM, Randy Syring wrote:
> If I'm running a python script in a virtualenv, what is the best method
> for calculating the path of that virtualenv's site-packages directory?
>
> I'm only running this command for development environments and as a
> shortcut for developers, so something that works most of the time is fine.
> This code isn't going to end up in a production system anywhere.
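Randy's question can be answered in a couple of lines. A sketch using the stdlib `sysconfig` module: inside a virtualenv, `sys.prefix` points at the environment root, so the resolved path lands in the virtualenv's own site-packages (the `distutils.sysconfig.get_python_lib()` call suggested above returns the equivalent path on interpreters that still ship distutils).

```python
# Sketch: resolve the active interpreter's site-packages directory.
# Inside a virtualenv, sys.prefix is rewritten to the environment root,
# so the "purelib" install path resolves inside the virtualenv.
import sys
import sysconfig

site_packages = sysconfig.get_path("purelib")
print(sys.prefix)
print(site_packages)
```

For the "good enough for dev environments" use case described above, this is the whole script.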
>
> *Randy Syring*
> Husband | Father | Redeemed Sinner
>
> *"For what does it profit a man to gain the whole world and forfeit his
> soul?" (Mark 8:36 ESV)*
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From randy at thesyrings.us Wed Apr 8 00:28:00 2015
From: randy at thesyrings.us (Randy Syring)
Date: Tue, 07 Apr 2015 18:28:00 -0400
Subject: [Distutils] Method for calculating .so file names
Message-ID: <552459F0.4020001@thesyrings.us>

As a follow-up to my earlier question, is there a way to programmatically
determine the naming scheme of installed shared object files? I've seen a
couple different naming formats:

> # python 2.7.6 - default ubuntu version
> $ ls /usr/lib/python2.7/dist-packages/_dbus_*
> /usr/lib/python2.7/dist-packages/_dbus_bindings.so
> /usr/lib/python2.7/dist-packages/_dbus_glib_bindings.so
>
> # python 3.4.0 - default ubuntu version
> $ ls /usr/lib/python3/dist-packages/_dbus_*
> /usr/lib/python3/dist-packages/_dbus_bindings.cpython-34m-x86_64-linux-gnu.so
> /usr/lib/python3/dist-packages/_dbus_glib_bindings.cpython-34m-x86_64-linux-gnu.so
>
> # python 3.4.3 - manual install on ubuntu
> $ ls /opt/python34/lib/python3.4/site-packages/_dbus_*
> /opt/python34/lib/python3.4/site-packages/_dbus_bindings.so
> /opt/python34/lib/python3.4/site-packages/_dbus_glib_bindings.so

I can piece together the file structure for 3.4.0 from information in
sysconfig, but I'm wondering if there is a better way.

Thanks.

*Randy Syring*
Husband | Father | Redeemed Sinner
/"For what does it profit a man to gain the whole world and forfeit his
soul?" (Mark 8:36 ESV)/
-------------- next part --------------
An HTML attachment was scrubbed...
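The differing suffixes in the listing above come straight from the interpreter's build configuration and can be queried directly rather than pieced together by hand. A sketch (on Python 3 the config key is "EXT_SUFFIX"; Python 2 only exposed the legacy "SO" key):

```python
# Sketch: ask the interpreter for its compiled-extension filename suffix.
# A Python 3.4 Linux build reports e.g. ".cpython-34m-x86_64-linux-gnu.so";
# other builds may report just ".so" (matching the plain names above),
# and Windows builds report a ".pyd" variant.
import sysconfig

suffix = sysconfig.get_config_var("EXT_SUFFIX") or sysconfig.get_config_var("SO")
print(suffix)
print("_dbus_bindings" + suffix)  # full module filename for this interpreter
```

This only reports the suffix the *running* interpreter uses; extensions installed by a differently built Python (as in the mixed listing above) can still carry a different suffix.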
URL: 

From contact at ionelmc.ro Wed Apr 8 11:11:22 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 8 Apr 2015 12:11:22 +0300
Subject: [Distutils] Method for calculating .so file names
In-Reply-To: <552459F0.4020001@thesyrings.us>
References: <552459F0.4020001@thesyrings.us>
Message-ID: 

distutils.sysconfig.get_config_vars("SO") or
distutils.sysconfig.get_config_vars("EXT_SUFFIX") (it seems the latter is
only available for py3, and the py3 source says "SO" is deprecated)

Mind you, this is easy to find out if you build an extension and you use
hunter, that's how I found it ;-)

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Apr 8, 2015 at 1:28 AM, Randy Syring wrote:
> As a follow-up to my earlier question, is there a way to programmatically
> determine the naming scheme of installed shared object files? I've seen a
> couple different naming formats:
>
> # python 2.7.6 - default ubuntu version
> $ ls /usr/lib/python2.7/dist-packages/_dbus_*
> /usr/lib/python2.7/dist-packages/_dbus_bindings.so
> /usr/lib/python2.7/dist-packages/_dbus_glib_bindings.so
>
> # python 3.4.0 - default ubuntu version
> $ ls /usr/lib/python3/dist-packages/_dbus_*
> /usr/lib/python3/dist-packages/_dbus_bindings.cpython-34m-x86_64-linux-gnu.so
> /usr/lib/python3/dist-packages/_dbus_glib_bindings.cpython-34m-x86_64-linux-gnu.so
>
> # python 3.4.3 - manual install on ubuntu
> $ ls /opt/python34/lib/python3.4/site-packages/_dbus_*
> /opt/python34/lib/python3.4/site-packages/_dbus_bindings.so
> /opt/python34/lib/python3.4/site-packages/_dbus_glib_bindings.so
>
> I can piece together the file structure for 3.4.0 from information in
> sysconfig, but I'm wondering if there is a better way.
>
> Thanks.
>
> *Randy Syring*
> Husband | Father | Redeemed Sinner
>
> *"For what does it profit a man to gain the whole world and forfeit his
> soul?"
(Mark 8:36 ESV)*
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com Wed Apr 8 11:28:23 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 8 Apr 2015 10:28:23 +0100
Subject: [Distutils] Method for calculating .so file names
In-Reply-To: 
References: <552459F0.4020001@thesyrings.us>
Message-ID: 

On 8 April 2015 at 10:11, Ionel Cristian Mărieș wrote:
> distutils.sysconfig.get_config_vars("SO") or
> distutils.sysconfig.get_config_vars("EXT_SUFFIX")

Note that this should just be sysconfig (the above is using the copy of
sysconfig imported (and accidentally exported) via distutils)

Paul

From Edward_Leung at invesco.com Wed Apr 8 15:29:13 2015
From: Edward_Leung at invesco.com (Leung, Edward)
Date: Wed, 8 Apr 2015 13:29:13 +0000
Subject: [Distutils] installing package - problems
Message-ID: 

Dear Sir/Madam,

I was trying to install a python package pysftp using the following at
command prompt: python -m pip install pysftp and I got the following error.
Could you tell me what is not working?? Thanks.
copying lib\Crypto\Random\Fortuna\FortunaGenerator.py -> build\lib.win-amd64 -3.4\Crypto\Random\Fortuna copying lib\Crypto\Random\Fortuna\SHAd256.py -> build\lib.win-amd64-3.4\Cryp to\Random\Fortuna copying lib\Crypto\Random\Fortuna\__init__.py -> build\lib.win-amd64-3.4\Cry pto\Random\Fortuna creating build\lib.win-amd64-3.4\Crypto\Random\OSRNG copying lib\Crypto\Random\OSRNG\fallback.py -> build\lib.win-amd64-3.4\Crypt o\Random\OSRNG copying lib\Crypto\Random\OSRNG\nt.py -> build\lib.win-amd64-3.4\Crypto\Rand om\OSRNG copying lib\Crypto\Random\OSRNG\posix.py -> build\lib.win-amd64-3.4\Crypto\R andom\OSRNG copying lib\Crypto\Random\OSRNG\rng_base.py -> build\lib.win-amd64-3.4\Crypt o\Random\OSRNG copying lib\Crypto\Random\OSRNG\__init__.py -> build\lib.win-amd64-3.4\Crypt o\Random\OSRNG creating build\lib.win-amd64-3.4\Crypto\SelfTest copying lib\Crypto\SelfTest\st_common.py -> build\lib.win-amd64-3.4\Crypto\S elfTest copying lib\Crypto\SelfTest\__init__.py -> build\lib.win-amd64-3.4\Crypto\Se lfTest creating build\lib.win-amd64-3.4\Crypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\common.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_AES.py -> build\lib.win-amd64-3.4\Cr ypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_ARC2.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_ARC4.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_Blowfish.py -> build\lib.win-amd64-3 .4\Crypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_CAST.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_DES.py -> build\lib.win-amd64-3.4\Cr ypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_DES3.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_pkcs1_15.py -> build\lib.win-amd64-3 .4\Crypto\SelfTest\Cipher copying 
lib\Crypto\SelfTest\Cipher\test_pkcs1_oaep.py -> build\lib.win-amd64 -3.4\Crypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\test_XOR.py -> build\lib.win-amd64-3.4\Cr ypto\SelfTest\Cipher copying lib\Crypto\SelfTest\Cipher\__init__.py -> build\lib.win-amd64-3.4\Cr ypto\SelfTest\Cipher creating build\lib.win-amd64-3.4\Crypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\common.py -> build\lib.win-amd64-3.4\Crypto \SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_HMAC.py -> build\lib.win-amd64-3.4\Cry pto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_MD2.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_MD4.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_MD5.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_RIPEMD.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_SHA.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_SHA224.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_SHA256.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_SHA384.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\test_SHA512.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Hash copying lib\Crypto\SelfTest\Hash\__init__.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Hash creating build\lib.win-amd64-3.4\Crypto\SelfTest\Protocol copying lib\Crypto\SelfTest\Protocol\test_AllOrNothing.py -> build\lib.win-a md64-3.4\Crypto\SelfTest\Protocol copying lib\Crypto\SelfTest\Protocol\test_chaffing.py -> build\lib.win-amd64 -3.4\Crypto\SelfTest\Protocol copying lib\Crypto\SelfTest\Protocol\test_KDF.py -> build\lib.win-amd64-3.4\ Crypto\SelfTest\Protocol copying lib\Crypto\SelfTest\Protocol\test_rfc1751.py -> build\lib.win-amd64- 3.4\Crypto\SelfTest\Protocol copying 
lib\Crypto\SelfTest\Protocol\__init__.py -> build\lib.win-amd64-3.4\ Crypto\SelfTest\Protocol creating build\lib.win-amd64-3.4\Crypto\SelfTest\PublicKey copying lib\Crypto\SelfTest\PublicKey\test_DSA.py -> build\lib.win-amd64-3.4 \Crypto\SelfTest\PublicKey copying lib\Crypto\SelfTest\PublicKey\test_ElGamal.py -> build\lib.win-amd64 -3.4\Crypto\SelfTest\PublicKey copying lib\Crypto\SelfTest\PublicKey\test_importKey.py -> build\lib.win-amd 64-3.4\Crypto\SelfTest\PublicKey copying lib\Crypto\SelfTest\PublicKey\test_RSA.py -> build\lib.win-amd64-3.4 \Crypto\SelfTest\PublicKey copying lib\Crypto\SelfTest\PublicKey\__init__.py -> build\lib.win-amd64-3.4 \Crypto\SelfTest\PublicKey creating build\lib.win-amd64-3.4\Crypto\SelfTest\Random copying lib\Crypto\SelfTest\Random\test_random.py -> build\lib.win-amd64-3.4 \Crypto\SelfTest\Random copying lib\Crypto\SelfTest\Random\test_rpoolcompat.py -> build\lib.win-amd6 4-3.4\Crypto\SelfTest\Random copying lib\Crypto\SelfTest\Random\test__UserFriendlyRNG.py -> build\lib.win -amd64-3.4\Crypto\SelfTest\Random copying lib\Crypto\SelfTest\Random\__init__.py -> build\lib.win-amd64-3.4\Cr ypto\SelfTest\Random creating build\lib.win-amd64-3.4\Crypto\SelfTest\Random\Fortuna copying lib\Crypto\SelfTest\Random\Fortuna\test_FortunaAccumulator.py -> bui ld\lib.win-amd64-3.4\Crypto\SelfTest\Random\Fortuna copying lib\Crypto\SelfTest\Random\Fortuna\test_FortunaGenerator.py -> build \lib.win-amd64-3.4\Crypto\SelfTest\Random\Fortuna copying lib\Crypto\SelfTest\Random\Fortuna\test_SHAd256.py -> build\lib.win- amd64-3.4\Crypto\SelfTest\Random\Fortuna copying lib\Crypto\SelfTest\Random\Fortuna\__init__.py -> build\lib.win-amd6 4-3.4\Crypto\SelfTest\Random\Fortuna creating build\lib.win-amd64-3.4\Crypto\SelfTest\Random\OSRNG copying lib\Crypto\SelfTest\Random\OSRNG\test_fallback.py -> build\lib.win-a md64-3.4\Crypto\SelfTest\Random\OSRNG copying lib\Crypto\SelfTest\Random\OSRNG\test_generic.py -> build\lib.win-am d64-3.4\Crypto\SelfTest\Random\OSRNG 
copying lib\Crypto\SelfTest\Random\OSRNG\test_nt.py -> build\lib.win-amd64-3 .4\Crypto\SelfTest\Random\OSRNG copying lib\Crypto\SelfTest\Random\OSRNG\test_posix.py -> build\lib.win-amd6 4-3.4\Crypto\SelfTest\Random\OSRNG copying lib\Crypto\SelfTest\Random\OSRNG\test_winrandom.py -> build\lib.win- amd64-3.4\Crypto\SelfTest\Random\OSRNG copying lib\Crypto\SelfTest\Random\OSRNG\__init__.py -> build\lib.win-amd64- 3.4\Crypto\SelfTest\Random\OSRNG creating build\lib.win-amd64-3.4\Crypto\SelfTest\Util copying lib\Crypto\SelfTest\Util\test_asn1.py -> build\lib.win-amd64-3.4\Cry pto\SelfTest\Util copying lib\Crypto\SelfTest\Util\test_Counter.py -> build\lib.win-amd64-3.4\ Crypto\SelfTest\Util copying lib\Crypto\SelfTest\Util\test_number.py -> build\lib.win-amd64-3.4\C rypto\SelfTest\Util copying lib\Crypto\SelfTest\Util\test_winrandom.py -> build\lib.win-amd64-3. 4\Crypto\SelfTest\Util copying lib\Crypto\SelfTest\Util\__init__.py -> build\lib.win-amd64-3.4\Cryp to\SelfTest\Util creating build\lib.win-amd64-3.4\Crypto\SelfTest\Signature copying lib\Crypto\SelfTest\Signature\test_pkcs1_15.py -> build\lib.win-amd6 4-3.4\Crypto\SelfTest\Signature copying lib\Crypto\SelfTest\Signature\test_pkcs1_pss.py -> build\lib.win-amd 64-3.4\Crypto\SelfTest\Signature copying lib\Crypto\SelfTest\Signature\__init__.py -> build\lib.win-amd64-3.4 \Crypto\SelfTest\Signature creating build\lib.win-amd64-3.4\Crypto\Protocol copying lib\Crypto\Protocol\AllOrNothing.py -> build\lib.win-amd64-3.4\Crypt o\Protocol copying lib\Crypto\Protocol\Chaffing.py -> build\lib.win-amd64-3.4\Crypto\Pr otocol copying lib\Crypto\Protocol\KDF.py -> build\lib.win-amd64-3.4\Crypto\Protoco l copying lib\Crypto\Protocol\__init__.py -> build\lib.win-amd64-3.4\Crypto\Pr otocol creating build\lib.win-amd64-3.4\Crypto\PublicKey copying lib\Crypto\PublicKey\DSA.py -> build\lib.win-amd64-3.4\Crypto\Public Key copying lib\Crypto\PublicKey\ElGamal.py -> build\lib.win-amd64-3.4\Crypto\Pu blicKey copying 
lib\Crypto\PublicKey\pubkey.py -> build\lib.win-amd64-3.4\Crypto\Pub licKey copying lib\Crypto\PublicKey\RSA.py -> build\lib.win-amd64-3.4\Crypto\Public Key copying lib\Crypto\PublicKey\_DSA.py -> build\lib.win-amd64-3.4\Crypto\Publi cKey copying lib\Crypto\PublicKey\_RSA.py -> build\lib.win-amd64-3.4\Crypto\Publi cKey copying lib\Crypto\PublicKey\_slowmath.py -> build\lib.win-amd64-3.4\Crypto\ PublicKey copying lib\Crypto\PublicKey\__init__.py -> build\lib.win-amd64-3.4\Crypto\P ublicKey creating build\lib.win-amd64-3.4\Crypto\Signature copying lib\Crypto\Signature\PKCS1_PSS.py -> build\lib.win-amd64-3.4\Crypto\ Signature copying lib\Crypto\Signature\PKCS1_v1_5.py -> build\lib.win-amd64-3.4\Crypto \Signature copying lib\Crypto\Signature\__init__.py -> build\lib.win-amd64-3.4\Crypto\S ignature Skipping implicit fixer: buffer Skipping implicit fixer: idioms Skipping implicit fixer: set_literal Skipping implicit fixer: ws_comma running build_ext building 'Crypto.Random.OSRNG.winrandom' extension warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastm ath. error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat). ---------------------------------------- Command "C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C: \\Users\\leunge\\AppData\\Local\\Temp\\pip-build-qwwdy8a4\\pycrypto\\setup.py';e xec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n '), __file__, 'exec'))" install --record C:\Users\leunge\AppData\Local\Temp\pip- tc1xxkzk-record\install-record.txt --single-version-externally-managed --compile " failed with error code 1 in C:\Users\leunge\AppData\Local\Temp\pip-build-qwwdy /***************************/ Edward Leung, Ph.D. 
Quantitative Research Analyst
Invesco Quantitative Strategies
1166 Ave of the Americas, 27th Floor
New York, NY 10036
212-278-9744 (w)
646-236-1453 (c)
Edward_Leung at invesco.com

****************************************************************
Confidentiality Note: The information contained in this message, and any
attachments, may contain confidential and/or privileged material. It is
intended solely for the person(s) or entity to which it is addressed. Any
review, retransmission, dissemination, or taking of any action in reliance
upon this information by persons or entities other than the intended
recipient(s) is prohibited. If you received this in error, please contact
the sender and delete the material from any computer.
****************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Edward_Leung at invesco.com Wed Apr 8 16:36:21 2015
From: Edward_Leung at invesco.com (Leung, Edward)
Date: Wed, 8 Apr 2015 14:36:21 +0000
Subject: [Distutils] installing package - problems
In-Reply-To: 
References: 
Message-ID: 

Thanks... so for the 64-bit build, I will need to install BOTH Visual C++
2010 Express and the Windows SDK for Visual Studio 2010??

edward

From: Ionel Cristian Mărieș [mailto:contact at ionelmc.ro]
Sent: Wednesday, April 08, 2015 10:30 AM
To: Leung, Edward
Cc: distutils-sig at python.org; D'Amore, Robert M.; Waisburd, Andrew;
Bithoney, Anthony S
Subject: Re: [Distutils] installing package - problems

Hello,

It looks like you need to install the compiler. If you want to install it,
I have a guide for that here (well, what worked for me - you might need
additional stuff for pysftp). Give it a try.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Apr 8, 2015 at 4:29 PM, Leung, Edward wrote:

Dear Sir/Madam,

I was trying to install a python package pysftp using the following at
command prompt: python -m pip install pysftp and I got the following error.
Could you tell me what is not working?? Thanks.

[... quoted pycrypto build output, identical to the original message above, snipped ...]

warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat).
----------------------------------------
Command "C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C:\\Users\\leunge\\AppData\\Local\\Temp\\pip-build-qwwdy8a4\\pycrypto\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\leunge\AppData\Local\Temp\pip-tc1xxkzk-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\leunge\AppData\Local\Temp\pip-build-qwwdy

/***************************/
Edward Leung, Ph.D.
Quantitative Research Analyst
Invesco Quantitative Strategies
1166 Ave of the Americas, 27th Floor
New York, NY 10036
212-278-9744 (w)
646-236-1453 (c)
Edward_Leung at invesco.com

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at ionelmc.ro Wed Apr 8 16:30:21 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 8 Apr 2015 17:30:21 +0300
Subject: [Distutils] installing package - problems
In-Reply-To: 
References: 
Message-ID: 

Hello,

It looks like you need to install the compiler. If you want to install it,
I have a guide for that here (well, what worked for me - you might need
additional stuff for pysftp). Give it a try.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Apr 8, 2015 at 4:29 PM, Leung, Edward wrote:
> Dear Sir/Madam,
>
> I was trying to install a python package pysftp using the following at
> command prompt: python -m pip install pysftp and I got the following error.
> Could you tell me what is not working?? Thanks.
>
> [quoted build log trimmed; the full output appears earlier in the thread]
>
> warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
>
> error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat).
>
> ----------------------------------------
> [pip "setup.py install" command failed with error code 1]
>
> /***************************/
>
> Edward Leung, Ph.D.
> Quantitative Research Analyst
> Invesco Quantitative Strategies
> 1166 Ave of the Americas, 27th Floor
> New York, NY 10036
>
> 212-278-9744 (w)
> 646-236-1453 (c)
> Edward_Leung at invesco.com
>
> ****************************************************************
> Confidentiality Note: The information contained in this
> message, and any attachments, may contain confidential
> and/or privileged material. It is intended solely for the
> person(s) or entity to which it is addressed. Any review,
> retransmission, dissemination, or taking of any action in
> reliance upon this information by persons or entities other
> than the intended recipient(s) is prohibited.
> If you received this in error, please contact the sender and delete the
> material from any computer.
> ****************************************************************
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at ionelmc.ro  Wed Apr  8 16:39:07 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 8 Apr 2015 17:39:07 +0300
Subject: [Distutils] installing package - problems
In-Reply-To: 
References: 
Message-ID: 

Yes, that's what worked for me. If you feel adventurous you can try Victor
Stinner's SDK-only approach (mentioned at the end).

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Apr 8, 2015 at 5:36 PM, Leung, Edward wrote:

> Thanks...so for the 64bit, I will need to install BOTH Visual C++ 2010
> Express and Windows SDK for Visual Studio 2010?
>
> edward
>
> *From:* Ionel Cristian Mărieș [mailto:contact at ionelmc.ro]
> *Sent:* Wednesday, April 08, 2015 10:30 AM
> *To:* Leung, Edward
> *Cc:* distutils-sig at python.org; D'Amore, Robert M.; Waisburd, Andrew;
> Bithoney, Anthony S
> *Subject:* Re: [Distutils] installing package - problems
>
> Hello,
>
> It looks like you need to install the compiler. If you want to install it,
> I have a guide for that here (well, what worked for me - you might need
> additional stuff for pysftp). Give it a try.
>
> Thanks,
> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
>
> On Wed, Apr 8, 2015 at 4:29 PM, Leung, Edward wrote:
>
> Dear Sir/Madam,
>
> I was trying to install a python package pysftp using the following at
> command prompt: python -m pip install pysftp and I got the following error.
> Could you tell me what is not working? Thanks.
>
> [quoted build log trimmed; the full output appears earlier in the thread]
>
> warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
>
> error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat).
>
> ----------------------------------------
> [pip "setup.py install" command failed with error code 1]
>
> /***************************/
>
> Edward Leung, Ph.D.
> Quantitative Research Analyst
> Invesco Quantitative Strategies
> 1166 Ave of the Americas, 27th Floor
> New York, NY 10036
>
> 212-278-9744 (w)
> 646-236-1453 (c)
> Edward_Leung at invesco.com
>
> ****************************************************************
> Confidentiality Note: The information contained in this
> message, and any attachments, may contain confidential
> and/or privileged material. It is intended solely for the
> person(s) or entity to which it is addressed. Any review,
> retransmission, dissemination, or taking of any action in
> reliance upon this information by persons or entities other
> than the intended recipient(s) is prohibited.
> If you received this in error, please contact the sender and delete the
> material from any computer.
> ****************************************************************
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From randy at thesyrings.us  Wed Apr  8 16:47:12 2015
From: randy at thesyrings.us (Randy Syring)
Date: Wed, 08 Apr 2015 10:47:12 -0400
Subject: [Distutils] Method for calculating .so file names
In-Reply-To: 
References: <552459F0.4020001@thesyrings.us>
Message-ID: <55253F70.5010806@thesyrings.us>

Thanks. FWIW, I ended up joining the SOABI and MULTIARCH config values
to get the file name:

https://github.com/level12/secretstorage-setup/blob/master/sssetup/core.py#L54

*Randy Syring*
Husband | Father | Redeemed Sinner

/"For what does it profit a man to gain the whole world
and forfeit his soul?" (Mark 8:36 ESV)/

On 04/08/2015 05:11 AM, Ionel Cristian Mărieș wrote:
> distutils.sysconfig.get_config_vars("SO") or
> distutils.sysconfig.get_config_vars("EXT_SUFFIX")
>
> (seems the latter is only available for py3? and the py3 source says
> "SO" is deprecated)
>
> Mind you, this is easy to find out if you build an extension and you
> use hunter, that's how I found it ;-)
>
> Thanks,
> -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
>
> On Wed, Apr 8, 2015 at 1:28 AM, Randy Syring wrote:
>
> As a follow-up to my earlier question, is there a way to
> programmatically determine the naming scheme of installed shared
> object files?
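[Editorial aside: the config-variable approach quoted above can be wrapped in a small helper. This is only a sketch, not code from the thread; the `ext_filename` name is made up here, and it uses the stdlib `sysconfig` module rather than `distutils.sysconfig`, which exposes the same variables.]

```python
import sysconfig

def ext_filename(module_name):
    """Best-effort filename of a compiled extension module for this interpreter.

    Prefers EXT_SUFFIX (e.g. ".cpython-34m-x86_64-linux-gnu.so"); falls back
    to the older, deprecated "SO" config variable where EXT_SUFFIX is absent.
    """
    suffix = sysconfig.get_config_var("EXT_SUFFIX")
    if suffix is None:
        suffix = sysconfig.get_config_var("SO")
    return module_name + suffix

# On Ubuntu's default Python 3.4.0 this yields something like
# "_dbus_bindings.cpython-34m-x86_64-linux-gnu.so"; on builds configured
# without a tagged suffix it is plain "_dbus_bindings.so".
print(ext_filename("_dbus_bindings"))
```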
> I've seen a couple different naming formats:
>
>> # python 2.7.6 - default ubuntu version
>> $ ls /usr/lib/python2.7/dist-packages/_dbus_*
>> /usr/lib/python2.7/dist-packages/_dbus_bindings.so
>> /usr/lib/python2.7/dist-packages/_dbus_glib_bindings.so
>>
>> # python 3.4.0 - default ubuntu version
>> $ ls /usr/lib/python3/dist-packages/_dbus_*
>> /usr/lib/python3/dist-packages/_dbus_bindings.cpython-34m-x86_64-linux-gnu.so
>> /usr/lib/python3/dist-packages/_dbus_glib_bindings.cpython-34m-x86_64-linux-gnu.so
>>
>> # python 3.4.3 - manual install on ubuntu
>> $ ls /opt/python34/lib/python3.4/site-packages/_dbus_*
>> /opt/python34/lib/python3.4/site-packages/_dbus_bindings.so
>> /opt/python34/lib/python3.4/site-packages/_dbus_glib_bindings.so
>
> I can piece together the file structure for 3.4.0 from information
> in sysconfig, but I'm wondering if there is a better way.
>
> Thanks.
>
> *Randy Syring*
> Husband | Father | Redeemed Sinner
>
> /"For what does it profit a man to gain the whole world
> and forfeit his soul?" (Mark 8:36 ESV)/
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Thu Apr  9 17:01:14 2015
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 9 Apr 2015 11:01:14 -0400
Subject: [Distutils] [pycon] Packaging-related discussions
In-Reply-To: 
References: 
Message-ID: 

Do we have a confirmation for Friday? I have not seen a BoF board yet.

On Fri, Apr 3, 2015 at 6:16 AM, Nick Coghlan wrote:

> On 3 April 2015 at 08:09, Richard Jones wrote:
> > Could the BoF be Friday instead please? Saturday is International Tabletop
> > Day, and there's a bunch of us will be celebrating that :)
>
> Sure, Friday works for me, too.
>
> Cheers,
> Nick.
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msabramo at gmail.com Thu Apr 9 21:34:47 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Thu, 9 Apr 2015 12:34:47 -0700 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: <8377E7CF-95D7-42A0-851D-11B8597A39B3@gmail.com> I was wondering if there was going to be any sprinting on PyPA stuff like pip and virtualenv. I'm not at Pycon this year, but I'd be up for sprinting remotely. -Marc http://marc-abramowitz.com Sent from my iPhone 4S > On Apr 9, 2015, at 8:01 AM, David Cournapeau wrote: > > Do we have a confirmation for Friday ? I have not seen a BoF board yet. > >> On Fri, Apr 3, 2015 at 6:16 AM, Nick Coghlan wrote: >> On 3 April 2015 at 08:09, Richard Jones wrote: >> > Could the BoF be Friday instead please? Saturday is International Tabletop >> > Day, and there's a bunch of us will be celebrating that :) >> >> Sure, Friday works for me, too. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Apr 10 11:35:43 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Apr 2015 05:35:43 -0400 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: On 9 Apr 2015 11:01, "David Cournapeau" wrote: > > Do we have a confirmation for Friday ? I have not seen a BoF board yet. 6 pm time slot today (Friday), see the board at the base of the escalators for room details. Cheers, Nick. 
> > On Fri, Apr 3, 2015 at 6:16 AM, Nick Coghlan wrote: >> >> On 3 April 2015 at 08:09, Richard Jones wrote: >> > Could the BoF be Friday instead please? Saturday is International Tabletop >> > Day, and there's a bunch of us will be celebrating that :) >> >> Sure, Friday works for me, too. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Sat Apr 11 00:10:36 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Sat, 11 Apr 2015 00:10:36 +0200 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: Which specific room are you in? The board says 512dh. -- Olivier From olivier.grisel at ensta.org Sat Apr 11 00:18:22 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Sat, 11 Apr 2015 00:18:22 +0200 Subject: [Distutils] [pycon] Packaging-related discussions In-Reply-To: References: Message-ID: 512f -- Olivier From ncoghlan at gmail.com Sat Apr 11 16:46:33 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 11 Apr 2015 10:46:33 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" Message-ID: Guido mentioned in his PyCon keynote this morning that we don't currently have a great way for package authors to ask for help from their user base. 
It occurred to me that it could be useful to have a "Help needed" feature on PyPI (after the Warehouse migration) where package maintainers could register requests for assistance, such as:

* looking for new maintainers
* requests for help with Python 3 support
* links to specific issues a maintainer would like help with
* links to donation pages (including links to Patreon, Gratipay, etc)
* links to crowdfunding campaigns for specific new features
* links to CVs/LinkedIn if folks are looking for work

Given a requirements.txt file, pip could then gain a "help-needed" command that pulled the "help needed" entries for the named projects.

The general idea would be to provide a direct channel from project maintainers that may need help to their users who may be in a position to provide that help. It wouldn't need to be too complicated, just a Markdown field that maintainers could edit.

In some cases, software is backed by folks that already have a sustainable support model. For these it could be nice if the Markdown field could be used to say "Help not needed", and give credit to the people or orgs supporting them.

It's not something we can do anything about until after the Warehouse migration, but I figured I'd mention it while I was thinking about it :)

Cheers,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tritium-list at sdamon.com  Sat Apr 11 16:53:52 2015
From: tritium-list at sdamon.com (Alexander Walters)
Date: Sat, 11 Apr 2015 10:53:52 -0400
Subject: [Distutils] pip/warehouse feature idea: "help needed"
In-Reply-To: 
References: 
Message-ID: <55293580.1060005@sdamon.com>

Is the package index really the best place to put this?
This is a very social-networking feature for the authoritative repository of just about all the third-party modules, and it feels like either it could corrupt the 'sanctity' of the repository (in the absolute worst case), or simply be totally ineffective because we all only see the cheese shop through pip and twine (in the best case).

I am not saying the PSF shouldn't do this, but is pypi REALLY the best part of python.org to put it?

On 4/11/2015 10:46, Nick Coghlan wrote:
> Guido mentioned in his PyCon keynote this morning that we don't
> currently have a great way for package authors to ask for help from
> their user base.
>
> It occurred to me that it could be useful to have a "Help needed"
> feature on PyPI (after the Warehouse migration) where package
> maintainers could register requests for assistance, such as:
>
> * looking for new maintainers
> * requests for help with Python 3 support
> * links to specific issues a maintainer would like help with
> * links to donation pages (including links to Patreon, Gratipay, etc)
> * links to crowdfunding campaigns for specific new features
> * links to CVs/LinkedIn if folks are looking for work
>
> Given a requirements.txt file, pip could then gain a "help-needed"
> command that pulled the "help needed" entries for the named projects.
>
> The general idea would be to provide a direct channel from project
> maintainers that may need help to their users who may be in a position
> to provide that help. It wouldn't need to be too complicated, just a
> Markdown field that maintainers could edit.
>
> In some cases, software is backed by folks that already have a
> sustainable support model. For these it could be nice if the Markdown
> field could be used to say "Help not needed", and give credit to the
> people or orgs supporting them.
>
> It's not something we can do anything about until after the Warehouse
> migration, but I figured I'd mention it while I was thinking about it :)
>
> Cheers,
> Nick.
> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sat Apr 11 18:41:36 2015 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 11 Apr 2015 11:41:36 -0500 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: Message-ID: On Apr 11, 2015 9:46 AM, "Nick Coghlan" wrote: > > Guido mentioned in his PyCon keynote this morning that we don't currently have a great way for package authors to ask for help from their user base. > > It occurred to me that it could be useful to have a "Help needed" feature on PyPI (after the Warehouse migration) where package maintainers could register requests for assistance, such as: > > * looking for new maintainers > * requests for help with Python 3 support > * links to specific issues a maintainer would like help with > * links to donation pages (including links to Patreon, Gratipay, etc) > * links to crowdfunding campaigns for specific new features > * links to CVs/LinkedIn if folks are looking for work > > Given a requirements.txt file, pip could then gain a "help-needed" command that pulled the "help needed" entries for the named projects. We currently have https://pypi.python.org/pypi/pyline/json So, ideally: lookup("https://pypi.python.org/pypi/pyline") lookup("https://github.com/westurner/pypi") lookup() : JSON-LD dict { PEP 426 Metadata 2.0 JSON (pydist.json), doap:Project, schema.org/SoftwareApplication, [{ "id": "#kickstarter-one", "type": "kickstarter-campaign", "url": , "description": "...", "date": "...", }, { "id": #donations-one", "type": "donations-xyz", "url": ", } # ... } Unique ids/types made adding service-specific favicons, #anchors (and structured lookup) a bit easier. JSON-LD can also be easily displayed in RDFa in the primary HTML page. 
> > The general idea would be to provide a direct channel from project maintainers that may need help to their users who may be in a position to provide that help. It wouldn't need to be too complicated, just a Markdown field that maintainers could edit. There is a patch to add Markdown support to pypa/readme ( https://github.com/pypa/readme/issues/1). A structured (JSON-LD) vocabulary for these links (predicates) would also be immensely useful. Use cases: * what are all of the RSS feeds for [my] project * cross-domain project search * devbots * ( you name it ) This ties in with some work on a "Tools Schema" over in the W3C RDFJS group: * https://text.allmende.io/p/rdfjs As well as a Mailing List Extractor ("devbot"): https://westurner.org/wiki/ideas#open-source-mailing-list-extractor) And a general focus on tool / project documentation: https://westurner.org/wiki/tools ( https://westurner.github.io/tools/ ) > > In some cases, software is backed by folks that already have a sustainable support model. For these it could be nice if the Markdown field could be used to say "Help not needed", and give credit to the people or orgs supporting them. > Absolutely. 
Even a link to a 'Contributing' docs page (such as created by cookiecutter-pypackage for ReadTheDocs or pythonhosted)

> It's not something we can do anything about until after the Warehouse migration, but I figured I'd mention it while I was thinking about it :)

Getting setuptools/wheel/pypi/warehouse metadata as RDFa would require:

* a JSON-LD context (and/or proper RDFS/OWL)
* updates to the warehouse templates
* new "reified" edges with a schema:url, schema:name/rdfs:label, and schema:descriptions

Here's an example of updating the pyvideo.org app with RDFa metadata (without migrating the database schema at all):

* "Add Schema.org VideoObject RDFa metadata" https://github.com/pyvideo/richard/pull/213

From "Implement "hook" support for package signature verification" https://github.com/pypa/pip/issues/1035#issuecomment-39012414 :

> Is this a signed graph with typed edges?

I would love to work on this; with free time or paid time wherever.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com  Sat Apr 11 18:44:42 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 11 Apr 2015 11:44:42 -0500
Subject: [Distutils] pip/warehouse feature idea: "help needed"
In-Reply-To: <55293580.1060005@sdamon.com>
References: <55293580.1060005@sdamon.com>
Message-ID: 

On Sat, Apr 11, 2015 at 9:53 AM, Alexander Walters wrote:

> Is the package index really the best place to put this? This is a very
> social-networking feature for the authoritative repository of just about
> all the third party module, and it feels like either it could corrupt the
> 'sanctity' of the repository (in the absolute worst case), or simply be
> totally ineffective because we all only see the cheese shop through pip and
> twin (in the best case).
>
> I am not saying the PSF shouldn't do this, but is pypi REALLY the best
> part of python.org to put it?

There would need to be an additional application parsing egg/wheel metadata on upload.
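For illustration only (this is not an existing PyPI/Warehouse component): since a wheel is just a zip archive with an RFC 822-style *.dist-info/METADATA member inside, a minimal sketch of "parsing wheel metadata on upload" needs nothing beyond the stdlib. The wheel filename in the commented usage is made up.

```python
import zipfile
from email.parser import HeaderParser

def read_wheel_metadata(wheel_file):
    """Extract and parse the METADATA file from a wheel.

    ``wheel_file`` may be a path or a file-like object, since a wheel
    is an ordinary zip archive.
    """
    with zipfile.ZipFile(wheel_file) as wf:
        # A wheel contains exactly one *.dist-info/METADATA member.
        meta_name = next(n for n in wf.namelist()
                         if n.endswith(".dist-info/METADATA"))
        raw = wf.read(meta_name).decode("utf-8")
    # METADATA uses RFC 822-style headers, so the email parser handles it.
    return HeaderParser().parsestr(raw)

# Hypothetical usage:
# meta = read_wheel_metadata("example-1.0-py2.py3-none-any.whl")
# print(meta["Name"], meta["Version"])
```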
> > > On 4/11/2015 10:46, Nick Coghlan wrote: > > Guido mentioned in his PyCon keynote this morning that we don't currently > have a great way for package authors to ask for help from their user base. > > It occurred to me that it could be useful to have a "Help needed" feature > on PyPI (after the Warehouse migration) where package maintainers could > register requests for assistance, such as: > > * looking for new maintainers > * requests for help with Python 3 support > * links to specific issues a maintainer would like help with > * links to donation pages (including links to Patreon, Gratipay, etc) > * links to crowdfunding campaigns for specific new features > * links to CVs/LinkedIn if folks are looking for work > > Given a requirements.txt file, pip could then gain a "help-needed" command > that pulled the "help needed" entries for the named projects. > > The general idea would be to provide a direct channel from project > maintainers that may need help to their users who may be in a position to > provide that help. It wouldn't need to be too complicated, just a Markdown > field that maintainers could edit. > > In some cases, software is backed by folks that already have a sustainable > support model. For these it could be nice if the Markdown field could be > used to say "Help not needed", and give credit to the people or orgs > supporting them. > > It's not something we can do anything about until after the Warehouse > migration, but I figured I'd mention it while I was thinking about it :) > > Cheers, > Nick. 
> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Wes Turner https://westurner.org https://wrdrd.com/docs/consulting/knowledge-engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From msabramo at gmail.com Sat Apr 11 19:29:16 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Sat, 11 Apr 2015 10:29:16 -0700 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: Message-ID: Interesting. One of the things that would help with getting people to help and is in the PEPs but last I checked wasn't yet implemented is the metadata that allows putting in all kinds of URLs and the ones I'm primarily thinking of here are the source code repository URL and the issue tracker URL. I personally sigh when I see a PyPI page that lists its URL as said PyPI page as this seems redundant and not useful and I'd rather see a GitHub or Bitbucket URL (or maybe a foo-project.org or readthedocs URL, but I the repo URL is usually what I'm most interested in). If we had the metadata with all the different kinds of URLs and the tools to show it and search it, then it would be clearer what to put where and would make it easier for consumers to find what they're looking for. Another thought I had while reading your email was the OpenHatch project and if there could be some tie-in with that. It also would be interesting if package maintainers had a channel to communicate with their user base. Back when I was at Yahoo, our proprietary package tool kept track of all installs of packages and stored the information in a centralized database. 
As a result, a package maintainer could see how many people had installed each version of their package and could send emails to folks who had installed a particular version or folks who had installed any version. A lot of folks used this to warn user bases about security issues, bugs, deprecations, etc. and to encourage folks to upgrade to newer versions and monitor the progress of such efforts. This is a pretty big architectural change of course. I can imagine an easier route could be to have the metadata have a link to a mailing list so a user could easily check a box, press a button, specify an option to pip install, etc. that would subscribe them to a project mailing list, hosted elsewhere. This obviates the need for PyPI/Warehouse to have a big database of who is interested in what by distributing out that responsibility to other tools like Mailman and what not. -Marc http://marc-abramowitz.com Sent from my iPhone 4S > On Apr 11, 2015, at 7:46 AM, Nick Coghlan wrote: > > Guido mentioned in his PyCon keynote this morning that we don't currently have a great way for package authors to ask for help from their user base. > > It occurred to me that it could be useful to have a "Help needed" feature on PyPI (after the Warehouse migration) where package maintainers could register requests for assistance, such as: > > * looking for new maintainers > * requests for help with Python 3 support > * links to specific issues a maintainer would like help with > * links to donation pages (including links to Patreon, Gratipay, etc) > * links to crowdfunding campaigns for specific new features > * links to CVs/LinkedIn if folks are looking for work > > Given a requirements.txt file, pip could then gain a "help-needed" command that pulled the "help needed" entries for the named projects. > > The general idea would be to provide a direct channel from project maintainers that may need help to their users who may be in a position to provide that help. 
It wouldn't need to be too complicated, just a Markdown field that maintainers could edit.
>
> In some cases, software is backed by folks that already have a sustainable support model. For these it could be nice if the Markdown field could be used to say "Help not needed", and give credit to the people or orgs supporting them.
>
> It's not something we can do anything about until after the Warehouse migration, but I figured I'd mention it while I was thinking about it :)
>
> Cheers,
> Nick.
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com  Sat Apr 11 20:14:50 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 11 Apr 2015 13:14:50 -0500
Subject: [Distutils] pip/warehouse feature idea: "help needed"
In-Reply-To: 
References: 
Message-ID: 

On Sat, Apr 11, 2015 at 12:29 PM, Marc Abramowitz wrote:

> Interesting. One of the things that would help with getting people to help
> and is in the PEPs but last I checked wasn't yet implemented is the
> metadata that allows putting in all kinds of URLs and the ones I'm
> primarily thinking of here are the source code repository URL and the issue
> tracker URL.

http://legacy.python.org/dev/peps/pep-0459/:

PEP: 459
Title: Standard Metadata Extensions for Python Software Packages
Version: 471651c1fe20
Last-Modified: 2014-07-02 22:55:34 -0700 (Wed, 02 Jul 2014)
Author: Nick Coghlan
BDFL-Delegate: Nick Coghlan
Discussions-To: Distutils SIG
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Requires: 426
Created: 11 Nov 2013
Post-History: 21 Dec 2013

A JSON-LD context would be outstanding.
- [ ] Additional properties for {...} (see RDFJS https://text.allmende.io/p/rdfjs ## Tools Schema) > I personally sigh when I see a PyPI page that lists its URL as said PyPI > page as this seems redundant and not useful and I'd rather see a GitHub or > Bitbucket URL (or maybe a foo-project.org or readthedocs URL, but I the > repo URL is usually what I'm most interested in). > > If we had the metadata with all the different kinds of URLs and the tools > to show it and search it, then it would be clearer what to put where and > would make it easier for consumers to find what they're looking for. > > Another thought I had while reading your email was the OpenHatch project > and if there could be some tie-in with that. > > It also would be interesting if package maintainers had a channel to > communicate with their user base. Back when I was at Yahoo, our proprietary > package tool kept track of all installs of packages and stored the > information in a centralized database. As a result, a package maintainer > could see how many people had installed each version of their package and > could send emails to folks who had installed a particular version or folks > who had installed any version. A lot of folks used this to warn user bases > about security issues, bugs, deprecations, etc. and to encourage folks to > upgrade to newer versions and monitor the progress of such efforts. > Links to e.g. cvedetails, lists, and RSS feeds would be super helpful. Links to e.g. IRC, Slack, Gitter would be super helpful. Where Links == {edges, predicates, new metadata properties} > > This is a pretty big architectural change of course. I can imagine an > easier route could be to have the metadata have a link to a mailing list so > a user could easily check a box, press a button, specify an option to pip > install, etc. that would subscribe them to a project mailing list, hosted > elsewhere. 
This obviates the need for PyPI/Warehouse to have a big database
> of who is interested in what by distributing out that responsibility to
> other tools like Mailman and what not.

There are a number of web-based pip management applications.

Really, upgrading to Mailman 3 w/ the message archive links auto-appended would also be great.

The applications are much broader than just Python packages (eggs, wheels, condabuilds) and pip/peep dependency resolution. (... RDFJS https://text.allmende.io/p/rdfjs ## Tools Schema )
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com  Sat Apr 11 20:35:31 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 11 Apr 2015 13:35:31 -0500
Subject: [Distutils] pip/warehouse feature idea: "help needed"
In-Reply-To: 
References: 
Message-ID: 

On Sat, Apr 11, 2015 at 1:14 PM, Wes Turner wrote:

> On Sat, Apr 11, 2015 at 12:29 PM, Marc Abramowitz wrote:
>
>> Interesting. One of the things that would help with getting people to
>> help and is in the PEPs but last I checked wasn't yet implemented is the
>> metadata that allows putting in all kinds of URLs and the ones I'm
>> primarily thinking of here are the source code repository URL and the issue
>> tracker URL.
>
> http://legacy.python.org/dev/peps/pep-0459/:
>
> PEP: 459
> Title: Standard Metadata Extensions for Python Software Packages
> Version: 471651c1fe20
> Last-Modified: 2014-07-02 22:55:34 -0700 (Wed, 02 Jul 2014)
> Author: Nick Coghlan <ncoghlan at gmail.com>
> BDFL-Delegate: Nick Coghlan <ncoghlan at gmail.com>
> Discussions-To: Distutils SIG <distutils-sig at python.org>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Requires: 426
> Created: 11 Nov 2013
> Post-History: 21 Dec 2013
>
> A JSON-LD context would be outstanding.
> > - [ ] Additional properties for {...} (see RDFJS > https://text.allmende.io/p/rdfjs ## Tools Schema) > > >> I personally sigh when I see a PyPI page that lists its URL as said PyPI >> page as this seems redundant and not useful and I'd rather see a GitHub or >> Bitbucket URL (or maybe a foo-project.org or readthedocs URL, but I the >> repo URL is usually what I'm most interested in). >> >> If we had the metadata with all the different kinds of URLs and the tools >> to show it and search it, then it would be clearer what to put where and >> would make it easier for consumers to find what they're looking for. >> >> Another thought I had while reading your email was the OpenHatch project >> and if there could be some tie-in with that. >> >> It also would be interesting if package maintainers had a channel to >> communicate with their user base. Back when I was at Yahoo, our proprietary >> package tool kept track of all installs of packages and stored the >> information in a centralized database. As a result, a package maintainer >> could see how many people had installed each version of their package and >> could send emails to folks who had installed a particular version or folks >> who had installed any version. A lot of folks used this to warn user bases >> about security issues, bugs, deprecations, etc. and to encourage folks to >> upgrade to newer versions and monitor the progress of such efforts. >> > > Links to e.g. cvedetails, lists, and RSS feeds would be super helpful. > > Links to e.g. IRC, Slack, Gitter would be super helpful. > > Where Links == {edges, predicates, new metadata properties} > Links to downstream packages (and their RSS feeds) would also be helpful. 
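A rough sketch of what such a JSON-LD document with typed links could look like. Purely illustrative: the context term names and the "help_needed" entries are my assumptions, not an agreed PyPA vocabulary; only the DOAP and schema.org namespaces themselves are real.

```python
import json

# Illustrative only: "repository"/"issueTracker"/"help_needed" are assumed
# term names mapped onto the real DOAP and schema.org vocabularies.
project = {
    "@context": {
        "schema": "http://schema.org/",
        "doap": "http://usefulinc.com/ns/doap#",
        "repository": "doap:repository",
        "issueTracker": "doap:bug-database",
        "url": "schema:url",
        "description": "schema:description",
    },
    "@id": "https://pypi.python.org/pypi/example",
    "repository": "https://github.com/example/example",
    "issueTracker": "https://github.com/example/example/issues",
    "help_needed": [
        {"@id": "#py3-port",
         "description": "Help wanted porting to Python 3"},
    ],
}

# Serialize; the same document could be embedded as RDFa in the project page.
print(json.dumps(project, indent=2))
```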
* Debian has RDF (and also more structured link types that would be useful for project metadata)
  * changelog / "release notes"
  * build logs
* https://wiki.debian.org/RDF
* https://packages.qa.debian.org/p/python3-defaults.html
* https://packages.qa.debian.org/p/python3-defaults.ttl

What URI should pypi:readme or warehouse:readme expand to?

@prefix pypi: ;
@prefix warehouse: ;
@prefix github: ;

* pypi:json["info"]["name"] ( + ".json" )
* warehouse:json["info"]["name"]
* github:json["info"]["name"]

@prefix doap: ;

* http://lov.okfn.org/dataset/lov/vocabs/doap

@prefix schema: ;

* schema:SoftwareApplication -> https://schema.org/SoftwareApplication
* schema:Code -> https://schema.org/Code
* schema:Project -> TODO (new framework for extension vocabularies)

Should/could there be a pypa: namespace?

@prefix pypa: ;

* [ ] { PEP Metadata -> JSON-LD, pypa.ttl RDF Ontology }
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stuaxo2 at yahoo.com  Sun Apr 12 08:18:21 2015
From: stuaxo2 at yahoo.com (Stuart Axon)
Date: Sun, 12 Apr 2015 06:18:21 +0000 (UTC)
Subject: [Distutils] Installing a file into sitepackages
In-Reply-To: 
References: 
Message-ID: <2090846739.1107472.1428819501411.JavaMail.yahoo@mail.yahoo.com>

Finally got round to trying this in python3, it works well so switching over to it, cheers :)
S++

On Friday, March 27, 2015 4:57 PM, Ionel Cristian Mărieș wrote:

Also, a similar command subclass can be written for `develop`. So far I got 3 subclasses, for: build, easy_install and develop. Did I miss something important?

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Mar 25, 2015 at 2:51 PM, Stuart Axon wrote:

That looks much cleaner than my one, I'll give it a try.. does it work on python3? Just found out my one does not.
S++

On Wednesday, March 25, 2015 9:03 AM, Ionel Cristian Mărieș wrote:

This seems to do the trick:

class EasyInstallWithPTH(easy_install):
    def run(self):
        easy_install.run(self)
        for path in glob(join(dirname(__file__), 'src', '*.pth')):
            dest = join(self.install_dir, basename(path))
            self.copy_file(path, dest)

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Tue, Mar 24, 2015 at 11:36 AM, Stuart Axon wrote:

Hi, This works from pypi - but not when installing from source with "python setup.py install", which stops this nifty thing from working:

PYTHON_HUNTER="module='os.path'" python yourapp.py

Sandbox monkeypatches os.file, so I think it catches you using copy. Maybe we need a common API for code that runs at startup?
S++

On Tuesday, March 24, 2015 3:56 PM, Ionel Cristian Mărieș wrote:

Hey, if you just want to copy an out-of-package file into site-packages you could just override the build command and copy it there (in the build dir). Here's an example: https://github.com/ionelmc/python-hunter/blob/master/setup.py#L27-L31 - it seems to work fine with wheels.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Mon, Mar 16, 2015 at 11:02 AM, Stuart Axon wrote:

Hi All, This, and another memory-leak bug were triggered by the sandbox. Would it be possible to either add an API to exempt files, or just allow writing within site packages, even if just for .pth files? I'm monkey patching around these for now: https://github.com/stuaxo/vext/blob/master/setup.py#L16
S++

On Thursday, March 12, 2015 2:54 PM, Stuart Axon wrote:

For closure: The solution was to make a Command class + implement finalize_options to fix up the paths in distribution.data_files.

Source:

# https://gist.github.com/stuaxo/c76a042cb7aa6e77285b
"""Install a file into the root of sitepackages on windows as well as linux.

Under normal operation on win32 path_to_site_packages gets changed to ''
which installs inside the .egg instead."""
import os

from distutils import sysconfig
from distutils.command.install_data import install_data
from setuptools import setup

here = os.path.normpath(os.path.abspath(os.path.dirname(__file__)))

site_packages_path = sysconfig.get_python_lib()
site_packages_files = ['TEST_FILE.TXT']

class _install_data(install_data):
    def finalize_options(self):
        """
        On win32 the files here are changed to '' which
        ends up inside the .egg, change this back to the
        absolute path.
        """
        install_data.finalize_options(self)
        global site_packages_files
        for i, f in enumerate(list(self.distribution.data_files)):
            if not isinstance(f, basestring):
                folder, files = f
                if files == site_packages_files:
                    # Replace with absolute path version
                    self.distribution.data_files[i] = (site_packages_path, files)

setup(
    cmdclass={'install_data': _install_data},
    name='test_install',
    version='0.0.1',
    description='',
    long_description='',
    url='https://example.com',
    author='Stuart Axon',
    author_email='stuaxo2 at yahoo.com',
    license='PD',
    classifiers=[],
    keywords='',
    packages=[],
    install_requires=[],
    data_files=[
        (site_packages_path, site_packages_files),
    ],
)

On Tue, 10 Mar, 2015 at 11:29 PM, Stuart Axon wrote:

I had more of a dig into this, with a minimal setup.py:
https://gist.github.com/stuaxo/c76a042cb7aa6e77285b

setup calls install_data

On win32 setup.py calls install_data which copies the file into the egg - even though I have given the absolute path to sitepackages:

C:\> python setup.py install
....
running install_data
creating build\bdist.win32\egg
copying TEST_FILE.TXT -> build\bdist.win32\egg\
....

On Linux the file is copied to the right path:

$ python setup.py install
.....
installing package data to build/bdist.linux-x86_64/egg
running install_data
copying TEST_FILE.TXT -> /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages
....

*something* is normalising my absolute path to site packages into just '' - it's possible to see by looking at self.data_files in the 'run' function in distutils/command/install_data.py - on windows the first part has been changed to '' unlike on linux where it's the absolute path I set... still not sure where it's happening though.

This all took a while, as I rebuilt the VM and verified on 2.7.8 and 2.7.9.
S++

On Monday, March 9, 2015 12:17 AM, Stuart Axon wrote:

> I had a further look - and on windows the file ends up inside the .egg file, on linux it ends up inside the site packages as intended. At a guess it seems like there might be a bug in the path handling on windows... I wonder if it's something like this http://stackoverflow.com/questions/4579908/cross-platform-splitting-of-path-in-python which seems an easy way to get an off-by-one error in a path?

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Mon Apr 13 03:08:33 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 12 Apr 2015 21:08:33 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: <55293580.1060005@sdamon.com> References: <55293580.1060005@sdamon.com> Message-ID: On 11 Apr 2015 12:22, "Alexander Walters" wrote: > > Is the package index really the best place to put this? This is a very social-networking feature for the authoritative repository of just about all the third party module, and it feels like either it could corrupt the 'sanctity' of the repository (in the absolute worst case) If you're concerned that this feature might weaken the comforting illusion that PyPI published software is contributed and maintained by faceless automatons rather than living, breathing human beings, then yes, encouraging folks to think more about where the software they use is coming from would be a large part of the point of adding such a feature. > or simply be totally ineffective because we all only see the cheese shop through pip and twin (in the best case). Hence the idea of making the feature accessible through the command line clients, not just the web service. > I am not saying the PSF shouldn't do this, but is pypi REALLY the best part of python.org to put it? I personally believe so, yes - sustaining software over the long term is expensive in people's time, but it's often something we take for granted. The specific example Guido brought up in his keynote was the challenge of communicating a project's openness to Python 3 porting assistance. In the current user experience, if a project we use stops getting updated, resentment at the lack of updates is a more likely reaction for most of us than concern for whether or not the maintainer is OK. Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From tritium-list at sdamon.com  Mon Apr 13 07:29:23 2015
From: tritium-list at sdamon.com (Alexander Walters)
Date: Mon, 13 Apr 2015 01:29:23 -0400
Subject: [Distutils] pip/warehouse feature idea: "help needed"
In-Reply-To: 
References: <55293580.1060005@sdamon.com>
Message-ID: <552B5433.3030601@sdamon.com>

On 4/12/2015 21:08, Nick Coghlan wrote:
>
> Hence the idea of making the feature accessible through the command
> line clients, not just the web service.
>

For the love of... Can we get packaging fixed before we start jamming crap onto the tools? Enough already. No. Just No. Never. Stop. Just stop. No. As a user of these things, just stop right there.

> I personally believe so, yes - sustaining software over the long term
> is expensive in people's time, but it's often something we take for
> granted. The specific example Guido brought up in his keynote was the
> challenge of communicating a project's openness to Python 3 porting
> assistance.
>
> In the current user experience, if a project we use stops getting
> updated, resentment at the lack of updates is a more likely reaction
> for most of us than concern for whether or not the maintainer is OK.
>

Again, great idea. Does not need to be on the index. Does not even need to be on the same infrastructure as the index. I can think of at least four other places on the web where it will be better suited. If the PSF wants to take charge of that, then super. Put it next to the job board. But if you really want to solve the problem through the index, just make the link to SCM repos mandatory, and that's all you have to, and even should, do.

From guettliml at thomas-guettler.de  Mon Apr 13 08:24:38 2015
From: guettliml at thomas-guettler.de (Thomas Güttler)
Date: Mon, 13 Apr 2015 08:24:38 +0200
Subject: [Distutils] Make PEP 426 less boring
Message-ID: <552B6126.1020000@thomas-guettler.de>

Hi,

somehow I feel bored if I read PEP 426.
https://www.python.org/dev/peps/pep-0426/ One concrete improvement would be to remove this paragraph: {{{ The design draws on the Python community's 15 years of experience with distutils based software distribution, and incorporates ideas and concepts from other distribution systems, including Python's setuptools, pip and other projects, Ruby's gems, Perl's CPAN, Node.js's npm, PHP's composer and Linux packaging systems such as RPM and APT. }}} Because something like this was already said some lines before {{{ Metadata 2.0 represents a major upgrade to the Python packaging ecosystem, and attempts to incorporate experience gained over the 15 years(!) since distutils was first added to the standard library. Some of that is just incorporating existing practices from setuptools/pip/etc, some of it is copying from other distribution systems (like Linux distros or other development language communities) and some of it is attempting to solve problems which haven't yet been well solved by anyone (like supporting clean conversion of Python source packages to distro policy compliant source packages for at least Debian and Fedora, and perhaps other platform specific distribution systems). }}} **And** I would move the historic background (the second of the above quotes) to the end. Meta: are you interested in feedback like this? Regards, Thomas Güttler From olivier.grisel at ensta.org Mon Apr 13 14:35:44 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 13 Apr 2015 14:35:44 +0200 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: Message-ID: +1 overall to Nick's suggestions.
-- Olivier From cournape at gmail.com Mon Apr 13 16:39:28 2015 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Apr 2015 10:39:28 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more Message-ID: Hi there, During pycon, Nick mentioned there was interest in updating the wheel format to support downstream distributions. Nick mentioned Linux distributions, but I would like to express interest for other kinds of downstream distributors like Anaconda from Continuum or Canopy from Enthought (disclaimer: I work for Enthought). Right now, wheels have the following limitations for us: 1. lack of pre/post install/remove scripts 2. lack of a more fine-grained installation scheme 3. lack of clarity on which tags vendors should use for custom wheels: some packages we provide would not be installable on "normal" python, and it would be nice to have a scheme to avoid confusion there as well. At least 1. and 2. are of interest not just for us. Regarding 2., it looks like anything in the .data/data directory will be placed as is in sys.prefix by pip. This is how the distutils scheme is defined ATM, but I am not sure whether that's by design or accident? I would suggest using something close to autotools, with some tweaks to work well on Windows. I implemented something like this in my project bento ( https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py), but we could of course tweak that. For 1., I believe it was a conscious decision not to include them in wheel 1.0? Would it make sense to start a discussion to add it to wheel? I will be at the pycon sprints until Wednesday evening, so that we can flesh out a concrete proposal first, if there is enough interest. As a background: at Enthought, we have been using eggs to distribute binaries of python packages and other packages (e.g. C libraries, compiled binaries, etc...) for a very long time.
We had our own extensions to the egg format to support this, but I want to get out of eggs so as to make our own software more compatible with where the community is going. I would also like to avoid making ad-hoc extensions to wheels for our own purposes. thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Apr 13 16:44:36 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 13 Apr 2015 10:44:36 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: Message-ID: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> > On Apr 13, 2015, at 10:39 AM, David Cournapeau wrote: > > Hi there, > > During pycon, Nick mentioned there was interest in updating the wheel format to support downstream distributions. Nick mentioned Linux distributions, but I would like to express interest for other kind of downstream distributors like Anaconda from Continuum or Canopy from Enthought (disclaimer: I work for Enthought). > > Right now, wheels have the following limitations for us: > > 1. lack of post/pre install/removing > 2. more fine-grained installation scheme > 3. lack of clarify on which tags vendors should use for custom wheels: some packages we provide would not be installable on "normal" python, and it would be nice to have a scheme to avoid confusion there as well. > > At least 1. and 2. are of interest not just for us. > > Regarding 2., it looks like anything in the .data/data directory will be placed as is in sys.prefix by pip. This is how distutils scheme is defined ATM, but I am not sure whether that's by design or accident ? > > I would suggest to use something close to autotools, with some tweaks to work well on windows. I implemented something like this in my project bento (https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py ), but we could of course tweak that. 
> > For 1., I believe it was a conscious decision not to include them in wheel > 1.0 ? Would it make sense to start a discussion to add it to wheel ? > > I will be at the pycon sprints until wednesday evening, so that we can > flesh some concrete proposal first, if there is enough interest. > > As a background: at Enthought, we have been using eggs to distribute > binaries of python packages and other packages (e.g. C libraries, compiled > binaries, etc...) for a very long time. We had our own extensions to the > egg format to support this, but I want to get out of eggs so as to make our > own software more compatible with where the community is going. I would > also like to avoid making ad-hoc extensions to wheels for our own purposes. > To my knowledge, (1) was purposely punted until a later revision of Wheel just to make it easier to land the "basic" wheel. I think (2) is a reasonable thing as long as we can map it sanely on all platforms. I'm not sure what (3) means exactly. What is a "normal" Python, do you modify Python in a way that breaks the ABI but which isn't reflected in the standard ABI tag? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From cournape at gmail.com Mon Apr 13 16:54:23 2015 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Apr 2015 10:54:23 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 10:44 AM, Donald Stufft wrote: > > On Apr 13, 2015, at 10:39 AM, David Cournapeau wrote: > > Hi there, > > During pycon, Nick mentioned there was interest in updating the wheel > format to support downstream distributions. Nick mentioned Linux > distributions, but I would like to express interest for other kind of > downstream distributors like Anaconda from Continuum or Canopy from > Enthought (disclaimer: I work for Enthought). > > Right now, wheels have the following limitations for us: > > 1. lack of post/pre install/removing > 2. more fine-grained installation scheme > 3. lack of clarify on which tags vendors should use for custom wheels: > some packages we provide would not be installable on "normal" python, and > it would be nice to have a scheme to avoid confusion there as well. > > At least 1. and 2. are of interest not just for us. > > Regarding 2., it looks like anything in the .data/data > directory will be placed as is in sys.prefix by pip. This is how distutils > scheme is defined ATM, but I am not sure whether that's by design or > accident ? > > I would suggest to use something close to autotools, with some tweaks to > work well on windows. I implemented something like this in my project bento > ( > https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py), > but we could of course tweak that. > > For 1., I believe it was a conscious decision not to include them in wheel > 1.0 ? Would it make sense to start a discussion to add it to wheel ? 
> > I will be at the pycon sprints until wednesday evening, so that we can > flesh some concrete proposal first, if there is enough interest. > > As a background: at Enthought, we have been using eggs to distribute > binaries of python packages and other packages (e.g. C libraries, compiled > binaries, etc...) for a very long time. We had our own extensions to the > egg format to support this, but I want to get out of eggs so as to make our > own software more compatible with where the community is going. I would > also like to avoid making ad-hoc extensions to wheels for our own purposes. > > > To my knowledge, (1) was purposely punted until a later revision of Wheel > just to make it easier to land the ?basic? wheel. > Great. Was there any proposal made to support it at all ? Or should I just work from scratch there ? > > I think (2) is a reasonable thing as long as we can map it sanely on all > platforms. > Yes. We support all platforms at Enthought, and Windows is important for us ! > I?m not sure what (3) means exactly. What is a ?normal? Python, do you > modify Python in a way that breaks the ABI but which isn?t reflected in the > standard ABI tag? > It could be multiple things. The most obvious one is that generally. cross-platforms python distributions will try to be "relocatable" (i.e. the whole installation can be moved and still work). This means they require python itself to be built a special way. Strictly speaking, it is not an ABI issue, but the result is the same though: you can't use libraries from anaconda or canopy on top of a normal python More generally, we could be modifying python in a way that is not forward compatible with upstream python: a binary that works on our python may not work on the python from python.org (though the opposite is true). It would be nice if one could make sure pip will not try to install those eggs when installed on top of a python that does not advertise itself as "compatible". 
David -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Apr 13 17:02:10 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 13 Apr 2015 11:02:10 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: #1 is pretty straightforward. An entry-point format Python pre/post/etc. script may do. I have some ideas for the FHS, though I fear it's full of bikesheds: 1. Allow all GNU directory variables as .data/* subdirectories (https://www.gnu.org/prep/standards/html_node/Directory-Variables.html). The distutils names will continue to be allowed. packagename-1.0.data/mandir/... 2. Make data_files useful again. Interpolate path variables into distutils data_files using $template syntax. (Only allow at beginning?) data_files=[('$mandir/xyz', ['manfile', 'other_man_file']) In addition to $bindir, $mandir, etc. it will be important to allow the package name and version to be interpolated into the install directories. Inside the wheel archive, you will get packagename-1.0.data/mandir/manfile and packagename-1.0.data/mandir/other_man_file 3. Write the install paths (the mapping from $bindir, $mandir, $prefix etc. to the actual paths used) to one or more of a .py, .json, or .dist-info/* based on new metadata in WHEEL: install-paths-to: wheel/_paths.py It is critical that this be allowed to work without requiring the end user to look for it with pkg_resources or its pals. It's also good to only write it if the installed package actually needs to locate its file categories after it has been installed. This will also be written inside the wheel itself with relative paths to the .data/ directory. 4. Allow configurable & custom paths. The GNU paths could be configured relative to the distutils paths as a default. 
We might let the user add additional paths with a configuration dict. paths = { "foo" : "$bar/${quux}", "bar" : "${baz}/more/stuff", "baz" : "${quux}/again", "quux": "larry" } 5. On Windows, no one will really care where most of these files go, but they probably won't mind if they are installed into separate directories. Come up with sensible locations for the most important categories. On Mon, Apr 13, 2015 at 10:44 AM, Donald Stufft wrote: > > On Apr 13, 2015, at 10:39 AM, David Cournapeau wrote: > > Hi there, > > During pycon, Nick mentioned there was interest in updating the wheel format > to support downstream distributions. Nick mentioned Linux distributions, but > I would like to express interest for other kind of downstream distributors > like Anaconda from Continuum or Canopy from Enthought (disclaimer: I work > for Enthought). > > Right now, wheels have the following limitations for us: > > 1. lack of post/pre install/removing > 2. more fine-grained installation scheme > 3. lack of clarify on which tags vendors should use for custom wheels: some > packages we provide would not be installable on "normal" python, and it > would be nice to have a scheme to avoid confusion there as well. > > At least 1. and 2. are of interest not just for us. > > Regarding 2., it looks like anything in the .data/data directory > will be placed as is in sys.prefix by pip. This is how distutils scheme is > defined ATM, but I am not sure whether that's by design or accident ? > > I would suggest to use something close to autotools, with some tweaks to > work well on windows. I implemented something like this in my project bento > (https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py), > but we could of course tweak that. > > For 1., I believe it was a conscious decision not to include them in wheel > 1.0 ? Would it make sense to start a discussion to add it to wheel ? 
> > I will be at the pycon sprints until wednesday evening, so that we can flesh > some concrete proposal first, if there is enough interest. > > As a background: at Enthought, we have been using eggs to distribute > binaries of python packages and other packages (e.g. C libraries, compiled > binaries, etc...) for a very long time. We had our own extensions to the egg > format to support this, but I want to get out of eggs so as to make our own > software more compatible with where the community is going. I would also > like to avoid making ad-hoc extensions to wheels for our own purposes. > > > To my knowledge, (1) was purposely punted until a later revision of Wheel > just to make it easier to land the ?basic? wheel. > > I think (2) is a reasonable thing as long as we can map it sanely on all > platforms. > > I?m not sure what (3) means exactly. What is a ?normal? Python, do you > modify Python in a way that breaks the ABI but which isn?t reflected in the > standard ABI tag? > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From dholth at gmail.com Mon Apr 13 17:06:00 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 13 Apr 2015 11:06:00 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 10:54 AM, David Cournapeau wrote: > > > On Mon, Apr 13, 2015 at 10:44 AM, Donald Stufft wrote: >> >> >> On Apr 13, 2015, at 10:39 AM, David Cournapeau wrote: >> >> Hi there, >> >> During pycon, Nick mentioned there was interest in updating the wheel >> format to support downstream distributions. 
Nick mentioned Linux >> distributions, but I would like to express interest for other kind of >> downstream distributors like Anaconda from Continuum or Canopy from >> Enthought (disclaimer: I work for Enthought). >> >> Right now, wheels have the following limitations for us: >> >> 1. lack of post/pre install/removing >> 2. more fine-grained installation scheme >> 3. lack of clarify on which tags vendors should use for custom wheels: >> some packages we provide would not be installable on "normal" python, and it >> would be nice to have a scheme to avoid confusion there as well. >> >> At least 1. and 2. are of interest not just for us. >> >> Regarding 2., it looks like anything in the .data/data >> directory will be placed as is in sys.prefix by pip. This is how distutils >> scheme is defined ATM, but I am not sure whether that's by design or >> accident ? >> >> I would suggest to use something close to autotools, with some tweaks to >> work well on windows. I implemented something like this in my project bento >> (https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py), >> but we could of course tweak that. >> >> For 1., I believe it was a conscious decision not to include them in wheel >> 1.0 ? Would it make sense to start a discussion to add it to wheel ? >> >> I will be at the pycon sprints until wednesday evening, so that we can >> flesh some concrete proposal first, if there is enough interest. >> >> As a background: at Enthought, we have been using eggs to distribute >> binaries of python packages and other packages (e.g. C libraries, compiled >> binaries, etc...) for a very long time. We had our own extensions to the egg >> format to support this, but I want to get out of eggs so as to make our own >> software more compatible with where the community is going. I would also >> like to avoid making ad-hoc extensions to wheels for our own purposes. 
>> >> >> To my knowledge, (1) was purposely punted until a later revision of Wheel >> just to make it easier to land the ?basic? wheel. > > > Great. Was there any proposal made to support it at all ? Or should I just > work from scratch there ? > >> >> >> I think (2) is a reasonable thing as long as we can map it sanely on all >> platforms. > > > Yes. We support all platforms at Enthought, and Windows is important for us > ! > >> >> I?m not sure what (3) means exactly. What is a ?normal? Python, do you >> modify Python in a way that breaks the ABI but which isn?t reflected in the >> standard ABI tag? > > > It could be multiple things. The most obvious one is that generally. > cross-platforms python distributions will try to be "relocatable" (i.e. the > whole installation can be moved and still work). This means they require > python itself to be built a special way. Strictly speaking, it is not an ABI > issue, but the result is the same though: you can't use libraries from > anaconda or canopy on top of a normal python > > More generally, we could be modifying python in a way that is not forward > compatible with upstream python: a binary that works on our python may not > work on the python from python.org (though the opposite is true). It would > be nice if one could make sure pip will not try to install those eggs when > installed on top of a python that does not advertise itself as "compatible" We need a hook to alter pip's list of compatible tags (pip.pep425tags), and to alter the default tags used by bdist_wheel when creating wheels. One sensible proposal for "special" wheels is to just use a truncated hash of the platform description (a random hex string) in place of the wheel platform tag. 
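Daniel's points 2 and 4 above (interpolating $bindir/$mandir-style variables into data_files targets, plus user-defined paths that may reference each other) can be sketched with the stdlib string.Template. Everything here is illustrative: the base path values are made up and the resolution loop is just one possible strategy, not anything specified.

```python
from string import Template

# Illustrative base paths (an installer would supply the real ones).
paths = {
    "prefix": "/usr/local",
    "bindir": "/usr/local/bin",
    "mandir": "/usr/local/share/man",
}

# Point 4: user-supplied paths may reference each other.  Resolve by
# substituting repeatedly until every entry is placeholder-free.
custom = {
    "foo": "$bar/${quux}",
    "bar": "${baz}/more/stuff",
    "baz": "${quux}/again",
    "quux": "larry",
}

def resolve(extra, base):
    resolved = dict(base)
    pending = dict(extra)
    while pending:
        progressed = False
        for name, value in list(pending.items()):
            try:
                # substitute() raises KeyError while a dependency is unresolved
                resolved[name] = Template(value).substitute(resolved)
            except KeyError:
                continue
            del pending[name]
            progressed = True
        if not progressed:
            raise ValueError("circular or undefined path variables: %r"
                             % sorted(pending))
    return resolved

all_paths = resolve(custom, paths)

# Point 2: interpolate the variables into data_files install targets.
data_files = [("$mandir/man1", ["manfile", "other_man_file"])]
expanded = [(Template(target).substitute(all_paths), files)
            for target, files in data_files]

print(all_paths["foo"])    # larry/again/more/stuff/larry
print(expanded[0][0])      # /usr/local/share/man/man1
```

Note that string.Template substitutes placeholders anywhere in the string; restricting variables to the beginning of a target, as Daniel wonders about, would need an extra check on top of this.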
From p.f.moore at gmail.com Mon Apr 13 17:20:43 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 13 Apr 2015 16:20:43 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 13 April 2015 at 16:02, Daniel Holth wrote: > #1 is pretty straightforward. An entry-point format Python > pre/post/etc. script may do. There's metadata 2.0 information for this. It would be sensible to follow that definition where it applies, but otherwise yes, this shouldn't be hard. Some thoughts, though: 1. Some thought should be put into how we ensure that pre/post install/remove scripts are cross-platform. It would be a shame if a wheel was unusable on Windows for no reason other than that the postinstall script was written as a bash script. Or on Unix because the postinstall script tried to write Windows start menu items. 2. It's worth considering "appropriate use" of such scripts. The Windows start menu example is relevant here - I can easily imagine users requesting something like that for a wheel they want to install into the system Python, but it's completely inappropriate for installing into a virtualenv. 
To an extent, there's nothing we can (or maybe even should) do about this - projects that include inappropriate install scripts will get issues raised or will lose users, and the problem is self-correcting to an extent, but it's probably worth including in the implementation, some work to add appropriate documentation to the packaging user guide about "best practices" for pre/post-install/remove scripts (hmm, a glossary entry with a good name for these beasts would also be helpful :-)) Paul From chris.barker at noaa.gov Mon Apr 13 18:56:02 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 13 Apr 2015 09:56:02 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: NOTE: I don't work for any of the companies involved -- just a somewhat frustrated user... And someone that has been trying for years to make things easier for OS-X users. I?m not sure what (3) means exactly. What is a ?normal? Python, do you >> modify Python in a way that breaks the ABI but which isn?t reflected in the >> standard ABI tag? >> > > It could be multiple things. The most obvious one is that generally. > cross-platforms python distributions will try to be "relocatable" (i.e. the > whole installation can be moved and still work). This means they require > python itself to be built a special way. Strictly speaking, it is not an > ABI issue, but the result is the same though: you can't use libraries from > anaconda or canopy on top of a normal python > But why not? -- at least for Anaconda, it's because those libraries likely have non-python dependencies, which are expected to be installed in a particular way. And really, this is not particular to Anaconda/Canopy at all. Python itself has no answer for this issue, and eggs and wheels don't help. 
Well, maybe kinda sorta they do, but in a clunky/ugly way: in order to build a binary wheel with non-python dependencies (let's say something like libjpeg, for instance), you need to either: - assume that libjpeg is installed in a "standard" place -- really no solution at all (at least outside of linux) - statically link it - ship the dynamic lib with the package For the most part, the accepted solution for OS-X has been to statically link, but: - it's a pain to do. The gnu toolchain really likes to use dynamic linking, and building a static lib that will run on a maybe-older-than-the-build-system machine is pretty tricky. - now we end up with multiple copies of the same lib in the python install. There are a handful of libs that are used a LOT. Maybe there is no real downside -- disk space and memory are cheap these days, but it sure feels ugly. And I have yet to feel comfortable with having multiple versions of the same lib linked into one python instance -- I can't say I've seen a problem, but it makes me nervous. On Windows, the choices are the same, except that: it is so much harder to build many of the "standard" open source libs that package authors are more likely to do it for folks, and you do get the occasional "dll hell" issues. I had a plan to make some binary wheels for OS-X that were not really python packages, but actually just bundled up libs, so that other wheels could depend on them. OS-X does allow linking to relative paths, so this should have been doable, but I never got anyone else to agree this was a good idea, and I never found the roundtoits anyway. And it doesn't really fit into the PyPI, pip, wheel, etc. philosophy to have dependencies that are platform dependent and even worse, build-dependent. Meanwhile, conda was chugging along and getting a lot of momentum in the Scientific community.
And the core thing here is that conda was designed from the ground up to support essentially anything. This means it supports python packages that depend on non-python packages, but also supports packages that have nothing to do with python (Perl, command line tools, what have you...) So I have been focusing on conda lately. Which brings me back to the question: should the python tools (i.e. wheel) be extended to support more use-cases, specifically non-python dependencies? Or do we just figure that that's a problem better solved by projects with a larger scope (i.e. rpm, deb, conda, canopy)? I'm on the fence here. I mostly care about Python, and I think we're pretty darn close with allowing wheel to support the non-python dependencies, which would allow us all to "simply pip install" pretty much anything -- that would be cool. But maybe it's a bit of a slippery slope, and if we go there, we'll end up re-writing conda. BTW, while you can't generally install a conda package in/for another python, you can generally install a wheel in a conda python.... There are a few issues with pip/setuptools trying to resolve dependencies while not knowing about conda packages, but it does mostly work. Not sure that helped the discussion -- but I've been wrestling with this for a while, so thought I'd get my thoughts out there. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dholth at gmail.com Mon Apr 13 21:19:27 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 13 Apr 2015 15:19:27 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 12:56 PM, Chris Barker wrote: > NOTE: I don't work for any of the companies involved -- just a somewhat > frustrated user... And someone that has been trying for years to make things > easier for OS-X users. > >>> I?m not sure what (3) means exactly. What is a ?normal? Python, do you >>> modify Python in a way that breaks the ABI but which isn?t reflected in the >>> standard ABI tag? >> >> >> It could be multiple things. The most obvious one is that generally. >> cross-platforms python distributions will try to be "relocatable" (i.e. the >> whole installation can be moved and still work). This means they require >> python itself to be built a special way. Strictly speaking, it is not an ABI >> issue, but the result is the same though: you can't use libraries from >> anaconda or canopy on top of a normal python > > > But why not? -- at least for Anaconda, it's because those libraries likely > have non-python dependencies, which are expected to be installed in a > particular way. And really, this is not particular to Anaconda/Canopy at > all. Python itself has no answer for this issue, and eggs and wheels don't > help. Well, maybe kinda sorta they do, but in a clunky/ugly way: in order to > build a binary wheel with non-python dependencies (let's say something like > libjpeg, for instance), you need to either: > - assume that libjpeg is installed in a "standard" place -- really no > solution at all (at least outside of linux) > - statically link it > - ship the dynamic lib with the package > > For the most part, the accepted solution for OS-X has been to statically > link, but: > > - it's a pain to do. 
The gnu toolchain really likes to use dynamic linking, > and building a static lib that will run on a > maybe-older-than-the-build-system machine is pretty tricky. > > - now we end up with multiple copies of the same lib in the python install. > There are a handful of libs that are used a LOT. Maybe there is no real > downside -- disk space and memory are cheap these days, but it sure feels > ugly. And I have yet to feel comfortable with having multiple versions of > the same lib linked into one python instance -- I can't say I've seen a > problem, but it makes me nervous. > > On Windows, the choices are the same, except that: It is so much harder to > build many of the "standard" open source libs that package authors are more > likely to do it for folks, and you do get the occasional "dll hell" issues. > > I had a plan to make some binary wheels for OS-X that were not really python > packages, but actually just bundled up libs, so that other wheels could > depend on them. OS-X does allow linking to relative paths, so this should > have been doable, but I never got anyone else to agree this was a good idea, > and I never found the roundtoits anyway. And it doesn't really fit into the > PyPi, pip, wheel, etc. philosphy to have dependencies that are platform > dependent and even worse, build-dependent. > > Meanwhile, conda was chugging along and getting a lot of momentum in the > Scientific community. And the core thing here is that conda was designed > from the ground up to support essentially anything, This means is supports > python packages that depend on non-python packages, but also supports > packages that have nothing to do with python (Perl, command line tools, what > have you...) > > So I have been focusing on conda lately. > > Which brings me back to the question: should the python tools (i.e. wheel) > be extended to support more use-cases, specifically non-python dependencies? 
> Or do we just figure that that's a problem better solved by projects with a > larger scope (i.e. rpm, deb, conda, canopy). > > I'm on the fence here. I mostly care about Python, and I think we're pretty > darn close with allowing wheel to support the non-python dependencies, which > would allow us all to "simply pip install" pretty much anything -- that > would be cool. But maybe it's a bit of a slippery slope, and if we go there, > we'll end up re-writing conda. > > BTW, while you can't generally install a conda package in/for another > python, you can generally install a wheel in a conda python....There are a > few issues with pip/setuptools trying to resolve dependencies while not > knowing about conda packages, but it does mostly work. > > Not sure that helped the discussion -- but I've been wrestling with this for > a while, so thought I'd get my thoughts out there. I've always thought of wheel as solving only the Python-specific problem. Providing relocatable Python-specific packaging without trying to solve the intractable problem of non-Python dependencies. The strategy works best the more you are targeting "Python" as your platform and not a specific OS or distribution - sometimes it works well, other times not at all. Obviously if you need a specific build of PostgreSQL wheel isn't going to help you. With enough hacks you could make it work but are we ready to "pip install kde"? I don't think so. Personally I'm happy to let other tools solve the problem of C-level virtualenv. It's been suggested you could have a whole section in your Python package that said "by the way, RedHat package x, or Debian package y, or Gentoo package z", or use a separate package equivalency mapping as a level of indirection. I don't think this would be very good either. Instead, if you are doing system-level stuff, you should just use a system-level or user-level packaging tool that can easily re-package Python packages such as conda, rpm, deb, etc. 
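The "package equivalency mapping" Daniel mentions in his last paragraph could be as small as a lookup table consulted by a repackaging tool. A minimal sketch; the project names, flavor keys, and table contents are all hypothetical:

```python
# Hypothetical table mapping a PyPI project name to the system-level
# package that provides it, per packaging flavor.
EQUIVALENTS = {
    "lxml": {"deb": "python-lxml", "rpm": "python-lxml", "conda": "lxml"},
    "psycopg2": {"deb": "python-psycopg2", "rpm": "python-psycopg2",
                 "conda": "psycopg2"},
}

def system_package(project, flavor):
    """Return the system package providing *project*, or None if unmapped."""
    return EQUIVALENTS.get(project, {}).get(flavor)

print(system_package("lxml", "deb"))     # python-lxml
print(system_package("numpy", "conda"))  # None: not in the table
```

As Daniel says, the hard part is not the lookup itself but who maintains the mapping and keeps it accurate across distributions, which is why he doesn't think it would be very good.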
From cournape at gmail.com Mon Apr 13 21:46:20 2015 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Apr 2015 15:46:20 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 12:56 PM, Chris Barker wrote: > NOTE: I don't work for any of the companies involved -- just a somewhat > frustrated user... And someone that has been trying for years to make > things easier for OS-X users. > > I?m not sure what (3) means exactly. What is a ?normal? Python, do you >>> modify Python in a way that breaks the ABI but which isn?t reflected in the >>> standard ABI tag? >>> >> >> It could be multiple things. The most obvious one is that generally. >> cross-platforms python distributions will try to be "relocatable" (i.e. the >> whole installation can be moved and still work). This means they require >> python itself to be built a special way. Strictly speaking, it is not an >> ABI issue, but the result is the same though: you can't use libraries from >> anaconda or canopy on top of a normal python >> > > But why not? -- at least for Anaconda, it's because those libraries likely > have non-python dependencies, which are expected to be installed in a > particular way. And really, this is not particular to Anaconda/Canopy at > all. Python itself has no answer for this issue, and eggs and wheels don't > help. Well, maybe kinda sorta they do, but in a clunky/ugly way: in order > to build a binary wheel with non-python dependencies (let's say something > like libjpeg, for instance), you need to either: > - assume that libjpeg is installed in a "standard" place -- really no > solution at all (at least outside of linux) > - statically link it > - ship the dynamic lib with the package > > For the most part, the accepted solution for OS-X has been to statically > link, but: > > - it's a pain to do. 
The gnu toolchain really likes to use dynamic > linking, and building a static lib that will run on a > maybe-older-than-the-build-system machine is pretty tricky. > > - now we end up with multiple copies of the same lib in the python > install. There are a handful of libs that are used a LOT. Maybe there is no > real downside -- disk space and memory are cheap these days, but it sure > feels ugly. And I have yet to feel comfortable with having multiple > versions of the same lib linked into one python instance -- I can't say > I've seen a problem, but it makes me nervous. > > On Windows, the choices are the same, except that: It is so much harder to > build many of the "standard" open source libs that package authors are more > likely to do it for folks, and you do get the occasional "dll hell" issues. > > I had a plan to make some binary wheels for OS-X that were not really > python packages, but actually just bundled up libs, so that other wheels > could depend on them. OS-X does allow linking to relative paths, so this > should have been doable, but I never got anyone else to agree this was a > good idea, and I never found the roundtoits anyway. And it doesn't really > fit into the PyPi, pip, wheel, etc. philosphy to have dependencies that are > platform dependent and even worse, build-dependent. > > Meanwhile, conda was chugging along and getting a lot of momentum in the > Scientific community. And the core thing here is that conda was designed > from the ground up to support essentially anything, This means is supports > python packages that depend on non-python packages, but also supports > packages that have nothing to do with python (Perl, command line tools, > what have you...) > > So I have been focusing on conda lately. > The whole reason I started this discussion is to make sure wheel has a standard way to do what is needed for those usecases. 
conda, rpm, deb, or eggs as used in enthought are all essentially the same: an archive with a bunch of metadata. The real issue is standardising on the exact formats. As you noticed, there is not much missing in the wheel *spec* to get most of what's needed. We've used eggs for that purpose for almost 10 years at Enthought, and we did not need that many extensions on top of the egg format after all. > Which brings me back to the question: should the python tools (i.e. wheel) > be extended to support more use-cases, specifically non-python > dependencies? Or do we just figure that that's a problem better solved by > projects with a larger scope (i.e. rpm, deb, conda, canopy). > IMO, given that wheels do most of what's needed, it is worth supporting most simple usecases (compiled libraries required by well known extensions). Right now, such packages (pyzmq, numpy, cryptography, lxml) resort to quite horrible custom hacks to support those cases. Hope that clarifies the intent, David > I'm on the fence here. I mostly care about Python, and I think we're > pretty darn close with allowing wheel to support the non-python > dependencies, which would allow us all to "simply pip install" pretty much > anything -- that would be cool. But maybe it's a bit of a slippery slope, > and if we go there, we'll end up re-writing conda. > > BTW, while you can't generally install a conda package in/for another > python, you can generally install a wheel in a conda python....There are a > few issues with pip/setuptools trying to resolve dependencies while not > knowing about conda packages, but it does mostly work. > > Not sure that helped the discussion -- but I've been wrestling with this > for a while, so thought I'd get my thoughts out there. > > > -Chris > > > > > > > > > > > > > > > > > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Apr 13 21:55:06 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 13 Apr 2015 15:55:06 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 3:46 PM, David Cournapeau wrote: > > > On Mon, Apr 13, 2015 at 12:56 PM, Chris Barker > wrote: >> >> NOTE: I don't work for any of the companies involved -- just a somewhat >> frustrated user... And someone that has been trying for years to make things >> easier for OS-X users. >> >>>> I?m not sure what (3) means exactly. What is a ?normal? Python, do you >>>> modify Python in a way that breaks the ABI but which isn?t reflected in the >>>> standard ABI tag? >>> >>> >>> It could be multiple things. The most obvious one is that generally. >>> cross-platforms python distributions will try to be "relocatable" (i.e. the >>> whole installation can be moved and still work). This means they require >>> python itself to be built a special way. Strictly speaking, it is not an ABI >>> issue, but the result is the same though: you can't use libraries from >>> anaconda or canopy on top of a normal python >> >> >> But why not? -- at least for Anaconda, it's because those libraries likely >> have non-python dependencies, which are expected to be installed in a >> particular way. And really, this is not particular to Anaconda/Canopy at >> all. Python itself has no answer for this issue, and eggs and wheels don't >> help. 
Well, maybe kinda sorta they do, but in a clunky/ugly way: in order to >> build a binary wheel with non-python dependencies (let's say something like >> libjpeg, for instance), you need to either: >> - assume that libjpeg is installed in a "standard" place -- really no >> solution at all (at least outside of linux) >> - statically link it >> - ship the dynamic lib with the package >> >> For the most part, the accepted solution for OS-X has been to statically >> link, but: >> >> - it's a pain to do. The gnu toolchain really likes to use dynamic >> linking, and building a static lib that will run on a >> maybe-older-than-the-build-system machine is pretty tricky. >> >> - now we end up with multiple copies of the same lib in the python >> install. There are a handful of libs that are used a LOT. Maybe there is no >> real downside -- disk space and memory are cheap these days, but it sure >> feels ugly. And I have yet to feel comfortable with having multiple versions >> of the same lib linked into one python instance -- I can't say I've seen a >> problem, but it makes me nervous. >> >> On Windows, the choices are the same, except that: It is so much harder to >> build many of the "standard" open source libs that package authors are more >> likely to do it for folks, and you do get the occasional "dll hell" issues. >> >> I had a plan to make some binary wheels for OS-X that were not really >> python packages, but actually just bundled up libs, so that other wheels >> could depend on them. OS-X does allow linking to relative paths, so this >> should have been doable, but I never got anyone else to agree this was a >> good idea, and I never found the roundtoits anyway. And it doesn't really >> fit into the PyPi, pip, wheel, etc. philosphy to have dependencies that are >> platform dependent and even worse, build-dependent. >> >> Meanwhile, conda was chugging along and getting a lot of momentum in the >> Scientific community. 
And the core thing here is that conda was designed >> from the ground up to support essentially anything, This means is supports >> python packages that depend on non-python packages, but also supports >> packages that have nothing to do with python (Perl, command line tools, what >> have you...) >> >> So I have been focusing on conda lately. > > > The whole reason I started this discussion is to make sure wheel has a > standard way to do what is needed for those usecases. > > conda, rpm, deb, or eggs as used in enthought are all essentially the same: > an archive with a bunch of metadata. The real issue is standardising on the > exact formats. As you noticed, there is not much missing in the wheel *spec* > to get most of what's needed. We've used eggs for that purpose for almost 10 > years at Enthought, and we did not need that many extensions on top of the > egg format after all. > > >> >> Which brings me back to the question: should the python tools (i.e. wheel) >> be extended to support more use-cases, specifically non-python dependencies? >> Or do we just figure that that's a problem better solved by projects with a >> larger scope (i.e. rpm, deb, conda, canopy). > > > IMO, given that wheels do most of what's needed, it is worth supporting most > simple usecases (compiled libraries required by well known extensions). > Right now, such packages (pyzmq, numpy, cryptography, lxml) resort to quite > horrible custom hacks to support those cases. > > Hope that clarifies the intent, > > David Then it sounds like I should read about the Enthought egg extensions. Is it something other than just defining a separate PyPI name for "just the libxml.so without the python bits"?
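David's point that conda, rpm, deb, eggs, and wheels are "essentially the same: an archive with a bunch of metadata" is easy to demonstrate with the wheel format, which is just a zip file containing the code plus a *.dist-info directory. A minimal sketch, using an invented project name "demo":

```python
# Wheels (like eggs, conda packages, rpm, or deb) are just an archive plus
# metadata. Build a toy wheel in memory with zipfile, then read its
# metadata back the way an installer would. The project "demo" is invented.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("demo/__init__.py", "")
    whl.writestr("demo-1.0.dist-info/METADATA",
                 "Metadata-Version: 2.1\nName: demo\nVersion: 1.0\n")
    whl.writestr("demo-1.0.dist-info/WHEEL",
                 "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")

with zipfile.ZipFile(buf) as whl:
    names = whl.namelist()
    metadata = whl.read("demo-1.0.dist-info/METADATA").decode()

print(names)
print(metadata.splitlines()[1])  # -> Name: demo
```

An installer only has to unzip the archive and read METADATA; everything the thread debates adding (non-Python dependencies, post-install scripts) would amount to more files or fields in the same archive.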
From cournape at gmail.com Mon Apr 13 22:19:32 2015 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Apr 2015 16:19:32 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: I would advise against using or even reading about our egg extensions, as the implementation is full of legacy (we've been doing this many years :) ): http://enstaller.readthedocs.org/en/master/reference/egg_format.html This is what we use on top of setuptools egg: - ability to add dependencies which are not python packages (I think most of it is already handled in metadata 2.0/PEP 426, but I would have to re-read the PEP carefully). - ability to run post/pre install/remove scripts - support for all of the autotools directories, with "sensible" mapping on windows - a few extensions to the actual binary format (adding support for symlinks is the only one I can think of ATM). Everything else is legacy you really don't want to know (see here if you still want to http://enstaller.readthedocs.org/en/master/reference/egg_format.html) David On Mon, Apr 13, 2015 at 3:55 PM, Daniel Holth wrote: > On Mon, Apr 13, 2015 at 3:46 PM, David Cournapeau > wrote: > > > > > > On Mon, Apr 13, 2015 at 12:56 PM, Chris Barker > > wrote: > >> > >> NOTE: I don't work for any of the companies involved -- just a somewhat > >> frustrated user... And someone that has been trying for years to make > things > >> easier for OS-X users. > >> > >>>> I'm not sure what (3) means exactly. What is a "normal" Python, do you > >>>> modify Python in a way that breaks the ABI but which isn't reflected > in the > >>>> standard ABI tag? > >>> > >>> > >>> It could be multiple things. The most obvious one is that generally. > >>> cross-platforms python distributions will try to be "relocatable" > (i.e. the > >>> whole installation can be moved and still work).
This means they > require > >>> python itself to be built a special way. Strictly speaking, it is not > an ABI > >>> issue, but the result is the same though: you can't use libraries from > >>> anaconda or canopy on top of a normal python > >> > >> > >> But why not? -- at least for Anaconda, it's because those libraries > likely > >> have non-python dependencies, which are expected to be installed in a > >> particular way. And really, this is not particular to Anaconda/Canopy at > >> all. Python itself has no answer for this issue, and eggs and wheels > don't > >> help. Well, maybe kinda sorta they do, but in a clunky/ugly way: in > order to > >> build a binary wheel with non-python dependencies (let's say something > like > >> libjpeg, for instance), you need to either: > >> - assume that libjpeg is installed in a "standard" place -- really no > >> solution at all (at least outside of linux) > >> - statically link it > >> - ship the dynamic lib with the package > >> > >> For the most part, the accepted solution for OS-X has been to statically > >> link, but: > >> > >> - it's a pain to do. The gnu toolchain really likes to use dynamic > >> linking, and building a static lib that will run on a > >> maybe-older-than-the-build-system machine is pretty tricky. > >> > >> - now we end up with multiple copies of the same lib in the python > >> install. There are a handful of libs that are used a LOT. Maybe there > is no > >> real downside -- disk space and memory are cheap these days, but it sure > >> feels ugly. And I have yet to feel comfortable with having multiple > versions > >> of the same lib linked into one python instance -- I can't say I've > seen a > >> problem, but it makes me nervous. > >> > >> On Windows, the choices are the same, except that: It is so much harder > to > >> build many of the "standard" open source libs that package authors are > more > >> likely to do it for folks, and you do get the occasional "dll hell" > issues. 
> >> > >> I had a plan to make some binary wheels for OS-X that were not really > >> python packages, but actually just bundled up libs, so that other wheels > >> could depend on them. OS-X does allow linking to relative paths, so this > >> should have been doable, but I never got anyone else to agree this was a > >> good idea, and I never found the roundtoits anyway. And it doesn't > really > >> fit into the PyPi, pip, wheel, etc. philosphy to have dependencies that > are > >> platform dependent and even worse, build-dependent. > >> > >> Meanwhile, conda was chugging along and getting a lot of momentum in the > >> Scientific community. And the core thing here is that conda was designed > >> from the ground up to support essentially anything, This means is > supports > >> python packages that depend on non-python packages, but also supports > >> packages that have nothing to do with python (Perl, command line tools, > what > >> have you...) > >> > >> So I have been focusing on conda lately. > > > > > > The whole reason I started this discussion is to make sure wheel has a > > standard way to do what is needed for those usecases. > > > > conda, rpm, deb, or eggs as used in enthought are all essentially the > same: > > an archive with a bunch of metadata. The real issue is standardising on > the > > exact formats. As you noticed, there is not much missing in the wheel > *spec* > > to get most of what's needed. We've used eggs for that purpose for > almost 10 > > years at Enthought, and we did not need that many extensions on top of > the > > egg format after all. > > > > > >> > >> Which brings me back to the question: should the python tools (i.e. > wheel) > >> be extended to support more use-cases, specifically non-python > dependencies? > >> Or do we just figure that that's a problem better solved by projects > with a > >> larger scope (i.e. rpm, deb, conda, canopy). 
> > > > > > IMO, given that wheels do most of what's needed, it is worth supporting > most > > simple usecases (compiled libraries required by well known extensions). > > Right now, such packages (pyzmq, numpy, cryptography, lxml) resort to > quite > > horrible custom hacks to support those cases. > > > > Hope that clarifies the intent, > > > > David > > Then it sounds like I should read about the Enthought egg extensions. > It's something else than just defining a separate pypi name for "just > the libxml.so without the python bits"? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Apr 13 23:12:50 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 13 Apr 2015 17:12:50 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: Seems like you could extend wheel to do that easily. On Apr 13, 2015 4:19 PM, "David Cournapeau" wrote: > I would advise against using or even reading about our egg extensions, as > the implementation is full of legacy (we've been doing this many years :) > ): http://enstaller.readthedocs.org/en/master/reference/egg_format.html > > This is what we use on top of setuptools egg: > > - ability to add dependencies which are not python packages (I think most > of it is already handled in metadata 2.0/PEP 426, but I would have to > re-read the PEP carefully). > - ability to run post/pre install/remove scripts > - support for all the of the autotools directories, with "sensible" > mapping on windows > - a few extensions to the actual binary format (adding support for > symlinks is the only one I can think of ATM). 
> > Everything else is legacy you really don't want to know (see here if you > still want to > http://enstaller.readthedocs.org/en/master/reference/egg_format.html) > > David > > On Mon, Apr 13, 2015 at 3:55 PM, Daniel Holth wrote: > >> On Mon, Apr 13, 2015 at 3:46 PM, David Cournapeau >> wrote: >> > >> > >> > On Mon, Apr 13, 2015 at 12:56 PM, Chris Barker >> > wrote: >> >> >> >> NOTE: I don't work for any of the companies involved -- just a somewhat >> >> frustrated user... And someone that has been trying for years to make >> things >> >> easier for OS-X users. >> >> >> >>>> I?m not sure what (3) means exactly. What is a ?normal? Python, do >> you >> >>>> modify Python in a way that breaks the ABI but which isn?t reflected >> in the >> >>>> standard ABI tag? >> >>> >> >>> >> >>> It could be multiple things. The most obvious one is that generally. >> >>> cross-platforms python distributions will try to be "relocatable" >> (i.e. the >> >>> whole installation can be moved and still work). This means they >> require >> >>> python itself to be built a special way. Strictly speaking, it is not >> an ABI >> >>> issue, but the result is the same though: you can't use libraries from >> >>> anaconda or canopy on top of a normal python >> >> >> >> >> >> But why not? -- at least for Anaconda, it's because those libraries >> likely >> >> have non-python dependencies, which are expected to be installed in a >> >> particular way. And really, this is not particular to Anaconda/Canopy >> at >> >> all. Python itself has no answer for this issue, and eggs and wheels >> don't >> >> help. 
Well, maybe kinda sorta they do, but in a clunky/ugly way: in >> order to >> >> build a binary wheel with non-python dependencies (let's say something >> like >> >> libjpeg, for instance), you need to either: >> >> - assume that libjpeg is installed in a "standard" place -- really no >> >> solution at all (at least outside of linux) >> >> - statically link it >> >> - ship the dynamic lib with the package >> >> >> >> For the most part, the accepted solution for OS-X has been to >> statically >> >> link, but: >> >> >> >> - it's a pain to do. The gnu toolchain really likes to use dynamic >> >> linking, and building a static lib that will run on a >> >> maybe-older-than-the-build-system machine is pretty tricky. >> >> >> >> - now we end up with multiple copies of the same lib in the python >> >> install. There are a handful of libs that are used a LOT. Maybe there >> is no >> >> real downside -- disk space and memory are cheap these days, but it >> sure >> >> feels ugly. And I have yet to feel comfortable with having multiple >> versions >> >> of the same lib linked into one python instance -- I can't say I've >> seen a >> >> problem, but it makes me nervous. >> >> >> >> On Windows, the choices are the same, except that: It is so much >> harder to >> >> build many of the "standard" open source libs that package authors are >> more >> >> likely to do it for folks, and you do get the occasional "dll hell" >> issues. >> >> >> >> I had a plan to make some binary wheels for OS-X that were not really >> >> python packages, but actually just bundled up libs, so that other >> wheels >> >> could depend on them. OS-X does allow linking to relative paths, so >> this >> >> should have been doable, but I never got anyone else to agree this was >> a >> >> good idea, and I never found the roundtoits anyway. And it doesn't >> really >> >> fit into the PyPi, pip, wheel, etc. philosphy to have dependencies >> that are >> >> platform dependent and even worse, build-dependent. 
>> >> >> >> Meanwhile, conda was chugging along and getting a lot of momentum in >> the >> >> Scientific community. And the core thing here is that conda was >> designed >> >> from the ground up to support essentially anything, This means is >> supports >> >> python packages that depend on non-python packages, but also supports >> >> packages that have nothing to do with python (Perl, command line >> tools, what >> >> have you...) >> >> >> >> So I have been focusing on conda lately. >> > >> > >> > The whole reason I started this discussion is to make sure wheel has a >> > standard way to do what is needed for those usecases. >> > >> > conda, rpm, deb, or eggs as used in enthought are all essentially the >> same: >> > an archive with a bunch of metadata. The real issue is standardising on >> the >> > exact formats. As you noticed, there is not much missing in the wheel >> *spec* >> > to get most of what's needed. We've used eggs for that purpose for >> almost 10 >> > years at Enthought, and we did not need that many extensions on top of >> the >> > egg format after all. >> > >> > >> >> >> >> Which brings me back to the question: should the python tools (i.e. >> wheel) >> >> be extended to support more use-cases, specifically non-python >> dependencies? >> >> Or do we just figure that that's a problem better solved by projects >> with a >> >> larger scope (i.e. rpm, deb, conda, canopy). >> > >> > >> > IMO, given that wheels do most of what's needed, it is worth supporting >> most >> > simple usecases (compiled libraries required by well known extensions). >> > Right now, such packages (pyzmq, numpy, cryptography, lxml) resort to >> quite >> > horrible custom hacks to support those cases. >> > >> > Hope that clarifies the intent, >> > >> > David >> >> Then it sounds like I should read about the Enthought egg extensions. >> It's something else than just defining a separate pypi name for "just >> the libxml.so without the python bits"? 
>> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Apr 13 23:25:38 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 13 Apr 2015 14:25:38 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 1:19 PM, David Cournapeau wrote: > This is what we use on top of setuptools egg: > > - ability to add dependencies which are not python packages (I think most > of it is already handled in metadata 2.0/PEP 426, but I would have to > re-read the PEP carefully). > - ability to run post/pre install/remove scripts > - support for all the of the autotools directories, with "sensible" > mapping on windows > Are these inside or outside the python installation? I'm more than a bit wary of a wheel that would install stuff outside of the "sandbox" of the python install. The whole reason I started this discussion is to make sure wheel has a > standard way to do what is needed for those usecases. > > conda, rpm, deb, or eggs as used in enthought are all essentially the > same: an archive with a bunch of metadata. The real issue is standardising > on the exact formats. As you noticed, there is not much missing in the > wheel *spec* to get most of what's needed. hmm -- true. I guess where it seems to get more complicated is beyond the wheel (or conda, or...) package itself, to the dependency management, installation tools, etc.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Apr 13 23:35:03 2015 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Apr 2015 17:35:03 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 5:25 PM, Chris Barker wrote: > On Mon, Apr 13, 2015 at 1:19 PM, David Cournapeau > wrote: > >> This is what we use on top of setuptools egg: >> >> - ability to add dependencies which are not python packages (I think >> most of it is already handled in metadata 2.0/PEP 426, but I would have to >> re-read the PEP carefully). >> - ability to run post/pre install/remove scripts >> - support for all the of the autotools directories, with "sensible" >> mapping on windows >> > > Are these inside or outside the python installation? I'm more than a bit > wary of a wheel that would install stuff outside of the "sandbox" of the > python install. > I would always install things relative to sys.prefix, for exactly the reasons you mention. > > > The whole reason I started this discussion is to make sure wheel has a >> standard way to do what is needed for those usecases. >> >> conda, rpm, deb, or eggs as used in enthought are all essentially the >> same: an archive with a bunch of metadata. The real issue is standardising >> on the exact formats. As you noticed, there is not much missing in the >> wheel *spec* to get most of what's needed. > > > hmm -- true. I guess where it seems to get more complicated is beyond the > wheel (or conda, or...) package itself, to the dependency management, > installation tools, etc. 
> > But perhaps you are suggesting that we can extend wheel to support a bt > more stuff, and leave the rest of the system as separate problem? i.e. > Canopy can have it's own find, install, manage-dependency tool, but that it > can use the wheel format for the packages themselves? > Exactly! David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Tue Apr 14 02:57:20 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 14 Apr 2015 10:57:20 +1000 Subject: [Distutils] pip/warehouse feature idea: "help needed" References: <55293580.1060005@sdamon.com> Message-ID: <85twwjv85r.fsf@benfinney.id.au> Nick Coghlan writes: > On 11 Apr 2015 12:22, "Alexander Walters" wrote: > > Is the package index really the best place to put this? This is a > > very social-networking feature for the authoritative repository of > > just about all the third party module, and it feels like either it > > could corrupt the 'sanctity' of the repository (in the absolute > > worst case) > > If you're concerned that this feature might weaken the comforting > illusion that PyPI published software is contributed and maintained by > faceless automatons rather than living, breathing human beings, then > yes, encouraging folks to think more about where the software they use > is coming from would be a large part of the point of adding such a > feature. I can't speak for Alexander, but I'm also -1 to have this *on PyPI*. I'm all for such features existing. What is at issue is whether PyPI is the place to put them. We have been gradually improving the function of PyPI as an authoritative *index* of packages; that's possible because it is a repository of uncontroversial facts, not opinions (i.e. "what is the packaging metadata of this distribution?", "where is its documentation?", "where is its VCS?", etc.). > > I am not saying the PSF shouldn't do this, but is pypi REALLY the > > best part of python.org to put it?
> > I personally believe so, yes - sustaining software over the long term is > expensive in people's time, but it's often something we take for granted. > The specific example Guido brought up in his keynote was the challenge of > communicating a project's openness to Python 3 porting assistance. The people doing the work of maintaining PyPI have said many times in recent years that there just isn't enough person-power to add a whole bunch of features that have been requested. Why would we think moderating a social-networking rating, opinion, discussion, or other non-factual database is something reasonable to ask of the PyPI maintainers? Conversely, if we are under the impression that adding ratings, feedback, reviews, discussion, and other features to PyPI is *not* going to be a massive increase in workload for the maintainers, I think that's a foolish delusion which will be quite costly to the reputation PyPI has recently gained through hard effort to clarify its role. By all means, set up a well-maintained social ecosystem around Python packages. But not on PyPI itself: The Python Package Index is feasible in part because it has a clear and simple job, though, and that's not it. -- \ "If you can't hear me sometimes, it's because I'm in | `\ parentheses." --Steven Wright | _o__) | Ben Finney From donald at stufft.io Tue Apr 14 04:13:21 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 13 Apr 2015 22:13:21 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: <85twwjv85r.fsf@benfinney.id.au> References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: > On Apr 13, 2015, at 8:57 PM, Ben Finney wrote: > > Nick Coghlan writes: > >> On 11 Apr 2015 12:22, "Alexander Walters" wrote: >>> Is the package index really the best place to put this?
This is a >>> very social-networking feature for the authoritative repository of >>> just about all the third party module, and it feels like either it >>> could corrupt the 'sanctity' of the repository (in the absolute >>> worst case) >> >> If you're concerned that this feature might weaken the comforting >> illusion that PyPI published software is contributed and maintained by >> faceless automatons rather than living, breathing human beings, then >> yes, encouraging folks to think more about where the software they use >> is coming from would be a large part of the point of adding such a >> feature. > > I can't speak for Alexander, but I'm also ?1 to have this *on PyPI*. > > I'm all for such features existing. What is at issue is whether PyPI is > the place to put them. > > We have been gradually improving the function of PyPI as an > authoritative *index* of packages; that's possible because it is a > repository of uncontroversial facts, not opinions (i.e. ?what is the > packaging metadata of this distribution?, ?where is its documentation?, > ?where is its VCS?, etc.). > >>> I am not saying the PSF shouldn't do this, but is pypi REALLY the >>> best part of python.org to put it? >> >> I personally believe so, yes - sustaining software over the long term is >> expensive in people's time, but it's often something we take for granted. >> The specific example Guido brought up in his keynote was the challenge of >> communicating a project's openness to Python 3 porting assistance. > > The people doing the work of maintaining PyPI have said many times in > recent years that there just isn't enough person-power to add a whole > bunch of features that have been requested. Why would we think > moderating a social-networking rating, opinion, discussion, or other > non-factual database is something reasonable to ask of the PyPI > maintainers? 
> > Conversely, if we are under the impression that adding ratings, > feedback, reviews, discussion, and other features to PyPI is *not* going > to be a massive increase in workload for the maintainers, I think that's > a foolish delusion which will be quite costly to the reputation PyPI has > recently gained through hard effort to clarify its role. > > By all means, set up a well-maintained social ecosystem around Python > packages. But not on PyPI itself: The Python Package Index is feasible > in part because it has a clear and simple job, though, and that's not > it. > > -- > \ "If you can't hear me sometimes, it's because I'm in | > `\ parentheses." --Steven Wright | > _o__) | > Ben Finney > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig I don't see any problem with the general idea of adding features to PyPI to enable package maintainers to find more help maintaining specific parts of their projects. I do have a problem with expecting the PyPI administrators to fill out or otherwise populate this information. Saying "Here's a place you can donate to me" is still a fact, it's just a more social fact than what we currently enable. I'm kind of down on the idea of linking to CVs or LinkedIn as part of the project metadata because that's not project specific and is really more maintainer specific. I think that particular feature would be better suited to some sort of global "Python profile" that could then be linked to from PyPI instead of trying to bake it into PyPI itself. However things like "Looking for New Maintainers / Orphan a Project", or some calls to action like "here are some issues that need fixing" or other things don't seem unreasonable to me. Particularly the ability to orphan a project or look for new maintainers seems like a useful thing to me that really can't live anywhere other than PyPI reasonably. 
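As a rough sketch of how small that "Looking for New Maintainers" feature could start, the index could key a banner off a single boolean on the project record. The field name here (`seeking_new_maintainers`) is purely hypothetical; no such flag exists in PyPI today:

```python
def project_banner(record):
    """Return banner text for a project page, or None if no banner applies."""
    # Hypothetical field on the project record, not a real PyPI attribute.
    if record.get("seeking_new_maintainers"):
        return "This project is looking for new maintainers."
    return None

record = {"name": "example-project", "seeking_new_maintainers": True}
print(project_banner(record))  # prints the banner text
```

The point of a structured flag rather than free text in the long_description is that tooling (and the index itself) can decide how and where to render it.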
The other items can live elsewhere if we wanted them to since they would be easy to add to the long_description of a project which would get added to the PyPI page but that has some drawbacks. For things like crowdfunding campaigns the long_description is set when you upload a release; however, it'd be useful to have the campaigns update as the campaign progresses (or even ultimately be removed once the campaign has finished). I think an important part of this idea here is that this doesn't enable anything that authors can't already do, it just presents the information in a way that is easier for other tooling to take advantage of it as well as allow us to make more informed decisions about how to show it and when to show it without requiring authors to update the long_description of their projects. I think it will also be a strong signal that it's OK for projects to ask for help (whether of the manpower or monetary kind) and will also help lead more projects to be more sustainable for the long term. As far as manpower goes, part of that problem is that adding *anything* to the current PyPI code base is a massive headache because there are zero tests and the code base itself is horribly factored and a pain in the ass to work with. However what we're doing now is rewriting PyPI using a modern framework with modern practices (test coverage, CSS frameworks, etc). When this gets completed adding new features will be easier, especially for people who don't regularly work on PyPI itself. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Tue Apr 14 04:17:45 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 14 Apr 2015 14:17:45 +1200 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 14 April 2015 at 09:35, David Cournapeau wrote: ... One of the earlier things mentioned here - {pre,post}{install,remove} scripts - raises a red flag for me. In Debian at least, the underlying system has the ability to run such Turing-complete scripts, and they are a rich source of bugs - both correctness and performance related. Nowadays nearly all such scripts are machine generated from higher level representations such as 'this should be the default command' or 'configure X if Y is installed', but because the plumbing is Turing-complete, they all need to be executed, which slows down install/upgrade paths, and any improvement to the tooling requires a version bump on *all* the packages using it - because effectively the package is itself a compiled artifact. I'd really prefer it if we keep wheels 100% declarative, and instead focus on defining appropriate metadata for the things you need to accomplish {pre,post}{install,remove} of a package. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Tue Apr 14 04:29:22 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 13 Apr 2015 22:29:22 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: > On Apr 13, 2015, at 10:17 PM, Robert Collins wrote: > > On 14 April 2015 at 09:35, David Cournapeau wrote: > ... > > One of the earlier things mentioned here - {pre,post}{install,remove} > scripts - raises a red flag for me. 
> > In Debian at least, the underlying system has the ability to run such > turing complete scripts, and they are a rich source of bugs - both > correctness and performance related. > > Nowadays nearly all such scripts are machine generated from higher > level representations such as 'this should be the default command' or > 'configure X if Y is installed', but because the plumbing is turing > complete, they all need to be executed, which slows down > install/upgrade paths, and any improvement to the tooling requires a > version bump on *all* the packages using it - because effectively the > package is itself a compiled artifact. > > I'd really prefer it if we keep wheels 100% declarative, and instead > focus on defining appropriate metadata for the things you need to > accomplish {pre,post}{install,remove} of a package. A possible way to implement {pre,post}{install,remove} scripts is to instead turn them into extensions. One example is that Twisted uses a setup.py hack to regenerate a cache file of all of the registered plugins. This needs to happen at install time due to permission issues. Currently you can't get this speedup when installing something that uses Twisted plugins via Wheel. So a possible way for this to work is in a PEP 426 world, simply define a twisted.plugins extension that says, in a declarative way, "hey when you install this Wheel, if there's a plugin that understands this extension installed, let it do something before you actually move the files into place". This lets Wheels themselves still be declarative and moves the responsibility of implementing these bits into their own PyPI projects that can be versioned and independently upgraded and such. We'd probably need some method of marking an extension as "critical" (e.g. bail out and don't install this Wheel if you don't have something that knows how to handle it) and then non-critical extensions just get ignored if we don't know how to handle it. 
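A minimal sketch of how such a declarative extension and its "critical" marking might look. Every key and function name here is hypothetical, invented for illustration; PEP 426 was still a draft at this point and no shipped tool uses these names:

```python
# Hypothetical wheel metadata: install-time behaviour is declared as an
# extension payload instead of shipping an executable script.
metadata = {
    "name": "some-twisted-plugin",
    "version": "1.0",
    "extensions": {
        "twisted.plugins": {
            "critical": True,          # bail out if nothing can handle this
            "regenerate_cache": True,  # declarative payload for the handler
        },
        "nice.to.have": {"critical": False},  # safe to ignore if unknown
    },
}

def unhandled_critical(meta, handlers):
    """Names of critical extensions that no installed handler understands."""
    return [
        name
        for name, payload in meta.get("extensions", {}).items()
        if payload.get("critical") and name not in handlers
    ]

# An installer with a twisted.plugins handler registered can proceed;
# unknown non-critical extensions are simply ignored.
handlers = {"twisted.plugins": lambda payload: None}
missing = unhandled_critical(metadata, handlers)
print("ok to install" if not missing else "refusing: %s" % missing)
# prints "ok to install"
```

The wheel itself stays pure data; all Turing-complete behaviour lives in the separately versioned handler project.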
Popular extensions could possibly be added directly to pip at some point if a lot of people are using them (or even moved from third party extension to officially supported extension). --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From cournape at gmail.com Tue Apr 14 07:37:10 2015 From: cournape at gmail.com (David Cournapeau) Date: Tue, 14 Apr 2015 01:37:10 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Mon, Apr 13, 2015 at 10:17 PM, Robert Collins wrote: > On 14 April 2015 at 09:35, David Cournapeau wrote: > ... > > One of the earlier things mentioned here - {pre,post}{install,remove} > scripts - raises a red flag for me. That's indeed a good instinct a priori. I myself removed a lot of those scripts because of the fragility. Anything that needs to run on an end-user machine can fail, and writing idempotent scripts is hard. Unfortunately, a purely declarative approach does not really cut it if you want cross-platform support. Sure, you may be able to deal with menu entries, environment variables, etc... in a cross-platform manner with a significant effort, but what about COM registration? pywin32 is one of the most used packages in the Python ecosystem, and its post-install script is not trivial. Another difficult case is when a package needs some specific configuration to run at all, and that configuration requires values known at install time only (e.g. sys.prefix, as in the iris package). > I'd really prefer it if we keep wheels 100% declarative, and instead > focus on defining appropriate metadata for the things you need to accomplish {pre,post}{install,remove} of a package. 
What about a way for wheels to specify whether their {pre,post}{install,remove} scripts are declarative or not, with support for the most common tasks, plus an escape hatch that is explicitly opt-in? This way it could be a matter of policy to refuse packages that require non-declarative scripts. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Apr 14 08:44:16 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 14 Apr 2015 07:44:16 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 14 April 2015 at 06:37, David Cournapeau wrote: > pywin32 is one of the most used package in the python ecosystem, and its > post install script is not trivial. And yet pywin32's postinstall script is completely virtualenv-hostile. It registers start menu entries (not applicable when installing in a virtualenv), registers itself as a COM server (once again, one per machine), adds registry entries (again, virtualenv-hostile), moves installed files into the Windows system directory (ditto) etc. And yet for many actual uses of pywin32, installing as a wheel without running the postinstall is sufficient. With the exception of writing COM servers in Python (and maybe writing services, but I think cx_Freeze lets you do that without pywin32), pretty much every use *I* have seen of pywin32 can be replaced with ctypes or cffi with no loss of functionality. I'd argue that pywin32 is a perfect example of a project where *not* supporting postinstall scripts would be a good idea, as it would encourage the project to find a way to implement the same functionality in a way that's compatible with current practices (virtualenv, tox, etc). 
Or it would encourage other projects to stop depending on pywin32 (which is actually what is happening, many projects now use ctypes and similar in place of pywin32-using code, to avoid the problems pywin32 causes for them). Paul From robertc at robertcollins.net Tue Apr 14 10:04:19 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 14 Apr 2015 20:04:19 +1200 Subject: [Distutils] version for VCS/local builds between alpha and release Message-ID: Tl;dr: I would like to change PEP-440 to remove the stigma around dev versions that come between pre-releases and releases. The basic scenario here is developers and CD deployers building versions from VCS of arbitrary commits. So we need to be able to deliver strictly increasing version numbers, automatically, without interfering with actual publishing of pre-release and release versions to PyPI. Today, there is a fairly sane sort order and mechanism to deliver this: >>> v = ['1.0', '1.2.3.a1.dev1', '1.2.3.a1', '1.2.3.a2.dev1', '1.2.3.b1', '1.2.3.b2.dev1', '1.2.3.rc1', '1.2.3.rc2.dev1', '1.2.3', '1.2.3.dev1'] >>> sorted(v, key=packaging.version.parse) ['1.0', '1.2.3.dev1', '1.2.3.a1.dev1', '1.2.3.a1', '1.2.3.a2.dev1', '1.2.3.b1', '1.2.3.b2.dev1', '1.2.3.rc1', '1.2.3.rc2.dev1', '1.2.3'] But it doesn't feel good using a 'strongly recommended against' order. I think the recommendation is overly strong: If we touch up the language we can make the spec clearer, and a simple example like above will speak a thousand words :). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From j.j.molenaar at gmail.com Tue Apr 14 13:31:41 2015 From: j.j.molenaar at gmail.com (Joost Molenaar) Date: Tue, 14 Apr 2015 13:31:41 +0200 Subject: [Distutils] version for VCS/local builds between alpha and release In-Reply-To: References: Message-ID: On 14 April 2015 at 10:04, Robert Collins wrote: > The basic scenario here is developers and CD deployers building > versions from VCS of arbitrary commits. 
So we need to be able to > deliver strictly increasing version numbers, automatically, without > interfering with actual publishing of pre-release and release versions > to PyPI. I think the advice in PEP440 about using dev tags[1] is a little misguided, because dev tags count towards a known version in the future, while DVCS tags (at least in Git) count the number of commits since a known version in the past. In this respect, 'git describe' most closely resembles the post-release tags in PEP440, so that's what I've chosen to use in my build scripts, in spite of the recommendations in PEP440. [1] https://www.python.org/dev/peps/pep-0440/#dvcs-based-version-labels -- Joost Molenaar From cournape at gmail.com Tue Apr 14 14:56:59 2015 From: cournape at gmail.com (David Cournapeau) Date: Tue, 14 Apr 2015 08:56:59 -0400 Subject: [Distutils] Beyond wheel 1.0: more fine-grained installation scheme Message-ID: Hi, I am splitting up the previous thread into one thread / proposal to focus the discussion. Assuming the basis of this proposal does not sound too horrible, I would make a proof of concept in a pip branch, so that we can flesh out the details and then write an actual spec (I guess an updated wheel format would need a new PEP?). The goal of this thread is to flesh out a more fine-grained installation scheme, so that wheels can install files anywhere they want (at least within sys.prefix/sys.exec_prefix). I see two issues: 1. defining what the scheme should be 2. how should it be implemented in wheel: there are different trade-offs depending on whether we want this feature to be part of wheel format 1.* or 2.0. First, my understanding of the current situation: * per the wheel PEP 427, anything in the wheel top directory and not in distribution-1.0.data is installed in site-packages * every top directory in distribution-1.0.data/ needs to be mapped to the scheme as defined in distutils install command. 
* pip rejects any directory in distribution-1.0.data/ which is not in the scheme from the previous point. My suggestion for a better scheme would be to use an extended version of the various default directories defined by autotools. The extension would handle Windows specifics. More concretely: # Suggested variables The goal of supporting those variables is to take something that is flexible enough to support almost any installation scheme, without putting additional burden on the developer. People who do not want/need the flexibility will not need to do anything more than what they do today. The variables I would suggest are every variable defined in https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py#L10, except for destdir which is not relevant here. On Unix, the defaults follow autotools, and on Windows, I mapped almost everything relative to sys.exec_prefix, except for the bindir/sbindir/libexecdir which map to "$prefix\Scripts". The $sitedir variable would need to be updated to use the value from distutils instead of the hardcoded value I put in that file as well. # How to handle the augmented scheme Right now, if I wanted to install something in say $prefix/share/doc, I would need to put it in distribution-1.0.data/data/share/doc, but this prevents us from handling different platforms differently. OTOH, this is the only way I can see to make the new scheme backward compatible with pip versions that would not understand the new scheme. I don't have a good sense of what we should do there, the core pip team may have a better sense. For now, I would be happy to just make a proof of concept not caring about backward compatibility in a pip branch. Does that sound like a workable basis to flesh out an actual proposal? thanks, David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Tue Apr 14 15:27:46 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 14 Apr 2015 09:27:46 -0400 Subject: [Distutils] Beyond wheel 1.0: more fine-grained installation scheme In-Reply-To: References: Message-ID: That's exactly what I would like to do. Then distribution-1.0.data/sysconfdir/file in a wheel would install into /etc/file in the default scheme, but would probably really wind up in $VIRTUAL_ENV/etc/... for most of us web developers. IIRC extra package-1.0-data/* directories in wheel are undefined. I would have no problem putting fine-grained install schemes in 2.0 and putting some of the other "wheel 2.0" features into wheel 3. Incrementing the major version number would cause older pip to reject the newer wheels, incrementing the minor version would produce a warning. On Tue, Apr 14, 2015 at 8:56 AM, David Cournapeau wrote: > Hi, > > I am splitting up the previous thread into one thread / proposal to focus > the discussion. > > Assuming the basis of this proposal does not sound too horrible, I would > make a proof of concept in a pip branch, so that we can flush out the > details and then write an actual spec (I guess an updated wheel format would > need a new PEP ?). > > The goal of this thread is to flush out a more fine-grained installation > scheme, so that wheels can install files anywhere they want (at least within > sys.prefix/sys.exec_prefix). I see two issues: > > 1. defining what the scheme should be > 2. how should it be implemented in wheel: there are different trade-offs > depending on whether we want this feature to be part of wheel format 1.* or > 2.0. > > First, my understanding of the current situation: > > * per the wheel PEP 427, anything in the wheel top directory and not in > distribution-1.0.data is installed in site-package > * every top directory in distribution-1.0.data/ needs to be mapped to the > scheme as defined in distutils install command. 
> * pip rejects any directory in distribution-1.0.data/ which is not in the > scheme from 2. > > My suggestion for a better scheme would be to use an extended version of the > various default directories defined by autotools. The extension would handle > windows-specifics. More concretely: > > # Suggested variables > > The goal of supporting those variables is to take something that is flexible > enough to support almost any installation scheme, without putting additional > burden on the developer. People who do not want/need the flexibility will > not need to do anything more than what they do today. > > The variables I would suggest are every variable defined in > https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py#L10, > except for destdir which is not relevant here. > > On unix, the defaults follow autotools, and on windows, I mapped almost > everything relative to sys.exec_prefix, except for the > bindir/sbindir/libexecdir which map to "$prefix\Scripts". > > The $sitedir variable would need to be updated to use the value from > distutils instead of the hardcoded value I put in that file as well. > > # How to handle the augmented scheme > > Right now, if I wanted to install something in say $prefix/share/doc, I > would need to put it in distribution-1.0.data/data/share/doc, but this > prevents use from handling different platforms differently. > > OTOH, this is the only way I can see to make the new scheme backward > compatible with pip versions who would not understand the new scheme. I > don't have a good sense of what we should do there, the core pip team may > have a better sense. > > For now, I would be happy to just make a proof of concept not caring about > backward compatibility in a pip branch. Does that sound like a workable > basis to flush out an actual proposal ? 
> > thanks, > David > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From brett at python.org Tue Apr 14 17:16:18 2015 From: brett at python.org (Brett Cannon) Date: Tue, 14 Apr 2015 15:16:18 +0000 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On Mon, Apr 13, 2015 at 10:13 PM Donald Stufft wrote: > > > On Apr 13, 2015, at 8:57 PM, Ben Finney > wrote: > > > > Nick Coghlan writes: > > > >> On 11 Apr 2015 12:22, "Alexander Walters" > wrote: > >>> Is the package index really the best place to put this? This is a > >>> very social-networking feature for the authoritative repository of > >>> just about all the third party module, and it feels like either it > >>> could corrupt the 'sanctity' of the repository (in the absolute > >>> worst case) > >> > >> If you're concerned that this feature might weaken the comforting > >> illusion that PyPI published software is contributed and maintained by > >> faceless automatons rather than living, breathing human beings, then > >> yes, encouraging folks to think more about where the software they use > >> is coming from would be a large part of the point of adding such a > >> feature. > > > > I can't speak for Alexander, but I'm also -1 to have this *on PyPI*. > > > > I'm all for such features existing. What is at issue is whether PyPI is > > the place to put them. > > > > We have been gradually improving the function of PyPI as an > > authoritative *index* of packages; that's possible because it is a > > repository of uncontroversial facts, not opinions (i.e. "what is the > > packaging metadata of this distribution?", "where is its documentation?", > > "where is its VCS?", etc.). > > > >>> I am not saying the PSF shouldn't do this, but is pypi REALLY the > >>> best part of python.org to put it? 
> >> > >> I personally believe so, yes - sustaining software over the long term is > >> expensive in people's time, but it's often something we take for > granted. > >> The specific example Guido brought up in his keynote was the challenge > of > >> communicating a project's openness to Python 3 porting assistance. > > > > The people doing the work of maintaining PyPI have said many times in > > recent years that there just isn't enough person-power to add a whole > > bunch of features that have been requested. Why would we think > > moderating a social-networking rating, opinion, discussion, or other > > non-factual database is something reasonable to ask of the PyPI > > maintainers? > > > > Conversely, if we are under the impression that adding ratings, > > feedback, reviews, discussion, and other features to PyPI is *not* going > > to be a massive increase in workload for the maintainers, I think that's > > a foolish delusion which will be quite costly to the reputation PyPI has > > recently gained through hard effort to clarify its role. > > > > By all means, set up a well-maintained social ecosystem around Python > > packages. But not on PyPI itself: The Python Package Index is feasible > > in part because it has a clear and simple job, though, and that's not > > it. > > > > -- > > \ "If you can't hear me sometimes, it's because I'm in | > > `\ parentheses." --Steven Wright | > > _o__) | > > Ben Finney > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > I don't see any problem with the general idea of adding features to PyPI to > enable package maintainers to find more help maintaining specific parts of > their projects. I do have a problem with expecting the PyPI administrators > to fill out or otherwise populate this information. Saying "Here's a place > you can donate to me" 
is still a fact, it's just a more social fact than > what we currently enable. > > I'm kind of down on the idea of linking to CVs or linkedin as part of the > project metadata because that's not project specific and is really more > maintainer specific. I think that particular feature would be better suited > to some sort of global "Python profile" that could then be linked to from > PyPI instead of trying to bake it into PyPI itself. > > However things like "Looking for New Maintainers / Orphan a Project", > or some call to actions on "here are some issues that need fixed" or other > things doesn't seem unreasonable to me. Particularly the ability to orphan > a project or look for new maintainers seems like a useful thing to me that > really can't live anywhere other than PyPI reasonably. > I agree. Even something as simple as a boolean that triggers a banner saying "this project is looking for a new maintainer" would be useful both from the perspective of project owners who want to move on and from the perspective of users who can't tell if a project is maintained based on how long it has been since a project uploaded a new version (which is why I think someone suggested sending an annual email asking for a human action to say "alive and kicking" to help determine if a project is completely abandoned). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Apr 14 17:21:35 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Apr 2015 11:21:35 -0400 Subject: [Distutils] Make PEP 426 less boring In-Reply-To: <552B6126.1020000@thomas-guettler.de> References: <552B6126.1020000@thomas-guettler.de> Message-ID: On 13 April 2015 at 02:24, Thomas Güttler wrote: > Hi, > > somehow I feel bored if I read PEP 426. 
> https://www.python.org/dev/peps/pep-0426/ If anyone didn't find the complexities of real world software distribution tedious, frustrating and often mind-numbingly dull, I'd assume they weren't paying attention :) > One concrete improvement would be to remove this paragraph: > > {{{ > The design draws on the Python community's 15 years of experience with > distutils based software distribution, and incorporates ideas and concepts > from other distribution systems, including Python's setuptools, pip and > other projects, Ruby's gems, Perl's CPAN, Node.js's npm, PHP's composer and > Linux packaging systems such as RPM and APT. > }}} > > Because something like this was already said some lines before > > {{{ > Metadata 2.0 represents a major upgrade to the Python packaging ecosystem, > and attempts to incorporate experience gained over the 15 years(!) since > distutils was first added to the standard library. Some of that is just > incorporating existing practices from setuptools/pip/etc, some of it is > copying from other distribution systems (like Linux distros or other > development language communities) and some of it is attempting to solve > problems which haven't yet been well solved by anyone (like supporting clean > conversion of Python source packages to distro policy compliant source > packages for at least Debian and Fedora, and perhaps other platform specific > distribution systems). > }}} > > **And** I would move the historic background (the second of the above > quotes) at the end. > > Meta: are you interested in feedback like this? Not so much - I haven't done a serious editing pass myself, as I'm still mostly interested in engaging folks that are already invested in the packaging tools ecosystem, rather than making it readily accessible to newcomers. 
Once we're happy we know what good looks like, then it will be part of the role of packaging.python.org to provide the "just the facts" introduction that lowers the barriers to entry, leaving the raw spec to the folks that are inclined to spend their time reading RFCs and other specs :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Apr 14 17:41:53 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Apr 2015 11:41:53 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 13 April 2015 at 12:56, Chris Barker wrote: > Which brings me back to the question: should the python tools (i.e. wheel) > be extended to support more use-cases, specifically non-python dependencies? > Or do we just figure that that's a problem better solved by projects with a > larger scope (i.e. rpm, deb, conda, canopy). > > I'm on the fence here. I mostly care about Python, and I think we're pretty > darn close with allowing wheel to support the non-python dependencies, which > would allow us all to "simply pip install" pretty much anything -- that > would be cool. But maybe it's a bit of a slippery slope, and if we go there, > we'll end up re-writing conda. The two main language-independent solutions I've identified for this general "user level package management" problem in the Fedora Environments & Stacks context (https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement) are conda (http://conda.pydata.org/) and Nix (https://nixos.org/nix/about.html), backed up by Pulp for plugin-based format-independent repository management (http://www.pulpproject.org/). 
From a Python upstream perspective, Nix falls a long way behind conda due to the fact that Nix currently relies on Cygwin for Windows support - it's interesting to me for Fedora because Nix ticks a lot of boxes from a system administrator perspective that conda doesn't (in particular, system administrators can more easily track what users have installed, and ensure that packages are updated appropriately in the face of security updates in dependencies). I definitely see value in Python upstream formats being able to bundle additional files like config files, desktop integration files, service definition files, statically linked extension modules, etc, in a way that not only supports direct installation onto end user machines, but also conversion into platform specific formats (whether that platform is an operating system, or a cross-platform platform like nix, canopy or conda). The point where I draw the line is supporting *dynamic* linking between modules - that's the capability I view as defining the boundary between "enabling an addon ecosystem for a programming language runtime" and "providing a comprehensive software development platform" :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Apr 14 18:06:47 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Apr 2015 12:06:47 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 13 April 2015 at 22:29, Donald Stufft wrote: > So a possible way for this to work is in a PEP 426 world, simply define > a twisted.plugins extension that says, in a declarative way, "hey when > you install this Wheel, if there's a plugin that understands this extension > installed, let it do something before you actually move the files into > place". 
This lets Wheels themselves still be declarative and moves the > responsibility of implementing these bits into their own PyPI projects > that can be versioned and independently upgraded and such. We'd probably > need some method of marking an extension as "critical" (e.g. bail out and > don't install this Wheel if you don't have something that knows how to handle > it) and then non-critical extensions just get ignored if we don't know > how to handle it. Right, this is the intent of the "Required extension handling" feature: https://www.python.org/dev/peps/pep-0426/#required-extension-handling If a package flags an extension as "installer_must_handle", then attempts to install that package are supposed to fail if the installer doesn't recognise the extension. Otherwise, installers are free to ignore extensions they don't understand. So meta-installers like canopy could add their own extensions to their generated wheel files, flag those extensions as required, and other installers would correctly reject those wheels as unsupported. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.barker at noaa.gov Tue Apr 14 18:07:25 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 14 Apr 2015 09:07:25 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Tue, Apr 14, 2015 at 8:41 AM, Nick Coghlan wrote: > The main two language independent solutions I've identified for this > general "user level package management" problem in the Fedora > Environments & Stacks context > ( > https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement > ) > are conda (http://conda.pydata.org/) and Nix > (https://nixos.org/nix/about.html), cool -- I hadn't seen nix before.
> From a Python upstream perspective, Nix falls a long way behind conda > due to the fact that Nix currently relies on Cygwin for Windows > support - The other thing that's nice about conda is that while it was designed for the general case, it has a lot of Python-specific features. Being a Python guy -- I like that ;-) -- it may not work nearly as well for Ruby or what have you -- I wouldn't know. > The point where I draw the line is supporting *dynamic* > linking between modules - I'm confused -- you don't want a system to be able to install ONE version of a lib that various python packages can all link to? That's really the key use-case for me.... > that's the capability I view as defining the > boundary between "enabling an add-on ecosystem for a programming > language runtime" and "providing a comprehensive software development > platform" :) > Well, with its target audience being scientific programmers, conda IS trying to give you a "comprehensive software development platform" We're talking about Python here -- it's a development tool. It turns out that for scientific development, pure python is simply not enough -- hence the need for conda and friends. I guess this is what it comes down to -- I'm all for adding a few features to wheel -- it would be nice to be able to pip install most of what I, and people like me, need. But maybe it's not possible -- you can solve the shared lib problem, and the scripts problem, and maybe the menu entries problem, but eventually, you end up with "I want to use numba" -- and then you need LLVM, etc. -- and pretty soon you are building a tool that provides a "comprehensive software development platform". ;-) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From chris.barker at noaa.gov Tue Apr 14 18:10:33 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 14 Apr 2015 09:10:33 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: > > If there's a plugin that understands this extension > > installed, let it do something before you actually move the files into > > place". This lets Wheels themselves still be declarative and moves the > > responsibility of implementing these bits into their own PyPI projects > > that can be versioned and independently upgraded and such. We'd probably > > need some method of marking an extension as "critical" (e.g. bail out and > > don't install this Wheel if you don't have something that knows how to > handle > > it) and then non-critical extensions just get ignored if we don't know > > how to handle it. > Could an "extension" be -- "run this arbitrary Python script" ? We've got a full-featured scripting language (with batteries included!) -- isn't that all the extension you need? Or is this about security? We don't want to let a package do virtually anything on install? -CHB > > Right, this is the intent of the "Required extension handling" > feature: > https://www.python.org/dev/peps/pep-0426/#required-extension-handling > > If a package flags an extension as "installer_must_handle", then > attempts to install that package are supposed to fail if the installer > doesn't recognise the extension. Otherwise, installers are free to > ignore extensions they don't understand. > > So meta-installers like canopy could add their own extensions to their > generated wheel files, flag those extensions as required, and other > installers would correctly reject those wheels as unsupported. > > Cheers, > Nick
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Apr 14 18:46:26 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 14 Apr 2015 17:46:26 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 14 April 2015 at 17:10, Chris Barker wrote: >> If there's a plugin that understands this extension >> > installed, let it do something before you actually move the files into >> > place". This lets Wheels themselves still be declarative and moves the >> > responsibility of implementing these bits into their own PyPI projects >> > that can be versioned and independently upgraded and such. We'd probably >> > need some method of marking an extension as "critical" (e.g. bail out >> > and >> > don't install this Wheel if you don't have something that knows how to >> > handle >> > it) and then non-critical extensions just get ignored if we don't know >> > how to handle it. > > Could an "extension" be -- "run this arbitrary Python script" ? The main point (as I see it) of an "extension" is that it's distributed independently of the packages that use it. So you get to decide to use an extension (and by inference audit it if you want) *before* it gets run as part of an installation.
Extensions get peer review by the community, and bad ones get weeded out, in a way that just having a chunk of code in your setup.py or the postinstall section of your wheel doesn't. > We've got a full featured scripting language (with batteries included!) -- > isn't that all the extension you need? Up to a point yes. It's the independent review and quality control aspects that matter to me. > Or is this about security? We don't want to let a package do virtually > anything on install? Security is one aspect, and one that a lot of people will pick up on immediately. But there's also portability. And code quality. And consistency. I'd be much happier installing a project that used a well-known "start menu manager extension" than one that just used custom code. I'd be willing to assume that the author of the extension had thought about Unix/Windows compatibility, how to handle use in a virtualenv, handling user preferences (such as the end user *not wanting* shortcuts), etc etc. And I could look at the extension project's issue tracker to see how happy I was with the state of the project. Of course, if the project I want to install makes using the extension mandatory for the install to work, I still don't have a real choice - I accept the extension or I can't use the code I want - but there's an extra level of transparency involved. And hopefully most extensions will be optional, in practice. Paul From trishank at nyu.edu Tue Apr 14 17:19:19 2015 From: trishank at nyu.edu (Trishank Karthik Kuppusamy) Date: Tue, 14 Apr 2015 11:19:19 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On 14 April 2015 at 11:16, Brett Cannon wrote: > > I agree. 
Even something as simple as a boolean that triggers a banner > saying "this project is looking for a new maintainer" would be useful both > from the perspective of project owners who want to move on or from the > perspective of users who can't tell if a project is maintained based on how > long it has been since a project uploaded a new version (which is why I > think someone suggested sending an annual email asking for a human action > to say "alive and kicking" to help determine if a project is completely > abandoned). > > Yeah, I think Guido said something to this effect in his keynote. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Apr 14 21:20:49 2015 From: cournape at gmail.com (David Cournapeau) Date: Tue, 14 Apr 2015 15:20:49 -0400 Subject: [Distutils] Beyond wheel 1.0: more fine-grained installation scheme In-Reply-To: References: Message-ID: On Tue, Apr 14, 2015 at 9:27 AM, Daniel Holth wrote: > That's exactly what I would like to do. Then > distribution-1.0.data/sysconfdir/file in a wheel would install into > /etc/file in the default scheme, but would probably really wind up in > $VIRTUAL_ENV/etc/... for most of us web developers. > $prefix would be set to sys.prefix and $eprefix to sys.exec_prefix to handle automatically. Should I work on a PEP for wheel 2.0, or a pip implementation first? > IIRC extra package-1.0.data/* directories in wheel are undefined. pip will actually fail to install any wheel with an undefined directory package-1.0.data/* (unfortunately this is not detected before installing, so it ends up with a half-installed package). David > I > would have no problem putting fine-grained install schemes in 2.0 and > putting some of the other "wheel 2.0" features into wheel 3. > Incrementing the major version number would cause older pip to reject > the newer wheels, incrementing the minor version would produce a > warning.
> > > > On Tue, Apr 14, 2015 at 8:56 AM, David Cournapeau > wrote: > > Hi, > > > > I am splitting up the previous thread into one thread / proposal to focus > > the discussion. > > > > Assuming the basis of this proposal does not sound too horrible, I would > > make a proof of concept in a pip branch, so that we can flush out the > > details and then write an actual spec (I guess an updated wheel format > would > > need a new PEP ?). > > > > The goal of this thread is to flush out a more fine-grained installation > > scheme, so that wheels can install files anywhere they want (at least > within > > sys.prefix/sys.exec_prefix). I see two issues: > > > > 1. defining what the scheme should be > > 2. how should it be implemented in wheel: there are different trade-offs > > depending on whether we want this feature to be part of wheel format 1.* > or > > 2.0. > > > > First, my understanding of the current situation: > > > > * per the wheel PEP 427, anything in the wheel top directory and not in > > distribution-1.0.data is installed in site-package > > * every top directory in distribution-1.0.data/ needs to be mapped to the > > scheme as defined in distutils install command. > > * pip rejects any directory in distribution-1.0.data/ which is not in the > > scheme from 2. > > > > My suggestion for a better scheme would be to use an extended version of > the > > various default directories defined by autotools. The extension would > handle > > windows-specifics. More concretely: > > > > # Suggested variables > > > > The goal of supporting those variables is to take something that is > flexible > > enough to support almost any installation scheme, without putting > additional > > burden on the developer. People who do not want/need the flexibility will > > not need to do anything more than what they do today. 
> > > > The variables I would suggest are every variable defined in > > > https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py#L10 > , > > except for destdir which is not relevant here. > > > > On unix, the defaults follow autotools, and on windows, I mapped almost > > everything relative to sys.exec_prefix, except for the > > bindir/sbindir/libexecdir which map to "$prefix\Scripts". > > > > The $sitedir variable would need to be updated to use the value from > > distutils instead of the hardcoded value I put in that file as well. > > > > # How to handle the augmented scheme > > > > Right now, if I wanted to install something in say $prefix/share/doc, I > > would need to put it in distribution-1.0.data/data/share/doc, but this > > prevents use from handling different platforms differently. > > > > OTOH, this is the only way I can see to make the new scheme backward > > compatible with pip versions who would not understand the new scheme. I > > don't have a good sense of what we should do there, the core pip team may > > have a better sense. > > > > For now, I would be happy to just make a proof of concept not caring > about > > backward compatibility in a pip branch. Does that sound like a workable > > basis to flush out an actual proposal ? > > > > thanks, > > David > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Tue Apr 14 21:35:26 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 14 Apr 2015 15:35:26 -0400 Subject: [Distutils] Beyond wheel 1.0: more fine-grained installation scheme In-Reply-To: References: Message-ID: Just implement it. You could also try editing wheel's own proof of concept installer. 
https://bitbucket.org/pypa/wheel/src/tip/wheel/install.py?at=default#cl-246 On Tue, Apr 14, 2015 at 3:20 PM, David Cournapeau wrote: > > > On Tue, Apr 14, 2015 at 9:27 AM, Daniel Holth wrote: >> >> That's exactly what I would like to do. Then >> distribution-1.0.data/sysconfdir/file in a wheel would install into >> /etc/file in the default scheme, but would probably really wind up in >> $VIRTUAL_ENV/etc/... for most of us web developers. > > > $prefix would be set to sys.prefix and $eprefix to sys.exec_prefix to handle > automatically. > > Should I work on a PEP for wheel 2.0, or a pip implementation first ? > >> >> IIRC extra package-1.0-data/* directories in wheel are undefined. > > > pip will actually fail to install any wheel with an undefined directory > package-1.0-data/* (unfortunately this is not detected before installing, so > it ends up with a half installed package). > > David > >> >> I >> would have no problem putting fine-grained install schemes in 2.0 and >> putting some of the other "wheel 2.0" features into wheel 3. >> Incrementing the major version number would cause older pip to reject >> the newer wheels, incrementing the minor version would produce a >> warning. > > > >> >> >> >> On Tue, Apr 14, 2015 at 8:56 AM, David Cournapeau >> wrote: >> > Hi, >> > >> > I am splitting up the previous thread into one thread / proposal to >> > focus >> > the discussion. >> > >> > Assuming the basis of this proposal does not sound too horrible, I would >> > make a proof of concept in a pip branch, so that we can flush out the >> > details and then write an actual spec (I guess an updated wheel format >> > would >> > need a new PEP ?). >> > >> > The goal of this thread is to flush out a more fine-grained installation >> > scheme, so that wheels can install files anywhere they want (at least >> > within >> > sys.prefix/sys.exec_prefix). I see two issues: >> > >> > 1. defining what the scheme should be >> > 2. 
how should it be implemented in wheel: there are different trade-offs >> > depending on whether we want this feature to be part of wheel format 1.* >> > or >> > 2.0. >> > >> > First, my understanding of the current situation: >> > >> > * per the wheel PEP 427, anything in the wheel top directory and not in >> > distribution-1.0.data is installed in site-package >> > * every top directory in distribution-1.0.data/ needs to be mapped to >> > the >> > scheme as defined in distutils install command. >> > * pip rejects any directory in distribution-1.0.data/ which is not in >> > the >> > scheme from 2. >> > >> > My suggestion for a better scheme would be to use an extended version of >> > the >> > various default directories defined by autotools. The extension would >> > handle >> > windows-specifics. More concretely: >> > >> > # Suggested variables >> > >> > The goal of supporting those variables is to take something that is >> > flexible >> > enough to support almost any installation scheme, without putting >> > additional >> > burden on the developer. People who do not want/need the flexibility >> > will >> > not need to do anything more than what they do today. >> > >> > The variables I would suggest are every variable defined in >> > >> > https://github.com/cournape/Bento/blob/master/bento/core/platforms/sysconfig.py#L10, >> > except for destdir which is not relevant here. >> > >> > On unix, the defaults follow autotools, and on windows, I mapped almost >> > everything relative to sys.exec_prefix, except for the >> > bindir/sbindir/libexecdir which map to "$prefix\Scripts". >> > >> > The $sitedir variable would need to be updated to use the value from >> > distutils instead of the hardcoded value I put in that file as well. 
>> > >> > # How to handle the augmented scheme >> > >> > Right now, if I wanted to install something in say $prefix/share/doc, I >> > would need to put it in distribution-1.0.data/data/share/doc, but this >> > prevents use from handling different platforms differently. >> > >> > OTOH, this is the only way I can see to make the new scheme backward >> > compatible with pip versions who would not understand the new scheme. I >> > don't have a good sense of what we should do there, the core pip team >> > may >> > have a better sense. >> > >> > For now, I would be happy to just make a proof of concept not caring >> > about >> > backward compatibility in a pip branch. Does that sound like a workable >> > basis to flush out an actual proposal ? >> > >> > thanks, >> > David >> > >> > >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> > > > From chris.barker at noaa.gov Tue Apr 14 23:02:14 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 14 Apr 2015 14:02:14 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Tue, Apr 14, 2015 at 9:46 AM, Paul Moore wrote: > > Could an "extension" be -- "run this arbitrary Python script" ? > > The main point (as I see it) of an "extension" is that it's > distributed independently of the packages that use it. So you get to > decide to use an extension (and by inference audit it if you want) > *before* it gets run as part of an installation. OK, I think this is getting clearer to me now -- an Extension is (I suppose arbitrary) block of python code, but what goes into the wheel is not the code, but rather a declarative configuration for the extension. then at install-time, the actual code that runs is separate from the wheel, which gives the end user greater control, plus these nifty features.... 
> Extensions get peer > review by the community, and bad ones get weeded out, > the independent review and quality control > > there's also portability. And code quality. And > consistency. > And I'll add that this would promote code re-use and DRY. I'd be much happier installing a project that used a well-known "start > menu manager extension" So where would that code live? And how would it be managed? I'm thinking: - In a package on PyPI like anything else - a specification in install_requires - pip auto-installs it (if not already there) when the user goes to install the wheel. Is that the idea? Of course, if the project I want to install makes using the extension > mandatory for the install to work, I still don't have a real choice - > I accept the extension or I can't use the code I want - well, you can't easily auto-install it anyway -- you could still do a source install, presumably. but there's an > extra level of transparency involved. And hopefully most extensions > will be optional, in practice. > There's a bit to think about in the API/UI here. If an installation_extension is used by a package, and it's specified in install_requires, then it's going to get auto-magically installed and used with a regular old "pip install". If we are worried about code review and users being in control of what extensions they use, then how do we make it obvious that a given extension is in use, but optional, and how to turn it off if you want? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
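To make the dispatch flow being discussed concrete, here is a minimal sketch of how an installer might hand declared extensions to separately installed handler plugins, aborting only when a required extension has no handler. Everything here is an illustrative assumption, not pip's real API: the function and exception names are invented, and the metadata layout is only loosely modelled on the PEP 426 draft's "installer_must_handle" flag.

```python
# Hypothetical sketch: wheel metadata declares extensions; handlers are
# looked up among separately installed plugins; a required extension with
# no handler aborts the install, optional ones are silently skipped.

class ExtensionNotSupported(Exception):
    pass

def plan_extension_hooks(metadata, handlers):
    """Return (name, handler, config) hooks to run, or raise if a
    required extension has no registered handler.

    metadata -- dict with an "extensions" mapping; each entry may set
                "installer_must_handle": True (after the PEP 426 draft)
    handlers -- mapping of extension name -> callable(config)
    """
    hooks = []
    for name, config in metadata.get("extensions", {}).items():
        handler = handlers.get(name)
        if handler is None:
            if config.get("installer_must_handle"):
                raise ExtensionNotSupported(
                    "no installed plugin handles required extension %r" % name)
            continue  # optional extensions are ignored, per the PEP draft
        hooks.append((name, handler, config))
    return hooks

meta = {
    "extensions": {
        "twisted.plugins": {"installer_must_handle": True, "plugins": ["foo"]},
        "example.shortcuts": {"installer_must_handle": False},
    }
}

# With a handler registered for twisted.plugins, planning succeeds and the
# unknown optional extension is skipped.
hooks = plan_extension_hooks(meta, {"twisted.plugins": lambda cfg: cfg})
print([name for name, _, _ in hooks])  # -> ['twisted.plugins']
```

With an empty handler mapping the same metadata would raise ExtensionNotSupported instead, which is the "bail out and don't install" behaviour described above.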
URL: From p.f.moore at gmail.com Tue Apr 14 23:19:20 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 14 Apr 2015 22:19:20 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 14 April 2015 at 22:02, Chris Barker wrote: > - pip auto-installs it (if not already there) when the user goes to install > the wheel. Personally, I'm not a fan of auto-installing, so I'd hope for something more like pip would fail to install if a required extension were missing. The user would then install the extension and redo the install. But that may be a minority opinion - it's a bit like setup_requires in principle, and people seem to prefer that to be auto-installed. Paul From ncoghlan at gmail.com Wed Apr 15 00:00:48 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Apr 2015 18:00:48 -0400 Subject: [Distutils] Beyond wheel 1.0: more fine-grained installation scheme In-Reply-To: References: Message-ID: On 14 Apr 2015 09:28, "Daniel Holth" wrote: > > That's exactly what I would like to do. Then > distribution-1.0.data/sysconfdir/file in a wheel would install into > /etc/file in the default scheme, but would probably really wind up in > $VIRTUAL_ENV/etc/... for most of us web developers. > > IIRC extra package-1.0.data/* directories in wheel are undefined. I > would have no problem putting fine-grained install schemes in 2.0 and > putting some of the other "wheel 2.0" features into wheel 3. > Incrementing the major version number would cause older pip to reject > the newer wheels, incrementing the minor version would produce a > warning. +1, although I expect bdist_wheel would likely need support on the generation side to use the lowest viable version of the wheel spec for a given package. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed...
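The autotools-style scheme being proposed in this thread can be sketched roughly as follows: each subdirectory of a wheel's .data/ directory names a scheme variable, and the installer expands it relative to sys.prefix and sys.exec_prefix. The variable set, the defaults, and the function names here are all assumptions made for illustration, not part of any spec or of pip's actual code.

```python
# Illustrative expansion of an autotools-flavoured install scheme: wheel
# .data/ subdirectories name variables, which resolve to concrete paths
# under $prefix / $eprefix. Variable names and defaults are assumptions.
import os
import sys
from string import Template

def default_scheme(prefix=None, eprefix=None):
    prefix = prefix or sys.prefix
    eprefix = eprefix or sys.exec_prefix
    values = {"prefix": prefix, "eprefix": eprefix}
    # A small subset of autotools-style directories, defined in terms of
    # $prefix/$eprefix so that overriding the roots moves everything:
    for name, template in {
        "bindir": "$eprefix/bin",
        "sysconfdir": "$prefix/etc",
        "datadir": "$prefix/share",
        "includedir": "$prefix/include",
    }.items():
        values[name] = Template(template).substitute(values)
    return values

def install_path(scheme, data_subdir, relpath):
    """Map e.g. ('sysconfdir', 'myapp/app.cfg') to a concrete target path,
    rejecting unknown scheme directories as pip does today."""
    if data_subdir not in scheme:
        raise ValueError("unknown install scheme directory: %r" % data_subdir)
    return os.path.join(scheme[data_subdir], relpath)

scheme = default_scheme(prefix="/usr/local", eprefix="/usr/local")
print(install_path(scheme, "sysconfdir", "myapp/app.cfg"))
```

In a virtualenv the same code would naturally land config files under $VIRTUAL_ENV/etc/..., since prefix and eprefix would point at the environment.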
URL: From ben+python at benfinney.id.au Wed Apr 15 00:46:43 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 15 Apr 2015 08:46:43 +1000 Subject: [Distutils] pip/warehouse feature idea: "help needed" References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: <85k2xeuy3w.fsf@benfinney.id.au> Trishank Karthik Kuppusamy writes: > Yeah, I think Guido said something to this effect in his keynote. Apparently I'm missing that context, then. The original post didn't help me understand why this proposal is significantly different from the "add a bunch of social to PyPI" proposals rejected in the past. Can someone help by distinguishing this from past proposals of that kind? -- \ "I am as agnostic about God as I am about fairies and the | `\ Flying Spaghetti Monster." --Richard Dawkins, 2006-10-13 | _o__) | Ben Finney From donald at stufft.io Wed Apr 15 01:21:46 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 14 Apr 2015 19:21:46 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: <85k2xeuy3w.fsf@benfinney.id.au> References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> <85k2xeuy3w.fsf@benfinney.id.au> Message-ID: > On Apr 14, 2015, at 6:46 PM, Ben Finney wrote: > > Trishank Karthik Kuppusamy writes: > >> Yeah, I think Guido said something to this effect in his keynote. > > Apparently I'm missing that context, then. The original post didn't help > me understand why this proposal is significantly different from the > "add a bunch of social to PyPI" proposals rejected in the past. > > Can someone help by distinguishing this from past proposals of that > kind? I think one of the distinguishing characteristics is that we've (as a community) changed and we're more willing to evolve PyPI to do more and be a more central part of the life of a project than previously.
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Apr 15 02:14:32 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Apr 2015 20:14:32 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On 14 April 2015 at 11:19, Trishank Karthik Kuppusamy wrote: > On 14 April 2015 at 11:16, Brett Cannon wrote: >> I agree. Even something as simple as a boolean that triggers a banner >> saying "this project is looking for a new maintainer" would be useful both >> from the perspective of project owners who want to move on or from the >> perspective of users who can't tell if a project is maintained based on how >> long it has been since a project uploaded a new version (which is why I >> think someone suggested sending an annual email asking for a human action to >> say "alive and kicking" to help determine if a project is completely >> abandoned). > > Yeah, I think Guido said something to this effect in his keynote. Yep, Guido's keynote was the genesis of the thread. For folks that haven't seen it, the specific points of concern raised were: * seeking a new maintainer from amongst their users * seeking help with enabling Python 3 support Past suggestions for social features have related to providing users with a standard way to reach maintainers and each other, and I'd prefer to leave maintainers in full control of that aspect of the maintainer experience. I'm not alone in feeling that way, hence why such features tend not to be viewed especially positively. 
The one thing that *only* PyPI can provide is the combination of a publication channel for maintainers to reach their user base without either side needing to share contact information they aren't already sharing, together with the creation of the clear understanding that providing sustaining engineering for a piece of software represents a significant time commitment that users benefiting from an open source maintainer's generosity should respect. This thread regarding maintainers being able to more clearly communicate maintenance status to users also relates to my blog post ( http://www.curiousefficiency.org/posts/2015/04/stop-supporting-python26.html) regarding the fact that folks that: a) don't personally need to ensure software they maintain works on old versions of Python; and b) aren't getting paid to ensure it works on old versions of Python; c) shouldn't feel obliged to provide such support for free Supporting legacy platforms is generally tedious work that isn't inherently interesting or rewarding. Folks that want such legacy platform support should thus be expecting to have to pay for it, and demanding it for free is unreasonable. The perception that open source software is provided by magic internet pixies that don't need to eat (or at the very least to be thanked for the time their generosity has saved us) is unfortunately widespread and pernicious [1], and PyPI is in a position to help shift that situation to one where open source maintainers at least have the opportunity to clearly explain the sustaining engineering model backing their software while deflecting any criticism for the mere existence of such explanations onto the PyPI maintainers rather than having to cope with any negative feedback themselves. Regards, Nick. 
[1] As far as *how* that mistaken perception that failing to adequately compensate open source developers is OK became so widespread, my own theory is that it stems from the fact that open source was popularised first in the context of Linux, which relies heavily on the corporate patronage model where companies like Red Hat make money from customers that often aren't interested in technology for its own sake, while making the underlying software freely available as a basis for shared collaboration and experimentation. I personally like that model [2], but plenty of folks have entirely reasonable concerns about it and hence need to support their open source work in other ways. My view is that appropriately addressing that complex situation involves actively challenging the common assumption that adequately compensating the project developers for their work is somebody else's problem, and thus that it makes sense to eventually build the ability to issue that challenge directly into the software distribution infrastructure. It's not at the top of the priority list right now, but Guido's keynote made me realise it should be on the list somewhere. [2] http://community.redhat.com/blog/2015/02/the-quid-pro-quo-of-open-infrastructure/ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Wed Apr 15 02:34:08 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 15 Apr 2015 10:34:08 +1000 Subject: [Distutils] pip/warehouse feature idea: "help needed" References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: <854moiut4v.fsf@benfinney.id.au> Nick Coghlan writes: > Yep, Guido's keynote was the genesis of the thread. I can't find it online, can you give a URL so we can see the talk? 
> Past suggestions for social features have related to providing users > with a standard way to reach maintainers and each other, and I'd > prefer to leave maintainers in full control of that aspect of the > maintainer experience. I'm not alone in feeling that way, hence why > such features tend not to be viewed especially positively. Thanks for this detailed response differentiating this proposal from previous ones, it's exactly what I was asking for. -- \ "For mad scientists who keep brains in jars, here's a tip: why | `\ not add a slice of lemon to each jar, for freshness?" --Jack | _o__) Handey | Ben Finney From robertc at robertcollins.net Wed Apr 15 04:15:12 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 15 Apr 2015 14:15:12 +1200 Subject: [Distutils] version for VCS/local builds between alpha and release In-Reply-To: References: Message-ID: On 14 April 2015 at 23:31, Joost Molenaar wrote: > On 14 April 2015 at 10:04, Robert Collins wrote: > >> The basic scenario here is developers and CD deployers building >> versions from VCS of arbitrary commits. So we need to be able to >> deliver strictly increasing version numbers, automatically, without >> interfering with actual publishing of pre-release and release versions >> to PyPI. > > I think the advice in PEP 440 about using dev tags[1] is a little misguided, > because dev tags count towards a known version in the future, while DVCS tags > (at least in Git) count the number of commits since a known version in the > past. In this respect, 'git describe' most closely resembles the post-release > tags in PEP 440, so that's what I've chosen to use in my build scripts, in spite > of the recommendations in PEP 440. > > > [1] https://www.python.org/dev/peps/pep-0440/#dvcs-based-version-labels Fair enough - what we're doing is using semver to predict the next version based on the git history - for instance the pseudo-header Sem-Ver: api-break will cause the next version to be a major version up.
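A minimal sketch of that kind of trailer-driven version prediction, assuming plain-text commit messages: the "Sem-Ver: api-break" trailer name is taken from the description above, while the "feature" tag and everything else here is purely illustrative.

```python
# Hypothetical sketch: scan commit messages for a "Sem-Ver:" pseudo-header
# and derive the next semver-style version from the strongest tag seen.
# Default behaviour (no trailer) is a patch-level bump.
import re

def next_version(last_release, commit_messages):
    major, minor, patch = (int(x) for x in last_release.split("."))
    bump = "patch"
    for message in commit_messages:
        for match in re.finditer(r"^Sem-Ver:\s*(.+)$", message, re.MULTILINE):
            tags = {t.strip() for t in match.group(1).split(",")}
            if "api-break" in tags:
                bump = "major"  # strongest tag wins
            elif "feature" in tags and bump != "major":
                bump = "minor"
    if bump == "major":
        return "%d.0.0" % (major + 1)
    if bump == "minor":
        return "%d.%d.0" % (major, minor + 1)
    return "%d.%d.%d" % (major, minor, patch + 1)

print(next_version("1.4.2", ["Fix typo", "Remove old API\n\nSem-Ver: api-break"]))
# -> 2.0.0
```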
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From kevin.horn at gmail.com Wed Apr 15 05:57:11 2015 From: kevin.horn at gmail.com (Kevin Horn) Date: Tue, 14 Apr 2015 22:57:11 -0500 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Tue, Apr 14, 2015 at 4:19 PM, Paul Moore wrote: > On 14 April 2015 at 22:02, Chris Barker wrote: > > Personally, I'm not a fan of auto-installing, so I'd hope for > something more like pip would fail to install if a required extension > were missing. The user would then install the extension and redo the > install. But that may be a minority opinion - it's a bit like > setup_requires in principle, and people seem to prefer that to be > auto-installed. > > (lurker surfaces) I'm with Paul on this one. It seems to me that auto-installing the extension would destroy most of the advantages of distributing the extensions separately. I _might_ not hate it if pip prompted the user and _then_ installed, but then again, I might. (lurker sinks back into the depths) -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at reportlab.com Wed Apr 15 13:09:04 2015 From: robin at reportlab.com (Robin Becker) Date: Wed, 15 Apr 2015 12:09:04 +0100 Subject: [Distutils] name of the dependency problem Message-ID: <552E46D0.1020106@chamonix.reportlab.co.uk> After again finding that pip doesn't have a correct dependency resolution solution a colleague and I discussed the nature of the problem. We examined the script capture of our install and it seems as though when presented with level 0 A A level 1 1.4<= C level 0 B B level 1 1.6<= C <1.7 pip manages to download version 1.8 of C(Django) using A's requirement, but never even warns us that the B requirement of C was violated. Surely even in the absence of a resolution pip could raise a warning at the end.
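The warning Robin asks for amounts to intersecting the two specifier sets and checking the chosen version against the result. A sketch using the `packaging` library (illustrative only, not pip's actual code):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

a_wants = SpecifierSet(">=1.4")        # A: 1.4 <= C
b_wants = SpecifierSet(">=1.6,<1.7")   # B: 1.6 <= C < 1.7
combined = a_wants & b_wants           # what both requirements allow

picked = Version("1.8")                # pip's first-found-wins choice
if picked not in combined:
    # This is exactly the end-of-install warning being asked for.
    print("warning: chosen C==%s violates a combined constraint" % picked)
```

Emitting this check costs almost nothing, since pip already has every specifier in hand by the time it finishes.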
Anyhow after some discussion I realize I don't even know the name of the problem that pip should try to solve, is there some tree / graph problem that corresponds? Searching on dependency seems to lead to topological sorts of one kind or another, but here we seem to have nodes with discrete values attached so in the above example we might have (assuming only singleton A & B) R --> A R --> B A --> C-1.4 A --> C-1.6 A --> C-1.6.11 A --> C-1.7 A --> C-1.8 B --> C-1.6 B --> C-1.6.11 so looking at C equivalent nodes seems to allow a solution set. Are there any real problem descriptions / solutions to this kind of problem? -- Robin Becker From jcappos at nyu.edu Wed Apr 15 13:43:31 2015 From: jcappos at nyu.edu (Justin Cappos) Date: Wed, 15 Apr 2015 07:43:31 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E46D0.1020106@chamonix.reportlab.co.uk> References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: First of all, I'm surprised that pip doesn't warn or error in this case. I think this is certainly a bug that should be fixed. The problem can come up in much more subtle cases too that are very hard for the user to understand. The good news is that this is a known problem that happens when doing dependency resolution and has a solution. The solution, which is referred to as backtracking dependency resolution, basically boils down to saving the state of the dependency resolver whenever you have multiple choices to resolve a dependency. Then if you reach a later point where there is a conflict, you can backtrack to the point where you made a choice and see if another option would resolve the conflict. I have some of the gory details, in Chapter 3.8.5 of my dissertation ( http://isis.poly.edu/~jcappos/papers/cappos_stork_dissertation_08.pdf ). There is also working Python code out there that shows how this should behave. (I implemented this as part of Stork, a package manager that was used for years in a large academic testbed. 
) Thanks, Justin On Wed, Apr 15, 2015 at 7:09 AM, Robin Becker wrote: > After again finding that pip doesn't have a correct dependency resolution > solution a colleage and I discussed the nature of the problem. We examined > the script capture of our install and it seems as though when presented with > > > level 0 A > A level 1 1.4<= C > > > level 0 B > B level 1 1.6<= C <1.7 > > pip manages to download version 1.8 of C(Django) using A's requirement, > but never even warns us that the B requirement of C was violated. Surely > even in the absence of a resolution pip could raise a warning at the end. > > Anyhow after some discussion I realize I don't even know the name of the > problem that pip should try to solve, is there some tree / graph problem > that corresponds? Searching on dependency seems to lead to topological > sorts of one kind or another, but here we seem to have nodes with discrete > values attached so in the above example we might have (assuming only > singleton A & B) > > R --> A > R --> B > > A --> C-1.4 > A --> C-1.6 > A --> C-1.6.11 > A --> C-1.7 > A --> C-1.8 > > B --> C-1.6 > B --> C-1.6.11 > > so looking at C equivalent nodes seems to allow a solution set. Are there > any real problem descriptions / solutions to this kind of problem? > -- > Robin Becker > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Apr 15 14:55:28 2015 From: dholth at gmail.com (Daniel Holth) Date: Wed, 15 Apr 2015 08:55:28 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: See also http://en.wikipedia.org/wiki/ZYpp On Wed, Apr 15, 2015 at 7:43 AM, Justin Cappos wrote: > First of all, I'm surprised that pip doesn't warn or error in this case. 
I > think this is certainly a bug that should be fixed. The problem can come up > in much more subtle cases too that are very hard for the user to understand. > > The good news is that this is a known problem that happens when doing > dependency resolution and has a solution. The solution, which is referred > to as backtracking dependency resolution, basically boils down to saving the > state of the dependency resolver whenever you have multiple choices to > resolve a dependency. Then if you reach a later point where there is a > conflict, you can backtrack to the point where you made a choice and see if > another option would resolve the conflict. > > I have some of the gory details, in Chapter 3.8.5 of my dissertation ( > http://isis.poly.edu/~jcappos/papers/cappos_stork_dissertation_08.pdf ). > There is also working Python code out there that shows how this should > behave. (I implemented this as part of Stork, a package manager that was > used for years in a large academic testbed. ) > > Thanks, > Justin > > > > > > On Wed, Apr 15, 2015 at 7:09 AM, Robin Becker wrote: >> >> After again finding that pip doesn't have a correct dependency resolution >> solution a colleage and I discussed the nature of the problem. We examined >> the script capture of our install and it seems as though when presented with >> >> >> level 0 A >> A level 1 1.4<= C >> >> >> level 0 B >> B level 1 1.6<= C <1.7 >> >> pip manages to download version 1.8 of C(Django) using A's requirement, >> but never even warns us that the B requirement of C was violated. Surely >> even in the absence of a resolution pip could raise a warning at the end. >> >> Anyhow after some discussion I realize I don't even know the name of the >> problem that pip should try to solve, is there some tree / graph problem >> that corresponds? 
Searching on dependency seems to lead to topological sorts >> of one kind or another, but here we seem to have nodes with discrete values >> attached so in the above example we might have (assuming only singleton A & >> B) >> >> R --> A >> R --> B >> >> A --> C-1.4 >> A --> C-1.6 >> A --> C-1.6.11 >> A --> C-1.7 >> A --> C-1.8 >> >> B --> C-1.6 >> B --> C-1.6.11 >> >> so looking at C equivalent nodes seems to allow a solution set. Are there >> any real problem descriptions / solutions to this kind of problem? >> -- >> Robin Becker >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From jcappos at nyu.edu Wed Apr 15 15:28:11 2015 From: jcappos at nyu.edu (Justin Cappos) Date: Wed, 15 Apr 2015 09:28:11 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: Yes, it's another way to solve the problem. Both backtracking dependency resolution and ZYpp will always find a solution. The tradeoff is really in how they function. ZYpp is faster if there are a lot of dependencies that conflict. The backtracking dependency resolution used in Stork is much easier for the user to understand why it chose what it did. An aside: I'm not necessarily convinced that you need to solve this problem automatically, instead of just raising an error when it occurs. It should be quite rare in practice and as such may not be worth the complexity to have an automatic solution for the problem. 
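For a flavour of the SAT framing that ZYpp uses, Robin's example can be encoded by brute force: one boolean per candidate version of C, an "exactly one version" clause, and the two range constraints. This is a hypothetical toy, nothing like ZYpp's real encoding or search:

```python
from itertools import product

versions = ["1.4", "1.6", "1.6.11", "1.7", "1.8"]

def parse(v):
    return tuple(int(x) for x in v.split("."))

def satisfying_choices():
    found = []
    # Enumerate every truth assignment over the version booleans.
    for assignment in product([False, True], repeat=len(versions)):
        chosen = [v for v, on in zip(versions, assignment) if on]
        if len(chosen) != 1:
            continue  # the "exactly one version of C" clause
        v = parse(chosen[0])
        if v >= (1, 4) and (1, 6) <= v < (1, 7):  # A's and B's clauses
            found.append(chosen[0])
    return found
```

A real solver explores this space with unit propagation and clause learning rather than enumeration, which is why it stays fast in practice despite the problem being NP-complete.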
Thanks, Justin On Wed, Apr 15, 2015 at 8:55 AM, Daniel Holth wrote: > See also http://en.wikipedia.org/wiki/ZYpp > > On Wed, Apr 15, 2015 at 7:43 AM, Justin Cappos wrote: > > First of all, I'm surprised that pip doesn't warn or error in this > case. I > > think this is certainly a bug that should be fixed. The problem can > come up > > in much more subtle cases too that are very hard for the user to > understand. > > > > The good news is that this is a known problem that happens when doing > > dependency resolution and has a solution. The solution, which is > referred > > to as backtracking dependency resolution, basically boils down to saving > the > > state of the dependency resolver whenever you have multiple choices to > > resolve a dependency. Then if you reach a later point where there is a > > conflict, you can backtrack to the point where you made a choice and see > if > > another option would resolve the conflict. > > > > I have some of the gory details, in Chapter 3.8.5 of my dissertation ( > > http://isis.poly.edu/~jcappos/papers/cappos_stork_dissertation_08.pdf ). > > There is also working Python code out there that shows how this should > > behave. (I implemented this as part of Stork, a package manager that was > > used for years in a large academic testbed. ) > > > > Thanks, > > Justin > > > > > > > > > > > > On Wed, Apr 15, 2015 at 7:09 AM, Robin Becker > wrote: > >> > >> After again finding that pip doesn't have a correct dependency > resolution > >> solution a colleage and I discussed the nature of the problem. We > examined > >> the script capture of our install and it seems as though when presented > with > >> > >> > >> level 0 A > >> A level 1 1.4<= C > >> > >> > >> level 0 B > >> B level 1 1.6<= C <1.7 > >> > >> pip manages to download version 1.8 of C(Django) using A's requirement, > >> but never even warns us that the B requirement of C was violated. Surely > >> even in the absence of a resolution pip could raise a warning at the > end. 
> >> > >> Anyhow after some discussion I realize I don't even know the name of the > >> problem that pip should try to solve, is there some tree / graph problem > >> that corresponds? Searching on dependency seems to lead to topological > sorts > >> of one kind or another, but here we seem to have nodes with discrete > values > >> attached so in the above example we might have (assuming only singleton > A & > >> B) > >> > >> R --> A > >> R --> B > >> > >> A --> C-1.4 > >> A --> C-1.6 > >> A --> C-1.6.11 > >> A --> C-1.7 > >> A --> C-1.8 > >> > >> B --> C-1.6 > >> B --> C-1.6.11 > >> > >> so looking at C equivalent nodes seems to allow a solution set. Are > there > >> any real problem descriptions / solutions to this kind of problem? > >> -- > >> Robin Becker > >> _______________________________________________ > >> Distutils-SIG maillist - Distutils-SIG at python.org > >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at reportlab.com Wed Apr 15 15:51:02 2015 From: robin at reportlab.com (Robin Becker) Date: Wed, 15 Apr 2015 14:51:02 +0100 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: <552E6CC6.8050904@chamonix.reportlab.co.uk> Thanks to Justin & Daniel for some insights. On 15/04/2015 14:28, Justin Cappos wrote: > Yes, it's another way to solve the problem. Both backtracking dependency > resolution and ZYpp will always find a solution. The tradeoff is really in > how they function. ZYpp is faster if there are a lot of dependencies that > conflict. The backtracking dependency resolution used in Stork is much > easier for the user to understand why it chose what it did. 
> > An aside: I'm not necessarily convinced that you need to solve this problem > automatically, instead of just raising an error when it occurs. It should > be quite rare in practice and as such may not be worth the complexity to > have an automatic solution for the problem. > I think you are probably right. Not sure how deep the dependency graph can be, but there could be quite a few candidates to check and presumably they have to be downloaded and the dependency information obtained for each unique branch of the decision tree. > Thanks, > Justin > > On Wed, Apr 15, 2015 at 8:55 AM, Daniel Holth wrote: > >> See also http://en.wikipedia.org/wiki/ZYpp >> >> On Wed, Apr 15, 2015 at 7:43 AM, Justin Cappos wrote: >>> First of all, I'm surprised that pip doesn't warn or error in this >> case. I ........ -- Robin Becker From reinout at vanrees.org Wed Apr 15 16:01:13 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Apr 2015 16:01:13 +0200 Subject: [Distutils] buildout/setuptools slow as it scans the whole project dir Message-ID: Hi, In some of my projects, buildout takes a looooooong time to complete. Sometimes the same problem occurs on the server or on another developer's laptop, sometimes not. The difference is something like "15 seconds if there's no problem" and "5 minutes if the problem occurs". It is terribly hard to pinpoint the exact problem and/or cause. An error in the setup.py or or MANIFEST.in is unlikely, as the very same project might take 5 minutes locally and 15 seconds on the server or vice versa... Anyway, if I run "bin/buildout" as "strace -f bin/buildout", I see a lot of "stat" calls. Setuptools walks through all the files inside my project dir. Including parts/omelette/* and, if available, a bower_compontents/ directory full of thousands of javascript files. Removing some of these directories (which running the buildout re-creates) fixes the speed issue for one run. 
I modified my local buildout copy to run "setup.py develop" with a -v instead of a -q option. This way I found out where it approximately happens: /usr/bin/python /tmp/tmp6UdsMl -v develop -mxN -d /vagrant/sso/develop-eggs/tmpfioc1Ibuild running develop running egg_info writing requirements to sso.egg-info/requires.txt writing sso.egg-info/PKG-INFO writing top-level names to sso.egg-info/top_level.txt writing dependency_links to sso.egg-info/dependency_links.txt writing entry points to sso.egg-info/entry_points.txt ### This is where the process seems to stop a couple of minutes to scan all the files. reading manifest file 'sso.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'sso.egg-info/SOURCES.txt' running build_ext Creating /vagrant/sso/develop-eggs/tmpfioc1Ibuild/sso.egg-link (link to .) The MANIFEST.in looks like this: # Include docs in the root. include *.rst # Include everything in our project directory (sso/views.py, sso/static/some.js, etc) graft sso It is a git project. The setup.py looks like this: from setuptools import setup version = '1.1.dev0' long_description = '\n\n'.join([ open('README.rst').read(), open('CREDITS.rst').read(), open('CHANGES.rst').read(), ]) install_requires = [ 'Django >= 1.4.2, < 1.7', 'django-nose', 'lizard-auth-server', 'gunicorn', 'raven', 'werkzeug', 'south', 'django-auth-ldap', 'django-mama-cas', ], setup(name='sso', version=version, description="Single sign on server (and more) for lizard", long_description=long_description, # Get strings from http://www.python.org/pypi?%3Aaction=list_classifiers classifiers=['Programming Language :: Python', 'Framework :: Django', ], keywords=[], author='Do not blame Reinout', author_email='reinout.vanrees at nelen-schuurmans.nl', url='', license='GPL', packages=['sso'], zip_safe=False, install_requires=install_requires, entry_points={ 'console_scripts': [ ]}, ) Conclusion for me: something somewhere in setuptools is reading my whole project folder.
It does it after "writing entry points to sso.egg-info/entry_points.txt" and before "reading manifest file 'sso.egg-info/SOURCES.txt'". The SOURCES.txt itself is small: CHANGES.rst CREDITS.rst LICENSE.rst MANIFEST.in README.rst setup.cfg setup.py sso/__init__.py sso/__init__.pyc sso/admin.py sso/developmentsettings.py sso/developmentsettings.pyc sso/models.py sso/models.pyc sso/settings.py sso/settings.pyc sso/stagingsettings.py sso/tests.py sso/urls.py sso/views.py sso.egg-info/PKG-INFO sso.egg-info/SOURCES.txt sso.egg-info/dependency_links.txt sso.egg-info/entry_points.txt sso.egg-info/not-zip-safe sso.egg-info/requires.txt Hm. I see some .pyc files in there. Something that needs fixing, but not the cause, I think. Is there something I'm missing? Where should I look? I cannot find "writing entry points to..." in the setuptools source code right away. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From jim at zope.com Wed Apr 15 16:14:50 2015 From: jim at zope.com (Jim Fulton) Date: Wed, 15 Apr 2015 10:14:50 -0400 Subject: [Distutils] buildout/setuptools slow as it scans the whole project dir In-Reply-To: References: Message-ID: On Wed, Apr 15, 2015 at 10:01 AM, Reinout van Rees wrote: > Hi, > > In some of my projects, buildout takes a looooooong time to complete. > Sometimes the same problem occurs on the server or on another developer's > laptop, sometimes not. The difference is something like "15 seconds if > there's no problem" and "5 minutes if the problem occurs". ... I wonder if the culprit is _dir_hash in zc.buildout.buildout. Buildout reinstalls a part if it has changed. It considers a part to have changed if its arguments have changed or its recipe has changed. If a recipe is a develop egg, then it computes a hash for the recipe based on its package contents.
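For readers unfamiliar with the idea, a directory-content hash of the kind Jim describes can be sketched like this (an illustration only, not zc.buildout's actual _dir_hash):

```python
import hashlib
import os

def dir_hash(path):
    """Digest a directory's file names and contents; any change to the
    tree changes the hash, so a buildout-style tool can detect that a
    develop egg's recipe needs reinstalling."""
    digest = hashlib.md5()
    for root, dirs, files in os.walk(path):
        dirs.sort()  # make the traversal order deterministic
        for name in sorted(files):
            digest.update(name.encode("utf-8"))
            with open(os.path.join(root, name), "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```

Note that this reads every byte of every file, so on a large tree it has the same cost profile as the full-directory scans this thread is about.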
A quick thing to try might be to hack _dir_hash to be a noop (e.g. add ``return 42`` at the top and see if it makes the delay go away.) Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From tk47 at nyu.edu Wed Apr 15 15:34:15 2015 From: tk47 at nyu.edu (Trishank Karthik Kuppusamy) Date: Wed, 15 Apr 2015 09:34:15 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: <552E68D7.50909@nyu.edu> On 4/15/15 9:28 AM, Justin Cappos wrote: > Yes, it's another way to solve the problem. Both backtracking > dependency resolution and ZYpp will always find a solution. The > tradeoff is really in how they function. ZYpp is faster if there are > a lot of dependencies that conflict. The backtracking dependency > resolution used in Stork is much easier for the user to understand why > it chose what it did. > > An aside: I'm not necessarily convinced that you need to solve this > problem automatically, instead of just raising an error when it > occurs. It should be quite rare in practice and as such may not be > worth the complexity to have an automatic solution for the problem. > ZYpp seems to assume that dependency resolution is an NP-complete problem (http://www.watzmann.net/blog/2005/11/package-installation-is-np-complete.html). I agree that we need not solve the problem just yet. It may be worthwhile to inspect packages on PyPI to see which package is unsatisfiable, but I am led to understand that this is difficult to do because most package metadata is in setup.py (runtime information). Where did you run into this problem, Robin? From reinout at vanrees.org Wed Apr 15 16:26:32 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Apr 2015 16:26:32 +0200 Subject: [Distutils] buildout/setuptools slow as it scans the whole project dir In-Reply-To: References: Message-ID: Jim Fulton schreef op 15-04-15 om 16:14: > I wonder if the culprit is _dir_hash in zc.buildout.buildlout. 
> > Buildout reinstalls a part if it has changed. It considers a part to have > changed if it's arguments have changed or it's recipe has changed. > If a recipe is a develop egg, then it computes a hash for the recipe > based on it's > package contents. > > A quick thing to try might be to hack _dir_hash to be a noop (e.g. add > ``return 42`` at > the top and see if it makes the delay go away.) No, that's not it. I tried the quick hack and it is still slow. It should be somewhere in setuptools itself, as buildout executes a temp copy of the setup.py by calling /usr/bin/python /tmp/tmpEMi6Xe -v develop -mxN -d /vagrant/sso/develop-eggs/tmpfmeAuobuild The "/tmp/tmpEMi6Xe" is a copy of the setup.py, though I must check, and the directory at the end is presumably a symlink to the project directory. Setuptools seems to run all "egg_info.writers" entry points it can find in the piece of code where the slowness occurs. So any of the entry points could be the culprit. And perhaps even an entry point outside of setuptools, due to the way entry points work. I'll try to debug further. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From reinout at vanrees.org Wed Apr 15 17:01:19 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Apr 2015 17:01:19 +0200 Subject: [Distutils] buildout/setuptools slow as it scans the whole project dir In-Reply-To: References: Message-ID: Reinout van Rees schreef op 15-04-15 om 16:26: > Setuptools seems to run all "egg_info.writers" entry points it can > find in the piece of code where the slowness occurs. So any of the > entry points could be the culprit. And perhaps even an entry point > outside of setuptools, due to the way entry points work. > > I'll try to debug further. Ok, the egg_info.writers entry points are innocent. 
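One cheap way to find out where such a scan spends its time is to time an os.walk of each top-level directory separately (a hypothetical debugging helper, not part of setuptools or buildout):

```python
import os
import time

def walk_times(root="."):
    """Time a full os.walk of each top-level subdirectory of root.
    Returns {name: (file_count, seconds)} so an offending directory
    (e.g. a parts/ or var/ full of files) stands out immediately."""
    results = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if not os.path.isdir(path):
            continue
        start = time.time()
        count = sum(len(files) for _, _, files in os.walk(path))
        results[entry] = (count, time.time() - start)
    return results
```

Running this in the buildout directory would show directly whether the slow scan is dominated by file count or by per-stat latency of the virtualized disk.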
A couple of debug "print" statements later I found that setuptools/command/egg_info.py's manifest_maker class calls "findall()", which is the monkeypatched version from setuptools/__init__.py. This does an "os.walk" through the entire project directory to create a full and complete list of files. Inside the os.walk() loop, it calls os.path.isfile() on every item. This takes a long time! 90%+ of the time is spent in parts/omelette/. And most of that in parts/omelette/django's localization folders... Of course it also goes through the var/ folder, which might store all sorts of files. Now... this is before it actually tries to include/exclude/graft stuff per the MANIFEST.in instructions. It just goes through the entire project. Why is it so slow? Probably because I'm running it on OSX inside a vmware VM (running ubuntu). Probably the disk access via the virtualization layer is slow. I'm not completely satisfied yet, as sometimes it also is slow on the server. (Also on VMs, but normally the performance hit isn't noticeable). So.... it almost seems as if there's no solution to this? Or can someone give a hint on os.walk performance relative to VMs? Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From cournape at gmail.com Wed Apr 15 17:15:32 2015 From: cournape at gmail.com (David Cournapeau) Date: Wed, 15 Apr 2015 11:15:32 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E68D7.50909@nyu.edu> References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: On Wed, Apr 15, 2015 at 9:34 AM, Trishank Karthik Kuppusamy wrote: > On 4/15/15 9:28 AM, Justin Cappos wrote: > >> Yes, it's another way to solve the problem. Both backtracking dependency >> resolution and ZYpp will always find a solution. The tradeoff is really in >> how they function.
ZYpp is faster if there are a lot of dependencies that >> conflict. The backtracking dependency resolution used in Stork is much >> easier for the user to understand why it chose what it did. >> >> An aside: I'm not necessarily convinced that you need to solve this >> problem automatically, instead of just raising an error when it occurs. It >> should be quite rare in practice and as such may not be worth the >> complexity to have an automatic solution for the problem. >> >> > ZYpp seems to assume that dependency resolution is an NP-complete problem ( > http://www.watzmann.net/blog/2005/11/package-installation-is-np-complete.html). > > I agree that we need not solve the problem just yet. It may be worthwhile > to inspect packages on PyPI to see which package is unsatisfiable, but I am > led to understand that this is difficult to do because most package > metadata is in setup.py (runtime information). > This is indeed the case. If you want to solve dependencies in a way that works well, you want an index that describes all your available package versions. While solving dependencies is indeed NP complete, they can be fairly fast in practice because of various specificities: each rule is generally only a few variables, and the rules have a specific form allowing for more efficient rule representation (e.g. "at most one of variable", etc...). In my experience, it is not more difficult than using graph-based algorithms, and FWIW, at Enthought, we are working on a pure python SAT solver for resolving dependencies, to solve some of those issues. I am actually hacking on it right at PyCon, we hope to have a first working version end of Q2, at which point it will be OSS, and reintegrated in my older project depsolver (https://github.com/enthought/depsolver). David -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robin at reportlab.com Wed Apr 15 17:39:38 2015 From: robin at reportlab.com (Robin Becker) Date: Wed, 15 Apr 2015 16:39:38 +0100 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E68D7.50909@nyu.edu> References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: <552E863A.5090201@chamonix.reportlab.co.uk> On 15/04/2015 14:34, Trishank Karthik Kuppusamy wrote: > On 4/15/15 9:28 AM, Justin Cappos wrote: ......... > > ZYpp seems to assume that dependency resolution is an NP-complete problem > (http://www.watzmann.net/blog/2005/11/package-installation-is-np-complete.html). > yes I deduced that this must be some kind of satisfiability problem although what the mapping is escapes me right now. Various hints in the stork work seem to imply having the version info (and presumably other meta data including requirements) stored in fast access some how so that the computations can be done fast. > I agree that we need not solve the problem just yet. It may be worthwhile to > inspect packages on PyPI to see which package is unsatisfiable, but I am led to > understand that this is difficult to do because most package metadata is in > setup.py (runtime information). > > Where did you run into this problem, Robin? this is the reportlab website repository where we have a top level requirements file for whatever reason that contains a git+https package from github django-mailer 0.2a1 which has open ended requirement 1.4<=Django, later in the top level requirements there's another package that wants 1.6<=Django<1.7, and this package coming after never gets a look in so far as Django is concerned. After looking at the trace and discovering the problem the fix is to move the open ended requirement after the closed one. It would have been easier to discover the problem if pip had flagged the unsatisfied requirement eg !!!!!!!!!!! 
Django 1.8 loaded because of django-mailer-0.2a1 conflicts with 1.6<=Django<1.7 required by docengine-0.6.29 but how hard that is I don't know; certainly pip must inspect the requirements so it could record the facts and possibly spit out the first n conflicts. -- Robin Becker From qwcode at gmail.com Wed Apr 15 17:49:34 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 15 Apr 2015 08:49:34 -0700 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E46D0.1020106@chamonix.reportlab.co.uk> References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: > level 0 A > A level 1 1.4<= C > > > level 0 B > B level 1 1.6<= C <1.7 > > pip manages to download version 1.8 of C(Django) using A's requirement, > but never even warns us that the B requirement of C was violated. Surely > even in the absence of a resolution pip could raise a warning at the end. > agreed on the warning, but there is a documented workaround for this, that is to put the desired constraint for C at level 0 (i.e. in the install args or in your requirements file) see case #2 here https://pip.pypa.io/en/latest/user_guide.html#requirements-files -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcappos at nyu.edu Wed Apr 15 17:52:40 2015 From: jcappos at nyu.edu (Justin Cappos) Date: Wed, 15 Apr 2015 11:52:40 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: I guess I should provide the code for what I've done in this space also. Yes, the problem is NP hard, so in both cases, the system may run really slowly. In practice, you don't tend to have a lot of conflicts you need to resolve during a package install though. So I'd argue that optimizing for the case where you have a huge number of conflicts doesn't really matter.
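The backtracking approach described earlier in the thread can be illustrated with a toy resolver over a static index (hypothetical code, far simpler than Stork's satisfy function):

```python
# Candidate versions per package, and each version's requirements,
# using Robin's A/B/C example; predicates stand in for specifiers.
INDEX = {
    "A": {"1.0": [("C", lambda v: v >= (1, 4))]},
    "B": {"1.0": [("C", lambda v: (1, 6) <= v < (1, 7))]},
    "C": {"1.4": [], "1.6": [], "1.6.11": [], "1.7": [], "1.8": []},
}

def parse(v):
    return tuple(int(x) for x in v.split("."))

def resolve(requirements, chosen=None):
    """Pick versions newest-first; on a conflict, back up to the most
    recent choice point and try the next candidate."""
    chosen = dict(chosen or {})
    if not requirements:
        return chosen
    (name, pred), rest = requirements[0], requirements[1:]
    if name in chosen:  # already picked: must satisfy this predicate too
        return resolve(rest, chosen) if pred(parse(chosen[name])) else None
    for version in sorted(INDEX[name], key=parse, reverse=True):
        if not pred(parse(version)):
            continue
        chosen[name] = version
        result = resolve(INDEX[name][version] + rest, chosen)
        if result is not None:
            return result
        del chosen[name]  # conflict downstream: backtrack
    return None

solution = resolve([("A", lambda v: True), ("B", lambda v: True)])
# C=1.8 and C=1.7 are tried and rejected; the resolver backs up to 1.6.11.
```

The saved-state-per-choice idea is exactly what the recursion stack provides here; pip's first-found-wins behaviour corresponds to keeping only the first branch and never backing up.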
I've put the code for Stork up at: https://github.com/JustinCappos/stork It almost certainly still runs, but hasn't been used in production for about 4 years. The code which does the backtracking dependency resolution is in the satisfy function here: https://github.com/JustinCappos/stork/blob/master/python/storkdependency.py#L1004 (FYI: Stork was the very first Python program I wrote after completing the Python tutorial, so apologies for the style and mess. I can clean up the code if needed, but I don't imagine Stork's code would be useful as more than a reference.) However, I'd just like to reiterate that it would be good to check that pip really needs a solution to automatically resolve conflicts, whether with a SAT solver or backtracking. (Stork did something atypical with the way trust and conflicts were specified, so I think it made more sense for our user base.) Perhaps it's worth measuring the scope and severity of the problem first. In the meantime, is someone planning to work on a patch to fix the conflict detection issue in pip? Thanks, Justin On Wed, Apr 15, 2015 at 11:15 AM, David Cournapeau wrote: > > > On Wed, Apr 15, 2015 at 9:34 AM, Trishank Karthik Kuppusamy > wrote: > >> On 4/15/15 9:28 AM, Justin Cappos wrote: >> >>> Yes, it's another way to solve the problem. Both backtracking >>> dependency resolution and ZYpp will always find a solution. The tradeoff >>> is really in how they function. ZYpp is faster if there are a lot of >>> dependencies that conflict. The backtracking dependency resolution used in >>> Stork is much easier for the user to understand why it chose what it did. >>> >>> An aside: I'm not necessarily convinced that you need to solve this >>> problem automatically, instead of just raising an error when it occurs. It >>> should be quite rare in practice and as such may not be worth the >>> complexity to have an automatic solution for the problem. 
>>> >>> >> ZYpp seems to assume that dependency resolution is an NP-complete problem >> (http://www.watzmann.net/blog/2005/11/package-installation- >> is-np-complete.html). >> >> I agree that we need not solve the problem just yet. It may be worthwhile >> to inspect packages on PyPI to see which package is unsatisfiable, but I am >> led to understand that this is difficult to do because most package >> metadata is in setup.py (runtime information). >> > > This is indeed the case. If you want to solve dependencies in a way that > works well, you want an index that describes all your available package > versions. > > While solving dependencies is indeed NP complete, they can be fairly fast > in practice because of various specificities : each rule is generally only > a few variables, and the rules have a specific form allowing for more > efficient rule representation (e.g. "at most one of variable", etc...). In > my experience, it is not more difficult than using graph-based algorithms, > and > > FWIW, at Enthought, we are working on a pure python SAT solver for > resolving dependencies, to solve some of those issues. I am actually > hacking on it right at PyCon, we hope to have a first working version end > of Q2, at which point it will be OSS, and reintegrated in my older project > depsolver (https://github.com/enthought/depsolver). > > David > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at reportlab.com Wed Apr 15 17:56:29 2015 From: robin at reportlab.com (Robin Becker) Date: Wed, 15 Apr 2015 16:56:29 +0100 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: <552E8A2D.5000401@chamonix.reportlab.co.uk> On 15/04/2015 16:49, Marcus Smith wrote: .......... > > agreed on the warning, but there is a documented workaround for this, that > is to put the desired constraint for C at level 0 (i.e. 
in the install args > or in your requirements file) > > see case #2 here > https://pip.pypa.io/en/latest/user_guide.html#requirements-files > indeed pushing all the requirements to level 0 is a way to solve the dependency problem myself. Without knowledge that a problem existed I didn't do this first time around so unless I check all the dependencies for the installed packages I won't know if another one should be pushed to level 0. With a warning I am at least alerted to the issue, without one I depend on bugs happening and then need to figure out the problems myself. -- Robin Becker From qwcode at gmail.com Wed Apr 15 18:01:21 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 15 Apr 2015 09:01:21 -0700 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E8A2D.5000401@chamonix.reportlab.co.uk> References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E8A2D.5000401@chamonix.reportlab.co.uk> Message-ID: On Wed, Apr 15, 2015 at 8:56 AM, Robin Becker wrote: > On 15/04/2015 16:49, Marcus Smith wrote: > .......... > >> >> agreed on the warning, but there is a documented workaround for this, that >> is to put the desired constraint for C at level 0 (i.e. in the install >> args >> or in your requirements file) >> >> see case #2 here >> https://pip.pypa.io/en/latest/user_guide.html#requirements-files >> >> indeed pushing all the requirements to level 0 is a way to solve the > dependency problem myself. well, not all, just the cases where an override to the default "first found, wins" routine, is needed again, I agree a warning would be a great, just noting that people can work around this (once they realize there's a problem), without having to manually order things (as you mentioned in your last post) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Wed Apr 15 18:52:05 2015 From: cournape at gmail.com (David Cournapeau) Date: Wed, 15 Apr 2015 12:52:05 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: On Wed, Apr 15, 2015 at 11:32 AM, Trishank Karthik Kuppusamy < trishank at nyu.edu> wrote: > On 15 April 2015 at 11:15, David Cournapeau wrote: > >> >> This is indeed the case. If you want to solve dependencies in a way that >> works well, you want an index that describes all your available package >> versions. >> >> While solving dependencies is indeed NP complete, they can be fairly fast >> in practice because of various specificities : each rule is generally only >> a few variables, and the rules have a specific form allowing for more >> efficient rule representation (e.g. "at most one of variable", etc...). In >> my experience, it is not more difficult than using graph-based algorithms, >> and >> >> FWIW, at Enthought, we are working on a pure python SAT solver for >> resolving dependencies, to solve some of those issues. I am actually >> hacking on it right at PyCon, we hope to have a first working version end >> of Q2, at which point it will be OSS, and reintegrated in my older project >> depsolver (https://github.com/enthought/depsolver). >> >> > Awesome! Then pip could use that in the near future :) > That's the goal. For various reasons, it ended up easier to develop the solver within our own package manager enstaller, but once done, extracting it as a separate library should not be too hard. It is for example designed to support various versioning schemes (for legacy reasons we can't use PEP440 just yet). Regarding speed, initial experiments showed that even for relatively deep graphs, the running time is taken outside the SAT solver (e.g. 
to generate the rules, you need to parse the version of every package you
want to consider, and parsing 1000s of PEP440 versions is slow :) ).

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From reinout at vanrees.org  Wed Apr 15 19:02:51 2015
From: reinout at vanrees.org (Reinout van Rees)
Date: Wed, 15 Apr 2015 19:02:51 +0200
Subject: [Distutils] buildout/setuptools slow as it scans the whole
	project dir
In-Reply-To:
References:
Message-ID:

Reinout van Rees wrote on 15-04-15 at 17:01:
>
> So.... it almost seems as if there's no solution to this?
>
> Or can someone give a hint on os.walk performance relative to VMs?

Bingo! I think I've found the real problem: symlinks between filesystems.

The project is on the host file system and is mounted via vmware/vagrant
inside the VM as /vagrant. So my project is on /vagrant/projectname.

I have buildout set up with a download and egg cache, in
~/.buildout/downloads and ~/.buildout/eggs respectively.

The omelette recipe makes symlinks in /vagrant/projectname/parts/omelette/
for all the eggs, pointing at ~/.buildout/eggs/eggname-version/

So: the symlink is FROM the mounted host filesystem at /vagrant to the
VM's own ubuntu filesystem. And apparently symlinks from such a filesystem
to another aren't efficient.

As a test, I've set the egg-cache directory to /vagrant/.buildout/eggs/

And yes: buildout performance is great again!

I'll try to write it up a bit more eloquently and clearly later on.
Dinner is calling now.
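[Editor's note: a diagnosis like the above can be confirmed by measuring where the stat time actually goes while walking the tree. A standalone diagnostic sketch of mine — not buildout or setuptools code:]

```python
import os
import time
from collections import defaultdict

def stat_profile(root):
    """Accumulate os.stat() time per directory; unusually slow directories
    hint at symlinks crossing a (virtual) filesystem boundary."""
    totals = defaultdict(float)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            started = time.time()
            try:
                os.stat(path)
            except OSError:
                pass  # dangling symlink, permission problem, etc.
            totals[dirpath] += time.time() - started
    # Ten most expensive directories first.
    return sorted(totals.items(), key=lambda item: -item[1])[:10]

for dirpath, seconds in stat_profile("."):
    print("%8.4fs  %s" % (seconds, dirpath))
```

Pointing it at a project directory should make a parts/omelette/ full of cross-filesystem symlinks stand out immediately.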
(Let me indulge in a quote by my brother Maurits, not unknown on this
mailinglist: "When dinner is calling, you need a rifle.")

Reinout

--
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"Learning history by destroying artifacts is a time-honored atrocity"

From contact at ionelmc.ro  Wed Apr 15 19:04:45 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 15 Apr 2015 20:04:45 +0300
Subject: [Distutils] buildout/setuptools slow as it scans the whole
	project dir
In-Reply-To:
References:
Message-ID:

Could this be the same problem
https://bitbucket.org/pypa/setuptools/issue/249/have-a-way-to-ingore-specific-dirs-when
?

I frequently have this issue (.tox dir can sometimes get quite large) and I
resorted to monkeypatching setuptools (I know, right?) in
~/.local/lib/python2.7/site-packages/usercustomize.py:

import setuptools
from distutils import filelist
import os

def findall(dir=os.curdir, original=filelist.findall,
            listdir=os.listdir, basename=os.path.basename):
    os.listdir = lambda path: [i for i in listdir(path)
                               if '/.tox/' not in i
                               and not i.startswith('.tox/')]
    try:
        return original(dir)
    finally:
        os.listdir = listdir

filelist.findall = findall

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

On Wed, Apr 15, 2015 at 5:01 PM, Reinout van Rees wrote:

> Hi,
>
> In some of my projects, buildout takes a looooooong time to complete.
> Sometimes the same problem occurs on the server or on another developer's
> laptop, sometimes not. The difference is something like "15 seconds if
> there's no problem" and "5 minutes if the problem occurs".
>
> It is terribly hard to pinpoint the exact problem and/or cause. An error
> in the setup.py or MANIFEST.in is unlikely, as the very same project
> might take 5 minutes locally and 15 seconds on the server or vice versa...
>
> Anyway, if I run "bin/buildout" as "strace -f bin/buildout", I see a lot
> of "stat" calls.
> Setuptools walks through all the files inside my project
> dir. Including parts/omelette/* and, if available, a bower_components/
> directory full of thousands of javascript files. Removing some of these
> directories (which running the buildout re-creates) fixes the speed issue
> for one run.
>
> I modified my local buildout copy to run "setup.py develop" with a -v
> instead of a -q option. This way I found out where it approximately happens:
>
>
> /usr/bin/python /tmp/tmp6UdsMl -v develop -mxN -d
> /vagrant/sso/develop-eggs/tmpfioc1Ibuild
> running develop
> running egg_info
> writing requirements to sso.egg-info/requires.txt
> writing sso.egg-info/PKG-INFO
> writing top-level names to sso.egg-info/top_level.txt
> writing dependency_links to sso.egg-info/dependency_links.txt
> writing entry points to sso.egg-info/entry_points.txt
>
> ### This is where the process seems to stop for a couple of minutes to scan
> all the files.
>
> reading manifest file 'sso.egg-info/SOURCES.txt'
> reading manifest template 'MANIFEST.in'
> writing manifest file 'sso.egg-info/SOURCES.txt'
> running build_ext
> Creating /vagrant/sso/develop-eggs/tmpfioc1Ibuild/sso.egg-link (link to .)
>
>
>
> The MANIFEST.in looks like this:
>
> # Include docs in the root.
> include *.rst
> # Include everything in our project directory (sso/views.py,
> sso/static/some.js, etc)
> graft sso
>
>
> It is a git project.
The setup.py looks like this: > > > from setuptools import setup > > version = '1.1.dev0' > > long_description = '\n\n'.join([ > open('README.rst').read(), > open('CREDITS.rst').read(), > open('CHANGES.rst').read(), > ]) > > install_requires = [ > 'Django >= 1.4.2, < 1.7', > 'django-nose', > 'lizard-auth-server', > 'gunicorn', > 'raven', > 'werkzeug', > 'south', > 'django-auth-ldap', > 'django-mama-cas', > ], > > setup(name='sso', > version=version, > description="Single sign on server (and more) for lizard", > long_description=long_description, > # Get strings from http://www.python.org/pypi?% > 3Aaction=list_classifiers > classifiers=['Programming Language :: Python', > 'Framework :: Django', > ], > keywords=[], > author='Do not blame Reinout', > author_email='reinout.vanrees at nelen-schuurmans.nl', > url='', > license='GPL', > packages=['sso'], > zip_safe=False, > install_requires=install_requires, > entry_points={ > 'console_scripts': [ > ]}, > ) > > > > Conclusion for me: something somewhere in setuptools is reading my whole > project folder. It does it after "writing entry points to > sso.egg-info/entry_points.txt" and before "reading manifest file > 'sso.egg-info/SOURCES.txt'". The SOURCES.txt itself is small: > > > CHANGES.rst > CREDITS.rst > LICENSE.rst > MANIFEST.in > README.rst > setup.cfg > setup.py > sso/__init__.py > sso/__init__.pyc > sso/admin.py > sso/developmentsettings.py > sso/developmentsettings.pyc > sso/models.py > sso/models.pyc > sso/settings.py > sso/settings.pyc > sso/stagingsettings.py > sso/tests.py > sso/urls.py > sso/views.py > sso.egg-info/PKG-INFO > sso.egg-info/SOURCES.txt > sso.egg-info/dependency_links.txt > sso.egg-info/entry_points.txt > sso.egg-info/not-zip-safe > sso.egg-info/requires.txt > > > Hm. I see some .pyc files in there. Something that needs fixing, but not > the cause, I think. > > Is there something I'm missing? Where should I look? I cannot find > "writing entry points to..." 
in the setuptools source code right away. > > > > Reinout > > -- > Reinout van Rees http://reinout.vanrees.org/ > reinout at vanrees.org http://www.nelen-schuurmans.nl/ > "Learning history by destroying artifacts is a time-honored atrocity" > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trishank at nyu.edu Wed Apr 15 17:32:33 2015 From: trishank at nyu.edu (Trishank Karthik Kuppusamy) Date: Wed, 15 Apr 2015 11:32:33 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: On 15 April 2015 at 11:15, David Cournapeau wrote: > > This is indeed the case. If you want to solve dependencies in a way that > works well, you want an index that describes all your available package > versions. > > While solving dependencies is indeed NP complete, they can be fairly fast > in practice because of various specificities : each rule is generally only > a few variables, and the rules have a specific form allowing for more > efficient rule representation (e.g. "at most one of variable", etc...). In > my experience, it is not more difficult than using graph-based algorithms, > and > > FWIW, at Enthought, we are working on a pure python SAT solver for > resolving dependencies, to solve some of those issues. I am actually > hacking on it right at PyCon, we hope to have a first working version end > of Q2, at which point it will be OSS, and reintegrated in my older project > depsolver (https://github.com/enthought/depsolver). > > Awesome! Then pip could use that in the near future :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From reinout at vanrees.org  Wed Apr 15 20:38:55 2015
From: reinout at vanrees.org (Reinout van Rees)
Date: Wed, 15 Apr 2015 20:38:55 +0200
Subject: [Distutils] buildout/setuptools slow as it scans the whole
	project dir
In-Reply-To:
References:
Message-ID:

Ionel Cristian Mărieș wrote on 15-04-15 at 19:04:
> Could this be the same problem
> https://bitbucket.org/pypa/setuptools/issue/249/have-a-way-to-ingore-specific-dirs-when
> ?

Yes, that's the same. Though: normally reading such a dir structure
shouldn't take long. See my other answer, in my case the problem was
symlinks going to a different filesystem.

>
> I frequently have this issue (.tox dir can sometimes get quite large)
> and I resorted to monkeypatching setuptools (I know, right?) in
> ~/.local/lib/python2.7/site-packages/usercustomize.py:

I like it :-)

Reinout

--
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"Learning history by destroying artifacts is a time-honored atrocity"

From fungi at yuggoth.org  Wed Apr 15 20:44:36 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 15 Apr 2015 18:44:36 +0000
Subject: [Distutils] name of the dependency problem
In-Reply-To: <552E46D0.1020106@chamonix.reportlab.co.uk>
References: <552E46D0.1020106@chamonix.reportlab.co.uk>
Message-ID: <20150415184436.GL2456@yuggoth.org>

On 2015-04-15 12:09:04 +0100 (+0100), Robin Becker wrote:
> After again finding that pip doesn't have a correct dependency
> resolution solution a colleague and I discussed the nature of the
> problem.
[...]

Before the discussion of possible solutions heads too far afield,
it's worth noting that this was identified years ago and has a
pending feature request looking for people pitching in on
implementation. It's perhaps better discussed at
https://github.com/pypa/pip/issues/988 so as to avoid too much
repetition.
--
Jeremy Stanley

From chris.barker at noaa.gov  Wed Apr 15 22:40:23 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 15 Apr 2015 13:40:23 -0700
Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more
In-Reply-To:
References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io>
Message-ID:

On Tue, Apr 14, 2015 at 8:57 PM, Kevin Horn wrote:

> Personally, I'm not a fan of auto-installing,
>
>
> I'm with Paul on this one. It seems to me that auto-installing the
> extension would destroy most of the advantages of distributing the
> extensions separately.
>

Exactly -- I actually tossed that one out there because I wanted to know
what folks were thinking, but also a bit of bait ;-) -- we've got a
conflict here:

1) These possible extensions are potentially dangerous, etc, and should be
well reviewed and not just tossed in there.

2) People (and I'm one of them) really, really want "pip install" to "just
work". (or conda install or enpkg, or...).

If it's going to "just work", then it needs to find and install the
extensions auto-magically, and then we're really not very far from running
arbitrary code...

Would that be that different from the fact that installing a given package
automatically installs all sorts of other packages -- and most of us don't
give that a good review before running install... (I was just showing off
Sphinx to a class last night -- quite amazing what gets brought in with a
pip install of Sphinx, including pytz -- I have no idea why.)

But at the end of the day, I don't care much either. I'm trusting that the
Sphinx folks aren't doing something ridiculous or dangerous.

Which brings us back to the "review of extensions" thing -- I think it's
less about the end user checking it out and making a decision about it, but
about the package builder doing that. I have a package I want to be easy to
install on Windows -- so I go look for an extension that does the Start
Menu, etc.
Indeed, that kind of thing "should" be part of pip and/or wheel, but it
would probably be more successful if it were done as third party
extensions -- perhaps over the years, the ones that rise to the top of
usefulness can become standards.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com  Wed Apr 15 23:23:54 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 15 Apr 2015 22:23:54 +0100
Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more
In-Reply-To:
References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io>
Message-ID:

On 15 April 2015 at 21:40, Chris Barker wrote:
> Which brings us back to the "review of extensions" thing -- I think it's
> less about the end user checking it out and making a decision about it, but
> about the package builder doing that. I have a package I want to be easy to
> install on Windows -- so I go look for an extension that does the Start
> Menu, etc. Indeed, that kind of thing "should" be part of pip and/or wheel,
> but it would probably be more successful if it were done as third party
> extensions -- perhaps over the years, the ones that rise to the top of
> usefulness can become standards.

In the PEP, there's a concept of "optional" vs "required" extensions. See
https://www.python.org/dev/peps/pep-0426/#required-extension-handling.
This is crucial - I've no problem if a particular extension is used by a
project, as long as it's optional. I won't install it, so it's fine. It
seems to me that pip *has* to ignore missing optional extensions, for this
reason. Of course, that introduces the converse problem, which is how
would people who might want that extension to be activated, know that a
project used it?
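[Editor's note: the optional/required split described above is mechanical to act on from the metadata: the PEP 426 draft attaches an installer_must_handle flag to each extension, so an installer only has to refuse the install when a flag it cannot honour is true. A sketch under that assumption — the "vendor.start_menu" name is invented for illustration, while "python.commands" comes from PEP 459:]

```python
import json

def unhandled_required_extensions(metadata, supported):
    """Return extensions flagged installer_must_handle that we can't honour."""
    missing = []
    for name, ext in metadata.get("extensions", {}).items():
        if ext.get("installer_must_handle", False) and name not in supported:
            missing.append(name)
    return missing

meta = json.loads("""{
  "extensions": {
    "python.commands": {"installer_must_handle": false},
    "vendor.start_menu": {"installer_must_handle": true}
  }
}""")

print(unhandled_required_extensions(meta, supported={"python.commands"}))
# ['vendor.start_menu'] -> this install must be refused; an unknown
# optional extension would simply be ignored.
```

In other words, an installer never needs to *discover* extensions, only to decline installs whose required ones it cannot honour.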
Critical extensions, on the other hand, are precisely that - the install
won't run without them. I'd hope that critical extensions will only be
used for things where the installation will be useless without them. But I
worry that some people may have a more liberal definition of "required"
than I do. To be honest, I can't think of *anything* that I'd consider a
"required" extension. Console script wrappers aren't essential, for
example (you can use "python -m pip" even if pip.exe isn't present). More
generally, none of the extensions in PEP 459 seem essential, in this
sense. Start menu entry writers wouldn't be essential, nor would COM
registration extensions necessarily be (most of pywin32's functionality
works fine if the COM components aren't registered). Beyond that I'm
struggling to think of things that might be extensions.

So, as long as the "optional" vs "required" distinction is respected,
people are conservative about deeming something as "essential", and a
missing optional extension doesn't stop an install, then I don't see
extensions as being a big issue. Based on the above, it's possibly valid
to allow "required" extensions to be auto-installed. It *is* a vector for
unexpected code execution, but maybe that's OK.

Paul

PS The idea of "Start Menu entries" has come up a lot here. To be clear, I
*don't* actually think such a thing is a good idea (far from it - I think
it's a pretty lousy idea) but it is a good example of something that
people think they ought to do, but in practice users have widely differing
views on what they prefer or will use, and a developer with limited
experience could easily create a dreadful user experience without meaning
to ("developer" here could either mean the extension developer, or the
package developer using the extension - both have opportunities to make a
horrible mess...)
So it's a good straw man for "an extension that some people will love and
others will hate" :-)

PPS I'm inclined to think that the PEP should say "Installation tools MUST
NOT fail if installer_must_handle is set to false for an extension that
the tool cannot process. Installation tools SHOULD NOT attempt to install
plugins or similar optional functionality to handle an extension with
installer_must_handle set to false, except with explicit approval from the
end user."

From ncoghlan at gmail.com  Thu Apr 16 01:26:41 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 15 Apr 2015 19:26:41 -0400
Subject: [Distutils] version for VCS/local builds between alpha and release
In-Reply-To:
References:
Message-ID:

On 14 Apr 2015 04:04, "Robert Collins" wrote:
>
> Tl;dr: I would like to change PEP-440 to remove the stigmata around dev
> versions that come between pre-releases and releases.
>
> The basic scenario here is developers and CD deployers building
> versions from VCS of arbitrary commits. So we need to be able to
> deliver strictly increasing version numbers, automatically, without
> interfering with actual publishing of pre-release and release versions
> to PyPI.

We'd had a previous request for a similar clarification around local
version identifiers, so I've finally tweaked the PEP to clarify that both
that admonition and this one relate specifically to *published* versions,
rather than what folks do with their local or CI builds.

See https://hg.python.org/peps/rev/bf4ffb364faf for the exact wording
change.

Cheers,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robertc at robertcollins.net  Thu Apr 16 01:33:34 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Thu, 16 Apr 2015 11:33:34 +1200
Subject: [Distutils] version for VCS/local builds between alpha and release
In-Reply-To:
References:
Message-ID:

Thanks. There's still a little too much negative there for me - if I
propose a diff here would that be ok?
On 16 April 2015 at 11:26, Nick Coghlan wrote:
>
> On 14 Apr 2015 04:04, "Robert Collins" wrote:
>>
>> Tl;dr: I would like to change PEP-440 to remove the stigmata around dev
>> versions that come between pre-releases and releases.
>>
>> The basic scenario here is developers and CD deployers building
>> versions from VCS of arbitrary commits. So we need to be able to
>> deliver strictly increasing version numbers, automatically, without
>> interfering with actual publishing of pre-release and release versions
>> to PyPI.
>
> We'd had a previous request for a similar clarification around local version
> identifiers, so I've finally tweaked the PEP to clarify that both that
> admonition and this one relate specifically to *published* versions, rather
> than what folks do with their local or CI builds.
>
> See https://hg.python.org/peps/rev/bf4ffb364faf for the exact wording
> change.
>
> Cheers,
> Nick.
>

--
Robert Collins
Distinguished Technologist
HP Converged Cloud

From ncoghlan at gmail.com  Thu Apr 16 01:44:17 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 15 Apr 2015 19:44:17 -0400
Subject: [Distutils] version for VCS/local builds between alpha and release
In-Reply-To:
References:
Message-ID:

On 15 April 2015 at 19:33, Robert Collins wrote:
> Thanks. There's still a little too much negative there for me - if I
> propose a diff here would that be ok?

Sure, or a PR against the GitHub mirror:
https://github.com/pypa/interoperability-peps/blob/master/pep-0440-versioning.rst

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From Steve.Dower at microsoft.com  Thu Apr 16 01:48:33 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Wed, 15 Apr 2015 23:48:33 +0000
Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more
In-Reply-To:
References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> ,
Message-ID:

On the Start Menu suggestion, I think that's a horrible idea.
Pip is not the system package manager and it shouldn't be changing the
system. Unversioned script launchers are in the same category, but aren't
quite as offensive.

I know it's only a hypothetical, but I'd much rather it didn't get
repeated so often that it actually happens. There are better tools for
making app installers, as opposed to package installers.

Cheers,
Steve

Top-posted from my Windows Phone
________________________________
From: Paul Moore
Sent: 4/15/2015 17:24
To: Chris Barker
Cc: distutils-sig
Subject: Re: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more

On 15 April 2015 at 21:40, Chris Barker wrote:
> Which brings us back to the "review of extensions" thing -- I think it's
> less about the end user checking it out and making a decision about it, but
> about the package builder doing that. I have a package I want to be easy to
> install on Windows -- so I go look for an extension that does the Start
> Menu, etc. Indeed, that kind of thing "should" be part of pip and/or wheel,
> but it would probably be more successful if it were done as third party
> extensions -- perhaps over the years, the ones that rise to the top of
> usefulness can become standards.

In the PEP, there's a concept of "optional" vs "required" extensions. See
https://www.python.org/dev/peps/pep-0426/#required-extension-handling.
This is crucial - I've no problem if a particular extension is used by a
project, as long as it's optional. I won't install it, so it's fine. It
seems to me that pip *has* to ignore missing optional extensions, for this
reason. Of course, that introduces the converse problem, which is how
would people who might want that extension to be activated, know that a
project used it? Critical extensions, on the other hand, are precisely
that - the install won't run without them. I'd hope that critical
extensions will only be used for things where the installation will be
useless without them.
But I worry that some people may have a more liberal definition of
"required" than I do. To be honest, I can't think of *anything* that I'd
consider a "required" extension. Console script wrappers aren't essential,
for example (you can use "python -m pip" even if pip.exe isn't present).
More generally, none of the extensions in PEP 459 seem essential, in this
sense. Start menu entry writers wouldn't be essential, nor would COM
registration extensions necessarily be (most of pywin32's functionality
works fine if the COM components aren't registered). Beyond that I'm
struggling to think of things that might be extensions.

So, as long as the "optional" vs "required" distinction is respected,
people are conservative about deeming something as "essential", and a
missing optional extension doesn't stop an install, then I don't see
extensions as being a big issue. Based on the above, it's possibly valid
to allow "required" extensions to be auto-installed. It *is* a vector for
unexpected code execution, but maybe that's OK.

Paul

PS The idea of "Start Menu entries" has come up a lot here. To be clear, I
*don't* actually think such a thing is a good idea (far from it - I think
it's a pretty lousy idea) but it is a good example of something that
people think they ought to do, but in practice users have widely differing
views on what they prefer or will use, and a developer with limited
experience could easily create a dreadful user experience without meaning
to ("developer" here could either mean the extension developer, or the
package developer using the extension - both have opportunities to make a
horrible mess...)

So it's a good straw man for "an extension that some people will love and
others will hate" :-)

PPS I'm inclined to think that the PEP should say "Installation tools MUST
NOT fail if installer_must_handle is set to false for an extension that
the tool cannot process.
Installation tools SHOULD NOT attempt to install plugins or similar optional functionality to handle an extension with installer_must_handle set to false, except with explicit approval from the end user." _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Apr 16 06:00:00 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 16 Apr 2015 16:00:00 +1200 Subject: [Distutils] name of the dependency problem In-Reply-To: References: <552E46D0.1020106@chamonix.reportlab.co.uk> <552E68D7.50909@nyu.edu> Message-ID: On 16 April 2015 at 04:52, David Cournapeau wrote: > > > On Wed, Apr 15, 2015 at 11:32 AM, Trishank Karthik Kuppusamy > wrote: >> >> On 15 April 2015 at 11:15, David Cournapeau wrote: >>> >>> >>> This is indeed the case. If you want to solve dependencies in a way that >>> works well, you want an index that describes all your available package >>> versions. >>> >>> While solving dependencies is indeed NP complete, they can be fairly fast >>> in practice because of various specificities : each rule is generally only a >>> few variables, and the rules have a specific form allowing for more >>> efficient rule representation (e.g. "at most one of variable", etc...). In >>> my experience, it is not more difficult than using graph-based algorithms, >>> and >>> >>> FWIW, at Enthought, we are working on a pure python SAT solver for >>> resolving dependencies, to solve some of those issues. I am actually hacking >>> on it right at PyCon, we hope to have a first working version end of Q2, at >>> which point it will be OSS, and reintegrated in my older project depsolver >>> (https://github.com/enthought/depsolver). >>> >> >> Awesome! Then pip could use that in the near future :) > > > That's the goal. 
For various reasons, it ended up easier to develop the > solver within our own package manager enstaller, but once done, extracting > it as a separate library should not be too hard. It is for example designed > to support various versioning schemes (for legacy reasons we can't use > PEP440 just yet). > > Regarding speed, initial experiments showed that even for relatively deep > graphs, the running time is taken outside the SAT solver (e.g. to generate > the rules, you need to parse the version of every package you want to > consider, and parsing 1000s of PEP440 versions is slow :) ). My intent was to use a simple backtracking enhancement to the existing pip code, because: - there is no index of dependencies today - many packages have no wheels, so we have to run arbitrary code in a new process to produce dependency metadata - so only considering the versions we have to process seems preferrable. Are you working on integrating your thing into PIP? My current work queue is roughly: - declarative setup_requires and install_requires/extra_requires support in pip and setuptools - handling conflicts with already installed packages (https://github.com/pypa/pip/issues/2687) - then teach pip how to backtrack If you're actively working on integrating it into pip, cool. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From wes.turner at gmail.com Thu Apr 16 06:36:50 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 15 Apr 2015 23:36:50 -0500 Subject: [Distutils] name of the dependency problem In-Reply-To: <552E46D0.1020106@chamonix.reportlab.co.uk> References: <552E46D0.1020106@chamonix.reportlab.co.uk> Message-ID: IIRC, Conda (BSD License) takes a SAT solving (e.g. Sudoku) approach: http://continuum.io/blog/new-advances-in-conda (such as installing "pycosat" (MIT License) when I install conda). 
Some links to the source: * https://github.com/conda/conda/blob/master/conda/logic.py * https://github.com/conda/conda/blob/master/tests/test_logic.py * https://github.com/conda/conda/blob/master/conda/resolve.py * https://github.com/conda/conda/blob/master/tests/test_resolve.py ... https://github.com/conda/conda/blob/master/conda/toposort.py On Apr 15, 2015 6:14 AM, "Robin Becker" wrote: > After again finding that pip doesn't have a correct dependency resolution > solution a colleague and I discussed the nature of the problem. We examined > the script capture of our install and it seems as though when presented with > > > level 0 A > A level 1 1.4<= C > > > level 0 B > B level 1 1.6<= C <1.7 > > pip manages to download version 1.8 of C (Django) using A's requirement, > but never even warns us that the B requirement of C was violated. Surely > even in the absence of a resolution pip could raise a warning at the end. > > Anyhow after some discussion I realize I don't even know the name of the > problem that pip should try to solve, is there some tree / graph problem > that corresponds? Searching on dependency seems to lead to topological > sorts of one kind or another, but here we seem to have nodes with discrete > values attached so in the above example we might have (assuming only > singleton A & B) > > R --> A > R --> B > > A --> C-1.4 > A --> C-1.6 > A --> C-1.6.11 > A --> C-1.7 > A --> C-1.8 > > B --> C-1.6 > B --> C-1.6.11 > > so looking at C equivalent nodes seems to allow a solution set. Are there > any real problem descriptions / solutions to this kind of problem? > -- > Robin Becker > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
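Robin's example can be made concrete with a small, self-contained sketch. This is only an illustration of the constraint problem, not pip's actual implementation; versions are modelled as tuples so that 1.8 compares correctly against 1.6.11.

```python
# Toy model of Robin's example: A needs C >= 1.4, B needs 1.6 <= C < 1.7.
# Versions are tuples so they compare numerically ((1, 8) > (1, 6, 11)),
# which plain string comparison would get wrong.
CANDIDATES = [(1, 4), (1, 6), (1, 6, 11), (1, 7), (1, 8)]

def satisfies(version, constraints):
    """True if version lies in every half-open range (lo, hi); hi=None means unbounded."""
    return all(lo <= version and (hi is None or version < hi)
               for lo, hi in constraints)

def greedy_pick(constraints):
    """What pip did in Robin's trace: honour only the first requirement seen."""
    return max(v for v in CANDIDATES if satisfies(v, constraints[:1]))

def solve(constraints):
    """Pick the newest candidate satisfying *all* requirements, or None."""
    viable = [v for v in CANDIDATES if satisfies(v, constraints)]
    return max(viable) if viable else None

constraints = [((1, 4), None),    # A: 1.4 <= C
               ((1, 6), (1, 7))]  # B: 1.6 <= C < 1.7

greedy = greedy_pick(constraints)   # (1, 8) -- silently violates B's range
correct = solve(constraints)        # (1, 6, 11) -- newest version both accept
```

A real resolver must also backtrack when a choice for one package invalidates another package's options, which is where the SAT formulation used by Enthought and Conda comes in.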
URL: From p.f.moore at gmail.com Thu Apr 16 09:08:05 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 16 Apr 2015 08:08:05 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 16 April 2015 at 00:48, Steve Dower wrote: > On the Start Menu suggestion, I think that's a horrible idea. Pip is not the > system package manager and it shouldn't be changing the system. Unversioned > script launchers are in the same category, but aren't quite as offensive. > > I know it's only a hypothetical, but I'd much rather it didn't get repeated > so often that it actually happens. There are better tools for making app > installers, as opposed to package installers. Sorry - I agree it's an awful idea. Older wininst installers such as the pywin32 (and I think the PyQT one) one do this, I wanted to use it as an example of abuse of postinstall scripts that should *not* be perpetuated in any new scheme. Just to expand on another point in my mail - I'd like *anyone* to provide an example of a genuine use case for something they think should be a "required" installer extension. I'm not sure such a thing actually exists... Paul From p.f.moore at gmail.com Thu Apr 16 10:21:01 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 16 Apr 2015 09:21:01 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: <552F6533.9060504@timgolden.me.uk> References: <552F6533.9060504@timgolden.me.uk> Message-ID: On 16 April 2015 at 08:30, Tim Golden wrote: >> Sorry - I agree it's an awful idea. Older wininst installers such as >> the pywin32 (and I think the PyQT one) one do this, I wanted to use it >> as an example of abuse of postinstall scripts that should *not* be >> perpetuated in any new scheme. > > FWIW I've just had a to-and-fro by email with Mark Hammond. I gather > that he's now given Glyph access to the PyPI & hg setup for pywin32. 
> > He's also happy to consider changes to the setup process to support > wheel/virtualenv/postinstall improvements. There's been a side > discussion on the pywin32 list about which versions of Python pywin32 > should continue to support going forward, which obviously interacts with > the idea of making it wheel/virtualenv-friendly. Thanks for involving Mark in this. While pywin32 isn't the only project with a postinstall script, it's one of the most complex that I know of, and a good example to work from when looking at what projects need. > I'm not sure what Glyph's plan is at this point -- doubtless he can > speak for himself. I gather from Paul's comments earlier that he's not a > particular fan of pywin32. If the thing seems to have legs, I'm happy to > coordinate changes to the setup. (I am, technically, a pywin32 committer > although I've never made use of that fact). To be clear, I don't have that much of a problem with pywin32. I don't use it myself, these days, but that's because (a) it's a very big, monolithic dependency, and (b) it's not usable directly with pip. The problem I have with it is that a lot of projects use it for simple access to the Win32 API (uses which can easily be handled by ctypes, possibly with slightly more messy code) and that means that they inherit the pywin32 problems. So I advise against pywin32 because of that, *not* because I think it's a problem itself, when used for situations where there isn't a better alternative. > The particular issue I'm not sure about is: how does Paul see pywin32's > postinstall steps working when they *are* needed, ie when someone wants > to install pywin32 as a wheel and wants the COM registration to happen? > Or is that a question of: run these steps manually once pip's completed? To be honest, for the cases I encounter frequently, these requirements don't come up. So my experience here goes back to the days when I used pywin32 to write COM servers and services, which was quite a while ago. 
From what I recall, pywin32 has the following steps in its postinstall: 1. Create start menu entries. My view is that this should simply be dropped. Python packages should never be adding start menu entries. Steve Dower has confirmed he agrees with this view earlier on this thread. 2. Move the pywin32 DLLs to the system directory. I don't see any way this is compatible with per-user or virtualenv installs, so I don't know how to address this, other than again dropping the step. I've no idea why this is necessary, or precisely which parts of pywin32 require it (I've a recollection from a long time ago that "services written in Python" was the explanation, but that's all I know). But presumably such use cases already break with a per-user Python install? 3. Registering the ActiveX COM DLLs. I believe this is mostly obsolete technology these days (who still uses ActiveX Scripting in anything other than VBScript or maybe a bit of JScript?) I'd drop this and make it a step that the user has to do manually if they want it. In place of it, pywin32 could provide an entry point to register the DLLs ("python -m pywin32 --register-dlls" or something). Presumably users who need it would understand the implications, and how to avoid registering multiple environments or forgetting to unregister before dropping an environment, etc. That sort of pitfall isn't something Python should try to solve automatically via pre- and post-install scripts. 4. Registering help files. I never understood how that worked or why it was needed. So again, I'd say just drop it. Have I missed anything else? 
Paul From mail at timgolden.me.uk Thu Apr 16 09:30:59 2015 From: mail at timgolden.me.uk (Tim Golden) Date: Thu, 16 Apr 2015 08:30:59 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: Message-ID: <552F6533.9060504@timgolden.me.uk> On 16/04/2015 08:08, Paul Moore wrote: > On 16 April 2015 at 00:48, Steve Dower wrote: >> On the Start Menu suggestion, I think that's a horrible idea. Pip is not the >> system package manager and it shouldn't be changing the system. Unversioned >> script launchers are in the same category, but aren't quite as offensive. >> >> I know it's only a hypothetical, but I'd much rather it didn't get repeated >> so often that it actually happens. There are better tools for making app >> installers, as opposed to package installers. > > Sorry - I agree it's an awful idea. Older wininst installers such as > the pywin32 (and I think the PyQT one) one do this, I wanted to use it > as an example of abuse of postinstall scripts that should *not* be > perpetuated in any new scheme. FWIW I've just had a to-and-fro by email with Mark Hammond. I gather that he's now given Glyph access to the PyPI & hg setup for pywin32. He's also happy to consider changes to the setup process to support wheel/virtualenv/postinstall improvements. There's been a side discussion on the pywin32 list about which versions of Python pywin32 should continue to support going forward, which obviously interacts with the idea of making it wheel/virtualenv-friendly. I'm not sure what Glyph's plan is at this point -- doubtless he can speak for himself. I gather from Paul's comments earlier that he's not a particular fan of pywin32. If the thing seems to have legs, I'm happy to coordinate changes to the setup. (I am, technically, a pywin32 committer although I've never made use of that fact). 
The particular issue I'm not sure about is: how does Paul see pywin32's postinstall steps working when they *are* needed, ie when someone wants to install pywin32 as a wheel and wants the COM registration to happen? Or is that a question of: run these steps manually once pip's completed? TJG From mail at timgolden.me.uk Thu Apr 16 12:11:19 2015 From: mail at timgolden.me.uk (Tim Golden) Date: Thu, 16 Apr 2015 11:11:19 +0100 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: References: <552F6533.9060504@timgolden.me.uk> Message-ID: <552F8AC7.5050904@timgolden.me.uk> [cc-ing Mark H as he indicated he was open to be kept in the loop; also changed the title to reflect the shift of conversation] On 16/04/2015 09:21, Paul Moore wrote: > On 16 April 2015 at 08:30, Tim Golden wrote: >>> Sorry - I agree it's an awful idea. Older wininst installers such as >>> the pywin32 (and I think the PyQT one) one do this, I wanted to use it >>> as an example of abuse of postinstall scripts that should *not* be >>> perpetuated in any new scheme. >> >> FWIW I've just had a to-and-fro by email with Mark Hammond. I gather >> that he's now given Glyph access to the PyPI & hg setup for pywin32. >> >> He's also happy to consider changes to the setup process to support >> wheel/virtualenv/postinstall improvements. There's been a side >> discussion on the pywin32 list about which versions of Python pywin32 >> should continue to support going forward, which obviously interacts with >> the idea of making it wheel/virtualenv-friendly. > > Thanks for involving Mark in this. While pywin32 isn't the only > project with a postinstall script, it's one of the most complex that I > know of, and a good example to work from when looking at what projects > need. > >> I'm not sure what Glyph's plan is at this point -- doubtless he can >> speak for himself. I gather from Paul's comments earlier that he's not a >> particular fan of pywin32. 
If the thing seems to have legs, I'm happy to >> coordinate changes to the setup. (I am, technically, a pywin32 committer >> although I've never made use of that fact). > > To be clear, I don't have that much of a problem with pywin32. I don't > use it myself, these days, but that's because (a) it's a very big, > monolithic dependency, and (b) it's not usable directly with pip. The > problem I have with it is that a lot of projects use it for simple > access to the Win32 API (uses which can easily be handled by ctypes, > possibly with slightly more messy code) and that means that they > inherit the pywin32 problems. So I advise against pywin32 because of > that, *not* because I think it's a problem itself, when used for > situations where there isn't a better alternative. > >> The particular issue I'm not sure about is: how does Paul see pywin32's >> postinstall steps working when they *are* needed, ie when someone wants >> to install pywin32 as a wheel and wants the COM registration to happen? >> Or is that a question of: run these steps manually once pip's completed? > > To be honest, for the cases I encounter frequently, these requirements > don't come up. So my experience here goes back to the days when I used > pywin32 to write COM servers and services, which was quite a while > ago. > > From what I recall, pywin32 has the following steps in its postinstall: > > 1. Create start menu entries. My view is that this should simply be > dropped. Python packages should never be adding start menu entries. > Steve Dower has confirmed he agrees with this view earlier on this > thread. > 2. Move the pywin32 DLLs to the system directory. I don't see any way > this is compatible with per-user or virtualenv installs, so I don't > know how to address this, other than again dropping the step. 
I've no > idea why this is necessary, or precisely which parts of pywin32 > require it (I've a recollection from a long time ago that "services > written in Python" was the explanation, but that's all I know). But > presumably such use cases already break with a per-user Python > install? > 3. Registering the ActiveX COM DLLs. I believe this is mostly obsolete > technology these days (who still uses ActiveX Scripting in anything > other than VBScript or maybe a bit of JScript?) I'd drop this and make > it a step that the user has to do manually if they want it. In place > of it, pywin32 could provide an entry point to register the DLLs > ("python -m pywin32 --register-dlls" or something). Presumably users > who need it would understand the implications, and how to avoid > registering multiple environments or forgetting to unregister before > dropping an environment, etc. That sort of pitfall isn't something > Python should try to solve automatically via pre- and post- install > scripts. > 4. Registering help files. I never understood how that worked or why > it was needed. So again, I'd say just drop it. Really, pywin32 is several things: a set of libraries (win32api, win32file, etc.); some system-level support for various things (COM registration, Service support etc.); and a development/editing environment (pythonwin). I see this ending up as (respectively): as venv-friendly wheel; a py -m script of the kind Paul suggests; and an installable app with the usual start menu icons etc. In my copious spare time I'll at least try to visit the pywin32 codebase to see how viable all this is. Feel free to challenge my thoughts on the matter. 
TJG From kevin.horn at gmail.com Thu Apr 16 15:43:22 2015 From: kevin.horn at gmail.com (Kevin Horn) Date: Thu, 16 Apr 2015 08:43:22 -0500 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: <552F8AC7.5050904@timgolden.me.uk> References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: Tim, As a long time user, I think you're right on the money. My only concern is how to manage the transition in user experience, as moving to what you've described (which I totally approve of, if it's feasible) will be a significant change, and may break user expectations. I think maybe the best thing to do is to change the existing binary installer package to: - install the included wheel in the system python environment - run the various post-install scripts (py -m) - install pythonwin, along with start menu icons, etc. Those that want to use pywin32 in a virtualenv (or just without all the system changes) could simply install the wheel (or even an sdist, perhaps) from the command line using pip, and then perform whatever other steps they want manually. This would allow those who are installing using the installer package (which I assume is almost everybody, right?) to get a similar experience to the current one, while those wanting more control (use in virtualenvs, etc) to have that as well. I think the changes described have the potential to be a big win. On Thu, Apr 16, 2015 at 5:11 AM, Tim Golden wrote: > [cc-ing Mark H as he indicated he was open to be kept in the loop; also > changed the title to reflect the shift of conversation] > > On 16/04/2015 09:21, Paul Moore wrote: > > On 16 April 2015 at 08:30, Tim Golden wrote: > >>> Sorry - I agree it's an awful idea. Older wininst installers such as > >>> the pywin32 (and I think the PyQT one) one do this, I wanted to use it > >>> as an example of abuse of postinstall scripts that should *not* be > >>> perpetuated in any new scheme. 
> >> > >> FWIW I've just had a to-and-fro by email with Mark Hammond. I gather > >> that he's now given Glyph access to the PyPI & hg setup for pywin32. > >> > >> He's also happy to consider changes to the setup process to support > >> wheel/virtualenv/postinstall improvements. There's been a side > >> discussion on the pywin32 list about which versions of Python pywin32 > >> should continue to support going forward, which obviously interacts with > >> the idea of making it wheel/virtualenv-friendly. > > > > Thanks for involving Mark in this. While pywin32 isn't the only > > project with a postinstall script, it's one of the most complex that I > > know of, and a good example to work from when looking at what projects > > need. > > > >> I'm not sure what Glyph's plan is at this point -- doubtless he can > >> speak for himself. I gather from Paul's comments earlier that he's not a > >> particular fan of pywin32. If the thing seems to have legs, I'm happy to > >> coordinate changes to the setup. (I am, technically, a pywin32 committer > >> although I've never made use of that fact). > > > > To be clear, I don't have that much of a problem with pywin32. I don't > > use it myself, these days, but that's because (a) it's a very big, > > monolithic dependency, and (b) it's not usable directly with pip. The > > problem I have with it is that a lot of projects use it for simple > > access to the Win32 API (uses which can easily be handled by ctypes, > > possibly with slightly more messy code) and that means that they > > inherit the pywin32 problems. So I advise against pywin32 because of > > that, *not* because I think it's a problem itself, when used for > > situations where there isn't a better alternative. > > > >> The particular issue I'm not sure about is: how does Paul see pywin32's > >> postinstall steps working when they *are* needed, ie when someone wants > >> to install pywin32 as a wheel and wants the COM registration to happen? 
> >> Or is that a question of: run these steps manually once pip's completed? > > > > To be honest, for the cases I encounter frequently, these requirements > > don't come up. So my experience here goes back to the days when I used > > pywin32 to write COM servers and services, which was quite a while > > ago. > > > > From what I recall, pywin32 has the following steps in its postinstall: > > > > 1. Create start menu entries. My view is that this should simply be > > dropped. Python packages should never be adding start menu entries. > > Steve Dower has confirmed he agrees with this view earlier on this > > thread. > > 2. Move the pywin32 DLLs to the system directory. I don't see any way > > this is compatible with per-user or virtualenv installs, so I don't > > know how to address this, other than again dropping the step. I've no > > idea why this is necessary, or precisely which parts of pywin32 > > require it (I've a recollection from a long time ago that "services > > written in Python" was the explanation, but that's all I know). But > > presumably such use cases already break with a per-user Python > > install? > > 3. Registering the ActiveX COM DLLs. I believe this is mostly obsolete > > technology these days (who still uses ActiveX Scripting in anything > > other than VBScript or maybe a bit of JScript?) I'd drop this and make > > it a step that the user has to do manually if they want it. In place > > of it, pywin32 could provide an entry point to register the DLLs > > ("python -m pywin32 --register-dlls" or something). Presumably users > > who need it would understand the implications, and how to avoid > > registering multiple environments or forgetting to unregister before > > dropping an environment, etc. That sort of pitfall isn't something > > Python should try to solve automatically via pre- and post- install > > scripts. > > 4. Registering help files. I never understood how that worked or why > > it was needed. So again, I'd say just drop it. 
> > Really, pywin32 is several things: a set of libraries (win32api, > win32file, etc.); some system-level support for various things (COM > registration, Service support etc.); and a development/editing > environment (pythonwin). > > I see this ending up as (respectively): as venv-friendly wheel; a py -m > script of the kind Paul suggests; and an installable app with the usual > start menu icons etc. > > In my copious spare time I'll at least try to visit the pywin32 codebase > to see how viable all this is. Feel free to challenge my thoughts on the > matter. > > > TJG > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- -- Kevin Horn -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Apr 16 15:50:06 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 16 Apr 2015 14:50:06 +0100 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: <552F8AC7.5050904@timgolden.me.uk> References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: On 16 April 2015 at 11:11, Tim Golden wrote: > Really, pywin32 is several things: a set of libraries (win32api, > win32file, etc.); some system-level support for various things (COM > registration, Service support etc.); and a development/editing > environment (pythonwin). That sounds about right. > I see this ending up as (respectively): as venv-friendly wheel; a py -m > script of the kind Paul suggests; and an installable app with the usual > start menu icons etc. Again, yes, that seems reasonable. Personally, for the uses I see of pywin32, it would make sense to split it into a number of separate wheels (win32api, win32file, ...) 
to reduce the dependency footprint for projects that only use one or two functions out of the whole thing, but honestly ctypes is probably still a better approach for that scenario, so the benefit of such a split is likely minimal. > In my copious spare time I'll at least try to visit the pywin32 codebase > to see how viable all this is. Feel free to challenge my thoughts on the > matter. I think you're going in the right direction. The hardest parts are likely to be where the Windows architecture interferes (COM registration and services). Paul. From dholth at gmail.com Thu Apr 16 15:51:08 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 16 Apr 2015 09:51:08 -0400 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: This seems like a good time to remind everyone that "wheel convert" can turn bdist_wininst .exe's to wheels. Both formats are designed to preserve all the distutils file categories. In the future it would be nice if the bdist_wininst .exe wrapper used wheel instead of its own format. Then a single file would both be the Windows installer and a valid wheel (except for the extension). On Thu, Apr 16, 2015 at 9:43 AM, Kevin Horn wrote: > Tim, > > As a long time user, I think you're right on the money. > > My only concern is how to manage the transition in user experience, as > moving to what you've described (which I totally approve of, if it's > feasible) will be a significant change, and may break user expectations. > > I think maybe the best thing to do is to change the existing binary > installer package to: > - install the included wheel in the system python environment > - run the various post-install scripts (py -m) > - install pythonwin, along with start menu icons, etc. 
> > Those that want to use pywin32 in a virtualenv (or just without all the > system changes) could simply install the wheel (or even an sdist, perhaps) > from the command line using pip, and then perform whatever other steps they > want manually. > > This would allow those who are installing using the installer package (which > I assume is almost everybody, right?) to get a similar experience to the > current one, while those wanting more control (use in virtualenvs, etc) to > have that as well. > > I think the changes described have the potential to be a big win. > > > On Thu, Apr 16, 2015 at 5:11 AM, Tim Golden wrote: >> >> [cc-ing Mark H as he indicated he was open to be kept in the loop; also >> changed the title to reflect the shift of conversation] >> >> On 16/04/2015 09:21, Paul Moore wrote: >> > On 16 April 2015 at 08:30, Tim Golden wrote: >> >>> Sorry - I agree it's an awful idea. Older wininst installers such as >> >>> the pywin32 (and I think the PyQT one) one do this, I wanted to use it >> >>> as an example of abuse of postinstall scripts that should *not* be >> >>> perpetuated in any new scheme. >> >> >> >> FWIW I've just had a to-and-fro by email with Mark Hammond. I gather >> >> that he's now given Glyph access to the PyPI & hg setup for pywin32. >> >> >> >> He's also happy to consider changes to the setup process to support >> >> wheel/virtualenv/postinstall improvements. There's been a side >> >> discussion on the pywin32 list about which versions of Python pywin32 >> >> should continue to support going forward, which obviously interacts >> >> with >> >> the idea of making it wheel/virtualenv-friendly. >> > >> > Thanks for involving Mark in this. While pywin32 isn't the only >> > project with a postinstall script, it's one of the most complex that I >> > know of, and a good example to work from when looking at what projects >> > need. >> > >> >> I'm not sure what Glyph's plan is at this point -- doubtless he can >> >> speak for himself. 
I gather from Paul's comments earlier that he's not >> >> a >> >> particular fan of pywin32. If the thing seems to have legs, I'm happy >> >> to >> >> coordinate changes to the setup. (I am, technically, a pywin32 >> >> committer >> >> although I've never made use of that fact). >> > >> > To be clear, I don't have that much of a problem with pywin32. I don't >> > use it myself, these days, but that's because (a) it's a very big, >> > monolithic dependency, and (b) it's not usable directly with pip. The >> > problem I have with it is that a lot of projects use it for simple >> > access to the Win32 API (uses which can easily be handled by ctypes, >> > possibly with slightly more messy code) and that means that they >> > inherit the pywin32 problems. So I advise against pywin32 because of >> > that, *not* because I think it's a problem itself, when used for >> > situations where there isn't a better alternative. >> > >> >> The particular issue I'm not sure about is: how does Paul see pywin32's >> >> postinstall steps working when they *are* needed, ie when someone wants >> >> to install pywin32 as a wheel and wants the COM registration to happen? >> >> Or is that a question of: run these steps manually once pip's >> >> completed? >> > >> > To be honest, for the cases I encounter frequently, these requirements >> > don't come up. So my experience here goes back to the days when I used >> > pywin32 to write COM servers and services, which was quite a while >> > ago. >> > >> > From what I recall, pywin32 has the following steps in its postinstall: >> > >> > 1. Create start menu entries. My view is that this should simply be >> > dropped. Python packages should never be adding start menu entries. >> > Steve Dower has confirmed he agrees with this view earlier on this >> > thread. >> > 2. Move the pywin32 DLLs to the system directory. 
I don't see any way >> > this is compatible with per-user or virtualenv installs, so I don't >> > know how to address this, other than again dropping the step. I've no >> > idea why this is necessary, or precisely which parts of pywin32 >> > require it (I've a recollection from a long time ago that "services >> > written in Python" was the explanation, but that's all I know). But >> > presumably such use cases already break with a per-user Python >> > install? >> > 3. Registering the ActiveX COM DLLs. I believe this is mostly obsolete >> > technology these days (who still uses ActiveX Scripting in anything >> > other than VBScript or maybe a bit of JScript?) I'd drop this and make >> > it a step that the user has to do manually if they want it. In place >> > of it, pywin32 could provide an entry point to register the DLLs >> > ("python -m pywin32 --register-dlls" or something). Presumably users >> > who need it would understand the implications, and how to avoid >> > registering multiple environments or forgetting to unregister before >> > dropping an environment, etc. That sort of pitfall isn't something >> > Python should try to solve automatically via pre- and post- install >> > scripts. >> > 4. Registering help files. I never understood how that worked or why >> > it was needed. So again, I'd say just drop it. >> >> Really, pywin32 is several things: a set of libraries (win32api, >> win32file, etc.); some system-level support for various things (COM >> registration, Service support etc.); and a development/editing >> environment (pythonwin). >> >> I see this ending up as (respectively): as venv-friendly wheel; a py -m >> script of the kind Paul suggests; and an installable app with the usual >> start menu icons etc. >> >> In my copious spare time I'll at least try to visit the pywin32 codebase >> to see how viable all this is. Feel free to challenge my thoughts on the >> matter. 
>> >> >> TJG >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -- > -- > Kevin Horn > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From p.f.moore at gmail.com Thu Apr 16 15:58:51 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 16 Apr 2015 14:58:51 +0100 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: On 16 April 2015 at 14:43, Kevin Horn wrote: > Those that want to use pywin32 in a virtualenv (or just without all the > system changes) could simply install the wheel (or even an sdist, perhaps) > from the command line using pip, and then perform whatever other steps they > want manually. Just as a data point, converting the existing wininst installer for pywin32 to a wheel (using wheel convert), and installing that via pip, is entirely workable (for the win32api/win32file type basic functionality). The pypiwin32 project (started by Glyph as a way of providing pywin32 wheels from PyPI) includes wheels for Python 3.x which I built that way, so it's certainly seen some level of use. The wheels are probably unnecessarily big, as they'll include all of pythonwin, and the ActiveX Scripting and service creation support, which I guess won't work in that configuration, but they are a good starting point for checking precisely what will work unmodified from a wheel. 
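Paul's workflow above uses the `wheel convert` subcommand Daniel mentions. A sketch of the commands involved follows; the installer and wheel filenames are examples only, since the exact names depend on the pywin32 build downloaded and the converted wheel's tags.

```shell
# Convert a bdist_wininst installer to a wheel, then install the result
# into a virtualenv with pip.  Filenames here are illustrative.
wheel convert pywin32-219.win32-py3.4.exe
pip install pywin32-219-cp34-none-win32.whl
```

The converted wheel preserves the distutils file categories from the installer, but (as Paul notes) none of the postinstall steps run, so COM registration and the like remain manual.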
Paul From chris.barker at noaa.gov Thu Apr 16 18:58:08 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 16 Apr 2015 09:58:08 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On Wed, Apr 15, 2015 at 2:23 PM, Paul Moore wrote: > In the PEP, there's a concept of "optional" vs "required" extensions. > See https://www.python.org/dev/peps/pep-0426/#required-extension-handling. > This is crucial - I've no problem if a particular extension is used by > a project, as long as it's optional. I won't install it, so it's fine. > It seems to me that pip *has* to ignore missing optional extensions, > for this reason. Of course, that introduces the converse problem, > which is how would people who might want that extension to be > activated, know that a project used it? > Exactly -- we do want "pip install" to just work... > But I worry that some people may have a more liberal definition > of "required" than I do. They probably do -- if they want things to "just work" We have the same problem with optional dependencies. For instance, for iPython to work, you don't need much, but if you want the ipython notebook to work, you need tornado, zeromq, who knows what else. But people want it to just work -- and just work by default, so you want all that optional stuff to go in by default. I expect this is the same with wheel installer extensions. To use your example, for instance, people want to do: pip install sphinx and then have the sphinx-quickstart utility ready to go, by default. So scripts need to be installed by default. The trade-off between convenience and control/security is tough. > Based on the above, it's possibly valid to allow "required" extensions > to be auto-installed. It *is* a vector for unexpected code execution, > but maybe that's OK. 
> If even required extensions aren't auto installed, then we can just toss out the whole idea of automatic dependency management. (which I personally wouldn't mind, actually, but I'm weird that way) But maybe we need some "real" use cases to talk about -- I agree with others in this thread that the Start menu isn't a good example. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Apr 16 20:33:47 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 16 Apr 2015 19:33:47 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 16 April 2015 at 17:58, Chris Barker wrote: > We have the same problem with optional dependencies. > > For instance, for iPython to work, you don't need much. but if you want the > ipython notebook to work, you need tornado, zeromq, who knows what else. But > people want it to just work -- and just work be default, so you want all > that optional stuff to go in by default. But none of those are installed by default with ipython - they are covered by extras. If you want them, you ask for them. Thanks to extras, ipython offers a nice shortcut for you - pip install ipython[all] - but you still have to ask for them. > I expect this is the same with wheel installer extensions. To use your > example, for instance. People want to do: > > pip install sphinx > > and then have the sphinx-quickstart utility ready to go. by default. So > scripts need to be installed by default. Yes, the script wrapper extension is a much better example of a "nobody would ever want this off" extension. But is it really "required"? 
The distribution would work fine if scripts weren't installed. My understanding of the required/optional distinction in the PEP is that an extension is required if the installation would be *broken* if that extension wasn't supported. And having to type "python -m something" rather than just "something" isn't broken, it's just an inconvenience. In practice, I'd assume script wrappers would be an extension built into pip, so it would always be available. But that's different from "required". Installing via "wheel install" (which doesn't support generating script wrappers) still works. Actually, I just checked PEP 459, which says of the "python.commands" extension: "Build tools SHOULD mark this extension as requiring handling by installers". So I stand corrected - script wrappers should be considered mandatory. In practice, though, what that means is that pip will be fine (as it'll have support built in) and wheel will be a non-compliant installer (as it neither supports generating wrappers, nor does it give an error when asked to create them - maybe an error will get added, but I doubt it as wheel install isn't intended to be a full installer). I've no idea at this point what distil or easy_install will do, much less any other installers out there. > The trade-off between convenience and control/security is tough. Certainly. But that's about what is available by default, and how the installer (pip) handles the user interface for "this package says that it gives you some extra functionality if you have extensions X, Y, and Z". There's no convenience or UI implication with required extensions - if they aren't available, the installer refuses to install the distribution. Simple as that. 
Maybe pip could try to locate and download mandatory extensions before deciding whether to fail the install, but the package metadata doesn't say how to find such installer plugins (and it *can't* - because the plugin would be different for pip, easy-install, distil or wheel, so all it can say is "I need plugin foo" and the installer has to know the rest). That's an installation program quality of implementation issue though. Given that a random project could add metadata such as:

    "extensions": {
        "foocorp.randomstuff": {
            "installer_must_handle": true,
            "foocorp.localdata": "Some random stuff"
        }
    }

there is no way that pip has any means of discovering where to get code to handle the "foocorp.randomstuff" extension from. So in the general case, auto-installing required extension support just isn't possible. At best, pip could have a registry of plugins that support standard extensions (i.e. those covered by a PEP) but I'd expect that we'd just build them into pip (as we don't have a plugin interface at the moment). >> Based on the above, it's possibly valid to allow "required" extensions >> to be auto-installed. It *is* a vector for unexpected code execution, >> but maybe that's OK. > > If even required extensions aren't auto installed, then we can just toss out > the whole idea of automatic dependency management. (which I personally > wouldn't mind, actually, but I'm weird that way) I disagree. It's no different conceptually than the fact that if you don't have a C compiler, you can't install a package that contains C code and only comes in sdist format today. The UI in pip for noticing and saying "you need a C compiler" is terrible (you get a build error which might mention that you don't have the compiler, if you're lucky :-)). And yet people survive. So a clear error saying "package X needs a handler for extension Y to install" is a major improvement over the current situation.
(I know C compilers are build-step and extensions are install-step, but right now the user experience doesn't clearly distinguish these, so the analogy holds). Whether users want pip to go one step further and auto-install the plugin is unknown at this point. So far, it seems that the only people who have expressed an opinion (you and I) aren't particularly pushing for auto-installing (unless I misinterpreted your "which I personally wouldn't mind" comment). For a proper view, we'd need a concrete example of a package with a specific required extension, that pip was unlikely to include out of the box. Or we could just not worry for now, and wait to see what feedback we got from a non-automatic initial implementation in real use. > But maybe we need some "real" use cases to talk about -- I agree with others > in this thread that the Start menu isn't a good example. +10000 To focus discussion, I think we need - A credible "required" extension (python.constraints or python.commands from PEP 459) - A credible "required" extension that pip wouldn't provide builtin support for - A credible "optional" extension (most of the other extensions in PEP 459, for example exports) - A credible "optional" extension that pip wouldn't provide builtin support for I've separated out things that pip wouldn't provide builtin support for, because those are the only ones where there's a real question about "what do we do if support isn't available", at least from a pip point of view. In practice, that probably means "not defined in an accepted PEP" (I'd expect pip to build in support for standardised extensions). By the way. I just did a check through PEPs 426 and 459. Neither one currently defines a "postinstall script" metadata item or extension, which is interesting given that this discussion started from the question of how postinstall actions would be supported. There *have* been discussions in the past, and I could have sworn they ended up in a PEP somewhere, but maybe I was wrong. 
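The "package X needs a handler for extension Y" check described above can be sketched in a few lines. This is only an illustration: the "installer_must_handle" flag is the one quoted from the PEPs in this thread, while check_extensions, the handler set, and the example metadata are invented names, not pip's actual machinery.

```python
# Sketch of the install-time check discussed above. Hypothetical code:
# only "installer_must_handle" comes from the draft metadata PEPs.

def check_extensions(metadata, handlers):
    """Return the names of required extensions this installer can't honour.

    metadata: parsed package metadata (a dict)
    handlers: set of extension names the installer has support for
    """
    missing = []
    for name, ext in metadata.get("extensions", {}).items():
        if ext.get("installer_must_handle") and name not in handlers:
            missing.append(name)
    return missing

meta = {
    "name": "example-project",
    "extensions": {
        "foocorp.randomstuff": {
            "installer_must_handle": True,
            "foocorp.localdata": "Some random stuff",
        }
    },
}

missing = check_extensions(meta, handlers={"python.commands"})
if missing:
    # A clear, early refusal beats a silent skip or a cryptic failure:
    print("error: example-project needs a handler for: " + ", ".join(missing))
```

An installer with a handler registered for "foocorp.randomstuff" would pass the check and install normally; the point is simply that the decision can be made up front, from metadata alone.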
Paul From brett at python.org Thu Apr 16 21:45:19 2015 From: brett at python.org (Brett Cannon) Date: Thu, 16 Apr 2015 19:45:19 +0000 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: <854moiut4v.fsf@benfinney.id.au> References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> <854moiut4v.fsf@benfinney.id.au> Message-ID: On Tue, Apr 14, 2015 at 8:34 PM Ben Finney wrote: > Nick Coghlan writes: > > > Yep, Guido's keynote was the genesis of the thread. > > I can't find it online, can you give a URL so we can see the talk? > https://www.youtube.com/watch?v=G-uKNd5TSBw > > > Past suggestions for social features have related to providing users > > with a standard way to reach maintainers and each other, and I'd > > prefer to leave maintainers in full control of that aspect of the > > maintainer experience. I'm not alone in feeling that way, hence why > > such features tend not to be viewed especially positively. > > Thanks for this detailed response differentiating this proposal from > previous ones, it's exactly what I was asking for. > > -- > \ "For mad scientists who keep brains in jars, here's a tip: why | > `\ not add a slice of lemon to each jar, for freshness?" --Jack | > _o__) Handey | > Ben Finney -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cournape at gmail.com Thu Apr 16 22:20:43 2015 From: cournape at gmail.com (David Cournapeau) Date: Thu, 16 Apr 2015 16:20:43 -0400 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: On Thu, Apr 16, 2015 at 9:58 AM, Paul Moore wrote: > On 16 April 2015 at 14:43, Kevin Horn wrote: > > Those that want to use pywin32 in a virtualenv (or just without all the > > system changes) could simply install the wheel (or even an sdist, > perhaps) > > from the command line using pip, and then perform whatever other steps > they > > want manually. > > Just as a data point, converting the existing wininst installer for > pywin32 to a wheel (using wheel convert), and installing that via pip, > is entirely workable (for the win32api/win32file type basic > functionality). The pypiwin32 project (started by Glyph as a way of > providing pywin32 wheels from PyPI) includes wheels for Python 3.x > which I built that way, so it's certainly seen some level of use. > > The wheels are probably unnecessarily big, as they'll include all of > pythonwin, and the ActiveX Scripting and service creation support, > which I guess won't work in that configuration, but they are a good > starting point for checking precisely what will work unmodified from a > wheel. > For people interested in a lightweight alternative to pywin32, we have the pywin32ctypes project, which started as a way to get access to win32 credentials without depending on DLL (to avoid file locking issues with inplace updates). The project is on github (https://github.com/enthought/pywin32-ctypes), and is already used by a few projects. We support both cffi and ctypes backends (the former to work out of the box on cpython, the latter to work on pypy). David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wes.turner at gmail.com Thu Apr 16 23:39:21 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 16 Apr 2015 16:39:21 -0500 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On Apr 14, 2015 7:15 PM, "Nick Coghlan" wrote: > > On 14 April 2015 at 11:19, Trishank Karthik Kuppusamy wrote: > > On 14 April 2015 at 11:16, Brett Cannon wrote: > >> I agree. Even something as simple as a boolean that triggers a banner > >> saying "this project is looking for a new maintainer" would be useful both > >> from the perspective of project owners who want to move on or from the > >> perspective of users who can't tell if a project is maintained based on how > >> long it has been since a project uploaded a new version (which is why I > >> think someone suggested sending an annual email asking for a human action to > >> say "alive and kicking" to help determine if a project is completely > >> abandoned). > > > > Yeah, I think Guido said something to this effect in his keynote. > > Yep, Guido's keynote was the genesis of the thread. For folks that haven't seen it, the specific points of concern raised were: > > * seeking a new maintainer from amongst their users > * seeking help with enabling Python 3 support > > Past suggestions for social features have related to providing users with a standard way to reach maintainers and each other, and I'd prefer to leave maintainers in full control of that aspect of the maintainer experience. I'm not alone in feeling that way, hence why such features tend not to be viewed especially positively. > If only there was a way to add RDFa metadata to the tags in the HTML output of a pypa/readme-rendered long_description. FOAF RDFa and/or a /users/ https://warehouse.readthedocs.org/application/ Recently I learned about pyramid_autodoc. 
> The one thing that *only* PyPI can provide is the combination of a publication channel for maintainers to reach their user base without either side needing to share contact information they aren't already sharing, together with the creation of the clear understanding that providing sustaining engineering for a piece of software represents a significant time commitment that users benefiting from an open source maintainer's generosity should respect. > > This thread regarding maintainers being able to more clearly communicate maintenance status to users also relates to my blog post ( http://www.curiousefficiency.org/posts/2015/04/stop-supporting-python26.html) regarding the fact that folks that: > > a) don't personally need to ensure software they maintain works on old versions of Python; and > b) aren't getting paid to ensure it works on old versions of Python; > c) shouldn't feel obliged to provide such support for free > > Supporting legacy platforms is generally tedious work that isn't inherently interesting or rewarding. Folks that want such legacy platform support should thus be expecting to have to pay for it, and demanding it for free is unreasonable. > > The perception that open source software is provided by magic internet pixies that don't need to eat (or at the very least to be thanked for the time their generosity has saved us) is unfortunately widespread and pernicious [1], and PyPI is in a position to help shift that situation to one where open source maintainers at least have the opportunity to clearly explain the sustaining engineering model backing their software while deflecting any criticism for the mere existence of such explanations onto the PyPI maintainers rather than having to cope with any negative feedback themselves. So, is this a new ENUM field and something over and above mtime? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wes.turner at gmail.com Thu Apr 16 23:42:42 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 16 Apr 2015 16:42:42 -0500 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On Apr 14, 2015 7:15 PM, "Nick Coghlan" wrote: > > [...] > > The perception that open source software is provided by magic internet pixies that don't need to eat (or at the very least to be thanked for the time their generosity has saved us) https://en.wikipedia.org/wiki/Business_models_for_open-source_software https://gist.github.com/ndarville/4295324 From mhammond at skippinet.com.au Fri Apr 17 01:17:22 2015 From: mhammond at skippinet.com.au (Mark Hammond) Date: Fri, 17 Apr 2015 09:17:22 +1000 Subject: [Distutils] pywin32 on wheels [was: Beyond wheels 1.0: helping downstream, FHS and more] In-Reply-To: <552F8AC7.5050904@timgolden.me.uk> References: <552F6533.9060504@timgolden.me.uk> <552F8AC7.5050904@timgolden.me.uk> Message-ID: <55304302.7030803@skippinet.com.au> As already mentioned in this thread, most of the postinstall stuff is needed only for a subset of users - mainly those who want to write COM objects or Windows Services (and also people who want the shortcuts etc). pywin32 itself should be close to "portable" - eg, "setup.py install" doesn't run the postinstall script but leaves a largely functioning pywin32 install. So I think it should be relatively easy to get pywin32 to work in a virtual env without running any of the post-install scripts, and I'd support any consolidation of the setup process to support this effort.
Cheers, Mark On 16/04/2015 8:11 PM, Tim Golden wrote: > [cc-ing Mark H as he indicated he was open to be kept in the loop; also > changed the title to reflect the shift of conversation] > > On 16/04/2015 09:21, Paul Moore wrote: >> On 16 April 2015 at 08:30, Tim Golden wrote: >>>> Sorry - I agree it's an awful idea. Older wininst installers such as >>>> the pywin32 (and I think the PyQT one) one do this, I wanted to use it >>>> as an example of abuse of postinstall scripts that should *not* be >>>> perpetuated in any new scheme. >>> >>> FWIW I've just had a to-and-fro by email with Mark Hammond. I gather >>> that he's now given Glyph access to the PyPI & hg setup for pywin32. >>> >>> He's also happy to consider changes to the setup process to support >>> wheel/virtualenv/postinstall improvements. There's been a side >>> discussion on the pywin32 list about which versions of Python pywin32 >>> should continue to support going forward, which obviously interacts with >>> the idea of making it wheel/virtualenv-friendly. >> >> Thanks for involving Mark in this. While pywin32 isn't the only >> project with a postinstall script, it's one of the most complex that I >> know of, and a good example to work from when looking at what projects >> need. >> >>> I'm not sure what Glyph's plan is at this point -- doubtless he can >>> speak for himself. I gather from Paul's comments earlier that he's not a >>> particular fan of pywin32. If the thing seems to have legs, I'm happy to >>> coordinate changes to the setup. (I am, technically, a pywin32 committer >>> although I've never made use of that fact). >> >> To be clear, I don't have that much of a problem with pywin32. I don't >> use it myself, these days, but that's because (a) it's a very big, >> monolithic dependency, and (b) it's not usable directly with pip. 
The >> problem I have with it is that a lot of projects use it for simple >> access to the Win32 API (uses which can easily be handled by ctypes, >> possibly with slightly more messy code) and that means that they >> inherit the pywin32 problems. So I advise against pywin32 because of >> that, *not* because I think it's a problem itself, when used for >> situations where there isn't a better alternative. >> >>> The particular issue I'm not sure about is: how does Paul see pywin32's >>> postinstall steps working when they *are* needed, ie when someone wants >>> to install pywin32 as a wheel and wants the COM registration to happen? >>> Or is that a question of: run these steps manually once pip's completed? >> >> To be honest, for the cases I encounter frequently, these requirements >> don't come up. So my experience here goes back to the days when I used >> pywin32 to write COM servers and services, which was quite a while >> ago. >> >> From what I recall, pywin32 has the following steps in its postinstall: >> >> 1. Create start menu entries. My view is that this should simply be >> dropped. Python packages should never be adding start menu entries. >> Steve Dower has confirmed he agrees with this view earlier on this >> thread. >> 2. Move the pywin32 DLLs to the system directory. I don't see any way >> this is compatible with per-user or virtualenv installs, so I don't >> know how to address this, other than again dropping the step. I've no >> idea why this is necessary, or precisely which parts of pywin32 >> require it (I've a recollection from a long time ago that "services >> written in Python" was the explanation, but that's all I know). But >> presumably such use cases already break with a per-user Python >> install? >> 3. Registering the ActiveX COM DLLs. I believe this is mostly obsolete >> technology these days (who still uses ActiveX Scripting in anything >> other than VBScript or maybe a bit of JScript?) 
I'd drop this and make >> it a step that the user has to do manually if they want it. In place >> of it, pywin32 could provide an entry point to register the DLLs >> ("python -m pywin32 --register-dlls" or something). Presumably users >> who need it would understand the implications, and how to avoid >> registering multiple environments or forgetting to unregister before >> dropping an environment, etc. That sort of pitfall isn't something >> Python should try to solve automatically via pre- and post- install >> scripts. >> 4. Registering help files. I never understood how that worked or why >> it was needed. So again, I'd say just drop it. > > Really, pywin32 is several things: a set of libraries (win32api, > win32file, etc.); some system-level support for various things (COM > registration, Service support etc.); and a development/editing > environment (pythonwin). > > I see this ending up as (respectively): as venv-friendly wheel; a py -m > script of the kind Paul suggests; and an installable app with the usual > start menu icons etc. > > In my copious spare time I'll at least try to visit the pywin32 codebase > to see how viable all this is. Feel free to challenge my thoughts on the > matter. > > > TJG > From jcappos at nyu.edu Fri Apr 17 02:19:56 2015 From: jcappos at nyu.edu (Justin Cappos) Date: Thu, 16 Apr 2015 20:19:56 -0400 Subject: [Distutils] name of the dependency problem In-Reply-To: <20150415184436.GL2456@yuggoth.org> References: <552E46D0.1020106@chamonix.reportlab.co.uk> <20150415184436.GL2456@yuggoth.org> Message-ID: Okay, I tried to summarize the discussion and most of my thoughts on that issue. https://github.com/pypa/pip/issues/988 I'll post anything further I have to say there. I hope to get a student to measure the extent of the problem... 
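The resolution failure being reported here (pip issue #988) comes down to pip honouring only the first constraint it sees for a given project. A toy sketch makes the gap concrete; the index, versions, and both resolver functions below are invented for illustration and are not pip's real code.

```python
# Toy illustration of the dependency resolution problem (pip issue #988).
# All data here is made up: project "C" has two versions, and two
# requirements (say, from projects A and B) place conflicting constraints.

INDEX = {"C": [(1, 0), (2, 0)]}  # available versions of project "C"

REQUIREMENTS = [
    ("C", lambda v: v < (2, 0)),   # A needs C < 2.0
    ("C", lambda v: v >= (2, 0)),  # B needs C >= 2.0
]

def naive_resolve(reqs):
    """First-come-wins: the first constraint seen for a project picks the
    version; later constraints on the same project are silently ignored."""
    chosen = {}
    for name, ok in reqs:
        if name not in chosen:
            chosen[name] = max(v for v in INDEX[name] if ok(v))
    return chosen

def checking_resolve(reqs):
    """Honour *all* constraints; report None when no version satisfies them."""
    chosen = {}
    for name in {name for name, _ in reqs}:
        candidates = [v for v in INDEX[name]
                      if all(ok(v) for n, ok in reqs if n == name)]
        chosen[name] = max(candidates) if candidates else None
    return chosen

picked = naive_resolve(REQUIREMENTS)      # picks C 1.0; B's constraint ignored
checked = checking_resolve(REQUIREMENTS)  # detects that no version can satisfy both
```

The naive strategy happily "succeeds" while leaving B with an unsatisfied requirement; a constraint-checking resolver at least surfaces the conflict instead of producing a silently broken install.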
Thanks, Justin On Wed, Apr 15, 2015 at 2:44 PM, Jeremy Stanley wrote: > On 2015-04-15 12:09:04 +0100 (+0100), Robin Becker wrote: > > After again finding that pip doesn't have a correct dependency > > resolution solution a colleague and I discussed the nature of the > > problem. > [...] > > Before the discussion of possible solutions heads too far afield, > it's worth noting that this was identified years ago and has a > pending feature request looking for people pitching in on > implementation. It's perhaps better discussed at > https://github.com/pypa/pip/issues/988 so as to avoid too much > repetition. > -- > Jeremy Stanley From ncoghlan at gmail.com Fri Apr 17 03:31:29 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 16 Apr 2015 21:31:29 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 16 Apr 2015 03:08, "Paul Moore" wrote: > > Just to expand on another point in my mail - I'd like *anyone* to > provide an example of a genuine use case for something they think > should be a "required" installer extension. I'm not sure such a thing > actually exists... The constraints extension in PEP 459 recommends flagging extension processing as required, otherwise it's possible for unaware installers to silently skip the compatibility checks: https://www.python.org/dev/peps/pep-0459/#the-python-constraints-extension Installers offering the ability to opt in to ignoring environmental constraints is one thing, ignoring them through lack of understanding the extension is something else entirely. Cheers, Nick.
> > Paul From ncoghlan at gmail.com Fri Apr 17 04:01:26 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 16 Apr 2015 22:01:26 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: On 16 Apr 2015 14:34, "Paul Moore" wrote: > > By the way. I just did a check through PEPs 426 and 459. Neither one > currently defines a "postinstall script" metadata item or extension, > which is interesting given that this discussion started from the > question of how postinstall actions would be supported. There *have* > been discussions in the past, and I could have sworn they ended up in > a PEP somewhere, but maybe I was wrong. Arbitrary postinstall operations haven't ended up in a PEP yet because *I* don't like them. Anyone that has experienced "Windows rot" where uninstallers fail to clean up properly after themselves has seen first hand the consequences of delegating trust to development teams without the ability to set any minimum expectations for their quality assurance processes. (One way of looking at Linux distro packaging policies is to view them as a code review process applied to Turing complete software build and installation programs, while container tech like Docker is a way of isolating apps from the host system) Trigger based programming is hard at the best of times, and it doesn't get easier when integrating arbitrary pieces of software written by different people at different times in different contexts. On the other hand, I *am* prepared to build in an escape hatch that lets folks disagree with me, and I'll just not install their software.
As far as *pip* goes, whether or not to add a plugin system to handle additional metadata extensions would be up to the pip devs. As a user, my main request if the pip devs decided to add such a plugin system would be that extension handlers couldn't be implicitly installed as a dependency of another package. If folks want their installs to "just work", they shouldn't be marking non-standard metadata extensions as mandatory :) Cheers, Nick. > > Paul From ncoghlan at gmail.com Fri Apr 17 05:56:48 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 16 Apr 2015 23:56:48 -0400 Subject: [Distutils] pip/warehouse feature idea: "help needed" In-Reply-To: References: <55293580.1060005@sdamon.com> <85twwjv85r.fsf@benfinney.id.au> Message-ID: On 16 April 2015 at 17:42, Wes Turner wrote: > > On Apr 14, 2015 7:15 PM, "Nick Coghlan" wrote: >> >> [...] >> >> The perception that open source software is provided by magic internet >> pixies that don't need to eat (or at the very least to be thanked for the >> time their generosity has saved us) > > https://en.wikipedia.org/wiki/Business_models_for_open-source_software > > https://gist.github.com/ndarville/4295324 Right, there *are* for-profit business models and non-profit fundraising models that can support sustainable development and maintenance of open source software. However, it can also be hard to tell the difference between supported and unsupported software until low level infrastructure shifts like Python 3 or Linux containerisation come along - in those cases, the software without a good sustaining development story runs a high risk of getting trapped in the old model. Unfortunately, "Do you know and understand the sustaining engineering model for all of your dependencies?"
is a question most of us will be forced to say "No" to, even those of us that really should know better. It's very easy to assume that a popular open source project has a well-funded sustainable development process backing it without actually checking that that assumption is accurate. When I first started working there, I used to think Boeing's risk management folks were overly paranoid for demanding to know the answer to that question before agreeing to depend on a new supplier, but I eventually came to understand that it's mostly a matter of being able to quantify risk - if you have 10 key dependencies, each with a 10% chance of going away in a given time period, then you end up facing a 2/3rds chance of having to replace at least one of those components with an alternative. As a result, these days *I* tend to be the one wanting to know the long term sustaining engineering plans for new services and dependencies (and sometimes a service or dependency will be valuable enough to be worth taking a chance on). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Apr 17 22:18:00 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 17 Apr 2015 16:18:00 -0400 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org Message-ID: Daniel's started work on a new revision of the wheel specification, and it's crystallised a concern for me that's been building for a while: the Python Enhancement Proposal process is fundamentally a *change management* process and fundamentally ill-suited to acting as a *hosting service* for the resulting reference documentation. 
This is why we're seeing awkward splits like the one I have in PEP 440, where the specification itself is up top, with the rationale for changes below, and the large amounts of supporting material in PEP 426, where the specification is mixed in with a lot of background and rationale that isn't relevant if you just want the technical details of the latest version of the format. It also creates a problem where links to PEP based reference documents are fundamentally unstable - when we define a new version of the wheel format in a new PEP then folks are going to have to follow the daisy chain from PEP 427 through to the new PEP, rather than having a stable link that automatically presents the latest version of the format, perhaps with archived copies of the old version readily available. I think I see a way to resolve this, and I believe it should be fairly straightforward: we could create a "specifications" section on packaging.python.org, and as we next revise them, we start migrating the specs themselves out of the PEP system and into packaging.python.org. This would be akin to the change in Python 3.3, where the specification of the way the import system worked finally moved from PEP 302 into the language reference. Under that model, the wheel 2.0 PEP would be specifically focused on describing and justifying the *changes* between 1.0 and 2.0, but the final spec itself would be a standalone document living on packaging.python.org, and prominently linked to from both PEP 427 (which it would Supersede) and from the new PEP. This approach also gives a much nicer model for fixing typos in the specifications - those would just be ordinary GitHub PRs on the packaging.python.org repo, rather than needing to update the PEPs repo. Thoughts? Regards, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Fri Apr 17 22:40:39 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 17 Apr 2015 21:40:39 +0100 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: References: Message-ID: On 17 April 2015 at 21:18, Nick Coghlan wrote: > I think I see a way to resolve this, and I believe it should be fairly > straightforward: we could create a "specifications" section on > packaging.python.org, and as we next revise them, we start migrating > the specs themselves out of the PEP system and into > packaging.python.org. This would be akin to the change in the Python > 3.3, where the specification of the way the import system worked > finally moved from PEP 302 into the language reference. [...] > Thoughts? +1 Paul From donald at stufft.io Sat Apr 18 00:32:28 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 17 Apr 2015 18:32:28 -0400 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: References: Message-ID: <5A2BF2F3-E738-44AB-95AA-99D5A6676F6B@stufft.io> > On Apr 17, 2015, at 4:18 PM, Nick Coghlan wrote: > > Daniel's started work on a new revision of the wheel specification, > and it's crystallised a concern for me that's been building for a > while: the Python Enhancement Proposal process is fundamentally a > *change management* process and fundamentally ill-suited to acting as > a *hosting service* for the resulting reference documentation. > > This is why we're seeing awkward splits like the one I have in PEP > 440, where the specification itself is up top, with the rationale for > changes below, and the large amounts of supporting material in PEP > 426, where the specification is mixed in with a lot of background and > rationale that isn't relevant if you just want the technical details > of the latest version of the format. 
> > It also creates a problem where links to PEP based reference documents > are fundamentally unstable - when we define a new version of the wheel > format in a new PEP then folks are going to have to follow the daisy > chain from PEP 427 through to the new PEP, rather than having a stable > link that automatically presents the latest version of the format, > perhaps with archived copies of the old version readily available. > > I think I see a way to resolve this, and I believe it should be fairly > straightforward: we could create a "specifications" section on > packaging.python.org, and as we next revise them, we start migrating > the specs themselves out of the PEP system and into > packaging.python.org. This would be akin to the change in the Python > 3.3, where the specification of the way the import system worked > finally moved from PEP 302 into the language reference. > > Under that model, the wheel 2.0 would be specifically focused on > describing and justifying the *changes* between 1.0 and 2.0, but the > final spec itself would be a standalone document living on > packaging.python.org, and prominently linked to from both PEP 427 > (which it would Supersede) and from the new PEP. > > This approach also gives a much nicer model for fixing typos in the > specifications - those would just be ordinary GitHub PR's on the > packaging.python.org repo, rather than needing to update the PEPs > repo. > > Thoughts? Would Daniel's change still require a PEP to make it or would it just require a PR? If we're going to make a GitHub repository the source of truth for specs would it make sense to just ditch PEPs altogether and use Pull Requests to handle things? They can have discussion and comments and stuff baked into them which could function to capture the "Why". I'm not sure it's super useful in general, I don't see much difference between the way we're using PEPs and the way RFCs are written. 
They often have some light rationalization inside of the meat of the RFC and then in the Appendix they have more in-depth rationalization. A bigger problem I have so far is that we don't really have any user facing documentation. For some things that's fine (you don't need user facing documentation for Wheels, because users shouldn't generally be manually constructing them), but things like PEP 440, or parts of PEP 426 we don't have any real information to point users at to tell them what they can and can't do that isn't essentially a spec to implement it. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From coleb at eyesopen.com Fri Apr 17 17:17:49 2015 From: coleb at eyesopen.com (Brian Cole) Date: Fri, 17 Apr 2015 15:17:49 +0000 Subject: [Distutils] How to sign a exe created with bdist_wininst? Message-ID: We've recently converted over to using bdist_wininst for creating our Windows .exe installers for our libraries. Unfortunately, whenever we use the Windows signtool utility to cryptographically sign our installer it appears to corrupt the .exe and it can't be run anymore. The error message thrown by Windows is "Setup program invalid or damaged". My best guess at this point is that bdist_wininst is creating a checksum of the file somehow and signtool is altering the file in such a way to invalidate that checksum. 
The commands we're using at this point is like this: python3.4.exe setup.py bdist_wininst --target-version 3.4 --bitmap OurLogo --title OurTitle-OurVersion cp DistUtilsSetupFileName.exe OurSetupFileName.exe call "C:\program Files (x86)\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat" signtool sign /n OurCompany /t http://timestamp.verisign.com/scripts/timstamp.dll /d OurProject /du OurWebsite OurSetupFileName.exe Anyone know of a way to cryptographically sign an .exe installer from bdist_wininst? Thanks, Brian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dev-mailings at gongy.de Fri Apr 17 09:31:07 2015 From: dev-mailings at gongy.de (Christoph Schmitt) Date: Fri, 17 Apr 2015 09:31:07 +0200 Subject: [Distutils] Missing documentation for proper handling of PEP420 namespace packages in setup.py Message-ID: I am using the newest versions of setuptools (15.0) and pip (6.1.1) with Python 3.4. I wanted to satisfy the following two requirements at the same time, but had some trouble: A) creating and installing two source distribution tarballs which have the same top-level namespace package (no __init__.py for the namespace package) with setuptools and pip B) having other packages within the same namespace package outside of /site-packages in the PYTHONPATH (not managed by pip/setuptools) Since Python >= 3.3 supports namespace packages out of the box (PEP420) this seemed to me like a straightforward task. However, it turned out not to be. Either setuptools would not find my modules, pip would not install them properly, or requirement B) was broken. The solution that worked for me was to omit the namespace_packages=['whatever_namespace'] declaration in setup.py (to prevent the creation of *-nspkg.pth files by pip) and not to use find_packages, which does not comply with PEP420 (no packages found). This solution is somewhat counter-intuitive and I am not sure whether it is an intended/valid configuration of a setup.py file. It is definitely not clear from the documentation (which has a section about namespace packages, by the way). Since reading https://bitbucket.org/pypa/setuptools/issue/97 I now know that convenient support for PEP420 is not easy to achieve for setuptools (regarding find_packages). However, it would have been very helpful if the documentation explained how to handle PEP420-compliant namespace packages (without any __init__.py) in setup.py. At least I would have expected a hint that there are caveats regarding PEP420 with a link to issue 97. 
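In sketch form, the working configuration described above looks something like the following minimal setup.py (the distribution and package names here are hypothetical). find_packages() is avoided because, as of setuptools 15.0, it skips directories without __init__.py, and namespace_packages= is omitted so that pip does not generate *-nspkg.pth files:

```python
# Hypothetical setup.py for one distribution contributing to the PEP 420
# namespace package "whatever_namespace" (no __init__.py at the namespace
# level).  The sub-packages are listed explicitly instead of using
# find_packages(), and no namespace_packages= argument is passed.
setup_kwargs = dict(
    name="whatever-namespace-part-a",        # hypothetical project name
    version="1.0",
    packages=["whatever_namespace.part_a"],  # part_a itself has an __init__.py
)

# With setuptools available, this would be passed straight through:
# from setuptools import setup; setup(**setup_kwargs)
```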
I also created a minimal example to reproduce the issue, which I can provide if anyone is interested. Kind regards, Christoph Schmitt From p.f.moore at gmail.com Sat Apr 18 11:57:56 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 18 Apr 2015 10:57:56 +0100 Subject: [Distutils] How to sign a exe created with bdist_wininst? In-Reply-To: References: Message-ID: On 17 April 2015 at 16:17, Brian Cole wrote: > We've recently converted over to using bdist_wininst for creating our > Windows .exe installers for our libraries. Unfortunately, whenever we use > the Windows signtool utility to cryptographically sign our installer it > appears to corrupt the .exe and it can't be run anymore. The error message > thrown by Windows is "Setup program invalid or damaged". > > My best guess at this point is that bdist_wininst is creating a checksum of > the file somehow and signtool is altering the file in such a way to > invalidate that checksum. The commands we're using at this point is like > this: > > python3.4.exe setup.py bdist_wininst --target-version 3.4 --bitmap OurLogo > --title OurTitle-OurVersion > cp DistUtilsSetupFileName.exe OurSetupFileName.exe > call "C:\program Files (x86)\Microsoft Visual Studio > 9.0\Common7\Tools\vsvars32.bat" > signtool sign /n OurCompany /t > http://timestamp.verisign.com/scripts/timstamp.dll /d OurProject /du > OurWebsite OurSetupFileName.exe > > Anyone know of a way to cryptographically sign an .exe installer from > bdist_wininst? The wininst format is a stub Windows executable, with some ini-format data and a zipfile appended (in that order). I don't know where signtool adds the signature, but if it's at the end, then that won't work (as it's necessary for the zip data to be the *last* thing in the file - zipfile format supports prepending data but not appending it as the central directory is defined as being at a fixed offset from the end of the file). 
There may also be a length or checksum in the ini data, I'd have to check the source to confirm that. Just checked, no it doesn't - the full details are here: https://hg.python.org/cpython/file/bc1a178b3bc8/PC/bdist_wininst/install.c So basically, I don't think it's possible to sign (or otherwise modify) wininst executables. Paul From Steve.Dower at microsoft.com Sat Apr 18 16:46:49 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 18 Apr 2015 14:46:49 +0000 Subject: [Distutils] How to sign a exe created with bdist_wininst? In-Reply-To: References: , Message-ID: It may be possible to add an empty key container to the stub with signtool so that it can be filled in after adding the zip without having to extend the length. I believe the PE header is modified to locate the certificate, so it doesn't necessarily have to be at the end. Feel free to investigate this yourself with the wininst stub in Lib\distutils\command. I'll take a look, but may not be able to get to it for a while (file an issue and nosy me if you don't get anywhere, or even if you do and we can support this in newer versions). Cheers, Steve Top-posted from my Windows Phone ________________________________ From: Paul Moore Sent: 4/18/2015 2:58 To: Brian Cole Cc: distutils-sig at python.org Subject: Re: [Distutils] How to sign a exe created with bdist_wininst? On 17 April 2015 at 16:17, Brian Cole wrote: > We've recently converted over to using bdist_wininst for creating our > Windows .exe installers for our libraries. Unfortunately, whenever we use > the Windows signtool utility to cryptographically sign our installer it > appears to corrupt the .exe and it can't be run anymore. The error message > thrown by Windows is "Setup program invalid or damaged". > > My best guess at this point is that bdist_wininst is creating a checksum of > the file somehow and signtool is altering the file in such a way to > invalidate that checksum. 
The commands we're using at this point is like > this: > > python3.4.exe setup.py bdist_wininst --target-version 3.4 --bitmap OurLogo > --title OurTitle-OurVersion > cp DistUtilsSetupFileName.exe OurSetupFileName.exe > call "C:\program Files (x86)\Microsoft Visual Studio > 9.0\Common7\Tools\vsvars32.bat" > signtool sign /n OurCompany /t > http://timestamp.verisign.com/scripts/timstamp.dll /d OurProject /du > OurWebsite OurSetupFileName.exe > > Anyone know of a way to cryptographically sign an .exe installer from > bdist_wininst? The wininst format is a stub Windows executable, with some ini-format data and a zipfile appended (in that order). I don't know where signtools adds the signature, but if it's at the end, then that won't work (as it's necessary for the zip data to be the *last* thing in the file - zipfile format supports prepending data but not appending it as the central directory is defined as being at a fixed offset from the end of the file). There may also be a length or checksum in the ini data, I'd have to check the source to confirm that. Just checked, no it doesn't - the full details are here: https://hg.python.org/cpython/file/bc1a178b3bc8/PC/bdist_wininst/install.c So basically, I don't think it's possible to sign (or otherwise modify) wininst executables. Paul _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sat Apr 18 19:19:03 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sat, 18 Apr 2015 10:19:03 -0700 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> Message-ID: <1434536378936016707@unknownmsgid> For the most part, I think it's all been said. 
What should and shouldn't be installed by default really depends on the specific extension, so there's not much point in speculating. But a comment or two: having to type > "python -m something" rather than just "something" isn't broken, it's > just an inconvenience. Tell that to a newbie. This is EXACTLY the kind of thing that should "just work". Maybe a three-tier system:

1) mandatory -- can't install without it
2) default -- try to install it by default if possible
3) optional -- only install if specifically asked for

And this isn't just about extensions -- for instance, the "all" stuff in IPython would be well served by level 2 > It's no different conceptually than the fact that if you > don't have a C compiler, you can't install a package that contains C Sure it is -- a C compiler is a system tool, and the whole point of binary wheels is that the end user doesn't need one. -CHB From vinay_sajip at yahoo.co.uk Sat Apr 18 19:27:24 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 18 Apr 2015 17:27:24 +0000 (UTC) Subject: [Distutils] How to sign a exe created with bdist_wininst? In-Reply-To: References: Message-ID: <1636361956.7154022.1429378044969.JavaMail.yahoo@mail.yahoo.com> According to this resource: http://recon.cx/2012/schedule/attachments/54_Signed_executables.pps it is doable, but tricky, and IIUC may not work on Windows XP SP2/SP3. Wouldn't it be safer for the stub to work correctly in the presence of a signature? Presumably it could use a different algorithm to locate the archive directory, rather than just expecting it to be at the end of the file. Or if it is less work, just make a temporary copy of the wininst .exe excluding the appended signature, and use that for the unarchiving operation. (Just my 2 cents, or should I say tuppence ...) Regards, Vinay Sajip From: Steve Dower To: Paul Moore ; Brian Cole Cc: "distutils-sig at python.org" Sent: Saturday, 18 April 2015, 15:46 Subject: Re: [Distutils] How to sign a exe created with bdist_wininst? 
It may be possible to add an empty key container to the stub with signtool so that it can be filled in after adding the zip without having to extend the length. I believe the PE header is modified to locate the certificate, so it doesn't necessarily have to be at the end. Feel free to investigate this yourself with the wininst stub in Lib\distutils\command. I'll take a look, but may not be able to get to it for a while (file an issue and nosy me if you don't get anywhere, or even if you do and we can support this in newer versions). Cheers, Steve Top-posted from my Windows Phone From: Paul Moore Sent: 4/18/2015 2:58 To: Brian Cole Cc: distutils-sig at python.org Subject: Re: [Distutils] How to sign a exe created with bdist_wininst? On 17 April 2015 at 16:17, Brian Cole wrote: > We've recently converted over to using bdist_wininst for creating our > Windows .exe installers for our libraries. Unfortunately, whenever we use > the Windows signtool utility to cryptographically sign our installer it > appears to corrupt the .exe and it can't be run anymore. The error message > thrown by Windows is "Setup program invalid or damaged". > > My best guess at this point is that bdist_wininst is creating a checksum of > the file somehow and signtool is altering the file in such a way to > invalidate that checksum. The commands we're using at this point is like > this: > > python3.4.exe setup.py bdist_wininst --target-version 3.4 --bitmap OurLogo > --title OurTitle-OurVersion > cp DistUtilsSetupFileName.exe OurSetupFileName.exe > call "C:\program Files (x86)\Microsoft Visual Studio > 9.0\Common7\Tools\vsvars32.bat" > signtool sign /n OurCompany /t > http://timestamp.verisign.com/scripts/timstamp.dll /d OurProject /du > OurWebsite OurSetupFileName.exe > > Anyone know of a way to cryptographically sign an .exe installer from > bdist_wininst? 
The wininst format is a stub Windows executable, with some ini-format data and a zipfile appended (in that order). I don't know where signtool adds the signature, but if it's at the end, then that won't work (as it's necessary for the zip data to be the *last* thing in the file - zipfile format supports prepending data but not appending it as the central directory is defined as being at a fixed offset from the end of the file). There may also be a length or checksum in the ini data, I'd have to check the source to confirm that. Just checked, no it doesn't - the full details are here: https://hg.python.org/cpython/file/bc1a178b3bc8/PC/bdist_wininst/install.c So basically, I don't think it's possible to sign (or otherwise modify) wininst executables. Paul _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Apr 18 19:36:13 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 18 Apr 2015 18:36:13 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: <1434536378936016707@unknownmsgid> References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> <1434536378936016707@unknownmsgid> Message-ID: On 18 April 2015 at 18:19, Chris Barker - NOAA Federal wrote: (your quote trimming's a bit over-enthusiastic, you lost the attribution here) >> "python -m something" rather than just "something" isn't broken, it's >> just an inconvenience. > > Tell that to a newbie. This is EXACTLY the kind of thing that should > "just work". 
It's a huge "quality of implementation" issue, certainly - any installer that doesn't include script generation built in is going to be as annoying as hell to a user. But they do exist (wheel install, for instance) and the resulting installation "works", even if a newcomer would hate it. So it's not "mandatory" in the sense that no functionality is lost. But this is a moot point, as PEP 459 says the python.commands extension SHOULD be marked as required. And wheel install would technically be in violation of PEP 426, as it doesn't handle script wrappers and it doesn't fail when a package needs them (only "technically", because PEP 426 isn't finalised yet, and "wheel install" could be updated to support it). But I'd already said most of that - you just pulled that one point out of context. Paul From p.f.moore at gmail.com Sat Apr 18 19:39:10 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 18 Apr 2015 18:39:10 +0100 Subject: [Distutils] How to sign a exe created with bdist_wininst? In-Reply-To: <1636361956.7154022.1429378044969.JavaMail.yahoo@mail.yahoo.com> References: <1636361956.7154022.1429378044969.JavaMail.yahoo@mail.yahoo.com> Message-ID: On 18 April 2015 at 18:27, Vinay Sajip wrote: > Wouldn't it be safer for the stub to work correctly in the presence of a > signature? Presumably it could use a different algorithm to locate the > archive directory, rather than just expecting it to be at the end of the > file. It's the definition of the zip format which mandates that you seek from the end of file to get the directory. Sure, bdist_wininst could write its own code based on its current zip extraction code, but the fact that wininst files are zip files is used elsewhere (wheel convert uses it, and I have used it to investigate wininst files by opening them in 7-zip). 
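The prepend-versus-append distinction discussed here can be demonstrated with the stdlib zipfile module; the stub bytes and archive contents below are made up for illustration. Zip readers locate the central directory by scanning backwards from the end of the file, which is why a zip with arbitrary bytes prepended (the self-extracting case bdist_wininst relies on) still opens normally:

```python
import io
import zipfile

# Build a small zip archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "hello")
zip_bytes = buf.getvalue()

# Prepend stub bytes, as bdist_wininst does with its .exe stub: the archive
# still opens, because the end-of-central-directory record is found by
# searching backwards from the end of the file.
stubbed = io.BytesIO(b"FAKE-EXE-STUB" + zip_bytes)
with zipfile.ZipFile(stubbed) as zf:
    assert zf.read("hello.txt") == b"hello"

# Appending bytes *after* the archive (roughly what a naive signing tool
# would do) is what breaks simple readers like the wininst stub, which
# expect the zip data to end exactly at end-of-file.
```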
Paul From dholth at gmail.com Sat Apr 18 20:12:44 2015 From: dholth at gmail.com (Daniel Holth) Date: Sat, 18 Apr 2015 14:12:44 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> <1434536378936016707@unknownmsgid> Message-ID: The wheel installer does call setuptools to generate console script wrappers. On Apr 18, 2015 1:36 PM, "Paul Moore" wrote: > On 18 April 2015 at 18:19, Chris Barker - NOAA Federal > wrote: > > (your quote trimming's a bit over-enthusiastic, you lost the attribution > here) > > >> "python -m something" rather than just "something" isn't broken, it's > >> just an inconvenience. > > > > Tell that to a newbie. This is EXACTLY the kind of thing that should > > "just work". > > It's a huge "quality of implementation" issue, certainly - any > installer that doesn't include script generation built in is going to > be as annoying as hell to a user. But they do exist (wheel install, > for instance) and the resulting installation "works", even if a > newcomer would hate it. So it's not "mandatory" in the sense that no > functionality is lost. But this is a moot point, as PEP 459 says the > python.commands extension SHOULD be marked as required. And wheel > install would technically be in violation of PEP 426, as it doesn't > handle script wrappers and it doesn't fail when a package needs them > (only "technically", because PEP 426 isn't finalised yet, and "wheel > install" could be updated to support it). > > But I'd already said most of that - you just pulled that one point out > of context. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sat Apr 18 20:55:26 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 18 Apr 2015 14:55:26 -0400 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: <5A2BF2F3-E738-44AB-95AA-99D5A6676F6B@stufft.io> References: <5A2BF2F3-E738-44AB-95AA-99D5A6676F6B@stufft.io> Message-ID: On 17 April 2015 at 18:32, Donald Stufft wrote: > > Would Daniel's change still require a PEP to make it or would it just > require a PR? If we're going to make a GitHub repository the source > of truth for specs would it make sense to just ditch PEPs all together > and use Pull Requests to handle things? They can have discussion and > comments and stuff baked into them which could function to capture the > "Why". We'd get a similar model to CPython - clarifications could be done in a PR, but changes that significantly impact interoperability would still need the additional visibility of the PEP process. > I'm not sure it's super useful in general, I don't see much difference > between the way we're using PEPs and the way RFCs are written. They > often have some light rationalization inside of the meat of the RFC and > then in the Appendix they have more in depth rationalization. Right, there wouldn't necessarily need to be much change to the way the PEPs themselves are written, the reference doc on packaging.python.org would just be a description of the *latest* standard on that topic, without the rationale for the changes from the previous version. That way, the PEPs could err on the side of "more explanation", secure in the knowledge that the "just the current recommendations" version will be readily available on packaging.python.org. > A bigger problem I have so far is that we don't really have any user > facing documentation. 
For some things that's fine (you don't need user > facing documentation for Wheels, because users shouldn't generally be > manually constructing them), but things like PEP 440, or parts of PEP 426 > we don't have any real information to point users at to tell them what > they can and can't do that isn't essentially a spec to implement it. Right, that's really the kind of thing I'm talking about. So perhaps what I'm asking for isn't to move the specs themselves, but rather that we add a new "Reference" section to packaging.python.org to provide the user facing counterpart to the implementor facing PEPs. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Apr 18 21:02:56 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 18 Apr 2015 15:02:56 -0400 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> <1434536378936016707@unknownmsgid> Message-ID: On 18 April 2015 at 13:36, Paul Moore wrote: > On 18 April 2015 at 18:19, Chris Barker - NOAA Federal > wrote: > > (your quote trimming's a bit over-enthusiastic, you lost the attribution here) > >>> "python -m something" rather than just "something" isn't broken, it's >>> just an inconvenience. >> >> Tell that to a newbie. This is EXACTLY the kind of thing that should >> "just work". > > It's a huge "quality of implementation" issue, certainly - any > installer that doesn't include script generation built in is going to > be as annoying as hell to a user. But they do exist (wheel install, > for instance) and the resulting installation "works", even if a > newcomer would hate it. So it's not "mandatory" in the sense that no > functionality is lost. But this is a moot point, as PEP 459 says the > python.commands extension SHOULD be marked as required. 
And wheel > install would technically be in violation of PEP 426, as it doesn't > handle script wrappers and it doesn't fail when a package needs them > (only "technically", because PEP 426 isn't finalised yet, and "wheel > install" could be updated to support it). It's not in violation, that's the whole point of saying SHOULD, rather than MUST. Please don't lose that distinction - if users start demanding that developers always implement SHOULDs, they're misreading the spec, and are going to make life miserable for a lot of people by making unreasonable demands on their time. As a specification author, "SHOULD" is a way for us to say "most users are likely to want this, so you should probably do it if you don't have a strong preference, but not all users will want it, so certain tools may choose not to do it for reasons that are too context specific for us to go into in a general purpose specification". The MUSTs are the "things you may not personally care about, but other people do care about, will break if you get this wrong" part of the specs, the SHOULDs are "this is probably a good thing to do, but you may disagree" :) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Sat Apr 18 21:40:51 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 18 Apr 2015 20:40:51 +0100 Subject: [Distutils] Beyond wheels 1.0: helping downstream, FHS and more In-Reply-To: References: <87AE23BF-FEA1-4A03-83AC-34BD4A241DA9@stufft.io> <1434536378936016707@unknownmsgid> Message-ID: On 18 April 2015 at 20:02, Nick Coghlan wrote: >> It's a huge "quality of implementation" issue, certainly - any >> installer that doesn't include script generation built in is going to >> be as annoying as hell to a user. But they do exist (wheel install, >> for instance) and the resulting installation "works", even if a >> newcomer would hate it. So it's not "mandatory" in the sense that no >> functionality is lost. 
But this is a moot point, as PEP 459 says the >> python.commands extension SHOULD be marked as required. And wheel >> install would technically be in violation of PEP 426, as it doesn't >> handle script wrappers and it doesn't fail when a package needs them >> (only "technically", because PEP 426 isn't finalised yet, and "wheel >> install" could be updated to support it). > > It's not in violation, that's the whole point of saying SHOULD, rather > than MUST. Please don't lose that distinction - if users start > demanding that developers always implement SHOULDs, they're misreading > the spec, and are going to make life miserable for a lot of people by > making unreasonable demands on their time. > > As a specification author, "SHOULD" is a way for us to say "most users > are likely to want this, so you should probably do it if you don't > have a strong preference, but not all users will want it, so certain > tools may choose not to do it for reasons that are too context > specific for us to go into in a general purpose specification". > > The MUSTs are the "things you may not personally care about, but other > people do care about, will break if you get this wrong" part of the > specs, the SHOULDs are "this is probably a good thing to do, but you > may disagree" :) Sorry - I missed quoting one relevant bit of PEP 426. What I was trying to say was that wheel install violates the "MUST" in PEP 426 (that installers MUST support a required extension or report an error when installing). But Daniel pointed out that I'm wrong anyway - wheel install *does* support generating script wrappers, using setuptools to do so. My apologies for not checking my facts more carefully, and for my confusing wording. 
Paul From dholth at gmail.com Sat Apr 18 23:24:33 2015 From: dholth at gmail.com (Daniel Holth) Date: Sat, 18 Apr 2015 17:24:33 -0400 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel Message-ID: I am working on a minor update to the wheel format to add more categories under which files can be installed. Constructive comments welcome. Distutils, setuptools, and wheel currently have the best support for installing files relative to ('purelib', 'platlib', 'headers', 'scripts', 'data'), with 'data' usually meaning '/' or the root of the virtualenv. In practice only one of the 'purelib' or 'platlib' locations (which are usually mapped to the same directory on disk), and sometimes 'scripts', is used for any given package, and Python packages have no good way of loading any of their package files at run time if they are installed into any location not relative to sys.path. This works fairly well for Python libraries, but anyone packaging applications for a Linux distribution is required to follow the Filesystem Hierarchy Standard, or FHS: http://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/s1-filesystem-fhs.html, http://www.debian.org/releases/etch/amd64/apcs02.html, http://www.pathname.com/fhs/ It would help Linux distribution package maintainers and Python application (not library) developers if wheel had better support for installing files into the FHS, but it would help everyone else who wanted to generate cross-platform packages if the FHS was not hardcoded as paths in the data/ category. To that end we should come up with some additional categories that map, or do not map, to the FHS based on the end user's platform. 
Distutils2 had the concept of resource categories: http://alexis.notmyidea.org/distutils2/setupcfg.html#resources

"""
Default categories are:

* config
* appdata
* appdata.arch
* appdata.persistent
* appdata.disposable
* help
* icon
* scripts
* doc
* info
* man
"""

GNU has directory variables: https://www.gnu.org/prep/standards/html_node/Directory-Variables.html , for example prefix, exec_prefix, bindir, sbindir. Bento has a list based on the GNU paths, and allows new paths to be defined:

prefix: install architecture-independent files
eprefix: install architecture-dependent files
bindir: user executables
sbindir: system admin executables
libexecdir: program executables
sysconfdir: read-only single-machine data
sharedstatedir: modifiable architecture-independent data
localstatedir: modifiable single-machine data
libdir: object code libraries
includedir: C header files
oldincludedir: C header files for non-gcc
datarootdir: read-only arch.-independent data root
datadir: read-only architecture-independent data
infodir: info documentation
localedir: locale-dependent data
mandir: man documentation
docdir: documentation root
htmldir: html documentation
dvidir: dvi documentation
pdfdir: pdf documentation
psdir: ps documentation

I would like to add Bento's list to wheel and to setuptools. We would fix the disused setup.py setup(data_files = ) argument so that it could be used with $ substitution, or provide a utility function that would expand them in setup.py itself:

data_files = {
    '$libdir/mylib': ['some_library_file'],
    '$datadir/mydata': ['some_data_file'],
}

We would provide a default configuration file that mapped the categories to their installed locations. We would store the actual paths used at install time, so a package could look for files relative to the $datadir used when it was installed. Then it would be easier to distribute Python programs that need to install some files to paths that are not relative to the site-packages directory. 
- Daniel Holth From p.f.moore at gmail.com Sun Apr 19 01:26:44 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 00:26:44 +0100 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 18 April 2015 at 22:24, Daniel Holth wrote: > with 'data' usually meaning '/' or the root of the virtualenv On Windows, it means sys.prefix. In a user install, it means - %APPDATA%\Python on Windows - ~/Library/$PYTHONFRAMEWORK/x.y on Darwin (is that OSX?) - ~/.local on Unix See sysconfig.py for the full details, but it's far more complex than just "/ or the root of the virtualenv"... > This works fairly well for Python libraries, but anyone packaging > applications for a Linux distribution is required to follow the > filesystem hierarchy standard or FHS. Serious question - should people packaging applications for a Linux distribution be using wheels in the first place? They are a Python packaging format. Are there any examples on PyPI of the sort of distribution you mean here? Or is this something that would be needed by in-house projects, rather than public ones on PyPI? > It would help Linux distribution package maintainers and Python > application (not library) developers if wheel had better support for > installing files into the FHS, but it would help everyone else who > wanted to generate cross-platform packages if the FHS was not > hardcoded as paths in the data/ category. To that end we should come > up with some additional categories that map, or do not map, to the FHS > based on the end user's platform. Is this a purely Linux issue? Or do the same problems apply for people packaging applications for Windows? Are the packages involved expected to be platform-specific? (In other words, would they need to work on both Linux and Windows, or can we assume that any project needing this feature wouldn't support Windows anyway?) 
Again, specific examples of projects needing this functionality would help here. > Bento has a list based on the GNU paths, and allows new paths to be defined: > > prefix: install architecture-independent files > eprefix: install architecture-dependent files > bindir: user executables > sbindir: system admin executables > libexecdir: program executables > sysconfdir: read-only single-machine data > sharedstatedir: modifiable architecture-independent data > localstatedir: modifiable single-machine data > libdir: object code libraries > includedir: C header files > oldincludedir: C header files for non-gcc > datarootdir: read-only arch.-independent data root > datadir: read-only architecture-independent data > infodir: info documentation > localedir: locale-dependent data > mandir: man documentation > docdir: documentation root > htmldir: html documentation > dvidir: dvi documentation > pdfdir: pdf documentation > psdir: ps documentation > > I would like to add Bento's list to wheel and to setuptools. These look highly Unix-centric. If this is to solve a purely Linux problem, that's fine. But in that case can we please make it clear when defining this feature that using it explicitly makes a package non-portable to Windows? If it's intended to be cross-platform, I'd rather the targets weren't so tied to Unix usage (mandir is meaningless on Windows; I can't see Windows users wanting dvidir or psdir, but the lack of a chmdir is an oversight; sbindir (which probably corresponds to C:\Windows) should be split to cover C:\Windows, C:\Windows\System, and C:\Windows\System32; and so on...). Examples of where this functionality is needed would be great. I'm worried that we'll over-engineer a solution otherwise.
Paul From dholth at gmail.com Sun Apr 19 03:31:27 2015 From: dholth at gmail.com (Daniel Holth) Date: Sat, 18 Apr 2015 21:31:27 -0400 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On Sat, Apr 18, 2015 at 7:26 PM, Paul Moore wrote: > On 18 April 2015 at 22:24, Daniel Holth wrote: >> with 'data' usually meaning '/' or the root of the virtualenv > > On Windows, it means sys.prefix. > > In a user install, it means > > - %APPDATA%\Python on Windows > - ~/Library/$PYTHONFRAMEWORK/x.y on Darwin (is that OSX?) > - ~/.local on Unix > > See sysconfig.py for the full details, but it's far more complex than > just "/ or the root of the virtualenv"... > >> This works fairly well for Python libraries, but anyone packaging >> applications for a Linux distribution is required to follow the >> filesystem hierarchy standard or FHS. > > Serious question - should people packaging applications for a Linux > distribution be using wheels in the first place? They are a Python > packaging format. Are there any examples on PyPI of the sort of > distribution you mean here? Or is this something that would be needed > by in-house projects, rather than public ones on PyPI? > >> It would help Linux distribution package maintainers and Python >> application (not library) developers if wheel had better support for >> installing files into the FHS, but it would help everyone else who >> wanted to generate cross-platform packages if the FHS was not >> hardcoded as paths in the data/ category. To that end we should come >> up with some additional categories that map, or do not map, to the FHS >> based on the end user's platform. > > Is this a purely Linux issue? Or do the same problems apply for people > packaging applications for Windows? Are the packages involved expected > to be platform-specific? 
(In other words, would they need to work on > both Linux and Windows, or can we assume that any project needing this > feature wouldn't support Windows anyway?) > > Again, specific examples of projects needing this functionality would > help, here. > >> Bento has a list based on the GNU paths, and allows new paths to be defined: >> >> prefix: install architecture-independent files >> eprefix: install architecture-dependent files >> bindir: user executables >> sbindir: system admin executables >> libexecdir: program executables >> sysconfdir: read-only single-machine data >> sharedstatedir: modifiable architecture-independent data >> localstatedir: modifiable single-machine data >> libdir: object code libraries >> includedir: C header files >> oldincludedir: C header files for non-gcc >> datarootdir: read-only arch.-independent data root >> datadir: read-only architecture-independent data >> infodir: info documentation >> localedir: locale-dependent data >> mandir: man documentation >> docdir: documentation root >> htmldir: html documentation >> dvidir: dvi documentation >> pdfdir: pdf documentation >> psdir: ps documentation >> >> I would like to add Bento's list to wheel and to setuptools. > > These look highly Unix-centric. If this is to solve a purely Linux > problem, that's fine. But in that case can we please make it clear > when defining this feature that using it explicitly makes a package > non-portable to Windows? If it's intended to be cross-platform, I'd > rather the targets weren't so tied to Unix usage (mandir is > meaningless on Windows, I can't see Windows users wanting dvidir or > psdir, but the lack of a chmdir is an oversight, sbindir (which > probably corresponds to C:\Windows) should be split to cover > C:\Windows, C:\Windows\System, and C:\Windows\System32. And so on... > > Examples of where this functionality is needed would be great. I'm > worried that we'll over-engineer a solution otherwise. 
Remember when distutils-sig didn't believe that Windows users actually lacked compilers? graphite-web and carbon are two examples of Python applications with lots of data files, that want to install config files into an obvious place so you can edit them and get the applications up and running. Because Python packaging has such terrible support for applications that are not libraries, it's unnecessarily hard to get these packages up and running -- especially if you want to put them in a virtualenv. Unfortunately data_files has been so broken for so long that most Python packages, especially the libraries that I actually use, avoid the feature entirely. From ncoghlan at gmail.com Sun Apr 19 05:42:04 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 18 Apr 2015 23:42:04 -0400 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 18 April 2015 at 21:31, Daniel Holth wrote: > On Sat, Apr 18, 2015 at 7:26 PM, Paul Moore wrote: >> Examples of where this functionality is needed would be great. I'm >> worried that we'll over-engineer a solution otherwise. > > Remember when distutils-sig didn't believe that Windows users actually > lacked compilers? > > graphite-web and carbon are two examples of Python applications with > lots of data files, that want to install config files into an obvious > place so you can edit them and get the applications up and running. > Because Python packaging has such terrible support for applications > that are not libraries, it's unnecessarily hard to get these packages > up and running -- especially if you want to put them in a virtualenv. It's a fairly standard practice in Fedora/RHEL/CentOS to use setup.py to define the build process in the RPM spec file, even if the package itself is never distributed via the upstream package index (e.g. beaker-project.org is built that way, as is pulpproject.org). 
Fedora's packaging policy for redistribution of upstream Python projects also switched last year to favour using "pip install" in the RPM build process over invoking "setup.py install" directly. Historically, all the "extra bits" needed to comply with FHS have lived in the spec file, independently of the upstream packaging system, requiring changes in two places for certain kinds of packaging modifications, and frequently rendering the projects undeployable in a virtual environment for no good reason. The benefit of Daniel's proposal is that it should make it feasible to modify many of these projects to be virtualenv friendly, and then *automate* the process of generating FHS policy compliant downstream packages. That will be a big step towards "package for PyPI, get your conda/Nix/Debian/Fedora packaging for free", so it feeds directly into my own interests in streamlining the redistribution pipeline in Fedora. From a Windows perspective, I believe this change mostly has the potential to make services that were previously Linux-only solely for packaging related reasons available on Windows as well. However, there may also be an opportunity to better automate the process of generating wix-based installers from PyPI packages (see http://wixtoolset.org/documentation/manual/v3/howtos/files_and_registry/add_a_file.html) rather than generating Windows installers directly (if I understand the tooling correctly, introducing wix into that process should offer the same kind of potential for better platform integration that integrating with distro package managers offers on Linux). Cheers, Nick.
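The kind of scheme-dependent resolution being discussed can be sketched as follows. The scheme names and directory values here are hypothetical assumptions for illustration; only the category names come from the Bento-style list quoted earlier in the thread.

```python
# Hypothetical per-scheme maps from install categories to directories:
# an FHS-style system install versus an "everything under the environment
# prefix" virtualenv install. Not part of any shipped tool.
SCHEMES = {
    "fhs_system": {
        "sysconfdir": "/etc",
        "datadir": "/usr/share",
        "mandir": "/usr/share/man",
    },
    "virtualenv": {
        "sysconfdir": "{prefix}/etc",
        "datadir": "{prefix}/share",
        "mandir": "{prefix}/share/man",
    },
}

def category_path(category, scheme, prefix=""):
    """Resolve a category name to a concrete directory under a scheme."""
    return SCHEMES[scheme][category].format(prefix=prefix)
```

Under this model, a distro build tool would resolve categories against the FHS-style scheme, while an install into a virtualenv would pass the environment's prefix and keep everything self-contained.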
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From holger at merlinux.eu Sun Apr 19 10:21:09 2015 From: holger at merlinux.eu (holger krekel) Date: Sun, 19 Apr 2015 08:21:09 +0000 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: References: Message-ID: <20150419082109.GH15996@merlinux.eu> I'd appreciate a "current packaging specs" site which ideally also states how pypa tools support it, since which version. holger On Fri, Apr 17, 2015 at 16:18 -0400, Nick Coghlan wrote: > Daniel's started work on a new revision of the wheel specification, > and it's crystallised a concern for me that's been building for a > while: the Python Enhancement Proposal process is fundamentally a > *change management* process and fundamentally ill-suited to acting as > a *hosting service* for the resulting reference documentation. > > This is why we're seeing awkward splits like the one I have in PEP > 440, where the specification itself is up top, with the rationale for > changes below, and the large amounts of supporting material in PEP > 426, where the specification is mixed in with a lot of background and > rationale that isn't relevant if you just want the technical details > of the latest version of the format. > > It also creates a problem where links to PEP based reference documents > are fundamentally unstable - when we define a new version of the wheel > format in a new PEP then folks are going to have to follow the daisy > chain from PEP 427 through to the new PEP, rather than having a stable > link that automatically presents the latest version of the format, > perhaps with archived copies of the old version readily available. 
> > I think I see a way to resolve this, and I believe it should be fairly > straightforward: we could create a "specifications" section on > packaging.python.org, and as we next revise them, we start migrating > the specs themselves out of the PEP system and into > packaging.python.org. This would be akin to the change in the Python > 3.3, where the specification of the way the import system worked > finally moved from PEP 302 into the language reference. > > Under that model, the wheel 2.0 would be specifically focused on > describing and justifying the *changes* between 1.0 and 2.0, but the > final spec itself would be a standalone document living on > packaging.python.org, and prominently linked to from both PEP 427 > (which it would Supersede) and from the new PEP. > > This approach also gives a much nicer model for fixing typos in the > specifications - those would just be ordinary GitHub PR's on the > packaging.python.org repo, rather than needing to update the PEPs > repo. > > Thoughts? > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- about me: http://holgerkrekel.net/about-me/ contracting: http://merlinux.eu From p.f.moore at gmail.com Sun Apr 19 12:03:43 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 11:03:43 +0100 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 19 April 2015 at 04:42, Nick Coghlan wrote: > On 18 April 2015 at 21:31, Daniel Holth wrote: >> Remember when distutils-sig didn't believe that Windows users actually >> lacked compilers? :-) I remember the days before we had package_data, when projects had to abuse the data file support in setuptools to create $prefix/share, or $prefix/docs. 
Projects like moin and cx_Oracle did this, for example. It looked a mess on Windows installations :-) >> graphite-web and carbon are two examples of Python applications with >> lots of data files, that want to install config files into an obvious >> place so you can edit them and get the applications up and running. >> Because Python packaging has such terrible support for applications >> that are not libraries, it's unnecessarily hard to get these packages >> up and running -- especially if you want to put them in a virtualenv. Thanks. If I get the time, I'll have a look at those packages and see how their requirements might translate to Windows. > It's a fairly standard practice in Fedora/RHEL/CentOS to use setup.py > to define the build process in the RPM spec file, even if the package > itself is never distributed via the upstream package index (e.g. > beaker-project.org is built that way, as is pulpproject.org). > > Fedora's packaging policy for redistribution of upstream Python > projects also switched last year to favour using "pip install" in the > RPM build process over invoking "setup.py install" directly. > > Historically, all the "extra bits" needed to comply with FHS have > lived in the spec file, independently of the upstream packaging > system, requiring changes in two places for certain kinds of packaging > modifications, and frequently rendering the projects undeployable in a > virtual environment for no good reason. > > The benefit of Daniel's proposal is that it should make it feasible to > modify many of these projects to be virtualenv friendly, and then > *automate* the process of generating FHS policy compliant downstream > packages. That will be a big step towards "package for PyPI, get your > conda/Nix/Debian/Fedora packaging for free", so it feeds directly into > my own interests in streamlining the redistribution pipeline in > Fedora. OK, thanks. 
That clarifies why this is needed, but it does read to me as being something that projects should use sparingly, and only when they are facing the sort of Linux packaging issues that it's designed to solve. That's fine - my worry is that it'll be easy for projects to see it as the "normal" way to structure their data, and as a result we'll see a move away from package_data (which was introduced precisely to solve the sorts of problems that the Unix model of dedicated directories introduced in the context of Windows, zipimport and virtualenv). > From a Windows perspective, I believe this change mostly has the > potential to make services that were previously Linux-only solely for > packaging related reasons available on Windows as well. However, there > may also be an opportunity to better automate the process of > generating wix-based installers from PyPI packages (see > http://wixtoolset.org/documentation/manual/v3/howtos/files_and_registry/add_a_file.html) > rather than generating Windows installers directly (if I understand > the tooling correctly, introducing wix into that process should offer > the same kind of potential for better platform integration that > integrating with distro package managers offers on Linux). The biggest problem I see from a Windows POV is that installing files outside of the package data structure is actively hostile to tools like py2exe, and cx_Freeze, which are widely used for bundling standalone Windows applications written in Python. Such tools won't be able to find data installed via the proposed file category mechanism. As a possible compromise, how about an approach where on Linux system installs (or more accurately "those install schemes that are relevant to the distribution packaging issue you described") the file categories are installed into dedicated directories, as described. 
But on other installs (virtualenvs, Windows, maybe OSX) the file categories map to locations within package_data, so that they can be accessed via normal mechanisms like loader.get_data. Application code would need some support code to locate and read the files, but that's true whether we go for this proposal or an "outside of site-packages" scheme. Also, some things may even be better designated as "don't install" in certain schemes (man files, for example, are pretty much useless on Windows). Beyond this issue, I do have some concerns over the specific locations proposed, but they are better addressed as part of the PEP review process, once we have a specific proposal to review. Paul From cournape at gmail.com Sun Apr 19 12:55:18 2015 From: cournape at gmail.com (David Cournapeau) Date: Sun, 19 Apr 2015 12:55:18 +0200 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: Thanks for pushing this Daniel. I think we should wait for a bit before making this proposal official. Just after I made my PR against wheel to implement this, I had some discussions with Nathaniel Smith from numpy, where he remarked we may want to support better "everything in site-packages" model. At the lowest level, the supported schema supports both usecases. We know that because distributions like NixOS support the "one directory per package" model even though most packages they package use autotools scheme. But we may want to add higher level support at the same time as this new scheme to avoid people coming up with their own custom solutions. One idea that was thrown out was enabling a pkg-config-like mechanism to separate where files are from how to find information for building things. That would allow inside site-packages and outside site-packages schemes to work seamlessly. I can work on a "paperman" implementation of this on top of the "wheel" installer for the end of this week. 
I think that would both alleviate some concerns for people interested in "everything in package directory", and make the discussion more focused. On Sat, Apr 18, 2015 at 11:24 PM, Daniel Holth wrote: > I am working on a minor update to the wheel format to add more > categories under which files can be installed. Constructive comments > welcome. > > Distutills, setuptools, wheel currently have the best support for > installing files relative to ('purelib', 'platlib', 'headers', > 'scripts', 'data') with 'data' usually meaning '/' or the root of the > virtualenv. In practice only exactly one of the 'purelib' or > 'platlib' locations (which are usually mapped to the same directory on > disk), and sometimes 'scripts' is used for any given package, and > Python packages have no good way of loading any of their package files > at run time if they are installed into any location not relative to > sys.path. > > This works fairly well for Python libraries, but anyone packaging > applications for a Linux distribution is required to follow the > filesystem hierarchy standard or FHS. > > > http://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/s1-filesystem-fhs.html > , > http://www.debian.org/releases/etch/amd64/apcs02.html, > http://www.pathname.com/fhs/ > > It would help Linux distribution package maintainers and Python > application (not library) developers if wheel had better support for > installing files into the FHS, but it would help everyone else who > wanted to generate cross-platform packages if the FHS was not > hardcoded as paths in the data/ category. To that end we should come > up with some additional categories that map, or do not map, to the FHS > based on the end user's platform. 
> > Distutils2 had the concept of resource categories: > http://alexis.notmyidea.org/distutils2/setupcfg.html#resources > > """ > Default categories are: > > * config > * appdata > * appdata.arch > * appdata.persistent > * appdata.disposable > * help > * icon > * scripts > * doc > * info > * man > """ > > GNU has directory variables: > https://www.gnu.org/prep/standards/html_node/Directory-Variables.html > , for example prefix, exec_prefix, bindir, sbindir. > > Bento has a list based on the GNU paths, and allows new paths to be > defined: > > prefix: install architecture-independent files > eprefix: install architecture-dependent files > bindir: user executables > sbindir: system admin executables > libexecdir: program executables > sysconfdir: read-only single-machine data > sharedstatedir: modifiable architecture-independent data > localstatedir: modifiable single-machine data > libdir: object code libraries > includedir: C header files > oldincludedir: C header files for non-gcc > datarootdir: read-only arch.-independent data root > datadir: read-only architecture-independent data > infodir: info documentation > localedir: locale-dependent data > mandir: man documentation > docdir: documentation root > htmldir: html documentation > dvidir: dvi documentation > pdfdir: pdf documentation > psdir: ps documentation > > I would like to add Bento's list to wheel and to setuptools. We would > fix the disused setup.py setup(data_files = ) argument so that it > could be used with $ substitution, or provide a utility function that > would expand them in setup.py itself: > > data_files = { '$libdir/mylib' : [ 'some_library_file'], > '$datadir/mydata' : ['some_data_file']} > > We would provide a default configuration file that mapped the > categories to their installed locations. > > We would store the actual paths used at install time, so a package > could look for files relative to the $datadir used when it was > installed. 
> > Then it would be easier to distribute Python programs that need to > install some files to paths that are not relative to the site-packages > directory. > > - Daniel Holth > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Apr 19 14:42:18 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 13:42:18 +0100 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 19 April 2015 at 11:55, David Cournapeau wrote: > I can work on a "paperman" implementation of this on top of the "wheel" > installer for the end of this week. I think that would both alleviate some > concerns for people interested in "everything in package directory", and > make the discussion more focused. That sounds good. One thing the wheel install command doesn't support is per-user installs. I'd appreciate seeing some details of how those would be handled - on the assumption that you don't want to deal with pip's source just yet, a rough spec would be fine for now. Paul From contact at ionelmc.ro Sun Apr 19 15:00:44 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Sun, 19 Apr 2015 16:00:44 +0300 Subject: [Distutils] Secondary package indexes Message-ID: Hello, Probably this has been discussed in the past but I'm asking anyway cause I'm not sure what's it at now. Currently there's this problem with wheels, many package authors don't publish them for the platforms I'm using. I'm speaking about the wheels that need a compiler and/or other annoying dependencies. It would be really nice if one could configure pip to look into a secondary (external) indexes. 
This way someone could make his own index with windows wheels, another person could make an index with wheels for ubuntu 14.04 and people can use those to avoid the pain of compiling the packages. This is already popular, eg: http://www.lfd.uci.edu/~gohlke/pythonlibs/ but not readily usable with pip. Maybe PyPI could even provide hosting for these 3rd party wheels, this would make publishing very easy and lower the entry bar. I believe we'll never get package authors to publish wheels for platforms they don't care about. But other people might - we just need to make this easy and convenient. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Apr 19 15:10:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 14:10:14 +0100 Subject: [Distutils] Secondary package indexes In-Reply-To: References: Message-ID: On 19 April 2015 at 14:00, Ionel Cristian Mărieș wrote: > Probably this has been discussed in the past but I'm asking anyway cause I'm > not sure what's it at now. > > Currently there's this problem with wheels, many package authors don't > publish them for the platforms I'm using. I'm speaking about the wheels that > need a compiler and/or other annoying dependencies. > > It would be really nice if one could configure pip to look into a secondary > (external) indexes. This way someone could make his own index with windows > wheels, another person could make an index with wheels for ubuntu 14.04 and > people can use those to avoid the pain of compiling the packages. > > This is already popular, eg: http://www.lfd.uci.edu/~gohlke/pythonlibs/ but > not readily usable with pip. > > Maybe PyPI could even provide hosting for these 3rd party wheels, this would > make publishing very easy and lower the entry bar. > > I believe we'll never get package authors to publish wheels for platforms > they don't care about.
But other people might - we just need to make this > easy and convenient. I'm not sure exactly what you're suggesting here, but you can add an extra_index_url to your pip.ini. As far as hosting is concerned, if you don't want to set up your own package index, binstar offers pypi-style hosting for wheels. I'm not sure what reasons there may be for Christoph Gohlke not hosting his wheels in a pip-usable format. It may be worth someone asking him. I know it'd be nicer for me to point to his index, rather than downloading the wheels I need and hosting them myself for my personal use. Paul From contact at ionelmc.ro Sun Apr 19 15:14:04 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Sun, 19 Apr 2015 16:14:04 +0300 Subject: [Distutils] Secondary package indexes In-Reply-To: References: Message-ID: So what you're saying is that Christoph Gohlke could use binstar to host the wheels yes? Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Sun, Apr 19, 2015 at 4:10 PM, Paul Moore wrote: > On 19 April 2015 at 14:00, Ionel Cristian Mărieș > wrote: > > Probably this has been discussed in the past but I'm asking anyway cause > I'm > > not sure what's it at now. > > > > Currently there's this problem with wheels, many package authors don't > > publish them for the platforms I'm using. I'm speaking about the wheels > that > > need a compiler and/or other annoying dependencies. > > > > It would be really nice if one could configure pip to look into a > secondary > > (external) indexes. This way someone could make his own index with > windows > > wheels, another person could make an index with wheels for ubuntu 14.04 > and > > people can use those to avoid the pain of compiling the packages. > > > > This is already popular, eg: http://www.lfd.uci.edu/~gohlke/pythonlibs/ > but > > not readily usable with pip.
> > > > Maybe PyPI could even provide hosting for these 3rd party wheels, this > would > > make publishing very easy and lower the entry bar. > > > > I believe we'll never get package authors to publish wheels for platforms > > they don't care about. But other people might - we just need to make this > > easy and convenient. > > I'm not sure exactly what you're suggesting here, but you can add an > extra_index_url to your pip.ini. As far as hosting is concerned, if > you don't want to set up your own package index, binstar offers > pypi-style hosting for wheels. I'm not sure what reasons there may be > for Christoph Gohlke not hosting his wheels in a pip-usable format. It > may be worth someone asking him. I know it'd be nicer for me to point > to his index, rather than downloading the wheels I need and hosting > them myself for my personal use. > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Apr 19 15:37:06 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 14:37:06 +0100 Subject: [Distutils] Secondary package indexes In-Reply-To: References: Message-ID: On 19 April 2015 at 14:14, Ionel Cristian Mărieș wrote: > So what you're saying is that Christoph Gohlke could use binstar to host the > wheels yes? Possibly. It depends on whether his use conformed to the free plans, or he wanted to take out a paid plan. I thought you were talking about making your own wheels. Paul. From contact at ionelmc.ro Sun Apr 19 15:49:41 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Sun, 19 Apr 2015 16:49:41 +0300 Subject: [Distutils] Secondary package indexes In-Reply-To: References: Message-ID: On Sun, Apr 19, 2015 at 4:10 PM, Paul Moore wrote: > you can add an > extra_index_url to your pip.ini > Would this work as expected if a package is in multiple indexes? Eg: sdist in main index, wheel in extra index.
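For reference, the multi-index setup being discussed can be written down as a small pip configuration fragment. This is a sketch: the index URL below is a placeholder, and note that pip's documentation spells the option with dashes, `extra-index-url` in the config file or `--extra-index-url` on the command line.

```ini
; pip.ini (Windows) or pip.conf (Unix)
[global]
; PyPI remains the primary index; the extra index is consulted as well,
; and pip chooses the best candidate (e.g. a wheel) across all indexes.
extra-index-url = https://wheels.example.com/simple/
```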
Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Apr 19 15:52:01 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 14:52:01 +0100 Subject: [Distutils] Secondary package indexes In-Reply-To: References: Message-ID: On 19 April 2015 at 14:49, Ionel Cristian Mărieș wrote: > > On Sun, Apr 19, 2015 at 4:10 PM, Paul Moore wrote: >> >> you can add an >> extra_index_url to your pip.ini > > Would this work as expected if a package is in multiple indexes? Eg: sdist > in main index, wheel in extra index. Yes. From ncoghlan at gmail.com Sun Apr 19 18:34:36 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 19 Apr 2015 12:34:36 -0400 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: <20150419082109.GH15996@merlinux.eu> References: <20150419082109.GH15996@merlinux.eu> Message-ID: On 19 April 2015 at 04:21, holger krekel wrote: > > I'd appreciate a "current packaging specs" site which ideally also states > how pypa tools support it, since which version. OK, I've filed this idea as https://github.com/pypa/python-packaging-user-guide/issues/151, but I have no idea when I'll find time to work on it myself. If anyone had the time and inclination to start putting something together (perhaps starting with the already accepted PEP 440), that would be wonderful. Cheers, Nick. > > holger > > On Fri, Apr 17, 2015 at 16:18 -0400, Nick Coghlan wrote: >> Daniel's started work on a new revision of the wheel specification, >> and it's crystallised a concern for me that's been building for a >> while: the Python Enhancement Proposal process is fundamentally a >> *change management* process and fundamentally ill-suited to acting as >> a *hosting service* for the resulting reference documentation.
>> >> This is why we're seeing awkward splits like the one I have in PEP >> 440, where the specification itself is up top, with the rationale for >> changes below, and the large amounts of supporting material in PEP >> 426, where the specification is mixed in with a lot of background and >> rationale that isn't relevant if you just want the technical details >> of the latest version of the format. >> >> It also creates a problem where links to PEP based reference documents >> are fundamentally unstable - when we define a new version of the wheel >> format in a new PEP then folks are going to have to follow the daisy >> chain from PEP 427 through to the new PEP, rather than having a stable >> link that automatically presents the latest version of the format, >> perhaps with archived copies of the old version readily available. >> >> I think I see a way to resolve this, and I believe it should be fairly >> straightforward: we could create a "specifications" section on >> packaging.python.org, and as we next revise them, we start migrating >> the specs themselves out of the PEP system and into >> packaging.python.org. This would be akin to the change in the Python >> 3.3, where the specification of the way the import system worked >> finally moved from PEP 302 into the language reference. >> >> Under that model, the wheel 2.0 would be specifically focused on >> describing and justifying the *changes* between 1.0 and 2.0, but the >> final spec itself would be a standalone document living on >> packaging.python.org, and prominently linked to from both PEP 427 >> (which it would Supersede) and from the new PEP. >> >> This approach also gives a much nicer model for fixing typos in the >> specifications - those would just be ordinary GitHub PR's on the >> packaging.python.org repo, rather than needing to update the PEPs >> repo. >> >> Thoughts? >> >> Regards, >> Nick. 
>> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > -- > about me: http://holgerkrekel.net/about-me/ > contracting: http://merlinux.eu -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Sun Apr 19 18:40:33 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 19 Apr 2015 09:40:33 -0700 Subject: [Distutils] Idea: move accepted interoperability specifications to packaging.python.org In-Reply-To: <20150419082109.GH15996@merlinux.eu> References: <20150419082109.GH15996@merlinux.eu> Message-ID: the PyPA site has a PEP reference that includes details on implementation: https://www.pypa.io/en/latest/peps I don't think we need another reference in the Packaging User Guide (PUG). We could mention that the PyPA one exists in the PUG. As for user-facing PEP docs, I think the docs for PUG/pip/setuptools/wheel etc. should handle that inline, and act as the layer that references (if need be) the correct PEP that applies to the feature being described. For example, the PUG references PEP 440 in a few places. Ideally, it should be a requirement to update the docs for the major projects (and the PUG) *before* releasing PEP implementations into those projects. For example, a round of docs PRs prior to the release of PEP 440 into the PUG & pip could have likely prevented the confusion over the meaning of ">" that unfortunately had to be sorted out after the release. On Sun, Apr 19, 2015 at 1:21 AM, holger krekel wrote: > > I'd appreciate a "current packaging specs" site which ideally also states > how pypa tools support it, since which version.
> > holger > > On Fri, Apr 17, 2015 at 16:18 -0400, Nick Coghlan wrote: > > Daniel's started work on a new revision of the wheel specification, > > and it's crystallised a concern for me that's been building for a > > while: the Python Enhancement Proposal process is fundamentally a > > *change management* process and fundamentally ill-suited to acting as > > a *hosting service* for the resulting reference documentation. > > > > This is why we're seeing awkward splits like the one I have in PEP > > 440, where the specification itself is up top, with the rationale for > > changes below, and the large amounts of supporting material in PEP > > 426, where the specification is mixed in with a lot of background and > > rationale that isn't relevant if you just want the technical details > > of the latest version of the format. > > > > It also creates a problem where links to PEP based reference documents > > are fundamentally unstable - when we define a new version of the wheel > > format in a new PEP then folks are going to have to follow the daisy > > chain from PEP 427 through to the new PEP, rather than having a stable > > link that automatically presents the latest version of the format, > > perhaps with archived copies of the old version readily available. > > > > I think I see a way to resolve this, and I believe it should be fairly > > straightforward: we could create a "specifications" section on > > packaging.python.org, and as we next revise them, we start migrating > > the specs themselves out of the PEP system and into > > packaging.python.org. This would be akin to the change in the Python > > 3.3, where the specification of the way the import system worked > > finally moved from PEP 302 into the language reference. 
> > > > Under that model, the wheel 2.0 would be specifically focused on > > describing and justifying the *changes* between 1.0 and 2.0, but the > > final spec itself would be a standalone document living on > > packaging.python.org, and prominently linked to from both PEP 427 > > (which it would Supersede) and from the new PEP. > > > > This approach also gives a much nicer model for fixing typos in the > > specifications - those would just be ordinary GitHub PR's on the > > packaging.python.org repo, rather than needing to update the PEPs > > repo. > > > > Thoughts? > > > > Regards, > > Nick. > > > > -- > > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > -- > about me: http://holgerkrekel.net/about-me/ > contracting: http://merlinux.eu > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Apr 19 18:41:09 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 19 Apr 2015 12:41:09 -0400 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 19 April 2015 at 06:03, Paul Moore wrote: > As a possible compromise, how about an approach where on Linux system > installs (or more accurately "those install schemes that are relevant > to the distribution packaging issue you described") the file > categories are installed into dedicated directories, as described. But > on other installs (virtualenvs, Windows, maybe OSX) the file > categories map to locations within package_data, so that they can be > accessed via normal mechanisms like loader.get_data. 
Application code > would need some support code to locate and read the files, but that's > true whether we go for this proposal or an "outside of site-packages" > scheme. Also, some things may even be better designated as "don't > install" in certain schemes (man files, for example, are pretty much > useless on Windows). That's not a compromise, that's exactly what I want to see happen :) If it helps, one way to think of this is as a "file classification" system, where the packaging tools gain the ability to sort files into a richer set of categories, and it's then up to installers to decide how those categories map to installation locations. For Windows, virtualenv, conda, nix, single file applications, etc, that's likely to be "self-contained application directory". For FHS, it will map to the on-disk categorisation expected by other tools. At the moment, because the FHS support is either hacked in, or managed externally, there's no way for installers to remap the installation targets for the "self-contained directory" use case. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sun Apr 19 18:39:56 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 19 Apr 2015 16:39:56 +0000 (UTC) Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: <190165161.7631109.1429461596487.JavaMail.yahoo@mail.yahoo.com> From: Paul Moore > On 19 April 2015 at 11:55, David Cournapeau wrote: >> I can work on a "paperman" implementation of this on top of the "wheel" >> installer for the end of this week. I think that would both alleviate some >> concerns for people interested in "everything in package directory", and >> make the discussion more focused. Is "paperman" a Disney reference, or something else? > That sounds good. One thing the wheel install command doesn't support > is per-user installs.
I'd appreciate seeing some details of how those > would be handled - on the assumption that you don't want to deal with > pip's source just yet, a rough spec would be fine for now. I presume the way "wheel install" works is orthogonal to the scheme for handling different categories of data - "distil install", for example, does per-user installs by default. Are the proposed implementations just a proof of concept, to validate the usability of the implemented scheme? What is the envisaged timeline for proposing/agreeing specifications for how the file categories will work cross-platform? I'm bearing in mind that there might be other implementations of installers which would need to interoperate with any file category scheme ... Regards, Vinay Sajip From dholth at gmail.com Sun Apr 19 20:26:11 2015 From: dholth at gmail.com (Daniel Holth) Date: Sun, 19 Apr 2015 14:26:11 -0400 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: <190165161.7631109.1429461596487.JavaMail.yahoo@mail.yahoo.com> References: <190165161.7631109.1429461596487.JavaMail.yahoo@mail.yahoo.com> Message-ID: He probably means straw man, or an implementation built only to facilitate discussion. I'd like to get it done before October. Ideally the installs should still work regardless of exactly where each category maps on disk. On Apr 19, 2015 12:43 PM, "Vinay Sajip" wrote: > > > > > From: Paul Moore > > On 19 April 2015 at 11:55, David Cournapeau wrote: > >> I can work on a "paperman" implementation of this on top of the "wheel" > >> installer for the end of this week. I think that would both alleviate > some > >> concerns for people interested in "everything in package directory", and > >> make the discussion more focused. > > Is "paperman" a Disney reference, or something else? > > > That sounds good. One thing the wheel install command doesn't support > > > is per-user installs. 
I'd appreciate seeing some details of how those > > would be handled - on the assumption that you don't want to deal with > > pip's source just yet, a rough spec would be fine for now. > I presume the way "wheel install" works is orthogonal to the scheme for > handling different categories of data - "distil install", for example, does > per-user installs by default. Are the proposed implementations just a proof > of concept, to validate the usability of the implemented scheme? What is > the envisaged timeline for proposing/agreeing specifications for how the > file categories will work cross-platform? I'm bearing in mind that there > might be other implementations of installers which would need to > interoperate with any file category scheme ... > > Regards, > > Vinay Sajip > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Apr 19 20:57:12 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 19 Apr 2015 19:57:12 +0100 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: Message-ID: On 19 April 2015 at 17:41, Nick Coghlan wrote: > On 19 April 2015 at 06:03, Paul Moore wrote: >> As a possible compromise, how about an approach where on Linux system >> installs (or more accurately "those install schemes that are relevant >> to the distribution packaging issue you described") the file >> categories are installed into dedicated directories, as described. But >> on other installs (virtualenvs, Windows, maybe OSX) the file >> categories map to locations within package_data, so that they can be >> accessed via normal mechanisms like loader.get_data. 
Application code >> would need some support code to locate and read the files, but that's >> true whether we go for this proposal or an "outside of site-packages" >> scheme. Also, some things may even be better designated as "don't >> install" in certain schemes (man files, for example, are pretty much >> useless on Windows). > > That's not a compromise, that's exactly what I want to see happen :) > > If it helps, one way to think of this is as a "file classification" > system, where the packaging tools gain the ability to sort files into > a richer set of categories, and its then up to installers to decide > how those categories map to installation locations. For Windows, > virtualenv, conda, nix, single file applications, etc, that's likely > to be "self-contained application directory". For FHS, it will map to > the on-disk categorisation expected by other tools. > > At the moment, because the FHS support is either hacked in, or managed > externally, there's no way for installers to remap the installation > targets for the "self-contained directory" use case. OK, that's fantastic. It sounds much more reasonable when put that way. The only big debate then is likely to be over the precise categories used. And in particular, should we simply take an existing Unix-centric set of categories like the autotools ones, as currently proposed, or should we choose something more cross-platform? In favour of Unix autotools-style classes: 1. Already exists, easy to just take what's already defined. 2. Proven utility (although possibly over-engineered, some of the autotools categories seem pretty obscure to my admittedly untrained eye). 3. Mapping to Unix systems (and hence distribution packaging systems) is obvious. 4. Generally, Windows users don't care about any of this as the model on Windows is to install everything in the application directory, so following Unix conventions will be less controversial. Against: 1. 
The categories are pretty meaningless for Windows developers, so would tend to be ignored or used incorrectly by them. 2. Makes it too easy for Unix users to ignore or misunderstand cross-platform issues (for example, the mandir category implies that shipping man pages is a reasonable way of documenting your package, but Windows users won't be able to read the manpages, and they may not even get installed on Windows). 3. Easy to interpret as "treating Windows as a second class citizen". Overall, I don't see enough disadvantages in there to argue against the autotools classifications, but I would like to see the PEP include a discussion of the portability implications of the new feature. And there are still a load of questions waiting on a clear spec of how installers are expected to map the categories to filesystem locations, for all the various OS/install scheme combinations that exist. But it's not worth speculating on those, better to wait for the spec and then discuss the details. Paul From cournape at gmail.com Sun Apr 19 21:04:49 2015 From: cournape at gmail.com (David Cournapeau) Date: Sun, 19 Apr 2015 21:04:49 +0200 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: <190165161.7631109.1429461596487.JavaMail.yahoo@mail.yahoo.com> References: <190165161.7631109.1429461596487.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Sun, Apr 19, 2015 at 6:39 PM, Vinay Sajip wrote: > > > > > From: Paul Moore > > On 19 April 2015 at 11:55, David Cournapeau wrote: > >> I can work on a "paperman" implementation of this on top of the "wheel" > >> installer for the end of this week. I think that would both alleviate > some > >> concerns for people interested in "everything in package directory", and > >> make the discussion more focused. > > Is "paperman" a Disney reference, or something else? > it was used in another project I was involved in, to mean "even less than a straw man". 
Since the project was handled by native speakers, I assumed it was a generic term :) In any case, the implementation I started in wheel was something to *start* the discussion, not to be used as a reference. David > > > That sounds good. One thing the wheel install command doesn't support > > > is per-user installs. I'd appreciate seeing some details of how those > > would be handled - on the assumption that you don't want to deal with > > pip's source just yet, a rough spec would be fine for now. > I presume the way "wheel install" works is orthogonal to the scheme for > handling different categories of data - "distil install", for example, does > per-user installs by default. Are the proposed implementations just a proof > of concept, to validate the usability of the implemented scheme? What is > the envisaged timeline for proposing/agreeing specifications for how the > file categories will work cross-platform? I'm bearing in mind that there > might be other implementations of installers which would need to > interoperate with any file category scheme ... > > Regards, > > Vinay Sajip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Sun Apr 19 18:57:05 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sun, 19 Apr 2015 16:57:05 +0000 Subject: [Distutils] Add additional file categories for distutils, setuptools, wheel In-Reply-To: References: , Message-ID: My brief POV is that if a package on Windows is installing anything outside sys.path at all then it's an application and should use something other than wheel for installation. WiX/MSI will do proper reference counting and upgrades to avoid having multiple versions colliding with each other (imagine installing Mercurial on 2.6 and 2.7 simultaneously...) These should probably also bundle the interpreter as well to avoid other update issues. 
I'm already going to release a self-extracting zip of the interpreter for this, though there will be tweaks necessary to the initialization process to avoid registry stomping, and tools like pynsist are being worked on to simplify installer generation. These can freely install files anywhere, and having a consistent spec for generating these would be nice, but I don't want pip to install files outside of sys.path. Cheers, Steve Top-posted from my Windows Phone ________________________________ From: Nick Coghlan Sent: 4/19/2015 9:41 To: Paul Moore Cc: DistUtils mailing list Subject: Re: [Distutils] Add additional file categories for distutils, setuptools, wheel On 19 April 2015 at 06:03, Paul Moore wrote: > As a possible compromise, how about an approach where on Linux system > installs (or more accurately "those install schemes that are relevant > to the distribution packaging issue you described") the file > categories are installed into dedicated directories, as described. But > on other installs (virtualenvs, Windows, maybe OSX) the file > categories map to locations within package_data, so that they can be > accessed via normal mechanisms like loader.get_data. Application code > would need some support code to locate and read the files, but that's > true whether we go for this proposal or an "outside of site-packages" > scheme. Also, some things may even be better designated as "don't > install" in certain schemes (man files, for example, are pretty much > useless on Windows). That's not a compromise, that's exactly what I want to see happen :) If it helps, one way to think of this is as a "file classification" system, where the packaging tools gain the ability to sort files into a richer set of categories, and it's then up to installers to decide how those categories map to installation locations. For Windows, virtualenv, conda, nix, single file applications, etc, that's likely to be "self-contained application directory".
For FHS, it will map to the on-disk categorisation expected by other tools. At the moment, because the FHS support is either hacked in, or managed externally, there's no way for installers to remap the installation targets for the "self-contained directory" use case. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Apr 20 03:22:14 2015 From: dholth at gmail.com (Daniel Holth) Date: Sun, 19 Apr 2015 21:22:14 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal Message-ID: It might be helpful to discuss the fine grained install scheme proposal as several proposals: - Add more install paths (categories) to the set currently used by distutils/sysconfig https://docs.python.org/3/library/sysconfig.html#installation-paths . Define sensible defaults for each platform. - Allow the paths to be individually overridden for each installed package, in the same way that python setup.py install --install-purelib=/some-directory can override the categories we have now. - Record the { category : path } mapping used during installation. - Provide an API mapping (distribution name, category, relative path within category) to help applications using data that is only accessible via the module loader. https://docs.python.org/3/library/importlib.html#importlib.abc.ResourceLoader - Provide an API mapping the same to paths on the filesystem. - Make the recorded mapping available in a predictable location, so it can be perhaps understood by non-Python code. - Allow setup.py's setup() call to install files relative to each defined category. 
- Extend Python binary package formats to support the new categories, so the { category : path } mapping can be set at install time and not at build time. From p.f.moore at gmail.com Mon Apr 20 09:13:52 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 08:13:52 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: On 20 April 2015 at 02:22, Daniel Holth wrote: > - Add more install paths (categories) to the set currently used by > distutils/sysconfig > https://docs.python.org/3/library/sysconfig.html#installation-paths . > Define sensible defaults for each platform. +0. I have no personal use for extra categories, but I gather others do. > - Allow the paths to be individually overridden for each installed > package, in the same way that python setup.py install > --install-purelib=/some-directory can override the categories we have > now. -0. I'm not clear this is needed, but if it is I don't mind as long as it's as rare as it presently is. I am concerned about the implication that it's package authors that set this, and users won't have control (note that pip install doesn't offer an interface to this - there's --install-option but that's sdist-only so not applicable). > - Record the { category : path } mapping used during installation. -1. There's no clear use case (except see below) and it implies that each package having its own mapping is the norm, where I would prefer a standard mapping for all cases with overrides being a vanishingly rare need. Basically, like existing paths such as the header directory. > - Provide an API mapping (distribution name, category, relative path > within category) to help applications using data that is only > accessible via the module loader. > https://docs.python.org/3/library/importlib.html#importlib.abc.ResourceLoader +0. Ideally, this should be in the stdlib, but I accept that is impractical. 
I'm strongly against "installer runtime support" modules, it feels too like the setuptools/pkg_resources problem (developer or end user tool?). If necessary,then this should be in a completely separate package, just providing the runtime support, and not bundled with any particular installer. Note that for interoperability, this implies that the PEP needs to be very clear on all details, to avoid the API being defined by the implementation. Also, I'd prefer an interoperability-style "how the data is made available" standard, allowing projects to write their won API for it, rather than a "you have to use X to get at your data" style. The data for the mapping sounds like it's something that should be stored in the package's metadata 2.0 API (as a metadata extension). We don't need a custom approach for this data. > - Provide an API mapping the same to paths on the filesystem. -0. Like pkg_resources.resource_filename. Donald was proposing an extension to the pkgutils API to do something similar as well. We don't need even more of these APIs, it should just be solved once in a general manner. For files not in package_data, the metadata API should provide enough to allow projects to extract the information manually. Someone could provide a simple helper module on PyPI to do that, but it shouldn't be *necessary*. > - Make the recorded mapping available in a predictable location, so it > can be perhaps understood by non-Python code. -1. Metadata 2.0 extensions seem to me to be the correct place for this. > - Allow setup.py's setup() call to install files relative to each > defined category. Conditional +1. As in, if we do define such categories, setup.py *must* support them, by definition. I presume here we'd be talking about setuptools rather than distutils. > - Extend Python binary package formats to support the new categories, > so the { category : path } mapping can be set at install time and not > at build time. Conditional +0. 
Binary formats (wheel) must support this if defined, but setting mappings at install time seems optional to me. The defaults should be good enough for 99% of use cases, and I'd want installer support for changing the mappings to be optional (although clearly a quality-of-implementation issue - I'd expect pip to include it in some form, although maybe via a variant of something generic like --install-option). One further note - when I mention Metadata 2.0 above, I'm thinking of the runtime data installed into a system with the package, not the static data exposed by PyPI (although having "which install locations does this package use" recorded in the static data would be good for ease of auditing). I'm not sure how well the runtime side of Metadata 2.0 has been thought through yet (in terms of APIs to read the data, installers adding data at install time) but that doesn't alter the fact that I think this is where such data should go. Also, I'm not trying to kill this proposal by making it depend on Metadata 2.0, rather pointing out that this is the sort of thing we need to push Metadata 2.0 forward for,so that we don't invent yet more custom and short-term data formats. I do think it's important to get this right if we're doing it, and that means implementing the appropriate parts of Metadata 2.0 to support it, so it is a *partial* dependency (muck like the packaging library has implemented the version and specifier parts, so that pip can use them). Paul From dholth at gmail.com Mon Apr 20 15:31:15 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 09:31:15 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: On Mon, Apr 20, 2015 at 3:13 AM, Paul Moore wrote: > On 20 April 2015 at 02:22, Daniel Holth wrote: >> - Add more install paths (categories) to the set currently used by >> distutils/sysconfig >> https://docs.python.org/3/library/sysconfig.html#installation-paths . 
>> Define sensible defaults for each platform. > > +0. I have no personal use for extra categories, but I gather others do. > >> - Allow the paths to be individually overridden for each installed >> package, in the same way that python setup.py install >> --install-purelib=/some-directory can override the categories we have >> now. > > -0. I'm not clear this is needed, but if it is I don't mind as long as > it's as rare as it presently is. I am concerned about the implication > that it's package authors that set this, and users won't have control > (note that pip install doesn't offer an interface to this - there's > --install-option but that's sdist-only so not applicable). To understand the need for this feature it's important to imagine different installation scenarios. Imagine a package manager that installs each package into a tree of ~/python-packages/dist_name/version/ ie ~/python-packages/mercurial/3.4/. Each installed distribution exists in its own independent directory. Console scripts written by this package manager append the required dependencies to sys.path before calling the program. In this setup, each individual application has its own isolated set of dependencies; upgrading one package does not break any other. You would get similar isolation to having a virtualenv per package, but without the overhead of setting up one before being able to use a useful program. This is just one example of a potentially useful setup that would be helped by per-distribution { category : path } mappings. That doesn't mean there is not another that you might like better. I feel strongly about better supporting isolated per-package installs because I can't recommend any Python application to my non-Python-programmer friends if they have to create a virtualenv first; they simply will not be successful getting through the virtualenv step. 
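As a sketch of the layout just described, the console-script stub such a package manager writes might look like the following; the root directory, the pinned dependency list, and the final dispatch are all illustrative, not part of any actual tool:

```python
import os
import sys

# Hypothetical stub generated at install time for an isolated layout of
# the form ~/python-packages/<dist_name>/<version>/.
PKG_ROOT = os.path.expanduser("~/python-packages")
DEPS = [("mercurial", "3.4")]  # pins resolved by the installer, not at runtime

# Prepend this application's private dependency tree to sys.path, so
# upgrades made on behalf of any other application cannot break it.
for name, version in DEPS:
    sys.path.insert(0, os.path.join(PKG_ROOT, name, version))

print(sys.path[0])
# ...a real stub would now import and run the application's entry point.
```

The per-distribution { category : path } mapping is what would let the installer record where each such tree actually lives.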
Non-package-managed installers are too much potentially platform specific work for the package creator, and --user installs might break the dependencies of any other application already installed as --user. If they can simply run 'humane_install application' and be ready to go then it will be easier to recommend useful Python applications. >> - Record the { category : path } mapping used during installation. > > -1. There's no clear use case (except see below) and it implies that > each package having its own mapping is the norm, where I would prefer > a standard mapping for all cases with overrides being a vanishingly > rare need. Basically, like existing paths such as the header > directory. > >> - Provide an API mapping (distribution name, category, relative path >> within category) to help applications using data that is only >> accessible via the module loader. >> https://docs.python.org/3/library/importlib.html#importlib.abc.ResourceLoader > > +0. Ideally, this should be in the stdlib, but I accept that is > impractical. I'm strongly against "installer runtime support" modules, > it feels too like the setuptools/pkg_resources problem (developer or > end user tool?). If necessary,then this should be in a completely > separate package, just providing the runtime support, and not bundled > with any particular installer. Note that for interoperability, this > implies that the PEP needs to be very clear on all details, to avoid > the API being defined by the implementation. Also, I'd prefer an > interoperability-style "how the data is made available" standard, > allowing projects to write their won API for it, rather than a "you > have to use X to get at your data" style. > > The data for the mapping sounds like it's something that should be > stored in the package's metadata 2.0 API (as a metadata extension). We > don't need a custom approach for this data. The Metadata 2.0 specifically pydata.json is supposed to be static. 
If it is static then it would be illegal for the installer to change that file as part of the installation. Instead, the { category : path } mapping would be more like RECORD, an additional file in .dist-info that stores installation data. See https://www.python.org/dev/peps/pep-0376/ I am against making everyone depend on 'installer_support_package' too, mirroring the situation that many packages depend on pkg_resources without declaring that dependency. That is why I proposed writing the install paths to an importable file in the package's namespace on request without a new API. This would also avoid "for path in sys.path: glob(path + '/*.dist-info')" the first time any install path was required. However if it must be a new API so be it. >> - Provide an API mapping the same to paths on the filesystem. > > -0. Like pkg_resources.resource_filename. Donald was proposing an > extension to the pkgutils API to do something similar as well. We > don't need even more of these APIs, it should just be solved once in a > general manner. For files not in package_data, the metadata API should > provide enough to allow projects to extract the information manually. > Someone could provide a simple helper module on PyPI to do that, but > it shouldn't be *necessary*. > >> - Make the recorded mapping available in a predictable location, so it >> can be perhaps understood by non-Python code. > > -1. Metadata 2.0 extensions seem to me to be the correct place for this. > >> - Allow setup.py's setup() call to install files relative to each >> defined category. > > Conditional +1. As in, if we do define such categories, setup.py > *must* support them, by definition. I presume here we'd be talking > about setuptools rather than distutils. If we had a better setup.py replacement, I'd propose supporting the new categories there, first, as a carrot. But that's a different discussion. 
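As a concrete illustration of the isolated per-package layout described earlier in this thread (~/python-packages/dist_name/version/, with console scripts that append the required dependencies to sys.path), a launcher shim might look roughly like the sketch below. Every path, name, and pinned version here is hypothetical — this is not part of any real tool:

```python
import os
import sys

# Hypothetical root for isolated per-package installs, as in the
# ~/python-packages/mercurial/3.4/ example above.
PKG_ROOT = os.path.expanduser("~/python-packages")

def extend_sys_path(dependencies, pkg_root=PKG_ROOT):
    """Prepend each pinned dependency's private install directory to
    sys.path, so the application sees exactly the versions it was
    installed with -- similar isolation to a per-app virtualenv,
    without creating one."""
    for name, version in dependencies:
        sys.path.insert(0, os.path.join(pkg_root, name, version))

# A generated console script would pin its own dependency versions
# at install time, e.g.:
extend_sys_path([("mercurial", "3.4")])
```

Because each application gets its own dependency directories on sys.path, upgrading one package cannot break another — the isolation property argued for above.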
>> - Extend Python binary package formats to support the new categories, >> so the { category : path } mapping can be set at install time and not >> at build time. > > Conditional +0. Binary formats (wheel) must support this if defined, > but setting mappings at install time seems optional to me. The > defaults should be good enough for 99% of use cases, and I'd want > installer support for changing the mappings to be optional (although > clearly a quality-of-implementation issue - I'd expect pip to include > it in some form, although maybe via a variant of something generic > like --install-option). The defaults will be unoffensive and typical for each supported platform and default install scheme. > One further note - when I mention Metadata 2.0 above, I'm thinking of > the runtime data installed into a system with the package, not the > static data exposed by PyPI (although having "which install locations > does this package use" recorded in the static data would be good for > ease of auditing). I'm not sure how well the runtime side of Metadata > 2.0 has been thought through yet (in terms of APIs to read the data, > installers adding data at install time) but that doesn't alter the > fact that I think this is where such data should go. Also, I'm not > trying to kill this proposal by making it depend on Metadata 2.0, > rather pointing out that this is the sort of thing we need to push > Metadata 2.0 forward for, so that we don't invent yet more custom and > short-term data formats. I do think it's important to get this right > if we're doing it, and that means implementing the appropriate parts > of Metadata 2.0 to support it, so it is a *partial* dependency (much > like the packaging library has implemented the version and specifier > parts, so that pip can use them).
> > Paul From p.f.moore at gmail.com Mon Apr 20 16:07:43 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 15:07:43 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: On 20 April 2015 at 14:31, Daniel Holth wrote: > This is just one example of a potentially useful setup that would be > helped by per-distribution { category : path } mappings. That doesn't > mean there is not another that you might like better. OK, understood. Thanks for the explanation. I'm still uncertain over one thing, though. Who (in your view) is expected to be changing the defaults as part of an install? If doing so is viewed as an "advanced feature targeted at creators of distribution packages - i.e. people creating RPM/deb/MSI build scripts", then I'm fine with that, 99% of pip users won't ever see, need, or even be aware of the feature. The 1% of people who need the feature can deal with it. That's why I'm insistent on having good defaults - it's all that 99% of users will ever deal with. The 1% who customise things, will know what they are doing. Paul From dholth at gmail.com Mon Apr 20 17:30:43 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 11:30:43 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: On Mon, Apr 20, 2015 at 10:07 AM, Paul Moore wrote: > On 20 April 2015 at 14:31, Daniel Holth wrote: >> This is just one example of a potentially useful setup that would be >> helped by per-distribution { category : path } mappings. That doesn't >> mean there is not another that you might like better. > > OK, understood. Thanks for the explanation. I'm still uncertain over > one thing, though. Who (in your view) is expected to be changing the > defaults as part of an install? If doing so is viewed as an "advanced > feature targeted at creators of distribution packages - i.e. 
people > creating RPM/deb/MSI build scripts", then I'm fine with that, 99% of > pip users won't ever see, need, or even be aware of the feature. The > 1% of people who need the feature can deal with it. > > That's why I'm insistent on having good defaults - it's all that 99% > of users will ever deal with. The 1% who customise things, will know > what they are doing. > Paul The author of the installer or package format conversion tool would choose the defaults 99% of the time. These would vary as a function of the operating system, --user install scheme, hypothetical isolated install scheme etc. Pip would choose its defaults to be as unsurprising as possible compared to its current model. Hopefully we will never have to talk about fine grained install scheme customization again after specifying how to make it possible. From p.f.moore at gmail.com Mon Apr 20 17:56:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 16:56:14 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: On 20 April 2015 at 16:30, Daniel Holth wrote: > Pip would choose its defaults to be as > unsurprising as possible compared to its current model. Hopefully we > will never have to talk about fine grained install scheme > customization again after specifying how to make it possible. All of the other packaging PEPs are careful to define things in terms of "installers", including pip but admitting the possibility that other installers might exist. I think this PEP should do the same, and define the behaviour of an "installer" (pip, distil, etc) while leaving the option for other things (such as package format converters) to use different schemes. 
I don't recall if I said this before in the private email thread or here, but I'll repeat it here so it's recorded on distutils-sig: I'm strongly against the PEP offering no guidance on the default mappings to use for the "normal case" of installing a package from PyPI for use in Python scripts (i.e., the pip model). There are a number of installation scenarios encapsulated in pip, and it is only reasonable that users should expect all installers to behave the same for them - i.e., there's a standard mapping. Apologies if I've said this on the list before - too many email threads, not enough time to check them. I won't labour this point any more until the PEP review phase. Paul From vinay_sajip at yahoo.co.uk Mon Apr 20 20:07:24 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 20 Apr 2015 18:07:24 +0000 (UTC) Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: <2117092882.933557.1429553244723.JavaMail.yahoo@mail.yahoo.com> From: Daniel Holth > pkg_resources without declaring that dependency. That is why I > proposed writing the install paths to an importable file in the > package's namespace on request without a new API. This would also Not sure what you mean by this, but I hope by "importable" you don't mean Python source. JSON sounds like a better option. We currently write all the installed files as a simple text file (RECORD), but there's no reason it couldn't be a JSON file which held the mappings of category -> path used during installation, as well as the list of files installed. When you say "package's namespace", are you referring to the .dist-info directory, or something else? Those two words are fraught with ambiguity.
Regards, Vinay Sajip From dholth at gmail.com Mon Apr 20 20:35:45 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 14:35:45 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: <2117092882.933557.1429553244723.JavaMail.yahoo@mail.yahoo.com> References: <2117092882.933557.1429553244723.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Apr 20, 2015 at 2:07 PM, Vinay Sajip wrote: > From: Daniel Holth > > > >> pkg_resources without declaring that dependency. That is why I >> proposed writing the install paths to an importable file in the >> package's namespace on request without a new API. This would also > > > Not sure what you mean by this, but I hope by "importable" you don't mean Python source. JSON sounds like a better option. We currently write all the installed files as a simple text file (RECORD), but there's no reason it couldn't be a JSON file which held the mappings of category -> path used during installation, as well as the list of files installed. When you say "package's namespace", are you referring to the .dist-info directory, or something else? Those two words are fraught with ambiguity. I had suggested writing the mapping next to one of the package's installed .py files, but it sounds like all other commenters would prefer a JSON file inside the .dist-info directory. I would prefer to keep the RECORD manifest of all installed files plus hashes separate from the e.g. .dist-info/install_scheme.json, it should not be necessary to parse the former just to figure out where the config directory is. 
From vinay_sajip at yahoo.co.uk Mon Apr 20 21:01:40 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 20 Apr 2015 19:01:40 +0000 (UTC) Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> From: Daniel Holth > I had suggested writing the mapping next to one of the package's > installed .py files, but it sounds like all other commenters would > prefer a JSON file inside the .dist-info directory. Since it's installation metadata, .dist-info seems the right place for it. > I would prefer to keep the RECORD manifest of all installed files plus > hashes separate from the e.g. .dist-info/install_scheme.json, it > should not be necessary to parse the former just to figure out where > the config directory is. It's just a json.load - no specialised parsing code would seem to be required. Is your preference due to concerns about performance, aesthetics, backward compatibility or something else? Regards, Vinay Sajip From dholth at gmail.com Mon Apr 20 21:14:24 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 15:14:24 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> References: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Apr 20, 2015 at 3:01 PM, Vinay Sajip wrote: > From: Daniel Holth > >> I had suggested writing the mapping next to one of the package's > >> installed .py files, but it sounds like all other commenters would >> prefer a JSON file inside the .dist-info directory. > Since it's installation metadata, .dist-info seems the right place for it. > >> I would prefer to keep the RECORD manifest of all installed files plus >> hashes separate from the e.g. 
.dist-info/install_scheme.json, it >> should not be necessary to parse the former just to figure out where >> the config directory is. > > It's just a json.load - no specialised parsing code would seem to be required. Is your preference due to concerns about performance, aesthetics, backward compatibility or something else? All of the above. Performance, memory, avoiding re-standardizing the RECORD as yet another prerequisite for getting this proposal through, and orthogonality. If you want to argue it's equally fast, tell me that it's equally fast on a 1st gen Raspberry Pi, not on an 8-core Xeon. Most importantly the RECORD just has nothing to do with the paths to each file category, and it doesn't even exist in a development checkout which has not been installed. > Regards, > > Vinay Sajip From dholth at gmail.com Mon Apr 20 21:23:48 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 15:23:48 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Apr 20, 2015 at 3:19 PM, Paul Moore wrote: > On 20 April 2015 at 20:14, Daniel Holth wrote: >> Most importantly the RECORD just has nothing to do with the paths to >> each file category, and it doesn't even exist in a development >> checkout which has not been installed. > > Nor does the mapping file, surely? Or am I (still!) missing something? > Paul Not sure exactly how it would work but it might need to be created to point to paths within the development checkout as part of "pip install -e ." if the package relies on the feature.
From p.f.moore at gmail.com Mon Apr 20 21:19:42 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 20:19:42 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> Message-ID: On 20 April 2015 at 20:14, Daniel Holth wrote: > Most importantly the RECORD just has nothing to do with the paths to > each file category, and it doesn't even exist in a development > checkout which has not been installed. Nor does the mapping file, surely? Or am I (still!) missing something? Paul From p.f.moore at gmail.com Mon Apr 20 21:29:38 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 20:29:38 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <895271076.995094.1429556500471.JavaMail.yahoo@mail.yahoo.com> Message-ID: On 20 April 2015 at 20:23, Daniel Holth wrote: > Not sure exactly how it would work but it might need to be created to > point to paths within the development checkout as part of "pip install > -e ." if the package relies on the feature. Well, pip install -e doesn't create a dist-info directory (it relies on setuptools' setup.py develop, so it's actually setuptools that doesn't create the dist-info directory) so it's not pip that will need to do this. But as long as it's clear from the PEP what the various tools need to do, then fair enough. Paul From vinay_sajip at yahoo.co.uk Mon Apr 20 21:27:38 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 20 Apr 2015 19:27:38 +0000 (UTC) Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: Message-ID: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> From: Daniel Holth > orthogonality. If you want to argue it's equally fast, tell me that > it's equally fast on a 1st gen Raspberry Pi, not on an 8-core Zeon. 
No point arguing either way on performance or memory usage without measurements :-) Backward compatibility seems easy enough to cater for by using a name other than RECORD. For some value of "easy enough", obviously. Obviously we have to consider backward compatibility anyway, since existing installations wouldn't have the category mapping in any form. > Most importantly the RECORD just has nothing to do with the paths to > each file category, and it doesn't even exist in a development > checkout which has not been installed. In a development checkout, there would presumably be no "this mapping of categories to paths was used for installation" either. There would seem to be at least two such mappings, anyway - a default mapping specifying the publisher's preference, which might live in PEP 426 source metadata, and a logically separate mapping written to .dist-info which might be different, depending on how the installer was invoked (e.g. with command line overrides specified by a user at installation time). Regards, Vinay Sajip From p.f.moore at gmail.com Mon Apr 20 21:31:34 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 20:31:34 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> References: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> Message-ID: On 20 April 2015 at 20:27, Vinay Sajip wrote: > a default mapping specifying the publisher's preference I didn't think the project author got to specify a preference. All they say is what categories various files go into, and it's the *installer* that maps them to locations.
Paul From dholth at gmail.com Mon Apr 20 21:44:27 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 15:44:27 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Apr 20, 2015 at 3:31 PM, Paul Moore wrote: > On 20 April 2015 at 20:27, Vinay Sajip wrote: >> a default mapping specifying the publisher's preference > > I didn't think the project author got to specify a preference. All > they say is what categories various files go into, and it's the > *installer* that maps them to locations. > Paul If an application relied on relative paths from the root of a category, then it would probably include the same folder structure in its development checkout; setup.py would say "copy ./data into $datadir", modulo ignore patterns; it would be convenient if the mechanism pointing to $datadir pointed to ./data. From p.f.moore at gmail.com Mon Apr 20 22:04:29 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Apr 2015 21:04:29 +0100 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> Message-ID: On 20 April 2015 at 20:44, Daniel Holth wrote: > On Mon, Apr 20, 2015 at 3:31 PM, Paul Moore wrote: >> On 20 April 2015 at 20:27, Vinay Sajip wrote: >>> a default mapping specifying the publisher's preference >> >> I didn't think the project author got to specify a preference. All >> they say is what categories various files go into, and it's the >> *installer* that maps them to locations. 
>> Paul > > If an application relied on relative paths from the root of a > category, then it would probably include the same folder structure in > its development checkout; setup.py would say "copy ./data into > $datadir", modulo ignore patterns; it would be convenient if the > mechanism pointing to $datadir pointed to ./data. Hmm, I'm clearly struggling to understand what you're trying to cover with the PEP. I don't think it's particularly productive for me to keep asking confused questions. I'll wait for the next draft of the PEP to be published, to see if it clarifies things for me, rather than continually asking the same sorts of question over and over. But can I ask that PEP 491 be recast as specifically justifying and documenting the *changes* to the wheel spec, and how installers will need to change to address these. The current form of the PEP, as an updated version of the full wheel spec, is far too difficult to understand (having to spot the changes by in effect doing a mental diff between the two PEPs is a big problem, as is the lack of any obvious place to document *why* the changes have been made). This ties in with Nick's proposal to hold specs externally, and make PEPs into proposals for changes to the specs, rather than specs themselves. Paul From dholth at gmail.com Mon Apr 20 22:50:51 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Apr 2015 16:50:51 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Mon, Apr 20, 2015 at 4:04 PM, Paul Moore wrote: > On 20 April 2015 at 20:44, Daniel Holth wrote: >> On Mon, Apr 20, 2015 at 3:31 PM, Paul Moore wrote: >>> On 20 April 2015 at 20:27, Vinay Sajip wrote: >>>> a default mapping specifying the publisher's preference >>> >>> I didn't think the project author got to specify a preference. 
All >>> they say is what categories various files go into, and it's the >>> *installer* that maps them to locations. >>> Paul >> >> If an application relied on relative paths from the root of a >> category, then it would probably include the same folder structure in >> its development checkout; setup.py would say "copy ./data into >> $datadir", modulo ignore patterns; it would be convenient if the >> mechanism pointing to $datadir pointed to ./data. > > Hmm, I'm clearly struggling to understand what you're trying to cover > with the PEP. I don't think it's particularly productive for me to > keep asking confused questions. I'll wait for the next draft of the > PEP to be published, to see if it clarifies things for me, rather than > continually asking the same sorts of question over and over. If you are writing an application called "borderify" that puts custom messages inside ornate PDF borders stored as files in the data category, then your sdist may have a folder "data/border1.pdf". When it is installed the same file may go into "/usr/share/borderify/border1.pdf". It would be nice if the same API "give me border1.pdf in the data category" would return "data/border1.pdf" during development, but "/usr/share/borderify/border1.pdf" when installed on Linux. If the proposed API winds up needing a json file in the development packagename*.dist-info directory, then you might want it to say { "data" : "C:\\Users\\me\\Documents\\borderify-dev\\data\\", ... } during development. I hadn't really thought about the categories of paths feature from setup.py all the way through to the package format. I hope we can get it done with somewhat fewer than 5 peps. It seems it will need much more explaining than code. > But can I ask that PEP 491 be recast as specifically justifying and > documenting the *changes* to the wheel spec, and how installers will > need to change to address these. 
The current form of the PEP, as an > updated version of the full wheel spec, is far too difficult to > understand (having to spot the changes by in effect doing a mental > diff between the two PEPs is a big problem, as is the lack of any > obvious place to document *why* the changes have been made). This ties > in with Nick's proposal to hold specs externally, and make PEPs into > proposals for changes to the specs, rather than specs themselves. It will be edited. The current draft exists to motivate discussion. From donald at stufft.io Mon Apr 20 22:52:51 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 20 Apr 2015 16:52:51 -0400 Subject: [Distutils] elements of setuptools/wheel/FHS file categories proposal In-Reply-To: References: <1983253779.1024306.1429558058862.JavaMail.yahoo@mail.yahoo.com> Message-ID: <65969F86-F7D6-4B46-BFF6-64502DCE2D96@stufft.io> > On Apr 20, 2015, at 4:50 PM, Daniel Holth wrote: > > I hadn't really thought about the categories of paths feature from > setup.py all the way through to the package format. I hope we can get > it done with somewhat fewer than 5 peps. It seems it will need much > more explaining than code. When you have an install base such as the Python packaging tools do, most changes take more effort to explain and hammer out than it takes to write the code; that's often the easiest part. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Apr 21 19:54:55 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 21 Apr 2015 13:54:55 -0400 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers Message-ID: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Just thought I'd share this since it shows how what people are using to download things from PyPI have changed over the past year. Of particular interest to most people will be the final graphs showing what percentage of downloads from PyPI are for Python 3.x or 2.x. As always it's good to keep in mind, "Lies, Damn Lies, and Statistics". I've tried not to bias the results too much, but some bias is unavoidable. Of particular note is that a lot of these numbers come from pip, and as of version 6.0 of pip, pip will cache downloads by default. This would mean that older versions of pip are more likely to "inflate" the downloads than newer versions since they don't cache by default. In addition if a project has a file which is used for both 2.x and 3.x and they do a ``pip install`` on the 2.x version first then it will show up as counted under 2.x but not 3.x due to caching (and of course the inverse is true, if they install on 3.x first it won't show up on 2.x). Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ Anyways, I'll have access to the data set for another day or two before I shut down the (expensive) server that I have to use to crunch the numbers so if there's anything anyone else wants to see before I shut it down, speak up soon. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From guido at python.org Tue Apr 21 20:01:15 2015 From: guido at python.org (Guido van Rossum) Date: Tue, 21 Apr 2015 11:01:15 -0700 Subject: [Distutils] [Python-Dev] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: Thanks for the detailed research! On Tue, Apr 21, 2015 at 10:54 AM, Donald Stufft wrote: > Just thought I'd share this since it shows how what people are using to > download things from PyPI have changed over the past year. Of particular > interest to most people will be the final graphs showing what percentage of > downloads from PyPI are for Python 3.x or 2.x. > > As always it's good to keep in mind, "Lies, Damn Lies, and Statistics". > I've > tried not to bias the results too much, but some bias is unavoidable. Of > particular note is that a lot of these numbers come from pip, and as of > version > 6.0 of pip, pip will cache downloads by default. This would mean that older > versions of pip are more likely to "inflate" the downloads than newer > versions > since they don't cache by default. In addition if a project has a file > which > is used for both 2.x and 3.x and they do a ``pip install`` on the 2.x > version > first then it will show up as counted under 2.x but not 3.x due to caching > (and > of course the inverse is true, if they install on 3.x first it won't show > up > on 2.x). > > Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ > > Anyways, I'll have access to the data set for another day or two before I > shut down the (expensive) server that I have to use to crunch the numbers > so if > there's anything anyone else wants to see before I shut it down, speak up > soon. 
> > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.badger at gmail.com Tue Apr 21 21:15:00 2015 From: a.badger at gmail.com (Toshio Kuratomi) Date: Tue, 21 Apr 2015 12:15:00 -0700 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: <20150421191500.GA10283@roan.lan> On Tue, Apr 21, 2015 at 01:54:55PM -0400, Donald Stufft wrote: > > Anyways, I'll have access to the data set for another day or two before I > shut down the (expensive) server that I have to use to crunch the numbers so if > there's anything anyone else wants to see before I shut it down, speak up soon. > Where are curl and wget getting categorized in the User Agent graphs? Just morbidly curious as to whether they're in with Browser and therefore mostly unused or Unknown and therefore only slightly less unused ;-) -Toshio -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From donald at stufft.io Tue Apr 21 21:21:49 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 21 Apr 2015 15:21:49 -0400 Subject: [Distutils] [Python-Dev] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <20150421191500.GA10283@roan.lan> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> <20150421191500.GA10283@roan.lan> Message-ID: > On Apr 21, 2015, at 3:15 PM, Toshio Kuratomi wrote: > > On Tue, Apr 21, 2015 at 01:54:55PM -0400, Donald Stufft wrote: >> >> Anyways, I'll have access to the data set for another day or two before I >> shut down the (expensive) server that I have to use to crunch the numbers so if >> there's anything anyone else wants to see before I shut it down, speak up soon. >> > Where are curl and wget getting categorized in the User Agent graphs? > > Just morbidly curious as to whether they're in with Browser and therefore > mostly unused or Unknown and therefore only slightly less unused ;-) They get classified as Unknown; here's the hacky script I use to parse the log files: https://bpaste.net/show/515017c78e32 --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From reinout at vanrees.org Wed Apr 22 00:18:43 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 22 Apr 2015 00:18:43 +0200 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: Donald Stufft wrote on 21-04-15 at 19:54: > Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ Nice, thanks!
I think it is safe to assume that buildout is grouped under "setuptools" (as it uses setuptools under the hood)? I really like buildout, but I must say that the amount of traction behind pip is quite intimidating :-) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From reinout at vanrees.org Wed Apr 22 00:26:17 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 22 Apr 2015 00:26:17 +0200 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: Donald Stufft schreef op 21-04-15 om 19:54: > Here's the link:https://caremad.io/2015/04/a-year-of-pypi-downloads/ > The last graph is very, very weird. The 'requests' library is very popular. Why on earth has python 2.6 gone from 10% to 25% market share, eating into python 2.7's share? I haven't seen that in any of the other graphs. Weird :-) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From donald at stufft.io Wed Apr 22 00:26:41 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 21 Apr 2015 18:26:41 -0400 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: <26BCB1D3-4813-4F6D-88AF-453C374CCB6C@stufft.io> > On Apr 21, 2015, at 6:18 PM, Reinout van Rees wrote: > > Donald Stufft schreef op 21-04-15 om 19:54: >> Here's the link:https://caremad.io/2015/04/a-year-of-pypi-downloads/ > Nice, thanks! > > I think it is safe to assume that buildout is grouped under "setuptools" (as it uses setuptools under the hood)? 
> > I really like buildout, but I must say that the amount of traction behind pip is quite intimidating :-) > Assuming buildout doesn't do anything that would cause the user-agent to be something other than what setuptools uses itself, yes that's a safe assumption. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Wed Apr 22 00:57:40 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 21 Apr 2015 18:57:40 -0400 Subject: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: <4CC475B2-A8E4-443D-B7F4-6E31E411521E@stufft.io> > On Apr 21, 2015, at 6:26 PM, Reinout van Rees wrote: > > Donald Stufft schreef op 21-04-15 om 19:54: >> Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ >> > The last graph is very, very weird. > > The 'requests' library is very popular. Why on earth has python 2.6 gone from 10% to 25% market share, eating into python 2.7's share? I haven't seen that in any of the other graphs. > > Weird :-) > I have a few guesses: * RHEL6 is still a major deployment target, but the versions of things packaged in it are getting older and older, which incentivizes people on that platform to start downloading things from PyPI instead of their package managers. Doing this for pure python libraries like requests is a lot easier than compiling a whole new Python for your RHEL6 box. * People using Python 2.6 are more likely to also be using a version of pip prior to 6.0, before caching was enabled by default, so you actually have more people installing on 2.7 but PyPI never sees that because it just serves from cache, whereas Python 2.6 users are not getting cached values.
* People are using a newer version of pip that caches by default, and they are running their tests with something like tox via ``tox`` and they have their envlist sorted like: py26,py27,py32,etc. In this case if they don't already have the file cached they'll download it with Python 2.6, cache it, then re-use that cache for all the subsequent Pythons. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Wed Apr 22 05:46:27 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 21 Apr 2015 23:46:27 -0400 Subject: [Distutils] [Python-Dev] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: <8A88415D-8766-4C44-B3F8-81096718FC9D@stufft.io> > On Apr 21, 2015, at 11:35 PM, Gregory P. Smith wrote: > > > > On Tue, Apr 21, 2015 at 10:55 AM Donald Stufft > wrote: > Just thought I'd share this since it shows how what people are using to > download things from PyPI have changed over the past year. Of particular > interest to most people will be the final graphs showing what percentage of > downloads from PyPI are for Python 3.x or 2.x. > > As always it's good to keep in mind, "Lies, Damn Lies, and Statistics". I've > tried not to bias the results too much, but some bias is unavoidable. Of > particular note is that a lot of these numbers come from pip, and as of version > 6.0 of pip, pip will cache downloads by default. This would mean that older > versions of pip are more likely to "inflate" the downloads than newer versions > since they don't cache by default.
In addition if a project has a file which > is used for both 2.x and 3.x and they do a ``pip install`` on the 2.x version > first then it will show up as counted under 2.x but not 3.x due to caching (and > of course the inverse is true, if they install on 3.x first it won't show up > on 2.x). > > Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ > > Anyways, I'll have access to the data set for another day or two before I > shut down the (expensive) server that I have to use to crunch the numbers so if > there's anything anyone else wants to see before I shut it down, speak up soon. > > Thanks! > > I like your focus on particular packages of note such as django and requests. > > How do CDNs influence these "lies"? I thought the download counts on PyPI were effectively meaningless due to CDN mirrors fetching and hosting things? > > Do we have user-agent logs from all PyPI package CDN mirrors or just from the master? > > -gps We took the download counts offline for a while because of the CDN, however within a month or two (now almost two years ago) they enabled logs on our account to bring them back. So these numbers are from the CDN edge and they reflect the "true" traffic. I say "true" because although we have logs, logging isn't considered an essential service, so in times of problems logging can be reduced or disabled completely (you can see in the data set some weeks had a massive drop, this was due to missing a day or two of logs). That being said though, on top of the Fastly provided CDN, there is also the ability to mirror PyPI (which shows up as bandersnatch or others in the logs) and if someone is installing from a mirror we don't see that data at all. On top of that, all versions of pip prior to 6.0 had an opt-in download cache which would mean that, on an opt-in basis, we wouldn't see downloads for those people and since 6.0 there is now an opt-out cache.
Specifically to the mirror network itself, that represents about 20% of the total traffic on PyPI, however we can determine when it was a mirror, and those downloads show up as "Unknown" in other charts since, with a mirror client, we don't know what the final target environment will be. This might mean that future snapshots will look at API accesses instead, or perhaps we try to implement some sort of optional popcon, or maybe we continue to look at package installs and we just interpret the data with the knowledge that these things are at play. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dev-mailings at gongy.de Wed Apr 22 12:13:41 2015 From: dev-mailings at gongy.de (Christoph Schmitt) Date: Wed, 22 Apr 2015 12:13:41 +0200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: Message-ID: <55373910.1070506@gongy.de> Hello again, since I haven't got any replies yet I'm trying to make myself a bit more precise now. I consider the behaviour described in my original posting a bug. I posted to this list because the setuptools docs say "Please use the distutils-sig mailing list [3] for questions and discussion about setuptools, and the setuptools bug tracker [4] ONLY for issues you have confirmed via the list". I have two questions, which I hope some expert here can answer: 1) How do I properly handle PEP420 namespace packages with setuptools and pip (Python >= 3.3) assuming the scenario described below? 2) Is the behaviour of setuptools/pip as I encountered it intended (including the solution I found) or is it a bug that should be filed?
So here is, with some more detail, what I am trying to do: I am using Python 3.4. There are two projects with modules in the namespace coolpkg. These implement the sub-packages coolpkg.bar and coolpkg.foo respectively. Both projects have (as allowed by PEP420) no coolpkg/__init__.py. Both projects have a setup.py using setuptools (15.0) to create a source distribution and will be installed using pip (6.1.1). In addition to that, there is another submodule coolpkg.baz, which will not be packaged/installed using setuptools/pip. Instead, the folder containing it will be added to the PYTHONPATH. Here is the complete layout of the projects and the additional module:

project-bar/
project-bar/setup.py
project-bar/src
project-bar/src/coolpkg
project-bar/src/coolpkg/bar
project-bar/src/coolpkg/bar/__init__.py
project-bar/src/coolpkg/bar/barmod.py
project-foo/
project-foo/setup.py
project-foo/src
project-foo/src/coolpkg
project-foo/src/coolpkg/foo
project-foo/src/coolpkg/foo/foomod.py
project-foo/src/coolpkg/foo/__init__.py
shady-folder/
shady-folder/coolpkg
shady-folder/coolpkg/baz
shady-folder/coolpkg/baz/__init__.py
shady-folder/coolpkg/baz/bazmod.py

My goal is to have a runtime configuration such that the modules coolpkg.foo.foomod, coolpkg.bar.barmod, coolpkg.baz.bazmod can all be imported. Test 1) (just a basic test to verify the general setup) Add project-bar/src, project-foo/src and shady-folder manually to sys.path Result: works (obviously) For further tests: Create source distributions with setup.py sdist for project-bar and project-foo, install them with pip and put shady-folder on the PYTHONPATH. Declare packages=['coolpkg', 'coolpkg.foo'] and packages=['coolpkg', 'coolpkg.bar'] in the respective setup.py, since find_packages does not play well with PEP420.
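[Editor's note: the explicit packages= lists above are needed because find_packages in setuptools 15.0 skips directories that lack an __init__.py. A minimal PEP 420-aware substitute could be hand-rolled; the helper below is a hypothetical sketch, not part of setuptools at the time, and the names are taken from the example layout in this mail.]

```python
import os

def find_pep420_packages(where):
    """Return dotted package names under *where*, treating every
    directory that contains a .py file as a package, with no
    __init__.py requirement (PEP 420 semantics).  Parent directories
    are registered too, so namespace levels like 'coolpkg' appear."""
    packages = set()
    for dirpath, _dirnames, filenames in os.walk(where):
        if not any(f.endswith(".py") for f in filenames):
            continue
        rel = os.path.relpath(dirpath, where)
        if rel == ".":
            continue  # files directly in *where* are top-level modules
        parts = rel.split(os.sep)
        # register the package itself and every parent namespace level
        for i in range(1, len(parts) + 1):
            packages.add(".".join(parts[:i]))
    return sorted(packages)

# In project-foo/setup.py this could replace the hand-written list:
# setup(name="project-foo", package_dir={"": "src"},
#       packages=find_pep420_packages("src"))
```

For the project-foo layout above this yields ['coolpkg', 'coolpkg.foo'], matching the hand-written declaration.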
Test 2)
Declare namespace_packages=['coolpkg'] in the setup.py of each project
Result: coolpkg.foo.foomod and coolpkg.bar.barmod can be imported, but importing coolpkg.baz.bazmod fails
Suspected reason of failure: pip creates *-nspkg.pth files which prevent coolpkg.baz from being found (breaking PEP420)

Test 3)
DO NOT declare namespace_packages=['coolpkg'] in the setup.py of each project
Result: all modules can be imported

To put it bluntly: The setup of test 2) is the one I would expect to work when writing a setup.py as described by the documentation, but pip unnecessarily creates *-nspkg.pth files which undo the whole point of PEP420. The setup of test 3) seems like an ugly hack, as we fool pip into not creating *-nspkg.pth files by hiding the existence of the namespace package from setuptools. To summarize my questions stated above: Which is a bug, which is intended, and is there a way to handle PEP420-compliant packages properly with the current versions of setuptools and pip? Kind regards, Christoph Schmitt Attachment: Example with scripts, modify setup.py files according to tests 2) and 3) Am 17.04.2015 um 09:31 schrieb Christoph Schmitt: > I am using the newest versions of setuptools (15.0) and pip (6.1.1) > with Python 3.4. I wanted to satisfy the following two requirements at > the same time, but had some trouble: > A) creating and installing two source distribution tarballs which have > the same top level namespace package (no __init__.py for namespace > package) with setuptools and pip > B) having other packages within the same namespace package outside of > /site-packages in the PYTHONPATH (not managed by > pip/setuptools) > > Since Python >= 3.3 supports namespace packages out of the box (PEP420) > this seemed to me like a straightforward task. However, it turned out > not to be. Either, setuptools would not find my modules, pip would not > install them properly or requirement B) was broken.
The solution that > worked for me, was to omit the > namespace_packages=['whatever_namespace'] declaration in setup.py (to > prevent the creation of *-nspkg.pth files by pip) and not to use > find_packages, which does not comply with PEP420 (no packages found). > > This solution is somewhat counter-intuitive and I am not sure, whether > it is an intended/valid configuration of a setup.py file. Definitely, > it is not clear from the documentation (which has a section about > namespace packages btw.). Since I read into > https://bitbucket.org/pypa/setuptools/issue/97 [1] I now know that > convenient support PEP420 is not easy to achieve for setuptools > (regarding find_packages). However it would have been very helpful, if > the documentation explained how to handle PEP420-compliant namespace > packages (without any __init__.py) in setup.py. At least I would have > expected a hint that there are caveats regarding PEP420 with a link to > issue 97. > > I also created a minimal example to reproduce the issue, which I can > provide if anyone is interested. > > Kind regards, > Christoph Schmitt > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig [2] Links: ------ [1] https://bitbucket.org/pypa/setuptools/issue/97 [2] https://mail.python.org/mailman/listinfo/distutils-sig [3] http://mail.python.org/pipermail/distutils-sig/ [4] https://bitbucket.org/pypa/setuptools/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: coolpkg-example.tar.gz Type: application/x-gzip Size: 1084 bytes Desc: not available URL: From robertc at robertcollins.net Wed Apr 22 12:38:41 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 22 Apr 2015 22:38:41 +1200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <55373910.1070506@gongy.de> References: <55373910.1070506@gongy.de> Message-ID: On 22 April 2015 at 22:13, Christoph Schmitt wrote: > Hello again, > > since I haven't got any replies yet I'm trying to make myself a bit more > precise now. I consider the behaviour described in my original posting a > bug. I posted to this list because the setuptools docs say "Please use the > distutils-sig mailing list [3] for questions and discussion about > Test 3) > DO NOT declare namespace_packages=['coolpkg'] in setup.py of each project > Result: all modules can be imported This is correct AFAICT. the setuptools namespace_packages thing predates PEP-420, and because PEP-420 namespaces don't interoperate with .pth file based packages (especially when you get into interactions between system non-PEP-420 + virtualenv PEP-420 packages!) changing this is super hard: you'll guarantee to break many existing installs. Perhaps there should be a new keyword, but since nothing is needed to make things work, it seems like it would be rather redundant. -Rob From mal at egenix.com Wed Apr 22 12:59:18 2015 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 22 Apr 2015 12:59:18 +0200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> Message-ID: <55377F06.5050509@egenix.com> On 22.04.2015 12:38, Robert Collins wrote: > On 22 April 2015 at 22:13, Christoph Schmitt wrote: >> Hello again, >> >> since I haven't got any replies yet I'm trying to make myself a bit more >> precise now. I consider the behaviour described in my original posting a >> bug.
I posted to this list because the setuptools docs say "Please use the >> distutils-sig mailing list [3] for questions and discussion about > >> Test 3) >> DO NOT declare namespace_packages=['coolpkg'] in setup.py of each project >> Result: all modules can be imported > > This is correct AFAICT. > > the setuptools namespace_packages thing predates PEP-420, and because > PEP-420 namespaces don't interoperate with .pth file based packages > (especially when you get into interactions between system non-PEP-420 > + virtualenv PEP-420 packages!) changing this is super hard: you'll > guarantee to break many existing installs. > > Perhaps there should be a new keyword, but since nothing is needed to > make things work, it seems like it would be rather redundant. You can make the namespace_packages keyword argument to setup() optional depending on which Python version is running it. I guess that's the only way forward unless you want to break the package for previous Python versions. However, doing so may be hard for namespaces which are used by a lot of packages. Perhaps setuptools could simply ignore the keyword for Python 3.3+ and then rely on PEP 420 to get things working in more or less the same way: https://www.python.org/dev/peps/pep-0420/ -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 22 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math.
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From greg at krypto.org Wed Apr 22 05:35:21 2015 From: greg at krypto.org (Gregory P. Smith) Date: Wed, 22 Apr 2015 03:35:21 +0000 Subject: [Distutils] [Python-Dev] Python 3.x Adoption for PyPI and PyPI Download Numbers In-Reply-To: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> References: <70D6B7F8-DB34-4484-BB17-730CE94E3F0E@stufft.io> Message-ID: On Tue, Apr 21, 2015 at 10:55 AM Donald Stufft wrote: > Just thought I'd share this since it shows how what people are using to > download things from PyPI have changed over the past year. Of particular > interest to most people will be the final graphs showing what percentage of > downloads from PyPI are for Python 3.x or 2.x. > > As always it's good to keep in mind, "Lies, Damn Lies, and Statistics". > I've > tried not to bias the results too much, but some bias is unavoidable. Of > particular note is that a lot of these numbers come from pip, and as of > version > 6.0 of pip, pip will cache downloads by default. This would mean that older > versions of pip are more likely to "inflate" the downloads than newer > versions > since they don't cache by default. In addition if a project has a file > which > is used for both 2.x and 3.x and they do a ``pip install`` on the 2.x > version > first then it will show up as counted under 2.x but not 3.x due to caching > (and > of course the inverse is true, if they install on 3.x first it won't show > up > on 2.x). > > Here's the link: https://caremad.io/2015/04/a-year-of-pypi-downloads/ > > Anyways, I'll have access to the data set for another day or two before I > shut down the (expensive) server that I have to use to crunch the numbers > so if > there's anything anyone else wants to see before I shut it down, speak up > soon. > Thanks! I like your focus on particular packages of note such as django and requests. How do CDNs influence these "lies"? 
I thought the download counts on PyPI were effectively meaningless due to CDN mirrors fetching and hosting things? Do we have user-agent logs from all PyPI package CDN mirrors or just from the master? -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From wichert at wiggy.net Wed Apr 22 14:18:07 2015 From: wichert at wiggy.net (Wichert Akkerman) Date: Wed, 22 Apr 2015 14:18:07 +0200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> Message-ID: On Wed, Apr 22, 2015 at 12:38 PM, Robert Collins wrote: > On 22 April 2015 at 22:13, Christoph Schmitt > wrote: > > Hello again, > > > > since I haven't got any replies yet I'm trying to make myself a bit more > > precise now. I consider the behaviour described in my original posting a > > bug. I posted to this list because the setuptools docs say "Please use > the > > distutils-sig mailing list [3] for questions and discussion about > > > Test 3) > > DO NOT delcare namespace_packages=['coolpkg'] in setup.py of each project > > Result: all modules can be imported > > This is correct AFAICT. > > the setuptools namespace_packages thing predates PEP-420, and because > PEP-420 namespaces don't interoperate with .pth file based packages > (expecially when you get into interactions between system non-PEP-420 > + virtualenv PEP-420 packages!) changing this is super hard: you'll > guarantee to break many existing installs. > If I remember things correctly I tried that a while ago, but it resulted in setuptools generating an sdist without any of the namespace packages. Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mal at egenix.com Wed Apr 22 17:09:38 2015 From: mal at egenix.com (M.-A. 
Lemburg) Date: Wed, 22 Apr 2015 17:09:38 +0200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> Message-ID: <5537B9B2.6050709@egenix.com> [adding list back on CC] On 22.04.2015 16:11, Christoph Schmitt wrote: > Am 2015-04-22 12:59, schrieb M.-A. Lemburg: >> On 22.04.2015 12:38, Robert Collins wrote: >>> On 22 April 2015 at 22:13, Christoph Schmitt wrote: >>>> Hello again, >>>> >>>> since I haven't got any replies yet I'm trying to make myself a bit more >>>> precise now. I consider the behaviour described in my original posting a >>>> bug. I posted to this list because the setuptools docs say "Please use the >>>> distutils-sig mailing list [3] for questions and discussion about >>> >>>> Test 3) >>>> DO NOT delcare namespace_packages=['coolpkg'] in setup.py of each project >>>> Result: all modules can be imported >>> >>> This is correct AFAICT. >>> >>> the setuptools namespace_packages thing predates PEP-420, and because >>> PEP-420 namespaces don't interoperate with .pth file based packages >>> (expecially when you get into interactions between system non-PEP-420 >>> + virtualenv PEP-420 packages!) changing this is super hard: you'll >>> guarantee to break many existing installs. >>> >>> Perhaps there should be a new keyword, but since nothing is needed to >>> make things work, it seems like it would be rather redundant. >> >> You can make use of the namespace_packages keyword argument to setup() >> optional depending on which Python version is running it. >> >> I guess that's the only way forward unless you want to break >> the package for previous Python versions. >> >> However, doing so may be hard for namespaces which are used >> by a lot of packages. 
>> >> Perhaps setuptools could simply ignore the keyword for >> Python 3.3+ and then rely on PEP 420 to get things working >> in more or less the same way: >> >> https://www.python.org/dev/peps/pep-0420/ > I would be fine with declaring namespace_packages conditionally. But doesn't this affect sdist in > another way than install (or pip install)? If an sdist intended for use with Python < 3.3 is created > with Python >= 3.3, the included metadata (egg-info) would look different (I don't know if pip > relies on egg-info or setup.py). The egg-info generated by setup.py at install time :-) > That would apply also if namespace_packages would be ignored > automatically for Python >= 3.3 as you proposed. > > As a consequence, two distributions were necessary. One with namespace_packages declared and > containing an __init__.py (with pkg_resources.declare_namespace) and another one without those > additions. But how does setuptools figure out to leave out the __init__.py for non-declared namespace > packages in the latter case? Like I mentioned above: it's probably better for setuptools to handle this in a consistent way rather than changing all setup.pys. One detail I'm not sure about is how compatible the two namespace package techniques really are. PEP 420 hand-waves over possible differences and only addresses pkgutil, not the setuptools approach, which is by far more common. For most simple applications not relying on any of the advanced features, they will most likely be compatible. I guess some experiments are necessary to figure that out. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 22 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ...
http://python.egenix.com/ ________________________________________________________________________ ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From robertc at robertcollins.net Wed Apr 22 21:08:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 23 Apr 2015 07:08:57 +1200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <5537B9B2.6050709@egenix.com> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> Message-ID: On 23 April 2015 at 03:09, M.-A. Lemburg wrote: > [adding list back on CC] > > On 22.04.2015 16:11, Christoph Schmitt wrote: >> Am 2015-04-22 12:59, schrieb M.-A. Lemburg: >>> On 22.04.2015 12:38, Robert Collins wrote: >> That would apply also if namespace_packages would be ignored >> automatically for Python >= 3.3 as you proposed. >> >> As a consequence, two distributions were neccessary. One with namespace_packages declared and >> containing an __init__.py (with pkg_resources.declare_namespace) and another one without those >> additions. But how does setuptools figure out to leave out the __init__.py for non-declard namespace >> packages in the latter case? > > Like I mentioned above: it's probably better for setuptools to handle > this in a consistent way rather than changing all setup.pys. I agree, but consider this situation - on any PEP-420 supporting python Two packages: name.A and name.B. name.A already installed on the machine systemwide using old-style namespace path hacks, and then do a wheel install of name.B. 
If the wheel for name.B was built expecting PEP-420, it won't be importable after install (because the path manipulation that sets up name as an old-style namespace happens in site.py). If the wheel was built expecting old-style namespaces, but A was installed using PEP-420, then A won't be installable after B is installed (same reason). The point of splitting the place the two are installed is to show that the user may not be able to fix the existing install. So - pip would have to a) detect both styles of package, b) automatically install all installed-but-wrong-style versions to match the site installed ones. And if any of the packages in the namespace only support legacy, everything would be clamped down to legacy. > One detail I'm not sure about is how compatible the two namespace > package techniques really are. The PEP 420 hand waves over possible > differences and only addresses pkgutil, not the setuptools > approach, which is by far more common. For most simply applications > not relying on any of the advanced features, they will most likely > be compatible. They are entirely incompatible. > I guess some experiments are necessary to figure that out. I spent a week last year debugging issues within openstack with namespace packages, local tree imports of same, and both pure venv and split system and venv environments. Some key interesting things: - the setuptools pth files aren't fully venv aware today - potentially fixable but the resulting pth file is fugly, so I decided not worth it. 
- local tree imports work a heck of a lot better in tox etc with PEP-420 namespaces - PEP-420 namespaces can work on older pythons with importlib, but - PEP-420 and legacy packages being mixed for one namespace doesn't work at all today - in principle fixable via changes to both setuptools and importlib - but it was about here that the other openstack folk said 'ok wow, lets just stop using namespace packages :) I think its a -lot- easier to reason about these things as two entirely separate features. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From mal at egenix.com Wed Apr 22 21:25:31 2015 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 22 Apr 2015 21:25:31 +0200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> Message-ID: <5537F5AB.7060804@egenix.com> On 22.04.2015 21:08, Robert Collins wrote: > On 23 April 2015 at 03:09, M.-A. Lemburg wrote: >> [adding list back on CC] >> >> On 22.04.2015 16:11, Christoph Schmitt wrote: >>> Am 2015-04-22 12:59, schrieb M.-A. Lemburg: >>>> On 22.04.2015 12:38, Robert Collins wrote: > > >>> That would apply also if namespace_packages would be ignored >>> automatically for Python >= 3.3 as you proposed. >>> >>> As a consequence, two distributions were neccessary. One with namespace_packages declared and >>> containing an __init__.py (with pkg_resources.declare_namespace) and another one without those >>> additions. But how does setuptools figure out to leave out the __init__.py for non-declard namespace >>> packages in the latter case? >> >> Like I mentioned above: it's probably better for setuptools to handle >> this in a consistent way rather than changing all setup.pys. > > I agree, but consider this situation - on any PEP-420 supporting python > > Two packages: name.A and name.B. 
name.A already installed on the > machine systemwide using old-style namespace path hacks, and then do a > wheel install of name.B. > > If the wheel for name.B was built expecting PEP-420, it won't be > importable after install (because the path manipulation that sets up > name as an old-style namespace happens in site.py). > > If the wheel was built expecting old-style namespaces, but A was > installed using PEP-420, then A won't be installable after B is > installed (same reason). > > The point of splitting the place the two are installed is to show that > the user may not be able to fix the existing install. > > So - pip would have to a) detect both styles of package, b) > automatically install all installed-but-wrong-style versions to match > the site installed ones. And if any of the packages in the namespace > only support legacy, everything would be clamped down to legacy. I don't think support mixed setups is really a practical option. Either the namespace package is legacy all the way, or it isn't and uses PEP 420. Wouldn't it be possible for setuptools or pip to work this out depending on the Python version ? Python < 3.3: use legacy system for everything Python >= 3.3: use PEP 420 for everything (and remove __init__.py files at install time as necessary) >> One detail I'm not sure about is how compatible the two namespace >> package techniques really are. The PEP 420 hand waves over possible >> differences and only addresses pkgutil, not the setuptools >> approach, which is by far more common. For most simply applications >> not relying on any of the advanced features, they will most likely >> be compatible. > > They are entirely incompatible. Oh, I meant from a functionality point of view, not the technology side. They both allow installing packages in different directory trees and that's the only feature most namespace packages use. >> I guess some experiments are necessary to figure that out. 
> > I spent a week last year debugging issues within openstack with > namespace packages, local tree imports of same, and both pure venv and > split system and venv environments. > > Some key interesting things: > - the setuptools pth files aren't fully venv aware today - > potentially fixable but the resulting pth file is fugly, so I decided > not worth it. > - local tree imports work a heck of a lot better in tox etc with > PEP-420 namespaces > - PEP-420 namespaces can work on older pythons with importlib, but > - PEP-420 and legacy packages being mixed for one namespace doesn't > work at all today - in principle fixable via changes to both > setuptools and importlib - but it was about here that the other > openstack folk said 'ok wow, let's just stop using namespace packages' > :) It's certainly easier to not use namespace packages and simply install packages into the same tree. The main reason for namespace packages in setuptools was the desire to stuff everything into ZIP files (the eggs) or egg directories, even when installed, to simplify uninstalls. As soon as you drop that requirement you can have the package manager deal with the complexities of having multiple packages share the same directory in site-packages. That's solvable as pip, RPM, apt, et al. show :-) But ok, that doesn't solve the issue of supporting namespace packages if the developers want to use them ;-) > I think it's a -lot- easier to reason about these things as two > entirely separate features. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 22 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::::: Try our mxODBC.Connect Python Database Interface for free !
:::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From eric at trueblade.com Wed Apr 22 22:16:52 2015 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 22 Apr 2015 16:16:52 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <5537F5AB.7060804@egenix.com> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> Message-ID: <553801B4.9050404@trueblade.com> On 04/22/2015 03:25 PM, M.-A. Lemburg wrote: > On 22.04.2015 21:08, Robert Collins wrote: >> So - pip would have to a) detect both styles of package, b) >> automatically reinstall all installed-but-wrong-style versions to match >> the site-installed ones. And if any of the packages in the namespace >> only support legacy, everything would be clamped down to legacy. > > I don't think supporting mixed setups is really a practical option. Isn't it supposed to be a supported feature? https://www.python.org/dev/peps/pep-0420/#migrating-from-legacy-namespace-packages I realize I should know this, but it's been a while since I wrote it. And I'd swear there are tests for this, but I can't find them right now. But I'm away from home and will have more time to research this later. Eric.
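For concreteness, the PEP 420 behaviour under discussion — portions of one namespace spread across several sys.path entries, with no __init__.py anywhere in the namespace directory — can be demonstrated with a self-contained sketch (the namespace and portion names here are invented for illustration; requires Python >= 3.3):

```python
import os
import sys
import tempfile

# Build two "portions" of a namespace package 'ns' in two separate
# directories, as if two distributions had been installed to two
# different sys.path locations.
root = tempfile.mkdtemp()
for portion, module in [("site_a", "bar"), ("site_b", "baz")]:
    pkg_dir = os.path.join(root, portion, "ns")
    os.makedirs(pkg_dir)
    # Deliberately no ns/__init__.py: that absence is exactly what
    # makes 'ns' a PEP 420 namespace package on Python >= 3.3.
    with open(os.path.join(pkg_dir, module + ".py"), "w") as f:
        f.write("VALUE = %r\n" % module)
    sys.path.insert(0, os.path.join(root, portion))

import ns.bar
import ns.baz

# Both portions import, and ns.__path__ spans both directories.
assert ns.bar.VALUE == "bar" and ns.baz.VALUE == "baz"
assert len(list(ns.__path__)) == 2
```

The same layout does nothing on Python 2 (which has no PEP 420), which is exactly the mixed-old-and-new problem the thread is wrestling with.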
From robertc at robertcollins.net Wed Apr 22 23:14:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 23 Apr 2015 09:14:57 +1200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <553801B4.9050404@trueblade.com> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: On 23 April 2015 at 08:16, Eric V. Smith wrote: > On 04/22/2015 03:25 PM, M.-A. Lemburg wrote: >> On 22.04.2015 21:08, Robert Collins wrote: >>> So - pip would have to a) detect both styles of package, b) >>> automatically reinstall all installed-but-wrong-style versions to match >>> the site-installed ones. And if any of the packages in the namespace >>> only support legacy, everything would be clamped down to legacy. >> >> I don't think supporting mixed setups is really a practical option. > > Isn't it supposed to be a supported feature? > https://www.python.org/dev/peps/pep-0420/#migrating-from-legacy-namespace-packages > > I realize I should know this, but it's been a while since I wrote it. > And I'd swear there are tests for this, but I can't find them right now. > But I'm away from home and will have more time to research this later. I'm not sure if that was done. Certainly the caveat there - no dynamic path support - is somewhat key for the scenarios I was looking at (e.g. tests in the current tree using 'setup develop' / 'pip install -e .').
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Apr 22 23:20:00 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 23 Apr 2015 09:20:00 +1200 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <5537F5AB.7060804@egenix.com> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> Message-ID: On 23 April 2015 at 07:25, M.-A. Lemburg wrote: > On 22.04.2015 21:08, Robert Collins wrote: > I don't think support mixed setups is really a practical option. > > Either the namespace package is legacy all the way, or it > isn't and uses PEP 420. > > Wouldn't it be possible for setuptools or pip to work this out > depending on the Python version ? Ah, ok so I think this is the crux - I'm arguing that Python version isn't a big enough check. Because anything installed with a current version of setuptools, or any wheel built likewise, is going to not have that per-Python-version check. And it seems to me that that implies that bringing in a per-Python-version check in a new release of setuptools or pip is going to result in mixed mode installs: install name.A with setuptools-X [legacy] upgrade setuptools install name.B with setuptools-Y [does a Python version check] -> boom But perhaps sufficient glue can be written to make it all work. My personal preferred migration strategy is: - have a flag day amongst the cooperating packages that make up the namespace - release versions that are all in the new layout in a batch to PyPI. It would be nice if PEP-426 had a conflicts stanza, so you could say conflicts: [name.A < version_with_new_X] without that implying that name.A *should* be installed. 
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From barry at python.org Wed Apr 22 23:41:45 2015 From: barry at python.org (Barry Warsaw) Date: Wed, 22 Apr 2015 17:41:45 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> Message-ID: <20150422174145.541224d2@anarchist.wooz.org> On Apr 23, 2015, at 07:08 AM, Robert Collins wrote: >PEP-420 and legacy packages being mixed for one namespace doesn't work at all >today - in principle fixable via changes to both setuptools and importlib - >but it was about here that the other openstack folk said 'ok wow, let's just >stop using namespace packages' :) I think this is actually by design, despite what PEP 420 says. If you have a portion that contains an __init__.py, that basically overrides any namespace portions found on sys.path. It's all or nothing. Now, where this comes up for me is in bilingual code. I have some namespace packages (e.g. flufl.*) which should live in the same parent package even if the portions live in distinct directories. Each package has an __init__.py that stitches things together for Python 2 (using the various common hack recipes), but of course installing this package in Python 3 means they aren't namespace packages, and this makes me sad. I wish there was some kind of exception or marker I could put in the flufl/__init__.py files that signaled PEP 420-aware Python 3s to treat it as if the __init__.py doesn't exist. In the Debian ecosystem, we solve this with package builder help. The standard helpers will actually not include the top-level __init__.py file for the Python 3 binary package version, so they'll be stitched-together namespace-ish for Python 2 and straight up PEP 420 namespace packages for Python 3.
Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From chris.barker at noaa.gov Wed Apr 22 23:25:40 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 22 Apr 2015 14:25:40 -0700 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: <553801B4.9050404@trueblade.com> References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: A note from the peanut gallery: I like the idea of namespace packages, but every time I've tried to use them, I've been stymied -- maybe this PEP will solve that, but... First - the issues: - It somehow seems like a lot of work, details to get right, and more-than-one-way-to-do-it. But maybe that's all pre-PEP 420 - Last time I tried, I couldn't get them to work with "setup.py develop" But at the core of this -- Why does it have to be so hard? It seems very simple to me -- what am I missing? What are namespace packages? To me, they are a package that serves no other purpose than to provide a single namespace in which to put other packages. This makes a lot of sense if you have a bunch of related packages where users may only require one, or a couple, but not all. And you want to be able to maintain them, and version control them independently. But to get this, it seems all we need is: 1) A directory with the top-level name 2) It has an (empty) __init__.py (so it is a python package) 3) It has other directories in it -- each of these are regular old python packages -- the ONLY difference is that they are installed under that name That's it. Done.
Now all we need is a way to install these things -- well that's easy, each sub-package installs itself just like it would, maybe overwriting the top level directory name and the __init__.py, if another sub-package has already installed it. But that's OK, because the name is by definition the same, and the __init__ is empty. This seems SO SIMPLE. No declaring things all over the place, no dynamic path manipulation, nothing unusual at all, except the ability to install a module into a dir without clobbering what might already be in that dir. What am I missing? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Apr 23 02:31:52 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 22 Apr 2015 20:31:52 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: > On Apr 22, 2015, at 5:25 PM, Chris Barker wrote: > > A note from the peanut gallery: > > I like the idea of namepace packages, but every time I've tried to use them, I've been stymied -- maybe this PEP will solve that, but... > > First - the issues: > > - It somehow seems like a lot of work, details to get right, and more-than-one-way-to-do-it. But maybe that's all pre- PEP 420 > > - Last time I tried, I couldn't get them to work with "setup.py develop" > > But at the core of this -- Why does it have to be so hard? It seems very simple to me -- what am I missing? > > What are namespace packages? 
To me, they are a package that serves no other purpose than to provide a single namespace in which to put other packages. This makes a lot of sense if you have a bunch of related packages where users may only require one, or a couple, but not all. And you want to be able to maintain them, and version control them independently. > > But to get this, it seems all we need is: > > 1) A directory with the top-level name > > 2) It has an (empty) __init__.py (so it is a python package) > > 3) It has other directories in it -- each of these are regular old python packages -- the ONLY difference is that they are installed under that name > > That's it. Done. Now all we need is a way to install these things -- well that's easy, each sub-package installs itself just like it would, maybe overwriting the top level directory name and the __init__.py, if another sub-package has already installed it. But that's OK, because the name is by definition the same, and the __init__ is empty. > > This seems SO SIMPLE. No declaring things all over the place, no dynamic path manipulation, nothing unusual at all, except the ability to install a module into a dir without clobbering what might already be in that dir. > > What am I missing? Prior to PEP 420 you needed the dynamic path stuff because sometimes your namespace package is split across multiple locations on sys.path. Relying on the filesystem as you mentioned only works if you install every single namespace package into the same directory. PEP 420 more or less solves all of the problems with namespace packages, other than it's a Python 3-only feature so most people aren't going to be willing to depend on it. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From eric at trueblade.com Thu Apr 23 03:49:29 2015 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 22 Apr 2015 21:49:29 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: <55384FA9.6090702@trueblade.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 04/22/2015 08:31 PM, Donald Stufft wrote: > >> On Apr 22, 2015, at 5:25 PM, Chris Barker > > wrote: >> >> A note from the peanut gallery: >> >> I like the idea of namepace packages, but every time I've tried >> to use them, I've been stymied -- maybe this PEP will solve that, >> but... >> >> First - the issues: >> >> - It somehow seems like a lot of work, details to get right, and >> more-than-one-way-to-do-it. But maybe that's all pre- PEP 420 >> >> - Last time I tried, I couldn't get them to work with "setup.py >> develop" >> >> But at the core of this -- Why does it have to be so hard? It >> seems very simple to me -- what am I missing? >> >> What are namespace packages? To me, they are a package that >> serves no other purpose than to provide a single namespace in >> which to put other packages. This makes a lot of sense if you >> have a bunch of related packages where users may only require >> one, or a couple, but not all. And you want to be able to >> maintain them, and version control them independently. 
>> >> But it seem to get this, all we need is: >> >> 1) A directory with the top-level name >> >> 2) It has an (empty) __init__.py (so it is a python package) >> >> 3) It has other directories in it -- each of these are regular >> old python packages -- the ONLY difference is that they are >> installed under that name >> >> That's it. Done. Now all we need is a way to install these things >> -- well that's easy, each sub-package installs itself just like >> it would, maybe overwriting the top level directory name and the >> __init__.py, if another sub-package has already installed it. But >> that's OK, because the name is by definition the same, and the >> __init__ is empty. >> >> This seems SO SIMPLE. No declaring things all over the place, no >> dynamic path manipulation, nothing unusual at all, except the >> ability to install a module into a dir without clobbering what >> might already be in that dir. >> >> What am I missing? > > > Prior to PEP 420 you needed the dynamic path stuff because > sometimes your namespace package is split across multiple locations > on sys.path. Relying on the filesystem as you mentioned only works > if you install every single namespace package into the same > directory. > > PEP 420 more or less solves all of the problems with namespace > packages, other than it?s a Python 3 only feature so most people > aren?t going to be willing to depend on it. Right. The problem is that there are 2 ways to install namespace packages. Say you have a namespace package foo, with 2 "portions" named bar and baz (see: https://www.python.org/dev/peps/pep-0420/#terminology). There are 2 ways to install these 2 portions: 1. in 2 different directories on sys.path, say /somewhere/foo-bar and /somewhere/foo-baz. 2. in a single /somewhere/foo directory on sys.path, with the files unioned on top of each other, say /somewhere/foo/bar and /somewhere/foo/baz. The problem that PEP 420 tries to solve is: what do I do with foo/__init__.py? 
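Eric's scenario 1 combined with the pre-PEP-420 answer — an identical foo/__init__.py shipped in every portion, whose only job is the pkgutil.extend_path() incantation — can be sketched as a self-contained example (the portion directory names are hypothetical):

```python
import os
import sys
import tempfile

# The boilerplate each portion carries in its own copy of foo/__init__.py:
INIT = (
    "from pkgutil import extend_path\n"
    "__path__ = extend_path(__path__, __name__)\n"
)

root = tempfile.mkdtemp()
for portion, module in [("foo-bar", "bar"), ("foo-baz", "baz")]:
    pkg_dir = os.path.join(root, portion, "foo")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write(INIT)  # identical copy in each portion
    with open(os.path.join(pkg_dir, module + ".py"), "w") as f:
        f.write("NAME = %r\n" % module)
    sys.path.insert(0, os.path.join(root, portion))

# Whichever foo/__init__.py is found first runs extend_path(), which
# scans sys.path for other directories named 'foo' and adds them to
# foo.__path__ -- so both portions become importable.
import foo.bar
import foo.baz
assert foo.bar.NAME == "bar" and foo.baz.NAME == "baz"
assert len(foo.__path__) == 2
```

Under scenario 2, both portions would instead ship that same foo/__init__.py into one shared directory — the file-ownership collision Eric describes.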
Prior to PEP 420, you'd include a copy in both portions. This foo/__init__.py would contain the magic incantations to call extend_path(), or whatever similar thing setuptools supports. This works great for scenario 1 above. You can install the bar portion or that baz portion, or both, and everything works fine. Note that all copies of /somewhere/foo/*/__init__.py have the same contents. The problem is in the second scenario. This is the scenario bdist_rpm supports. But which RPM owns /somewhere/foo/__init__.py if both portions are installed? It's the same file in terms of its contents, and you'd like them to just overlay each other (with either one winning), but with RPM and other package managers you can't do this, because you'd have 2 different packages both owning the same file. So what PEP 420 does is just delete foo/__init__.py. It's never needed, by any package. Now you can install all namespace packages with either scenario 1 or 2 above, and everything just works. Eric. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.14 (GNU/Linux) iQEcBAEBAgAGBQJVOE+pAAoJENxauZFcKtNxEf0H/jgUwkdj0q6CsxMC2UPPQg0o grjhL2FMVfbqhy74aJ0stJhBQ6+fSFR09b6LJ8va3Ql2iJLyXQVX0Kedhts9Hjud zpfMxpQdxeKE41QjCjeua5hQjTFpqQofxMmcwmjoDOB89Tn+30K1gatPJ4xzjTRc Lek5UT4yaZTS6mil61vdPUZMSWbxppJQBI0/EUmK9ps4vW3OfVxHMJK0AbvUbRnN oXRW+NlEhXL2FtMdaoApQQL1STmsH0+RrRY+XlgGbT2G1LbXNviWMLaookUKwLJd 9V4SjS/dgepO9hqL7glSZU7/3THOpF5548gwzmPKuq65bYkx0XzqjUQzaWE6Guo= =+J5a -----END PGP SIGNATURE----- From barry at python.org Fri Apr 24 01:38:50 2015 From: barry at python.org (Barry Warsaw) Date: Thu, 23 Apr 2015 19:38:50 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: <20150423193850.4c65b01f@anarchist.wooz.org> On Apr 22, 2015, at 
02:25 PM, Chris Barker wrote: >That's it. Done. Now all we need is a way to install these things -- well >that's easy, each sub-package installs itself just like it would, maybe >overwriting the top level directory name and the __init__.py, if another >sub-package has already installed it. This gets at the heart of the motivation for PEP 420, at least for me with my distro-developer hat on. Any single file can only be owned by exactly one distro-level package. So take the zope.* packages. Which distro package owns zope/__init__.py when you have zope.interface, zope.component, etc.? The answer is, none can, because you have no idea which one will be installed first. Distros had to go through a lot of hackery to make this work, but now, we can just delete zope/__init__.py from *all* distro packages (at least for the Python 3 versions) and it will just work. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From chris.barker at noaa.gov Fri Apr 24 01:57:46 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 23 Apr 2015 16:57:46 -0700 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> <553801B4.9050404@trueblade.com> Message-ID: On Wed, Apr 22, 2015 at 5:31 PM, Donald Stufft wrote: > This seems SO SIMPLE. > > ... > What am I missing? > > Prior to PEP 420 you needed the dynamic path stuff because sometimes your > namespace package is split across multiple locations on sys.path.
> OK -- sure you'd need it then -- still not sure why we need to scatter namespace packages all over the file system though -- or why we couldn't do it the quick and dirty and easy way for most cases, anyway... PEP 420 more or less solves all of the problems with namespace packages, > other than it's a Python 3-only feature so most people aren't going to be > willing to depend on it. > yeah there's always that -- maybe I'll revisit namespace packages when I've made the full transition to py3... On Thu, Apr 23, 2015 at 4:38 PM, Barry Warsaw wrote: > This gets at the heart of the motivation for PEP 420, at least for me with > my > distro-developer hat on. Any single file can only be owned by exactly one > distro-level package. > I see -- that's a pain, though it seems to me to be a limitation of the distro-packaging systems, not a python one -- though maybe we need to work with it anyway... And distro packages need shared directories already -- seems like a lot of work for an empty __init__.py ;-) Thanks for the clarification. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sat Apr 25 21:08:54 2015 From: pje at telecommunity.com (PJ Eby) Date: Sat, 25 Apr 2015 15:08:54 -0400 Subject: [Distutils] Proper handling of PEP420 namespace packages with setuptools and pip In-Reply-To: References: <55373910.1070506@gongy.de> <55377F06.5050509@egenix.com> <448b1fd779d24dcb804f7c7986b3deaf@wb09.serverdomain.org> <5537B9B2.6050709@egenix.com> <5537F5AB.7060804@egenix.com> Message-ID: On Wed, Apr 22, 2015 at 5:20 PM, Robert Collins wrote: > Ah, ok so I think this is the crux - I'm arguing that Python version > isn't a big enough check.
Because anything installed with a current > version of setuptools, or any wheel built likewise, is going to not > have that per-Python-version check. > > And it seems to me that that implies that bringing in a > per-Python-version check in a new release of setuptools or pip is > going to result in mixed mode installs: > > install name.A with setuptools-X [legacy] > upgrade setuptools > install name.B with setuptools-Y [does a Python version check] > -> boom > > But perhaps sufficient glue can be written to make it all work. When I wrote PEP 402 (the precursor to 420), the idea was that in a mixed environment you would need to: 1. Change pkg_resources' namespace system to support non-__init__ directories (and likewise pkgutil.extend_path) 2. Change easy_install's .pth generation magic to not do any magic or import a PEP 420 emulation module on old systems 3. Change package building tools to stop injecting __init__.py files This basically solves the mixed installation problem because if you have an __init__.py that uses existing magic, the empty dirs get folded in. You basically have a transitional state with mixed __init__ and non-__init__ stuff. If there happens to be an __init__.py, then as long as it declares the namespace then the local *runtime* takes care of making the runtime environment work. The *installation* tools don't have to manage mixed modes, they should just blindly install whatever package they have, and over the long term the packages all end up shipped without __init__.py's, but the __init__.py approach will continue to work basically forever. > My personal preferred migration strategy is: > - have a flag day amongst the cooperating packages that make up the namespace > - release versions that are all in the new layout in a batch to PyPI. > > It would be nice if PEP-426 had a conflicts stanza, so you could say > conflicts: [name.A < version_with_new_X] without that implying that > name.A *should* be installed. This is all *really* unnecessary. 
Setuptools has essentially *always* built non-egg, non-exe binary distributions in a PEP 420-compatible way (i.e., without __init__.py). And pkg_resources already builds a namespace path by asking importers if they can import a package at that location. So if PEP 420 importers say "yes" when asked to find_module('somepkg') in the case of an __init__-less subdirectory named 'somepkg', then pkg_resources *already* supports mixed-mode installations under PEP 420, and doesn't even need to be updated! I haven't checked whether that's the case, but if it is, then the only thing that setuptools needs to change is its .pth generation magic, to not do the magic if it's on a PEP 420 platform at runtime, and to stop including __init__.py's for namespace packages. From eu at doxos.eu Mon Apr 27 14:12:31 2015 From: eu at doxos.eu (Václav Šmilauer) Date: Mon, 27 Apr 2015 14:12:31 +0200 Subject: [Distutils] cross-compilation patches Message-ID: <553E27AF.6090101@doxos.eu> Hi everybody, I hope this is the right SIG for cross-compilation of the Python interpreter (not of 3rd party modules -- yet). As you may know, cross-compilation was declared unsupported for a long time and contributions in that area were not always gladly accepted. There have been numerous attempts to improve the situation, usually ending in patches in bugzilla which quickly became obsolete, resulting in frustration and great and repeated waste of manpower. The most active proponents of these patches were, it seems, Tarek Ziadé and Roumen Petrov. Roumen made an impressive set of patches against 3.4 at the end of 2013, all referenced from [1]; they are split into modernization of the mingw & cygwin classes (as a prominent case of cross-building), cross-building the interpreter core, cross-building core modules, and cross-installation. Each of these references individual issues (e.g. the interpreter core in [2] references 15 issues, each with a patch; core modules [3] have 24 sub-issues).
Since I have been needing cross-build support myself (building for Windows, and recently also Xeon Phi, as targets on a Linux host), I would like these patches not to fall under the table again. I was pointed to this SIG to see what could be done (or, more precisely, what I could do) for those patches to be progressively integrated. Some people mention that the infrastructure is currently missing a cross-building bot, without which regressions will soon creep in again. What can be done in this regard, and what is needed? Cheers, Václav --- [1] http://bugs.python.org/issue3871#msg199695 [2] http://bugs.python.org/issue17605 [3] http://bugs.python.org/issue18653 From shamsu.p007 at gmail.com Wed Apr 29 19:24:27 2015 From: shamsu.p007 at gmail.com (Shamsudheen Padath) Date: Wed, 29 Apr 2015 22:54:27 +0530 Subject: [Distutils] Query about serial module installation Message-ID: Hi, I have been trying to communicate through my USB port as part of my project work. For this I have a USB-to-serial converter installed in my system. I am working on Python 3.4.3 and installed the serial module pyserial-2.7.win32_py3k.exe downloaded from www.python.org/. My OS is 64-bit MS Windows 8.1. The trial program I have been trying is import serial ser = serial.Serial(5) print (ser.portstr) ser.write("hello") ser.close() and this produces an error message COM6 Traceback (most recent call last): File "C:\Users\shamsu\Desktop\ff.py", line 4, in ser.write("hello") # write a string File "C:\Python34\lib\site-packages\serial\serialwin32.py", line 283, in write data = to_bytes(data) File "C:\Python34\lib\site-packages\serial\serialutil.py", line 76, in to_bytes b.append(item) # this one handles int and str for our emulation and ints for Python 3.x TypeError: an integer is required As a newcomer to Python I couldn't tell what the problem is and got stuck on it.
URL: From robertc at robertcollins.net Wed Apr 29 22:32:51 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 30 Apr 2015 08:32:51 +1200 Subject: [Distutils] Query about serial module installation In-Reply-To: References: Message-ID: Hi, welcome to Python. This list is about how to install programs written in Python - as a matter of respect for the time of the other folk here, we can't really discuss the issue you've had. But - I can hopefully point you in a useful direction: You might find https://wiki.python.org/moin/BeginnersGuide/Programmers useful if you already know how to program, or https://wiki.python.org/moin/BeginnersGuide/NonProgrammers if you do not already know how to program. At a guess, the specific issue you are facing is a bytestring/textstring difference - you can find more out about those through the links above. HTH, Rob On 30 April 2015 at 05:24, Shamsudheen Padath wrote: > Hi, > I have been trying to communicate through my usb port through as part of my > project work. For this I have usb to serial converter installed in my > system. I am working on python 3.4.3 and installed the serial module > pyserial-2.7.win32_py3k.exe downloaded from www.pyhton.org/. My OS is 64 bit > MS Windows 8.1. The trial programme i have been trying is > > import serial > ser = serial.Serial(5) > print (ser.portstr) > ser.write("hello") > ser.close() > > and this produces an error message > > COM6 > Traceback (most recent call last): > File "C:\Users\shamsu\Desktop\ff.py", line 4, in > ser.write("hello") # write a string > File "C:\Python34\lib\site-packages\serial\serialwin32.py", line 283, in > write > data = to_bytes(data) > File "C:\Python34\lib\site-packages\serial\serialutil.py", line 76, in > to_bytes > b.append(item) # this one handles int and str for our emulation and > ints for Python 3.x > TypeError: an integer is required > > As a newer to the Python i couldn't what the problem is and got stuck on > it. 
> Expecting a solution to my problem. > > Shamsudheen p > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Thu Apr 30 01:34:52 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 30 Apr 2015 09:34:52 +1000 Subject: [Distutils] Query about serial module installation In-Reply-To: References: Message-ID: On 30 April 2015 at 06:32, Robert Collins wrote: > Hi, welcome to Python. > > This list is about how to install programs written in Python - as a > matter of respect for the time of the other folk here, we can't really > discuss the issue you've had. > > But - I can hopefully point you in a useful direction: > > You might find https://wiki.python.org/moin/BeginnersGuide/Programmers > useful if you already know how to program, or > https://wiki.python.org/moin/BeginnersGuide/NonProgrammers if you do > not already know how to program. There also appears to be a reasonably responsive pyserial community on Stack Overflow: http://stackoverflow.com/questions/tagged/pyserial Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
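Rob's guess about a bytestring/textstring difference can be made concrete without any serial hardware: in Python 3, iterating a str yields one-character strings, while iterating bytes yields integers — and the to_bytes() helper shown in the traceback appends each item to a bytearray, which only accepts integers. A sketch of the failure and the usual fix (to_bytes_like() here is a stand-in modelled on the traceback, not pyserial's actual code):

```python
def to_bytes_like(data):
    """Mimic the append loop shown in the pyserial 2.7 traceback."""
    b = bytearray()
    for item in data:
        b.append(item)  # needs an int; iterating a str yields 'h', 'e', ...
    return bytes(b)

# What ser.write("hello") ends up doing on Python 3: the TypeError
# from the original post.
try:
    to_bytes_like("hello")
    failed = False
except TypeError:
    failed = True
assert failed

# The fix: pass bytes, not str.
assert to_bytes_like(b"hello") == b"hello"
assert to_bytes_like("hello".encode("ascii")) == b"hello"
```

So in the original script, ser.write(b"hello") — or ser.write("hello".encode("ascii")) — should avoid the TypeError.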