From jim at jimfulton.info Sun Apr 2 13:24:45 2017 From: jim at jimfulton.info (Jim Fulton) Date: Sun, 2 Apr 2017 13:24:45 -0400 Subject: [Distutils] Pain installing pinned versions of setuptools requirements. Message-ID: Today, I ran into trouble working with an old project that had six pinned to version 1.1.0. The install failed because buildout tried to install it as 1.10.0 and failed because 1.10.0 was already installed. The problem arose because six's setup.py imports setuptools and then imports six to get __version__. When Buildout runs a setup script, it puts its own path ahead of the distribution, so the setup script would get whatever version buildout was running. IMO, this is a six bug, but wait, there's more. I tried installing a pinned version with pip, using ``pip install -U six==1.9.0``. This worked. I then tried with version 1.1.0, and this failed, because setuptools wouldn't work with 1.1.0. Pip puts the distribution ahead of its own path when running a setup script. setuptools requires six >= 1.6, so pip can't be used to install pinned versions (in requirements.txt) earlier than 1.6. Six is a wildly popular package and has been around for a long time. Earlier pins are likely. I raise this here in the broader context of managing clashes between setuptools' requirements and the requirements of the libraries (and applications using them) it's installing. I think Buildout's approach of putting its path first is better, although it was more painful in this instance. I look forward to a time when we don't run scripts at install time (or are at least wildly less likely to). Buildout is growing wheel support. It should have provided a workaround, but: - I happened to be trying to install a 1.1 pin and the earliest six wheel is for 1.. - I tried installing six 1.8. Buildout's wheel extension depended on pip, which depends on setuptools and six.
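The failure mode described above comes from a setup.py that imports the package it is installing in order to read __version__, so it reports whichever copy happens to be first on sys.path. A common defensive pattern (a generic sketch of the technique, not six's actual code; the helper name is invented) is to parse the version out of the source file without importing it:

```python
import os
import re

def read_version(package_dir, filename="__init__.py"):
    """Extract __version__ from a module's source without importing it,
    so the setup script never picks up an already-installed copy that
    happens to shadow the source tree on sys.path."""
    path = os.path.join(package_dir, filename)
    with open(path) as f:
        source = f.read()
    match = re.search(r'^__version__\s*=\s*["\']([^"\']+)["\']',
                      source, re.MULTILINE)
    if match is None:
        raise RuntimeError("Unable to find __version__ in %s" % path)
    return match.group(1)
```

A setup.py calling ``setup(version=read_version(...))`` then reports the version of the checkout being installed, regardless of what the install tool has put ahead of it on sys.path.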
When buildout tries to load the extension, it tries to get the extension's dependencies, which include six while honoring the version pin, which means it has to install six before it has wheel support. Obviously, this is Buildout's problem, but it illustrates the complexity that arises when packaging dependencies overlap dependencies of packages being managed. IDK what the answer is. I'm just (re-)raising the issue and providing a data point. I suspect that packaging tools should manage their own dependencies independently. IIUC, that's what was happening until recently for the pypa tools, through vendoring. I didn't like vendoring, but I'm starting to see the wisdom of it. :) Jim -- Jim Fulton http://jimfulton.info -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Apr 2 13:29:36 2017 From: donald at stufft.io (Donald Stufft) Date: Sun, 2 Apr 2017 13:29:36 -0400 Subject: [Distutils] Pain installing pinned versions of setuptools requirements. In-Reply-To: References: Message-ID: <3DE4CFCD-7907-43D9-891E-34ADE7CA23B7@stufft.io> > On Apr 2, 2017, at 1:24 PM, Jim Fulton wrote: > Can you post this on https://github.com/pypa/setuptools/issues/980? That's where most of the discussion from the fallout of setuptools devendoring has concentrated. -- Donald Stufft From jim at jimfulton.info Sun Apr 2 13:32:45 2017 From: jim at jimfulton.info (Jim Fulton) Date: Sun, 2 Apr 2017 13:32:45 -0400 Subject: [Distutils] Pain installing pinned versions of setuptools requirements. In-Reply-To: <3DE4CFCD-7907-43D9-891E-34ADE7CA23B7@stufft.io> References: <3DE4CFCD-7907-43D9-891E-34ADE7CA23B7@stufft.io> Message-ID: On Sun, Apr 2, 2017 at 1:29 PM, Donald Stufft wrote: > > On Apr 2, 2017, at 1:24 PM, Jim Fulton wrote: > > > > Can you post this on https://github.com/pypa/setuptools/issues/980?
That's > where most of the discussion from the fallout of setuptools devendoring > has concentrated. > Yup. Jim -- Jim Fulton http://jimfulton.info From donald at stufft.io Mon Apr 3 16:11:31 2017 From: donald at stufft.io (Donald Stufft) Date: Mon, 3 Apr 2017 16:11:31 -0400 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? Message-ID: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> CentOS5 is EOL now and the repositories have been taken offline. Does this impact manylinux1? Do we need to work on a manylinux2? -- Donald Stufft From njs at pobox.com Mon Apr 3 18:30:35 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 3 Apr 2017 15:30:35 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> Message-ID: On Mon, Apr 3, 2017 at 1:11 PM, Donald Stufft wrote: > CentOS5 is EOL now and the repositories have been taken offline. Does this > impact manylinux1? Do we need to work on a manylinux2? Ugh, if we don't have access to the repositories then the existing manylinux1 docker images will keep working, but we won't be able to rebuild or otherwise modify them :-(. Todo: - PEP update or a new PEP: I think we can define manylinux2 and manylinux3 as basically the same as manylinux1, except: - dropping ncurses as an allowed library (https://github.com/pypa/manylinux/issues/94) - bumping up the version for glibc and friends - I guess adding ppc support for manylinux3, while we're at it?
- pip patch to sniff manylinux2/3 compatibility -- should be pretty trivial since most of the code will be shared with manylinux1, see pip/pep425tags.py - update auditwheel to handle the new policies -- shouldn't be too hard since auditwheel already breaks this out into a "policy file"; maybe a bit of UX work to figure out how to pick the appropriate policy. (I guess ideally the default should be to use the smallest manylinux that it detects compatibility with, since every manylinux2-compatible package is also manylinux3-compatible, etc.) - new docker images -- should be able to re-use a fair amount of stuff from the manylinux1 docker images, but there will be differences in the details; hard to predict how much work this will be. I'm happy to provide pointers but can't realistically take on much or any of this myself right now. -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Mon Apr 3 18:34:26 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 3 Apr 2017 15:34:26 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> Message-ID: On Mon, Apr 3, 2017 at 3:30 PM, Nathaniel Smith wrote: > On Mon, Apr 3, 2017 at 1:11 PM, Donald Stufft wrote: >> CentOS5 is EOL now and the repositories have been taken offline. Does this >> impact manylinux1? Do we need to work on a manylinux2? > > Ugh, if we don't have access to the repositories then the existing > manylinux1 docker images will keep working, but we won't be able to > rebuild or otherwise modify them :-(. Ok, it looks like the main CentOS 5 repo is still available at vault.centos.org, and EPEL 5 and the devtools are still available at their original locations, so this is a little less of an emergency than it might have been. Still probably a good idea to move on ASAP though :-).
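The manylinux compatibility "sniffing" mentioned in the todo list above mostly reduces to asking the running C library for its glibc version and comparing it against the baseline each manylinux profile guarantees (glibc 2.5 for CentOS-5-based manylinux1). Here is a rough sketch of the idea; the function names and the assumed CentOS-6/glibc-2.12 baseline for manylinux2 are illustrative, not pip's actual code:

```python
import ctypes

def glibc_version_string():
    """Return the runtime glibc version, e.g. "2.17", or None when not
    running on glibc (musl, macOS, Windows, ...)."""
    try:
        # dlopen(NULL) gives a handle to the main program's namespace,
        # so we ask the C library our process is actually linked against.
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return None
    gnu_get_libc_version.restype = ctypes.c_char_p
    version = gnu_get_libc_version()
    if isinstance(version, bytes):
        version = version.decode("ascii")
    return version

def have_compatible_glibc(required_major, minimum_minor):
    """True if the runtime glibc is required_major.x with x >= minimum_minor."""
    version_str = glibc_version_string()
    if version_str is None:
        return False
    major, minor = (int(piece) for piece in version_str.split(".")[:2])
    return major == required_major and minor >= minimum_minor

# Illustrative baselines: manylinux1 ~ CentOS 5 (glibc 2.5);
# a CentOS-6-based manylinux2 would correspond to glibc 2.12.
MANYLINUX_GLIBC = {"manylinux1": (2, 5), "manylinux2": (2, 12)}
```

A pip-side check would then tag the platform with the newest profile whose `(major, minor)` pair passes `have_compatible_glibc`, alongside the existing architecture whitelist.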
(URLs for those who want to check for themselves in the future: http://vault.centos.org/5.11/ https://dl.fedoraproject.org/pub/epel/5/ https://people.centos.org/tru/devtools-2/ ) -n -- Nathaniel J. Smith -- https://vorpus.org From bruno.rosa at eldorado.org.br Tue Apr 4 13:17:51 2017 From: bruno.rosa at eldorado.org.br (Bruno Alexandre Rosa) Date: Tue, 4 Apr 2017 17:17:51 +0000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> Message-ID: <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> We would be very glad to help add ppc support for manylinux3 :) Regards, Bruno Rosa -----Original Message----- From: Distutils-SIG [mailto:distutils-sig-bounces+bruno.rosa=eldorado.org.br at python.org] On Behalf Of Nathaniel Smith Sent: Monday, 3 April 2017 19:34 To: Donald Stufft Cc: distutils sig Subject: Re: [Distutils] CentOS5 is EOL, impact on manylinux1? On Mon, Apr 3, 2017 at 3:30 PM, Nathaniel Smith wrote: > On Mon, Apr 3, 2017 at 1:11 PM, Donald Stufft wrote: >> CentOS5 is EOL now and the repositories have been taken offline. Does >> this impact manylinux1? Do we need to work on a manylinux2? > > Ugh, if we don't have access to the repositories then the existing > manylinux1 docker images will keep working, but we won't be able to > rebuild or otherwise modify them :-(. Ok, it looks like the main CentOS 5 repo is still available at vault.centos.org, and EPEL 5 and the devtools are still available at their original locations, so this is a little less of an emergency than it might have been. Still probably a good idea to move on ASAP though :-). (URLs for those who want to check for themselves in the future: http://vault.centos.org/5.11/ https://dl.fedoraproject.org/pub/epel/5/ https://people.centos.org/tru/devtools-2/ ) -n -- Nathaniel J.
Smith -- https://vorpus.org _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig From njs at pobox.com Tue Apr 4 21:12:32 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 4 Apr 2017 18:12:32 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On Tue, Apr 4, 2017 at 10:17 AM, Bruno Alexandre Rosa wrote: > We would be very glad to help add ppc support for manylinux3 :) I guess if we're going ahead and doing manylinux3 now, then we can probably roll in ppc support as part of the same process? The one major hiccup is that our current Travis-CI-based build pipeline won't work for ppc, but there's lots of stuff to do before that that could be shared and where we'd welcome help (see my other email...) -n > Regards, > Bruno Rosa > > -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig-bounces+bruno.rosa=eldorado.org.br at python.org] On Behalf Of Nathaniel Smith > Sent: Monday, 3 April 2017 19:34 > To: Donald Stufft > Cc: distutils sig > Subject: Re: [Distutils] CentOS5 is EOL, impact on manylinux1? > > On Mon, Apr 3, 2017 at 3:30 PM, Nathaniel Smith wrote: >> On Mon, Apr 3, 2017 at 1:11 PM, Donald Stufft wrote: >>> CentOS5 is EOL now and the repositories have been taken offline. Does >>> this impact manylinux1? Do we need to work on a manylinux2? >> >> Ugh, if we don't have access to the repositories then the existing >> manylinux1 docker images will keep working, but we won't be able to >> rebuild or otherwise modify them :-(.
> > Ok, it looks like the main CentOS 5 repo is still available at vault.centos.org, and EPEL 5 and the devtools are still available at their original locations, so this is a little less of an emergency than it might have been. Still probably a good idea to move on ASAP though :-). > > (URLs for those who want to check for themselves in the future: > http://vault.centos.org/5.11/ > https://dl.fedoraproject.org/pub/epel/5/ > https://people.centos.org/tru/devtools-2/ > ) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Wed Apr 5 01:10:38 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 5 Apr 2017 15:10:38 +1000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On 5 April 2017 at 11:12, Nathaniel Smith wrote: > On Tue, Apr 4, 2017 at 10:17 AM, Bruno Alexandre Rosa > wrote: >> We would be very glad to help add ppc support for manylinux3 :) > > I guess if we're going ahead and doing manylinux3 now, then we can > probably roll in ppc support as part of the same process? That was my main feedback on the draft manylinux3-for-ppc64le PEP: it's so close to also covering the manylinux1 architectures that it seems more sensible to me to just go ahead and fully define manylinux3, rather than only doing the ppc64le bits and then needing to somehow impose architecture restrictions at the PyPI and installation tool level. > The one > major hiccup is that our current Travis-CI-based build pipeline won't > work for ppc, but there's lots of stuff to do before that that could > be shared and where we'd welcome help (see my other email...)
It may also be worth getting in touch with the CentOS CI folks about those aspects: https://wiki.centos.org/QaWiki/CI/GettingStarted We're looking at that for CI on my current project after realising that Travis CI probably wouldn't work for us [1] and it has some very nice properties for this kind of thing: - the build/CI environments are (or can be) bare metal, so they're well suited to spinning up virtual machines with Vagrant - the CentOS Vagrant boxes are pre-cached in the service infrastructure - you have root on the box during a run, so you can use your config management system of choice to set it up [2] Cheers, Nick. [1] https://github.com/leapp-to/prototype/issues/10 [2] We're probably going to go with: 1) use curl to bootstrap pip; 2) use pip to bootstrap upstream Ansible; 3) use Ansible to configure the machine to run the tests -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Wed Apr 5 01:59:31 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 4 Apr 2017 22:59:31 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On Tue, Apr 4, 2017 at 10:10 PM, Nick Coghlan wrote: > On 5 April 2017 at 11:12, Nathaniel Smith wrote: >> On Tue, Apr 4, 2017 at 10:17 AM, Bruno Alexandre Rosa >> wrote: >>> We would be very glad to help adding ppc support for manylinux3 :) >> >> I guess if we're going ahead and doing manylinux3 now, then we can >> probably roll in ppc support as part of the same process? > > That was my main feedback on the draft manylinux3-for-ppc64le PEP: > it's so close to also covering the manylinux1 architectures that it > seems more sensible to me to just go ahead and fully define > manylinux3, rather than only doing the ppc64le bits and then needing > to somehow impose architecture restrictions at the PyPI and > installation tool level. 
Well, manylinux1 is spec'ed to be x86_64 and i686 only, and we already implemented the supported architectures list in pip: https://github.com/pypa/pip/blob/491294f61e37d766720aead97b4fb008cfc2e51d/pip/pep425tags.py#L146 so that part's not a big deal either way, but it's certainly more efficient to share all the spec and tooling work :-) >> The one >> major hiccup is that our current Travis-CI-based build pipeline won't >> work for ppc, but there's lots of stuff to do before that that could >> be shared and where we'd welcome help (see my other email...) > > It may also be worth getting in touch with the CentOS CI folks about > those aspects: https://wiki.centos.org/QaWiki/CI/GettingStarted That page seems to say that for CentOS 7 they only support x86_64? -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Wed Apr 5 05:37:57 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 5 Apr 2017 19:37:57 +1000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On 5 April 2017 at 15:59, Nathaniel Smith wrote: > On Tue, Apr 4, 2017 at 10:10 PM, Nick Coghlan wrote: >> It may also be worth getting in touch with the CentOS CI folks about >> those aspects: https://wiki.centos.org/QaWiki/CI/GettingStarted > > That page seems to say that for CentOS 7 they only support x86_64? 
Sorry, I should have spelled out that chain of logic more clearly: - ppc64le machines aren't that easy to come by, especially in public cloud services - a build approach based on ppc64le VMs running on x86_64 hosts is likely to be easier to manage - that means the desired CI service feature is "run arbitrary VMs", rather than "bare metal ppc64le systems" You can kinda sorta do that to a limited degree in Travis, but our experience was that the CentOS Vagrant boxes didn't work by default, so we asked the CentOS CI maintainers what our options might be over there. And as it turns out, not only are we able to get bare metal machines to run our VMs on (rather than messing about with nested virt support), but the CentOS boxes are pre-cached on the local network, so they should be pretty quick to download (we haven't actually got our CI up and running yet, so I won't be able to vouch for that personally until we do). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.barker at noaa.gov Wed Apr 5 11:52:44 2017 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 5 Apr 2017 08:52:44 -0700 Subject: [Distutils] Which commercial vendor? In-Reply-To: <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> Message-ID: <-4712602602821257844@unknownmsgid> > On Mar 30, 2017, at 1:53 AM, Thomas Güttler wrote: > > > My frustration has reached a limit. Yes, I am willing to pay money. > > Which vendor do you suggest for reliable package management? You may want conda -- it's an open source project, and you can get commercial support through Continuum Analytics. Though conda and Continuum were born in the data science / scientific computing world, there is nothing about conda specific to data science.
It's just that computational work poses extra challenges for packaging -- but the easy stuff is still easy. And the conda-forge project provides an open source way to support a larger community. -CHB From ncoghlan at gmail.com Wed Apr 5 21:41:03 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 6 Apr 2017 11:41:03 +1000 Subject: [Distutils] Which commercial vendor? In-Reply-To: <-4712602602821257844@unknownmsgid> References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: On 6 April 2017 at 01:52, Chris Barker - NOAA Federal wrote: >> On Mar 30, 2017, at 1:53 AM, Thomas Güttler wrote: >> >> >> My frustration has reached a limit. Yes, I am willing to pay money. >> >> Which vendor do you suggest for reliable package management? > > You may want conda -- it's an open source project, and you can get > commercial support through Continuum Analytics. > > Though conda and Continuum were born in the data science / scientific > computing world, there is nothing about conda specific to data > science. It's just that computational work poses extra challenges > for packaging -- but the easy stuff is still easy. PayPal Engineering put together a decent write-up of their path towards adopting that model last year: https://www.paypal-engineering.com/2016/09/07/python-packaging-at-paypal/ It's definitely a reasonable way to go for organisational infrastructure, but even conda doesn't cover all the potential use cases that are out there (something I discuss a bit in http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html ) Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ubernostrum at gmail.com Thu Apr 6 05:05:39 2017 From: ubernostrum at gmail.com (James Bennett) Date: Thu, 6 Apr 2017 02:05:39 -0700 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Bumping this because Django 1.11 is out, so 'Framework :: Django :: 1.11' would be a useful thing to have. From guettliml at thomas-guettler.de Thu Apr 6 10:32:22 2017 From: guettliml at thomas-guettler.de (Thomas Güttler) Date: Thu, 6 Apr 2017 16:32:22 +0200 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages Message-ID: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Dear Nick and other distutils listeners, Nick wrote this about seven months ago: http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html I love Python and I use it daily. On the other hand there are other interesting programming languages out there. Why not do "thinking in sets" here and see python just as one item in the list of languages? Let's dream: All languages should be supported in the best packaging solution of the future. What do you think? Regards, Thomas -- Thomas Guettler http://www.thomas-guettler.de/ From robertc at robertcollins.net Thu Apr 6 10:41:07 2017 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 7 Apr 2017 02:41:07 +1200 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: I wrote this 5 years ago, and it's largely still true as far as I can tell of the surrounding systems. Snappy-core, for instance, isn't targeting MacOS X or Windows (last I checked anyhow ...)
https://rbtcollins.wordpress.com/2012/08/27/why-platform-specific-package-systems-exist-and-wont-go-away/ -Rob On 7 April 2017 at 02:32, Thomas Güttler wrote: > Dear Nick and other distutils listeners, > > Nick wrote this about seven months ago: > > http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html > > I love Python and I use it daily. > > On the other hand there are other interesting programming languages out > there. > > Why not do "thinking in sets" here and see python just as one item in the > list of languages? > > Let's dream: All languages should be supported in the best packaging > solution of the future. > > What do you think? > > Regards, > Thomas > > > > -- > Thomas Guettler http://www.thomas-guettler.de/ From njs at pobox.com Thu Apr 6 11:22:44 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 6 Apr 2017 08:22:44 -0700 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On Apr 6, 2017 7:32 AM, "Thomas Güttler" wrote: Dear Nick and other distutils listeners, Nick wrote this about seven months ago: http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html I love Python and I use it daily. On the other hand there are other interesting programming languages out there. Why not do "thinking in sets" here and see python just as one item in the list of languages? Let's dream: All languages should be supported in the best packaging solution of the future. What do you think? This is basically what conda attempts to do. It's nice in some ways, but does also have limitations. In any case this isn't a very useful thing to post here.
Distutils and pip and pypi aren't going anywhere, and the folks here are all at their limit trying to keep them from falling over, so taking up time with these super vague blue sky ideas is a bit rude. -n From ncoghlan at gmail.com Thu Apr 6 11:26:18 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 7 Apr 2017 01:26:18 +1000 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On 7 April 2017 at 00:32, Thomas Güttler wrote: > Dear Nick and other distutils listeners, > > Nick wrote this about seven months ago: > > http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html > > I love Python and I use it daily. > > On the other hand there are other interesting programming languages out > there. > > Why not do "thinking in sets" here and see python just as one item in the > list of languages? > > Let's dream: All languages should be supported in the best packaging > solution of the future. > > What do you think? Package management that is both language and platform independent is precisely the role conda fills, but as I explain in the post, that doesn't eliminate the need for a language specific plugin manager for Python runtimes (which is the role pip fills), and nor does it eliminate the need for platform specific component managers (which is the role filled by tools like apt-get, yum/dnf, OneGet, etc). The language independent packaging problem is *way* out of scope for distutils-sig as an entity though - while it's a genuine use case, there are other more appropriate communities for discussing it (such as those related to conda development). Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.barker at noaa.gov Thu Apr 6 15:44:44 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 6 Apr 2017 12:44:44 -0700 Subject: [Distutils] Which commercial vendor? In-Reply-To: References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: On Wed, Apr 5, 2017 at 6:41 PM, Nick Coghlan wrote: > PayPal Engineering put together a decent write-up of their path > towards adopting that model last year: > https://www.paypal-engineering.com/2016/09/07/python-packaging-at-paypal/ Thanks for that link. We're a much smaller shop, but have had pretty much the same experience -- also really good to see them mention miniconda at the end -- I think that's a much better way to go for most folks than the whole Anaconda pile. > It's definitely a reasonable way to go for organisational > infrastructure, but even conda doesn't cover all the potential use > cases that are out there of course not -- nothing does, but I would add to the contents of that post: The conda-forge project is a major boon to the conda infrastructure -- there is now a robust way for the community to expand the available number of packages (and keep up with more recent versions). And anyone can take advantage of that infrastructure for their own (selfish?) needs: Chances are, there will be a package or two that you rely on that is not in conda defaults (maintained by Continuum) or currently in conda-forge. So you can pip-install those few -- but what if they aren't on PyPI either? or are hard to compile and install with ugly dependencies? You can contribute build recipes to conda-forge, and then have it for you, and all your users, and the rest of the world to access. Much better than hand-maintaining stuff yourself. My pain point now is still full multi-platform support.
conda has package versions that are platform independent, but it can still be hard to get everything built in the same version on all platforms, so it does get a bit ugly. But no other solution makes that better anyway. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From bruno.rosa at eldorado.org.br Thu Apr 6 15:39:51 2017 From: bruno.rosa at eldorado.org.br (Bruno Alexandre Rosa) Date: Thu, 6 Apr 2017 19:39:51 +0000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: <11ff4b2baf2c4168a3c35e70a60b92a4@serv030.corp.eldorado.org.br> Please correct me if I'm wrong: so, that means - in theory - we can use CentOS CI for ppc64le machines and migrate away from Travis? Kind Regards, Bruno Rosa -----Original Message----- From: Nick Coghlan [mailto:ncoghlan at gmail.com] Sent: Wednesday, 5 April 2017 06:38 To: Nathaniel Smith Cc: Bruno Alexandre Rosa ; distutils sig Subject: Re: [Distutils] CentOS5 is EOL, impact on manylinux1? On 5 April 2017 at 15:59, Nathaniel Smith wrote: > On Tue, Apr 4, 2017 at 10:10 PM, Nick Coghlan wrote: >> It may also be worth getting in touch with the CentOS CI folks about >> those aspects: https://wiki.centos.org/QaWiki/CI/GettingStarted > > That page seems to say that for CentOS 7 they only support x86_64?
Sorry, I should have spelled out that chain of logic more clearly: - ppc64le machines aren't that easy to come by, especially in public cloud services - a build approach based on ppc64le VMs running on x86_64 hosts is likely to be easier to manage - that means the desired CI service feature is "run arbitrary VMs", rather than "bare metal ppc64le systems" You can kinda sorta do that to a limited degree in Travis, but our experience was that the CentOS Vagrant boxes didn't work by default, so we asked the CentOS CI maintainers what our options might be over there. And as it turns out, not only are we able to get bare metal machines to run our VMs on (rather than messing about with nested virt support), but the CentOS boxes are pre-cached on the local network, so they should be pretty quick to download (we haven't actually got our CI up and running yet, so I won't be able to vouch for that personally until we do). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Thu Apr 6 16:32:29 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 6 Apr 2017 13:32:29 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On Wed, Apr 5, 2017 at 2:37 AM, Nick Coghlan wrote: > On 5 April 2017 at 15:59, Nathaniel Smith wrote: >> On Tue, Apr 4, 2017 at 10:10 PM, Nick Coghlan wrote: >>> It may also be worth getting in touch with the CentOS CI folks about >>> those aspects: https://wiki.centos.org/QaWiki/CI/GettingStarted >> >> That page seems to say that for CentOS 7 they only support x86_64? 
> > Sorry, I should have spelled out that chain of logic more clearly: > > - ppc64le machines aren't that easy to come by, especially in public > cloud services > - a build approach based on ppc64le VMs running on x86_64 hosts is > likely to be easier to manage > - that means the desired CI service feature is "run arbitrary VMs", > rather than "bare metal ppc64le systems" > > You can kinda sorta do that to a limited degree in Travis, but our > experience was that the CentOS Vagrant boxes didn't work by default, > so we asked the CentOS CI maintainers what our options might be over > there. > > And as it turns out, not only are we able to get bare metal machines > to run our VMs on (rather than messing about with nested virt > support), but the CentOS boxes are pre-cached on the local network, so > they should be pretty quick to download (we haven't actually got our > CI up and running yet, so I won't be able to vouch for that personally > until we do). I think bare metal access only matters for running x86 with hardware-accelerated virtualization? If we want to emulate a totally different architecture like ppc64le then IIUC qemu does that as a regular user-space program. You definitely can run qemu in this mode on travis-ci, e.g. check out all the virtual architectures that rust builds on: https://travis-ci.org/rust-lang/rust/ So I think travis-ci and centos-ci are equivalent on this axis -- if we want to go the qemu route, then either works, and if we don't, then neither works. For me the big question is whether emulation is actually a good idea. When rust announced their plans I remember seeing some skepticism about whether one can really trust emulated machines for this kind of use case, though I can't find it again now...
but it's definitely true that all the major distributions go to great lengths to use real hardware in their build farms, despite the obvious drawbacks (not just in terms of maintenance, but also the ongoing pain of using tiny little arm and mips machines that take dozens of hours to build things). They know a whole lot more about this than I do so I assume they have *some* reason :-). It might be useful to get in touch with some of the distro's ppc64le wranglers to get their opinion, if anyone knows any of them... -n -- Nathaniel J. Smith -- https://vorpus.org From wes.turner at gmail.com Thu Apr 6 20:34:17 2017 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 6 Apr 2017 19:34:17 -0500 Subject: [Distutils] Which commercial vendor? In-Reply-To: References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: On Thursday, April 6, 2017, Chris Barker wrote: > On Wed, Apr 5, 2017 at 6:41 PM, Nick Coghlan > wrote: > > >> PayPal Engineering put together a decent write-up of their path >> towards adopting that model last year: >> https://www.paypal-engineering.com/2016/09/07/python-packaging-at-paypal/ > > > Thanks for that link. > +1 > > We're a much smaller shop, but have had pretty much the same experience -- > also really good to see them mention miniconda at the end -- I think that's > a much better way to go for most folks than the whole Anaconda pile. > > It's definitely a reasonable way to go for organisational >> infrastructure, but even conda doesn't cover all the potential use >> cases that are out there > > > of course not -- nothing does, but I would add to the contents of that > post: > > The conda-forge project is a Major boon to the conda infrastructure -- > there is now a robust way for the community to expand the available number > of packages (and keep up more recent versions). 
And anyone can take > advantage of that infrastructure for their own (selfish?) needs: > > Chances are, there will be a package or two that you rely on that is not > in conda defaults (maintained by Continuum) or currently in conda-forge. So > you can pip-install those few -- but what if they aren't on PyPi either? or > are hard to compile and install with ugly dependencies? You can contribute > build recipes to conda-forge, and then have it for you, and all your users, > and the rest of the world to access. Much better than hand maintaining > stuff yourself. > Someone still needs to commit to maintaining the conda package; otherwise who knows whether this is the latest stable release? > > My pain point now is still full multi-platform support. conda has package > versions that are platform independent, but it can still be hard to get > everything built in the same version on all platforms, so it does get a > bit ugly. > Docker images are reproducible and archivable: https://github.com/ContinuumIO/docker-images - https://github.com/ContinuumIO/docker-images/blob/master/miniconda3/Dockerfile - https://github.com/ContinuumIO/docker-images/blob/master/anaconda3/Dockerfile https://github.com/jupyter/docker-stacks - https://github.com/jupyter/docker-stacks/blob/master/README.md#visual-overview - https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile - a diff miniconda setup - https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile.ppc64le - ARMv would be cool too (e.g. for raspberry pi) https://github.com/Kaggle/docker-python - https://github.com/Kaggle/docker-python/blob/master/Dockerfile - everything and the kitchen sink What platforms does conda-forge auto-build for? - [x] x86[-64] - [ ] linux-armv7l - https://github.com/conda-forge/conda-forge.github.io/issues/269 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guettliml at thomas-guettler.de Fri Apr 7 01:26:28 2017 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Fri, 7 Apr 2017 07:26:28 +0200 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On 06.04.2017 at 17:22, Nathaniel Smith wrote: > On Apr 6, 2017 7:32 AM, "Thomas Güttler" > wrote: > > Dear Nick and other distutils listeners, > > Nick wrote this about seven months ago: > > http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html > > I love Python and I use it daily. > > On the other hand there are other interesting programming languages out there. > > Why not do "thinking in sets" here and see python just as one item in the list of languages? > > Let's dream: All languages should be supported in the ever best packaging solution of the future. > > What do you think? > > > This is basically what conda attempts to do. It's nice in some ways, but does also have limitations. > > In any case this isn't a very useful thing to post here? For me it was very useful. I was not aware of the paypal post about python packaging. Yes, it was useful. > Distutils and pip and pypi aren't going anywhere, That's new to me. I guess I misunderstood what you said (I am not a native speaker). I understood "There is no progress, and won't be in the future". That's new to me. I saw several new pip versions in the past. I thought there was progress. >.. and the folks here are all at their limit trying to keep them from falling over, so taking up time with these super vague blue sky ideas is a bit rude. What is the problem if "falling over" happens? Is there no easier solution than going at its limit? Why is having blue sky ideas rude? AFAIK the word "rude" means "offensively impolite or bad-mannered." I personally like the tongue spoken at the linux-kernel mailing list (or systemd). 
Yes, the people there are impolite and bad-mannered. People speak you what they think and feel. Why not? I think being polite in tech related discussions slows down progress. Look at my signature. Tell me what's wrong: Hit me with arguments. Regards, Thomas Güttler -- I am looking for feedback for my personal programming guidelines: https://github.com/guettli/programming-guidelines From njs at pobox.com Fri Apr 7 01:55:21 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 6 Apr 2017 22:55:21 -0700 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On Thu, Apr 6, 2017 at 10:26 PM, Thomas Güttler wrote: > On 06.04.2017 at 17:22, Nathaniel Smith wrote: >> On Apr 6, 2017 7:32 AM, "Thomas Güttler" > wrote: >> >> Dear Nick and other distutils listeners, >> >> Nick wrote this about seven months ago: >> >> http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html >> >> I love Python and I use it daily. >> >> On the other hand there are other interesting programming languages out there. >> >> Why not do "thinking in sets" here and see python just as one item in the list of languages? >> >> Let's dream: All languages should be supported in the ever best packaging solution of the future. >> >> What do you think? >> >> >> This is basically what conda attempts to do. It's nice in some ways, but does also have limitations. >> >> In any case this isn't a very useful thing to post here? > > For me it was very useful. I was not aware of the paypal post about python packaging. Yes, it was useful. My point was it's wasting the time of the many many people who read this list, who are trying to move Python packaging forward for the millions of people who use Python. Justifying that by saying your message was useful to *you* is... stunningly self-centered. Did your post make *Python* better? 
If not, maybe think twice next time before posting? >> Distutils and pip and pypi aren't going anywhere, > > That's new to me. I guess I misunderstood what you said (I am not a native speaker). I understood "There is no progress, and won't be in the future". That's new to me. > I saw several new pip versions in the past. I thought there was progress. My apologies for using an unclear idiom. My sentence means: "they are not going to disappear, or be replaced by something radically different". >>.. and the folks here are all at their limit trying to keep them from falling over, so taking up time with these super vague blue sky ideas is a bit rude. > > What is the problem if "falling over" happens? > > Is there no easier solution than going at its limit? > > Why is having blue sky ideas rude? AFAIK the word "rude" means "offensively impolite or bad-mannered." > > I personally like the tongue spoken at the linux-kernel mailing list (or systemd). Yes, the people there are impolite and bad-mannered. > People speak you what they think and feel. Why not? I think being polite in tech related discussions slows down progress. > > Look at my signature. Tell me what's wrong: Hit me with arguments. If you don't care how your words affect others then I'm not really interested in talking to you, except to urge you to reconsider that. -n -- Nathaniel J. Smith -- https://vorpus.org From chris.barker at noaa.gov Fri Apr 7 17:58:46 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 7 Apr 2017 14:58:46 -0700 Subject: [Distutils] Which commercial vendor? In-Reply-To: References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: On Thu, Apr 6, 2017 at 5:34 PM, Wes Turner wrote: > Chances are, there will be a package or two that you rely on that is not >> in conda defaults (maintained by Continuum) or currently in conda-forge. 
So >> you can pip-install those few -- but what if they aren't on PyPi either? or >> are hard to compile and install with ugly dependencies? You can contribute >> build recipes to conda-forge, and then have it for you, and all your users, >> and the rest of the world to access. Much better than hand maintaining >> stuff yourself. >> > > Someone still needs to commit to maintaining the conda package; otherwise > who knows whether this is the latest stable release? > Indeed. And if it's a not-that-widely-used package, then you will have to do that yourself -- but using the conda-forge infrastructure makes that (relatively) easy. In contrast -- who knows whether the package on PyPi is the latest stable release? Hopefully the maintainer is keeping it up, but if not, you're kinda dead in the water. >> My pain point now is still full multi-platform support. conda has package >> versions that are platform independent, but it can still be hard to get >> everything built in the same version on all platforms, so it does get a >> bit ugly. >> > > Docker images are reproducible and archivable: > In a way Docker, as I understand it, is orthogonal to this conversation. And when I talk about "all platforms", I mean running natively on all platforms -- I can't give my Windows users a Linux VM and expect them to know what the heck to do with it. Not that Docker isn't a really useful tool to help address some of these problems... -CHB > What platforms does conda-forge auto-build for? > - [x] x86[-64] > linux-64 win-32 win-64 osx-64 (all Intel) - [ ] linux-armv7l > - https://github.com/conda-forge/conda-forge.github.io/issues/269 > looks like folks are trying, but it's not really there yet -- mostly due to the lack of easy-to-access CI services for armv7l It's an open-source project -- if it's important to someone, it will get done. -CHB -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Fri Apr 7 19:51:08 2017 From: wes.turner at gmail.com (Wes Turner) Date: Fri, 7 Apr 2017 18:51:08 -0500 Subject: [Distutils] Which commercial vendor? In-Reply-To: References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: On Fri, Apr 7, 2017 at 4:58 PM, Chris Barker wrote: > On Thu, Apr 6, 2017 at 5:34 PM, Wes Turner wrote: > > >> Chances are, there will be a package or two that you rely on that is not >>> in conda defaults (maintained by Continuum) or currently in conda-forge. So >>> you can pip-install those few -- but what if they aren't on PyPi either? or >>> are hard to compile and install with ugly dependencies? You can contribute >>> build recipes to conda-forge, and then have it for you, and all your users, >>> and the rest of the world to access. Much better than hand maintaining >>> stuff yourself. >>> >> >> Someone still needs to commit to maintaining the conda package; otherwise >> who knows whether this is the latest stable release? >> > > Indeed. and it it's a not-that-widely-used package, then you will have to > do that yourself -- but using the conda-forge infrastructure makes that > (relatively) easy. In contrast -- who knows whether the package on PyPi is > the latest stable release? Hopefully the maintainer is keeping it up, but > if not, you're kinda dead in the water. > So then there's pulling from a specific source rev: pip install -e git+ssh://git at github.com/pypa/pip at 9.0.1#egg=pip There was a discussion about adding the git/hg/svn/vcs source URI to each package's metadata: - "add "sourceURL" to the metadata 3.0 PEP." 
https://www.mail-archive.com/distutils-sig at python.org/msg25836.html https://www.mail-archive.com/distutils-sig at python.org/msg25833.html - Project-URL - Source-URL (metadata 2.0) AFAIU, the only way to read the package version from the {git, hg, } source repository is to run the setup.py. There's a semver way to specify the vcs revision ("git short SHA") in the package identifier: https://github.com/openstack-dev/pbr/pull/14/commits/5b7a619046eb10ed3fa7bb987be95208faf2fda3 > > >>> My pain point now is still full multi-platform support. conda has >>> package versions that are platform independent, but it can still be hard to >>> get everything built in the same version on all platforms, so it does get >>> a bit ugly. >>> >> >> Docker images are reproducible and archivable: >> > > In a way Docker, as I understand it, is orthogonal to this conversation. > And when I talk about "all platforms", I mean running natively on all > platforms -- I can't give my Windows users a Linux VM and expect them to > know what the heck to do with it. > IIUC, this should work w/ Docker for {Linux, Windows, OSX, }: docker run -i -t continuumio/miniconda3 /bin/bash AFAIU, If you instead wanted to run Windows containers on a Windows host, you'd need Windows Server 2016: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server You can run "provisioner(s)" (shell script, salt states, ansible playbooks) at image build time with e.g. Packer: https://www.packer.io/docs/basics/terminology.html With a configuration management tool like salt or ansible, you can define configuration policies with conditionals for whichever given platform specs (and see it all in one place as "infrastructure as code"). > > Not that Docker isn't a really useful tool to htlep address some of these > problems... > > -CHB > > > >> What platforms does conda-forge auto-build for? 
>> - [x] x86[-64] >> > > linux-64 > win-32 > win-64 > osx-64 > > (all Intel) > > > > - [ ] linux-armv7l >> - https://github.com/conda-forge/conda-forge.github.io/issues/269 >> > > > looks like folks are trying, but it's not really there yet -- mostly due > to the lack of easy-to-access CI services for armv7l > > It's an open-source project -- if it's important to someone, it will get > done. > Raspberry Pi support for conda-forge would probably be really useful for education. IDK what sort of resources would be needed to add Pi2's to the CI build farm? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Apr 7 22:17:25 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 8 Apr 2017 12:17:25 +1000 Subject: [Distutils] Idea: Using Trove classifiers for platform compatibility warnings Message-ID: PyPI already has a reasonably extensive component tagging system in https://pypi.python.org/pypi?%3Aaction=list_classifiers but we don't really *use* it all that much for programmatic purposes. That means the incentives for setting tags correctly are weak, since there isn't much pay-off in the form of tooling-intermediated communication of constraints and testing limitations to end users. What I'm starting to wonder is whether or not it may make sense to start recommending that installation tools emit warnings in the following cases: 1. At least one "Operating System" tag is set, but the tags don't include any that cover the *current* operating system 2. At least one "Programming Language :: Python" tag is set, but the tags don't include any that cover the *current* Python version The "at least one relevant tag is set" pre-requisite would be to avoid emitting false positives for projects that don't provide any platform compatibility guidance at all. 
Instead, the goal would be to eliminate the cases where *incorrect* guidance is currently provided - no guidance at all would be fine, correct guidance would be fine, but incorrect guidance would result in install time warnings on nominally unsupported platforms. Checking for applicable tags at run time would then be a matter of defining two things: - for a given platform, figure out the list of applicable tags that indicate compatibility - for a given platform, figure out the list of applicable tags that indicate *in*compatibility I'm bringing this idea up now as it came up twice for me this week: - in my current work project, where even though the project itself is pure Python, we're manipulating Linux containers and local VM hypervisors, so I put a "Operating System :: POSIX :: Linux" tag on it - in a discussion of using absolute paths in "data_files" where it can be an entirely reasonable thing to do, as long as you're OK with making the affected library or application platform specific: https://github.com/pypa/python-packaging-user-guide/pull/212#issuecomment-292686566 It's also quite applicable to some of the enhancements Daniel would like to make to the wheel format to support more of the GNU autotools paths (https://www.gnu.org/prep/standards/html_node/Directory-Variables.html), which become a lot more palatable if there's a defined machine readable way of saying "Hey, this project assumes POSIX directory layout conventions" (which would be to set "Operating System :: POSIX" or one of its more specific variants, given the definitions above). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Apr 8 01:41:33 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 8 Apr 2017 15:41:33 +1000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? 
In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On 7 April 2017 at 06:32, Nathaniel Smith wrote: > On Wed, Apr 5, 2017 at 2:37 AM, Nick Coghlan wrote: >> And as it turns out, not only are we able to get bare metal machines >> to run our VMs on (rather than messing about with nested virt >> support), but the CentOS boxes are pre-cached on the local network, so >> they should be pretty quick to download (we haven't actually got our >> CI up and running yet, so I won't be able to vouch for that personally >> until we do). > > I think bare metal access only matters for running x86 with hardware > accelerated virtualization? If we want to emulate a totally different > architecture like ppc64le then IIUC qemu does that as a regular > user-space program. You definitely can run qemu in this mode on > travis-ci, e.g. check out all the virtual architectures that rust > builds on: > > https://travis-ci.org/rust-lang/rust/ > > So I think travis-ci and centos-ci are equivalent on this axis -- if we > want to go the qemu route, then either works, and if we don't, then > neither works? My understanding of the problem is that Travis can't actually run the 64-bit CentOS VMs fast enough to get build jobs done in a reasonable time period (due to the resource constraints imposed on the test VMs), whereas the CentOS systems have more resources available to them (since they have the whole machine to themselves). "Try it and see" would be the best way to confirm whether or not qemu-ppc64le-on-Travis works, though. > For me the big question is whether emulation is actually a good idea. > When rust announced their plans I remember seeing some skepticism > about whether one can really trust emulated machines for this kind of > use case, though I can't find it again now... 
but it's definitely true > that all the major distributions go to great lengths to use real > hardware in their build farms, despite the obvious drawbacks (not just > in terms of maintenance, but also the ongoing pain of using tiny > little arm and mips machines that take dozens of hours to build > things). They know a whole lot more about this than I do so I assume > they have *some* reason :-). It's one thing using emulation for a service being provided for free in a community project, something else entirely to be doing it for commercial or commercially-sponsored software. One particular problem with it is that any use of profile-guided optimisation would be optimising for the performance characteristics of the emulator, not those of the real hardware (that's also a problem for cross-compilation: as far as I am aware, you *can't* readily use profile guided optimisation in the cross-compilation case). > It might be useful to get in touch with some of the distro's ppc64le > wranglers to get their opinion, if anyone knows any of them... FWIW, I believe some of the CentOS CI wranglers are also the CentOS build system wranglers :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Apr 8 01:55:06 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 8 Apr 2017 15:55:06 +1000 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On 7 April 2017 at 15:26, Thomas Güttler wrote: > Why is having blue sky ideas rude? AFAIK the word "rude" means "offensively impolite or bad-mannered." Ideas are easy to come by - we have no shortage of them. What's difficult to come by is the time and energy needed to work through the complexities of turning ideas for improvement into practical enhancements that can be rolled out and incrementally adopted by the community. 
Hearing from folks that say "I'm working on a project to improve X, can you give me some advice?" is generally wonderful, but "Someone (else) should totally build this thing that I wish existed" is typically just noise, and "I have never personally done anything for any of you, but I want you all to imagine you work for me and have to work on the things I care about" is extraordinarily self-entitled behaviour. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Sat Apr 8 02:44:46 2017 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 7 Apr 2017 23:44:46 -0700 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: On Fri, Apr 7, 2017 at 10:41 PM, Nick Coghlan wrote: > On 7 April 2017 at 06:32, Nathaniel Smith wrote: >> On Wed, Apr 5, 2017 at 2:37 AM, Nick Coghlan wrote: >>> And as it turns out, not only are we able to get bare metal machines >>> to run our VMs on (rather than messing about with nested virt >>> support), but the CentOS boxes are pre-cached on the local network, so >>> they should be pretty quick to download (we haven't actually got our >>> CI up and running yet, so I won't be able to vouch for that personally >>> until we do). >> >> I think bare metal access only matters for running x86 with hardware >> accelerated virtualization? If we want to emulate a totally different >> architecture like ppc64le then IIUC qemu does that as a regular >> user-space program. You definitely can run qemu in this mode on >> travis-ci, e.g. check out all the virtual architectures that rust >> builds on: >> >> https://travis-ci.org/rust-lang/rust/ >> >> So I think travis-ci and centos-ci are equivalent on this axis -- if we >> want to go the qemu route, then either works, and if we don't, then >> neither works? 
> > My understanding of the problem is that Travis can't actually run the > 64-bit CentOS VMs fast enough to get build jobs done in a reasonable > time period (due to the resource constraints imposed on the test VMs), > whereas the CentOS systems have more resources available to them > (since they have the whole machine to themselves). > > "Try it and see" would be the best way to confirm whether or not > qemu-ppc64le-on-Travis works, though. Ah, good point -- Travis imposes a 1 hour time limit on builds, and for manylinux1 we're at about half that already with all the supported versions of Python... though some of that is compensating for centos 5 brokenness (building our own openssl) and some of it is Python versions we could probably drop at this point (2.6 at least, since there will never be a version of pip that supports both python 2.6 and manylinux2+!). >> For me the big question is whether emulation is actually a good idea. >> When rust announced their plans I remember seeing some skepticism >> about whether one can really trust emulated machines for this kind of >> use case, though I can't find it again now... but it's definitely true >> that all the major distributions go to great lengths to use real >> hardware in their build farms, despite the obvious drawbacks (not just >> in terms of maintenance, but also the ongoing pain of using tiny >> little arm and mips machines that take dozens of hours to build >> things). They know a whole lot more about this than I do so I assume >> they have *some* reason :-). > > It's one thing using emulation for a service being provided for free > in a community project, something else entirely to be doing it for > commercial or commercially-sponsored software. Oh sure. I actually don't know which case this is -- does anyone care about ppc64le who isn't working with some big IBM support contract? We've had at least one @ibm address in one of these threads... 
> One particular problem with it is that any use of profile-guided > optimisation would be optimising for the performance characteristics > of the emulator, not those of the real hardware (that's also a problem > for cross-compilation: as far as I am aware, you *can't* readily use > profile guided optimisation in the cross-compilation case). I thought PGO uses profiling to observe branches and variable values, which emulation shouldn't affect? Fortunately this part really doesn't matter much to us anyway though -- nothing in the docker image gets shipped to end-users. The important thing would be how the package distributors are running the image to do *their* builds, and that's not our problem. -n -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Sat Apr 8 05:29:24 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 8 Apr 2017 10:29:24 +0100 Subject: [Distutils] Idea: Using Trove classifiers for platform compatibility warnings In-Reply-To: References: Message-ID: On 8 April 2017 at 03:17, Nick Coghlan wrote: > PyPI already has a reasonably extensive component tagging system in > https://pypi.python.org/pypi?%3Aaction=list_classifiers but we don't > really *use* it all that much for programmatic purposes. > > That means the incentives for setting tags correctly are weak, since > there isn't much pay-off in the form of tooling-intermediated > communication of constraints and testing limitations to end users. > > What I'm starting to wonder is whether or not it may make sense to > start recommending that installation tools emit warnings in the > following cases: > > 1. At least one "Operating System" tag is set, but the tags don't > include any that cover the *current* operating system > 2. 
At least one "Programming Language :: Python" tag is set, but the > tags don't include any that cover the *current* Python version > > The "at least one relevant tag is set" pre-requisite would be to avoid > emitting false positives for projects that don't provide any platform > compatibility guidance at all. I agree that there's little incentive at the moment to get classifiers right. So my concern with this proposal would be that it issues the warnings to end users, who don't have any direct means of resolving the issue (they can of course raise bugs on the projects they find with incorrect classifiers). Furthermore, there's a potential risk that projects might see classifiers as implying a level of support they are not happy with, and so are reluctant to add classifiers "just" to suppress the warning. But without data, the above is just FUD, so I'd suggest we do some analysis. I did some spot checks, and it seems that projects might typically not set the OS classifier, which alleviates my biggest concern (projects stating "POSIX" because that's what they develop on, when they actually work fine on Windows) - but proper data would be better. Two things I'd like to see: 1. A breakdown of how many projects actually use the various OS and Language classifiers. 2. Where projects ship wheels, do the wheels they ship match the classifiers they declare? That should give a good idea of the immediate impact of this proposal. (There's not much we can say about source-only distributions, but that's OK). The data needed to answer those questions should be available - the only way I have of getting it is via the JSON interface to PyPI, so I can write a script to collect the information, but it might be some time before I can collate it. Is this something the BigQuery data we have (which I haven't even looked at myself) could answer? 
Paul From matthew.brett at gmail.com Sat Apr 8 06:05:58 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 8 Apr 2017 11:05:58 +0100 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: Hi, On Sat, Apr 8, 2017 at 6:55 AM, Nick Coghlan wrote: > On 7 April 2017 at 15:26, Thomas Güttler wrote: >> Why is having blue sky ideas rude? AFAIK the word "rude" means "offensively impolite or bad-mannered." > > Ideas are easy to come by - we have no shortage of them. Bad ideas are surely easy to come by, but, at least in my experience, it often takes a bit of discussion to work out which ones these are. > Hearing from folks that say "I'm working on a project to improve X, > can you give me some advice?" is generally wonderful, but "Someone > (else) should totally build this thing that I wish existed" is > typically just noise, and > "I have never personally done anything for any of you, but I want you > all to imagine you work for me and have to work on the things I care > about" is extraordinarily self-entitled behaviour. Wishing to avoid this criticism I humbly submit the pip / manylinux / macOS packaging work I do for the scientific Python stack. That done, as you imply, no-one can force us volunteers to do any particular piece of work, so, although self-entitlement may be unattractive, it's not particularly threatening. Of course, sometimes, people who aren't doing anything at present, suggest ideas that are useful, and with the right encouragement, actually do start to help. I've certainly seen that happen with my other developer hats on. 
Cheers, Matthew From ncoghlan at gmail.com Sat Apr 8 22:27:28 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 9 Apr 2017 12:27:28 +1000 Subject: [Distutils] Idea: Using Trove classifiers for platform compatibility warnings In-Reply-To: References: Message-ID: On 8 April 2017 at 19:29, Paul Moore wrote: > On 8 April 2017 at 03:17, Nick Coghlan wrote: >> The "at least one relevant tag is set" pre-requisite would be to avoid >> emitting false positives for projects that don't provide any platform >> compatibility guidance at all. > > I agree that there's little incentive at the moment to get classifiers > right. I'll also explicitly note that I think this idea counts as a "nice to have" - in cases where there are real compatibility problems, those are going to show up at runtime anyway, so what this idea really provides is a debugging hint that says "Hey, you know that weird behaviour you're seeing in ? How sure are you that all of your dependencies actually support that configuration?" That said, if folks agree that this idea at least seems plausible, one outcome is that I would abandon the draft "Supported Environments" section for the "python.constraints" extension in PEP 459: https://www.python.org/dev/peps/pep-0459/#supported-environments While programmatic expressions like that are handy for publishers, they don't convey the difference between "We expect future Python versions to work" and "We have tested this particular Python version, and it does appear to work", and they're also fairly hostile to automated data analysis, since you need to evaluate expressions in a mini-language rather than just filtering on an appropriately defined set of metadata tags. 
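The contrast Nick draws - expressions that must be evaluated per environment versus tags that can be filtered as plain set membership - can be illustrated with a toy comparison (the helper below is a deliberately simplified stand-in for a real PEP 508 marker evaluator, not the actual mini-language):

```python
# Illustrative only: why constraint expressions are harder to analyse
# in bulk than plain metadata tags.

def supports_at_least(env_version, required):
    """Toy stand-in for evaluating one marker: python_version >= required."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(env_version) >= as_tuple(required)

# Expression style: needs evaluation against each candidate environment.
expression_ok = supports_at_least("3.6", "3.5")

# Tag style: a simple membership test, trivial to filter on at scale.
tags = {"Programming Language :: Python :: 3.6"}
tag_ok = "Programming Language :: Python :: 3.6" in tags
```

Note the tuple comparison: naive string comparison would wrongly rank "3.10" below "3.5", which is exactly the kind of subtlety an expression evaluator has to get right, and a tag filter never has to think about.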
When it comes to the "Programming Language :: Python" classifiers though, we already give folks quite a bit of flexibility there: - no tag or the generic unversioned tag to say "No guidance provided" - the "PL :: Python :: X" tags to say "definitely supports Python X" without saying which X.Y versions - the "PL :: Python :: X.Y" tags to say "definitely supports Python X.Y" And that flexibility provides an opportunity to let publishers make a trade-off between precision of information provided (down to just major version, or specifying both major and minor version) and the level of maintenance effort (with the more precise approach meaning always having to make a new release to update the compatibility metadata for new Python feature releases, even when the existing code works without any changes, but also meaning you get a way to affirmatively say "Yes, we tested this with the new version, and it still works"). We also have the "PL :: Python :: X :: Only" tags, but I think that may be a misguided approach and we'd be better off with a general notion of tag negation: "Not :: PL :: Python :: X" (so you'd add a "Not :: Programming Language :: Python :: 2" tag instead of adding a "Programming Language :: Python :: 3 :: Only" tag) > So my concern with this proposal would be that it issues the > warnings to end users, who don't have any direct means of resolving > the issue (they can of course raise bugs on the projects they find > with incorrect classifiers). We need to be clear about the kinds of end users we're considering here, though: folks using pip (or similar) tools to do their own install-time software integration, *not* folks consuming pre-built and pre-integrated components through conda/apt/dnf/msi/etc. 
In the latter cases, the redistributor is taking on the task of making sure their particular combinations work well together, but when we use pip (et al) directly, that task falls directly on us as users, and it's useful when debugging to know whether what we're doing is a combination that upstream has already thought about (and is hopefully covering in their CI setup if they have one), or whether we may be doing something unusual that most other people haven't tried yet. While this is also useful info for redistributors to know, I was thinking in PyPI publisher & pip user terms when the idea occurred to me. The concept is based at least in part on my experience as a World of Warcraft player, where there are two main pieces to their compatibility handling model for UI Add-ons:

1. Add-on authors tag the add-on itself with the most recent version of the client API that they've tested it against
2. To avoid having your UI break completely every time the client API changes, the main game client has a simple "Load Out of Date Addons" check box to let you opt-in to continue to use add-ons that may not have been updated for the latest changes to the game's runtime API (while also clearly saying "Don't complain to Blizzard about any UI bugs you encounter in this unsupported configuration")

Assuming we do pursue this idea (which is still a big assumption at this point, due to the "potentially nice to have for debugging in some situations" motivation being a fairly weak one for volunteer efforts), I think a sensible way to go would be to have the classifier checking be opt-in initially (e.g. through a "--check-classifiers" option), and only consider making it the default behaviour if having it available as a debugging option seems insufficient. > Furthermore, there's a potential risk > that projects might see classifiers as implying a level of support > they are not happy with, and so are reluctant to add classifiers > "just" to suppress the warning.
From a client UX perspective, something like the approach used for the `--no-binary` option would seem reasonable: https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-no-binary That is:

* `--check-classifiers :none:` to disable checks entirely
* `--check-classifiers :all:` to check everything
* `--check-classifiers a,b,c,d` to check key packages you really care about, but ignore others

> But without data, the above is just FUD, so I'd suggest we do some > analysis. I did some spot checks, and it seems that projects might > typically not set the OS classifier, which alleviates my biggest > concern (projects stating "POSIX" because that's what they develop on, > when they actually work fine on Windows) - but proper data would be > better. Two things I'd like to see: > > 1. A breakdown of how many projects actually use the various OS and > Language classifiers. > 2. Where projects ship wheels, do the wheels they ship match the > classifiers they declare? > > That should give a good idea of the immediate impact of this proposal. I think the other thing that research would provide is guidance on whether it makes more sense to create *new* tags specifically for compatibility testing reports rather than attempting to define new semantics for existing tags. The inference from existing tags would then solely be a migration step where clients and services could synthesise the new tags based on old metadata (including things like `Requires-Python:`). If we went down that path, it might look like this:

1. Two new classifier namespaces specifically for compatibility assertions: "Compatible" and "Incompatible"
2.
Within each, start by defining two subnamespaces based on existing classifiers:

Compatible :: Python :: [as for `Programming Language :: Python ::`]
Compatible :: OS :: [as for `Operating System :: `]
Incompatible :: Python :: [as for `Programming Language :: Python ::`]
Incompatible :: OS :: [as for `Operating System :: `]

Within the "Compatible" namespace the ` :: Only` suffix would be a modifier to strengthen the "Compatible with this" assertion into an "almost certainly not compatible with any of the other options in this category" assertion. One nice aspect of that model is that it would be readily extensible to other dimensions of compatibility, like "Implementation" (so projects that know they're tightly coupled to the C API for example can add "Compatible :: Implementation :: CPython"). The downside is that it would leave the older "for information only" classifiers as semantically ambiguous and we'd be stuck permanently with two very similar sets of classifiers. > (There's not much we can say about source-only distributions, but > that's OK). The data needed to answer those questions should be > available - the only way I have of getting it is via the JSON > interface to PyPI, so I can write a script to collect the information, > but it might be some time before I can collate it. Is this something > the BigQuery data we have (which I haven't even looked at myself) > could answer? Back when Donald and I were working on PEP 440 and ensuring the normalization scheme covered the vast majority of existing projects, we had to retrieve all the version info over XML-RPC: https://github.com/pypa/packaging/blob/master/tasks/check.py I'm not aware of any subsequent changes on that front, so I don't believe we currently push the PKG-INFO registration metadata into Big Query. However, I do believe we *could* (if Google are amenable), and if we did, it would make these kinds of research questions much easier to answer.
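A client consuming the tag-negation idea floated earlier in this thread ("Not :: Programming Language :: Python :: 2" instead of a ":: Only" suffix) might interpret classifiers along these lines - entirely hypothetical, since none of these negated classifiers exist on PyPI today:

```python
# Hypothetical sketch of the proposed negation semantics: an explicit
# "Not ::" tag denies support, a plain tag asserts it, and no tag at
# all means the project offers no guidance.

def guidance(classifiers, tag):
    """Return False (negated), True (asserted), or None (no guidance)."""
    if "Not :: " + tag in classifiers:
        return False
    if tag in classifiers:
        return True
    return None

cs = [
    "Programming Language :: Python :: 3",
    "Not :: Programming Language :: Python :: 2",
]
```

Under these semantics a checker would only ever warn on an explicit False, leaving the None case silent - matching the "at least one relevant tag is set" pre-requisite discussed earlier.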
Donald, any feedback on how hard it would be to get the current PyPI project metadata into a queryable format in BQ? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Apr 8 23:12:03 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 9 Apr 2017 13:12:03 +1000 Subject: [Distutils] The Python Packaging Ecosystem (of Nick) Support for other programming languages In-Reply-To: References: <0d420464-1a9a-9a83-f896-a5b76f1af84d@thomas-guettler.de> Message-ID: On 8 April 2017 at 20:05, Matthew Brett wrote: > Hi, > > On Sat, Apr 8, 2017 at 6:55 AM, Nick Coghlan wrote: >> On 7 April 2017 at 15:26, Thomas Güttler wrote: >>> Why is having blue sky ideas rude? AFAIK the word "rude" means "offensively impolite or bad-mannered." >> >> Ideas are easy to come by - we have no shortage of them. > > Bad ideas are surely easy to come by, but, at least in my experience, > it often takes a bit of discussion to work out which ones these are. Indeed, but it's an entirely different matter to post suggestions for specific, concrete, ideas for changes to the Python plugin management ecosystem than it is to post: """ Why not do "thinking in sets" here and see python just as one item in the list of a languages? Let's dream: All languages should be supported in the ever best packaging solution of the future. """ While citing a post that specifically explains why such a project would be out of scope for PyPA & distutils-sig, since we maintain and provide plugin management tools and interoperability standards for Python runtimes, not general purpose infrastructure management for arbitrary software components. It isn't like we don't already know that problem exists, nor that we consider it unimportant in the larger scheme of things - it's just not a problem we're attempting to solve or help with *here*. >> Hearing from folks that say "I'm working on a project to improve X, >> can you give me some advice?"
is generally wonderful, but "Someone >> (else) should totally build this thing that I wish existed" is >> typically just noise, and >> "I have never personally done anything for any of you, but I want you >> all to imagine you work for me and have to work on the things I care >> about" is extraordinarily self-entitled behaviour. > > Wishing to avoid this criticism I humbly submit the pip / manylinux / > macOS packaging work I do for the scientific Python stack. Thank you for that work! > That done, as you imply, no-one can force us volunteers to do any > particular piece of work, so, although self-entitlement may be > unattractive, it's not particularly threatening. Yeah, it only becomes irritating when it's persistent - most folks are able to recognise that they should drop a topic in a given venue when nobody else expresses any interest in it, and it's a rare few that will still attempt to persist even after they're explicitly told to drop it as being off-topic. > Of course, > sometimes, people who aren't doing anything at present, suggest ideas > that are useful, and with the right encouragement, actually do start > to help. I've certainly seen that happen with my other developer hats > on. Likewise (and in terms of my own contributions to PyPA/distutils-sig, they've been far more extensive in the form of mentoring and support for the folks actually doing the work than they have been in terms of code or documentation), and it's why the first reaction to off-topic posts should always be to default to coaching folks on the purpose of the channel. It's only in light of persistent attempts to reshape the channel to a different purpose without first actively contributing to helping the group to achieve its existing purpose that problems really start to arise (as in this thread). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guettliml at thomas-guettler.de Mon Apr 10 04:04:23 2017 From: guettliml at thomas-guettler.de (Thomas Güttler) Date: Mon, 10 Apr 2017 10:04:23 +0200 Subject: [Distutils] Which commercial vendor? In-Reply-To: References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> Message-ID: <601e3bba-eaf1-51a9-9a64-839bde085016@thomas-guettler.de> On 08.04.2017 at 01:51, Wes Turner wrote: > AFAIU, the only way to read the package version from the {git, hg, } source repository is to run the setup.py. I see. This is the only way at the moment. Let's look back. How was this in the past? Maybe five years ago? Regards, Thomas Güttler -- Thomas Guettler http://www.thomas-guettler.de/ From ben+python at benfinney.id.au Mon Apr 10 05:01:50 2017 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 10 Apr 2017 19:01:50 +1000 Subject: [Distutils] Which commercial vendor? References: <07e85472-5e08-32ee-3884-af0e327ac466@thomas-guettler.de> <2ea413e9-9fce-9125-897d-caef87a51dfd@thomas-guettler.de> <-4712602602821257844@unknownmsgid> <601e3bba-eaf1-51a9-9a64-839bde085016@thomas-guettler.de> Message-ID: <8560icd4ap.fsf@benfinney.id.au> Thomas Güttler writes: > Let's look back. How was this in the past? Maybe five years ago? That's a very vague question. What kind of answer do you want? Is it one you have an answer for already; and if so, what is the point of your question here? I don't doubt that you *have* a point. What I doubt is the value to others here of asking something that demands a lot of work to even find out what you mean. -- \ "[T]he question of whether machines can think … is about as | `\ relevant as the question of whether submarines can swim." | _o__) —Edsger W.
Dijkstra | Ben Finney From lele at metapensiero.it Mon Apr 10 19:09:40 2017 From: lele at metapensiero.it (Lele Gaifax) Date: Tue, 11 Apr 2017 01:09:40 +0200 Subject: [Distutils] PyPI abuse Message-ID: <871sszyi4r.fsf@nautilus> Hi all, I know it's been debated here whether there should be some kind of filtering on uploaded packages on PyPI, but today someone, either an automated tool or a silly guy, started to upload dozens of "Xxx 0.1.0" where "Xxx" is some "surname", here is the latest variant: https://pypi.python.org/pypi/Lykov/0.1.0 Is there something that can/should be done to stop it? Thank you, ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From tritium-list at sdamon.com Tue Apr 11 04:41:12 2017 From: tritium-list at sdamon.com (tritium-list at sdamon.com) Date: Tue, 11 Apr 2017 04:41:12 -0400 Subject: [Distutils] PyPI abuse In-Reply-To: <871sszyi4r.fsf@nautilus> References: <871sszyi4r.fsf@nautilus> Message-ID: <009701d2b29f$606dabb0$21490310$@hotmail.com> Playing devil's advocate here, do we put a value judgement on the content of a module on pypi, even if that content is limited to a single function that just prints? What does that mean for something like a 'metapackage' (a package on pypi that has no content, and exists only to install other modules - like a project that has been modularized into other packages on pypi)? I think this should be handled on a case by case basis, where someone else comes along wanting to use a name currently being used by one of these obvious placeholders. That said, this looks sketchy, and I would not be shocked to find these names being held hostage on some auction site somewhere. If that is the case, burninate them.
> -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig-bounces+tritium- > list=sdamon.com at python.org] On Behalf Of Lele Gaifax > Sent: Monday, April 10, 2017 7:10 PM > To: Distutils-Sig at Python.Org > Subject: [Distutils] PyPI abuse > > Hi all, > > I know it's been debated here whether there should be some kind of > filtering > on uploaded packages on PyPI, but today someone, either an automated > tool or a > silly guy, started to upload dozens of "Xxx 0.1.0" where "Xxx" is some > "surname", here is latest variant: https://pypi.python.org/pypi/Lykov/0.1.0 > > Is there something that can/should be done to stop it? > > Thank you, > ciao, lele. > -- > nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri > real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. > lele at metapensiero.it | -- Fortunato Depero, 1929. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From prometheus235 at gmail.com Tue Apr 11 09:37:16 2017 From: prometheus235 at gmail.com (Nick Timkovich) Date: Tue, 11 Apr 2017 09:37:16 -0400 Subject: [Distutils] PyPI abuse In-Reply-To: <009701d2b29f$606dabb0$21490310$@hotmail.com> References: <871sszyi4r.fsf@nautilus> <009701d2b29f$606dabb0$21490310$@hotmail.com> Message-ID: Other devil's advocate: having *someone* park them, even if they're not the trademark holder, might be useful. If someone wants to go over what should be a relatively low bar to usurp the "holder", great. I'm thinking of the apple/android/angular/osx/ubuntu crop. On Tue, Apr 11, 2017 at 4:41 AM, wrote: > Playing devil's advocate here, do we put a value judgement on the content > of a module on pypi, even if that content is limited to a single function > that just prints?
What does that mean for something like a 'metapackage' > (a package on pypi that has no content, and exists only to install other > modules - like a project that has been modularized into other packages on > pypi)? I think this should be handled on a case by case basis, where > someone else comes along wanting to use a name currently being used by one > of these obvious placeholders. > > That said, this looks sketchy, and I would not be shocked to find these > names being held hostage on some auction site somewhere. If that is the > case, burninate them. > > > -----Original Message----- > > From: Distutils-SIG [mailto:distutils-sig-bounces+tritium- > > list=sdamon.com at python.org] On Behalf Of Lele Gaifax > > Sent: Monday, April 10, 2017 7:10 PM > > To: Distutils-Sig at Python.Org > > Subject: [Distutils] PyPI abuse > > > > Hi all, > > > > I know it's been debated here whether there should be some kind of > > filtering > > on uploaded packages on PyPI, but today someone, either an automated > > tool or a > > silly guy, started to upload dozens of "Xxx 0.1.0" where "Xxx" is some > > "surname", here is latest variant: https://pypi.python.org/pypi/Lykov/0.1.0 > > > > Is there something that can/should be done to stop it? > > > > Thank you, > > ciao, lele. > > -- > > nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri > > real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. > > lele at metapensiero.it | -- Fortunato Depero, 1929. > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From setuptools at bugs.python.org Wed Apr 12 06:36:22 2017 From: setuptools at bugs.python.org (Laurent Senta) Date: Wed, 12 Apr 2017 10:36:22 +0000 Subject: [Distutils] [issue165] setuptools doesn't use venv on macOs Message-ID: <1491993382.15.0.569243210192.issue165@psf.upfronthosting.co.za> New submission from Laurent Senta: Hi, on macOS Sierra (got the issue for at least 2 versions), setuptools appears to be producing commands with the wrong python env when it generates shebangs. I use virtualenvs with python3 (brew install python3); when I pip install commands, they end up using the /Cellar/ python instead of the venv. Here's an example of a session showing the incorrect shebangs:

$ mkvirtualenv -p python3 bugreport
Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6'
New python executable in /Users/laurent/.virtualenvs/bugreport/bin/python3.6
Also creating executable in /Users/laurent/.virtualenvs/bugreport/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating /Users/laurent/.virtualenvs/bugreport/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/laurent/.virtualenvs/bugreport/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/laurent/.virtualenvs/bugreport/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/laurent/.virtualenvs/bugreport/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/laurent/.virtualenvs/bugreport/bin/get_env_details
(bugreport)
# pip is correct
$ which pip
/Users/laurent/.virtualenvs/bugreport/bin/pip
$ head -n1 `which pip`
#!/Users/laurent/.virtualenvs/bugreport/bin/python3.6
# pip install django pytest ipython...
$ which django-admin
/Users/laurent/.virtualenvs/bugreport/bin/django-admin
$ head -n1 `which django-admin`
#!/usr/local/Cellar/python3/3.6.1/bin/python3.6
$ which pytest
/Users/laurent/.virtualenvs/bugreport/bin/pytest
$ head -n1 `which pytest`
#!/usr/local/Cellar/python3/3.6.1/bin/python3.6
$ which ipython
/Users/laurent/.virtualenvs/bugreport/bin/ipython
$ head -n1 `which ipython`
#!/usr/local/Cellar/python3/3.6.1/bin/python3.6

---------- messages: 787 nosy: lsenta priority: bug status: unread title: setuptools doesn't use venv on macOs _______________________________________________ Setuptools tracker _______________________________________________ From geoffspear at gmail.com Wed Apr 12 07:58:59 2017 From: geoffspear at gmail.com (Geoffrey Spear) Date: Wed, 12 Apr 2017 11:58:59 +0000 Subject: [Distutils] [issue165] setuptools doesn't use venv on macOs In-Reply-To: <1491993382.15.0.569243210192.issue165@psf.upfronthosting.co.za> References: <1491993382.15.0.569243210192.issue165@psf.upfronthosting.co.za> Message-ID: On Wed, Apr 12, 2017 at 6:36 AM Laurent Senta wrote: > > New submission from Laurent Senta: > > [...] I notice that while this bug tracker does display a message that it's no longer the correct bug tracker for setuptools, it also includes a link to bitbucket, which is also not the correct tracker and which gives an access denied message (presumably to anyone who's not a member of the bitbucket pypa org?) Anyone have access to change the link on bpo to github? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruno.rosa at eldorado.org.br Thu Apr 13 12:47:21 2017 From: bruno.rosa at eldorado.org.br (Bruno Alexandre Rosa) Date: Thu, 13 Apr 2017 16:47:21 +0000 Subject: [Distutils] CentOS5 is EOL, impact on manylinux1? In-Reply-To: References: <3EB75D0B-0D3C-4EDE-BC8F-9EEEC636DF76@stufft.io> <2fc1d98fb27c45a9b6123ea937a4dd48@serv030.corp.eldorado.org.br> Message-ID: <1c712e6f8fbb42559038b3bdf3a3baed@serv031.corp.eldorado.org.br> On build time: I think it's highly likely that a build on qemu will surpass the 1-hour limit (IIRC slowdowns of emulation were around 4x).
On defining manylinux3: so, right now, should we start by working on the items in the TODO list Nathaniel posted earlier? Regards, Bruno Rosa -----Original Message----- From: Nathaniel Smith [mailto:njs at pobox.com] Sent: Saturday, 8 April 2017 03:45 To: Nick Coghlan Cc: Bruno Alexandre Rosa ; distutils sig Subject: Re: [Distutils] CentOS5 is EOL, impact on manylinux1? On Fri, Apr 7, 2017 at 10:41 PM, Nick Coghlan wrote: > On 7 April 2017 at 06:32, Nathaniel Smith wrote: >> On Wed, Apr 5, 2017 at 2:37 AM, Nick Coghlan wrote: >>> And as it turns out, not only are we able to get bare metal machines >>> to run our VMs on (rather than messing about with nested virt >>> support), but the CentOS boxes are pre-cached on the local network, >>> so they should be pretty quick to download (we haven't actually got >>> our CI up and running yet, so I won't be able to vouch for that >>> personally until we do). >> >> I think bare metal access only matters for running x86 with hardware >> accelerated virtualization? If we want to emulate a totally different >> architecture like ppc64le then IIUC qemu does that as a regular >> user-space program. You definitely can run qemu in this mode on >> travis-ci, e.g. check out all the virtual architectures that rust >> builds on: >> >> https://travis-ci.org/rust-lang/rust/ >> >> So I think travis-ci and centos-ci are equivalent on this axis -- if >> we want to go the qemu route, then either works, and if we don't, >> then neither works? > > My understanding of the problem is that Travis can't actually run the > 64-bit CentOS VMs fast enough to get build jobs done in a reasonable > time period (due to the resource constraints imposed on the test VMs), > whereas the CentOS systems have more resources available to them > (since they have the whole machine to themselves). > > "Try it and see" would be the best way to confirm whether or not > qemu-ppc64le-on-Travis works, though. Ah, good point --
Travis imposes a 1 hour time limit on builds, and for manylinux1 we're at about half that already with all the supported versions of Python... though some of that is compensating for centos 5 brokenness (building our own openssl) and some of it is Python versions we could probably drop at this point (2.6 at least, since there will never be a version of pip that supports both python 2.6 and manylinux2+!). >> For me the big question is whether emulation is actually a good idea. >> When rust announced their plans I remember seeing some skepticism >> about whether one can really trust emulated machines for this kind of >> use case, though I can't find it again now... but it's definitely >> true that all the major distributions go to great lengths to use real >> hardware in their build farms, despite the obvious drawbacks (not >> just in terms of maintenance, but also the ongoing pain of using tiny >> little arm and mips machines that take dozens of hours to build >> things). They know a whole lot more about this than I do so I assume >> they have *some* reason :-). > > It's one thing using emulation for a service being provided for free > in a community project, something else entirely to be doing it for > commercial or commercially-sponsored software. Oh sure. I actually don't know which case this is -- does anyone care about ppc64le who isn't working with some big IBM support contract? We've had at least one @ibm address in one of these threads... > One particular problem with it is that any use of profile-guided > optimisation would be optimising for the performance characteristics > of the emulator, not those of the real hardware (that's also a problem > for cross-compilation: as far as I am aware, you *can't* readily use > profile guided optimisation in the cross-compilation case). I thought PGO uses profiling to observe branches and variable values, which emulation shouldn't affect?
Fortunately this part really doesn't matter much to us anyway though -- nothing in the docker image gets shipped to end-users. The important thing would be how the package distributors are running the image to do *their* builds, and that's not our problem. -n -- Nathaniel J. Smith -- https://vorpus.org From ja.geb at me.com Fri Apr 21 10:27:45 2017 From: ja.geb at me.com (Jannis Gebauer) Date: Fri, 21 Apr 2017 16:27:45 +0200 Subject: [Distutils] The sad and insecure state of commercial private package indexes Message-ID: I did some research on commercial private package indexes, namely Gemfury and packagecloud. Both of them recommend using `--extra-index-url` as a parameter to point to their own index servers hosting the private package. This is blatantly insecure. Using `--extra-index-url` tells pip to use the server as an _extra_ index url (huge surprise). This basically means that, during pip install, PyPI and the private server share the same namespace. Pip queries both servers for available releases for a given package. On unpinned packages, the server with the latest release seems to win. This means that if I'm using one of these private package indexes, an attacker is able to run arbitrary Python code (through setup.py during installation) simply by guessing my private package names and uploading them to PyPI. I've contacted both Gemfury and packagecloud. Gemfury didn't respond. Packagecloud basically said works as intended, wontfix. They could, of course, fix this very easily by running their own PyPI mirrors. I couldn't care less about these companies, but I care about Python packaging in general. I talked to a couple of Python developers regarding this. All of them use pip and PyPI regularly but have no idea about the internals. This was a huge surprise to them. My problem with this is that PyPI and pip will look bad if this is ever going to be abused. What are your thoughts on this? --
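The core of the problem described above is that `--extra-index-url` merges two indexes into one namespace. One way a vendor-independent setup can avoid that is to point pip at a *single* private index that itself proxies or mirrors PyPI, e.g. via a pip.conf (the server URL below is a placeholder, not a real service):

```ini
# pip.conf sketch -- hypothetical private index URL.
# index-url *replaces* the default PyPI index, unlike extra-index-url,
# which merely adds a second index sharing the same package namespace.
[global]
index-url = https://pypi.example.com/simple/
```

With this configuration a name squatted on public PyPI can no longer shadow a private package, because pip only ever talks to the one index.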
Jannis Gebauer From waynejwerner at gmail.com Fri Apr 21 16:25:03 2017 From: waynejwerner at gmail.com (Wayne Werner) Date: Fri, 21 Apr 2017 15:25:03 -0500 (CDT) Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: References: Message-ID: On Fri, 21 Apr 2017, Jannis Gebauer wrote: > They could, of course, fix this very easily by running their own PyPI mirrors. And now they have two problems. On the one hand, I agree that there is a potential for some abuse and vulnerabilities... but I think that I'd argue that if you're in a position where you're worried about that attack vector and you're using pypi.python.org then *you're doing it wrong!* On systems where I'm worried about pypi as an attack vector, I've downloaded the packages, built wheels, and stuck them in an S3 bucket, and I install with `--no-index --find-links=/path/to/my/wheelhouse`. I'm not sure if there are any improvements that you could make to the security of pip/pypi that are much better, but I'm not a security expert :) From ncoghlan at gmail.com Sat Apr 22 03:13:12 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2017 17:13:12 +1000 Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: References: Message-ID: On 22 April 2017 at 06:25, Wayne Werner wrote: > On Fri, 21 Apr 2017, Jannis Gebauer wrote: > >> They could, of course, fix this very easily by running their own PyPI >> mirrors. > > > And now they have two problems. > > > On the one hand, I agree that there is a potential for some abuse and > vulnerabilities...
but I think that I'd argue that if you're in a > position where you're worried about that attack vector and you're using > pypi.python.org then *you're doing it wrong!* > > On systems where I'm worried about pypi as an attack vector, I've > downloaded the packages, built wheels, and stuck them in an S3 bucket, > and I install with `--no-index --find-links=/path/to/my/wheelhouse`. Right, package whitelists with artifact caches under organisational control are the typical answer here, since they protect you from a range of potential threats, and ensure you always have the ability to reproduce builds if necessary, even if the original publisher decides to delete all their previous releases. Pinterest put together a decent write-up about the approach back when they first published pinrepo as an open source project: https://medium.com/@Pinterest_Engineering/open-sourcing-pinrepo-the-pinterest-artifact-repository-6ebfe5917bcb The devpi devs have also granted in-principle approval for package whitelisting support in their PyPI caching proxy configuration settings: https://github.com/devpi/devpi/issues/198 > I'm not sure if there are any improvements that you could make to the > security of pip/pypi that are much better, but I'm not a security expert > :) Pinning dependencies protects against implicit upgrade attacks, and checking for expected hashes helps avoid substitution attacks. The latter was added to baseline pip in 8.0.0, and was available through `peep` before that. It's also a native part of the way that Pipenv.lock files work. 
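The pinning-plus-hash-checking combination Nick mentions looks like this in a requirements file (the package pin is real syntax for pip 8+; the digest is a placeholder, since a real one must be computed from the actual archive being pinned):

```
# requirements.txt sketch -- the sha256 value is a placeholder
six==1.10.0 \
    --hash=sha256:<digest-of-the-expected-archive>
```

Once any requirement carries a `--hash`, pip switches into hash-checking mode: every requirement must then be pinned and hashed, and any download whose digest doesn't match is rejected, which is what blocks the substitution attacks discussed in this thread.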
For an approach that's less reliant on being part of an automated build pipeline, the yum/dnf solution to this at the operating system layer is repository priorities: https://wiki.centos.org/PackageManagement/Yum/Priorities conda similarly supports priorities at the channel level: https://conda.io/docs/channels.html#after-conda-4-1-0 The gist of that approach is that it lets you assign priority orders to particular repositories, such that the higher priority repo will always win in the event of a name clash. In this kind of context, it's used to ensure that 3rd party repositories can only ever add *new* packages, without allowing them to ever replace or upgrade ones that are available from the private repository. My understanding is that Debian's equivalent system is broadly similar in principle, but has a few additional features beyond the basic relative priority support. Nobody has been motivated to implement that capability for the Python-specific tooling so far, as it competes against two alternatives that will often make more architectural sense: - automated build pipelines using dependency pinning, hash checks, and pre-filtered artifact repositories - relying on build & deployment formats that already offer repository priority support (e.g. native Linux packages or conda packages) Cheers, Nick. 
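For concreteness, the yum priorities scheme boils down to entries like the following in `/etc/yum.repos.d/` (repo names and URLs are invented for the example; the `priority` key requires the yum-plugin-priorities plugin, and a lower number means higher priority):

```ini
# /etc/yum.repos.d/internal.repo -- hypothetical private repo, wins name clashes
[internal]
name=Internal packages
baseurl=https://repo.internal.example.com/el7/
priority=1

# /etc/yum.repos.d/thirdparty.repo -- lower priority: can only add new packages
[thirdparty]
name=Third-party packages
baseurl=https://thirdparty.example.com/el7/
priority=99
```

With the plugin enabled, a package that exists in `internal` can never be replaced or upgraded from `thirdparty`, which is the whitelisting property described above.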
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sat Apr 22 07:05:47 2017 From: donald at stufft.io (Donald Stufft) Date: Sat, 22 Apr 2017 07:05:47 -0400 Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: References: Message-ID: <659BA93C-5242-4ACE-9967-747C2F6FE775@stufft.io> > On Apr 22, 2017, at 3:13 AM, Nick Coghlan wrote: > > Nobody has been motivated to implement that capability for the > Python-specific tooling so far, as it competes against two > alternatives that will often make more architectural sense: > > - automated build pipelines using dependency pinning, hash checks, and > pre-filtered artifact repositories > - relying on build & deployment formats that already offer repository > priority support (e.g. native Linux packages or conda packages) I think the biggest barrier to doing it in pip is simply the UX of it. We're currently constrained by the fact that *all* of our options are available as CLI flags, environment variables, and of course, a config file. This works great for simple key, value configuration but it breaks down with more complex situations like trying to assign a priority to different repositories or selecting which repository a particular package *should* come from (and other more complex situations). Thus far we've more or less stuck our fingers in our ears and focused on other problems, but I think we're going to end up needing to refactor the way pip handles configuration to really make this sort of thing sane. -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed...
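For contrast, pip's current flat key/value model looks like this (a hypothetical `pip.conf`); note there is no way to rank the indexes: `extra-index-url` entries are treated as equal-trust peers of the main index, which is part of the UX limitation described above.

```ini
# Hypothetical pip.conf: every option is a simple key/value pair,
# with no way to express per-repository priority or per-package sourcing.
[global]
index-url = https://repo.internal.example.com/simple/
extra-index-url = https://pypi.python.org/simple/
```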
URL: From ncoghlan at gmail.com Mon Apr 24 01:10:19 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2017 15:10:19 +1000 Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: <659BA93C-5242-4ACE-9967-747C2F6FE775@stufft.io> References: <659BA93C-5242-4ACE-9967-747C2F6FE775@stufft.io> Message-ID: On 22 April 2017 at 21:05, Donald Stufft wrote: > I think the biggest barrier to doing it in pip is simply the UX of it. We're > currently constrained by the fact that *all* of our options are available as > CLI flags, environment variables, and of course, a config file. This works > great for simple key, value configuration but it breaks down with more > complex situations like trying to assign a priority to different > repositories or selecting which repository a particular package *should* > come from (and other more complex situations). > > Thus far we've more or less stuck our fingers in our ears and focused on > other problems, but I think we're going to end up needing to refactor the > way pip handles configuration to really make this sort of thing sane. As much as it annoys me in other ways, the `/etc/yum.repos.d/*` approach to managing named repositories seems hard to beat in terms of allowing for per-repo configuration settings while limiting command line complexity (since the command line mainly deals with repo names rather than the individual settings). Debian offers something similar now in the form of `/etc/apt/sources.list.d/`.
It doesn't automatically solve the complexity problem around pulling components from multiple repositories, but it helps make the complexity more manageable since:

- remote repositories are explicitly modeled as named entities with various configurable characteristics
- other commands then work with the local entity names rather than directly with remote URLs

Something I *don't* think either yum/dnf or apt handle well is the notion of *relative* priorities for a single command, since they're built around the notion of associating static numeric priorities with the repository definitions, whereas for development purposes, you usually want to express relative priorities like:

--indices=project-specific,org-common,pypi  # Continuous deployment
--indices=dev,staging,stable,pypi           # Phased deployment, dev
--indices=staging,stable                    # Phased deployment, CI/staging
--indices=stable                            # Phased deployment, production

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Mon Apr 24 02:42:56 2017 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 24 Apr 2017 18:42:56 +1200 Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: References: <659BA93C-5242-4ACE-9967-747C2F6FE775@stufft.io> Message-ID: On 24 April 2017 at 17:10, Nick Coghlan wrote: > On 22 April 2017 at 21:05, Donald Stufft wrote: >> I think the biggest barrier to doing it in pip is simply the UX of it. We're >> currently constrained by the fact that *all* of our options are available as >> CLI flags, environment variables, and of course, a config file. This works >> great for simple key, value configuration but it breaks down with more >> complex situations like trying to assign a priority to different >> repositories or selecting which repository a particular package *should* >> come from (and other more complex situations).
>> >> Thus far we've more or less stuck our fingers in our ears and focused on >> other problems, but I think we're going to end up needing to refactor the >> way pip handles configuration to really make this sort of thing sane. > > As much as it annoys me in other ways, the `/etc/yum.repos.d/*` > approach to managing named repositories seems hard to beat in terms of > allowing for per-repo configuration settings while limiting command > line complexity (since the command line mainly deals with repo names > rather than the individual settings). Debian offers something similar > now in the form of `/etc/apt/sources.list.d/`. > > It doesn't automatically solve the complexity problem around pulling > components from multiple repositories, but it helps make the > complexity more manageable since: > > - remote repositories are explicitly modeled as named entities with > various configurable characteristics > - other commands then work with the local entity names rather than > directly with remote URLs > > Something I *don't* think either yum/dnf or apt handle well is the > notion of *relative* priorities for a single command, since they're > built around the notion of associating static numeric priorities with > the repository definitions, whereas for development purposes, you > usually want to express relative priorities like: > > --indices=project-specific,org-common,pypi # Continuous deployment > --indices=dev,staging,stable,pypi # Phased deployment, dev > --indices=staging,stable # Phased deployment, CI/staging > --indices=stable # Phased deployment, production Well apt and similar tools are working in a system-global context. So pipeline-stage-based rules are a poor fit for the use of system packages. Of course if you use a system-in-a-container approach you can have both worlds, which a lot of folk do. That said apt pinning is pretty capable and can pick arbitrary packages in arbitrary order across arbitrary repositories.
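As a sketch of the apt pinning Robert refers to, an `/etc/apt/preferences.d/` fragment can rank repositories per package pattern (the origins below are invented for the example):

```text
Package: *
Pin: origin "repo.internal.example.com"
Pin-Priority: 900

Package: *
Pin: release o=ThirdParty
Pin-Priority: 150
```

Priorities of 1000 or more even permit downgrades, so fairly arbitrary selection orders can be expressed, though still with the static-priority limitations Nick notes above.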
-Rob From waynejwerner at gmail.com Mon Apr 24 09:06:42 2017 From: waynejwerner at gmail.com (Wayne Werner) Date: Mon, 24 Apr 2017 13:06:42 +0000 Subject: [Distutils] The sad and insecure state of commercial private package indexes In-Reply-To: References: <659BA93C-5242-4ACE-9967-747C2F6FE775@stufft.io> Message-ID: On Mon, Apr 24, 2017, 1:43 AM Robert Collins wrote: > > That said apt pinning is pretty capable and can pick arbitrary > packages in arbitrary order across arbitrary repositories. > And yum allows excludes from a repository. We use that at work so we only include ossec packages from a repository. -W > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at kluyver.me.uk Mon Apr 24 11:27:24 2017 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Mon, 24 Apr 2017 16:27:24 +0100 Subject: [Distutils] Idea: Using Trove classifiers for platform compatibility warnings In-Reply-To: References: Message-ID: <1493047644.428497.954420432.5735EE66@webmail.messagingengine.com> On Sat, Apr 8, 2017, at 03:17 AM, Nick Coghlan wrote: > 2. At least one "Programming Language :: Python" tag is set, but the > tags don't include any that cover the *current* Python version We've just enabled setuptools, PyPI and pip to use the python-requires metadata field, so that, for instance, IPython 6.0 was uploaded with python-requires: >=3.3. Pip 9 respects that, so users running 'pip install ipython' on Python 2.7 will get the latest IPython 5.x release. So we have a relatively elegant way to tell pip about supported Python versions - although it's designed for the use case "this won't work on X.Y, so don't even try to install it", rather than "this isn't explicitly marked as supporting X.Y, but it might work anyway." Personally I try to avoid the 'Python :: X.Y' classifiers because I don't want to be updating a dozen packages every time a new Python comes out.
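The python-requires mechanism described above is declared in a project's packaging metadata; an illustrative fragment for a hypothetical project (setuptools spells the option `python_requires`, which becomes the `Requires-Python` field in the uploaded metadata):

```ini
# setup.cfg of a hypothetical project: refuse installation on Python < 3.3
[options]
python_requires = >=3.3
```

pip 9+ reads `Requires-Python` from the index listing and skips releases whose specifier excludes the running interpreter, which is how Python 2.7 users end up with the latest IPython 5.x instead of 6.0.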
I've used the 'Python :: 2'/'Python :: 3' classifiers before - but even those could cause a lot of inaccurate metadata if Python 4 happens and is mostly compatible with Python 3. Thomas From randy at thesyrings.us Thu Apr 27 16:21:32 2017 From: randy at thesyrings.us (Randy Syring) Date: Thu, 27 Apr 2017 16:21:32 -0400 Subject: [Distutils] bug with get-pip.py? Message-ID: I've installed Python 3.6 on Ubuntu 14.04 through the deadsnakes PPA and I'm trying to get pip installed.

$ python3.6 ~/tmp/get-pip.py
Traceback (most recent call last):
  File "/home/rsyring/tmp/get-pip.py", line 20061, in <module>
    main()
  File "/home/rsyring/tmp/get-pip.py", line 194, in main
    bootstrap(tmpdir=tmpdir)
  File "/home/rsyring/tmp/get-pip.py", line 119, in bootstrap
    import setuptools  # noqa
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 12, in <module>
    from setuptools.extension import Extension
  File "/usr/lib/python3/dist-packages/setuptools/extension.py", line 7, in <module>
    from setuptools.dist import _get_unpatched
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 16, in <module>
    import pkg_resources
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1479, in <module>
    register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)
AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'

Any ideas? *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Apr 28 12:47:59 2017 From: brett at python.org (Brett Cannon) Date: Fri, 28 Apr 2017 16:47:59 +0000 Subject: [Distutils] bug with get-pip.py? In-Reply-To: References: Message-ID: How old is the version of setuptools being used? I vaguely remember this error from a while back and it being fixed in a later release of setuptools.
On Thu, 27 Apr 2017 at 13:22 Randy Syring wrote:
> I've installed Python 3.6 on Ubuntu 14.04 through the deadsnakes PPA and
> I'm trying to get pip installed.
>
> $ python3.6 ~/tmp/get-pip.py
> Traceback (most recent call last):
>   File "/home/rsyring/tmp/get-pip.py", line 20061, in <module>
>     main()
>   File "/home/rsyring/tmp/get-pip.py", line 194, in main
>     bootstrap(tmpdir=tmpdir)
>   File "/home/rsyring/tmp/get-pip.py", line 119, in bootstrap
>     import setuptools  # noqa
>   File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 12, in <module>
>     from setuptools.extension import Extension
>   File "/usr/lib/python3/dist-packages/setuptools/extension.py", line 7, in <module>
>     from setuptools.dist import _get_unpatched
>   File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 16, in <module>
>     import pkg_resources
>   File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1479, in <module>
>     register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)
> AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'
>
> Any ideas?
>
> *Randy Syring*
> Husband | Father | Redeemed Sinner
>
> *"For what does it profit a man to gain the whole world and forfeit his
> soul?" (Mark 8:36 ESV)*
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part -------------- An HTML attachment was scrubbed... URL:
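For what it's worth, the failing attribute lookup reflects a CPython reorganisation: `SourceFileLoader` moved out of `importlib._bootstrap` (into `importlib._bootstrap_external`, exposed via `importlib.machinery`) in Python 3.5, so a pkg_resources old enough to reach into `importlib._bootstrap` directly breaks on 3.6. A quick check on a modern interpreter (a diagnostic sketch, not a fix):

```python
import importlib._bootstrap
import importlib.machinery

# Since Python 3.5 the file-based loaders live in importlib._bootstrap_external
# and are exposed through importlib.machinery, not importlib._bootstrap.
print(hasattr(importlib.machinery, "SourceFileLoader"))   # True
print(hasattr(importlib._bootstrap, "SourceFileLoader"))  # False
```

That is consistent with Brett's recollection: the fix is a newer setuptools whose pkg_resources no longer pokes at the private `_bootstrap` module.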