From guettliml at thomas-guettler.de Tue Nov 1 03:30:25 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Tue, 1 Nov 2016 08:30:25 +0100
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References:
Message-ID: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>

On 17.09.2016 12:29, Nick Coghlan wrote:
> Hi folks,
>
> Prompted by a few posts I read recently about the current state of the
> Python packaging ecosystem, I figured it made sense to put together an
> article summarising my own perspective on the current state of things:
> http://www.curiousefficiency.org/posts/2016/09/python-packaging-ecosystem.html

Thank you for this summarizing article. Yes, a lot was done during the last months.

I liked the part "My core software ecosystem design philosophy" a lot, since it explains that both parties (software consumers and software publishers) want things to be simple and easy.

About conda: if pip and conda overlap at some point, why not implement the overlapping parts in a reusable library which gets used by both conda and pip?

About funding: looking for more income is one way to solve this. Why not look in the other direction: how can costs be reduced?

Heading "Making the presence of a compiler on end user systems optional": here I can just say thank you very much. I guess it was a lot of hard work to make this all simple and easy for software consumers and publishers. Thank you.

I wrote some lines about the topic "Automating wheel creation", but I deleted my thoughts, since I am afraid it could stir up a bad mood on this list again. That's not my goal.

Regards,
Thomas Güttler

--
http://www.thomas-guettler.de/

From ncoghlan at gmail.com Tue Nov 1 05:50:44 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 1 Nov 2016 19:50:44 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 1 November 2016 at 17:30, Thomas Güttler wrote:
> On 17.09.2016 12:29, Nick Coghlan wrote:
>> [snip]
>
> Thank you for this summarizing article. Yes, a lot was done during the
> last months.
>
> I liked the part "My core software ecosystem design philosophy" a lot,
> since it explains that both parties (software consumers and software
> publishers) want things to be simple and easy.
>
> About conda: if pip and conda overlap at some point, why not implement
> the overlapping parts in a reusable library which gets used by both
> conda and pip?

For the parts where they genuinely overlap, conda is already able to just use pip, or else the same libraries that pip uses. For the platform management pieces (SAT solving for conda repositories, converting PyPI packages to conda ones, language independent environment management), what conda does is outside the scope of what pip supports anyway.

> About funding: looking for more income is one way to solve this. Why not
> look in the other direction: how can costs be reduced?

Thanks to donated infrastructure, the direct costs to the PSF are incredibly low already.
Donald went into some detail on that in https://caremad.io/posts/2016/05/powering-pypi/ and that's mostly still accurate (although his funded role with HPE ended recently).

> Heading "Making the presence of a compiler on end user systems optional":
> here I can just say thank you very much. I guess it was a lot of hard work
> to make this all simple and easy for software consumers and publishers.
> Thank you.
>
> I wrote some lines about the topic "Automating wheel creation", but I
> deleted my thoughts, since I am afraid it could stir up a bad mood on this
> list again. That's not my goal.

I currently see 3 main ways that could eventually happen:

- the PSF sorts out its general sustainability concerns to the point where it believes it can credibly maintain such a service on the community's behalf
- the conda-forge folks branch out into offering wheel building as well (so it becomes a matter of "publish your Python projects for the user level conda platform, get platform independent Python wheels as well")
- someone builds such a service independently of the current PyPI infrastructure team, and convinces package publishers to start using it

There's also a 4th variant, which is getting to a point where someone figures out a pushbutton solution for a build pipeline in a public PaaS that offers a decent free tier. This is potentially one of the more promising options, since it means the sustainability risks related to future growth in demand accrue to the PaaS providers, rather than to the PSF. However, it's somewhat gated on the Warehouse migration, since you really want API token support for that kind of automation, which is something the current PyPI code base doesn't have, and isn't going to gain.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Tue Nov 1 06:22:32 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 1 Nov 2016 10:22:32 +0000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 1 November 2016 at 09:50, Nick Coghlan wrote:
> There's also a 4th variant, which is getting to a point where someone
> figures out a pushbutton solution for a build pipeline in a public
> PaaS that offers a decent free tier.

This, of course, is relatively easy for the simple cases (I could set up a project that builds Windows wheels on Appveyor relatively easily - I may well do so when I have some spare time). But the "simple cases" are precisely those where the only difficulty is the (relatively easy to overcome) need for a compiler. In other words, if "pip wheel foo" works when you have a compiler, it's simple to automate the building of wheels.

The significant problem here is the process of setting up build dependencies for more complex projects, like scipy, pywin32, or lxml, or even simpler cases such as pyyaml. Each such project would need custom setup steps, and there's no easy way for a generic wheel building service to infer those steps - the onus would necessarily be on the individual projects to provide a script that manages the build process (and such a script might as well be the setup.py, meaning that we are back to the "pip wheel foo just works" case). So the issue remains one of *someone* doing project-by-project work to contribute scripts that cleanly handle setting up the build environment.
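To make that concrete, such a script is typically just a thin driver that first installs whatever build dependencies the project needs, and then invokes pip - a rough sketch only, with an entirely hypothetical dependency list:

    # build_wheels.py - hypothetical per-project build driver (sketch only).
    # Assumes pip and a suitable compiler are already present on the machine.
    import subprocess
    import sys

    # Hypothetical: the build-time dependencies "pip wheel" won't set up itself
    # (headers, code generators, etc. vary wildly from project to project).
    BUILD_DEPS = ["cython"]

    def main():
        # Prepare the build environment - the genuinely per-project part.
        subprocess.check_call([sys.executable, "-m", "pip", "install"] + BUILD_DEPS)
        # Once the environment is ready, building the wheel is the easy part.
        subprocess.check_call([sys.executable, "-m", "pip", "wheel",
                               "--no-deps", "-w", "wheelhouse", "."])

    if __name__ == "__main__":
        main()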
Whether those scripts get contributed to the project or to a build farm project is not so much a technical matter as a matter of whether projects are willing to accept such pull requests.

Paul

From ncoghlan at gmail.com Tue Nov 1 08:19:51 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 1 Nov 2016 22:19:51 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 1 November 2016 at 20:22, Paul Moore wrote:
> On 1 November 2016 at 09:50, Nick Coghlan wrote:
>> [snip]
>
> This, of course, is relatively easy for the simple cases (I could set
> up a project that builds Windows wheels on Appveyor relatively easily
> - I may well do so when I have some spare time). But the "simple
> cases" are precisely those where the only difficulty is the
> (relatively easy to overcome) need for a compiler. In other words, if
> "pip wheel foo" works when you have a compiler, it's simple to
> automate the building of wheels.

It isn't that simple, as what you really want to automate is the *release process*, where you upload an sdist, and the wheels *just happen* for:

- the Python versions you want to support (e.g. 2.7, 3.4, 3.5)
- the platforms you want to support (typically x86_64 for Windows, Mac OS X, manylinux1, and maybe 32 bit Windows as well)

Adding a new Python release or a new platform to the build configuration is currently an activity that requires per-project work when in theory a build service could just add it automatically based on when new releases happen.

> The significant problem here is the process of setting up build
> dependencies for more complex projects, like scipy, pywin32, or lxml,
> or even simpler cases such as pyyaml. [snip]

Making "pip wheel foo" work cross platform and defining the steps needed to cut an sdist for each release are the only pieces that should genuinely require work on a per-project basis. However, at the moment, there are a lot of other steps related to the per-release build-and-publish matrix that are also defined on a per-project basis, and there's no fundamental reason that they need to be - it's purely that providing a public build service and keeping it running smoothly isn't something you want to be asking volunteers to be responsible for.
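To put a concrete shape on it: the whole matrix is mechanically derivable from two short lists, which is exactly the kind of thing a service could maintain on a project's behalf (an illustrative sketch only - the version and platform names are examples, not a spec):

    # Illustrative sketch: the per-release build matrix a service could
    # derive automatically, rather than each project maintaining it by hand.
    from itertools import product

    pythons = ["2.7", "3.4", "3.5"]  # versions the project chooses to support
    platforms = ["win32", "win_amd64", "macosx_10_6_intel", "manylinux1_x86_64"]

    # When a new CPython version or platform appears, the service just
    # extends the relevant list and rebuilds - no per-project reconfiguration.
    for py, plat in product(pythons, platforms):
        print("build wheel for Python {} on {}".format(py, plat))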
Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Tue Nov 1 09:10:08 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 1 Nov 2016 13:10:08 +0000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 1 November 2016 at 12:19, Nick Coghlan wrote:
> It isn't that simple, as what you really want to automate is the
> *release process*, where you upload an sdist, and the wheels *just
> happen* for:
>
> - the Python versions you want to support (e.g. 2.7, 3.4, 3.5)
> - the platforms you want to support (typically x86_64 for Windows, Mac
> OS X, manylinux1, and maybe 32 bit Windows as well)
>
> Adding a new Python release or a new platform to the build
> configuration is currently an activity that requires per-project work
> when in theory a build service could just add it automatically based
> on when new releases happen.

Ah. If you're looking for automated publication on PyPI (or alternatively, creation and maintenance of a secondary index for wheels) then yes, that's a much bigger piece of work.

Paul

From mezin.alexander at gmail.com Tue Nov 1 11:40:57 2016
From: mezin.alexander at gmail.com (Aleksandr Mezin)
Date: Tue, 1 Nov 2016 21:40:57 +0600
Subject: [Distutils] Improving compiler (GCC) detection?
Message-ID:

I have problems with distutils not detecting when the compiler is GCC. It passes the "-R" option on the command line, which isn't supported by GCC without "-Wl,".

In my build environment (I build Python from source), the "CC" environment variable is set to "/usr/bin/cc", which actually points to GCC. But distutils simply checks the executable name. Also, I'd like to use ccache... So I think distutils should somehow check the __GNUC__ definition instead.

Are there any reasons why such detection can't/shouldn't be implemented? I'd like to work on the patch, but first I'd like to know if it's possible/will be accepted.

Also, clang on linux doesn't accept "-R" either:
warning: unknown remark option '-R/usr/lib'; did you mean '-Rpass'? [-Wunknown-warning-option]
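For reference, roughly the kind of check I have in mind - an untested sketch, not an actual distutils patch:

    # Sketch: ask the compiler itself whether it is GCC-compatible, instead
    # of guessing from the executable name. Untested; minimal error handling.
    import subprocess

    def is_gcc_compatible(cc):
        # "cc -dM -E -" dumps the predefined macros for an empty translation
        # unit; GCC-compatible drivers (including clang) define __GNUC__.
        try:
            proc = subprocess.Popen([cc, "-dM", "-E", "-"],
                                    stdin=subprocess.PIPE,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE,
                                    universal_newlines=True)
        except OSError:
            return False
        out, _ = proc.communicate("")
        return proc.returncode == 0 and "__GNUC__" in out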
From guettliml at thomas-guettler.de Tue Nov 1 13:35:13 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Tue, 1 Nov 2016 18:35:13 +0100
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 01.11.2016 10:50, Nick Coghlan wrote:
> On 1 November 2016 at 17:30, Thomas Güttler wrote:
>> [snip]
>
> There's also a 4th variant, which is getting to a point where someone
> figures out a pushbutton solution for a build pipeline in a public
> PaaS that offers a decent free tier. This is potentially one of the
> more promising options, since it means the sustainability risks
> related to future growth in demand accrue to the PaaS providers,
> rather than to the PSF. However, it's somewhat gated on the Warehouse
> migration, since you really want API token support for that kind of
> automation, which is something the current PyPI code base doesn't
> have, and isn't going to gain.

I like this 4th variant. I guess most companies which use Python in a professional way already run their own PyPI server.

I am unsure whether a public PaaS for this would exist. Maybe a script which runs a container on linux is enough - at least enough to build linux-only wheels. I guess most people have root access to a linux server somewhere.
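Something along these lines might already get quite far for the linux-only case (a rough sketch, assuming Docker and the manylinux1 build image are available; a real script would also need to skip pure-python wheels before the repair step):

    # Sketch: build manylinux1 wheels by driving the official build image.
    # Assumes Docker is installed and the project source is the current dir.
    import os
    import subprocess

    IMAGE = "quay.io/pypa/manylinux1_x86_64"

    # Build a wheel with each Python in the image, then run auditwheel so
    # external shared-library dependencies satisfy the manylinux1 policy.
    SCRIPT = (
        'for PYBIN in /opt/python/*/bin; do '
        '"$PYBIN/pip" wheel /io -w /tmp/wheels || exit 1; done; '
        'for whl in /tmp/wheels/*.whl; do '
        'auditwheel repair "$whl" -w /io/wheelhouse; done'
    )

    subprocess.check_call(["docker", "run", "--rm",
                           "-v", os.getcwd() + ":/io",
                           IMAGE, "bash", "-c", SCRIPT])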
Regards,
Thomas

--
Thomas Guettler
http://www.thomas-guettler.de/

From matthew.brett at gmail.com Tue Nov 1 12:50:13 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 1 Nov 2016 09:50:13 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

Hi,

On Tue, Nov 1, 2016 at 10:35 AM, Thomas Güttler wrote:
> On 01.11.2016 10:50, Nick Coghlan wrote:
>> [snip]
>>
>> There's also a 4th variant, which is getting to a point where someone
>> figures out a pushbutton solution for a build pipeline in a public
>> PaaS that offers a decent free tier.
>> This is potentially one of the more promising options, since it means
>> the sustainability risks related to future growth in demand accrue to
>> the PaaS providers, rather than to the PSF. [snip]
>
> I like this 4th variant. I guess most companies which use Python
> in a professional way already run their own PyPI server.
>
> I am unsure whether a public PaaS for this would exist. Maybe a script
> which runs a container on linux is enough - at least enough to
> build linux-only wheels. I guess most people have root access
> to a linux server somewhere.

I wrote some scripts to do this:

https://github.com/matthew-brett/multibuild

See the README there for standard usage. For simple projects the project configuration is only a few lines. Quite a few projects use multibuild and travis-ci to build 32- and 64-bit manylinux and OSX wheels; here are some (more complex) examples:

https://travis-ci.org/MacPython/numpy-wheels
https://travis-ci.org/MacPython/scipy-wheels
https://travis-ci.org/MacPython/matplotlib-wheels
https://travis-ci.org/scikit-image/scikit-image-wheels
https://travis-ci.org/MacPython/pandas-wheels
https://travis-ci.org/python-pillow/pillow-wheels
https://travis-ci.org/MacPython/cython-wheels

The travis-ci jobs mostly upload to a CDN on a Rackspace account donated to scikit-learn, from which the release managers can upload to PyPI with a few lines at the terminal.

So, for many of us scientific Python projects, and for Linux and OSX, this problem has a reasonable solution in principle.

One hurdle at the moment is that new projects have to either get an encrypted key to the scikit-learn account from one of us with the credentials, or roll their own mechanism for uploading the built wheels. It would be very helpful to have some standard PyPA account mechanism for this. That would allow us to standardize the tools and the permission mechanism.

Cheers,

Matthew

From chris.barker at noaa.gov Tue Nov 1 13:01:42 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 1 Nov 2016 10:01:42 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Tue, Nov 1, 2016 at 2:50 AM, Nick Coghlan wrote:

>> I wrote some lines about the topic "Automating wheel creation", but I
>> deleted my thoughts, since I am afraid it could stir up a bad mood on
>> this list again. That's not my goal.
>
> I currently see 3 main ways that could eventually happen:
>
> - the PSF sorts out its general sustainability concerns to the point
> where it believes it can credibly maintain such a service on the
> community's behalf

That would be great.

> - the conda-forge folks branch out into offering wheel building as
> well (so it becomes a matter of "publish your Python projects for the
> user level conda platform, get platform independent Python wheels as
> well")

I doubt that's going to happen, but conda-forge is an open source project -- someone else could certainly make a friendly fork and make a wheel-forge.

> - someone builds such a service independently of the current PyPI
> infrastructure team, and convinces package publishers to start using
> it

and starting with the nice tools conda-forge has built would be a good way to get going.
There is an odd thing here, though -- conda-forge works because of two things:

1) The GitHub workflow is helpful for developing and vetting recipes

2) GitHub already has arrangements with CI services -- so it's easy to leverage them to do the building.

But in theory, the PSF could negotiate an arrangement with the CI services to make it easy to call them from PyPI.

And (1) is really necessary because of two things: the folks making the recipes generally are not the package maintainers; and, critically, there are a lot of issues with consistent dependency management -- which is what conda can help with, and wheels just don't.

Another key point about all of this: Nick missed the key point about conda. He mentioned that conda allows you to manage the python run-time itself, which is indeed a nice feature, but getting a python run-time has never been the hard part (maybe on Linux if you want a different one than your system supplies). And it's not about needing a compiler -- wheels solve that -- and getting the system compiler is not all that hard either, on all three major OSs.

It's about the non-python dependencies -- all the various C libs, etc. that your python packages might need. Or, for that matter, non-python command line utilities, and who knows what else. Sure, this is a bigger issue in scientific computing -- which is why conda was born there -- but it's an issue in many other realms as well. For many years, a web dev I worked with avoided using PIL because it was too much of a pain to install -- that's pretty easy these days (thanks Pillow!) but it won't be long before a serious web service might need SOMETHING that's not pure python...

So -- an auto wheel building system would be nice, but until it can handle non-python dependencies, it's going to hit a wall....

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From chris.barker at noaa.gov Tue Nov 1 13:05:40 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 1 Nov 2016 10:05:40 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Tue, Nov 1, 2016 at 5:19 AM, Nick Coghlan wrote:

> It isn't that simple, as what you really want to automate is the
> *release process*, where you upload an sdist, and the wheels *just
> happen* for:
>
> - the Python versions you want to support (e.g. 2.7, 3.4, 3.5)
> - the platforms you want to support (typically x86_64 for Windows, Mac
> OS X, manylinux1, and maybe 32 bit Windows as well)

indeed -- do take a look at conda-forge, they've got all that working. And while it ultimately calls "conda build this_thing", it wouldn't be hard to make that a call to build a wheel instead.

> Adding a new Python release or a new platform to the build
> configuration is currently an activity that requires per-project work
> when in theory a build service could just add it automatically based
> on when new releases happen.

hmm -- maybe we could leverage GitHub, like conda-forge does -- Warehouse would actually push to a repo on GitHub that would then trigger the CI builds -- though it sure seems cleaner for Warehouse to call the CIs directly.

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From guettliml at thomas-guettler.de Wed Nov 2 02:00:27 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Wed, 2 Nov 2016 07:00:27 +0100
Subject: [Distutils] Travis-CI is not open source. Was: Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>

On 01.11.2016 17:50, Matthew Brett wrote:
> Hi,
>
> On Tue, Nov 1, 2016 at 10:35 AM, Thomas Güttler wrote:
>> [snip]
> I wrote some scripts to do this:
>
> https://github.com/matthew-brett/multibuild
>
> See the README there for standard usage. For simple projects the
> project configuration is only a few lines. Quite a few projects use
> multibuild and travis-ci to build 32- and 64-bit manylinux and OSX
> wheels; here are some (more complex) examples:
>
> https://travis-ci.org/MacPython/numpy-wheels
> https://travis-ci.org/MacPython/scipy-wheels
> https://travis-ci.org/MacPython/matplotlib-wheels
> https://travis-ci.org/scikit-image/scikit-image-wheels
> https://travis-ci.org/MacPython/pandas-wheels
> https://travis-ci.org/python-pillow/pillow-wheels
> https://travis-ci.org/MacPython/cython-wheels
>
> The travis-ci jobs mostly upload to a CDN on a Rackspace account
> donated to scikit-learn, from which the release managers can upload to
> PyPI with a few lines at the terminal.
>
> [snip]

I see another hurdle: travis-ci is very widespread, but AFAIK it is not open source:

https://travis-ci.com/plans

--
http://www.thomas-guettler.de/

From graffatcolmingov at gmail.com Wed Nov 2 09:13:56 2016
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Wed, 2 Nov 2016 08:13:56 -0500
Subject: [Distutils] Travis-CI is not open source.
Was: Current Python packaging status (from my point of view)
In-Reply-To: <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
Message-ID:

On Wed, Nov 2, 2016 at 1:00 AM, Thomas Güttler wrote:
> I see another hurdle: travis-ci is very widespread, but AFAIK it is not
> open source:
>
> https://travis-ci.com/plans

It is open source: https://github.com/travis-ci

Sadly, the infrastructure it takes to operate it is not Free.

From ncoghlan at gmail.com Wed Nov 2 10:32:34 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 00:32:34 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 2 November 2016 at 03:01, Chris Barker wrote:
> Nick missed the key point about conda. He mentioned that conda allows you to
> manage the python run-time itself, which is indeed a nice feature, but
> getting a python run-time has never been the hard part (maybe on Linux if
> you want a different one than your system supplies).

I didn't miss it accidentally, I left it out because it wasn't relevant to the post (which was about the ecosystem design direction, not the end user features that make the desire to use pip incomprehensible to a lot of folks).

Designing software assembly tools for interpreted software is a system integrator's game, and when people are writing Python code, there is one absolutely 100% unavoidable integration task: choosing which Python runtimes they're going to support.

But as an ecosystem designer, even if you only consider that single core integration activity, you have already hit a fundamental difference between pip and conda: conda requires that you use a runtime that it provides (similar to the way Linux distro packaging works), while pip really doesn't care where the runtime came from, as long as it can run pip properly.

Essentially all the other differences between the two systems stem from the different choice made in that core design decision, which is why the two aren't substitutes for each other and never will be - they solve different problems for different audiences.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Wed Nov 2 10:40:38 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 00:40:38 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 2 November 2016 at 03:05, Chris Barker wrote:
>> Adding a new Python release or a new platform to the build
>> configuration is currently an activity that requires per-project work
>> when in theory a build service could just add it automatically based
>> on when new releases happen.
>
> hmm -- maybe we could leverage GitHub, like conda-forge does -- Warehouse
> would actually push to a repo on GitHub that would then trigger the CI
> builds -- though it sure seems cleaner for Warehouse to call the CIs
> directly.

GitHub's integration works the other way around - it emits notifications when events (like new commits) happen, and folks can register external services to receive those events and then authorize them to act on them (e.g. by publishing a release, or commenting on a pull request).
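(As a concrete illustration of the authorisation side - and this is just a sketch of GitHub's documented approach, not anything PyPI has - each event delivery is signed with an HMAC of the payload using a shared secret, which the receiving service then verifies:)

    # Sketch: how a registered service verifies a GitHub event delivery.
    # GitHub sends "X-Hub-Signature: sha1=<hexdigest>" computed over the
    # raw request body with a shared secret.
    import hashlib
    import hmac

    def is_valid_delivery(secret, body, signature_header):
        # secret and body are bytes; signature_header is e.g. "sha1=ab12..."
        expected = "sha1=" + hmac.new(secret, body, hashlib.sha1).hexdigest()
        return hmac.compare_digest(expected, signature_header)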
This is one of the real costs of the lack of funding for PyPI development - we simply don't have those event notification and service authorisation primitives built into the current platform, so the only current way to automate things is to trust services with full access to your PyPI account by providing your password to them (which is a fundamentally bad idea, which is why we don't recommend it).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Wed Nov 2 10:54:03 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 00:54:03 +1000
Subject: [Distutils] Travis-CI is not open source. Was: Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
Message-ID:

On 2 November 2016 at 23:13, Ian Cordasco wrote:
> It is open source: https://github.com/travis-ci
>
> Sadly, the infrastructure it takes to operate it is not Free.

This is also an area where I'm fine with recommending freemium solutions if they're the lowest barrier to entry option for new users, and "Use GitHub + Travis CI" qualifies on that front.

The CPython repo hosting debate forced me to seriously consider when in someone's journey into open source and free software it makes sense to introduce the notion of "Free software needs free tools", and I came to the conclusion that in most cases the right answer is "After they discover the concept on their own and want to know more about it".

My hard requirement is that folks using completely open source/free software stacks be *supported*, and that's not at risk here (since there are plenty of ways to run your own private build service).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chris.barker at noaa.gov Wed Nov 2 11:54:53 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 2 Nov 2016 08:54:53 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Wed, Nov 2, 2016 at 7:32 AM, Nick Coghlan wrote:

>> He mentioned that conda allows you to manage the python run-time itself,
>> which is indeed a nice feature, but getting a python run-time has never
>> been the hard part (maybe on Linux if you want a different one than your
>> system supplies).
>
> I didn't miss it accidentally, I left it out because it wasn't
> relevant to the post (which was about the ecosystem design direction,

I would argue that it is quite relevant -- talking about design decisions without talking about the motivations and consequences of those decisions is missing much of the point.

The issue here is that folks might read that post and think: "do I want to manage my python install or not?" and think that's the only question they need to ask to determine if pip or conda is right for them. But it's not at all.

pip is about managing stuff WITHIN python -- that's why it can work with any conforming python install. So that's the advantage of this design decision. But it leads to a major limitation, EVEN if you only care about python, because it can't (properly) manage the stuff outside of python that python packages may need.
I honestly have no idea if the original motivation was specifically to have a system that could work with any python install (maybe), but it certainly was designed specifically to handle python packages, period.

conda started with the motivation of managing complex non-python dependencies (initially, to support python) -- in order to do that effectively, it has to operate outside the python envelope, and that means that it really needs to manage python itself. I'm pretty sure that managing python itself was a consequence of the design goals, not a primary design goal.

> not the end user features that make the desire to use pip
> incomprehensible to a lot of folks).
>
> Designing software assembly tools for interpreted software is a system
> integrator's game, and when people are writing Python code, there is
> one absolutely 100% unavoidable integration task: choosing which
> Python runtimes they're going to support.

hmm -- I don't think that's the code-writer's job -- it's the deployer's job. Other than choosing which python *version* I want to use, I can happily develop with system python and pip, and then deploy with conda -- or vice versa. Indeed, I can develop on Windows and deploy on Linux, or....

though if you meant pypy vs iron python vs cPython when you meant "runtime" then yes, with the dependency issue, you really do need to make that choice upfront.

> conda requires that you use a
> runtime that it provides (similar to the way Linux distro packaging
> works), while pip really doesn't care where the runtime came from, as
> long as it can run pip properly.

yes indeed -- and I'm fully aware of the consequences of that -- I work in a very locked down system -- our system security folks REALLY want us to use system-supplied packages (i.e. the python supplied by the OS distributor) where possible. But that doesn't mean that I should try to manage my applications with pip -- because while that makes the python part easier, it makes the dependencies a nightmare -- having to custom compile all sorts of stuff -- much more work, and not any more satisfying to the security folks.

So you really need to look at your whole system to determine what will best work for your use case.

> Essentially all the other differences between the two systems stem
> from the different choice made in that core design decision, which is
> why the two aren't substitutes for each other and never will be - they
> solve different problems for different audiences.

Indeed they do. But the different audiences aren't "data science folks" vs "web developers" -- the different audiences are determined by deployment needs, not domain.

conda environments are kind of like mini docker containers -- they really can make life easier for lots of use cases, if you can get your IT folks to accept a "non standard" python.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
From ncoghlan at gmail.com Wed Nov 2 12:49:30 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 02:49:30 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 3 November 2016 at 01:54, Chris Barker wrote:
> On Wed, Nov 2, 2016 at 7:32 AM, Nick Coghlan wrote:
>>
>> I didn't miss it accidentally, I left it out because it wasn't
>> relevant to the post (which was about the ecosystem design direction,
>
> I would argue that it is quite relevant -- talking about design decisions
> without talking about the motivations and consequences of those decisions
> is missing much of the point.

No, as the post was about the fundamental and irreconcilable differences in capabilities, not the incidental ones that can be solved if folks choose (or are paid) to put in the necessary design and development time.

> The issue here is that folks might read that post and think: "do I want to
> manage my python install or not?" and think that's the only question they
> need to ask to determine if pip or conda is right for them. But it's not
> at all.

The post isn't written for beginners deciding which tool to use, it's written for intermediate folks that have already chosen one or the other for their own needs, and are wondering why the other still exists (or, worse, are telling people that have chosen the other tool for good reasons that they're wrong, and should switch).

> pip is about managing stuff WITHIN python -- that's why it can work with
> any conforming python install. So that's the advantage of this design
> decision. But it leads to a major limitation, EVEN if you only care about
> python, because it can't (properly) manage the stuff outside of python
> that python packages may need. I honestly have no idea if the original
> motivation was specifically to have a system that could work with any
> python install (maybe), but it certainly was designed specifically to
> handle python packages, period.

Aside from already needing a Python runtime, the inability to fully specify the target environment isn't an inherent design limitation though, the solution just looks different at a pip level:

- you need a system for specifying environmental *constraints* (like dynamically linked C libraries and command line applications you invoke)
- you need a system for asking the host environment if it can satisfy those constraints

Tennessee Leeuwenberg started a draft PEP for that first part last year: https://github.com/pypa/interoperability-peps/pull/30/files

dnf/yum, apt, brew, conda, et al all *work around* the current lack of such a system by asking humans (aka "downstream package maintainers") to supply the additional information by hand in a platform specific format.
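If it helps make that concrete, the first piece might be little more than a declared mapping plus a host-side check, along these lines (the field names here are invented for illustration - they're not taken from the draft PEP or any accepted spec):

    # Hypothetical sketch of declared environment constraints and a
    # host-side check. The "libraries"/"commands" field names are invented
    # for this example; they are not part of any accepted metadata spec.
    import ctypes.util
    import shutil

    CONSTRAINTS = {
        "libraries": ["ssl", "ffi"],  # dynamically linked C libraries needed
        "commands": ["git"],          # external tools invoked at runtime
    }

    def unsatisfied(constraints):
        missing = []
        for lib in constraints.get("libraries", []):
            if ctypes.util.find_library(lib) is None:
                missing.append("library: " + lib)
        for cmd in constraints.get("commands", []):
            if shutil.which(cmd) is None:  # shutil.which needs Python 3.3+
                missing.append("command: " + cmd)
        return missing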
> conda started with the motivation of managing complex non-python
> dependencies (initially, to support python) -- in order to do that
> effectively, it has to operate outside the python envelope, and that
> means that it really needs to manage python itself. I'm pretty sure
> that managing python itself was a consequence of the design goals, not
> a primary design goal.

Correct, just as managing Python runtimes specifically isn't a primary goal of Linux distro package managers, it's just an artifact of providing a language independent dependency declaration and management system.

>> not the end user features that make the desire to use pip
>> incomprehensible to a lot of folks).
>>
>> Designing software assembly tools for interpreted software is a system
>> integrator's game, and when people are writing Python code, there is
>> one absolutely 100% unavoidable integration task: choosing which
>> Python runtimes they're going to support.
>
> hmm -- I don't think that's the code-writer's job -- it's the deployer's
> job. Other than choosing which python *version* I want to use, I can
> happily develop with system python and pip, and then deploy with conda --
> or vice versa. Indeed, I can develop on Windows and deploy on Linux, or....

You still need to decide which versions you're going to test against, and which bug reports you're going to accept as potentially valid feedback (e.g. very few people running upstream community projects will accept "doesn't run on Python 2.5" as a valid bug report any more, and RHEL/CentOS 7, Software Collections, and conda have been around for long enough now that most won't accept "doesn't run on 2.6" either).

> though if you meant pypy vs iron python vs cPython when you meant
> "runtime" then yes, with the dependency issue, you really do need to make
> that choice upfront.

I also mean 2.6 vs 2.7 vs 3.4 vs 3.5 vs 3.6, etc.

> But the different audiences aren't "data science folks" vs "web developers"
> -- the different audiences are determined by deployment needs, not domain.

Deployment needs are strongly correlated with domain though, and there's a world of difference between the way folks do exploratory data analysis and the way production apps are managed in Heroku/OpenShift/Cloud Foundry/Lambda/etc.

You can certainly use conda to do production web service deployments, and if you're still deploying to bare metal or full VMs, or building your own container images from scratch, it's a decent option to consider. However, if you're specifically interested in web service development, then swapping in your own Python runtime rather than just using a PaaS provided one is really much lower level than most beginners are going to want to be worrying about these days - getting opinionated about that kind of thing comes later (if it happens at all).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From donald at stufft.io Wed Nov 2 12:57:45 2016
From: donald at stufft.io (Donald Stufft)
Date: Wed, 2 Nov 2016 12:57:45 -0400
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>

> On Nov 2, 2016, at 12:49 PM, Nick Coghlan wrote:
>
>> hmm -- I don't think that's the code-writer's job -- it's the deployer's
>> job. Other than choosing which python *version* I want to use, I can
>> happily develop with system python and pip, and then deploy with conda --
>> or vice versa. Indeed, I can develop on Windows and deploy on Linux, or....
>
> You still need to decide which versions you're going to test against,
> and which bug reports you're going to accept as potentially valid
> feedback (e.g.
> very few people running upstream community projects
> will accept "doesn't run on Python 2.5" as a valid bug report any
> more, and RHEL/CentOS 7, Software Collections, and conda have been
> around for long enough now that most won't accept "doesn't run on 2.6"
> either)
>
>> though if you meant pypy vs iron python vs cPython when you meant
>> "runtime" then yes, with the dependency issue, you really do need to
>> make that choice upfront.
>
> I also mean 2.6 vs 2.7 vs 3.4 vs 3.5 vs 3.6, etc.

There are still platform differences too; we regularly get bugs that are only exposed on Anaconda or on Ubuntu's Python or on RHEL's Python or on Python.org's OS X installers, etc. Basically every variation has a chance to introduce a bug of some kind, and if you're around long enough and your code is used enough, you'll run into them on every system. As someone writing that code you have to decide where you draw the line for what you support or not (for instance, you may support Ubuntu/RHEL/Anaconda, but you may decide that any version of CPython running on HPUX is not supported).

--
Donald Stufft

From chris.barker at noaa.gov Wed Nov 2 14:22:54 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 2 Nov 2016 11:22:54 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID:

On Wed, Nov 2, 2016 at 9:57 AM, Donald Stufft wrote:

>> On Nov 2, 2016, at 12:49 PM, Nick Coghlan wrote:
>>
>> I also mean 2.6 vs 2.7 vs 3.4 vs 3.5 vs 3.6, etc.

Of course, but that has nothing to do with the package management system...

> There are still platform differences too; we regularly get bugs that are
> only exposed on Anaconda or on Ubuntu's Python or on RHEL's Python or on
> Python.org's OS X installers, etc.

Yes, that is the challenge -- and one reason folks may be tempted to say we should have ONE package manager -- to reduce the variation -- though I don't think any of us think that's possible (and probably not desirable).

> Basically every variation has a chance to introduce a bug of some kind,
> and if you're around long enough and your code is used enough, you'll run
> into them on every system. As someone writing that code you have to decide
> where you draw the line for what you support or not (for instance, you may
> support Ubuntu/RHEL/Anaconda, but you may decide that any version of
> CPython running on HPUX is not supported).

or you may decide to ONLY support conda -- my use case is a big pile of tangled dependencies (yes, lots o' scientific stuff) that is fairly easy to manage in conda and a freakin' nightmare without it. Oh, and I'm putting it behind a web service, so I need to deploy on locked-down servers....

You CAN get all those dependencies working without conda, but it's a serious pain. Almost impossible on Windows.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
From donald at stufft.io Wed Nov 2 14:31:39 2016
From: donald at stufft.io (Donald Stufft)
Date: Wed, 2 Nov 2016 14:31:39 -0400
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID:

> On Nov 2, 2016, at 2:22 PM, Chris Barker wrote:
>
> or you may decide to ONLY support conda -- my use case is a big pile of
> tangled dependencies (yes, lots o' scientific stuff) that is fairly easy
> to manage in conda and a freakin' nightmare without it.

Sure. Do whatever you want, I don't think anyone here thinks you absolutely must use pip. :) [1]

My point is just that even if you narrow yourself down to CPython 3.6.0 there are still variations that can cause problems, so each project individually ends up needing to decide what they support. Oftentimes "unsupported" variations will still work unless they go out of their way to break it, but not always.

[1] There seems to be some animosity among pip supporters and conda supporters, or at least a perception that there is. I'd just like to say that this isn't really shared (to my knowledge) by the development teams of either project. I think everyone involved thinks folks should use whatever solution best allows them to solve whatever problem they are having.

--
Donald Stufft

From chris.barker at noaa.gov Wed Nov 2 14:39:36 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 2 Nov 2016 11:39:36 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Wed, Nov 2, 2016 at 9:49 AM, Nick Coghlan wrote:

> No, as the post was about the fundamental and irreconcilable
> differences in capabilities, not the incidental ones that can be
> solved if folks choose (or are paid) to put in the necessary design
> and development time.

It's your post, but the external dependency is hardly an incidental issue.

> The post isn't written for beginners deciding which tool to use, it's
> written for intermediate folks that have already chosen one or the
> other for their own needs, and are wondering why the other still
> exists (or, worse, are telling people that have chosen the other tool
> for good reasons that they're wrong, and should switch).

again -- those reasons are very much about external dependency management!

> - you need a system for specifying environmental *constraints* (like
> dynamically linked C libraries and command line applications you
> invoke)
> - you need a system for asking the host environment if it can satisfy
> those constraints

and if it can't -- you're then done -- that's actually the easy part (and happens already at build or run time, yes?): I try to build libgdal, it'll fail if I don't have that huge pile of dependencies installed. I try to run a wheel someone else built -- it'll also fail. It'd be better if this could be hashed out before a compilation or linking error, sure, but that's not going to do a whole lot except make the error messages nicer :-)

> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
> such a system by asking humans (aka "downstream package maintainers")
> to supply the additional information by hand in a platform specific
> format.

if conda is a "platform", then yes. But in that sense pip is a platform, too.
I'll beat this drum again -- if you want to extend pip to solve all (most) of the problems conda solves, then you are re-inventing conda. If someone doesn't like the conda design, and has better ideas, great, but only re-invent the wheel if you really are going to make a better wheel.

However, I haven't thought about it carefully -- maybe it would be possible to have a system that managed everything except python itself. But that would be difficult, and I don't see the point, except to satisfy brain dead IT security folks :-)

>> But the different audiences aren't "data science folks" vs "web
>> developers" -- the different audiences are determined by deployment
>> needs, not domain.
>
> Deployment needs are strongly correlated with domain though, and
> there's a world of difference between the way folks do exploratory
> data analysis and the way production apps are managed in
> Heroku/OpenShift/Cloud Foundry/Lambda/etc.

sigh. not everyone that uses the complex scipy stack is doing "exploratory data analysis" -- a lot of us are building production software, much of it behind web services... and that's what I mean by deployment needs.

> However, if you're specifically interested in web service
> development, then swapping in your own Python runtime rather than just
> using a PaaS provided one is really much lower level than most
> beginners are going to want to be worrying about these days - getting
> opinionated about that kind of thing comes later (if it happens at all).

it's not a matter of opinion, but needs -- if this "beginner" is doing stuff only with pure-python packages, then great -- there are many easy options. But if they need some other stuff -- maybe this beginner needs to work with scientific data sets... then they're dead in the water with a platform that doesn't support what they need.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
I see a lot of folks struggling with tools that don't serve their needs, without awareness that there are better options. And some of those folks want me to support them -- and I do want to support my users.

> I'd just like to say that this isn't really shared (to my knowledge) by the
> development teams of either project. I think everyone involved thinks folks
> should use whatever solution best allows them to solve whatever problem
> they are having.

I agree -- Python is a great community!

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From njs at pobox.com Wed Nov 2 15:28:13 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 2 Nov 2016 12:28:13 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: 

On Nov 2, 2016 9:52 AM, "Nick Coghlan" wrote:
> [...]
> Aside from already needing a Python runtime, the inability to fully
> specify the target environment isn't an inherent design limitation
> though, the solution just looks different at a pip level:
>
> - you need a system for specifying environmental *constraints* (like
> dynamically linked C libraries and command line applications you
> invoke)
> - you need a system for asking the host environment if it can satisfy
> those constraints
>
> Tennessee Leeuwenberg started a draft PEP for that first part last
> year: https://github.com/pypa/interoperability-peps/pull/30/files
>
> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
> such a system by asking humans (aka "downstream package maintainers")
> to supply the additional information by hand in a platform specific
> format.

To be fair, though, it's not yet clear whether such a system is actually possible. AFAIK no one has ever managed to reliably translate between the different languages that Linux distros use for describing environment constraints, never mind handling the places where they're genuinely irreconcilable (e.g. the way different distro openssl packages have incompatible ABIs), plus other operating systems too.

I mean, it would be awesome if someone pulls it off. But it's possible that this is like saying that it's not an inherent design limitation of my bike that it's only suited for terrestrial use, because I could always strap on wings and a rocket...

-n

From donald at stufft.io Wed Nov 2 15:26:28 2016
From: donald at stufft.io (Donald Stufft)
Date: Wed, 2 Nov 2016 15:26:28 -0400
Subject: [Distutils] Released: pip v9.0.0
Message-ID: <7CEFCE24-DFC3-4E9E-B7E6-ABB398853AB3@stufft.io>

I'd like to announce the release of pip v9.0. This release features:

* The 9.x series will be the last pip versions to support Python 2.6.
* Support for Requires-Python (will require additional support in setuptools/PyPI) to allow releasing sdists that will be ignored by specific versions of Python (e.g. foobar 5.x doesn't get downloaded on 2.6).
* Ability to pass platform, Python version, implementation, etc. into ``pip download`` to download wheels for other platforms.
* Add a ``pip check`` command to check the state of installed dependencies.
* Add new formats for ``pip list``, including a new columnar layout and a JSON format for ease of scripting.

For a full list of changes, see https://pip.pypa.io/en/stable/news/ .
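A quick sketch of the new bits in action (the package name here is hypothetical):

    $ pip check                      # report any broken/missing dependencies
    $ pip list --format=columns      # the new columnar layout
    $ pip list --format=json         # machine-readable output for scripting
    $ pip download somepkg --only-binary=:all: \
          --platform win_amd64 --python-version 35 \
          --implementation cp --abi cp35m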
--
Donald Stufft

From guettliml at thomas-guettler.de Wed Nov 2 16:41:52 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Wed, 2 Nov 2016 21:41:52 +0100
Subject: [Distutils] I abandon my idea, Was: Current Python packaging status (from my point of view)
In-Reply-To: <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

On 02.11.2016 at 17:57, Donald Stufft wrote:
>
>> On Nov 2, 2016, at 12:49 PM, Nick Coghlan wrote:
>>
>>> hmm -- I don't think that's the code-writer's job -- it's the deployer's job.
>>> Other than choosing which Python *version* I want to use, I can happily
>>> develop with system Python and pip, and then deploy with conda -- or vice
>>> versa. Indeed, I can develop on Windows and deploy on Linux, or....
>>
>> You still need to decide which versions you're going to test against,
>> and which bug reports you're going to accept as potentially valid
>> feedback (e.g. very few people running upstream community projects
>> will accept "doesn't run on Python 2.5" as a valid bug report any
>> more, and RHEL/CentOS 7, Software Collections, and conda have been
>> around for long enough now that most won't accept "doesn't run on 2.6"
>> either)
>>
>>> though if you meant pypy vs iron python vs cPython when you meant "runtime"
>>> then yes, with the dependency issue, you really do need to make that choice
>>> upfront.
>>
>> I also mean 2.6 vs 2.7 vs 3.4 vs 3.5 vs 3.6, etc
>
> There are still platform differences too, we regularly get bugs that are only exposed on Anaconda or on Ubuntu's Python or on RHEL's Python or on Python.org's OS X installers etc etc. Basically every variation has a chance to introduce a bug of some kind, and if you're around long enough and your software is used enough, you'll run into them on every system. As someone writing that code you have to decide where you draw the line for what you support or not (for instance, you may support Ubuntu/RHEL/Anaconda, but you may decide that any version of CPython running on HPUX is not supported).

I guess you are right. I will abandon my idea.

Regards,
Thomas Güttler

--
http://www.thomas-guettler.de/

From matthew.brett at gmail.com Wed Nov 2 17:02:37 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 2 Nov 2016 14:02:37 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

Hi,

On Wed, Nov 2, 2016 at 11:31 AM, Donald Stufft wrote:
[snip]
> [1] There seems to be some animosity between pip supporters and conda
> supporters, or at least a perception that there is. I'd just like to say that
> this isn't really shared (to my knowledge) by the development teams of
> either project. I think everyone involved thinks folks should use whatever
> solution best allows them to solve whatever problem they are having.

I don't know whether there is animosity, but there is certainly tension. Speaking personally, I care a lot about having the option to prefer pip. There are others in the scientific community who feel we should standardize on conda.
I think this is the cause of Chris' frustration. If we could all use conda instead of pip, then this would make it easier for him in terms of packaging, because he would not have to support pip (Chris, please correct me if I'm wrong).

Although there are clear differences in the audience for pip and conda, there is also a very large overlap. In practice the majority of users could reasonably choose one or the other as their starting point. Of course, one may come to dominate this choice over the other. At the point where enough users become frustrated with the lack of pip wheels, conda will become the default. If pip wheels are widely available, that makes the pressure to use conda less. If we reach such a tipping point, it will become wasteful of developer effort to make both pip wheels and conda packages, the number and quality of binary packages will drop, and one of these package managers will go into decline.

Cheers,

Matthew

From chris.barker at noaa.gov Wed Nov 2 19:08:20 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 2 Nov 2016 16:08:20 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

Hey Matthew,

>> [1] There seems to be some animosity between pip supporters and conda
>> supporters, or at least a perception that there is.
>
> I don't know whether there is animosity, but there is certainly
> tension. Speaking personally, I care a lot about having the option to
> prefer pip.

indeed -- and you have made herculean efforts to make that possible :-)

> There are others in the scientific community who feel we
> should standardize on conda. I think this is the cause of Chris'
> frustration. If we could all use conda instead of pip, then this
> would make it easier for him in terms of packaging, because he would
> not have to support pip (Chris, please correct me if I'm wrong).

yup -- nor would you :-)

My personal frustration comes from my history -- I've been in this community for over 15 years -- and I spent a lot of effort back in the day to make packages available for the Mac (before PyPI, before pip, before wheels, ...). And I found I was constantly re-figuring out how to build the dependent libraries needed, and statically linking everything, etc... It sucked. A lot of work, and not at all fun (for me, anyway -- maybe some folks enjoy that sort of thing).

So I started a project to share that effort, and to build a bit of infrastructure. I also started looking into how to handle dependent libs with pip+wheel -- I got no support whatsoever for that. I got frustrated 'cause it was too hard, and I also felt like I was fighting the tools. I did not get far.

Matthew ended up picking up that effort and really making it work, and has gotten all the core SciPy stuff out there as binary wheels -- really great stuff.

But then I discovered conda -- and while I was resistant at first, I found that it was a much nicer environment to do what I needed to do. It started not because of anything specific about conda, but because someone had already built a bunch of the stuff I needed -- nice!

But the breaking point was when I needed to build a package of my own: py_gd -- it required libgd, which required libpng, libjpeg, libtiff -- some of those come out of the box on a Mac, it's all available from homebrew, and all pretty common on Linux -- but I have users that are on Windows, or on a Mac and can't grok homebrew or macports.
And, frankly, neither can all of my development team. But guess what? libgd wasn't in conda, but everything else I needed was -- this is all pretty common stuff -- other people had solved the problem, and the system supports installing libs, so I could just use them. My work was SO MUCH easier. And especially my users have it so much easier, 'cause I can just give them a conda package. And while that particular example would have been solvable with binary wheels, as things get more complicated, it gets hard or impossible to do.

So if I'm happy with conda -- why the frustration? Some of it is the history, but there are also two things:

1) pip is considered "the way" to handle dependencies -- my users want to use it, and they ask me for help using it, and I don't want to abandon my users -- so more work for me.

2) I see people over and over again starting out with pip -- 'cause that's what you do. Then hitting a wall, then trying Enthought Canopy, then trying Anaconda, then ending up with a tangled mess of multiple systems where who knows which Python "pip" is associated with. This is why "there is only one way to do it" would be nice.

And I'm pretty sure that "wall" will always be there -- things have gotten better with wheels -- between Matthew's efforts and manylinux, most everybody can get the core SciPy stack with pip -- very nice! But not pyHDF, netCDF5, gdal, shapely, ... (to name a few that I need to work with). And these are ugly: which means very hard for end-users to build, and very hard for people to package up into wheels (is it even possible?)

And of course, all is not rosy with conda either -- the conda-forge effort has made phenomenal progress, but it's really hard to manage that huge stack of stuff (I'm writing this while taking a break from conda-forge dependency hell ...). But in a way, I think we'd be better off if there were more focus on conda-forge rather than on the effort to shoehorn pip into solving more of the dependency problem.

And the final frustration -- I think conda is still pretty misunderstood and misrepresented as a solution only (or primarily) for "data scientists" or people doing interactive data exploration, or "scientific programmers", whereas it's actually a pretty good solution to a lot of people's problems.

> Although there are clear differences in the audience for pip and
> conda, there is also a very large overlap. In practice the majority
> of users could reasonably choose one or the other as their starting
> point.

exactly.

> Of course, one may come to dominate this choice over the
> other. At the point where enough users become frustrated with the
> lack of pip wheels, conda will become the default. If pip wheels are
> widely available, that makes the pressure to use conda less. If we
> reach such a tipping point, it will become wasteful of developer effort
> to make both pip wheels and conda packages, the number and quality of
> binary packages will drop, and one of these package managers will go
> into decline.

perhaps so -- but it will be a good while! The endorsement of the "official" community really does keep pip going. And, of course, it works great for a lot of use-cases.

If it were all up to me (which of course it's not) -- I'd say we keep pip / PyPI fully supported for all the stuff it's good at -- pure Python and small/no-dependency extension modules -- and folks can go to conda when/if they need more.

After all, you can use pip from within a conda environment just fine :-)

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From matthew.brett at gmail.com Wed Nov 2 20:02:52 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 2 Nov 2016 17:02:52 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

Hi,

On Wed, Nov 2, 2016 at 4:08 PM, Chris Barker wrote:
> Hey Matthew,
>
>>> [1] There seems to be some animosity between pip supporters and conda
>>> supporters, or at least a perception that there is.
>>
>> I don't know whether there is animosity, but there is certainly
>> tension. Speaking personally, I care a lot about having the option to
>> prefer pip.
>
> indeed -- and you have made herculean efforts to make that possible :-)
>
>> There are others in the scientific community who feel we
>> should standardize on conda. I think this is the cause of Chris'
>> frustration. If we could all use conda instead of pip, then this
>> would make it easier for him in terms of packaging, because he would
>> not have to support pip (Chris, please correct me if I'm wrong).
>
> yup -- nor would you :-)
>
> My personal frustration comes from my history -- I've been in this community
> for over 15 years -- and I spent a lot of effort back in the day to make
> packages available for the Mac (before PyPI, before pip, before wheels, ...).
> And I found I was constantly re-figuring out how to build the dependent
> libraries needed, and statically linking everything, etc... It sucked. A lot
> of work, and not at all fun (for me, anyway -- maybe some folks enjoy that
> sort of thing).
>
> So I started a project to share that effort, and to build a bit of
> infrastructure. I also started looking into how to handle dependent libs
> with pip+wheel -- I got no support whatsoever for that. I got frustrated
> 'cause it was too hard, and I also felt like I was fighting the tools. I did
> not get far.
>
> Matthew ended up picking up that effort and really making it work, and has
> gotten all the core SciPy stuff out there as binary wheels -- really great
> stuff.
>
> But then I discovered conda -- and while I was resistant at first, I found
> that it was a much nicer environment to do what I needed to do. It started
> not because of anything specific about conda, but because someone had
> already built a bunch of the stuff I needed -- nice!
>
> But the breaking point was when I needed to build a package of my own: py_gd
> -- it required libgd, which required libpng, libjpeg, libtiff -- some of
> those come out of the box on a Mac, it's all available from homebrew, and
> all pretty common on Linux -- but I have users that are on Windows, or on a
> Mac and can't grok homebrew or macports.

Anaconda has an overwhelming advantage on Windows, in that Continuum can bear the licensing liabilities enforced by the Intel Fortran compiler, and we cannot. We therefore have no license-compatible Fortran compiler for Python 3.5 - so we can't build scipy, and that's a full stop for the scipy stack. I'm sure you know, but the only practical open-source option is mingw-w64, which does not work with the Microsoft runtime used by Python 3.5 [1].
It appears the only person capable of making mingw-w64 compatible with this runtime is Ray Donnelly, who now works for Continuum, and we haven't yet succeeded in finding the 15K USD or so to pay Continuum for his time.

> But not pyHDF, netCDF5, gdal, shapely, ... (to name a few that I need to
> work with). And these are ugly: which means very hard for end-users to
> build, and very hard for people to package up into wheels (is it even
> possible?)

I'd be surprised if these packages were very hard to build OSX and Linux wheels for. We're already building hard packages like Pillow and Matplotlib and h5py and pytables. If you mean netCDF4 - there are already OSX and Windows wheels for that [2].

Cheers,

Matthew

[1] https://mingwpy.github.io/issues.html#the-vs-14-2015-runtime
[2] https://pypi.python.org/pypi/netCDF4

From donald at stufft.io Wed Nov 2 21:34:59 2016
From: donald at stufft.io (Donald Stufft)
Date: Wed, 2 Nov 2016 21:34:59 -0400
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: <979976C2-30BE-4942-B76F-7FD1AB4ADD42@stufft.io>

> On Nov 2, 2016, at 7:08 PM, Chris Barker wrote:
>
> perhaps so -- but it will be a good while! The endorsement of the "official" community really does keep pip going. And, of course, it works great for a lot of use-cases.

Right, there is some overlap in terms of "I am on Windows/macOS/most Linuxes and I either prefer Anaconda for my Python, or I don't care where it comes from and I don't have any compelling reason NOT to use something different". That can be a pretty big chunk of people, but then there is an area that doesn't overlap, where the differences in decisions aren't really going to be very easy to reconcile.

For instance, the obvious one that favors conda is any time your existing platform isn't providing you something that conda is providing (e.g. a new enough NumPy on CentOS, or binary packages at all on Windows, or what have you). The flip side of that is that pip's ability to just-in-time build binary packages means that if you're deploying to, say, Alpine Linux, like a lot of the folks using Docker are doing, or to FreeBSD or something like that, then, as long as your system is configured, ``pip install`` will just work, without you having to go through the work of standing up an entire conda repository that contains all of the "bare minimum" compiled packages for an entire new architecture. Pip's ability to "slot" into an existing environment also means that it generally just works in any environment you give it; if you want something that will integrate with something installed by apt-get, well, you're going to have a hard time getting that with conda.

Beyond all of that though, there's more to the pip tooling than just what command line program some end user happens to be executing. Part of this is enabling a project author to release a package that can be consumed by conda and Debian and RHEL and Alpine and FreeBSD ports and etc etc. I don't mean that in the sense of "I can run ``pip install`` on them", but in the sense that, by the very nature of our JIT building from source, we need a programmatic standard interface to the build system, so if pip can automatically consume your package then conda-build and such become much easier to write for any individual package.
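As a minimal illustration of that just-in-time building from source as it works today (the package name is hypothetical):

    # fetch an sdist from PyPI and build a binary wheel from it locally
    $ pip wheel --no-deps somepkg

    # or build and install in one step, ignoring any pre-built wheels
    $ pip install --no-binary :all: somepkg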
At the end of the day, sure, it would make some things a bit easier if everyone would just standardize around one thing, but I doubt it will ever happen, because these things really do serve different audiences once you leave the overlap. Much like it'd be great if everyone just standardized on Ubuntu! Or Debian! Or CentOS! Or macOS! Or Windows! Or it'd be nice if everyone picked a single Python version, or a single Python implementation, or a single language at all :) So yeah, diversity makes things more difficult, but it also tends to lead to better software overall :D

> If it were all up to me (which of course it's not) -- I'd say we keep pip / PyPI
> fully supported for all the stuff it's good at -- pure Python and
> small/no-dependency extension modules -- and folks can go to conda when/if
> they need more.
>
> After all, you can use pip from within a conda environment just fine :-)

I think this is pretty much where we're at and where we're likely to stay for the general case as far as what is officially supported. We may gain some stuff to try and integrate better with whatever platform we're on (for instance, the metadata that Nick suggested that would allow people to say "I need libgd" and have conda or apt or whatever provide that), but I don't think we're likely to go much further than that. I know that there is the pynativelib stuff which would cross into that threshold, but one of the things about that proposal is that it doesn't require anything from pip or PyPI or whatever to make it happen. That's "just" some square pegging into a round hole to sort of try and close the gap a little bit, but I doubt it's ever going to be something that even approaches a solution like what conda/apt/etc give.

--
Donald Stufft

From ncoghlan at gmail.com Thu Nov 3 03:57:08 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 17:57:08 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: 

On 3 November 2016 at 04:39, Chris Barker wrote:
> On Wed, Nov 2, 2016 at 9:49 AM, Nick Coghlan wrote:
>> - you need a system for specifying environmental *constraints* (like
>> dynamically linked C libraries and command line applications you
>> invoke)
>> - you need a system for asking the host environment if it can satisfy
>> those constraints
>
> and if it can't -- you're then done -- that's actually the easy part (and
> happens already at build or run time, yes?):

When it comes to the kinds of environmental constraints most Python applications (even scientific ones) impose, conda, dnf/yum, and apt can meet them. If you're consistently running (for example) the Fedora Scientific Lab as your Linux distro of choice, and aren't worried about other platforms, then you don't need and probably don't want conda - your preferred binary dependency management community is the Fedora one, not the conda one, and your preferred app isolation technology will be Linux containers, not conda environments. Within the containers, if you use virtualenv at all, it will be with the system site-packages enabled.
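A concrete sketch of that last pattern (the paths and app name here are hypothetical):

    # inside the container: a venv that can still see the distro-installed packages
    $ python3 -m venv --system-site-packages /opt/app/venv
    $ /opt/app/venv/bin/pip install myapp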
> I try to build libgdal, and it'll fail if I don't have that huge pile of
> dependencies installed.
>
> I try to run a wheel someone else built -- it'll also fail.
>
> It'd be better if this could be caught before it turns into a compilation or
> linking error, sure, but that's not going to do a whole lot except make the
> error messages nicer :-)

This is why fixing this in the general case requires the ability for pip to ask the host platform to install additional components.

>> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
>> such a system by asking humans (aka "downstream package maintainers")
>> to supply the additional information by hand in a platform specific
>> format.
>
> if conda is a "platform", then yes. but in that sense pip is a platform,
> too.

No, it's not, as it still wouldn't have the ability to install external dependencies itself, only the ability to request them from the host platform. You'd still need a plugin interacting with something like conda or PackageKit underneath to actually do the installation (you'd want something like PackageKit rather than raw dnf/yum/apt on Linux, as PackageKit is what powers features like PackageKit-command-not-found, which lets you install CLI commands from the system repos).

> I'll beat this drum again -- if you want to extend pip to solve all (most)
> of the problems conda solves, then you are re-inventing conda.

I don't. I want publishers of Python packages to be able to express their external dependencies in a platform independent way such that the following processes can be almost completely and reliably automated:

* converting PyPI packages to conda packages
* converting PyPI packages to RPMs
* converting PyPI packages to deb packages
* requesting external dependencies from the host system (whether that's conda, a Linux distro, or something else) when installing Python packages with pip

> If someone
> doesn't like the conda design, and has better ideas, great, but only
> re-invent the wheel if you really are going to make a better wheel.

It wouldn't be conda, because it still wouldn't give you a way to publish Python runtimes and other external dependencies - only a way to specify what you need at install time, so that pip can ask the host environment to provide it (and fail noisily if it says it can't oblige), and so format converters like "conda skeleton" and "pyp2rpm" can automatically emit the correct external dependencies.

> However, I haven't thought about it carefully -- maybe it would be possible
> to have a system that managed everything except Python itself. But that
> would be difficult, and I don't see the point, except to satisfy brain dead
> IT security folks :-)

The IT security folks aren't brain dead, they have obligations to manage organisational risk, and that means keeping some level of control over what's being run with privileged access to network resources (and in most organisations, running "inside the firewall" still implicitly grants some level of privileged information access).

Beyond that "because IT said so" case, we'd kinda like Fedora system components to be running in the Fedora system Python (and ditto for other Linux distributions), while folks running in Heroku/OpenShift/Lambda should really be using the platform provided runtimes so they benefit from automated security updates.

More esoterically, Python interpreter implementers and other folks need a way to obtain Python packages that isn't contingent on the interpreter being published in any particular format, since it may not be published at all.
Victor Stinner's "performance" benchmark suite is an example of that, since it uses pip and virtualenv to compare the performance of different Python interpreters across a common set of benchmarks: https://github.com/python/performance >> > But the different audiences aren't "data science folks" vs "web >> > developers" >> > -- the different audiences are determined by deployment needs, not >> > domain. >> >> Deployment needs are strongly correlated with domain though, and >> there's a world of difference between the way folks do exploratory >> data analysis and the way production apps are managed in >> Heroku/OpenShift/Cloud Foundry/Lambda/etc. > > sigh. not everyone that uses the complex scipy stack is doing "exploratory > data analysis" -- a lot of us are building production software, much of it > behind web services... > > and that's what I mean by deployment needs. Sure, and folks building Linux desktop apps aren't web developers either. Describing them that way is just shorthand caricatures that each attempt to encompass broad swathes of a complex ecosystem. It makes a *lot* of sense for folks using conda in some contexts (e.g. exploratory data analysis) to expand that to using conda everywhere. It also makes a lot of sense for folks that don't have a preference yet to start with conda for their own personal use. However, it *doesn't* make sense for folks to introduce conda into their infrastructure just for the sake of using conda. Instead, for folks in the first category and the last category, the key question they have to ask themselves is "Do I want to let the conda packaging community determine which Python runtimes are going to be available for me to use?". In the first case, there's a strong consistency argument in favour of doing so. However, in the last case, it really depends greatly on what you're doing, what your team prefers, and what kind of infrastructure setup you have. >> However, if you're specifically interested in web service >> development, then swapping in your own Python runtime rather than just >> using a PaaS provided one is really much lower level than most >> beginners are going to want to be worrying about these days - getting >> opinionated about that kind of thing comes later (if it happens at >> all). > > it's not a matter of opinion, but needs -- if this "beginner" is doing stuff > only with pure-python packages, then great -- there are many easy options. > But if they need some other stuff - maybe this beginner needs to work with > scientific data sets.. then they're dead in the water with a Platform that > doesn't support what they need. Yep, and if that's the case, hopefully someone directs them towards a platform with that feature set, like conda, or Scientific Linux or the Fedora Scientific Lab, rather than a vanilla Python distro. Cheers, Nick. 
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Nov 3 04:40:53 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Nov 2016 18:40:53 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: 

On 3 November 2016 at 05:28, Nathaniel Smith wrote:
> On Nov 2, 2016 9:52 AM, "Nick Coghlan" wrote:
>> Tennessee Leeuwenberg started a draft PEP for that first part last
>> year: https://github.com/pypa/interoperability-peps/pull/30/files
>>
>> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
>> such a system by asking humans (aka "downstream package maintainers")
>> to supply the additional information by hand in a platform specific
>> format.
>
> To be fair, though, it's not yet clear whether such a system is actually
> possible. AFAIK no one has ever managed to reliably translate between the
> different languages that Linux distros use for describing environment
> constraints, never mind handling the places where they're genuinely
> irreconcilable (e.g. the way different distro openssl packages have
> incompatible ABIs), plus other operating systems too.

The main problem with mapping between Debian/RPM/conda etc in the general case is that the dependencies are generally expressed in terms of *names* rather than runtime actions, and you also encounter problems with different dependency management ecosystems splitting up (or aggregating!) different upstream components differently. This means you end up with a situation that's more like a lossy transcoding between MP3 and OGG Vorbis or vice-versa than it is a pristine encoding to either from a losslessly encoded FLAC or WAV file.

The approach Tennessee and Robert Collins came up with (which still sounds sensible to me) is that instead of dependencies on particular external components, what we want to be able to express is instead a range of *actions* that the software is going to take:

- "I am going to dynamically load a library named <library name>"
- "I am going to execute a subprocess for a command named <command name>"

And rather than expecting folks to figure that stuff out for themselves, you'd use tools like auditwheel and strace to find ways to generate it and/or validate it.

> I mean, it would be awesome if someone pulls it off. But it's possible that
> this is like saying that it's not an inherent design limitation of my bike
> that it's only suited for terrestrial use, because I could always strap on
> wings and a rocket...

When it comes to things that stand in the way of fully automating the PyPI -> RPM pipeline, there are still a lot of *much* lower hanging fruit out there than this. However, the immediate incentives of folks working on package management line up in such a way that I'm reasonably confident that the lack of these capabilities is a matter of economic incentives rather than insurmountable technical barriers - it's a hard, tedious problem to automate, and manual repackaging into a platform specific format has long been a pretty easy thing for commercial open source redistributors to sell.
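To make the shape of those action declarations a bit more concrete, here is a purely hypothetical sketch (the syntax and every key name below are invented for illustration -- they're not from the draft PEP):

    # hypothetical project metadata, expressing actions rather than package names
    [external-actions]
    dynamic-load = ["gd", "png16", "jpeg"]   # "I am going to dlopen a library named ..."
    invoke-command = ["gdal-config"]         # "I am going to run a command named ..."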
Cheers,

Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From njs at pobox.com Thu Nov 3 05:10:18 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 3 Nov 2016 02:10:18 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID: 

On Nov 3, 2016 1:40 AM, "Nick Coghlan" wrote:
>
> On 3 November 2016 at 05:28, Nathaniel Smith wrote:
> > On Nov 2, 2016 9:52 AM, "Nick Coghlan" wrote:
> >> Tennessee Leeuwenberg started a draft PEP for that first part last
> >> year: https://github.com/pypa/interoperability-peps/pull/30/files
> >>
> >> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
> >> such a system by asking humans (aka "downstream package maintainers")
> >> to supply the additional information by hand in a platform specific
> >> format.
> >
> > To be fair, though, it's not yet clear whether such a system is actually
> > possible. AFAIK no one has ever managed to reliably translate between the
> > different languages that Linux distros use for describing environment
> > constraints, never mind handling the places where they're genuinely
> > irreconcilable (e.g. the way different distro openssl packages have
> > incompatible ABIs), plus other operating systems too.
>
> The main problem with mapping between Debian/RPM/conda etc in the
> general case is that the dependencies are generally expressed in terms
> of *names* rather than runtime actions, and you also encounter
> problems with different dependency management ecosystems splitting up
> (or aggregating!) different upstream components differently. This
> means you end up with a situation that's more like a lossy transcoding
> between MP3 and OGG Vorbis or vice-versa than it is a pristine
> encoding to either from a losslessly encoded FLAC or WAV file.
>
> The approach Tennessee and Robert Collins came up with (which still
> sounds sensible to me) is that instead of dependencies on particular
> external components, what we want to be able to express is instead a
> range of *actions* that the software is going to take:
>
> - "I am going to dynamically load a library named <library name>"
> - "I am going to execute a subprocess for a command named <command name>"

And then it segfaults, because it turns out that your library named <foo> is not ABI compatible with my library named <foo>. Or it would have been if you had the right version, but distros don't agree on how to express version numbers either. (Just think about epochs.) Or we're on Windows, so it's interesting to know that we need a library named <foo>, but what are we supposed to do with that information exactly?

Again, I don't want to just be throwing around stop energy -- if people want to tackle these problems then I wish them luck. But I don't think we should be hand waving this as a basically solved problem that just needs a bit of coding, because that also can create stop energy that blocks an honest evaluation of alternative approaches.

> And rather than expecting folks to figure that stuff out for
> themselves, you'd use tools like auditwheel and strace to find ways to
> generate it and/or validate it.
>
> > I mean, it would be awesome if someone pulls it off. But it's possible that
> > this is like saying that it's not an inherent design limitation of my bike
> > that it's only suited for terrestrial use, because I could always strap on
> > wings and a rocket...
>
> When it comes to things that stand in the way of fully automating the
> PyPI -> RPM pipeline, there are still a lot of *much* lower hanging
> fruit out there than this. However, the immediate incentives of folks
> working on package management line up in such a way that I'm
> reasonably confident that the lack of these capabilities is a matter
> of economic incentives rather than insurmountable technical barriers -
> it's a hard, tedious problem to automate, and manual repackaging into
> a platform specific format has long been a pretty easy thing for
> commercial open source redistributors to sell.

The idea does seem much more plausible as a form of metadata hints attached to sdists that are used as inputs to a semi-automated-but-human-supervised distro package building system. What I'm skeptical about is specifically the idea that someday I'll be able to build a wheel, distribute it to end users, and then pip will do some negotiation with dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and that this will be a viable alternative to conda.

-n

From p.f.moore at gmail.com Thu Nov 3 05:29:30 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 3 Nov 2016 09:29:30 +0000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

On 3 November 2016 at 00:02, Matthew Brett wrote:
> Anaconda has an overwhelming advantage on Windows, in that Continuum
> can bear the licensing liabilities enforced by the Intel Fortran
> compiler, and we cannot. We therefore have no license-compatible
> Fortran compiler for Python 3.5 - so we can't build scipy, and that's
> a full stop for the scipy stack. I'm sure you know, but the only
> practical open-source option is mingw-w64, which does not work with the
> Microsoft runtime used by Python 3.5 [1]. It appears the only person
> capable of making mingw-w64 compatible with this runtime is Ray
> Donnelly, who now works for Continuum, and we haven't yet succeeded in
> finding the 15K USD or so to pay Continuum for his time.

Is this something the PSF could assist with? Either in terms of funding work to get mingw-w64 working, or in terms of funding (or negotiating) Intel Fortran licenses for the community?

Paul

From p.f.moore at gmail.com Thu Nov 3 05:34:25 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 3 Nov 2016 09:34:25 +0000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

On 2 November 2016 at 23:08, Chris Barker wrote:
> After all, you can use pip from within a conda environment just fine :-)

In my experience (some time ago) doing so ended up with the "tangled mess of multiple systems" you mentioned. Conda can't uninstall something I installed with pip, and I have no obvious way of introspecting "which installer did I use to install X?" And if I pip uninstall something installed by conda, I break things. (Apologies if I misrepresent, it was a while ago, but I know I ended up deciding that I needed to keep a log of every package I installed, and what I installed it with, if I wanted to keep things straight...)

Has that improved?
I'm not trying to complain, I don't have any particular expectation that conda "has to" support using pip on a conda installation. But I do want to make sure we're clear on what works and what doesn't. Paul From ncoghlan at gmail.com Thu Nov 3 06:39:32 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Nov 2016 20:39:32 +1000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID: On 3 November 2016 at 19:10, Nathaniel Smith wrote: > On Nov 3, 2016 1:40 AM, "Nick Coghlan" wrote: >> The approach Tennessee and Robert Collins came up with (which still >> sounds sensible to me) is that instead of dependencies on particular >> external components, what we want to be able express is instead a >> range of *actions* that the software is going to take: >> >> - "I am going to dynamically load a library named " >> - "I am going to execute a subprocess for a command named " > > And then it segfaults because it turns out that your library named is > not abi compatible with my library named . Or it would have been if you > had the right version, but distros don't agree on how to express version > numbers either. (Just think about epochs.) Or we're on Windows, so it's > interesting to know that we need a library named , but what are we > supposed to do with that information exactly? I don't think there's much chance of any of this ever working on Windows - conda will rule there, and rightly so. Mac OS X seems likely to go the same way, although there's an outside chance brew may pick up some of the otherwise Linux-only capabilities if they prove successful. Linux distros have larger package management communities than conda though, and excellent integration testing infrastructure on the commercial side of things, so we "just" need to get some of their software integration pipelines arranged in better configurations, and responding to the right software publication events :) > Again, I don't want to just be throwing around stop energy -- if people want > to tackle these problems then I wish them luck. But I don't think we should > be hand waving this as a basically solved problem that just needs a bit of > coding, because that also can create stop energy that blocks an honest > evaluation of alternative approaches. There's a reason I'm not volunteering to work on this in my own time, and am instead letting the Fedora Modularity and Python maintenance folks decide when it's a major blocker to further process improvement efforts on their side of things, while I work on other non-Python-specific aspects of Red Hat's software supply chain management. What I'm mainly reacting to here is Chris's apparent position that he sees the entire effort as pointless. It's not pointless, just hard, since it involves working towards process improvements that span *multiple* open source communities, rather than being able to work with just the Python community or just the conda community. Even if automation can only cover 80-90% of legacy cases and 95% of future cases (numbers plucked out of thin air), that's still a huge improvement since folks generally aren't adding *new* huge-and-gnarly C/C++/FORTRAN dependencies to the mix of things pip users need to worry about, while Go and Rust have been paying much closer attention to supporting distributed automated builds from the start of their existence. 
>> When it comes to things that stand in the way of fully automating the >> PyPI -> RPM pipeline, there are still a lot of *much* lower hanging >> fruit out there than this. However, the immediate incentives of folks >> working on package management line up in such a way that I'm >> reasonably confident that the lack of these capabilities is a matter >> of economic incentives rather than insurmountable technical barriers - >> it's a hard, tedious problem to automate, and manual repackaging into >> a platform specific format has long been a pretty easy thing for >> commercial open source redistributors to sell. > > The idea does seem much more plausible as a form of metadata hints attached > to sdists that are used as inputs to a semi-automated-but-human-supervised > distro package building system. What I'm skeptical about is specifically the > idea that someday I'll be able to build a wheel, distribute it to end users, > and then pip will do some negotiation with > dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and > that this will be an viable alternative to conda. Not in the general case it won't, no, but it should be entirely feasible for distro-specific Linux wheels and manylinux1 wheels to be as user-friendly as conda in cases where folks on the distro side are willing to put in the effort to make the integration work properly. It may also be feasible to define an ABI for "linuxconda" that's broader than the manylinux1 ABI, so folks can publish conda wheels direct to PyPI, and then pip could define a way for distros to indicate their ABI is "conda compatible" somehow. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Thu Nov 3 06:50:27 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 3 Nov 2016 10:50:27 +0000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID: On 3 November 2016 at 10:39, Nick Coghlan wrote: > It may also be feasible to define an ABI for "linuxconda" that's > broader than the manylinux1 ABI, so folks can publish conda wheels > direct to PyPI, and then pip could define a way for distros to > indicate their ABI is "conda compatible" somehow. Even on the "hard" cases like Windows, it may be possible to define a standard approach that goes something along the lines of defining a standard location that (somehow) gets added to the load path, and interested parties provide DLLs for external dependencies, which the users can then manually place in those locations. Or there's the option that's been mentioned before, but never (to my knowledge) developed into a complete proposal, for packaging up external dependencies as wheels. In some ways, Windows is actually an *easier* platform in this regard, as it's much more consistent in what it does provide - the CRT, and nothing else, basically. So all of the rest of the "external dependencies" need to be shipped, which is a bad thing, but there's no combinatorial explosion of system dependencies to worry about, which is good (as long as the "external dependencies" ecosystem maintains internal consistency). Paul From barry at python.org Thu Nov 3 13:17:03 2016 From: barry at python.org (Barry Warsaw) Date: Thu, 3 Nov 2016 13:17:03 -0400 Subject: [Distutils] Travis-CI is not open source. 
Was: Current Python packaging status (from my point of view)
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
Message-ID: <20161103131703.2d3cb4b1@anarchist>

On Nov 03, 2016, at 12:54 AM, Nick Coghlan wrote:

>This is also an area where I'm fine with recommending freemium
>solutions if they're the lowest barrier to entry option for new users,
>and "Use GitHub + Travis CI" qualifies on that front.

I won't rehash the GitHub/GitLab debate, but in some of my projects (hosted on GH) I've had to ditch Travis because of limitations on that platform. Specifically, I needed to run various tests on an exact specification of various Ubuntu platforms, e.g. does X run on an up-to-date Ubuntu Y.Z?

I originally used Docker for this, but our projects had additional constraints, such as needing to bind-mount, which aren't supported on the Travis+Docker platform. So we ended up ditching the Travis integration and hooking our test suite into the Ubuntu autopkgtest system (which is nearly identical to the Debian autopkgtest system but runs on Ubuntu infrastructure).

Python may not be affected by similar constraints, but it is worth keeping in mind. Travis isn't a perfect technical solution for all projects, but it may be good enough for Python.

Cheers,
-Barry

From matthew.brett at gmail.com Thu Nov 3 13:56:47 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 3 Nov 2016 10:56:47 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID: 

Hi,

On Thu, Nov 3, 2016 at 2:29 AM, Paul Moore wrote:
> On 3 November 2016 at 00:02, Matthew Brett wrote:
>> Anaconda has an overwhelming advantage on Windows, in that Continuum
>> can bear the licensing liabilities enforced by the Intel Fortran
>> compiler, and we cannot. We therefore have no license-compatible
>> Fortran compiler for Python 3.5 - so we can't build scipy, and that's
>> a full stop for the scipy stack. I'm sure you know, but the only
>> practical open-source option is mingw-w64, which does not work with the
>> Microsoft runtime used by Python 3.5 [1]. It appears the only person
>> capable of making mingw-w64 compatible with this runtime is Ray
>> Donnelly, who now works for Continuum, and we haven't yet succeeded in
>> finding the 15K USD or so to pay Continuum for his time.
>
> Is this something the PSF could assist with? Either in terms of
> funding work to get mingw-w64 working, or in terms of funding (or
> negotiating) Intel Fortran licenses for the community?

I'm afraid paying for the Intel Fortran licenses won't help, because the problem is in the Intel Fortran runtime license [1].

But - it would be a huge help if the PSF could help with funding to get mingw-w64 working. This is the crucial blocker for progress on binary wheels on Windows.

Nathaniel - do you have any recent news on progress?
Cheers, Matthew [1] https://software.intel.com/en-us/comment/1881526 From glyph at twistedmatrix.com Thu Nov 3 14:08:31 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Thu, 3 Nov 2016 11:08:31 -0700 Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source) In-Reply-To: <20161103131703.2d3cb4b1@anarchist> References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> Message-ID: > On Nov 3, 2016, at 10:17 AM, Barry Warsaw wrote: > > On Nov 03, 2016, at 12:54 AM, Nick Coghlan wrote: > >> This is also an area where I'm fine with recommending freemium >> solutions if they're the lowest barrier to entry option for new users, >> and "Use GitHub + Travis CI" qualifies on that front. > > I won't rehash the GitHub/GitLab debate, but in some of my projects (hosted on > GH) I've had to ditch Travis because of limitations on that platform. > Specifically, I needed to run various tests on an exact specification of > various Ubuntu platforms, e.g. does X run on an up-to-date Ubuntu Y.Z? > > I originally used Docker for this, but our projects had additional > constraints, such as needing to bind-mount, which aren't supported on the > Travis+Docker platform. So we ended up ditching the Travis integration and > hooking our test suite into the Ubuntu autopkgtest system (which is nearly > identical to the Debian autopkgtest system but runs on Ubuntu infrastructure). > > Python may not be affected by similar constraints, but it is worth keeping in > mind. Travis isn't a perfect technical solution for all projects, but it may > be good enough for Python. I think phrasing this in terms of "perfect" and "good enough" presents a highly misleading framing. Examined in this fashion, of course we may reluctantly use the "good enough" option, but don't we want the best option? A better way to look at it is cost vs. benefit. How much does it cost you in terms of time and money to run and administer the full spectrum of "real" operating systems X.Z that you wish to support? How much does it cost in terms of waiting for all that extra build infrastructure to run all the time? How much additional confidence and assurance that it will work does that buy you, over the confidence of passing tests within a docker container? Is that additional confidence worth the investment of resources? Of course, volunteer-driven projects are not concerned directly with top-level management allocation of ostensibly fungible resources, and so a hard "costly" solution that someone is interested in and committed to is far less expensive than a "cheap" solution that everyone finds boring, so we have to take that into account as well. As it happens, Twisted has a massive investment in existing Buildbot CI infrastructure _as well as_ Travis and Appveyor. Travis and Appveyor address something that our CI can't, which is allowing unauthenticated builds from randos issuing their first pull requests. This gives contributors much faster feedback which is adequate for the majority of changes. However, many of our ancillary projects, which do not have as many platform-sensitive components, are built using Travis only, and that's a very good compromise for them. It has allowed us to maintain a much larger and more diverse ecosystem with a much smaller team than we used to be able to. 
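As a sketch of how small that service-specific layer typically is for a pure-Python project (the project layout and test runner here are assumptions):

    # .travis.yml -- the vendor-specific part is mostly this bit of YAML
    language: python
    python:
      - "2.7"
      - "3.5"
    install:
      - pip install -e .
      - pip install pytest
    script:
      - pytest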
In the future, we may have to move to a different CI service, but I can tell you for sure that 90% of the work involved in getting builds to run on Travis is transferable to any platform that can run a shell script. There's a bit of YAML configuration we would need to replicate, and we might have to do some fancy dancing with Docker to get other ancillary services run on the backend in some other way, but I would not worry about vendor lock-in at all for this sort of service. Probably, the amount of time and energy on system maintenance that Travis saves us in a given week is enough to balance out all the possible future migration work.

-glyph

From barry at python.org Thu Nov 3 14:27:56 2016
From: barry at python.org (Barry Warsaw)
Date: Thu, 3 Nov 2016 14:27:56 -0400
Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)
In-Reply-To: 
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist>
Message-ID: <20161103142756.2acdf468@anarchist>

On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:

>I think phrasing this in terms of "perfect" and "good enough" presents a
>highly misleading framing. Examined in this fashion, of course we may
>reluctantly use the "good enough" option, but don't we want the best option?

What are the criteria for "best"?

I'm not saying don't use Travis, I'm just trying to express that there are technical limitations, which is almost definitely true of any CI infrastructure. If those limitations don't affect your project, great, go for it!

Cheers,
-Barry

From njs at pobox.com Thu Nov 3 16:07:56 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 3 Nov 2016 13:07:56 -0700
Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)
In-Reply-To: <20161103142756.2acdf468@anarchist>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> <20161103142756.2acdf468@anarchist>
Message-ID: 

I think we're drifting pretty far off topic here... IIRC the original discussion was about whether the travis-ci infrastructure could be suborned to provide an sdist->wheel autobuilding service for PyPI. (Answer: maybe, though it would be pretty awkward, and no one seems to be jumping up to make it happen.)

On Nov 3, 2016 11:28 AM, "Barry Warsaw" wrote:
> On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:
>
> >I think phrasing this in terms of "perfect" and "good enough" presents a
> >highly misleading framing. Examined in this fashion, of course we may
> >reluctantly use the "good enough" option, but don't we want the best
> option?
>
> What are the criteria for "best"?
>
> I'm not saying don't use Travis, I'm just trying to express that there are
> technical limitations, which is almost definitely true of any CI
> infrastructure. If those limitations don't affect your project, great, go for it!
From barry at python.org  Thu Nov  3 14:27:56 2016
From: barry at python.org (Barry Warsaw)
Date: Thu, 3 Nov 2016 14:27:56 -0400
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
Message-ID: <20161103142756.2acdf468@anarchist>

On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:

>I think phrasing this in terms of "perfect" and "good enough" presents a
>highly misleading framing. Examined in this fashion, of course we may
>reluctantly use the "good enough" option, but don't we want the best option?

What are the criteria for "best"?

I'm not saying don't use Travis, I'm just trying to express that there are
technical limitations, which is almost definitely true of any CI
infrastructure.  If those limitations don't affect your project, great, go
for it!

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: 

From njs at pobox.com  Thu Nov  3 16:07:56 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 3 Nov 2016 13:07:56 -0700
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To: <20161103142756.2acdf468@anarchist>
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
 <20161103142756.2acdf468@anarchist>
Message-ID:

I think we're drifting pretty far off topic here... IIRC the original
discussion was about whether the travis-ci infrastructure could be suborned
to provide an sdist->wheel autobuilding service for pypi. (Answer: maybe,
though it would be pretty awkward, and no one seems to be jumping up to
make it happen.)

On Nov 3, 2016 11:28 AM, "Barry Warsaw" wrote:

> On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:
>
> >I think phrasing this in terms of "perfect" and "good enough" presents a
> >highly misleading framing. Examined in this fashion, of course we may
> >reluctantly use the "good enough" option, but don't we want the best
> option?
>
> What are the criteria for "best"?
>
> I'm not saying don't use Travis, I'm just trying to express that there are
> technical limitations, which is almost definitely true of any CI
> infrastructure.  If those limitations don't affect your project, great,
> go for it!
>
> Cheers,
> -Barry
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alex.gronholm at nextday.fi  Thu Nov  3 16:56:51 2016
From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=)
Date: Thu, 3 Nov 2016 22:56:51 +0200
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
 <20161103142756.2acdf468@anarchist>
Message-ID:

I don't know if it has been mentioned before, but Travis already provides a
way to automatically package and upload sdists and wheels to PyPI:
https://docs.travis-ci.com/user/deployment/pypi/

I've been using it myself in many projects and it has worked quite well.
Granted, I haven't had to deal with binary wheels yet, but I think Travis
should be able to handle creation of macOS and manylinux1 wheels.
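Per the docs linked above, the deployment section of a .travis.yml looks
roughly like the sketch below (the user name and secure value are
placeholders; the encrypted password comes from `travis encrypt`). Uploads
can be restricted to tagged builds so ordinary CI runs don't publish
anything:

```
deploy:
  provider: pypi
  user: example-user
  password:
    secure: "<output of travis encrypt>"
  distributions: "sdist bdist_wheel"   # build both an sdist and a wheel
  on:
    tags: true                         # only publish tagged releases
```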
03.11.2016, 22:07, Nathaniel Smith wrote:
> I think we're drifting pretty far off topic here... IIRC the original
> discussion was about whether the travis-ci infrastructure could be
> suborned to provide an sdist->wheel autobuilding service for pypi.
> (Answer: maybe, though it would be pretty awkward, and no one seems to
> be jumping up to make it happen.)
>
> On Nov 3, 2016 11:28 AM, "Barry Warsaw" wrote:
>
>     On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:
>
>     >I think phrasing this in terms of "perfect" and "good enough"
>     >presents a highly misleading framing. Examined in this fashion, of
>     >course we may reluctantly use the "good enough" option, but don't
>     >we want the best option?
>
>     What are the criteria for "best"?
>
>     I'm not saying don't use Travis, I'm just trying to express that
>     there are technical limitations, which is almost definitely true of
>     any CI infrastructure.  If those limitations don't affect your
>     project, great, go for it!
>
>     Cheers,
>     -Barry
>
>     _______________________________________________
>     Distutils-SIG maillist  -  Distutils-SIG at python.org
>     https://mail.python.org/mailman/listinfo/distutils-sig
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 17:30:14 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 14:30:14 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID:

On Thu, Nov 3, 2016 at 2:34 AM, Paul Moore wrote:

> On 2 November 2016 at 23:08, Chris Barker wrote:
>> After all, you can use pip from within a conda environment just fine :-)
>
> In my experience (some time ago) doing so ended up with the "tangled
> mess of multiple systems" you mentioned.

Well, yes, my experience too, as of two years ago -- which is why I've
taken the time to make conda packages of all the pure python libs I need
(and now that there is conda-forge, it's not a huge lift).

> Conda can't uninstall
> something I installed with pip,

Sure can't! -- though that's not too much of a barrier -- it'll give you
an error, and then you use pip to uninstall it.

> and I have no obvious way of
> introspecting "which installer did I use to install X?" And if I pip
> uninstall something installed by conda, I break things. (Apologies if
> I misrepresent, it was a while ago, but I know I ended up deciding
> that I needed to keep a log of every package I installed and what I
> installed it with if I wanted to keep things straight...)
>
> Has that improved?

Yes, it has -- and lots of folks seem to have it work fine for them,
though I'd say in more of a one-way street approach:

- create a conda environment
- install all the conda packages you need
- install all the pip packages you need.

You're done. And most conda packages of python packages are built
"properly" with pip-compatible metadata, so a pip install won't get
confused and think it needs to install a dependency that is already there.

Then you can export your environment to a yaml file that keeps track of
which packages are conda, and which pip, and you (and your users) can
re-create that environment reliably -- even on a different platform.
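A minimal sketch of that one-way workflow (the environment and package
names are only examples, not from the thread):

```
conda create -n myenv python=3.5     # 1. create a conda environment
source activate myenv
conda install numpy lxml             # 2. conda packages first
pip install some-pure-python-dep     # 3. then anything conda doesn't package
conda env export > environment.yml   # conda and pip packages recorded separately
conda env create -f environment.yml  # re-create the environment elsewhere
```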
> I'm not trying to complain, I don't have any particular expectation
> that conda "has to" support using pip on a conda installation. But I
> do want to make sure we're clear on what works and what doesn't.

I'm not the expert here -- I honestly do try to avoid mixing pip and conda
-- but support for it has gotten a lot better.

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 17:44:04 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 14:44:04 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Thu, Nov 3, 2016 at 3:39 AM, Nick Coghlan wrote:

> I don't think there's much chance of any of this ever working on
> Windows - conda will rule there, and rightly so. Mac OS X seems likely
> to go the same way, although there's an outside chance brew may pick
> up some of the otherwise Linux-only capabilities if they prove
> successful.

This is a really good point then -- we're building a "platform
independent" system that, oh, actually is a "few different Linux vendors"
system. Which is a perfectly worthy goal, but it was not at all the goal
of conda.

I wasn't there, but I think conda started with the goal of supporting
three core platforms (that are pretty different) with the same UI, and as
much of the same code as possible. And if you want to do that -- it really
makes a lot of sense to control as much of the system as possible.

> What I'm mainly reacting to here is Chris's apparent position that he
> sees the entire effort as pointless.

I don't believe I actually ever said that -- though I can see why you'd
think so. But for the record, I don't see it as pointless.

> It's not pointless, just hard,

That I do see -- I see it as hard, so hard that it's likely to hit road
blocks, so we could make better progress with other solutions if that's
where people put their energies. And you said yourself earlier in this
post that:

"I don't think there's much chance of any of this ever working on Windows"

One of the "points" for me is to support Windows, OS-X and Linux -- so in
that sense -- I guess it is pointless ;-)

By the way, I was quite disappointed when I discovered that conda has done
literally nothing to make building cross-platform -- I understand why, but
nonetheless, building can be a real pain. But it's also nice that it has
made a nice clean distinction between package management and building, a
distinction that is all too blurred in the distutils/setuptools/pip/wheel
stack (which, I know, is slowly being untangled...)

> Not in the general case it won't, no, but it should be entirely
> feasible for distro-specific Linux wheels and manylinux1 wheels to be
> as user-friendly as conda in cases where folks on the distro side are
> willing to put in the effort to make the integration work properly.

I agree -- though I think the work up front by the distributors will be
more.

> It may also be feasible to define an ABI for "linuxconda" that's
> broader than the manylinux1 ABI, so folks can publish conda wheels
> direct to PyPI, and then pip could define a way for distros to
> indicate their ABI is "conda compatible" somehow.

Hmm -- now THAT's an interesting idea...

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 17:48:38 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 14:48:38 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore wrote:

> Even on the "hard" cases like Windows, it may be possible to define a
> standard approach that goes something along the lines of defining a
> standard location that (somehow) gets added to the load path, and
> interested parties provide DLLs for external dependencies, which the
> users can then manually place in those locations.

That's pretty much what conda is :-) though it adds the ability to handle
multiple environments, and tools so you don't have to manually place
anything. It would probably be feasible to make a
conda-for-everything-but-python-itself. I'm just not sure that would buy
you much.

-CHB

> Or there's the option that's been mentioned before, but never (to my
> knowledge) developed into a complete proposal, for packaging up
> external dependencies as wheels.

Folks were working on this at Pycon last spring -- any progress?

> In some ways, Windows is actually an *easier* platform in this regard,
> as it's much more consistent in what it does provide - the CRT, and
> nothing else, basically. So all of the rest of the "external
> dependencies" need to be shipped, which is a bad thing, but there's no
> combinatorial explosion of system dependencies to worry about, which
> is good

and pretty much what manylinux is about, yes?

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 17:55:22 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 14:55:22 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID:

On Thu, Nov 3, 2016 at 10:56 AM, Matthew Brett wrote:

> But - it would be a huge help if the PSF could help with funding to
> get mingw-w64 working. This is the crucial blocker for progress on
> binary wheels on Windows.

For what it's worth, this is a blocker for many of us using Win64 at all!
As far as I know, conda has not solved this one -- maybe for Continuum's
paying customers? (or for binaries that Continuum builds, but not stuff I
need to build myself)

If it's just money we need, I'd hope we could scrape it together somehow!

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Thu Nov  3 18:02:56 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 3 Nov 2016 22:02:56 +0000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On 3 November 2016 at 21:48, Chris Barker wrote:
> On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore wrote:
>
>> Even on the "hard" cases like Windows, it may be possible to define a
>> standard approach that goes something along the lines of defining a
>> standard location that (somehow) gets added to the load path, and
>> interested parties provide DLLs for external dependencies, which the
>> users can then manually place in those locations.
>
> That's pretty much what conda is :-)

Well, it doesn't feel like that. Maybe we're not understanding each
other. Am I able to take a non-conda installation of Python, use conda
to install *just* a set of non-Python DLLs (libxml, libyaml, ...) and
then use pip to install wheels for lxml, pyyaml, etc? I understand
that there currently isn't a way to *build* an lxml wheel that links
to a conda-installed libxml, but that's not the point I'm making - if
conda provided a way to manage external DLLs only, then it would be
possible in theory to contribute a setup.py fix to a project like lxml
that detected and linked to a conda-installed libxml. That single
source could then be used to build *both* wheels and conda packages,
avoiding the current situation where effort on getting lxml to build
is duplicated, once for conda and once for wheel.

> though it adds the ability to handle multiple environments, and tools so you
> don't have to manually place anything.

Well, there are other tools (virtualenv, venv) to do this as well, so
again conda is to an extent offering a "buy into our infrastructure or
you're on your own" proposition. That's absolutely the right thing for
people wanting an integrated "just get on with it" solution, so don't
assume this is a criticism - just an explanation of why conda doesn't
suit *my* needs.

> It would probably be feasible to make a
> conda-for-everything-but-python-itself. I'm just not sure that would buy you
> much.

It would buy *me* flexibility to use python.org build of Python, or my
own builds.
And not to have to wait for conda to release a build of a
new version. Of course, I have specialised needs, so that's not an
important consideration for the conda developers.

Paul

From chris.barker at noaa.gov  Thu Nov  3 18:15:30 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 15:15:30 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io>
Message-ID:

On Wed, Nov 2, 2016 at 5:02 PM, Matthew Brett wrote:

> Anaconda has an overwhelming advantage on Windows, in that Continuum
> can bear the licensing liabilities enforced by the Intel Fortran
> compiler, and we can not.

Technically, that's an advantage that a commercial distribution has --
really nothing to do with the conda technology per se. But yes, from a
practical perspective Continuum's support for conda and Anaconda is a
major bonus in many ways.

Note that Continuum is now (I think) defaulting to MKL also -- they seem
to have solved the redistribution issues.

But the conda-forge project has its own builds of lots of stuff --
including the core scipy stack, based on OpenBLAS. There is a scipy build
there:

https://github.com/conda-forge/scipy-feedstock/tree/master/recipe

but alas, still stuck:

```
build:
    # We lack openblas on Windows, and therefore can't build scipy there either currently.
    skip: true  # [win or np!=111]
```

> I'm sure you know, but the only
> practical open-source option is mingw-w64, that does not work with the
> Microsoft runtime used by Python 3.5 [1].

OT -- but is it stable for Python 2.7 now?

>> But not pyHDF, netCDF5, gdal, shapely, ... (to name a few that I need to
>> work with). And these are ugly: which means very hard for end-users to
>> build, and very hard for people to package up into wheels (is it even
>> possible?)
>
> I'd be surprised if these packages were very hard to build OSX and
> Linux wheels for.  We're already building hard packages like Pillow

Pillow is relatively easy :-) -- I was doing that years ago.

> and Matplotlib and h5py and pytables.

Do h5py and pytables share libhdf5 somehow? Or is the whole mess
statically linked into each?

> If you mean netCDF4 - there are
> already OSX and Windows wheels for that [2].

God bless Christoph Gohlke --- I have no idea how he does all that!

So take all the hassle of those -- and multiply by about ten to get what
the GIS stack needs: GDAL/OGR, Shapely, Fiona, etc.... Which doesn't mean
it couldn't be done -- just that it would be a pain, and you'd end up with
some pretty bloated wheels -- in the packages above, how many copies of
libpng will there be? how many of hdf5? proj.4? geos?

Maybe that simply doesn't matter -- computers have a lot of disk space and
memory these days.

But there is one more use-case -- folks that need to build their own stuff
against some of those libs (netcdf4 in my case). The nice thing about
conda is that it can provide the libs for me to use in my own custom built
projects, or for other folks to use to build conda packages with. Maybe
pip could be used to distribute those libs, too -- I know folks are
working on that.

Then there's non-python stuff that you need to work with that would be
nice to manage in environments, too -- R, MySQL, who knows?

As someone else on this thread noted -- it's jamming a square peg into a
round hole -- and why do that when there is a square hole you can use
instead? At least that's what I decided.
-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 18:36:40 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 15:36:40 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Thu, Nov 3, 2016 at 3:02 PM, Paul Moore wrote:

>>> it may be possible to define a
>>> standard approach that goes something along the lines of defining a
>>> standard location that (somehow) gets added to the load path, and
>>> interested parties provide DLLs for external dependencies, which the
>>> users can then manually place in those locations.
>>
>> that's pretty much what conda is :-)
>
> Well, it doesn't feel like that. Maybe we're not understanding each
> other. Am I able to take a non-conda installation of Python,

No -- not at all -- but there is no "system install" of Python on Windows
that your IT folks are telling you that you should use. (except maybe IBM,
if I recall from a David Beazley talk) So conda provides that as well.

But anyway, I meant _conceptually_ it's the same thing.

> use conda
> to install *just* a set of non-Python DLLs (libxml, libyaml, ...) and
> then use pip to install wheels for lxml, pyyaml, etc?

If you have a conda python already, then yes, you can install conda
packages of various libs, and then build python packages that depend on
them.

And now that I think about it -- you could probably: install conda (which
WILL install python -- conda needs it itself...) do some munging of
environment variables. Use another python install to build wheels, etc.
If you got your environment variables set up right -- then building from
anywhere should be able to find the conda stuff. But I don't know that
you'd get any of the advantages of conda environments this way.

I'm still trying to figure out why you'd want to do that on Windows,
though -- why not let conda manage your python too?

> I understand
> that there currently isn't a way to *build* an lxml wheel that links
> to a conda-installed libxml, but that's not the point I'm making - if
> conda provided a way to manage external DLLs only, then it would be
> possible in theory to contribute a setup.py fix to a project like lxml
> that detected and linked to a conda-installed libxml. That single
> source could then be used to build *both* wheels and conda packages,
> avoiding the current situation where effort on getting lxml to build
> is duplicated, once for conda and once for wheel.

See above -- conda is putting dlls into a standard place -- no reason you
couldn't teach other systems to find them.
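As a sketch of what "teaching" a regular build to find conda-provided
headers and libraries could look like (the prefix path is illustrative,
and this assumes the C library really is installed in that environment; at
runtime the extension still has to be able to locate the shared library):

```
# point a standard distutils build at a conda environment's include/lib dirs
PREFIX=$HOME/miniconda3/envs/myenv
python setup.py build_ext \
    --include-dirs="$PREFIX/include" \
    --library-dirs="$PREFIX/lib" \
    bdist_wheel
```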
In fact, I did something like this on OS-X a long while back. There was a
project (still is!) called "Unix Compatibility Frameworks":

http://www.kyngchaos.com/software/frameworks

It has a bunch of the dependencies needed for various python packages I
wanted to support (matplotlib, PIL, gdal...). So I installed that, then
built bdist_mpkg packages against it (no wheel back then!). I distributed
those packages. In the install instructions, I told folks to go install
the "Unix Compatibility Frameworks", and then all my packages would work.

Similar concept. However, one "trick" here is multiple dependency
versions. A given conda environment can have only one version of a
package. If you need different versions, you use different packages, but
that would get ugly if you wanted your external wheels to link to a
particular package.

This all gets easier if you manage the whole stack with one tool -- that's
why conda is built that way.

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Thu Nov  3 18:45:37 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 3 Nov 2016 15:45:37 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Thu, Nov 3, 2016 at 3:02 PM, Paul Moore wrote:

> It would buy *me* flexibility to use python.org build of Python, or my
> own builds. And not to have to wait for conda to release a build of a
> new version.

You are perfectly free to make your own conda package of python -- it's
not that hard. In fact, that's one of the key benefits of conda's approach
-- conda manages python itself, so you can easily have a conda environment
for the latest development release of python. Or, for that matter, just
build python from source inside an environment. And if you want to build
a custom python package, you don't even need your own recipe:

https://github.com/conda-forge/python-feedstock/tree/master/recipe

and conda can provide all the dependencies for you too :-)

> Of course, I have specialised needs, so that's not an
> important consideration for the conda developers.

True -- though the particular need of using a custom-compiled python is
actually pretty well supported!

Caveat: I've never tried this -- and since conda itself uses Python, you
may break conda if you install a python that can't run conda. But maybe
not -- conda could be smart enough to use its "root" install of python
regardless of what python you have in your environment. In fact, I think
that is the case: when I am in an activated conda environment, the "conda"
command is a link to the root conda command, so I'm pretty sure it will
use the root python to run itself.

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From njs at pobox.com  Thu Nov  3 19:23:49 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 3 Nov 2016 16:23:49 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

On Thu, Nov 3, 2016 at 2:48 PM, Chris Barker wrote:
> On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore wrote:
>> Or there's the
>> option that's been mentioned before, but never (to my knowledge)
>> developed into a complete proposal, for packaging up external
>> dependencies as wheels.
>
> Folks were working on this at Pycon last spring -- any progress?

I figured out how to make it work in principle; the problem is that it
turns out we need to write some nasty code to munge Mach-O files first.
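This is not the actual tooling Nathaniel is referring to -- just the
flavor of the Mach-O surgery involved, using Apple's own command line
tools (file names are illustrative): the external library gets copied in
next to the extension module, and the extension's load command is
rewritten to a relative @loader_path reference:

```
otool -L foo.cpython-35m-darwin.so   # list the install names it links against
mkdir -p .dylibs
cp /usr/local/lib/libyaml.dylib .dylibs/
install_name_tool -change /usr/local/lib/libyaml.dylib \
    "@loader_path/.dylibs/libyaml.dylib" foo.cpython-35m-darwin.so
```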
It's totally doable, just a bit more of a hurdle than previously expected
:-). And then this happened:

https://vorpus.org/blog/emerging-from-the-underworld/

so I'm way behind on everything right now. (If anyone wants to help then
speak up!)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org

From chris.barker at noaa.gov  Fri Nov  4 16:25:09 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 4 Nov 2016 13:25:09 -0700
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
Message-ID:

Final note after a long thread:

Just like Nick pointed out in his original post (if I read it right), the
pip vs. conda approach comes down to this:

Do you want a system to manage the whole stack? Or do you want a system to
manage Python packages?

Personally, I think that no matter how you slice it, someone needs to
manage the whole stack. I find it highly preferable to manage the whole
stack with one tool. And even more so to manage the stack on multiple
platforms with the same tool.

But others have reasons to manage part of the stack (python itself, for
instance) with a particular system, and thus only need a tool to manage
the python packages part.

Two different deployment use cases that will probably always exist.

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Sat Nov  5 02:29:08 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 5 Nov 2016 16:29:08 +1000
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
 <20161103142756.2acdf468@anarchist>
Message-ID:

On 4 November 2016 at 06:07, Nathaniel Smith wrote:
> I think we're drifting pretty far off topic here... IIRC the original
> discussion was about whether the travis-ci infrastructure could be suborned
> to provide an sdist->wheel autobuilding service for pypi. (Answer: maybe,
> though it would be pretty awkward, and no one seems to be jumping up to make
> it happen.)

The hard part of designing any such system isn't so much the building
process, it's the authentication, authorisation and trust management
for the release publication step. At the moment, that amounts to "Give
the service your PyPI password, so PyPI will 100% trust that they're
you" due to limitations on the PyPI side of things, and "Hello web
service developer, I grant you 100% authority to impersonate me to the
rest of the world however you like" is a questionable idea in any
circumstance, let alone when we're talking about publishing software.

Since we don't currently provide end-to-end package signing, PyPI
initiated builds would solve the trust problem by having PyPI trust
*itself*, and only require user credentials for the initial source
upload. This has the major downside that "safely" running arbitrary
code from unknown publishers is a Hard Problem, which is one of the
big reasons that Linux distros put so many hurdles in the way of
becoming a package maintainer (i.e.
package maintainers get to run arbitrary code not just inside the distro build system but also on end user machines, usually with elevated privileges, so you want to establish a pretty high level of trust before letting people do it). If I understand correctly, conda-forge works on the same basic principle - reviewing the publishers before granting them publication access, rather than defending against arbitrarily malicious code at build time. A more promising long term path is trust federation, which many folks will already be familiar with through granting other services access to their GitHub repositories, or using Twitter/Facebook/Google/et al to sign into various systems. That's not going to be a quick fix though, as it's contingent on sorting out the Warehouse migration challenges, and those are already significant enough without piling additional proposed changes on top of the already pending work. However, something that could potentially be achieved in the near term given folks interested enough in the idea to set about designing it would be a default recommendation for a Travis CI + Appveyor + GitHub Releases based setup that automated everything *except* the final upload to PyPI, but then also offered a relatively simple way for folks to pull their built artifacts from GitHub and push them to PyPI (such that their login credentials never left their local system). Folks that care enough about who hosts their source code to want to avoid GitHub, or have complex enough build system needs that Travis CI isn't sufficient, are likely to be technically sophisticated enough to adapt service specific instructions to their particular circumstances, and any such design work now would be a useful template for full automation at some point in the future. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Nov 5 02:36:43 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 5 Nov 2016 16:36:43 +1000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io> Message-ID: On 4 November 2016 at 03:56, Matthew Brett wrote: > But - it would be a huge help if the PSF could help with funding to > get mingw-w64 working. This is the crucial blocker for progress on > binary wheels on Windows. Such a grant was already awarded earlier this year by way of the Scientific Python Working Group (which is a collaborative funding initiative between the PSF and NumFocus): https://mail.python.org/pipermail/scientific/2016-January/000271.html However, we hadn't received a status update by the time I stepped down from the Board, although it sounds like progress hasn't been good if folks aren't even aware that the grant was awarded in the first place. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From matthew.brett at gmail.com Sat Nov 5 02:49:26 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 4 Nov 2016 23:49:26 -0700 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io> Message-ID: Hi, On Fri, Nov 4, 2016 at 11:36 PM, Nick Coghlan wrote: > On 4 November 2016 at 03:56, Matthew Brett wrote: >> But - it would be a huge help if the PSF could help with funding to >> get mingw-w64 working. 
This is the crucial blocker for progress on >> binary wheels on Windows. > > Such a grant was already awarded earlier this year by way of the > Scientific Python Working Group (which is a collaborative funding > initiative between the PSF and NumFocus): > https://mail.python.org/pipermail/scientific/2016-January/000271.html > > However, we hadn't received a status update by the time I stepped down > from the Board, although it sounds like progress hasn't been good if > folks aren't even aware that the grant was awarded in the first place. Ah no, that was a related project, but for a different set of work. Specifically that was a 5K grant to Carl Kleffner to configure and package up the mingw-w64 compilers to make it easier to build extensions for Pythons prior to 3.5 - see [1]. Cheers, Matthew [1] https://mingwpy.github.io/ From njs at pobox.com Sat Nov 5 02:57:43 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 4 Nov 2016 23:57:43 -0700 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io> Message-ID: On Fri, Nov 4, 2016 at 11:36 PM, Nick Coghlan wrote: > On 4 November 2016 at 03:56, Matthew Brett wrote: >> But - it would be a huge help if the PSF could help with funding to >> get mingw-w64 working. This is the crucial blocker for progress on >> binary wheels on Windows. > > Such a grant was already awarded earlier this year by way of the > Scientific Python Working Group (which is a collaborative funding > initiative between the PSF and NumFocus): > https://mail.python.org/pipermail/scientific/2016-January/000271.html > > However, we hadn't received a status update by the time I stepped down > from the Board, although it sounds like progress hasn't been good if > folks aren't even aware that the grant was awarded in the first place. There's two separate projects here that turn out to be unrelated: one to get mingw-w64 support for CPython < 3.5, and one for CPython >= 3.5. (This has to do with the thing where MSVC totally redid how their C runtime works.) The grant you're thinking of is for the python < 3.5 part; what Matthew's talking about is for the python >= 3.5 part, which is a totally different plan and team. The first blocker on getting funding for the >= 3.5 project though is getting the team to write down an actual plan and cost estimate, which has not yet happened... -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sat Nov 5 03:43:48 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 5 Nov 2016 17:43:48 +1000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID: On 4 November 2016 at 07:44, Chris Barker wrote: > On Thu, Nov 3, 2016 at 3:39 AM, Nick Coghlan wrote: >> >> I don't think there's much chance of any of this ever working on >> Windows - conda will rule there, and rightly so. Mac OS X seems likely >> to go the same way, although there's an outside chance brew may pick >> up some of the otherwise Linux-only capabilities if they prove >> successful. > > > this is a really good point then -- we're building "platform independent > system that, oh actually is a "a few different Linux vendors" system. Which > is a perfectly worthy goal, but it was not at all the goal of conda. 
conda needs raw material from software publishers, and it gets most of
that as software published in language specific formats on PyPI and CRAN
(which it then redistributes in both source and binary form), *not*
through original software being published specifically as conda packages.
The latter does happen, but it's the exception rather than the rule.

That's *exactly* the same redistribution model that Linux distros use,
hence my semi-regular comparison between "I only want to publish and
support conda packages" and "We only want to support CentOS/RHEL/Debian
stable/Ubuntu LTS/insert your distro of choice".

Constraining the platforms we will offer free support for is a 100%
reasonable thing for volunteers to do, so it's entirely OK to tell folks
that use another platform "Sorry, I can't reproduce that behaviour on a
platform I support, so you're going to have to ask for help in a platform
specific forum", and then point them at a community support forum like
askubuntu.com or ask.fedoraproject.org, or else nudge them in the
direction of their vendor (if they're running a commercially supported
Python distribution).

I do that myself - as a concrete example, I knew the interaction between
the -m switch and multiprocessing on Windows was broken for years before I
finally fixed it when support for the multiprocessing "spawn" mode
behaviour that is used on Windows was added to the CPython Linux builds in
Python 3.4.

From a conda perspective though, the difference between Linux and other
deployment targets in the current situation is that one of the competing
CPython ABI providers is the operating system itself, while on Windows
it's mainly other commercial Python redistributors (ActiveState,
Enthought), and on Mac OS X it's mainly a community driven package manager
(brew) and a community run Python redistributor (pyenv).

You're also up against the fact that most folks using Linux on a client
device are doing so by specific choice, and that Linux has been the
default supported system for open source projects for 20+ years. This
means that, despite what I said above, it's much easier in practice for
Python project participants to say they don't support and can't help with
PyPM/Enthought/brew/pyenv than it is to say that they can't help folks
running their software directly in the Linux system Python.

Putting my work hat back on for a moment, I actually wish more people
*would* start saying that, as Red Hat actively want people to stop
running their own applications in the system Python, and start using
Software Collections (either directly or via the Docker images) instead.
Sharing a language runtime environment between operating system
components and end user applications creates all sorts of maintainability
problems (not least of which is the inability to upgrade to new feature
releases for fear of breaking end user applications and vice-versa), to
the point where Fedora is planning to cede the "/usr/bin/python3"
namespace to end users, and start migrating system components out to
"/usr/bin/system-python":
https://fedoraproject.org/wiki/Changes/System_Python

> I wasn't there, but I think conda started with the goal of supporting three
> core platforms (that are pretty different) with the same UI, and as much of
> the same code as possible. And if you want to do that -- it really makes a
> lot of sense to control as much of the system as possible.
Yep, hence my use of the phrase "cross-platform platform" - conda defines a cross-platform API and build infrastructure for a few different ABIs (e.g. x86_64 Linux, x32/x86_64 Windows, Mac OS X) As a bit of additional background on this topic, folks that haven't seen it may want to take a look at my "Python beyond CPython: Adventures in Software Distribution" keynote from SciPy 2014, especially the part starting from around 7:10 about software redistribution networks: https://youtu.be/IVzjVqr_Bzs?t=458 The written speaker notes for that are available at https://bitbucket.org/ncoghlan/misc/src/default/talks/2014-07-scipy/talk.md >> It's not pointless, just hard, > > That I do see -- I see it as hard, so hard that it's likely to hit road > blocks, so we could make better progress with other solutions if that's > where people put their energies. And you said yourself earlier in this post > that: > > "I don't think there's much chance of any of this ever working on Windows" > > One of the "points" for me it to support Windows, OS-X and Linux -- so in > that sense -- I guess it is pointless ;-) For Windows and Mac OS X, my typical advice is "just use conda" and pointing folks towards the Software Carpentry installation guide :) The only time I deviate from that advice is if I know someone is specifically interested in getting into professional Python development as a paid gig, as in that context you're likely to need to learn to be flexible about the base runtimes you work with. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From wes.turner at gmail.com Sat Nov 5 06:43:03 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 5 Nov 2016 05:43:03 -0500 Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> <20161103142756.2acdf468@anarchist> Message-ID: On Saturday, November 5, 2016, Nick Coghlan wrote: > On 4 November 2016 at 06:07, Nathaniel Smith > > wrote: > > I think we're drifting pretty far off topic here... IIRC the original > > discussion was about whether the travis-ci infrastructure could be > suborned > > to provide an sdist->wheel autobuilding service for pypi. (Answer: maybe, > > though it would be pretty awkward, and no one seems to be jumping up to > make > > it happen.) > > The hard part of designing any such system isn't so much the building > process, it's the authentication, authorisation and trust management > for the release publication step. At the moment, that amounts to "Give > the service your PyPI password, so PyPI will 100% trust that they're > you" due to limitations on the PyPI side of things, and "Hello web > service developer, I grant you 100% authority to impersonate me to the > rest of the world however you like" is a questionable idea in any > circumstance, let alone when we're talking about publishing software. > > Since we don't currently provide end-to-end package signing, PyPI > initiated builds would solve the trust problem by having PyPI trust > *itself*, and only require user credentials for the initial source > upload. This has the major downside that "safely" running arbitrary > code from unknown publishers is a Hard Problem, which is one of the > big reasons that Linux distros put so many hurdles in the way of > becoming a package maintainer (i.e. 
package maintainers get to run > arbitrary code not just inside the distro build system but also on end > user machines, usually with elevated privileges, so you want to > establish a pretty high level of trust before letting people do it). > If I understand correctly, conda-forge works on the same basic > principle - reviewing the publishers before granting them publication > access, rather than defending against arbitrarily malicious code at > build time. - https://conda-forge.github.io - https://github.com/conda-forge - https://github.com/conda-forge/feedstocks - https://github.com/conda-forge/conda-smithy > > A more promising long term path is trust federation, which many folks > will already be familiar with through granting other services access > to their GitHub repositories, or using Twitter/Facebook/Google/et al > to sign into various systems. That's not going to be a quick fix > though, as it's contingent on sorting out the Warehouse migration > challenges, and those are already significant enough without piling > additional proposed changes on top of the already pending work. - [ ] Warehouse: ENH,SEC: A table, form, API for creating and revoking OAuth authz - (project, key, UPLOAD_RELEASE) - key renewal date There are a few existing OAuth Server libraries for pyramid (Warehouse): - https://github.com/sneridagh/osiris - https://github.com/tilgovi/pyramid-oauthlib - https://github.com/elliotpeele/pyramid_oauth2_provider - [ ] CI Release utility secrets: - VCS commit signature checking keyring - Package signing key (GPG ASC) - Package signature pubkey I just found these: - https://gist.github.com/audreyr/5990987 - https://github.com/michaeljoseph/changes - https://pypi.python.org/pypi/jarn.mkrelease - scripted GPG - https://caremad.io/posts/2013/07/packaging-signing-not-holy-grail/ - SHOULD have OOB keyring dist - https://github.com/pypa/twine/issues/157 - twine uploads *.asc if it exists - http://pythonhosted.org/distlib/tutorial.html#signing-a-distribution - http://pythonhosted.org/distlib/tutorial.html#verifying-signatures - SHOULD specify a limited keying-dir/ - https://packaging.python.org/distributing/ - [ ] howto create an .asc signature - https://docs.travis-ci.com/user/deployment/pypi/ - https://github.com/travis-ci/dpl/blob/master/lib/dpl/provider/pypi.rb - [x] https://github.com/travis-ci/dpl/issues/253 - [ ] oauth key instead of pass - https://docs.travis-ci.com/user/deployment/releases/ - upload release to GitHub - [ ] Is it possible to maintain a simple index of GitHub-hosted releases and .asc signatures w/ gh-pages? (for backups) - twine: Is uploading GitHub releases in scope? - https://pypi.org/search/?q=Github+release > However, something that could potentially be achieved in the near term > given folks interested enough in the idea to set about designing it > would be a default recommendation for a Travis CI + Appveyor + GitHub > Releases based setup that automated everything *except* the final > upload to PyPI, but then also offered a relatively simple way for > folks to pull their built artifacts from GitHub and push them to PyPI > (such that their login credentials never left their local system). 
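To make the quoted recommendation concrete, the final manual step might
look like the sketch below (file names are placeholders): sign and upload
from your own machine, so neither credentials nor signing keys ever leave
it. Per the twine issue linked above, twine will send a matching .asc
signature alongside the distribution if one exists:

```
# after pulling the built artifacts from GitHub Releases into dist/
gpg --detach-sign --armor dist/example-1.0-py2.py3-none-any.whl
twine upload dist/*.whl     # prompts for PyPI credentials locally
```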
> Folks that care enough about who hosts their source code to want to
> avoid GitHub, or have complex enough build system needs that Travis CI
> isn't sufficient, are likely to be technically sophisticated enough to
> adapt service specific instructions to their particular circumstances,
> and any such design work now would be a useful template for full
> automation at some point in the future.

https://www.google.com/search?q=Travis+Appveyor+GitHub+pypi

... also useful:

- GitLab CI
  - .gitlab-ci.yml, config.toml
  - https://docs.gitlab.com/ce/ci/docker/using_docker_images.html

- Jenkins
  - https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
  - https://wiki.jenkins-ci.org/display/JENKINS/ShiningPanda+Plugin
  - https://wiki.jenkins-ci.org/display/JENKINS/GitHub+Plugin
  - https://wiki.jenkins-ci.org/display/JENKINS/Release+Plugin

- Buildbot
  - http://docs.buildbot.net/latest/manual/cfg-workers-docker.html

- GitFlow and HubFlow specify a release/ branch with actual release tags
  all on master/
  - https://datasift.github.io/gitflow/IntroducingGitFlow.html

> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com  Sat Nov  5 07:02:59 2016
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 5 Nov 2016 06:02:59 -0500
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
 <20161103142756.2acdf468@anarchist>
Message-ID:

For automated deployment / continuous deployment / "continuous delivery":

- pip maintains a local cache
- devpi can be configured as a transparent proxy cache in front of
  pypi.org (see the sketch below)
- GitLab CI can show a checkmark for a deploy pipeline stage
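A sketch of that devpi arrangement (the port and index name are devpi's
defaults): devpi-server caches anything it fetches from pypi.org, so
repeated CI or deployment installs hit the local cache instead of the
network:

```
devpi-server --start    # serves http://localhost:3141 by default
pip install --index-url http://localhost:3141/root/pypi/+simple/ requests
# root/pypi is the built-in index that transparently proxies pypi.org
```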
On Saturday, November 5, 2016, Wes Turner wrote:
> [Wes's message of 05:43:03 above, quoted in full; snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wes.turner at gmail.com  Sat Nov  5 07:04:58 2016
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 5 Nov 2016 06:04:58 -0500
Subject: [Distutils] continuous integration options (was Re: Travis-CI is
 not open source, except in fact it *is* open source)
In-Reply-To:
References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de>
 <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de>
 <20161103131703.2d3cb4b1@anarchist>
 <20161103142756.2acdf468@anarchist>
Message-ID:

On Saturday, November 5, 2016, Wes Turner wrote:
> For automated deployment / continuous deployment / "continuous delivery":
>
> - pip maintains a local cache
> - devpi can be configured as a transparent proxy cache (in front of
>   pypi.org)
>   http://doc.devpi.net/latest/quickstart-pypimirror.html
> - GitLab CI can show a checkmark for a deploy pipeline stage
>
>>> [the earlier messages in this thread, quoted in full a second and
>>> third time; snipped]
>>> Cheers,
>>> Nick.
>>> >>> -- >>> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ubernostrum at gmail.com Sat Nov 5 09:57:23 2016 From: ubernostrum at gmail.com (James Bennett) Date: Sat, 5 Nov 2016 14:57:23 +0100 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Could we get 'Framework :: Django :: 1.10' please? Django 1.10 has been out for a while :) On Fri, Dec 4, 2015 at 12:59 AM, Maurits van Rees < m.van.rees at zestsoftware.nl> wrote: > Fair enough. :-) > > See you in six or more months. ;-) > > Maurits > > Op 04/12/15 om 00:53 schreef Richard Jones: > >> I prefer not to add classifiers unless they're actually going to be >> used. Half a year could turn into a year :-) >> >> On 4 December 2015 at 08:22, Maurits van Rees >> > wrote: >> >> Could you add Plone 5.1 too? It may still take half a year or more >> before we release this, but we are starting to think about what we >> want to put in there. >> So this one: >> >> Framework :: Plone :: 5.1 >> > > > -- > Maurits van Rees: http://maurits.vanrees.org/ > Zest Software: http://zestsoftware.nl > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Nov 5 10:45:59 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 5 Nov 2016 14:45:59 +0000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: Message-ID: <20161105144559.GC24597@yuggoth.org> On 2016-11-05 17:43:48 +1000 (+1000), Nick Coghlan wrote: [...] > Putting my work hat back on for a moment, I actually wish more people > *would* start saying that, as Red Hat actively want people to stop > running their own applications in the system Python, and start using > Software Collections (either directly or via the Docker images) > instead. Sharing a language runtime environment between operating > system components and end user applications creates all sorts of > maintainability problems (not least of which is the inability to > upgrade to new feature releases for fear of breaking end user > applications and vice-versa), to the point where Fedora is planning to > cede the "/usr/bin/python3" namespace to end users, and start > migrating system components out to "/usr/bin/system-python": > https://fedoraproject.org/wiki/Changes/System_Python [...] It's a grey area, complicated by the fact that many people are writing their software with the intent of also packaging it for/in common distros and so want to make sure it works with the "system Python" present within them. There it looks like Fedora is splitting their Python-using packages along some arbitrary line of "is it a system component?" vs. "is it a user-facing application?" which is probably tractable given the (relatively) limited number of packages in their distribution. I can't fathom how to go about trying to introduce a similar restructuring in, say, Debian. 
-- Jeremy Stanley From ncoghlan at gmail.com Sat Nov 5 12:19:22 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 Nov 2016 02:19:22 +1000 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: <20161105144559.GC24597@yuggoth.org> References: <20161105144559.GC24597@yuggoth.org> Message-ID: On 6 November 2016 at 00:45, Jeremy Stanley wrote: > On 2016-11-05 17:43:48 +1000 (+1000), Nick Coghlan wrote: > [...] >> Putting my work hat back on for a moment, I actually wish more people >> *would* start saying that, as Red Hat actively want people to stop >> running their own applications in the system Python, and start using >> Software Collections (either directly or via the Docker images) >> instead. Sharing a language runtime environment between operating >> system components and end user applications creates all sorts of >> maintainability problems (not least of which is the inability to >> upgrade to new feature releases for fear of breaking end user >> applications and vice-versa), to the point where Fedora is planning to >> cede the "/usr/bin/python3" namespace to end users, and start >> migrating system components out to "/usr/bin/system-python": >> https://fedoraproject.org/wiki/Changes/System_Python > [...] > > It's a grey area, complicated by the fact that many people are > writing their software with the intent of also packaging it for/in > common distros and so want to make sure it works with the "system > Python" present within them. There's a lot more folks starting to challenge the idea of the monolithic all-inclusive distro as a sensible software management technique, though. The most popular client distro (Android) maintains a strict separation between the vendor provided platform and the sandboxed applications running on top of it, and that turns out to provide a lot of desirable characteristics so long as your build and publication infrastructure can cope with respinning the world for security updates. Go back 10 years and "rebuild the world on demand" simply wasn't feasible, but it's becoming a lot more viable now, especially as the mainstream Linux sandboxing capabilities continue to improve. > There it looks like Fedora is splitting > their Python-using packages along some arbitrary line of "is it a > system component?" vs. "is it a user-facing application?" which is > probably tractable given the (relatively) limited number of packages > in their distribution. There's still more than five thousand Python using packages in the Fedora repos, and most of them are likely to remain classified as non-system optional addons. However, actual core system components like dnf, the Anaconda installer, firewalld, SELinux policy management, etc are likely to migrate - the idea being that "sudo dnf uninstall python3" shouldn't render your system fundamentally unusable. (Part of the motivation here is also simply size - Fedora hasn't historically had the python/python-minimal split that Debian has, which is becoming more of a problem as folks try to minimise base image sizes) > I can't fathom how to go about trying to > introduce a similar restructuring in, say, Debian. The problem isn't so much with the components that are in the distro, as it is the end user applications and scripts that aren't published online anywhere at all, let alone as distro packages. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ralf.gommers at gmail.com Sat Nov 5 18:13:14 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 6 Nov 2016 11:13:14 +1300 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io> Message-ID: On Sat, Nov 5, 2016 at 7:57 PM, Nathaniel Smith wrote: > On Fri, Nov 4, 2016 at 11:36 PM, Nick Coghlan wrote: > > On 4 November 2016 at 03:56, Matthew Brett > wrote: > >> But - it would be a huge help if the PSF could help with funding to > >> get mingw-w64 working. This is the crucial blocker for progress on > >> binary wheels on Windows. > > > > Such a grant was already awarded earlier this year by way of the > > Scientific Python Working Group (which is a collaborative funding > > initiative between the PSF and NumFocus): > > https://mail.python.org/pipermail/scientific/2016-January/000271.html > > > > However, we hadn't received a status update by the time I stepped down > > from the Board, This status update was sent to the PSF board in June: http://mingwpy.github.io/roadmap.html#status-update-june-16. Up until that report the progress was good, but after that progress has stalled due to unavailability for private reasons of Carl Kleffner (the main author of MingwPy). Ralf > > although it sounds like progress hasn't been good if > > folks aren't even aware that the grant was awarded in the first place. > > There's two separate projects here that turn out to be unrelated: one > to get mingw-w64 support for CPython < 3.5, and one for CPython >= > 3.5. (This has to do with the thing where MSVC totally redid how their > C runtime works.) The grant you're thinking of is for the python < 3.5 > part; what Matthew's talking about is for the python >= 3.5 part, > which is a totally different plan and team. > > The first blocker on getting funding for the >= 3.5 project though is > getting the team to write down an actual plan and cost estimate, which > has not yet happened... > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sun Nov 6 00:40:05 2016 From: robertc at robertcollins.net (Robert Collins) Date: Sun, 6 Nov 2016 17:40:05 +1300 Subject: [Distutils] Current Python packaging status (from my point of view) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID: On 3 November 2016 at 21:40, Nick Coghlan wrote: > On 3 November 2016 at 05:28, Nathaniel Smith wrote: >> On Nov 2, 2016 9:52 AM, "Nick Coghlan" wrote: >>> Tennessee Leeuwenberg started a draft PEP for that first part last >>> year: https://github.com/pypa/interoperability-peps/pull/30/files >>> >>> dnf/yum, apt, brew, conda, et al all *work around* the current lack of >>> such a system by asking humans (aka "downstream package maintainers") >>> to supply the additional information by hand in a platform specific >>> format. >> >> To be fair, though, it's not yet clear whether such a system is actually >> possible. 
AFAIK no one has ever managed to reliably translate between
>> different languages that Linux distros use for describing environment
>> constraints, never mind handling the places where they're genuinely
>> irreconcilable (e.g. the way different distro openssl packages have
>> incompatible ABIs), plus other operating systems too.
>
> The main problem with mapping between Debian/RPM/conda etc in the
> general case is that the dependencies are generally expressed in terms
> of *names* rather than runtime actions, and you also encounter
> problems with different dependency management ecosystems splitting up
> (or aggregating!) different upstream components differently. This
> means you end up with a situation that's more like a lossy transcoding
> between MP3 and OGG Vorbis or vice-versa than it is a pristine
> encoding to either from a losslessly encoded FLAC or WAV file.
>
> The approach Tennessee and Robert Collins came up with (which still
> sounds sensible to me) is that instead of dependencies on particular
> external components, what we want to be able to express is instead a
> range of *actions* that the software is going to take:
>
> - "I am going to dynamically load a library named <name>"
> - "I am going to execute a subprocess for a command named <name>"
>
> And rather than expecting folks to figure that stuff out for
> themselves, you'd use tools like auditwheel and strace to find ways to
> generate it and/or validate it.
>

I don't think that's what we had proposed, though it's similar. My
recollection is proposing that dependencies on system components be
expressed as dependencies on the specific files rather than on
abstract names. Packaging systems generally have metadata to map files
to packages - and so a packaging specific driver can map 'libfoo1.2.so'
into libfoo-1.2 on Debian, or libfoo1.2 on a slightly different naming
style platform. The use of a tuple (type, relpath) was to abstract away
from different FSH's.
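Purely illustrative - the file names here are made up, and nothing like
this was ever standardised - but the shape of it was roughly:

    # external requirements expressed as (type, relative path) tuples;
    # a per-platform driver then maps each file back to whichever
    # package owns it in that platform's packaging metadata
    external_requires = [
        ('library', 'libxml2.so.2'),     # dynamically loaded / linked
        ('header', 'libxml2/parser.h'),  # needed to compile extensions
        ('binary', 'git'),               # executed as a subprocess
    ]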
ABIs for libraries are well established and at the control of
upstream, so there's no need for random audit tools :).

-Rob

From ncoghlan at gmail.com Sun Nov 6 00:44:06 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 6 Nov 2016 14:44:06 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <2779974B-A4D5-4A8F-8D47-B00A9C32BDD1@stufft.io> Message-ID:

On 6 November 2016 at 08:13, Ralf Gommers wrote:
>> On Fri, Nov 4, 2016 at 11:36 PM, Nick Coghlan wrote:
>> > Such a grant was already awarded earlier this year by way of the
>> > Scientific Python Working Group (which is a collaborative funding
>> > initiative between the PSF and NumFocus):
>> > https://mail.python.org/pipermail/scientific/2016-January/000271.html
>> >
>> > However, we hadn't received a status update by the time I stepped down
>> > from the Board,
>
> This status update was sent to the PSF board in June:
> http://mingwpy.github.io/roadmap.html#status-update-june-16.

Very cool - thanks for the update!

> Up until that report the progress was good, but after that progress has
> stalled due to unavailability for private reasons of Carl Kleffner (the main
> author of MingwPy).

Ah, that's unfortunate. Hopefully nothing too serious, although I
realise it wouldn't be appropriate to share details on a public list
like this one.

Regards,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From robertc at robertcollins.net Sun Nov 6 00:44:21 2016
From: robertc at robertcollins.net (Robert Collins)
Date: Sun, 6 Nov 2016 17:44:21 +1300
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID:

On 3 November 2016 at 22:10, Nathaniel Smith wrote:
> On Nov 3, 2016 1:40 AM, "Nick Coghlan" wrote:
> And then it segfaults because it turns out that your library named <foo> is
> not ABI compatible with my library named <foo>. Or it would have been if you
> had the right version, but distros don't agree on how to express version
> numbers either. (Just think about epochs.) Or we're on Windows, so it's
> interesting to know that we need a library named <foo>, but what are we
> supposed to do with that information exactly?

Well, we weren't trying to fix the problem with incompatible ABIs: the
thoughts that I recall were primarily around getting development header
files in place, to permit building (and then caching) local binary
wheels. The ports system, for example, works very very robustly, even
though it used to require building everything from scratch. That was my
inspiration.

The manylinux approach seems better to me for solving the 'install a
binary wheel in many places' problem, because of the issues with
variance across higher layered libraries you mention :).

> Again, I don't want to just be throwing around stop energy -- if people want
> to tackle these problems then I wish them luck. But I don't think we should
> be hand waving this as a basically solved problem that just needs a bit of
> coding, because that also can create stop energy that blocks an honest
> evaluation of alternative approaches.

+1.

...> dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and
> that this will be a viable alternative to conda.

As a distribution of sources, I think it's very viable with the
approach above: indeed we do similar things using bindep in OpenStack
and that seems to be working out pretty well.

-Rob

From ncoghlan at gmail.com Sun Nov 6 01:25:50 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 6 Nov 2016 15:25:50 +1000
Subject: [Distutils] Current Python packaging status (from my point of view)
In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> Message-ID:

On 6 November 2016 at 14:44, Robert Collins wrote:
> On 3 November 2016 at 22:10, Nathaniel Smith wrote:
> ...> dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and
>> that this will be a viable alternative to conda.
>
> As a distribution of sources, I think it's very viable with the
> approach above: indeed we do similar things using bindep in OpenStack
> and that seems to be working out pretty well.

Right, and I guess we should also clearly distinguish between
"publisher provided binaries" and "redistributor provided binaries",
as I think a potentially good way to go long term for Linux might look
something like this:

* manylinuxN

A relatively narrow multi-distro ABI with updates driven mainly by
when old CentOS versions go EOL

We already have manylinux1, and it covers many cases, but not
everything (e.g. lxml would need to statically link libxml2, libxslt
and maybe a few other libraries to be manylinux1 compatible). Most
publishers aren't going to want to go beyond providing manylinux
binaries, since the combinatorial explosion of possibilities beyond
that point makes doing so impractical.
* linux_{arch}_{distro}_{version}

This is the scheme Nate Coraor adopted in order to migrate Galaxy
Project to using wheel files for binary caching:
https://docs.galaxyproject.org/en/master/admin/framework_dependencies.html#galaxy-pip-and-wheel

The really neat thing about that approach is that it's primarily local
config file driven:
https://docs.galaxyproject.org/en/master/admin/framework_dependencies.html#wheel-platform-compatibility

What that means is that if we decided to standardise that approach,
then conda could readily declare itself as a distinct platform in its
own right - wheels built with a conda Python install could get tagged
as conda wheels (since they might depend on external libraries from
the conda ecosystem), and folks running conda environments could
consume those wheels without needing to rebuild them. You'd also be
able to share a single wheelhouse between the system Python and conda
installs without inadvertently clobbering cached wheel files with
incompatible versions.

Being config file driven would also allow this to be extended to Mac
OS X and Windows fairly readily, where the platform ABI implicitly
defined by the python.org CPython binaries is even narrower than that
of manylinux1.

This scheme would mostly be useful in the near term for the way Galaxy
Project is using it: maintainers of a controlled environment using it
for managed caching of built artifacts that can then be installed with
pip. However, it *might* eventually see use in another context as
well: redistributors using it to provide prebuilt venv-compatible
wheel files for their particular Python distribution.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chris.barker at noaa.gov Sun Nov 6 16:20:11 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Sun, 6 Nov 2016 13:20:11 -0800
Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)
In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> <20161103142756.2acdf468@anarchist> Message-ID:

On Fri, Nov 4, 2016 at 11:29 PM, Nick Coghlan wrote:

> If I understand correctly, conda-forge works on the same basic
> principle - reviewing the publishers before granting them publication
> access, rather than defending against arbitrarily malicious code at
> build time.
>

yup -- that's pretty much it. you need a conda-forge member to merge your
PR before you get a "feedstock" tied into the system.

I'm confused though -- IIUC, ANYONE can put something up on PyPI with
arbitrary code in it that will get run by someone when they pip install
it. So how is allowing anyone to push something to a build service that
runs arbitrary code on a CI server -- and then pushes arbitrary code to
PyPI that gets run by anyone who pip installs it -- any different?

Essentially, we have already said that there is no such thing as
"trusting PyPI" -- you need to trust each individual package. So how is
any sort of auto-build system going to change that?

-- Christopher Barker, Ph.D. Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Sun Nov 6 22:28:37 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Nov 2016 13:28:37 +1000 Subject: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source) In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> <20161103142756.2acdf468@anarchist> Message-ID: On 7 November 2016 at 07:20, Chris Barker wrote: > So how is allowing anyone to push something to PyPi that will run arbitrary > code on a CI server, that will push arbitrary code to PyPi that will then > get run by anyone that pip installs it? PyPI currently has the ability to impersonate any PyPI publisher, which makes it an enormous security threat in and of itself, so we need to limit the attack surfaces that it exposes. > Essentially, we have already said that there is no such thing as "trusting > PyPi" -- you need to trust each individual package. So how in any sort of > auto-build system going to change that?? Currently we're reasonably confident that the only folks that can compromise Django users (for example) are the Django devs and the PyPI service administrators. The former is an inherent problem in trusting any software publisher, while the latter we currently mitigate by tightly controlling admin access to the production PyPI service, and strictly limiting the server-side processing that PyPI performs on uploaded files to reduce the opportunities for privilege escalation attacks. Once you start providing a server-side build service however, you're opening up additional attack vectors on the core publishing system, and getting any aspect of that wrong may lead to publishers being able to impersonate *each other*. Unfortunately, offering secure multi-tenancy in software services when you allow tenants to run arbitrary code is a really hard problem - it's the main reason that OpenShift v3 hasn't fully displaced OpenShift v2 yet, and that's with the likes of Red Hat, Google, CoreOS and Deis collaborating on the underlying Kubernetes infrastructure. Linux distros and conda-forge duck that multi-tenancy problem by treating the build system itself as the publisher, with everyone with access to it being a relatively trusted co-tenant (think "share house with no locks on interior doors" rather than "apartment complex"). That approach works OK at smaller scales, but the gatekeeping involved in approving new co-publishers introduces off-putting friction for potential participants (hence both app developers and data analysts finding ways to bypass the sysadmin and OS developer dominated Linux packaging ecosystems). For PyPI, we can mitigate the difficulty by getting the builds to happen somewhere else (like external CI services), but even then you still have a non-trivial service integration problem to manage, especially if you decide to tackle it through a "bring your own build service" approach (ala GitHub CI integration). Whichever way you go though (native build service, or integration with external build services), you're signing up for a major ongoing maintenance task, as you're either now responsible for a shared build system serving tens of thousands of software publishers [1], or else you're responsible for maintaining a coherent publisher UX while also maintaining compatibility with multiple external systems that you don't directly control. Cheers, Nick. 
[1] There were ~35k distinct publisher accounts on PyPI when Donald
last checked in August: https://github.com/pypa/warehouse/issues/1428

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guettliml at thomas-guettler.de Wed Nov 9 02:47:55 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Wed, 9 Nov 2016 08:47:55 +0100
Subject: [Distutils] Trust Management ....
In-Reply-To: References: <9470caa2-0afc-7dce-3bd8-25c0a256bfe0@thomas-guettler.de> <993cb068-0e5e-22cc-b699-7cd72789068b@thomas-guettler.de> <20161103131703.2d3cb4b1@anarchist> <20161103142756.2acdf468@anarchist> Message-ID: <9104e125-deee-e56f-3193-e7da8699aef3@thomas-guettler.de>

Am 05.11.2016 um 07:29 schrieb Nick Coghlan:
> On 4 November 2016 at 06:07, Nathaniel Smith wrote:
>> I think we're drifting pretty far off topic here... IIRC the original
>> discussion was about whether the travis-ci infrastructure could be suborned
>> to provide an sdist->wheel autobuilding service for pypi. (Answer: maybe,
>> though it would be pretty awkward, and no one seems to be jumping up to make
>> it happen.)
>
> The hard part of designing any such system isn't so much the building
> process, it's the authentication, authorisation and trust management
> for the release publication step.

Yes, trust management is very hard. I think it can't be solved, and
never will be.

Things are different if you run a build server in your own LAN (self
hosting).

Regards,
Thomas Güttler

-- http://www.thomas-guettler.de/

From christoph at grothesque.org Thu Nov 10 06:27:07 2016
From: christoph at grothesque.org (Christoph Groth)
Date: Thu, 10 Nov 2016 12:27:07 +0100
Subject: [Distutils] Role of distutils.cfg
Message-ID: <87lgwrh8xg.fsf@grothesque.org>

Hi,

I have a question on how to best handle parameters to the distribution
given that they can be shadowed by the global configuration file,
distutils.cfg.

Our project [1] contains C-extensions that include NumPy's headers. To
this end, our setup.py [2] sets the include_dirs parameter of setup()
to NumPy's include path. We have chosen this way since it allows us to
add a common include path to all the extensions in one go. One
advantage of this approach is that when the include_dirs parameters of
the individual extensions are reconfigured (for example with a build
configuration file), this does not interfere with the numpy include
path.

This has been working well for most other users, but recently we got a
bug report from someone for whom it doesn't. It turns out that his
system has a distutils.cfg that takes precedence over the include_dirs
parameter to setup() [3].
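For illustration (the actual path on his system differs, this is just
the shape of it), a global distutils.cfg or a per-user
~/.pydistutils.cfg with a section like

    [build_ext]
    include_dirs = /opt/local/include

silently shadows what we pass to setup().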
My question is now: is there a policy on the correct use of
distutils.cfg? After all, it can override any parameter to any
distutils command. As such, is passing options like include_dirs to
setup() a bad idea in the first place, or should the use of
distutils.cfg rather be reserved for cases like choosing an
alternative compiler?

Thanks for any clarification,
Christoph
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL:

From ralf.gommers at gmail.com Fri Nov 11 02:58:05 2016
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 11 Nov 2016 20:58:05 +1300
Subject: [Distutils] Role of distutils.cfg
In-Reply-To: <87lgwrh8xg.fsf@grothesque.org> References: <87lgwrh8xg.fsf@grothesque.org> Message-ID:

On Fri, Nov 11, 2016 at 12:27 AM, Christoph Groth wrote:

> Hi,
>
> I have a question on how to best handle parameters to the distribution
> given that they can be shadowed by the global configuration file,
> distutils.cfg.
>
> Our project [1]

You forgot to add all your links.

> contains C-extensions that include NumPy's headers. To this end, our
> setup.py [2] sets the include_dirs parameter of setup() to NumPy's include
> path. We have chosen this way since it allows us to add a common include
> path to all the extensions in one go. One advantage of this approach is
> that when the include_dirs parameters of the individual extensions are
> reconfigured (for example with a build configuration file), this does not
> interfere with the numpy include path.
>
> This has been working well for most other users, but recently we got a
> bug report from someone for whom it doesn't. It turns out that his system
> has a distutils.cfg that takes precedence over the include_dirs parameter
> to setup() [3].
>
> My question is now: is there a policy on the correct use of
> distutils.cfg? After all, it can override any parameter to any distutils
> command. As such, is passing options like include_dirs to setup() a bad
> idea in the first place, or should the use of distutils.cfg rather be
> reserved for cases like choosing an alternative compiler?
>

I'm not aware of any policy, but in general I'd recommend passing as
little to setup() as possible. Most robust is to only pass metadata
(name, maintainer, url, install_requires, etc.). In a number of cases
you're forced to pass ext_modules or cmdclass, which usually works fine.
Passing individual paths, compiler flags, etc. sounds unhealthy.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From christoph at grothesque.org Fri Nov 11 03:12:24 2016
From: christoph at grothesque.org (Christoph Groth)
Date: Fri, 11 Nov 2016 09:12:24 +0100
Subject: [Distutils] Role of distutils.cfg
In-Reply-To: (Ralf Gommers's message of "Fri, 11 Nov 2016 20:58:05 +1300") References: <87lgwrh8xg.fsf@grothesque.org> Message-ID: <87vavuo2on.fsf@grothesque.org>

Ralf Gommers wrote:

> You forgot to add all your links.

I accidentally deleted them when re-posting my message. The first time
I sent it to this list without being subscribed, and it was
unfortunately *silently* dropped. (I had assumed that postings by
non-members are moderated.) Here they are:

[1] https://pypi.python.org/pypi/kwant/1.2.2
[2] https://gitlab.kwant-project.org/kwant/kwant/blob/master/setup.py
[3] https://gitlab.kwant-project.org/kwant/kwant/issues/48#note_2494

> Most robust is to only pass metadata (name, maintainer, url,
> install_requires, etc.). In a number of cases you're forced to
> pass ext_modules or cmdclass, which usually works fine. Passing
> individual paths, compiler flags, etc. sounds unhealthy.

Sounds reasonable, thanks for your advice.

Is there any alternative to passing ext_modules?

Christoph
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL:

From ralf.gommers at gmail.com Fri Nov 11 03:22:09 2016
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 11 Nov 2016 21:22:09 +1300
Subject: [Distutils] Role of distutils.cfg
In-Reply-To: <87vavuo2on.fsf@grothesque.org> References: <87lgwrh8xg.fsf@grothesque.org> <87vavuo2on.fsf@grothesque.org> Message-ID:

On Fri, Nov 11, 2016 at 9:12 PM, Christoph Groth wrote:

> Ralf Gommers wrote:
>
>> You forgot to add all your links.
>
> I accidentally deleted them when re-posting my message. The first time I
> sent it to this list without being subscribed, and it was unfortunately
> *silently* dropped. (I had assumed that postings by non-members are
> moderated.) Here they are:
>
> [1] https://pypi.python.org/pypi/kwant/1.2.2
> [2] https://gitlab.kwant-project.org/kwant/kwant/blob/master/setup.py
> [3] https://gitlab.kwant-project.org/kwant/kwant/issues/48#note_2494
>
>> Most robust is to only pass metadata (name, maintainer, url,
>> install_requires, etc.). In a number of cases you're forced to pass
>> ext_modules or cmdclass, which usually works fine. Passing individual
>> paths, compiler flags, etc. sounds unhealthy.
>
> Sounds reasonable, thanks for your advice.
>
> Is there any alternative to passing ext_modules?

What Numpy and Scipy do is pass a single Configuration instance, and
define all extensions and libraries in nested setup.py files, one per
submodule. Example:
https://github.com/scipy/scipy/blob/master/setup.py#L327

If you pass ext_modules as a list of Extension instances, you'd likely
also avoid the issue. Example:
https://github.com/PyWavelets/pywt/blob/master/setup.py#L145

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Fri Nov 11 03:28:51 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 11 Nov 2016 00:28:51 -0800
Subject: [Distutils] Role of distutils.cfg
In-Reply-To: <87vavuo2on.fsf@grothesque.org> References: <87lgwrh8xg.fsf@grothesque.org> <87vavuo2on.fsf@grothesque.org> Message-ID:

On Fri, Nov 11, 2016 at 12:12 AM, Christoph Groth wrote:
> Ralf Gommers wrote:
>
>> You forgot to add all your links.
>
> I accidentally deleted them when re-posting my message. The first time I
> sent it to this list without being subscribed, and it was unfortunately
> *silently* dropped. (I had assumed that postings by non-members are
> moderated.) Here they are:
>
> [1] https://pypi.python.org/pypi/kwant/1.2.2
> [2] https://gitlab.kwant-project.org/kwant/kwant/blob/master/setup.py

It looks like you already have a ton of code that's looping over
various representations of your ext_modules and autogenerating
things... wouldn't it be fairly straightforward to have a bit more
code to automatically and unconditionally stick numpy.get_include()
into every extension's include_dirs? (More code than doing it at the
setup.py level, for sure, but on the scale of distutils
workarounds...)
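Something like this, say (untested, and the extension names are made
up):

    # append the numpy include path to every extension in one loop,
    # instead of passing include_dirs to setup() itself
    import numpy
    from setuptools import setup, Extension

    extensions = [Extension('kwant._core', ['kwant/_core.c']),
                  Extension('kwant._solvers', ['kwant/_solvers.c'])]
    for ext in extensions:
        ext.include_dirs.append(numpy.get_include())

    setup(name='kwant', ext_modules=extensions)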
-n

--
Nathaniel J. Smith -- https://vorpus.org

From christoph at grothesque.org Fri Nov 11 03:44:13 2016
From: christoph at grothesque.org (Christoph Groth)
Date: Fri, 11 Nov 2016 09:44:13 +0100
Subject: [Distutils] Role of distutils.cfg
In-Reply-To: (Nathaniel Smith's message of "Fri, 11 Nov 2016 00:28:51 -0800") References: <87lgwrh8xg.fsf@grothesque.org> <87vavuo2on.fsf@grothesque.org> Message-ID: <87r36io17m.fsf@grothesque.org>

Nathaniel Smith wrote:

> It looks like you already have a ton of code that's looping over
> various representations of your ext_modules and autogenerating
> things...

Indeed, some of it could be IMHO interesting for inclusion into
distutils/setuptools or whatnot. For example, if Cython is not to be
run, we warn if the .pyx files (and their dependencies) are newer.

> wouldn't it be fairly straightforward to have a bit more code to
> automatically and unconditionally stick numpy.get_include() into
> every extension's include_dirs?

Sure, that's what we are going to do now. I preferred the other
solution since we have machinery in place that allows redefining any
parameter to any extension through a "build.conf" file. Now if someone
redefines include_dirs in "build.conf", they will have to add the
numpy include path as well. But that's OK.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL:

From nir36g at gmail.com Tue Nov 22 12:44:37 2016
From: nir36g at gmail.com (Nir Cohen)
Date: Tue, 22 Nov 2016 09:44:37 -0800 (PST)
Subject: [Distutils] Packaging multiple wheels in the same archive
Message-ID:

Hey all,

As part of working on Cloudify (http://github.com/cloudify-cosmo) we've
had to provide a way for customers to install our plugins in an
environment where PyPI isn't accessible. These plugins are sets of
Python packages which necessarily depend on one another (i.e. a regular
python package with dependencies).

We decided that we want to package sets of wheels together created or
downloaded by `pip wheel`, add relevant metadata, package them together
into a single archive (tar.gz or zip) and use the same tool which packs
them up to install them later on, on the destination hosts.
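Conceptually the create step is little more than this (illustrative
only -- our tool also records metadata, and 'myplugin' is just a
placeholder name):

    # build (or fetch) wheels for the plugin and all its dependencies,
    # then ship the whole wheelhouse as a single archive
    import subprocess, tarfile
    subprocess.check_call(
        ['pip', 'wheel', '--wheel-dir', 'wheelhouse', 'myplugin'])
    with tarfile.open('myplugin.tar.gz', 'w:gz') as archive:
        archive.add('wheelhouse')

On the destination host, after unpacking, installation is then just
`pip install --no-index --find-links=wheelhouse myplugin`.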
We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do
just that and that's what we currently use to create and install our
plugins.

While wheel solves the problem of generating wheels, there is no
single, standard method for taking an entire set of dependencies
packaged in a single location and installing them in a different
location.

We thought it would be a good idea to propose a PEP for that and wanted
to get your feedback before we start writing the proposal. Our proposed
implementation is not the issue here of course, but rather if you think
there should be a PEP describing the way multiple wheels should be
packaged together to create a standalone installable package.

We would greatly appreciate your feedback on this.

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From waynejwerner at gmail.com Tue Nov 22 16:03:54 2016
From: waynejwerner at gmail.com (Wayne Werner)
Date: Tue, 22 Nov 2016 15:03:54 -0600
Subject: [Distutils] Fwd: Packaging multiple wheels in the same archive
In-Reply-To: References: Message-ID:

This should've gone to the list but apparently the google groups
doesn't have me signed up? If you get a double post, sorry about that

---------- Forwarded message ----------
From: Wayne Werner
Date: Tue, Nov 22, 2016 at 2:59 PM
Subject: Re: [Distutils] Packaging multiple wheels in the same archive
To: Nir Cohen
Cc: distutils-sig

FWIW you can already specify the `--find-links` option[1] to specify
that you want to install wheels from a given directory. Add
`--no-index` if you want to install from *only* that dir.

An approach that certainly might be interesting: if pip supported a
zipfile of wheels as well as a directory of wheels, this would become
as easy as zipping up all your wheels.

-Wayne

[1]: https://pip.pypa.io/en/stable/reference/pip_wheel/#cmdoption-f

===========================================================
I welcome VSRE emails. Learn more at http://vsre.info/
===========================================================

On Tue, Nov 22, 2016 at 11:44 AM, Nir Cohen wrote:

> Hey all,
>
> As part of working on Cloudify (http://github.com/cloudify-cosmo) we've
> had to provide a way for customers to install our plugins in an environment
> where PyPI isn't accessible. These plugins are sets of Python packages
> which necessarily depend on one another (i.e. a regular python package with
> dependencies).
>
> We decided that we want to package sets of wheels together created or
> downloaded by `pip wheel`, add relevant metadata, package them together
> into a single archive (tar.gz or zip) and use the same tool which packs
> them up to install them later on, on the destination hosts.
>
> We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do
> just that and that's what we currently use to create and install our
> plugins.
> While wheel solves the problem of generating wheels, there is no single,
> standard method for taking an entire set of dependencies packaged in a
> single location and installing them in a different location.
>
> We thought it would be a good idea to propose a PEP for that and wanted to
> get your feedback before we start writing the proposal. Our proposed
> implementation is not the issue here of course, but rather if you think
> there should be a PEP describing the way multiple wheels should be packaged
> together to create a standalone installable package.
>
> We would greatly appreciate your feedback on this.
>
> Thanks!
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Wed Nov 23 01:06:50 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 23 Nov 2016 16:06:50 +1000
Subject: [Distutils] Packaging multiple wheels in the same archive
In-Reply-To: References: Message-ID:

[Some folks are going to get this twice - unfortunately, Google's
mailing list mirrors are fundamentally broken, so replies to them
don't actually go to the original mailing list properly]

(Note for context: I stumbled across Wagon recently, and commented
that we don't currently have a good target-environment-independent way
of bundling up a set of wheels as a single transferable unit)

On 23 November 2016 at 03:44, Nir Cohen wrote:
> We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do just
> that and that's what we currently use to create and install our plugins.
> While wheel solves the problem of generating wheels, there is no single,
> standard method for taking an entire set of dependencies packaged in a
> single location and installing them in a different location.

Where I see this being potentially valuable is in terms of having a
common "multiwheel" transfer format that can be used for cases where
the goal is essentially wheelhouse caching and transfer. The two main
cases I'm aware of where this comes up:

- offline installation support (i.e. the Cloudify plugins use case,
where the installation environment doesn't have networked access to an
index server)
- saving and restoring the wheelhouse cache (e.g. this comes up in
container build pipelines)

The latter problem arises from an issue with the way some container
build environments (most notably Docker's) currently work: they always
run in a clean environment, which means they can't see the host's
wheel cache. One of the solutions to this is to let container builds
specify a "cache state" which is archived by the build management
service at the end of the build process, and then restored when
starting the next incremental image build.

This kind of cache transfer is already *possible* today, but having a
standardised way of doing it makes it easier for people to write
general purpose tooling around the concept, without requiring that the
tool used to create the archive be the same tool used to unpack it at
install time.

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From thomas at kluyver.me.uk Wed Nov 23 09:35:30 2016
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Wed, 23 Nov 2016 14:35:30 +0000
Subject: [Distutils] PEP 517: Build system API
Message-ID: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>

I'd like to push PEP 517 forwards again. This PEP specifies a general
build system interface so that a source tree can specify a tool (such as
flit), and pip can ask that tool to generate a wheel. This would allow
us to install from an sdist or a VCS checkout without running a setup.py
file.

https://www.python.org/dev/peps/pep-0517/

Developments since the last time this was discussed (to my knowledge):
- It now uses the pyproject.toml file specified in PEP 518, which was
already accepted. 517 originally specified a JSON file; 518 explains why
TOML is preferred (basically: comments).
- I have implemented the proposed build-system API in a pull request for
Flit; this has been fairly straightforward so far.
https://github.com/takluyver/flit/pull/77

Questions:
1. Editable installs. The PEP currently specifies a hook to do an
editable install (like 'pip install -e' or 'setup.py develop') into a
given prefix. I don't think that specifying a prefix is sufficient to
cover '--user' installation, which uses a different install scheme,
especially on Windows and OSX framework builds. We can:
a. Add an extra parameter 'user' to the hook, to override the prefix and
do a user install.
b. Leave it as is, and do not support editable user installation (which
would make me sad, as I do editable user installs regularly)
c. Decide that editable installs are too fiddly to standardise, and
leave it to users to invoke a tool directly to do an editable install.

2. Dash vs. underscore, bikeshed reloaded! Currently, the table name
uses a dash: [build-system], but the key added by PEP 517 uses an
underscore: build_backend. This seems a bit messy. I propose that we
change build_backend to build-backend for consistency.
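That is, with a made-up backend module path just for illustration, the
whole table would read:

    [build-system]
    requires = ["flit"]
    build-backend = "flit.buildapi"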
Dashes and underscores can both be used in a TOML key without needing
quoting.

Thanks,
Thomas
_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

From brett at python.org Wed Nov 23 14:17:51 2016
From: brett at python.org (Brett Cannon)
Date: Wed, 23 Nov 2016 19:17:51 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID:

Thanks for starting this up again!

My vote is for 1c (easier to add 1a later), and dashes (for some reason
I just like how they look more in config files).

On Wed, Nov 23, 2016, 06:36 Thomas Kluyver, wrote:

> I'd like to push PEP 517 forwards again. This PEP specifies a general
> build system interface so that a source tree can specify a tool (such as
> flit), and pip can ask that tool to generate a wheel. This would allow
> us to install from an sdist or a VCS checkout without running a setup.py
> file.
>
> https://www.python.org/dev/peps/pep-0517/
>
> Developments since the last time this was discussed (to my knowledge):
> - It now uses the pyproject.toml file specified in PEP 518, which was
> already accepted. 517 originally specified a JSON file; 518 explains why
> TOML is preferred (basically: comments).
> - I have implemented the proposed build-system API in a pull request for
> Flit; this has been fairly straightforward so far.
> https://github.com/takluyver/flit/pull/77
>
> Questions:
> 1. Editable installs. The PEP currently specifies a hook to do an
> editable install (like 'pip install -e' or 'setup.py develop') into a
> given prefix. I don't think that specifying a prefix is sufficient to
> cover '--user' installation, which uses a different install scheme,
> especially on Windows and OSX framework builds. We can:
> a. Add an extra parameter 'user' to the hook, to override the prefix and
> do a user install.
> b. Leave it as is, and do not support editable user installation (which
> would make me sad, as I do editable user installs regularly)
> c. Decide that editable installs are too fiddly to standardise, and
> leave it to users to invoke a tool directly to do an editable install.
>
> 2. Dash vs. underscore, bikeshed reloaded! Currently, the table name
> uses a dash: [build-system], but the key added by PEP 517 uses an
> underscore: build_backend. This seems a bit messy. I propose that we
> change build_backend to build-backend for consistency. Dashes and
> underscores can both be used in a TOML key without needing quoting.
>
> Thanks,
> Thomas
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Wed Nov 23 14:22:12 2016
From: donald at stufft.io (Donald Stufft)
Date: Wed, 23 Nov 2016 14:22:12 -0500
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID:

My 2 cents: I think at a minimum we should get PEP 518 support into pip
first. I don't think layering more things on top of a PEP that isn't
yet implemented is a good approach to this.

Sent from my iPhone

> On Nov 23, 2016, at 9:35 AM, Thomas Kluyver wrote:
>
> I'd like to push PEP 517 forwards again.
From fred at fdrake.net Wed Nov 23 14:24:42 2016
From: fred at fdrake.net (Fred Drake)
Date: Wed, 23 Nov 2016 14:24:42 -0500
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID:

On Wed, Nov 23, 2016 at 2:17 PM, Brett Cannon wrote:
> My vote is for 1c (easier to add 1a later), and dashes (for some reason I
> just like how they look more in config files).

I'm glad I'm not the only one!

Configuration shouldn't require the use of the hated 'shift' key.

-Fred

--
Fred L. Drake, Jr.
"A storm broke loose in my mind." --Albert Einstein

From brett at python.org Wed Nov 23 14:30:11 2016
From: brett at python.org (Brett Cannon)
Date: Wed, 23 Nov 2016 19:30:11 +0000
Subject: [Distutils] Packaging multiple wheels in the same archive
In-Reply-To: References: Message-ID:

This ties into the pipfile idea Kenneth is working on, since it would
then make sense to make a wagon/wheelhouse for a lock file.

To also tie into the container aspect: if you dev on Windows but deploy
to Linux, this would allow gathering your dependencies for Linux
locally on your Windows box and then deploying the set as a unit to
your server (something Steve Dower and I have thought about, and why we
support a lock file concept).

And if we use zip files with no nesting, then as long as it's only
Python code you could use zipimporter on the bundle directly.
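E.g. (sketch only -- the bundle and package names here are made up):

    # the stdlib zipimport machinery imports straight from a flat zip
    # of pure-Python packages once the archive is on sys.path
    import sys
    sys.path.insert(0, 'bundle.zip')
    import somepackage  # resolved from inside bundle.zip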
On Tue, Nov 22, 2016, 22:07 Nick Coghlan, wrote:

> [Some folks are going to get this twice - unfortunately, Google's
> mailing list mirrors are fundamentally broken, so replies to them
> don't actually go to the original mailing list properly]
>
> (Note for context: I stumbled across Wagon recently, and commented
> that we don't currently have a good target-environment-independent way
> of bundling up a set of wheels as a single transferable unit)
>
> On 23 November 2016 at 03:44, Nir Cohen wrote:
> > We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do
> just
> > that and that's what we currently use to create and install our plugins.
> > While wheel solves the problem of generating wheels, there is no single,
> > standard method for taking an entire set of dependencies packaged in a
> > single location and installing them in a different location.
>
> Where I see this being potentially valuable is in terms of having a
> common "multiwheel" transfer format that can be used for cases where
> the goal is essentially wheelhouse caching and transfer. The two main
> cases I'm aware of where this comes up:
>
> - offline installation support (i.e. the Cloudify plugins use case,
> where the installation environment doesn't have networked access to an
> index server)
> - saving and restoring the wheelhouse cache (e.g. this comes up in
> container build pipelines)
>
> The latter problem arises from an issue with the way some container
> build environments (most notably Docker's) currently work: they always
> run in a clean environment, which means they can't see the host's
> wheel cache. One of the solutions to this is to let container builds
> specify a "cache state" which is archived by the build management
> service at the end of the build process, and then restored when
> starting the next incremental image build.
>
> This kind of cache transfer is already *possible* today, but having a
> standardised way of doing it makes it easier for people to write
> general purpose tooling around the concept, without requiring that the
> tool used to create the archive be the same tool used to unpack it at
> install time.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com Wed Nov 23 14:41:05 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 23 Nov 2016 11:41:05 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID:

Hi,

On Wed, Nov 23, 2016 at 6:35 AM, Thomas Kluyver wrote:
> I'd like to push PEP 517 forwards again. This PEP specifies a general
> build system interface so that a source tree can specify a tool (such as
> flit), and pip can ask that tool to generate a wheel. This would allow
> us to install from an sdist or a VCS checkout without running a setup.py
> file.
>
> https://www.python.org/dev/peps/pep-0517/
>
> Developments since the last time this was discussed (to my knowledge):
> - It now uses the pyproject.toml file specified in PEP 518, which was
> already accepted. 517 originally specified a JSON file; 518 explains why
> TOML is preferred (basically: comments).
> - I have implemented the proposed build-system API in a pull request for
> Flit; this has been fairly straightforward so far.
> https://github.com/takluyver/flit/pull/77
>
> Questions:
> 1. Editable installs. The PEP currently specifies a hook to do an
> editable install (like 'pip install -e' or 'setup.py develop') into a
> given prefix. I don't think that specifying a prefix is sufficient to
> cover '--user' installation, which uses a different install scheme,
> especially on Windows and OSX framework builds. We can:
> a. Add an extra parameter 'user' to the hook, to override the prefix and
> do a user install.
> b. Leave it as is, and do not support editable user installation (which
> would make me sad, as I do editable user installs regularly)
> c. Decide that editable installs are too fiddly to standardise, and
> leave it to users to invoke a tool directly to do an editable install.

I think the standard advice nowadays is to do user installs via pip,
so I would vote for some mechanism to do this - option a).

> 2. Dash vs. underscore, bikeshed reloaded! Currently, the table name
> uses a dash: [build-system], but the key added by PEP 517 uses an
> underscore: build_backend. This seems a bit messy. I propose that we
> change build_backend to build-backend for consistency. Dashes and
> underscores can both be used in a TOML key without needing quoting.

dash seems good to me too,

Cheers,

Matthew

From p.f.moore at gmail.com Wed Nov 23 17:24:31 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 23 Nov 2016 22:24:31 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID:

On 23 November 2016 at 19:17, Brett Cannon wrote:
> My vote is for 1c (easier to add 1a later), and dashes (for some reason I
> just like how they look more in config files).
I agree, 1c sounds like a reasonable starting place (but I don't tend
to use editable installs, so treat my views on this with caution :-))
Also, I like dashes.

But I agree with Donald here. Layering multiple unimplemented features
sounds like a mistake. Can we not get PEP 518 implemented before we
start on this again? Thomas, are you interested in implementing the pip
side of either or both of PEPs 517 and 518?

Paul

From chris.barker at noaa.gov  Wed Nov 23 18:14:56 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 23 Nov 2016 15:14:56 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID: 

On Wed, Nov 23, 2016 at 6:35 AM, Thomas Kluyver wrote:

> Questions:
> 1. Editable installs. The PEP currenly specifies a hook to do an
> editable install (like 'pip install -e' or 'setup.py develop') into a
> given prefix. I don't think that specifying a prefix is sufficient to
> cover '--user' installation, which uses a different install scheme,
> especially on Windows and OSX framework builds. We can:
> a. Add an extra parameter 'user' to the hook, to override the prefix and
> do a user install.
> b. Leave it as is, and do not support editable user installation (which
> would make me sad, as I do editable user installs regularly)

Please, please, let's figure SOMETHING out here - editable installs (or
"develop" installs) are a critical tool. Frankly, I don't know how anyone
can develop a package without them.

Back in the day I struggled mightily with kludging sys.path, and relative
imports that never really worked right, and on and on -- it SUCKED.

Then I discovered setuptools develop mode -- yeah! In fact, I don't think
I'd ever use setuptools at all if I didn't need it to get develop mode!

> c. Decide that editable installs are too fiddly to standardise, and
> leave it to users to invoke a tool directly to do an editable install.

Not sure what that means -- does that mean that you couldn't get an
editable install with pip, but rather you would do:

setup.py develop if you had setuptools as your build system, and

some_other_command if you had some other build tool?

Not too bad, but why not have a standard way to invoke develop mode? If
the tool can support it, why not have a way for pip to tell an arbitrary
build system to "please install this package in develop mode"?

On the other hand:

I've always thought we were moving toward proper separation of concerns,
in which case, pip should be focused on resolving dependencies and
finding and installing packages.

Should it even be possible to build and install a package from source
with pip?

But if it is, then it might as well support editable installs as well.

The complication I see here is that a tool can't know how to install in
editable mode unless it knows about the python environment it is running
in -- which is easy for a tool built with python, but a problem for a
tool written some other way.

However, I see from PEP 517:

The build backend object is expected to have attributes which provide
some or all of the following hooks. The common config_settings argument
is described after the individual hooks:

def get_build_requires(config_settings):
    ...

So I guess we can count on a Python front end, at least, so 1(a) should
be doable.

In fact, is the user-install issue any different for editable installs
than regular ones?

-CHB

> 2. Dash vs. underscore, bikeshed reloaded!
> Currently, the table name uses a dash: [build-system], but the key
> added by PEP 517 uses an underscore: build_backend. This seems a bit
> messy. I propose that we change build_backend to build-backend for
> consistency. Dashes and underscores can both be used in a TOML key
> without needing quoting.
>
> Thanks,
> Thomas
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dholth at gmail.com  Wed Nov 23 19:23:22 2016
From: dholth at gmail.com (Daniel Holth)
Date: Thu, 24 Nov 2016 00:23:22 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID: 

I wouldn't be afraid of editable installs. They are trivial and involve
building the package in place and putting a .pth file where it will be
noticed. Specify that editable packages can't necessarily be uninstalled
in a standard way, and you are done.

The bespoke build tool tells pip where the package root is (where
.dist-info will be written), usually . or ./src, then pip does .pth.

On Wed, Nov 23, 2016, 17:16 Chris Barker wrote:

> On Wed, Nov 23, 2016 at 6:35 AM, Thomas Kluyver wrote:
>
> Questions:
> 1. Editable installs. The PEP currenly specifies a hook to do an
> editable install (like 'pip install -e' or 'setup.py develop') into a
> given prefix. I don't think that specifying a prefix is sufficient to
> cover '--user' installation, which uses a different install scheme,
> especially on Windows and OSX framework builds. We can:
> a. Add an extra parameter 'user' to the hook, to override the prefix and
> do a user install.
> b. Leave it as is, and do not support editable user installation (which
> would make me sad, as I do editable user installs regularly)
>
> Please, please, let's figure SOMETHING our here - editable installs (or
> "develop" installs) are a critical tool. Frankly, I don't know how anyone
> can develop a package without them.
>
> Back in the day I struggle mightily with kludging sys.path, and, relative
> imports that never really worked right, and on and on -- it SUCKED.
>
> Then I discovered setuptools develop mode -- yeah! IN fact, I don't think
> I'd ever use setuptools at all if I didn't need it to get develop mode!
>
> c. Decide that editable installs are too fiddly to standardise, and
> leave it to users to invoke a tool directly to do an editable install.
>
> Not sure what that means -- does that mean that you couldn't get an
> editable isntall with pip? but rather you would do:
>
> setup.py develop if you had setuptools as your build system, and
>
> some_other_command if you had some other build tool?
>
> Not too bad, but why not have a standard way to invoke develop mode? If
> the tool can support it, why not have a way for pip to tell an arbitrary
> build system to "please install this package in develop mode"
>
> On the other hand:
>
> I've always thought we were moving toward proper separation of concerns,
> in which case, pip should be focused on resolving dependencies and finding
> and installing packages.
>
> Should it even be possible to build and install a package from source with
> pip?
> > But if it is, then it might as well support editable installs as well. > > The complication I see here is that a tool can't know how to install in > editable mode unless it knows about the python environment it it running in > -- which is easy for a tool built with python, but a problem for a tool > written some other way. > > However, I see from PEP 517: > > > The build backend object is expected to have attributes which provide some > or all of the following hooks. The commonconfig_settings argument is > described after the individual hooks: > > def get_build_requires(config_settings): > ... > > > So I guess we can count on a Python front end, at least, so 1(a) should be > doable. > > In fact, is the user-install issue any different for editable installs > than regular ones? > > -CHB > > > > > > > > > > > > > > 2. Dash vs. underscore, bikeshed reloaded! Currently, the table name > uses a dash: [build-system], but the key added by PEP 517 uses an > underscore: build_backend. This seems a bit messy. I propose that we > change build_backend to build-backend for consistency. Dashes and > underscores can both be used in a TOML key without needing quoting. > > Thanks, > Thomas > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Nov 23 19:32:50 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 23 Nov 2016 16:32:50 -0800 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID: On Wed, Nov 23, 2016 at 3:14 PM, Chris Barker wrote: > On Wed, Nov 23, 2016 at 6:35 AM, Thomas Kluyver > wrote: > >> >> Questions: >> 1. Editable installs. The PEP currenly specifies a hook to do an >> editable install (like 'pip install -e' or 'setup.py develop') into a >> given prefix. I don't think that specifying a prefix is sufficient to >> cover '--user' installation, which uses a different install scheme, >> especially on Windows and OSX framework builds. We can: >> a. Add an extra parameter 'user' to the hook, to override the prefix and >> do a user install. >> b. Leave it as is, and do not support editable user installation (which >> would make me sad, as I do editable user installs regularly) > > > Please, please, let's figure SOMETHING our here - editable installs (or > "develop" installs) are a critical tool. Frankly, I don't know how anyone > can develop a package without them. Would a 'pip watch' command that watched your source tree and rebuilt/reinstalled it whenever you edited a file work for you? What do you do for conda packages? Does conda have an editable install equivalent? 
The problem with editable installs is that they can't really be made to work "properly", in the sense of having solid invariants and fitting nicely into a rigorously defined metadata model -- they're always going to have weird footguns around the metadata / .py files / .so files getting out-of-sync with each other, and they make it harder to improve pip's infrastructure around upgrading, because upgrading requires pip to have a solid understanding of exactly what's installed. So it's absolutely possible to have some useful hack, like we do now, but useful hacks by their nature are hard to standardize: standardization means "this is the final solution to this problem, it won't need further tweaking", and editable installs really feel like they might benefit from further tweaking. Or something. Also note that just like we decided to split the basic pyproject.toml proposal (now PEP 518) from the build system interface proposal (now PEP 517), it might (probably) makes sense to split the editable install part of the build system interface proposal from the rest, just in the interests of making incremental progress. -n -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Thu Nov 24 04:06:28 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 24 Nov 2016 09:06:28 +0000 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID: On 24 November 2016 at 00:32, Nathaniel Smith wrote: > Also note that just like we decided to split the basic pyproject.toml > proposal (now PEP 518) from the build system interface proposal (now > PEP 517), it might (probably) makes sense to split the editable > install part of the build system interface proposal from the rest, > just in the interests of making incremental progress. This would be my preference. I'm assuming (again, incremental progress) that "pip install -e" will still work as at present for projects that don't use the new infrastructure, so it's hardly an urgent need. Paul From ncoghlan at gmail.com Thu Nov 24 08:49:46 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Nov 2016 23:49:46 +1000 Subject: [Distutils] Packaging multiple wheels in the same archive In-Reply-To: References: Message-ID: On 24 November 2016 at 16:45, Nir Cohen wrote: > Well, creating on Windows and deploying on Linux will only be possible if > the entire set of dependencies either have no C extensions or are manylinux1 > wheels.. but yeah, that's pretty much what we're doing right now with our > reference implementation. > > Regarding zipimporter, as far as I understand (correct me if I'm wrong) > there's no such a solution for wheels (i.e. you can't use zipimporter on a > zip of wheels) so does that means we'll have to package python files for all > dependencies directly in the archive? 
Right, there would be a couple of significant barriers to doing this
in the general case:

- firstly, wheels themselves are officially only a transport format,
with direct imports being a matter of "we're not going to do anything
to deliberately break the cases that work, but you're also on your own
if anything goes wrong for any given use case":
https://www.python.org/dev/peps/pep-0427/#is-it-possible-to-import-python-code-directly-from-a-wheel-file
- secondly, I don't think zipimporter handles archives-within-archives
- it handles directories within archives, so it would require that the
individual wheels be unpacked and the whole structure archived as one
big directory tree

Overall, it sounds to me more like the "archive an entire installed
virtual environment" use case than it does the "transfer a collection
of pre-built artifacts from point A to point B" use case (which, to be
fair, is an interesting use case in its own right, it's just a slightly
different problem).

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From njs at pobox.com  Thu Nov 24 11:28:02 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 24 Nov 2016 08:28:02 -0800
Subject: [Distutils] Packaging multiple wheels in the same archive
In-Reply-To: 
References: 
Message-ID: 

On Nov 23, 2016 11:33 AM, "Brett Cannon" wrote:
>
> This then ties into Kenneth's pipfile idea he's working on as it then
> makes sense to make a wagon/wheelhouse for a lock file. To also tie into
> the container aspect, if you dev on Windows but deploy to Linux, this can
> allow for gathering your dependencies locally for Linux on your Windows box
> and then deploy the set as a unit to your server (something Steve Dower and
> I have thought about and why we support a lock file concept).
>
> And if we use zip files with no nesting then as long as it's only Python
> code you could use zipimporter on the bundle directly.

The "only Python code" restriction pretty much rules this out as anything
like a general solution though...

If people are investigating this, though, pex should also be considered
as a source of prior art / inspiration:

https://pex.readthedocs.io/en/stable/whatispex.html

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas at kluyver.me.uk  Thu Nov 24 14:50:36 2016
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 24 Nov 2016 19:50:36 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID: <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>

On Thu, Nov 24, 2016, at 12:23 AM, Daniel Holth wrote:
> I wouldn't be afraid of editable installs. They are trivial and
> involve building the package in place and putting a .pth file where it
> will be noticed. Specify editable packages can't necessarily be
> uninstalled in a standard way and you are done.
> The bespoke build tool tells pip where the package root is (where .dist-
> info will be written), usually . or ./src, then pip does .pth.

The way it's specified at present is for pip to ask the build tool
(setuptools, flit, etc.) to do an editable install by whatever means. I
hate the thing setuptools does with .pth files all over the place, so the
equivalent operation in flit symlinks packages to site-packages. I made a
PR to flit to handle this case better in uninstallation.

I think Nathaniel's right: editable installs are going to be a source of
controversy and complexity.
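(For the record, the symlink approach really is about that small --
roughly the following, though this is just an illustrative sketch and
not flit's actual code:

    import os
    import sysconfig

    def symlink_editable(package_dir):
        # point a site-packages entry at the in-development package
        site_packages = sysconfig.get_paths()["purelib"]
        dest = os.path.join(site_packages, os.path.basename(package_dir))
        os.symlink(os.path.abspath(package_dir), dest)

and uninstallation is then just removing the symlink.)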
Let's drop that hook from the PEP for now (i.e. 1c) - it's still useful
without it, and the design of the build interface allows extra hooks to
be specified and easily added in the future.

Does anyone feel strongly that we should keep the editable install hook
in the current PEP? If not, I'll make a PR removing it (and switching
the underscore to dash, as that seems to be the consensus).

I'm also happy to look into implementing PEPs 518 and 517 in pip, though
it might be a week or two before I get time.

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas at kluyver.me.uk  Thu Nov 24 14:51:28 2016
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 24 Nov 2016 19:51:28 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
Message-ID: <1480017088.784115.798350449.18F0CF62@webmail.messagingengine.com>

On Thu, Nov 24, 2016, at 07:50 PM, Thomas Kluyver wrote:
> I made a PR to flit to handle this case better in uninstallation.

I meant a PR to *pip*.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Thu Nov 24 15:07:12 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 24 Nov 2016 20:07:12 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
Message-ID: 

On 24 November 2016 at 19:50, Thomas Kluyver wrote:
> I hate the thing setuptools does with .pth files all over the place, so the
> equivalent operation in flit symlinks packages to site-packages.

Just curious - how does flit handle Windows for this? Symlinks aren't
really an option there (you need elevation to create a symlink).
Paul

From thomas at kluyver.me.uk  Thu Nov 24 16:51:23 2016
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 24 Nov 2016 21:51:23 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
Message-ID: <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>

On Thu, Nov 24, 2016, at 08:07 PM, Paul Moore wrote:
> Just curious - how does flit handle Windows for this? Symlinks aren't
> really an option there (you need elevation to create a symlink).
> Paul

It largely doesn't at present; it started out as a personal tool for me,
and I mostly use Linux. I have a few ideas about how to deal with
Windows, though:

- There's a feature called NTFS Junction Points, which is supposed to be
like symlinks, but only for directories. I believe these can be created
by regular users without elevated permissions. It might be possible to
use this for packages, albeit not for single-file modules.
- Create a script/tool to give a user the permission bit that allows
creating symlinks, so users who know what they're doing can enable it.
Maybe there's some reason it's admin-only by default, though? It's safe
enough for normal users on Unix, but I don't know how different Windows
is.
- Run a daemon to watch package folders and reinstall on any changes,
something like Nathaniel mentioned with 'pip watch'.
I probably wouldn't do this as part of flit, but I'd be happy to see it as a separate tool. As Nathaniel pointed out, this can actually support more stuff than setuptools editable installs, or my symlinks, because it can re-run build steps. Thomas From thomas at kluyver.me.uk Thu Nov 24 17:33:54 2016 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Thu, 24 Nov 2016 22:33:54 +0000 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID: <1480026834.815261.798441841.650369C3@webmail.messagingengine.com> I've made PRs against PEP 517 for: Underscore to dash in build-backend: https://github.com/python/peps/pull/139 1a: Add a user parameter to the install_editable hook https://github.com/python/peps/pull/140 OR: 1c: Get rid of the install_editable hook https://github.com/python/peps/pull/141 I'm leaning towards 1c; it can always be standardised in a separate PEP, and an optional hook added back. Thomas On Thu, Nov 24, 2016, at 09:06 AM, Paul Moore wrote: > On 24 November 2016 at 00:32, Nathaniel Smith wrote: > > Also note that just like we decided to split the basic pyproject.toml > > proposal (now PEP 518) from the build system interface proposal (now > > PEP 517), it might (probably) makes sense to split the editable > > install part of the build system interface proposal from the rest, > > just in the interests of making incremental progress. > > This would be my preference. I'm assuming (again, incremental > progress) that "pip install -e" will still work as at present for > projects that don't use the new infrastructure, so it's hardly an > urgent need. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Thu Nov 24 18:10:57 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 24 Nov 2016 23:10:57 +0000 Subject: [Distutils] PEP 517: Build system API In-Reply-To: <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com> <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> Message-ID: On 24 November 2016 at 21:51, Thomas Kluyver wrote: > On Thu, Nov 24, 2016, at 08:07 PM, Paul Moore wrote: >> Just curious - how does flit handle Windows for this? Symlinks aren't >> really an option there (you need elevation to create a symlink). >> Paul > > It largely doesn't at present; it started out as a personal tool for me, > and I mostly use Linux. I have a few ideas about how to deal with > Windows, though: > > - There's a feature called NTFS Junction Points, which is supposed to be > like symlinks, but only for directories. I believe these can be created > by regular users without elevated permissions. It might be possible to > use this for packages, albeit not for single file directories. > - Create a script/tool to give a user the permission bit that allows > creating symlinks, so users who know what they're doing can enable it. > Maybe there's some reason it's admin-only by default, though? It's safe > enough for normal users on Unix, but I don't know how different Windows > is. However you do it, it needs elevation, which is going to be a problem. 
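(About the best a tool could do is probe for the capability up front
and degrade gracefully -- an untested sketch:

    import os
    import tempfile

    def can_symlink():
        # try to create a symlink in a scratch directory; on Windows
        # this raises OSError unless the user holds the privilege
        with tempfile.TemporaryDirectory() as d:
            target = os.path.join(d, "target")
            link = os.path.join(d, "link")
            open(target, "w").close()
            try:
                os.symlink(target, link)
            except (OSError, NotImplementedError):
                return False
            return True

but that just tells you symlinks won't work; it doesn't give you a
replacement for them.)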
I don't think symlinks per se are any worse on Windows (apart from the
unfamiliarity aspect, which also applies to junction points, which may
mean tools don't handle them gracefully). But because you can't use
them without elevation/permission fiddling, they simply aren't an
end-user concept on Windows.

> - Run a daemon to watch package folders and reinstall on any changes,
> something like Nathaniel mentioned with 'pip watch'. I probably wouldn't
> do this as part of flit, but I'd be happy to see it as a separate tool.
> As Nathaniel pointed out, this can actually support more stuff than
> setuptools editable installs, or my symlinks, because it can re-run
> build steps.

That sounds like it could also have some issues making it work cleanly
on Windows (would it leave a console window open?).

Honestly, I don't see what's so bad about pth files. They are a
standard supported Python approach. Maybe setuptools' use of them is
messy? I recall it was possible to end up with a lot of clutter, but
that was going back to the egg days, I'm not sure if it's like that
any more. Why not just have a single pth file, maintained by the build
tool, for all editable installs? Most users would then only have one
or two pth files.

Anyway, this is OT, and I'm sure you'll come up with something
appropriate as and when the need arises. As I say, I was mostly just
curious.

Paul

From greg.ewing at canterbury.ac.nz  Thu Nov 24 17:19:22 2016
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 25 Nov 2016 11:19:22 +1300
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID: <5837676A.9010207@canterbury.ac.nz>

Thomas Kluyver wrote:
> - There's a feature called NTFS Junction Points, which is supposed to be
> like symlinks, but only for directories.

Things might have changed, but last time I played with junction points
they seemed rather dangerous, because if you deleted one using the GUI
it deleted the contents of the directory it was linked to. And it
appeared indistinguishable from a normal folder in the GUI, so you got
no warning that this was about to happen.

-- 
Greg

From ncoghlan at gmail.com  Thu Nov 24 20:04:04 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 25 Nov 2016 11:04:04 +1000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID: 

On 25 November 2016 at 09:10, Paul Moore wrote:
> Honestly, I don't see what's so bad about pth files. They are a
> standard supported Python approach. Maybe setuptools' use of them is
> messy? I recall it was possible to end up with a lot of clutter, but
> that was going back to the egg days, I'm not sure if it's like that
> any more. Why not just have a single pth file, maintained by the build
> tool, for all editable installs? Most users would then only have one
> or two pth files

The bad reputation of ".pth" doesn't generally stem from their normal
usage (i.e. just listing absolute or relative directory names that the
import system will then append to __path__).
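A completely tame example would be something like a "mypkg-dev.pth"
(name invented for illustration) sitting in site-packages and
containing nothing but directory names, one per line:

    /home/user/src/mypkg
    ../some/shared/dir

with each entry that actually exists simply being added to the search
path at startup.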
Rather, it stems from this little aspect: "Lines starting with import (followed by space or tab) are executed." (from https://docs.python.org/3/library/site.html ) As such, rather than purely being a way of extending "__path__" with additional directories, path files are *also* a way of programmatically customising a particular Python installation, without risking filename collisions the way sitecustomize and usercustomize do. Since setuptools is the first or only encounter most folks have with ".pth" files, and it makes use of such lines to execute arbitrary code at interpreter startup, the targeted criticism of "the 'import line' feature of path files and the specific ways that setuptools uses that feature are hard to debug when they break" morphed over time into the blanket criticism of "path files are bad". The main problem with fixing that is that folks really do rely on the auto-execution feature, sometimes even without knowing it (e.g. because setuptools uses it implicitly). Therefore, in order to deprecate and remove the magic import lines, we'd need to do something like adding an "__autorun__" directory to site-packages and user site-packages as a long term replacement for: - sitecustomize.py - usercustomize.py - executable import lines in path files Supporting such a subdirectory in older Python versions could then be bootstrapped via an "autorun.pth" file that provided the same start-up logic that 3.7+ did. I suspect this would actually be an improvement from a security awareness perspective as well, as it would make it much clearer that enabling implicit code execution at Python interpreter start-up is trivial if you have write access to the site-packages or user site-packages directory. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Thu Nov 24 21:26:32 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 25 Nov 2016 15:26:32 +1300 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com> <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> Message-ID: On 25 November 2016 at 14:04, Nick Coghlan wrote: .. > > The bad reputation of ".pth" doesn't generally stem from their normal > usage (i.e. just listing absolute or relative directory names that the > import system will then append to __path__). > > Rather, it stems from this little aspect: "Lines starting with import > (followed by space or tab) are executed." (from > https://docs.python.org/3/library/site.html ) I think its also worth exploring a targeted, modern namespace aware replacement. That is - there are two, related, use cases for .pth files vis-a-vis package installation. One is legacy namespace packages, which AFAICT are still in use - the migration is "complicated". The second is arguable a variant of that same thing: putting the current working dir into the PYTHONPATH without putting it in PYTHONPATH. The former case may be sufficiently covered by (forget the pep #) that we don't need to do anything, and the latter is certainly something that we should be able to deliver without needing the turing complete capability that you're suggesting. Something data driven rather than code driven. 
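Purely as illustration (the file format here is invented), the entire
consumer for a code-free replacement could be as small as:

    import os
    import sys

    def add_path_entries(listing_file):
        # append each listed directory (relative to the listing file's
        # own location) to sys.path, skipping blanks and comments
        base = os.path.dirname(os.path.abspath(listing_file))
        with open(listing_file) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                entry = os.path.normpath(os.path.join(base, line))
                if os.path.isdir(entry) and entry not in sys.path:
                    sys.path.append(entry)

No import lines, nothing executed - just data about which directories
get added.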
-Rob

From opensource at ronnypfannschmidt.de  Fri Nov 25 09:23:47 2016
From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt)
Date: Fri, 25 Nov 2016 15:23:47 +0100
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID: <57416035-4dd5-0ed7-4112-0bd3694aa980@ronnypfannschmidt.de>

Actually, editable installs can be made uninstallable trivially. In
gumby-elf I would create a fake wheel with files inside to facilitate
the path building for namespaces, plus a local version number (so pip
would create my exe files and uninstall cleanly).

-- Ronny

On 24.11.2016 01:23, Daniel Holth wrote:
>
> I wouldn't be afraid of editable installs. They are trivial and
> involve building the package in place and putting a .pth file where it
> will be noticed. Specify editable packages can't necessarily be
> uninstalled in a standard way and you are done.
>
> The bespoke build tool tells pip where the package root is (where
> .dist-info will be written), usually . or ./src, then pip does .pth.
>
> On Wed, Nov 23, 2016, 17:16 Chris Barker wrote:
>
>     On Wed, Nov 23, 2016 at 6:35 AM, Thomas Kluyver wrote:
>
>         Questions:
>         1. Editable installs. The PEP currenly specifies a hook to do an
>         editable install (like 'pip install -e' or 'setup.py develop')
>         into a given prefix. I don't think that specifying a prefix is
>         sufficient to cover '--user' installation, which uses a
>         different install scheme, especially on Windows and OSX
>         framework builds. We can:
>         a. Add an extra parameter 'user' to the hook, to override the
>         prefix and do a user install.
>         b. Leave it as is, and do not support editable user
>         installation (which would make me sad, as I do editable user
>         installs regularly)
>
>     Please, please, let's figure SOMETHING our here - editable
>     installs (or "develop" installs) are a critical tool. Frankly, I
>     don't know how anyone can develop a package without them.
>
>     Back in the day I struggle mightily with kludging sys.path, and,
>     relative imports that never really worked right, and on and on --
>     it SUCKED.
>
>     Then I discovered setuptools develop mode -- yeah! IN fact, I
>     don't think I'd ever use setuptools at all if I didn't need it to
>     get develop mode!
>
>         c. Decide that editable installs are too fiddly to
>         standardise, and leave it to users to invoke a tool directly
>         to do an editable install.
>
>     Not sure what that means -- does that mean that you couldn't get
>     an editable isntall with pip? but rather you would do:
>
>     setup.py develop if you had setuptools as your build system, and
>
>     some_other_command if you had some other build tool?
>
>     Not too bad, but why not have a standard way to invoke develop
>     mode? If the tool can support it, why not have a way for pip to
>     tell an arbitrary build system to "please install this package in
>     develop mode"
>
>     On the other hand:
>
>     I've always thought we were moving toward proper separation of
>     concerns, in which case, pip should be focused on resolving
>     dependencies and finding and installing packages.
>
>     Should it even be possible to build and install a package from
>     source with pip?
>
>     But if it is, then it might as well support editable installs as
>     well.
>
>     The complication I see here is that a tool can't know how to
>     install in editable mode unless it knows about the python
>     environment it it running in -- which is easy for a tool built
>     with python, but a problem for a tool written some other way.
> > However, I see from PEP 517: > > > The build backend object is expected to have attributes which > provide some or all of the following hooks. The > commonconfig_settings argument is described after the individual > hooks: > > def get_build_requires(config_settings): > ... > > > So I guess we can count on a Python front end, at least, so 1(a) > should be doable. > > In fact, is the user-install issue any different for editable > installs than regular ones? > > -CHB > > > > > > > > > > > > > > 2. Dash vs. underscore, bikeshed reloaded! Currently, the > table name > uses a dash: [build-system], but the key added by PEP 517 uses an > underscore: build_backend. This seems a bit messy. I propose > that we > change build_backend to build-backend for consistency. Dashes and > underscores can both be used in a TOML key without needing > quoting. > > Thanks, > Thomas > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at kluyver.me.uk Fri Nov 25 10:13:31 2016 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Fri, 25 Nov 2016 15:13:31 +0000 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> Message-ID: <1480086811.1023937.799000297.33A555DD@webmail.messagingengine.com> On Wed, Nov 23, 2016, at 07:22 PM, Donald Stufft wrote: > I think at a minimum we should get PEP 518 support into pip first. I > don't think layering more things on top of a PEP that isn't yet > implemented is a good approach to this. I went to make a start on this, but I got stuck on whether and how pip should create temporary environments to install build dependencies and build wheels. I've posted more details on the issue: https://github.com/pypa/pip/issues/3691 From ncoghlan at gmail.com Sat Nov 26 02:52:11 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Nov 2016 17:52:11 +1000 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com> <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> Message-ID: On 25 November 2016 at 12:26, Robert Collins wrote: > On 25 November 2016 at 14:04, Nick Coghlan wrote: >> The bad reputation of ".pth" doesn't generally stem from their normal >> usage (i.e. just listing absolute or relative directory names that the >> import system will then append to __path__). >> >> Rather, it stems from this little aspect: "Lines starting with import >> (followed by space or tab) are executed." (from >> https://docs.python.org/3/library/site.html ) > > I think its also worth exploring a targeted, modern namespace aware replacement. Right. 
For a long time I thought "existing .pth files, only with the import line execution deprecated" was the only approach that stood any chance whatsoever of being adopted in a reasonable time frame, but we can technically use the "executable .pth file" trick ourselves to let people opt in to a new sys.path extension format on earlier Python versions. > That is - there are two, related, use cases for .pth files vis-a-vis > package installation. One is legacy namespace packages, which AFAICT > are still in use - the migration is "complicated". The second is > arguable a variant of that same thing: putting the current working dir > into the PYTHONPATH without putting it in PYTHONPATH. Third case: making zip archive contents available to applications without unpacking the archive first. > The former case may be sufficiently covered by (forget the pep #) that > we don't need to do anything, PEP 420, and I believe the only thing that can get tricky there now is figuring out how to do things in such a way that they work with both old-style namespace packages and the automatic Python 3 ones. > and the latter is certainly something > that we should be able to deliver without needing the turing complete > capability that you're suggesting. Something data driven rather than > code driven. While I agree with this, I think there's two pieces to it: - how to handle the data-driven use cases that we're entirely fine with - how to handle the arbitrary-code-execution use cases that we'd like to discourage, but don't want to make entirely impossible The problem with proposing an entirely new implicit path manipulation format is that if we deprecate ".pth" files entirely, then that hits *both* of those groups equally, while if we don't deprecate them at all, then neither group has an incentive to migrate to the new system. By contrast, if we only propose deprecating "import" lines in ".pth" files, and also propose a more explicit approach to automatic code execution at interpreter startup, then it's only folks relying on the arbitrary-code-execution feature that would incur any migration costs. The language level pay-offs to justify that cost would be: - simplification of the current systems for implicit code execution at start-up - making ".pth" files less dangerous as a format Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From wes.turner at gmail.com Sat Nov 26 04:34:22 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 26 Nov 2016 03:34:22 -0600 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com> <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> Message-ID: On Friday, November 25, 2016, Nick Coghlan wrote: > On 25 November 2016 at 12:26, Robert Collins > wrote: > > On 25 November 2016 at 14:04, Nick Coghlan > wrote: > >> The bad reputation of ".pth" doesn't generally stem from their normal > >> usage (i.e. just listing absolute or relative directory names that the > >> import system will then append to __path__). > >> > >> Rather, it stems from this little aspect: "Lines starting with import > >> (followed by space or tab) are executed." (from > >> https://docs.python.org/3/library/site.html ) > > > > I think its also worth exploring a targeted, modern namespace aware > replacement. > > Right. 
For a long time I thought "existing .pth files, only with the > import line execution deprecated" was the only approach that stood any > chance whatsoever of being adopted in a reasonable time frame, but we > can technically use the "executable .pth file" trick ourselves to let > people opt in to a new sys.path extension format on earlier Python > versions. > > > That is - there are two, related, use cases for .pth files vis-a-vis > > package installation. One is legacy namespace packages, which AFAICT > > are still in use - the migration is "complicated". The second is > > arguable a variant of that same thing: putting the current working dir > > into the PYTHONPATH without putting it in PYTHONPATH. > > Third case: making zip archive contents available to applications > without unpacking the archive first. > > > The former case may be sufficiently covered by (forget the pep #) that > > we don't need to do anything, > > PEP 420, and I believe the only thing that can get tricky there now is > figuring out how to do things in such a way that they work with both > old-style namespace packages and the automatic Python 3 ones. > > > and the latter is certainly something > > that we should be able to deliver without needing the turing complete > > capability that you're suggesting. Something data driven rather than > > code driven. > > While I agree with this, I think there's two pieces to it: > > - how to handle the data-driven use cases that we're entirely fine with > - how to handle the arbitrary-code-execution use cases that we'd like > to discourage, but don't want to make entirely impossible > > The problem with proposing an entirely new implicit path manipulation > format is that if we deprecate ".pth" files entirely, then that hits > *both* of those groups equally, while if we don't deprecate them at > all, then neither group has an incentive to migrate to the new system. > > By contrast, if we only propose deprecating "import" lines in ".pth" > files, and also propose a more explicit approach to automatic code > execution at interpreter startup, then it's only folks relying on the > arbitrary-code-execution feature that would incur any migration costs. > The language level pay-offs to justify that cost would be: > > - simplification of the current systems for implicit code execution at > start-up There's python -m site But is there a -m tool for checking .pth files? (For code in .pth.py files)? ... def cache_pth_sys_path(file='cached_pth.pyc'): for (dirpath, pth, ouptut) in iter_pth_files(): with file(pth, 'r') as f: print('\n# pth %r' % (dirpath, pth), file=f) print(output, file=f) # ... def iter_pth_files(): for dirpath in sys.path: for pth in sorted(glob.glob(*.pth, dirpath)): output = file(pth, 'r', utf8).read().close() # imports = ast_imports(output) # PERF yield (dirpath, pth, output)) > - making ".pth" files less dangerous as a format > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, > Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ncoghlan at gmail.com  Sat Nov 26 06:18:42 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 26 Nov 2016 21:18:42 +1000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID: 

On 26 November 2016 at 19:34, Wes Turner wrote:
> On Friday, November 25, 2016, Nick Coghlan wrote:
>> By contrast, if we only propose deprecating "import" lines in ".pth"
>> files, and also propose a more explicit approach to automatic code
>> execution at interpreter startup, then it's only folks relying on the
>> arbitrary-code-execution feature that would incur any migration costs.
>> The language level pay-offs to justify that cost would be:
>>
>> - simplification of the current systems for implicit code execution at
>> start-up
>
> There's
>
> python -m site
>
> But is there a -m tool for checking .pth files?
> (For code in .pth.py files)?

Not currently, there's only the code that implicitly gets called from
site.main() (which is in turn called as a side effect of importing the
site module).

So even though that machinery is already there in a pure Python module
in every Python installation, none of it is exposed as a public
Python-level API.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From nir36g at gmail.com  Thu Nov 24 01:45:29 2016
From: nir36g at gmail.com (Nir Cohen)
Date: Thu, 24 Nov 2016 06:45:29 +0000
Subject: [Distutils] Packaging multiple wheels in the same archive
In-Reply-To: 
References: 
Message-ID: 

Well, creating on Windows and deploying on Linux will only be possible
if the entire set of dependencies either have no C extensions or are
manylinux1 wheels... but yeah, that's pretty much what we're doing right
now with our reference implementation.

Regarding zipimporter, as far as I understand (correct me if I'm wrong)
there's no such solution for wheels (i.e. you can't use zipimporter on a
zip of wheels), so does that mean we'll have to package python files for
all dependencies directly in the archive?

Our current implementation simply runs `pip wheel --wheel-dir
/my/wheelhouse/path --find-links /my/wheelhouse/path`, packages the
wheelhouse, adds metadata and applies a name to the file; on the
destination machine, wagon simply extracts the wheels and runs `pip
install --no-index --find-links /extracted/wheelhouse/path`.

On Wed, Nov 23, 2016 at 9:30 PM Brett Cannon wrote:

> This then ties into Kenneth's pipfile idea he's working on as it then
> makes sense to make a wagon/wheelhouse for a lock file. To also tie into
> the container aspect, if you dev on Windows but deploy to Linux, this can
> allow for gathering your dependencies locally for Linux on your Windows box
> and then deploy the set as a unit to your server (something Steve Dower and
> I have thought about and why we support a lock file concept).
>
> And if we use zip files with no nesting then as long as it's only Python
> code you could use zipimporter on the bundle directly.
> > On Tue, Nov 22, 2016, 22:07 Nick Coghlan, wrote: > > [Some folks are going to get this twice - unfortunately, Google's > mailing list mirrors are fundamentally broken, so replies to them > don't actually go to the original mailing list properly] > > (Note for context: I stumbled across Wagon recently, and commented > that we don't currently have a good target-environment-independent way > of bundling up a set of wheels as a single transferable unit) > > On 23 November 2016 at 03:44, Nir Cohen wrote: > > We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do > just > > that and that's what we currently use to create and install our plugins. > > While wheel solves the problem of generating wheels, there is no single, > > standard method for taking an entire set of dependencies packaged in a > > single location and installing them in a different location. > > Where I see this being potentially valuable is in terms of having a > common "multiwheel" transfer format that can be used for cases where > the goal is essentially wheelhouse caching and transfer. The two main > cases I'm aware of where this comes up: > > - offline installation support (i.e. the Cloudify plugins use case, > where the installation environment doesn't have networked access to an > index server) > - saving and restoring the wheelhouse cache (e.g. this comes up in > container build pipelines) > > The latter problem arises from an issue with the way some container > build environments (most notable Docker's) currently work: they always > run in a clean environment, which means they can't see the host's > wheel cache. One of the solutions to this is to let container builds > specify a "cache state" which is archived by the build management > service at the end of the build process, and then restored when > starting the next incremental image build. > > This kind of cache transfer is already *possible* today, but having a > standardised way of doing it makes it easier for people to write > general purpose tooling around the concept, without requiring that the > tool used to create the archive be the same tool used to unpack it at > install time. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nir36g at gmail.com Thu Nov 24 11:11:51 2016 From: nir36g at gmail.com (Nir Cohen) Date: Thu, 24 Nov 2016 16:11:51 +0000 Subject: [Distutils] Packaging multiple wheels in the same archive In-Reply-To: References: Message-ID: Should we provide that as an abstraction in the wagon maybe so that it allows for easy importing? On Thu, Nov 24, 2016 at 3:49 PM Nick Coghlan wrote: > On 24 November 2016 at 16:45, Nir Cohen wrote: > > Well, creating on Windows and deploying on Linux will only be possible if > > the entire set of dependencies either have no C extensions or are > manylinux1 > > wheels.. but yeah, that's pretty much what we're doing right now with our > > reference implementation. > > > > Regarding zipimporter, as far as I understand (correct me if I'm wrong) > > there's no such a solution for wheels (i.e. you can't use zipimporter on > a > > zip of wheels) so does that means we'll have to package python files for > all > > dependencies directly in the archive? 
> > Right, there would be a couple of significant barriers to doing this > in the general case: > > - firstly, wheels themselves are officially only a transport format, > with direct imports being a matter of "we're not going to do anything > to deliberately break the cases that work, but you're also on your own > if anything goes wrong for any given use case": > > https://www.python.org/dev/peps/pep-0427/#is-it-possible-to-import-python-code-directly-from-a-wheel-file > - secondly, I don't think zipimporter handles archives-within-archives > - it handles directories within archives, so it would require that the > individual wheels by unpacked and the whole structure archived as one > big directory tree > > Overall, it sounds to me more like the "archive an entire installed > virtual environment" use case than it does the "transfer a collection > of pre-built artifacts from point A to point B" use case (which, to be > fair, is an interesting use case in its own right, its just a slightly > different problem). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccdjelfa at gmail.com Thu Nov 24 11:24:41 2016 From: ccdjelfa at gmail.com (Code Club Djelfa) Date: Thu, 24 Nov 2016 17:24:41 +0100 Subject: [Distutils] how to use python with cmd! Message-ID: hi i am from algeria and i ask you! how to use python with cmd in windows 7 and How to download and install Python Packages and Modules with Pip help me *-* *-* -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sat Nov 26 15:41:07 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 26 Nov 2016 14:41:07 -0600 Subject: [Distutils] PEP 517: Build system API In-Reply-To: References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com> <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com> <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com> Message-ID: I suppose a __main__.py could also/instead be added as - site.tools.__main__ https://github.com/python/cpython/blob/master/Lib/site.py - site.__doc__ - site._script() - distutils.__main__ https://github.com/python/cpython/tree/master/Lib/distutils/command - setuptools.__main__ https://github.com/pypa/setuptools/blob/master/setup.py - _gen_console_scripts() # $ easy_install[%s] - distutils.setup_keywords() - pip.commands._ https://github.com/pypa/pip/tree/master/pip/commands And then how does site.py interact with importlib and .pth files? - https://github.com/python/cpython/blob/master/Lib/importlib/abc.py - https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap_external.py - source_from_cache() On Saturday, November 26, 2016, Nick Coghlan wrote: > On 26 November 2016 at 19:34, Wes Turner > wrote: > > On Friday, November 25, 2016, Nick Coghlan > wrote: > >> By contrast, if we only propose deprecating "import" lines in ".pth" > >> files, and also propose a more explicit approach to automatic code > >> execution at interpreter startup, then it's only folks relying on the > >> arbitrary-code-execution feature that would incur any migration costs. > >> The language level pay-offs to justify that cost would be: > >> > >> - simplification of the current systems for implicit code execution at > >> start-up > > > > There's > > > > python -m site > > > > But is there a -m tool for checking .pth files? > > (For code in .pth.py files)? 
>
> Not currently, there's only the code that implicitly gets called from
> site.main() (which is in turn called as a side effect of importing the
> site module).
>
> So even though that machinery is already there in a pure Python module
> in every Python installation, none of it is exposed as a public
> Python-level API.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brett at python.org  Sun Nov 27 16:22:50 2016
From: brett at python.org (Brett Cannon)
Date: Sun, 27 Nov 2016 21:22:50 +0000
Subject: [Distutils] how to use python with cmd!
In-Reply-To: 
References: 
Message-ID: 

The best place to ask for this kind of general help is the python-list
or python-tutor mailing lists.

On Sat, 26 Nov 2016 at 09:41 Code Club Djelfa wrote:

> hi
> i am from algeria and i ask you!
> how to use python with cmd in windows 7
> and How to download and install Python Packages and Modules with Pip
> help me *-* *-*
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Mon Nov 28 12:48:09 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 28 Nov 2016 09:48:09 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: 
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID: 

On Wed, Nov 23, 2016 at 4:32 PM, Nathaniel Smith wrote:
> On Wed, Nov 23, 2016 at 3:14 PM, Chris Barker wrote:
> >
> > Please, please, let's figure SOMETHING our here - editable installs (or
> > "develop" installs) are a critical tool. Frankly, I don't know how anyone
> > can develop a package without them.
>
> Would a 'pip watch' command that watched your source tree and
> rebuilt/reinstalled it whenever you edited a file work for you?

probably, yes -- though that seems pretty inefficient -- I suppose it
could be smart enough to only update the changed file(s), which wouldn't
be too bad.

as for rebuild -- setuptools' develop mode doesn't rebuild anyway -- you
have to rebuild by hand if you change anything that isn't pure python --
which frankly works fine, as extensions need to be rebuilt anyhow.

> What do you do for conda packages?

develop mode is for, well, developing -- so no need for a conda package,
you are working from source by definition, and conda is a packaging
system, not a build system (i.e. for python packages, conda build
usually calls setuptools (ideally via pip) to do the building anyway)

> Does conda have an editable install equivalent?

despite what I said above, conda does have a develop command:

http://conda.pydata.org/docs/commands/build/conda-develop.html

It's needed because conda does some trickery with editing relative paths
for linking shared libs at install time. If you use setuptools' develop
mode with an extension module, it may find the wrong version of libs at
run time.

I honestly don't know how well it works -- I built my own kludge for my
stuff before it was ready to go:

https://github.com/NOAA-ORR-ERD/PyGnome/blob/master/py_gnome/re_link_for_anaconda.py

And this is a Mac-only solution -- linking happens differently on
Windows, such that I think this is a non-issue, and not sure about Linux
-- we are deploying with conda on Linux, but I don't think anyone is
developing on Linux.
From brett at python.org Sun Nov 27 16:22:50 2016
From: brett at python.org (Brett Cannon)
Date: Sun, 27 Nov 2016 21:22:50 +0000
Subject: [Distutils] how to use python with cmd!
In-Reply-To:
References:
Message-ID:

The best place to ask for this kind of general help is the python-list
or python-tutor mailing lists.

On Sat, 26 Nov 2016 at 09:41 Code Club Djelfa wrote:
> hi, I am from Algeria and I ask you: how do I use Python with cmd on
> Windows 7, and how do I download and install Python packages and
> modules with pip? help me *-* *-*

From chris.barker at noaa.gov Mon Nov 28 12:48:09 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 28 Nov 2016 09:48:09 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
Message-ID:

On Wed, Nov 23, 2016 at 4:32 PM, Nathaniel Smith wrote:
> On Wed, Nov 23, 2016 at 3:14 PM, Chris Barker wrote:
>> Please, please, let's figure SOMETHING out here - editable installs
>> (or "develop" installs) are a critical tool. Frankly, I don't know
>> how anyone can develop a package without them.
>
> Would a 'pip watch' command that watched your source tree and
> rebuilt/reinstalled it whenever you edited a file work for you?

probably, yes -- though that seems pretty inefficient -- I suppose it
could be smart enough to only update the changed file(s), which wouldn't
be too bad.

as for rebuilding -- setuptools' develop mode doesn't rebuild anyway --
you have to rebuild by hand if you change anything that isn't pure
Python -- which frankly works fine, as extensions need to be rebuilt
anyhow.

> What do you do for conda packages?

develop mode is for, well, developing -- so no need for a conda package:
you are working from source by definition, and conda is a packaging
system, not a build system (i.e. for Python packages, conda build
usually calls setuptools (ideally via pip) to do the building anyway).

> Does conda have an editable install equivalent?

despite what I said above, conda does have a develop command:

http://conda.pydata.org/docs/commands/build/conda-develop.html

It's needed because conda does some trickery with editing relative paths
for linking shared libs at install time. If you use setuptools' develop
mode with an extension module, it may find the wrong version of the libs
at run time.

I honestly don't know how well it works -- I built my own kludge for my
stuff before it was ready to go:

https://github.com/NOAA-ORR-ERD/PyGnome/blob/master/py_gnome/re_link_for_anaconda.py

And this is a Mac-only solution -- linking happens differently on
Windows, such that I think this is a non-issue, and I'm not sure about
Linux -- we are deploying with conda on Linux, but I don't think anyone
is developing on Linux.

> they make it harder
> to improve pip's infrastructure around upgrading, because upgrading
> requires pip to have a solid understanding of exactly what's
> installed. So it's absolutely possible to have some useful hack, like
> we do now, but useful hacks by their nature are hard to standardize:

True -- but I'd still rather have a useful hack than nothing.

> standardization means "this is the final solution to this problem, it
> won't need further tweaking", and editable installs really feel like
> they might benefit from further tweaking. Or something.

maybe mark it as an officially preliminary part of the standard?

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov
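(For concreteness, a minimal sketch of the kind of watch-and-rebuild
loop floated above -- polling modification times and re-running pip on
any change. The project path and polling interval are arbitrary, and a
real tool would use filesystem notifications rather than polling.)

    import subprocess
    import sys
    import time
    from pathlib import Path

    PROJECT = Path(".")   # hypothetical project root
    INTERVAL = 2.0        # seconds between polls

    def snapshot():
        # Map each source file to its mtime so edits can be detected.
        return {p: p.stat().st_mtime for p in PROJECT.rglob("*.py")}

    seen = snapshot()
    while True:
        time.sleep(INTERVAL)
        current = snapshot()
        if current != seen:
            seen = current
            # Rebuild and reinstall in place on every change.
            subprocess.call([sys.executable, "-m", "pip",
                             "install", "--upgrade", "."])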
From chris.barker at noaa.gov Mon Nov 28 12:53:32 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 28 Nov 2016 09:53:32 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

On Thu, Nov 24, 2016 at 3:10 PM, Paul Moore wrote:
> Honestly, I don't see what's so bad about pth files. They are a
> standard supported Python approach. Maybe setuptools' use of them is
> messy?

exactly. The fact that setuptools over-uses (abuses?) pth files doesn't
mean that they can't be used reasonably.

and given the differences between filesystems, it's a not-too-bad way to
simulate symbolic links :-)

> Why not just have a single pth file, maintained by the build
> tool, for all editable installs?

shouldn't that be maintained by the install tool? i.e. pip -- the whole
idea is that the install tool is different from the build tool, yes? and
adding a package in editable mode is an installation job, not a build
job.

Also -- the idea here is that pip will know it's installed, so it can
upgrade, de-install, etc., so it really is pip's job to maintain the
"editable_install.pth" file.

> Most users would then only have one
> or two pth files.

let's make sure we don't end up with the easy_install.pth nightmare!

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov

From p.f.moore at gmail.com Mon Nov 28 13:01:50 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 28 Nov 2016 18:01:50 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

On 28 November 2016 at 17:53, Chris Barker wrote:
>>> Why not just have a single pth file, maintained by the build
>>> tool, for all editable installs?
>>
>> shouldn't that be maintained by the install tool? i.e. pip -- the
>> whole idea is that the install tool is different from the build tool,
>> yes? and adding a package in editable mode is an installation job,
>> not a build job.
>>
>> Also -- the idea here is that pip will know it's installed, so it can
>> upgrade, de-install, etc., so it really is pip's job to maintain the
>> "editable_install.pth" file.

Sorry - I was confusing "build tool" vs "install tool" here. Not
intentionally, but the confusion is real. Setuptools is a build tool,
and yet (currently) handles editable installs. So IMO, part of
finalising editable install support would be thrashing out which
aspects of the process are the responsibility of the build tool, and
which the install tool. That's a non-trivial design job, so in the
interests of keeping things moving, it seems to me that "defer a
decision for now" remains the right decision here. Not to claim that
editable installs aren't important, simply to avoid having all of the
rest of the proposal stalled while we sort out the editable design.

Paul

From chris.barker at noaa.gov Mon Nov 28 13:14:27 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 28 Nov 2016 10:14:27 -0800
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

On Mon, Nov 28, 2016 at 10:01 AM, Paul Moore wrote:
> Sorry - I was confusing "build tool" vs "install tool" here. Not
> intentionally, but the confusion is real.

it sure is!

> Setuptools is a build tool,
> and yet (currently) handles editable installs.

setuptools is a build tool, and an install tool, and a runtime resource
manager, and a package manager / install tool -- a source of lots of
confusion, and what we are trying to get away from, yes? ;-)

> So IMO, part of
> finalising editable install support would be thrashing out which
> aspects of the process are the responsibility of the build tool, and
> which the install tool. That's a non-trivial design job, so in the
> interests of keeping things moving, it seems to me that "defer a
> decision for now" remains the right decision here.

fair enough.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov
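(To make the single-file idea discussed above concrete, a sketch of what
a consolidated "editable_install.pth" might contain, together with a
helper the install tool could use to maintain it. The file name,
location, and helper are hypothetical -- pip does not actually work this
way.)

    from pathlib import Path

    # Hypothetical consolidated .pth file: one project source directory
    # per line, which is exactly what site's .pth machinery appends to
    # sys.path, e.g.
    #
    #   /home/me/src/foo
    #   /home/me/src/bar/src
    PTH = Path("/path/to/site-packages/editable_install.pth")  # illustrative

    def add_editable(project_dir):
        entry = str(Path(project_dir).resolve())
        lines = PTH.read_text().splitlines() if PTH.exists() else []
        if entry not in lines:
            lines.append(entry)
            PTH.write_text("\n".join(lines) + "\n")

    def remove_editable(project_dir):
        entry = str(Path(project_dir).resolve())
        if PTH.exists():
            lines = [ln for ln in PTH.read_text().splitlines() if ln != entry]
            PTH.write_text("".join(ln + "\n" for ln in lines))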
From dholth at gmail.com Mon Nov 28 13:23:42 2016
From: dholth at gmail.com (Daniel Holth)
Date: Mon, 28 Nov 2016 18:23:42 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

Here is the design. The build tool provides the install tool with a
directory, or if you want to get fancy, a list of directories, that
should be added to PYTHONPATH. The build tool builds into that path. The
install tool makes sure Python can find it. This will make anyone who
uses setuptools happy.

So PEP 517 does not need an editable install hook to support editable
installs. It can just expose the equivalent of setuptools'
package_dir={'':'src'}, and include a 'build in place' hook, and the
feature is preserved.

In this way setuptools users can be happy, and only users of build
systems that can't work this way are waiting for a more complicated
design.

On Mon, Nov 28, 2016 at 1:01 PM Paul Moore wrote:
> On 28 November 2016 at 17:53, Chris Barker wrote:
>>> Why not just have a single pth file, maintained by the build
>>> tool, for all editable installs?
>>
>> shouldn't that be maintained by the install tool? i.e. pip -- the
>> whole idea is that the install tool is different from the build tool,
>> yes? and adding a package in editable mode is an installation job,
>> not a build job.
>>
>> Also -- the idea here is that pip will know it's installed, so it can
>> upgrade, de-install, etc., so it really is pip's job to maintain the
>> "editable_install.pth" file.
>
> Sorry - I was confusing "build tool" vs "install tool" here. Not
> intentionally, but the confusion is real. Setuptools is a build tool,
> and yet (currently) handles editable installs. So IMO, part of
> finalising editable install support would be thrashing out which
> aspects of the process are the responsibility of the build tool, and
> which the install tool. That's a non-trivial design job, so in the
> interests of keeping things moving, it seems to me that "defer a
> decision for now" remains the right decision here. Not to claim that
> editable installs aren't important, simply to avoid having all of the
> rest of the proposal stalled while we sort out the editable design.
>
> Paul
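(A sketch of the front-end side of that design: ask the backend to build
in place, then record the returned directories where Python will find
them -- here via a generated .pth file. "build_in_place" is the
hypothetical hook name from this thread, not a hook in any accepted
PEP, and the helper below is illustrative only.)

    import os

    def install_editable(backend, site_packages, project_name):
        # Hypothetical hook: the backend builds in place and reports the
        # director(ies) that should end up on sys.path.
        build_dirs = backend.build_in_place(config_settings={})
        pth_path = os.path.join(site_packages, project_name + "-editable.pth")
        with open(pth_path, "w") as f:
            for d in build_dirs:
                f.write(os.path.abspath(d) + "\n")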
From craigorlandorattray at gmail.com Mon Nov 28 20:26:28 2016
From: craigorlandorattray at gmail.com (craigorlandorattray at gmail.com)
Date: Mon, 28 Nov 2016 20:26:28 -0500
Subject: [Distutils] Adding modules to library
Message-ID: <583cd944.52a01f0a.b88a7.e0c4@mx.google.com>

Having trouble installing the numpy and picamera modules from the Python
shell. Need your help please. I am just learning the Python language.

Sent from Mail for Windows 10

From ncoghlan at gmail.com Tue Nov 29 00:58:56 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Nov 2016 15:58:56 +1000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

On 29 November 2016 at 04:23, Daniel Holth wrote:
> Here is the design. The build tool provides the install tool with a
> directory, or if you want to get fancy, a list of directories, that
> should be added to PYTHONPATH. The build tool builds into that path.
> The install tool makes sure Python can find it. This will make anyone
> who uses setuptools happy.
>
> So PEP 517 does not need an editable install hook to support editable
> installs. It can just expose the equivalent of setuptools'
> package_dir={'':'src'}, and include a 'build in place' hook, and the
> feature is preserved.
>
> In this way setuptools users can be happy, and only users of build
> systems that can't work this way are waiting for a more complicated
> design.

I think this would also help resolve one of the limitations of PEP 517
as it currently stands, which is the difference between
environment-per-build and shared-environment: the former is what you
want for releases, but the latter is often what you want locally (e.g.
to regenerate cffi and Cython bindings, or to ensure dependencies and
the project itself are importable).

"local_build" might be a better name for that than "build_in_place"
though, and the suggestion of "sort out release builds first, tackle
local builds in a follow-up PEP" still has merit (since the use cases
and trade-offs for the latter case are genuinely different from those
for builds that are expected to be published to an index server).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
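(A rough sketch of how a backend might expose that split: one hook for
isolated release builds, one for in-place local builds. Both signatures
are illustrative only -- "build_wheel" follows the draft PEP's naming,
while "local_build" is just the name floated above.)

    # Hypothetical backend module illustrating the release/local split.

    def build_wheel(wheel_directory, config_settings=None):
        """Release build: run in a fresh, isolated environment and write
        a .whl file into wheel_directory for publication."""
        raise NotImplementedError  # backend-specific build steps go here

    def local_build(config_settings=None):
        """Local build: build in place in the developer's shared
        environment (e.g. regenerating cffi/Cython artifacts) and return
        the list of directories that should be importable."""
        raise NotImplementedError  # backend-specific build steps go here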
From robertc at robertcollins.net Tue Nov 29 01:35:34 2016
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 29 Nov 2016 19:35:34 +1300
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID:

On 29 November 2016 at 07:23, Daniel Holth wrote:
> Here is the design. The build tool provides the install tool with a
> directory, or if you want to get fancy, a list of directories, that
> should be added to PYTHONPATH. The build tool builds into that path.
> The install tool makes sure Python can find it. This will make anyone
> who uses setuptools happy.
>
> So PEP 517 does not need an editable install hook to support editable
> installs. It can just expose the equivalent of setuptools'
> package_dir={'':'src'}, and include a 'build in place' hook, and the
> feature is preserved.
>
> In this way setuptools users can be happy, and only users of build
> systems that can't work this way are waiting for a more complicated
> design.

That's a possible route; note that some situations don't work with
in-place builds, and I don't see any reason to hardcode in-place builds
as *the* thing; this is why I was trying to punt on it, because -e is
actually somewhat deep in its own right.

-Rob

From ralf.gommers at gmail.com Tue Nov 29 03:11:19 2016
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 29 Nov 2016 21:11:19 +1300
Subject: [Distutils] Adding modules to library
In-Reply-To: <583cd944.52a01f0a.b88a7.e0c4@mx.google.com>
References: <583cd944.52a01f0a.b88a7.e0c4@mx.google.com>
Message-ID:

On Tue, Nov 29, 2016 at 2:26 PM, wrote:
> Having trouble installing the numpy and picamera modules from the
> Python shell. Need your help please. I am just learning the Python
> language.

On Windows, you're best off using a scientific Python distribution (like
Anaconda; see http://scipy.org/install.html#scientific-python-distributions
for all your options). If you just need numpy and already know how to
use pip, "pip install numpy" should work. Note that as soon as you need
more scientific packages (like scipy) that won't work though, so you're
better off starting with a distribution.

Note that the numpy mailing list or Stack Overflow are better places
than this list to ask how best to install numpy.

Cheers,
Ralf

From thomas at kluyver.me.uk Tue Nov 29 06:41:09 2016
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Tue, 29 Nov 2016 11:41:09 +0000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To:
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
Message-ID: <1480419669.3001712.802311657.5496A98A@webmail.messagingengine.com>

On Thu, Nov 24, 2016, at 11:10 PM, Paul Moore wrote:
> Honestly, I don't see what's so bad about pth files. They are a
> standard supported Python approach. Maybe setuptools' use of them is
> messy? I recall it was possible to end up with a lot of clutter, but
> that was going back to the egg days, I'm not sure if it's like that
> any more. Why not just have a single pth file, maintained by the build
> tool, for all editable installs? Most users would then only have one
> or two pth files.

They would definitely be a lot more tolerable if it wasn't for
setuptools' use of the execution loophole, and indeed that misfeature as
a whole.

But even without that, they're not ideal. I routinely have development
installs of quite a lot of different packages. If each one of those is
an entry on sys.path from a .pth file, there's a performance penalty, as
every 'import qux' has to stat() in dozens of directories which don't
contain qux at all:

    stat('.../foo/qux.py')
    stat('.../foo/qux/__init__.py')
    stat('.../foo/qux.so')
    stat('.../bar/qux.py')
    stat('.../bar/qux/__init__.py')
    ...
    stat('.../qux/qux.py')  # At last!

If you put tooling scripts/modules in the root of your project
directory, it also increases the chance of name clashes.

Maybe we should look into something more like symlinks for doing
editable installs, so a file called '.../site-packages/qux.pylink' would
contain the path to the qux module/package. This would only be read on
'import qux', so it wouldn't affect other imports at all.

Brett has merged my PR for option 1c, so PEP 517 no longer specifies a
hook for editable installs. If we need extra hooks to support editable
installs, we can specify them separately in a later PEP.

Thomas
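(A minimal sketch of how the .pylink idea could be prototyped today with
a meta path finder, without any interpreter changes. The ".pylink"
extension, its file format (a single directory path), and the finder
itself are all hypothetical.)

    import sys
    from importlib.machinery import PathFinder
    from pathlib import Path

    SITE_PACKAGES = Path("/path/to/site-packages")  # illustrative

    class PylinkFinder:
        """Resolve 'import qux' via a qux.pylink file naming qux's real
        parent directory, so only that one import pays any extra cost."""

        @classmethod
        def find_spec(cls, fullname, path=None, target=None):
            if "." in fullname:
                return None  # submodules resolve via the package's __path__
            link = SITE_PACKAGES / (fullname + ".pylink")
            if not link.exists():
                return None
            real_dir = link.read_text().strip()
            # Delegate the actual file lookup to the standard machinery.
            return PathFinder.find_spec(fullname, [real_dir])

    sys.meta_path.append(PylinkFinder)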
From ncoghlan at gmail.com Tue Nov 29 08:21:51 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Nov 2016 23:21:51 +1000
Subject: [Distutils] PEP 517: Build system API
In-Reply-To: <1480419669.3001712.802311657.5496A98A@webmail.messagingengine.com>
References: <1479911730.383490.797001697.2BB4C8CB@webmail.messagingengine.com>
 <1480017036.783566.798340441.126A00BB@webmail.messagingengine.com>
 <1480024283.808007.798411521.27AEA659@webmail.messagingengine.com>
 <1480419669.3001712.802311657.5496A98A@webmail.messagingengine.com>
Message-ID:

On 29 November 2016 at 21:41, Thomas Kluyver wrote:
> On Thu, Nov 24, 2016, at 11:10 PM, Paul Moore wrote:
> But even without that, they're not ideal. I routinely have development
> installs of quite a lot of different packages. If each one of those is
> an entry on sys.path from a .pth file, there's a performance penalty,
> as every 'import qux' has to stat() in dozens of directories which
> don't contain qux at all:
>
>     stat('.../foo/qux.py')
>     stat('.../foo/qux/__init__.py')
>     stat('.../foo/qux.so')
>     stat('.../bar/qux.py')
>     stat('.../bar/qux/__init__.py')
>     ...
>     stat('.../qux/qux.py')  # At last!

The Python 3 import system (and the importlib2 backport) caches
directory listings rather than making repeated stat calls, so the
performance penalty of longer paths is dramatically reduced relative to
the old "n stat calls per directory" approach. (For NFS-based imports
with network round trips involved, we're talking multiple orders of
magnitude reductions in application start-up times.)

> If you put tooling scripts/modules in the root of your project
> directory, it also increases the chance of name clashes.
>
> Maybe we should look into something more like symlinks for doing
> editable installs, so a file called '.../site-packages/qux.pylink'
> would contain the path to the qux module/package. This would only be
> read on 'import qux', so it wouldn't affect other imports at all.

The main challenge is to do this kind of thing without breaking PEP 376
(the directory-based installation metadata layout), and without coupling
the adoption period to CPython adoption cycles (where 2.7 hasn't even
completely supplanted 2.6, let alone the 3.x series taking over from
2.x).

An executable .pth file *could* be used to bootstrap a new system, but
then the interaction with setuptools may get even more complicated due
to relative import orders (as far as I know the order of processing for
pth files is entirely arbitrary).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
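(The caching described above is observable from the standard library:
the path-based finders stat each sys.path directory once and reuse the
cached listing, which is why importlib.invalidate_caches() exists. A
small self-contained demo, assuming nothing beyond the stdlib; the
directory and module names are made up for the example.)

    import importlib
    import pathlib
    import sys

    demo_dir = pathlib.Path("pylink_demo")  # hypothetical scratch directory
    demo_dir.mkdir(exist_ok=True)
    sys.path.insert(0, str(demo_dir))

    try:
        import freshly_made  # fails, but caches the directory listing
    except ImportError:
        pass

    (demo_dir / "freshly_made.py").write_text("VALUE = 1\n")

    # Without this call, the stale cached listing can hide the new file.
    importlib.invalidate_caches()
    import freshly_made
    print(freshly_made.VALUE)  # -> 1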