From luis.de.sousa at protonmail.ch Wed Jun 1 13:10:17 2016 From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=) Date: Wed, 01 Jun 2016 13:10:17 -0400 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: Hi again Ian, Installing setuptools for python 3 did away with that error, but the upload is still failing. I am not sure whether this is related to twine, but I'll leave the log here in any case; someone might suggest something. Thank you, Luís $ python3 setup.py sdist running sdist running egg_info creating hex_utils.egg-info writing hex_utils.egg-info/PKG-INFO writing top-level names to hex_utils.egg-info/top_level.txt writing dependency_links to hex_utils.egg-info/dependency_links.txt writing entry points to hex_utils.egg-info/entry_points.txt writing manifest file 'hex_utils.egg-info/SOURCES.txt' reading manifest file 'hex_utils.egg-info/SOURCES.txt' writing manifest file 'hex_utils.egg-info/SOURCES.txt' warning: sdist: standard file not found: should have one of README, README.rst, README.txt running check creating hex-utils-0.2 creating hex-utils-0.2/hex_utils creating hex-utils-0.2/hex_utils.egg-info making hard links in hex-utils-0.2... 
hard linking setup.py -> hex-utils-0.2 hard linking hex_utils/__init__.py -> hex-utils-0.2/hex_utils hard linking hex_utils/asc.py -> hex-utils-0.2/hex_utils hard linking hex_utils/asc2hasc.py -> hex-utils-0.2/hex_utils hard linking hex_utils/grid.py -> hex-utils-0.2/hex_utils hard linking hex_utils/hasc.py -> hex-utils-0.2/hex_utils hard linking hex_utils/hasc2gml.py -> hex-utils-0.2/hex_utils hard linking hex_utils/surfaceSimple.py -> hex-utils-0.2/hex_utils hard linking hex_utils.egg-info/PKG-INFO -> hex-utils-0.2/hex_utils.egg-info hard linking hex_utils.egg-info/SOURCES.txt -> hex-utils-0.2/hex_utils.egg-info hard linking hex_utils.egg-info/dependency_links.txt -> hex-utils-0.2/hex_utils.egg-info hard linking hex_utils.egg-info/entry_points.txt -> hex-utils-0.2/hex_utils.egg-info hard linking hex_utils.egg-info/top_level.txt -> hex-utils-0.2/hex_utils.egg-info Writing hex-utils-0.2/setup.cfg creating dist Creating tar archive removing 'hex-utils-0.2' (and everything under it) $ twine upload dist/hex-utils-0.2.sdist Uploading distributions to https://pypi.python.org/pypi ValueError: Cannot find file (or expand pattern): 'dist/hex-utils-0.2.sdist' $ ls -la dist total 12 drwxrwxrwx 1 root root 176 Jun 1 19:04 . drwxrwxrwx 1 root root 4096 Jun 1 19:04 .. -rwxrwxrwx 1 root root 6091 Jun 1 19:04 hex-utils-0.2.tar.gz Sent from [ProtonMail](https://protonmail.ch), encrypted email based in Switzerland. -------- Original Message -------- Subject: Re: [Distutils] PyPi upload fails with TypeError Local Time: 24 May 2016 8:16 PM UTC Time: 24 May 2016 18:16 From: graffatcolmingov at gmail.com To: luis.de.sousa at protonmail.ch CC: berker.peksag at gmail.com,Distutils-Sig at python.org Luis, it looks like you're running twine on Python 3 and setuptools is installed for Python 2. Try doing: python3 -m pip install setuptools or apt-get install -y python3-setuptools -------------- next part -------------- An HTML attachment was scrubbed... 
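A quick sanity check for Ian's suggestion (a sketch, not part of the original thread): confirm that setuptools is importable from the same Python 3 interpreter that runs setup.py and twine.

```shell
# Verify setuptools is visible to Python 3 after either install route
python3 -c "import setuptools; print(setuptools.__version__)"
```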
URL: From graffatcolmingov at gmail.com Wed Jun 1 13:40:24 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Wed, 1 Jun 2016 12:40:24 -0500 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: There's a warning in your output. Is there anything in dist/ ? On Jun 1, 2016 10:09 AM, "Lu??s de Sousa" wrote: > Hi again Ian, > > Installing setuptools for python 3 dealt away with that error but the > upload is still failing. I am not sure this is related to twine or not, but > I'll leave here the log in any case, someone might suggest something. > > Thank you, > > Lu?s > > $ python3 setup.py sdist > running sdist > running egg_info > creating hex_utils.egg-info > writing hex_utils.egg-info/PKG-INFO > writing top-level names to hex_utils.egg-info/top_level.txt > writing dependency_links to hex_utils.egg-info/dependency_links.txt > writing entry points to hex_utils.egg-info/entry_points.txt > writing manifest file 'hex_utils.egg-info/SOURCES.txt' > reading manifest file 'hex_utils.egg-info/SOURCES.txt' > writing manifest file 'hex_utils.egg-info/SOURCES.txt' > warning: sdist: standard file not found: should have one of README, > README.rst, README.txt > > running check > creating hex-utils-0.2 > creating hex-utils-0.2/hex_utils > creating hex-utils-0.2/hex_utils.egg-info > making hard links in hex-utils-0.2... 
> hard linking setup.py -> hex-utils-0.2 > hard linking hex_utils/__init__.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/asc.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/asc2hasc.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/grid.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/hasc.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/hasc2gml.py -> hex-utils-0.2/hex_utils > hard linking hex_utils/surfaceSimple.py -> hex-utils-0.2/hex_utils > hard linking hex_utils.egg-info/PKG-INFO -> > hex-utils-0.2/hex_utils.egg-info > hard linking hex_utils.egg-info/SOURCES.txt -> > hex-utils-0.2/hex_utils.egg-info > hard linking hex_utils.egg-info/dependency_links.txt -> > hex-utils-0.2/hex_utils.egg-info > hard linking hex_utils.egg-info/entry_points.txt -> > hex-utils-0.2/hex_utils.egg-info > hard linking hex_utils.egg-info/top_level.txt -> > hex-utils-0.2/hex_utils.egg-info > Writing hex-utils-0.2/setup.cfg > creating dist > Creating tar archive > removing 'hex-utils-0.2' (and everything under it) > > $ twine upload dist/hex-utils-0.2.sdist > Uploading distributions to https://pypi.python.org/pypi > ValueError: Cannot find file (or expand pattern): > 'dist/hex-utils-0.2.sdist' > > $ ls -la dist > total 12 > drwxrwxrwx 1 root root 176 Jun 1 19:04 . > drwxrwxrwx 1 root root 4096 Jun 1 19:04 .. > -rwxrwxrwx 1 root root 6091 Jun 1 19:04 hex-utils-0.2.tar.gz > > > > *Sent from ProtonMail , encrypted email based in > Switzerland.* > > > -------- Original Message -------- > Subject: Re: [Distutils] PyPi upload fails with TypeError > Local Time: 24 May 2016 8:16 PM > UTC Time: 24 May 2016 18:16 > From: graffatcolmingov at gmail.com > To: luis.de.sousa at protonmail.ch > CC: berker.peksag at gmail.com,Distutils-Sig at python.org > > Luis, it looks like you're running twine on Python 3 and setuptools is > installed for Python 2. 
Try doing: > > python3 -m pip install setuptools > > or > > apt-get install -y python3-setuptools > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Wed Jun 1 16:03:03 2016 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Wed, 1 Jun 2016 13:03:03 -0700 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: On Wed, Jun 1, 2016 at 10:10 AM, Lu??s de Sousa wrote: > $ twine upload dist/hex-utils-0.2.sdist > Uploading distributions to https://pypi.python.org/pypi > ValueError: Cannot find file (or expand pattern): 'dist/hex-utils-0.2.sdist' > > $ ls -la dist > total 12 > drwxrwxrwx 1 root root 176 Jun 1 19:04 . > drwxrwxrwx 1 root root 4096 Jun 1 19:04 .. > -rwxrwxrwx 1 root root 6091 Jun 1 19:04 hex-utils-0.2.tar.gz It looks like you are telling it "dist/hex-utils-0.2.sdist" but the directory contains "hex-utils-0.2.tar.gz". --Chris > > > > Sent from ProtonMail, encrypted email based in Switzerland. > > > -------- Original Message -------- > Subject: Re: [Distutils] PyPi upload fails with TypeError > Local Time: 24 May 2016 8:16 PM > UTC Time: 24 May 2016 18:16 > From: graffatcolmingov at gmail.com > To: luis.de.sousa at protonmail.ch > CC: berker.peksag at gmail.com,Distutils-Sig at python.org > > Luis, it looks like you're running twine on Python 3 and setuptools is > installed for Python 2. Try doing: > > python3 -m pip install setuptools > > or > > apt-get install -y python3-setuptools > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From luis.de.sousa at protonmail.ch Thu Jun 2 03:47:44 2016 From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=) Date: Thu, 02 Jun 2016 03:47:44 -0400 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: Yes Chris, that is correct. 
But note that I am using the sdist option both with setuptools and twine. Thank you, Lu?s Sent from [ProtonMail](https://protonmail.ch), encrypted email based in Switzerland. -------- Original Message -------- Subject: Re: [Distutils] PyPi upload fails with TypeError Local Time: June 1, 2016 10:03 PM UTC Time: June 1, 2016 8:03 PM From: chris.jerdonek at gmail.com To: luis.de.sousa at protonmail.ch CC: graffatcolmingov at gmail.com,Distutils-Sig at python.org It looks like you are telling it "dist/hex-utils-0.2.sdist" but the directory contains "hex-utils-0.2.tar.gz". --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Thu Jun 2 09:04:01 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Thu, 2 Jun 2016 08:04:01 -0500 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: Source distributions are .tar.gz files. I don't know what sdist option you're using with twine. On Jun 2, 2016 12:47 AM, "Lu??s de Sousa" wrote: > Yes Chris, that is correct. But note that I am using the sdist option > both with setuptools and twine. > > Thank you, > > Lu?s > > *Sent from ProtonMail , encrypted email based in > Switzerland.* > > > -------- Original Message -------- > Subject: Re: [Distutils] PyPi upload fails with TypeError > Local Time: June 1, 2016 10:03 PM > UTC Time: June 1, 2016 8:03 PM > From: chris.jerdonek at gmail.com > To: luis.de.sousa at protonmail.ch > CC: graffatcolmingov at gmail.com,Distutils-Sig at python.org > > It looks like you are telling it "dist/hex-utils-0.2.sdist" but the > directory contains "hex-utils-0.2.tar.gz". > > --Chris > > > -------------- next part -------------- An HTML attachment was scrubbed... 
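In other words, the `sdist` command names its output `<name>-<version>.tar.gz`; no `.sdist` file ever exists for twine to find. The mismatch can be sketched with a stand-in file (the real tarball would come from `setup.py sdist`):

```shell
# setup.py sdist writes a gzipped tarball into dist/; twine must be
# pointed at that file (or a glob), not at a ".sdist" name.
mkdir -p dist && touch dist/hex-utils-0.2.tar.gz   # stand-in for a real sdist
ls dist/                                           # shows hex-utils-0.2.tar.gz
# The actual upload would then be:
#   twine upload dist/*
```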
URL: From luis.de.sousa at protonmail.ch Thu Jun 2 10:17:05 2016 From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=) Date: Thu, 02 Jun 2016 10:17:05 -0400 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: Hi everyone, I got it working (sort of) on Ubuntu 14.04. I had pip installed from the Ubuntu repositories and that package marks as dependencies a number of other packages that are largely outdated. Apparently, these outdated packages (e.g. requests) were tripping up twine. The first step was to purge the outdated packages: $ sudo apt purge python-requests Then download the pip install script and run it: $ wget https://bootstrap.pypa.io/get-pip.py $ sudo python3 get-pip.py Make sure pip is the latest version: $ sudo pip install -U pip Then the new package can be built: $ python3 setup.py clean $ python3 setup.py sdist bdist_wheel And finally uploaded: $ twine upload dist/* -u username -p password The last command returns a 500 error, but the package is correctly uploaded to the PyPi repository. Regards, Luís -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jun 2 15:22:17 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 2 Jun 2016 12:22:17 -0700 Subject: [Distutils] Officially deferring the Metadata 2.0 specification Message-ID: Metadata 2.0 has been deferred-in-practice for quite some time, but I just updated the PEP itself to reflect the deferral and link to more of the incremental improvement PEPs: https://hg.python.org/peps/rev/94c1afae202e This makes the current policy that enhancement proposals should avoid depending on the definition and implementation of metadata 2.0 explicit. Regards, Nick. P.S. 
PEP 459 (standard metadata extensions) depends on 426, and has also been marked as Deferred -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From prometheus235 at gmail.com Thu Jun 2 18:08:19 2016 From: prometheus235 at gmail.com (Nick Timkovich) Date: Thu, 2 Jun 2016 17:08:19 -0500 Subject: [Distutils] Alternate long_description formats, server-side Message-ID: When I brought up enabling Markdown previously, I was a bit over-eager and started talking implementation before there was any sort of consensus about it in principle, so it was a bit of a straw-man. The previous discussion seemed to rest with the onus being on the user to do it themselves, perhaps by installing (the non-Python) pandoc (non-trivial on Windows) as well as pypandoc and then integrating it into twine or some such. There seems to be a chasm between the GitHubbers that have various issues and PRs to this effect on the pypa/readme_renderer repo that are met with a frustrating silence from people with commit-power. Is the solution to basically write a full spec (PEP?) before touching any code? That seems like an actionable solution, but is it simply going to be rejected out-of-hand because of the "just learn rST" sentiment among the maintainers? -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jun 2 18:19:03 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 2 Jun 2016 18:19:03 -0400 Subject: [Distutils] Alternate long_description formats, server-side In-Reply-To: References: Message-ID: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> > On Jun 2, 2016, at 6:08 PM, Nick Timkovich wrote: > > When I brought up enabling Markdown previously, I was a bit over-eager and started talking implementation before there was any sort of consensus about it in principle, so it was a bit of a straw-man. 
> > The previous discussion seemed to rest with the onus being on the user to do it themselves, perhaps by installing (the non-Python) pandoc (non-trivial on Windows) as well as pypandoc and then integrating it into twine or some such. > > There seems to be a chasm between the GitHubbers that have various issues and PRs to this effect on the pypa/readme_renderer repo that are met with a frustrating silence from people with commit-power. Is the solution to basically write a full spec (PEP?) before touching any code? That seems like an actionable solution, but is it simply going to be rejected out-of-hand because of the "just learn rST" sentiment among the maintainers? Speaking with my PyPI and readme_renderer committer hat on, I have zero opposition to the idea of rendering Markdown (or any other alternative proposal) -- in fact I think it's altogether a good thing. You'd need/want some buy-in from setuptools (likely Jason Coombs is all you need there) but I don't think there'd be an issue actually adding it once there was some agreed upon thing. So yea, we need some sort of standard. It could be as simple as just adding a field to the existing metadata specification with something like: Description-Format: txt|rst|md|whatever With the assumption that if you omit the field then we do the legacy behavior of "attempt to render as rst, fall back to plain text". You'll probably want a registry of recommended values (or perhaps, mandatory values? How do we add a new type of format to the list?). Anyways, just an off the cuff idea, but I don't think there's anyone seriously opposed to the idea. -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... 
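Written out as a PKG-INFO fragment, the off-the-cuff idea above might look like the following sketch (the Description-Format field is hypothetical; the surrounding fields are from the existing metadata format):

```
Metadata-Version: 1.2
Name: example-package
Version: 0.1.0
Description-Format: md
Description: # Example Package
        Omitting Description-Format would keep the legacy
        "try rst, fall back to plain text" behavior.
```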
URL: From ncoghlan at gmail.com Thu Jun 2 20:35:45 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 2 Jun 2016 17:35:45 -0700 Subject: [Distutils] Alternate long_description formats, server-side In-Reply-To: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> References: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> Message-ID: On 2 June 2016 at 15:19, Donald Stufft wrote: > On Jun 2, 2016, at 6:08 PM, Nick Timkovich wrote: > So yea, we need some sort of standard. It could be as simple as just adding > a field to the existing metadata specification with something like: > > Description-Format: txt|rst|md|whatever > > With the assumption that if you omit the field then we do the legacy > behavior of ?attempt to render as rst, fallback to plain text?. You?ll > probably want a registry of recommended values (or perhaps, mandatory > values? How do we add a new type of format to the list?). > > Anyways, just an off the cuff idea, but I don?t think there?s anyone > seriously opposed to the idea. Yep, it's not about opposition, just a matter of there being a range of more important problems ahead of it in the priority queue. That said, we do now have a mechanism to document additional metadata fields without requiring an entire new metadata version (see Provides-Extra in https://packaging.python.org/en/latest/specifications/#core-metadata for an example), and there's a catalog of anticipated formats in https://www.python.org/dev/peps/pep-0459/#document-names, so the idea of defining a Description-Format field sounds plausible to me (even if it takes a while for tools to start emitting or reading it). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From prometheus235 at gmail.com Thu Jun 2 21:16:28 2016 From: prometheus235 at gmail.com (Nick Timkovich) Date: Thu, 2 Jun 2016 20:16:28 -0500 Subject: [Distutils] Alternate long_description formats, server-side In-Reply-To: References: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> Message-ID: I can definitely believe there are more important things to do, but some of us aren't versed in the intricacies of what's up top and don't have the familiarity to dive in. Us GitHub plebs are just raring to work on a feature we think is within our grasp ;-) On Thu, Jun 2, 2016 at 7:35 PM, Nick Coghlan wrote: > On 2 June 2016 at 15:19, Donald Stufft wrote: > > On Jun 2, 2016, at 6:08 PM, Nick Timkovich > wrote: > > So yea, we need some sort of standard. It could be as simple as just > adding > > a field to the existing metadata specification with something like: > > > > Description-Format: txt|rst|md|whatever > > > > With the assumption that if you omit the field then we do the legacy > > behavior of ?attempt to render as rst, fallback to plain text?. You?ll > > probably want a registry of recommended values (or perhaps, mandatory > > values? How do we add a new type of format to the list?). > > > > Anyways, just an off the cuff idea, but I don?t think there?s anyone > > seriously opposed to the idea. > > Yep, it's not about opposition, just a matter of there being a range > of more important problems ahead of it in the priority queue. 
> > That said, we do now have a mechanism to document additional metadata > fields without requiring an entire new metadata version (see > Provides-Extra in > https://packaging.python.org/en/latest/specifications/#core-metadata > for an example), and there's a catalog of anticipated formats in > https://www.python.org/dev/peps/pep-0459/#document-names, so the idea > of defining a Description-Format field sounds plausible to me (even if > it takes a while for tools to start emitting or reading it). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jun 3 07:03:10 2016 From: donald at stufft.io (Donald Stufft) Date: Fri, 3 Jun 2016 07:03:10 -0400 Subject: [Distutils] Alternate long_description formats, server-side In-Reply-To: References: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> Message-ID: <1B803956-0CD9-4CC5-A7E6-06074A8F826D@stufft.io> > On Jun 2, 2016, at 9:16 PM, Nick Timkovich wrote: > > I can definitely believe there are more important things to do, but some of us aren't versed in the intricacies of what's up top and don't have the familiarity to dive in. Us GitHub plebs are just raring to work on a feature we think is within our grasp ;-) > Yup! Nick was speaking to why folks like myself haven't done it yet -- if some enterprising person (perhaps you!) takes the time to write up a PEP and work it through the process, then there's no reason to wait for me (or Nick, or any of the other core team) to do it :) -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... URL: From afe.young at gmail.com Wed Jun 1 23:05:11 2016 From: afe.young at gmail.com (Young Yang) Date: Thu, 2 Jun 2016 11:05:11 +0800 Subject: [Distutils] How to build python-packages depends on the output of other project Message-ID: hi, I'm writing python-binding for project A. 
My python-binding depends on the compile output of project A (it is a .so file), and project A is not installed in the system (so we can't find the .so files in the system library paths). What's the elegant way to package my python-binding, so that I can install everything by running `python setup.py`? Any suggestions and comments will be appreciated :) -- Best wishes, Young -------------- next part -------------- An HTML attachment was scrubbed... URL: From afe.young at gmail.com Thu Jun 2 22:35:01 2016 From: afe.young at gmail.com (Young Yang) Date: Fri, 3 Jun 2016 10:35:01 +0800 Subject: [Distutils] How to build python-packages depends on the output of other project In-Reply-To: References: Message-ID: Hi, My current solution is like this: I get the source code of project A and use `cmdclass={"install": my_install},` in my setup function in setup.py. my_install is a subclass of `from setuptools.command.install import install`

```
class my_install(install):
    def run(self):
        # Do something I want, such as compiling the code of project A and
        # copying the output of it (i.e. the .so file) to my binding folder
        install.run(self)
```

At last I add these options in my setup function in setup.py to include the shared library in the install package.

```
package_dir={'my_binding_package': 'my_binding_folder'},
package_data={
    'my_binding_package': ['Shared_lib.so'],
},
include_package_data=True,
```

But I think there should be better ways to achieve these. Could anyone give me any elegant examples to achieve the same goal? Thanks in advance On Thu, Jun 2, 2016 at 11:05 AM, Young Yang wrote: > hi, > > I'm writing python-binding for project A. > > My python-binding depends on the compile output of project A (it is a .so > file), and project A is not installed in the system (so we can't find > the .so files in the system library paths). > > What's the elegant way to package my python-binding, so that I can install > everything by running `python setup.py`? 
> > Any suggestions and comments will be appreciated :) > > -- > Best wishes, > Young > -- Best wishes, Young Yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at cbaines.net Fri Jun 3 08:20:27 2016 From: mail at cbaines.net (Christopher Baines) Date: Fri, 3 Jun 2016 13:20:27 +0100 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py Message-ID: I'm trying to write a script to get information about a source distributions requirements (from the source distribution), but I'm not sure how to access the tests_require and setup_requires that can sometimes be found in the setup.py? Apologies if this is really simple, and I've just missed the answer, but I've searched for it a few times now, and not come up with anything. Thanks, Chris From p.f.moore at gmail.com Fri Jun 3 09:19:30 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 3 Jun 2016 14:19:30 +0100 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: Message-ID: On 3 June 2016 at 13:20, Christopher Baines wrote: > I'm trying to write a script to get information about a source > distributions requirements (from the source distribution), but I'm not > sure how to access the tests_require and setup_requires that can > sometimes be found in the setup.py? > > Apologies if this is really simple, and I've just missed the answer, but > I've searched for it a few times now, and not come up with anything. If I understand what you're trying to achieve, the only way of getting the "final" information (i.e, what will actually get used to install) is by running the setup.py script. That's basically the key issue with the executable setup.py format - there's no way to know the information without running the script. You may be able to get the information without doing a full install by using the "setup.py egg_info" subcommand provided by setuptools. 
That's what pip uses, for example (but pip doesn't look at tests_require or setup_requires, so you'd have to check if that information was available by that route). Paul From p.f.moore at gmail.com Fri Jun 3 09:28:55 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 3 Jun 2016 14:28:55 +0100 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> Message-ID: On 3 June 2016 at 14:24, Christopher Baines wrote: > On 03/06/16 14:19, Paul Moore wrote: >> On 3 June 2016 at 13:20, Christopher Baines wrote: >>> I'm trying to write a script to get information about a source >>> distributions requirements (from the source distribution), but I'm not >>> sure how to access the tests_require and setup_requires that can >>> sometimes be found in the setup.py? >>> >>> Apologies if this is really simple, and I've just missed the answer, but >>> I've searched for it a few times now, and not come up with anything. >> >> If I understand what you're trying to achieve, the only way of getting >> the "final" information (i.e, what will actually get used to install) >> is by running the setup.py script. That's basically the key issue with >> the executable setup.py format - there's no way to know the >> information without running the script. >> >> You may be able to get the information without doing a full install by >> using the "setup.py egg_info" subcommand provided by setuptools. >> That's what pip uses, for example (but pip doesn't look at >> tests_require or setup_requires, so you'd have to check if that >> information was available by that route). > > As far as I can see (I checked setuptools and flake8), neither > tests_require or setup_requires are present in the egg_info metadata > directory. > > Is there no way of getting setuptools to write the data out to a file? Maybe you could write your own command class? 
Or monkeypatch setuptools.setup() to write its arguments to a file? I don't know of any non-ugly way, though, sorry... Paul From contact at ionelmc.ro Fri Jun 3 09:46:56 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 3 Jun 2016 16:46:56 +0300 Subject: [Distutils] How to build python-packages depends on the output of other project In-Reply-To: References: Message-ID: On Fri, Jun 3, 2016 at 5:35 AM, Young Yang wrote: > my_install is a subclass of `from setuptools.command.install import > install` > ``` > class my_install(install): > def run(self): > # DO something I want. Such as compiling the code of project A and > copy the output of it (i.e. the .so file) to my binding folder > install.run(self) > ``` > > At last I add these options in my setup function in setup.py to include > the shared library in the install package. > ``` > package_dir={'my_binding_package': 'my_binding_folder'}, > package_data={ > 'my_binding_package': ['Shared_lib.so'], > }, > include_package_data=True, > ``` > > But I think there should be better ways to achieve these. > Overriding only the `install` command will make `bdist_wheel` produce the wrong result. There's also the `develop` command. Some ideas about what commands you might need to override: https://github.com/pytest-dev/pytest-cov/blob/master/setup.py#L30-L63 An alternative approach would be to create a custom Extension class; check https://github.com/cython/cython/tree/master/Cython/Distutils for ideas. Unfortunately the internals of distutils/setuptools aren't really well documented, so you'll have to rely on examples, simply reading distutils code, coffee, or even painkillers :-) Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... 
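Paul's monkeypatching suggestion can be sketched roughly as follows; the recorder function and the inline demo setup.py are illustrative stand-ins, not a standard recipe:

```python
# Sketch of the monkeypatching idea: swap setuptools.setup() for a
# recorder, run a setup.py, then read back setup_requires/tests_require.
import setuptools

captured = {}
real_setup = setuptools.setup

def recording_setup(**kwargs):
    captured.update(kwargs)  # record the arguments instead of building

setuptools.setup = recording_setup
try:
    # Stand-in for: exec(open("setup.py").read()) in the project directory.
    demo_setup_py = (
        "import setuptools\n"
        "setuptools.setup(name='demo', version='0.1',\n"
        "                 setup_requires=['cffi'],\n"
        "                 tests_require=['pytest'])\n"
    )
    exec(demo_setup_py)
finally:
    setuptools.setup = real_setup  # always restore the real function

print(captured.get("setup_requires"))  # -> ['cffi']
print(captured.get("tests_require"))   # -> ['pytest']
```

This avoids writing a custom command class, at the cost of executing the setup.py with a patched setuptools in the current process.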
URL: From dholth at gmail.com Fri Jun 3 09:58:10 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 03 Jun 2016 13:58:10 +0000 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> Message-ID: Here is how you can write setup_requires and test_requires to a file, by adding a plugin to egg_info.writers in setuptools. https://gist.github.com/dholth/59e4c8a0c0d963b019d81e18bf0a89e3 On Fri, Jun 3, 2016 at 9:29 AM Paul Moore wrote: > On 3 June 2016 at 14:24, Christopher Baines wrote: > > On 03/06/16 14:19, Paul Moore wrote: > >> On 3 June 2016 at 13:20, Christopher Baines wrote: > >>> I'm trying to write a script to get information about a source > >>> distributions requirements (from the source distribution), but I'm not > >>> sure how to access the tests_require and setup_requires that can > >>> sometimes be found in the setup.py? > >>> > >>> Apologies if this is really simple, and I've just missed the answer, > but > >>> I've searched for it a few times now, and not come up with anything. > >> > >> If I understand what you're trying to achieve, the only way of getting > >> the "final" information (i.e, what will actually get used to install) > >> is by running the setup.py script. That's basically the key issue with > >> the executable setup.py format - there's no way to know the > >> information without running the script. > >> > >> You may be able to get the information without doing a full install by > >> using the "setup.py egg_info" subcommand provided by setuptools. > >> That's what pip uses, for example (but pip doesn't look at > >> tests_require or setup_requires, so you'd have to check if that > >> information was available by that route). > > > > As far as I can see (I checked setuptools and flake8), neither > > tests_require or setup_requires are present in the egg_info metadata > > directory. 
> > > > Is there no way of getting setuptools to write the data out to a file? > > Maybe you could write your own command class? Or monkeypatch > setuptools.setup() to write its arguments to a file? > > I don't know of any non-ugly way, though, sorry... > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jun 3 10:46:49 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 03 Jun 2016 14:46:49 +0000 Subject: [Distutils] How to build python-packages depends on the output of other project In-Reply-To: References: Message-ID: I have some less elegant suggestions. In my ed25519ll package I abuse the Extension class to compile some source code (that is not a Python extension). This works if the C package you are compiling is simple enough that it can be built with the limited Extension interface. https://bitbucket.org/dholth/ed25519ll/src/37719c56b7b621a98dc694b109ccfca1c946ed65/setup.py?fileviewer=file-view-default#setup.py-43 For example setup( ... ext_modules=[ Extension('ed25519ll._ed25519_%s' % plat_name, sources=[ 'ed25519-supercop-ref10/ge_frombytes.c', # many more 'ed25519-supercop-ref10/py.c'], include_dirs=['ed25519-supercop-ref10', ], export_symbols=["crypto_sign", "crypto_sign_open", "crypto_sign_keypair"]) ) I added the file `py.c` with an empty function `void init_ed25519_win32() {}` to make the linker happy, and I list the symbols that need to be exported, otherwise those symbols would not be visible on Windows. Then I open the shared module with cffi or ctypes. Not very pretty but it works well enough. Another thing you can do without extending distutils that may not be immediately obvious is to run as much code as you want before calling setup(). 
It is even possible to install other Python modules by calling pip in a subprocess, then importing them, then calling setup(), all in the same file. On Fri, Jun 3, 2016 at 9:47 AM Ionel Cristian Mărieș < distutils-sig at python.org> wrote: > > On Fri, Jun 3, 2016 at 5:35 AM, Young Yang wrote: > >> my_install is a subclass of `from setuptools.command.install import >> install` >> ``` >> class my_install(install): >> def run(self): >> # DO something I want. Such as compiling the code of project A >> and copy the output of it (i.e. the .so file) to my binding folder >> install.run(self) >> ``` >> >> At last I add these options in my setup function in setup.py to include >> the shared library in the install package. >> ``` >> package_dir={'my_binding_package': 'my_binding_folder'}, >> package_data={ >> 'my_binding_package': ['Shared_lib.so'], >> }, >> include_package_data=True, >> ``` >> >> But I think there should be better ways to achieve these. >> > > Overriding only the `install` will make bdist_wheel produce the wrong > result. There's also the `develop` command. Some ideas about what commands > you might need to override: > https://github.com/pytest-dev/pytest-cov/blob/master/setup.py#L30-L63 > > An alternative approach would be to create a custom Extension class, check > this https://github.com/cython/cython/tree/master/Cython/Distutils for > ideas. > > Unfortunately the internals of distutils/setuptools aren't really well > documented so you'll have to rely on examples, simply reading distutils > code, coffee or even painkillers :-) > > > Thanks, > -- Ionel Cristian Mărieș, http://blog.ionelmc.ro > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
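Ionel's warning - that overriding only `install` leaves `develop` (and wheel builds) without the native build step - might be addressed with a pattern like this sketch; `build_project_a()` is a hypothetical placeholder for whatever compiles project A:

```python
# Rough sketch: hook the native build into several setuptools commands,
# not just `install`, so `pip install -e .` and bdist_wheel also run it.
# build_project_a() is a hypothetical placeholder, not code from the thread.
from setuptools.command.build_py import build_py as _build_py
from setuptools.command.develop import develop as _develop
from setuptools.command.install import install as _install

def build_project_a():
    """Placeholder: compile project A and copy Shared_lib.so into the package."""

def _with_native_build(base):
    """Return a subclass of a command that runs the native build first."""
    class cmd(base):
        def run(self):
            build_project_a()
            base.run(self)
    return cmd

# Hanging the build off build_py means install, develop and bdist_wheel
# all reach it indirectly; overriding each command explicitly also works.
CMDCLASS = {
    "build_py": _with_native_build(_build_py),
    "develop": _with_native_build(_develop),
    "install": _with_native_build(_install),
}
# usage in setup.py: setup(..., cmdclass=CMDCLASS)
```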
URL: From chris.barker at noaa.gov Fri Jun 3 11:01:11 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 3 Jun 2016 08:01:11 -0700 Subject: [Distutils] How to build python-packages depends on the output of other project In-Reply-To: References: Message-ID: First, what you have is not all that inelegant -- it is the way to do it :-) But there are a few options when you are wrapping a C/C++ lib for python: Do you need to access that lib from other extensions or only from the one extension? If others, then you pretty much need to build a shared lib and make sure that all your extensions link to it. But if you only need to get to it from one extension then there are three options: 1) don't compile a lib -- rather, build all your C/C++ code into the extension itself. You can simply add the files to the "sources" list -- for a straightforward lib, this is the easiest way to go. 2) statically link -- build the lib as a static lib, and then link it in to your extension. Then there is no extra .so to keep track of and ship. At least on *nix you can bypass the linker by passing the static lib in as "extra_objects" -- I think. Something like that. 3) what you did -- build the .so and ship it with the extension. HTH, -Chris On Thu, Jun 2, 2016 at 7:35 PM, Young Yang wrote: > Hi, > > My current solution is like this > > I get the source code of project A. And use `cmdclass={"install": > my_install},` in my setup function in setup.py. > > my_install is a subclass of `from setuptools.command.install import > install` > ``` > class my_install(install): > def run(self): > # DO something I want. Such as compiling the code of project A and > copy the output of it (i.e. the .so file) to my binding folder > install.run(self) > ``` > > At last I add these options in my setup function in setup.py to include > the shared library in the install package. 
> ``` > package_dir={'my_binding_package': 'my_binding_folder'}, > package_data={ > 'my_binding_package': ['Shared_lib.so'], > }, > include_package_data=True, > ``` > > But I think there should be better ways to achieve these. > Could anyone give me any elegant examples to achieve the same goal? > > Thanks in advance > > > On Thu, Jun 2, 2016 at 11:05 AM, Young Yang wrote: > >> hi, >> >> I'm writing python-binding for project A. >> >> My python-binding depends on the compile output of project A(It is a .so >> file), and the project A is not installed in the system(so we can't find >> the .so files in the system libraries pathes) >> >> What's the elegant way to package my python-binding, so that I can >> install everything by run `python setup.py` ? >> >> Any suggestions and comments will be appreciated :) >> >> -- >> Best wishes, >> Young >> > > > > -- > Best wishes, > Young Yang > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jun 3 12:46:41 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 03 Jun 2016 16:46:41 +0000 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: <2908D9E2-ABAB-4674-8C3A-3E923A7A593D@nyu.edu> References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> <2908D9E2-ABAB-4674-8C3A-3E923A7A593D@nyu.edu> Message-ID: Tell me what you know about SAT solvers, dnf and composer. 
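Chris Barker's option 2 (static linking) could look roughly like the following sketch; the paths, module name, and `libfoo.a` are illustrative assumptions, and as he says, `extra_objects` behaviour varies by platform:

```python
# Rough sketch of option 2: link a pre-built static library into the
# extension via extra_objects, so no separate .so needs to ship.
# All paths and names here are made up for illustration.
from setuptools import Extension

binding = Extension(
    "my_binding._binding",
    sources=["src/binding.c"],            # the thin wrapper code
    include_dirs=["vendor/foo/include"],
    # extra_objects is handed straight to the linker, bypassing the
    # usual -L/-l library search (works with .a archives on *nix)
    extra_objects=["vendor/foo/libfoo.a"],
)

# In setup.py one would then pass: setup(..., ext_modules=[binding])
```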
On Fri, Jun 3, 2016, 12:28 Sebastien Awwad wrote: > This ties into what I've been working on to fix the package dependency > conflict resolution problem for pip > , actually: > > You may be able to use a tool I wrote to automatically extract > requirements from setup.py, without installing (knowing that setup.py is > arbitrary code and that dependencies are not strictly static). I opted to > go with an admittedly drastic method of patching pip 8 to extract > dependency data from each source distribution it touches in download mode > when called by my dependency scraper. I decided that in the absence of > static requirements for source distributions, the best I could really do in > practice was to parse requirements exactly the way pip does. If you want, > you can run the scraper from my project, which is here (project itself > still a WIP). In particular, if you install it and run 'python depresolve/scrape_deps_and_detect_conflicts.py > "some-package-name(1.0.0)"', it'll spit out the dependencies to a json > file for the sdist for version 1.0.0 of some-package-name (more > instructions here > - it > can also operate with local sdists or indexes). > > In my case, for pypa/pip:issue988, I needed to harvest mass dependency > info to test a few different dependency conflict resolvers on. I'm working > on writing up some of what I've learned and will probably end up > recommending a basic integrated backtracking resolver within pip - probably > an updated version of rbtcollins' backtracking resolver pip patches > (which I'd be happy to rework and > send a PR to pip on, if Robert doesn't have the bandwidth for it). > > Sebastien > > On Jun 3, 2016, at 09:58, Daniel Holth wrote: > > Here is how you can write setup_requires and test_requires to a file, by > adding a plugin to egg_info.writers in setuptools. 
> > https://gist.github.com/dholth/59e4c8a0c0d963b019d81e18bf0a89e3 > > On Fri, Jun 3, 2016 at 9:29 AM Paul Moore wrote: > >> On 3 June 2016 at 14:24, Christopher Baines wrote: >> > On 03/06/16 14:19, Paul Moore wrote: >> >> On 3 June 2016 at 13:20, Christopher Baines wrote: >> >>> I'm trying to write a script to get information about a source >> >>> distributions requirements (from the source distribution), but I'm not >> >>> sure how to access the tests_require and setup_requires that can >> >>> sometimes be found in the setup.py? >> >>> >> >>> Apologies if this is really simple, and I've just missed the answer, >> but >> >>> I've searched for it a few times now, and not come up with anything. >> >> >> >> If I understand what you're trying to achieve, the only way of getting >> >> the "final" information (i.e, what will actually get used to install) >> >> is by running the setup.py script. That's basically the key issue with >> >> the executable setup.py format - there's no way to know the >> >> information without running the script. >> >> >> >> You may be able to get the information without doing a full install by >> >> using the "setup.py egg_info" subcommand provided by setuptools. >> >> That's what pip uses, for example (but pip doesn't look at >> >> tests_require or setup_requires, so you'd have to check if that >> >> information was available by that route). >> > >> > As far as I can see (I checked setuptools and flake8), neither >> > tests_require or setup_requires are present in the egg_info metadata >> > directory. >> > >> > Is there no way of getting setuptools to write the data out to a file? >> >> Maybe you could write your own command class? Or monkeypatch >> setuptools.setup() to write its arguments to a file? >> >> I don't know of any non-ugly way, though, sorry... 
>> Paul >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Fri Jun 3 15:03:11 2016 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 3 Jun 2016 15:03:11 -0400 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> Message-ID: <5751D46F.1050108@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 06/03/2016 09:58 AM, Daniel Holth wrote: > Here is how you can write setup_requires and test_requires to a file, > by adding a plugin to egg_info.writers in setuptools. FWIW, eggtestinfo is an 'egg_info.writers' plugin which dumps test-related metadata to a text file: https://pypi.python.org/pypi/eggtestinfo Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXUdRpAAoJEPKpaDSJE9HYQhsP/jrMq4q1BgCCMCaazvo9B4vX uS9uzHj+T2noHwMcluSQB9zdSvCufNbKSoXQ62MUkbPEKwa1TarKzm+XZci/tBba eOxwEPilHHRBF7no44ZTgo6kq27oOC9Z0em7F9awADxWmJ0mcgOksQTXjzczuP7h BPoj1v/vxSLc87Q61qXYWMFTpiY4SjHn7+0xu7txQ13GJSaNjqNK/geQI+18Tpw2 76G0I1TDbXel+2JlgYEmwHkFPpFuhVaJXyJi8ZjvjdH/jN47oT6wsFwyxRzlVOXx oFp//hdb05K9i2R9rdLznKzkrQCrlYUTEBXEIpMCacfnoHgKrn+MKHCdI4FXKXGz Rm8LUh+oS0VU5IA8Rt4od08Qd68HE9CQERvx03JOQ7mKJIjxEODeC80JduzVGZxj 7YpA5sRxbsDTbnHD+8uodpFDDU0h9NMWLbxdDJ42Q1j2iNNr7ZZDR2tJZ8Y76EMl 55Mf5WZK5W+kzkUVodpK1yTxLR8+swadfjno2QnVhjA9h3vW1ZV11AREPTLauHbE S+eoyN92ga4eyhZSwaQdhz6KsZgFpUcmhwvlrsl0dIlkkHs1mFjPL/J4PvR2SVKl ug1Wmm0EvwPUQ0cDcRBeJtPF4git5SeMYLvGcMbsIpHdZloK+xaFVyMjgrTGA1o5 NTZokg2p47XvwuCA4XlH =d2oJ -----END PGP SIGNATURE----- From mail at cbaines.net Fri Jun 3 09:24:21 2016 From: mail at cbaines.net (Christopher Baines) Date: Fri, 3 Jun 2016 14:24:21 +0100 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: Message-ID: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> On 03/06/16 14:19, Paul Moore wrote: > On 3 June 2016 at 13:20, Christopher Baines wrote: >> I'm trying to write a script to get information about a source >> distributions requirements (from the source distribution), but I'm not >> sure how to access the tests_require and setup_requires that can >> sometimes be found in the setup.py? >> >> Apologies if this is really simple, and I've just missed the answer, but >> I've searched for it a few times now, and not come up with anything. > > If I understand what you're trying to achieve, the only way of getting > the "final" information (i.e, what will actually get used to install) > is by running the setup.py script. 
That's basically the key issue with > the executable setup.py format - there's no way to know the > information without running the script. > > You may be able to get the information without doing a full install by > using the "setup.py egg_info" subcommand provided by setuptools. > That's what pip uses, for example (but pip doesn't look at > tests_require or setup_requires, so you'd have to check if that > information was available by that route). As far as I can see (I checked setuptools and flake8), neither tests_require or setup_requires are present in the egg_info metadata directory. Is there no way of getting setuptools to write the data out to a file? From sebastien.awwad at nyu.edu Fri Jun 3 12:28:20 2016 From: sebastien.awwad at nyu.edu (Sebastien Awwad) Date: Fri, 3 Jun 2016 12:28:20 -0400 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> Message-ID: <2908D9E2-ABAB-4674-8C3A-3E923A7A593D@nyu.edu> This ties into what I've been working on to fix the package dependency conflict resolution problem for pip , actually: You may be able to use a tool I wrote to automatically extract requirements from setup.py, without installing (knowing that setup.py is arbitrary code and that dependencies are not strictly static). I opted to go with an admittedly drastic method of patching pip 8 to extract dependency data from each source distribution it touches in download mode when called by my dependency scraper. I decided that in the absence of static requirements for source distributions, the best I could really do in practice was to parse requirements exactly the way pip does. If you want, you can run the scraper from my project, which is here (project itself still a WIP). 
In particular, if you install it and run 'python depresolve/scrape_deps_and_detect_conflicts.py "some-package-name(1.0.0)"', it'll spit out the dependencies to a json file for the sdist for version 1.0.0 of some-package-name (more instructions here - it can also operate with local sdists or indexes). In my case, for pypa/pip:issue988, I needed to harvest mass dependency info to test a few different dependency conflict resolvers on. I'm working on writing up some of what I've learned and will probably end up recommending a basic integrated backtracking resolver within pip - probably an updated version of rbtcollins' backtracking resolver pip patches (which I'd be happy to rework and send a PR to pip on, if Robert doesn't have the bandwidth for it). Sebastien > On Jun 3, 2016, at 09:58, Daniel Holth wrote: > > Here is how you can write setup_requires and test_requires to a file, by adding a plugin to egg_info.writers in setuptools. > > https://gist.github.com/dholth/59e4c8a0c0d963b019d81e18bf0a89e3 > > On Fri, Jun 3, 2016 at 9:29 AM Paul Moore > wrote: > On 3 June 2016 at 14:24, Christopher Baines > wrote: > > On 03/06/16 14:19, Paul Moore wrote: > >> On 3 June 2016 at 13:20, Christopher Baines > wrote: > >>> I'm trying to write a script to get information about a source > >>> distributions requirements (from the source distribution), but I'm not > >>> sure how to access the tests_require and setup_requires that can > >>> sometimes be found in the setup.py? > >>> > >>> Apologies if this is really simple, and I've just missed the answer, but > >>> I've searched for it a few times now, and not come up with anything. > >> > >> If I understand what you're trying to achieve, the only way of getting > >> the "final" information (i.e, what will actually get used to install) > >> is by running the setup.py script. That's basically the key issue with > >> the executable setup.py format - there's no way to know the > >> information without running the script. 
> >> > >> You may be able to get the information without doing a full install by > >> using the "setup.py egg_info" subcommand provided by setuptools. > >> That's what pip uses, for example (but pip doesn't look at > >> tests_require or setup_requires, so you'd have to check if that > >> information was available by that route). > > > > As far as I can see (I checked setuptools and flake8), neither > > tests_require or setup_requires are present in the egg_info metadata > > directory. > > > > Is there no way of getting setuptools to write the data out to a file? > > Maybe you could write your own command class? Or monkeypatch > setuptools.setup() to write its arguments to a file? > > I don't know of any non-ugly way, though, sorry... > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From sebastienawwad at gmail.com Fri Jun 3 12:57:16 2016 From: sebastienawwad at gmail.com (Sebastien Awwad) Date: Fri, 03 Jun 2016 16:57:16 +0000 Subject: [Distutils] Accessing tests_require and setup_requires from the setup.py In-Reply-To: References: <5119a6cf-5fc3-600e-b223-80b20656b124@cbaines.net> <2908D9E2-ABAB-4674-8C3A-3E923A7A593D@nyu.edu> Message-ID: - I'm not at all familiar with composer. - For DNF - I assume you mean CNF, logical expression format generally used in SAT solvers (not DNF as in I did-not-finish the blog post I'm working on on this yet). 
If the question is why backtracking rather than SAT solving, I start to get into that in background here . I'm hesitant to link to this scribbling. I'll have a broader summary with data later - I've run some comparisons using enthought/depsolver's SAT solver, along with some backtrackers (the pip patch previously linked to and a currently-flawed one of my own). On Fri, Jun 3, 2016 at 12:46 PM Daniel Holth wrote: > Tell me what you know about SAT solvers, dnf and composer. > > On Fri, Jun 3, 2016, 12:28 Sebastien Awwad > wrote: > >> This ties into what I've been working on to fix the package dependency >> conflict resolution problem for pip >> , actually: >> >> You may be able to use a tool I wrote to automatically extract >> requirements from setup.py, without installing (knowing that setup.py is >> arbitrary code and that dependencies are not strictly static). I opted >> to go with an admittedly drastic method of patching pip 8 to extract >> dependency data from each source distribution it touches in download mode >> when called by my dependency scraper. I decided that in the absence of >> static requirements for source distributions, the best I could really do in >> practice was to parse requirements exactly the way pip does. If you want, >> you can run the scraper from my project, which is here (project itself >> still a WIP). In particular, if you install it and run 'python depresolve/scrape_deps_and_detect_conflicts.py >> "some-package-name(1.0.0)"', it'll spit out the dependencies to a json >> file for the sdist for version 1.0.0 of some-package-name (more >> instructions here >> - it >> can also operate with local sdists or indexes). >> >> In my case, for pypa/pip:issue988, I needed to harvest mass dependency >> info to test a few different dependency conflict resolvers on. 
I'm working >> on writing up some of what I've learned and will probably end up >> recommending a basic integrated backtracking resolver within pip - probably >> an updated version of rbtcollins' backtracking resolver pip patches >> (which I'd be happy to rework >> and send a PR to pip on, if Robert doesn't have the bandwidth for it). >> >> Sebastien >> > On Jun 3, 2016, at 09:58, Daniel Holth wrote: >> >> Here is how you can write setup_requires and test_requires to a file, by >> adding a plugin to egg_info.writers in setuptools. >> >> https://gist.github.com/dholth/59e4c8a0c0d963b019d81e18bf0a89e3 >> >> On Fri, Jun 3, 2016 at 9:29 AM Paul Moore wrote: >> >>> On 3 June 2016 at 14:24, Christopher Baines wrote: >>> > On 03/06/16 14:19, Paul Moore wrote: >>> >> On 3 June 2016 at 13:20, Christopher Baines wrote: >>> >>> I'm trying to write a script to get information about a source >>> >>> distributions requirements (from the source distribution), but I'm >>> not >>> >>> sure how to access the tests_require and setup_requires that can >>> >>> sometimes be found in the setup.py? >>> >>> >>> >>> Apologies if this is really simple, and I've just missed the answer, >>> but >>> >>> I've searched for it a few times now, and not come up with anything. >>> >> >>> >> If I understand what you're trying to achieve, the only way of getting >>> >> the "final" information (i.e, what will actually get used to install) >>> >> is by running the setup.py script. That's basically the key issue with >>> >> the executable setup.py format - there's no way to know the >>> >> information without running the script. >>> >> >>> >> You may be able to get the information without doing a full install by >>> >> using the "setup.py egg_info" subcommand provided by setuptools. >>> >> That's what pip uses, for example (but pip doesn't look at >>> >> tests_require or setup_requires, so you'd have to check if that >>> >> information was available by that route). 
>>> > >>> > As far as I can see (I checked setuptools and flake8), neither >>> > tests_require or setup_requires are present in the egg_info metadata >>> > directory. >>> > >>> > Is there no way of getting setuptools to write the data out to a file? >>> >>> Maybe you could write your own command class? Or monkeypatch >>> setuptools.setup() to write its arguments to a file? >>> >>> I don't know of any non-ugly way, though, sorry... >>> Paul >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaraco at jaraco.com Sat Jun 4 07:02:25 2016 From: jaraco at jaraco.com (Jason R. Coombs) Date: Sat, 4 Jun 2016 11:02:25 +0000 Subject: [Distutils] on integrated docs in Warehouse and PyPI Message-ID: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> I had some thoughts on documentation integration (and its deprecation in Warehouse), which I wrote up here: https://paper.dropbox.com/doc/Integrated-Docs-is-an-Essential-Feature-HEqnF8iWzCFwkCDaz0p8t I include the full text below for ease of access and response. Yesterday, as I was migrating Setuptools from PyPI to Warehouse due to this issue, Donald alerted me to the fact that Warehouse does not plan to support documentation hosting, and as a result, integrated documentation for the packaging infrastructure is deprecated. At first blush, this decision seems like a sound one - decouple independent operations and allow a mature, established organization like RTD to support and manage Python Package documentation. I have nothing but respect for RTD; I reference them here only as the prime example of a third-party doc hosting solution. 
After spending most of a day working on getting just one project documentation (mostly) moved from PyPI to RTD, I've realized there are several shortcomings with this approach. Integrated hosting provides several benefits not offered by RTD:

* Uniform custody - the person or team that owns/maintains the package also owns/maintains the documentation, with a single point of management and authorization.
* Shared credentials - the accounts used to administer the packages are re-used to authenticate authorized users to the documentation. RTD requires a separate set of accounts for each user involved with the documentation.
* Shared naming - a name registered as a package is automatically reserved for the documentation.
* Automatic linkage - PyPI provides a Package Documentation link on each package that has documentation.
* Control over the build process - although RTD does provide excellent hooks for customization and control, the process is essentially out of the hands of the operator. Thus when issues like this arise (probably rarely), the user is at the mercy of the system. With PyPI hosting, it was possible to manually build and upload docs when necessary and to control every aspect of the build process, including platform, Python version, etc.
* One obvious choice - although RTD today is the obvious choice for hosting, I can see other prominent alternatives - using Github pages or self hosting - or perhaps another service will emerge that's more integrated with the packaging process. Having sanctioned, integrated documentation hosting gives users confidence that it's at least a good default choice if not the best one.
* API access - although RTD hopes to provide support for project creation through an API, it currently only allows querying through the public API. Therefore, it's not feasible for tools to mechanically configure projects in RTD. Each project has to be manually configured and administered.

For me, this last limitation is the most onerous. 
I maintain dozens of projects, many of them in collaboration with other teams, and in many of those, I rely on a model implementation that leverages PyPI hosting as part of the package release process to publish documentation. Moving each of these projects to another hosting service would require the manual creation and configuration of another project for each. As I consider the effort it would take to port all of these projects and maintain them in a new infrastructure, I'm inclined to drop documentation support for all but the most prominent projects. The linkage provided by PyPI was a most welcome feature, and I'm really sad to see it go. I'd be willing to give up item 5 (Control) if the other items could be addressed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Jun 4 09:33:08 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 4 Jun 2016 06:33:08 -0700 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> Message-ID: I think everyone would agree that having some nice doc hosting service available as an option would be, well, nice. Everyone likes options. But the current doc hosting is unpopular and feature poor, falls outside of the PyPI core mission, and is redundant with other more popular services, at a time when the PyPI developers are struggling to maintain core services. How do you propose to implement and support this hypothetical doc service, given PyPI's resource constraints? Who's going to implement and maintain it? What other features are you suggesting be de-prioritized so that people can focus on this instead? Opportunity cost is a real cost. (You might also be interested in this: http://blog.readthedocs.com/rtd-awarded-mozilla-open-source-support-grant/) -n On Jun 4, 2016 4:18 AM, "Jason R. 
Coombs" wrote: > I had some thoughts on documentation integration (and its deprecation in > Warehouse), which I wrote up here: > https://paper.dropbox.com/doc/Integrated-Docs-is-an-Essential-Feature-HEqnF8iWzCFwkCDaz0p8t > > I include the full text below for ease of access and response. > > Yesterday, as I was migrating Setuptools from PyPI to Warehouse due to this > issue , Donald alerted me > to the fact that Warehouse does not plan to support documentation hosting > , > and as a result, integrated documentation for the packaging infrastructure > is deprecated. > > At first blush, this decision seems like a sound one - decouple > independent operations and allow a mature, established organization like > RTD to support and manage Python Package > documentation. I have nothing but respect for RTD; I reference them here > only as the prime example of a third-party doc hosting solution. > > After spending most of a day working on getting just one project > documentation (mostly) moved from PyPI to RTD, I've realized there are > several shortcomings with this approach. Integrated hosting provides > several benefits not offered by RTD: > > > - Uniform custody - the person or team that owns/maintains the package > also owns/maintains the documentation with a single point of management and > authorization. > - Shared credentials - the accounts used to administer the packages > are re-used to authenticate authorized users to the documentation. RTD > requires a separate set of accounts for each user involved with the > documentation. > - Shared naming - a name registered as a package is automatically > reserved for the documentation. > - Automatic linkage - PyPI provides a Package Documentation link on > each package that has documentation. > - Control over the build process - although RTD does provide excellent > hooks for customization and control, the process is essentially out of the > hands of the operator. 
Thus when issues like this > arise (probably > rarely), the user is at the mercy of the system. With PyPI hosting, it was > possible to manually build and upload docs when necessary and to control > every aspect of the build process, including platform, Python version, etc. > - One obvious choice - although RTD today is the obvious choice for > hosting, I can see other prominent alternatives - using Github pages or > self hosting or perhaps another service will emerge that's more integrated > with the packaging process. Having a sanctioned, integrated documentation > hosting gives users confidence that it's at least a good default choice if > not the best one. > - API access - Although RTD hopes to provide support for project > creation through an API, it currently only allows querying through the > public API. Therefore, it's not feasible through tools to mechanically > configure projects in RTD. Each project has to be manually configured and > administered. > > > For me, this last limitation is the most onerous. I maintain dozens of > projects, many of them in collaboration with other teams, and in many of > those, I rely on a model implementation > that leverages PyPI hosting as part > of the package release process to publish documentation. Moving each of > these projects to another hosting service would require the manual creation > and configuration of another project for each. As I consider the effort it > would take to port all of these projects and maintain them in a new > infrastructure, I'm inclined to drop documentation support for all but the > most prominent projects. > > The linkage provided by PyPI was a most welcome feature, and I'm really > sad to see it go. I'd be willing to give up item 5 (Control) if the other > items could be addressed. 
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jun 4 09:53:53 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 4 Jun 2016 09:53:53 -0400 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> Message-ID: <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io> > On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote: > > I think everyone would agree that having some nice doc hosting service available as an option would be, well, nice. Everyone likes options. But the current doc hosting is unpopular and feature poor, falls outside of the PyPI core mission, and is redundant with other more popular services, at a time when the PyPI developers are struggling to maintain core services. To add to what Nathaniel said here, there are a few problems with the current situation: Documentation hosting largely worked "OK" when it was just writing files out to disk, however we've since removed all use of the local disk (so that we can scale past 1 machine) and we're now storing things in S3. This makes documentation hosting particularly expensive in terms of API calls because we need to do expensive list key operations to discover which files exist (versus package files where we have a database full of files). On top of that, S3 is an eventually consistent store; while we have worked around this for package files (by changing the URL to include a hash of the file contents), this is not something that we can do for documentation. This means that while it will often be updated in a short time period, it could be hours and hours (or even days) for someone to see their uploaded documentation reflected on PyPI. 
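The content-hash workaround Donald mentions for package files can be illustrated with a small sketch; the key layout here is invented for illustration, not PyPI's actual scheme:

```python
# Sketch of content-addressed storage keys: because the URL embeds a hash
# of the file's bytes, a given URL can only ever serve one version of the
# content, which makes eventual consistency harmless for package files.
# Documentation URLs must stay stable across uploads, so this doesn't apply.
import hashlib

def content_addressed_key(prefix, filename, data):
    """Build a storage key that changes whenever the file contents change."""
    digest = hashlib.sha256(data).hexdigest()
    # shard by leading digest bytes, a common object-store layout
    return "{}/{}/{}/{}/{}".format(prefix, digest[:2], digest[2:4],
                                   digest, filename)
```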
I feel that the current documentation hosting is a bit of an attractive
nuisance for people. It's easy to get started with, but it lacks even
basic features - features that I don't have time to implement. This causes
people to request that these features be added, and for me to have to tell
them no (or to just ignore them) because I don't have the time for it.

I feel that people are going to be much better served by some mechanism
whose core competency is this than they are by a barely supported feature
on PyPI. If they're using Sphinx they can use RTD and get powerful
features like automatic building of the documentation on push; if they
prefer the "just upload a tarball" approach, they can get something
similar with S3 on their own, or using GitHub Pages or GitLab Pages.

So all in all, while I think there are some nice benefits to having this
attached to PyPI, we don't currently have the resources to actually make
it a good service, and it's better to drop a badly supported service than
to leave it there to take up the time and effort of both the PyPI
maintainers and the people using it without knowing that it's barely
supported.

For what it's worth, here's the previous discussion on this topic:
https://mail.python.org/pipermail/distutils-sig/2015-May/026327.html

--
Donald Stufft

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From waynejwerner at gmail.com Sat Jun 4 09:54:39 2016
From: waynejwerner at gmail.com (Wayne Werner)
Date: Sat, 4 Jun 2016 08:54:39 -0500
Subject: [Distutils] on integrated docs in Warehouse and PyPI
In-Reply-To:
References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com>
Message-ID:

On Sat, Jun 4, 2016 at 8:33 AM, Nathaniel Smith wrote:

> How do you propose to implement and support this hypothetical doc service,
> given PyPI's resource constraints? Who's going to implement and maintain
> it?
What other features are you suggesting be de-prioritized so that people > can focus on this instead? Opportunity cost is a real cost. > Would it be worthwhile to write up a PEP specifying some kind of documentation API? Having something well-specified would allow a motivated non-core-Warehouse dev to implement the details and (assuming it's as uncomplicated as I imagine) just create a PR. I assume it wouldn't really need much in the way of maintenance. But it would also give the RTD folks (and others?) the ability to implement their own tools for generating/hosting the docs. -W -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jun 4 10:13:56 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 4 Jun 2016 15:13:56 +0100 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> Message-ID: On 4 June 2016 at 14:54, Wayne Werner wrote: > On Sat, Jun 4, 2016 at 8:33 AM, Nathaniel Smith wrote: >> >> How do you propose to implement and support this hypothetical doc service, >> given PyPI's resource constraints? Who's going to implement and maintain it? >> What other features are you suggesting be de-prioritized so that people can >> focus on this instead? Opportunity cost is a real cost. > > Would it be worthwhile to write up a PEP specifying some kind of > documentation API? > > Having something well-specified would allow a motivated non-core-Warehouse > dev to implement the details and (assuming it's as uncomplicated as I > imagine) just create a PR. I assume it wouldn't really need much in the way > of maintenance. But it would also give the RTD folks (and others?) the > ability to implement their own tools for generating/hosting the docs. Possibly it would be useful to write a PEP, I don't know. But even that effort needs to come from someone willing to invest the time into the work. 
Maybe Jason is interested in doing this, I don't know. Ultimately, I don't
think there's any benefit in debating how a motivated individual should
take this forward - better to leave the decisions to whoever that may be.
The fundamental point is that nobody currently working on the packaging
infrastructure (which mostly means "Donald"...) has sufficient time to
pick this up, but that doesn't mean that someone else couldn't - just
don't expect any huge amount of assistance from the existing team.

Paul

From ncoghlan at gmail.com Sun Jun 5 00:33:57 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 4 Jun 2016 21:33:57 -0700
Subject: [Distutils] Alternate long_description formats, server-side
In-Reply-To:
References: <3C25C3E2-6A53-4009-A324-C8BCF01082D3@stufft.io> <1B803956-0CD9-4CC5-A7E6-06074A8F826D@stufft.io>
Message-ID:

On 3 Jun 2016 4:03 am, "Donald Stufft" wrote:
>
>> On Jun 2, 2016, at 9:16 PM, Nick Timkovich wrote:
>>
>> I can definitely believe there are more important things to do, but some
>> of us aren't versed in the intricacies of what's up top and don't have
>> the familiarity to dive in. Us GitHub plebs are just raring to work on a
>> feature we think is within our grasp ;-)
>
> Yup! Nick was speaking to why folks like myself haven't done it yet. If
> some enterprising person (perhaps you!) takes the time to write up a PEP
> and work it through the process, then there's no reason to wait for me
> (or Nick, or any of the other core team) to do it :)

Right, the key part of my post is the second paragraph: we actually now
have a relatively simple mechanism to capture proposals like this one,
which is issues and pull requests against the "specifications" section of
the Python packaging user guide on GitHub.

Previously a key sticking point was not having a way to document added
fields without a full PEP, which imposed way too much overhead for adding
a simple attribute like Provides-Extra or the Description-Format field
being considered here.
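[Editorial sketch: consuming the Description-Format field being discussed could amount to a rendering dispatch - pick a renderer from the declared format and fall back to plain text for anything unrecognized, as the thread suggests legacy PyPI would do. The format labels and function names here are hypothetical; only the field name comes from the thread.]

```python
import html


def render_plain(text):
    # Escape so arbitrary text still displays safely as HTML.
    return "<pre>{}</pre>".format(html.escape(text))


def render_rst(text):
    # Stand-in for a real reStructuredText renderer (e.g. readme_renderer).
    return "<div class='rst'>{}</div>".format(html.escape(text))


def render_markdown(text):
    # Stand-in for a real Markdown renderer.
    return "<div class='md'>{}</div>".format(html.escape(text))


# Hypothetical format labels; the thread names the field, not its values.
RENDERERS = {
    "rst": render_rst,
    "markdown": render_markdown,
    "plain": render_plain,
}


def render_description(text, description_format=None):
    # Unknown or missing formats fall back to plain text, mirroring the
    # suggestion that legacy PyPI treat unrecognized formats that way.
    renderer = RENDERERS.get(description_format, render_plain)
    return renderer(text)
```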
For this to work, I think the concrete changes you would need would be:

- Python packaging user guide to document the new field in the core
  metadata spec
- setuptools to support setting it
- Warehouse to respect it for all defined rendering formats
- potentially legacy PyPI to support respecting it at least for
  reStructuredText and Markdown (treating others as plain text)
- PyPUG again to provide usage docs once the default tools support it

Cheers,
Nick.

> --
> Donald Stufft

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Sun Jun 5 00:33:57 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 4 Jun 2016 21:33:57 -0700
Subject: [Distutils] on integrated docs in Warehouse and PyPI
In-Reply-To: <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io>
References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io>
Message-ID:

On 4 Jun 2016 6:54 am, "Donald Stufft" wrote:
>
>> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote:
>>
>> I think everyone would agree that having some nice doc hosting service
>> available as an option would be, well, nice. Everyone likes options. But
>> the current doc hosting is unpopular and feature poor, falls outside of
>> the PyPI core mission, and is redundant with other more popular
>> services, at a time when the PyPI developers are struggling to maintain
>> core services.
>
> To add to what Nathaniel said here, there are a few problems with the
> current situation:
>
> Documentation hosting largely worked "OK" when it was just writing files
> out to disk, however we've since removed all use of the local disk (so
> that we can scale past 1 machine) and we're now storing things in S3.
> This makes documentation hosting particularly expensive in terms of API
> calls because we need to do expensive list key operations to discover
> which files exist (versus package files where we have a database full of
> files).
Amazon do offer higher-level alternatives like https://aws.amazon.com/efs/
for use cases like PyPI's docs hosting that assume they have access to a
normal filesystem.

Given the credential management benefits of integrated docs, it does seem
worthwhile to me for the PSF to invest in a lowest-common-denominator
static file hosting capability, even if we also use the PyPI project admin
pages to promote ReadTheDocs and the static site hosting offered by
version control hosting companies.

Regards,
Nick.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tritium-list at sdamon.com Sun Jun 5 02:25:05 2016
From: tritium-list at sdamon.com (tritium-list at sdamon.com)
Date: Sun, 5 Jun 2016 02:25:05 -0400
Subject: [Distutils] on integrated docs in Warehouse and PyPI
In-Reply-To:
References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io>
Message-ID: <096101d1bef3$0067fb80$0137f280$@hotmail.com>

Is this something that can be deferred until after the launch of
Warehouse? I am, personally, more interested in the PyPI features that are
broken (because of the current codebase's unmaintainability) and that
cannot be offloaded to other services than in the ones that can be
offloaded.

From: Distutils-SIG [mailto:distutils-sig-bounces+tritium-list=sdamon.com at python.org] On Behalf Of Nick Coghlan
Sent: Sunday, June 5, 2016 12:34 AM
To: Donald Stufft
Cc: DistUtils mailing list
Subject: Re: [Distutils] on integrated docs in Warehouse and PyPI

On 4 Jun 2016 6:54 am, "Donald Stufft" wrote:
>
>> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote:
>>
>> I think everyone would agree that having some nice doc hosting service
>> available as an option would be, well, nice.
But the current doc hosting is unpopular and feature poor, falls outside of the PyPI core mission, and is redundant with other more popular services, at a time when the PyPI developers are struggling to maintain core services. > > > > To add to what Nathaniel said here, there are a few problems with the current situation: > > Documentation hosting largely worked ?OK? when it was just writing files out to disk, however we?ve since removed all use of the local disk (so that we can scale past 1 machine) and we?re now storing things in S3. This makes documentation hosting particularly expensive in terms of API calls because we need to do expensive list key operations to discover which files exist (versus package files where we have a database full of files). Amazon do offer higher level alternatives like https://aws.amazon.com/efs/ for use cases like PyPI's docs hosting that assume they have access to a normal filesystem. Given the credential management benefits of integrated docs, it does seem worthwhile to me for the PSF to invest in a lowest common denominator static file hosting capability, even if we also use the PyPI project admin pages to promote ReadTheDocs and the static site hosting offered by version control hosting companies. Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 5 05:18:25 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 5 Jun 2016 11:18:25 +0200 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io> Message-ID: On Sun, Jun 5, 2016 at 6:33 AM, Nick Coghlan wrote: > > On 4 Jun 2016 6:54 am, "Donald Stufft" wrote: > > > > > >> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote: > >> > >> I think everyone would agree that having some nice doc hosting service > available as an option would be, well, nice. 
Everyone likes options. But
> the current doc hosting is unpopular and feature poor, falls outside of
> the PyPI core mission, and is redundant with other more popular services,
> at a time when the PyPI developers are struggling to maintain core
> services.
>
> To add to what Nathaniel said here, there are a few problems with the
> current situation:
>
> Documentation hosting largely worked "OK" when it was just writing files
> out to disk, however we've since removed all use of the local disk (so
> that we can scale past 1 machine) and we're now storing things in S3.
> This makes documentation hosting particularly expensive in terms of API
> calls because we need to do expensive list key operations to discover
> which files exist (versus package files where we have a database full of
> files).
>
> Amazon do offer higher level alternatives like https://aws.amazon.com/efs/
> for use cases like PyPI's docs hosting that assume they have access to a
> normal filesystem.
>
> Given the credential management benefits of integrated docs,

From the RTD blog post linked by Nathaniel:

""
Our proposed grant, for $48,000, is to build a separate instance that
integrates with the Python Package Index's upcoming website, Warehouse.
This integration will provide automatic API reference documentation upon
package release, with authentication tied to PyPI and simple configuration
inside the distribution.
""

> it does seem worthwhile to me for the PSF to invest in a lowest common
> denominator static file hosting capability,

Seems like a very poor way to spend money and developer time imho. The
original post by Jason brings up a few shortcomings of RTD, but I'm amazed
that that leads multiple people here to conclude that starting a new doc
hosting effort is the right answer to that. The much better alternative
is: read the RTD contributing guide [1] and their plans for PyPI
integration [2], then start helping out with adding those features to RTD.
There is very little chance that a new effort as discussed here can come
close to RTD, which is by now a quite active project with over 200
contributors. Starting a new project should be done for the right reasons:
existing projects don't have and don't want to implement features you
need, you have a better technical design, you want to reimplement to learn
from it, etc. There are no such reasons here as far as I can tell.

If there's money left for packaging-related work, I'm sure we can think of
better ways to spend it. First thoughts:

- Accelerate PyPI integration plans for RTD
- Accelerate work on Warehouse
- Pay someone to review and merge distutils patches in the Python bug
  tracker

Final thought: there's nothing wrong with distributed infrastructure for
projects. A typical project today may have code hosting on GitHub or
Bitbucket, use multiple CI providers in parallel, use a separate code
coverage service, upload releases to PyPI, conda-forge and GitHub
Releases, and host docs on RTD. Integrating doc hosting with PyPI doesn't
really change that picture.

Ralf

[1] https://github.com/rtfd/readthedocs.org/blob/master/docs/contribute.rst
[2] https://github.com/rtfd/readthedocs.org/issues/1957

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wes.turner at gmail.com Sun Jun 5 06:44:23 2016
From: wes.turner at gmail.com (Wes Turner)
Date: Sun, 5 Jun 2016 05:44:23 -0500
Subject: [Distutils] on integrated docs in Warehouse and PyPI
In-Reply-To:
References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io>
Message-ID:

On Sunday, June 5, 2016, Ralf Gommers wrote:
>
> On Sun, Jun 5, 2016 at 6:33 AM, Nick Coghlan wrote:
>>
>> On 4 Jun 2016 6:54 am, "Donald Stufft" wrote:
>> >
>> >> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote:
>> >>
>> >> I think everyone would agree that having some nice doc hosting service
>> available as an option would be, well, nice.
Everyone likes options. But >> the current doc hosting is unpopular and feature poor, falls outside of the >> PyPI core mission, and is redundant with other more popular services, at a >> time when the PyPI developers are struggling to maintain core services. >> > >> > >> > >> > To add to what Nathaniel said here, there are a few problems with the >> current situation: >> > >> > Documentation hosting largely worked ?OK? when it was just writing >> files out to disk, however we?ve since removed all use of the local disk >> (so that we can scale past 1 machine) and we?re now storing things in S3. >> This makes documentation hosting particularly expensive in terms of API >> calls because we need to do expensive list key operations to discover which >> files exist (versus package files where we have a database full of files). >> >> Amazon do offer higher level alternatives like >> https://aws.amazon.com/efs/ for use cases like PyPI's docs hosting that >> assume they have access to a normal filesystem. >> >> Given the credential management benefits of integrated docs, >> > From the RTD blog post linked by Nathaniel: > "" > Our proposed grant, for $48,000, is to build a separate instance that > integrates with the Python Package Index?s upcoming website, Warehouse > . This integration will provide automatic > API reference documentation upon package release, with authentication tied > to PyPI and simple configuration inside the distribution. > "" > >> it does seem worthwhile to me for the PSF to invest in a lowest common >> denominator static file hosting capability, >> > Seems like a very poor way to spend money and developer time imho. The > original post by Jason brings up a few shortcomings of RTD, but I'm amazed > that that leads multiple people here to conclude that starting a new doc > hosting effort is the right answer to that. 
The much better alternative is: > read the RTD contributing guide [1] and their plans for PyPI integration > [2], then start helping out with adding those features to RTD. > +1 RTD builds static HTML from Sphinx ReStructuredText and Commonmark Markdown IDK what the best way to do eg epydoc API docs (static HTML hosting) is w/ RTD. - cp into ./docs/_static/ before the RTD build? for GitHub Pages, I like ghp-import (which pushes a directory and a .nojekyll file to a fresh commit in a gh-pages branch). BitBucket Pages is a bit different. https://pypi.python.org/pypi/ghp-import I wrote a tool called 'pgs' (as a bottle app) which will serve static HTML from a git-branch (with pa/th -> pa/th.html redirection, too) but haven't yet spent the time to add proper git bindings, so the per-file process overhead is n (after browser (and potentially WSGI) caching) git branch static HTML support is not in scope for Warehouse because Warehouse doesnt do SCM. .. note:: | project.readthedocs.org URLs are now | project.readthedocs.io (Because cookie origins) http://blog.readthedocs.com/securing-subdomains/ One great feature of RTD, IMHO, is that you can host historical versions of the docs with stable versioned URLs like /en/v1.0.0/ (and a /en/latest/ redirect). > There is very little chance that a new effort as discussed here can come > close to RTD, which is a quite active project with by now over 200 > contributors. Starting a new project should be done for the right reasons: > existing projects don't have and don't want to implement features you need, > you have a better technical design, you want to reimplement to learn from > it, etc. There are no such reasons here as far as I can tell. > > If there's money left for packaging related work, I'm sure we can think of > better ways to spend it. First thoughts: > - Accelerate PyPI integration plans for RTD > - add a link to the Docs: in README.rst (long_description)? 
> - Accelerate work on Warehouse > - Pay someone to review and merge distutils patches in the Python bug > tracker > > > Final thought: there's nothing wrong with distributed infrastructure for > projects. A typical project today may have code hosting on GitHub or > Bitbucket, use multiple CI providers in parallel, use a separate code > coverage service, upload releases to PyPI, conda-forge and GitHub Releases, > and host docs on RTD. Integrating doc hosting with PyPI doesn't change that > picture really. > +1 I like to include anchor-text-free links to all of these project links in the README.rst (and thus long_description) as RST inline blocks: [CI build badges] name ============ [description] | Wikipedia: ``_ | Homepage: https:// | Docs: | Src: git | Src: hg | Build: | PyPI: | Warehouse: | Conda: | Conda-forge: ( #PEP426 JSONLD / Metadata 2.0 could, ideally, include these links as structured data; this is a very simple solution which works w/ GitHub, BitBucket, PyPI, Warehouse, RTD, and anything else that'll render README.rst. An RST DL definition list or just a list would be more semantically helpful than inline blocks, but less visual space efficient ... YMMV ) awesome-sphinxdoc is an outstanding resource for Sphinx (and thus RTD) things like themes: https://github.com/yoloseem/awesome-sphinxdoc/blob/master/README.rst#themes > Ralf > > [1] > https://github.com/rtfd/readthedocs.org/blob/master/docs/contribute.rst > [2] https://github.com/rtfd/readthedocs.org/issues/1957 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jun 5 09:42:59 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 5 Jun 2016 09:42:59 -0400 Subject: [Distutils] Uploading to Warehouse Message-ID: <21875485-AC31-4C02-9440-B60F3E76C588@stufft.io> Hey all, As anyone here who has uploaded anything to PyPI recently is aware of, PyPI's uploading routine has been in a semi broken state for awhile now. 
It will regularly raise a 500 error (but will often still record the
release - sometimes only a partial release). Well, the good news is that
Warehouse's upload routines should generally be good enough to use now.
This will be more or less the same as if you uploaded to PyPI itself (they
share a backing data store), but hitting the newer, better code base
instead of the slowly decaying legacy code base.

You can upload via Warehouse by editing your ~/.pypirc file and making it
look something like this:

    [distutils]
    index-servers =
        pypi
        warehouse

    [pypi]
    username:
    password:

    [warehouse]
    repository: https://upload.pypi.io/legacy/
    username:
    password:

Alternatively, you can ditch the [warehouse] section and just totally
switch over by doing:

    [distutils]
    index-servers =
        pypi

    [pypi]
    repository: https://upload.pypi.io/legacy/
    username:
    password:

Then you can upload using ``twine upload -r warehouse dist/*`` or ``twine
upload dist/*``, based upon which of the above options you picked.

This code is not as battle tested as PyPI itself is, so if you run into
any bugs please file an issue at https://github.com/pypa/warehouse. This
code should generally give a successful response all of the time (given
the upload itself was good) and properly utilizes database transactions,
so it should completely eliminate the cases where you get a partial
upload.

Hopefully this will help solve some of the problems folks are having with
PyPI in the interim before we can completely switch over to Warehouse.

--
Donald Stufft From ncoghlan at gmail.com Sun Jun 5 18:04:16 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Jun 2016 15:04:16 -0700 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io> Message-ID: On 5 Jun 2016 2:18 am, "Ralf Gommers" wrote: > > > > On Sun, Jun 5, 2016 at 6:33 AM, Nick Coghlan wrote: >> >> >> On 4 Jun 2016 6:54 am, "Donald Stufft" wrote: >> > >> > >> >> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith wrote: >> >> >> >> I think everyone would agree that having some nice doc hosting service available as an option would be, well, nice. Everyone likes options. But the current doc hosting is unpopular and feature poor, falls outside of the PyPI core mission, and is redundant with other more popular services, at a time when the PyPI developers are struggling to maintain core services. >> > >> > >> > >> > To add to what Nathaniel said here, there are a few problems with the current situation: >> > >> > Documentation hosting largely worked ?OK? when it was just writing files out to disk, however we?ve since removed all use of the local disk (so that we can scale past 1 machine) and we?re now storing things in S3. This makes documentation hosting particularly expensive in terms of API calls because we need to do expensive list key operations to discover which files exist (versus package files where we have a database full of files). >> >> Amazon do offer higher level alternatives like https://aws.amazon.com/efs/ for use cases like PyPI's docs hosting that assume they have access to a normal filesystem. >> >> Given the credential management benefits of integrated docs, > > From the RTD blog post linked by Nathaniel: > "" > Our proposed grant, for $48,000, is to build a separate instance that integrates with the Python Package Index?s upcoming website, Warehouse. 
This integration will provide automatic API reference documentation upon package release, with authentication tied to PyPI and simple configuration inside the distribution. > "" >> >> it does seem worthwhile to me for the PSF to invest in a lowest common denominator static file hosting capability, > > Seems like a very poor way to spend money and developer time imho. The original post by Jason brings up a few shortcomings of RTD, but I'm amazed that that leads multiple people here to conclude that starting a new doc hosting effort is the right answer to that. It isn't about funding a new idea, it's about keeping an existing solution working rather than breaking it abruptly and forcing other time strapped community volunteers to change how they do things immediately, or else leave their users without documentation. Since there's been zero previous discussion with distutils-sig or the PSF Board regarding improving the integration of ReadTheDocs with PyPI, I had absolutely no idea they had sought a grant from Mozilla to invest time in improving that aspect of things. Regards, Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jun 5 18:21:50 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 5 Jun 2016 18:21:50 -0400 Subject: [Distutils] on integrated docs in Warehouse and PyPI In-Reply-To: References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io> Message-ID: <9E0738EB-B609-4552-B15D-319E01CA7CA4@stufft.io> > On Jun 5, 2016, at 6:04 PM, Nick Coghlan wrote: > > > On 5 Jun 2016 2:18 am, "Ralf Gommers" > wrote: > > > > > > > > On Sun, Jun 5, 2016 at 6:33 AM, Nick Coghlan > wrote: > >> > >> > >> On 4 Jun 2016 6:54 am, "Donald Stufft" > wrote: > >> > > >> > > >> >> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith > wrote: > >> >> > >> >> I think everyone would agree that having some nice doc hosting service available as an option would be, well, nice. 
Everyone likes options. But the current doc hosting is unpopular and feature poor, falls outside of the PyPI core mission, and is redundant with other more popular services, at a time when the PyPI developers are struggling to maintain core services. > >> > > >> > > >> > > >> > To add to what Nathaniel said here, there are a few problems with the current situation: > >> > > >> > Documentation hosting largely worked ?OK? when it was just writing files out to disk, however we?ve since removed all use of the local disk (so that we can scale past 1 machine) and we?re now storing things in S3. This makes documentation hosting particularly expensive in terms of API calls because we need to do expensive list key operations to discover which files exist (versus package files where we have a database full of files). > >> > >> Amazon do offer higher level alternatives like https://aws.amazon.com/efs/ for use cases like PyPI's docs hosting that assume they have access to a normal filesystem. > >> > >> Given the credential management benefits of integrated docs, > > > > From the RTD blog post linked by Nathaniel: > > "" > > Our proposed grant, for $48,000, is to build a separate instance that integrates with the Python Package Index?s upcoming website, Warehouse. This integration will provide automatic API reference documentation upon package release, with authentication tied to PyPI and simple configuration inside the distribution. > > "" > >> > >> it does seem worthwhile to me for the PSF to invest in a lowest common denominator static file hosting capability, > > > > Seems like a very poor way to spend money and developer time imho. The original post by Jason brings up a few shortcomings of RTD, but I'm amazed that that leads multiple people here to conclude that starting a new doc hosting effort is the right answer to that. 
> It isn't about funding a new idea, it's about keeping an existing
> solution working rather than breaking it abruptly and forcing other
> time-strapped community volunteers to change how they do things
> immediately, or else leave their users without documentation.

I mean, "abruptly"? It was originally posted on distutils-sig just over a
year ago, with a relevant Warehouse issue at the same time [1]. In
addition, we're not deleting the capability to do it in legacy, just not
adding it to Warehouse, so as we phase people over to Warehouse (like
earlier I posted asking for folks to start opting in to using Warehouse)
people will lose the ability to do this - but they'll be able to go back
and use legacy PyPI to regain it for as long as legacy PyPI is still
running. We're also going to switch the twine default (and the Python
default) for uploads prior to legacy getting shut down. So essentially,
we're going to get a phased rollout with the ability to revert to the
legacy behavior - at least until the legacy behavior is gone.

The plan as indicated a year ago also says we're not going to delete the
existing documentation (since that requires zero ongoing effort to keep
up; it's just chucking things in an S3 bucket and forgetting about it), so
their users won't be without documentation - they'll just be without *new*
documentation until they come up with an alternative plan (after having it
phased in over time).

> Since there's been zero previous discussion with distutils-sig or the
> PSF Board regarding improving the integration of ReadTheDocs with PyPI,
> I had absolutely no idea they had sought a grant from Mozilla to invest
> time in improving that aspect of things.

There was some discussion on the Warehouse issues [1], and Eric had been
talking to me on IRC about it for a while too.
Probably it would have been good to mention it on distutils-sig too, but
just to be clear that RTD hadn't done this in stealth mode :)

> Regards,
> Nick
> [1] https://github.com/pypa/warehouse/issues/509

--
Donald Stufft

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Sun Jun 5 18:34:31 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 5 Jun 2016 15:34:31 -0700
Subject: [Distutils] on integrated docs in Warehouse and PyPI
In-Reply-To: <9E0738EB-B609-4552-B15D-319E01CA7CA4@stufft.io>
References: <26DA3720-1712-44B0-8656-68127FD2BD54@jaraco.com> <1897BDDA-75E1-4564-A2F5-E3C15E1E5A5F@stufft.io> <9E0738EB-B609-4552-B15D-319E01CA7CA4@stufft.io>
Message-ID:

On 5 Jun 2016 3:21 pm, "Donald Stufft" wrote:
>
>> On Jun 5, 2016, at 6:04 PM, Nick Coghlan wrote:
>> Since there's been zero previous discussion with distutils-sig or the
>> PSF Board regarding improving the integration of ReadTheDocs with PyPI,
>> I had absolutely no idea they had sought a grant from Mozilla to invest
>> time in improving that aspect of things.
>
> There was some discussion on the Warehouse issues [1] and Eric had been
> talking to me on IRC about it for a while too. Probably it would have
> been good to mention it on distutils-sig too, but just to be clear that
> RTD hadn't done this in stealth mode :)

Yeah, with the additional info about RTD's Mozilla grant, I think Jason's
feedback here basically becomes a feature request list for the
Warehouse/RTD integration.

Ralf may have assumed I already knew about that, but I didn't -
Nathaniel's description of the link only made it sound remotely related
rather than directly on-topic, and I hadn't personally encountered it
through any other channels.

Cheers,
Nick.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From afe.young at gmail.com Sat Jun 4 11:01:32 2016 From: afe.young at gmail.com (Young Yang) Date: Sat, 4 Jun 2016 23:01:32 +0800 Subject: [Distutils] How to build python-packages depends on the output of other project In-Reply-To: References: Message-ID: Thanks for your detailed reply. @Ionel Thanks for reminding me about extending other classes! @Daniel @Chris Thanks for telling me about the extension solution. I didn't know that before. But project A, which I rely on, is a complex one and has its own CMakeLists.txt. I don't want to re-implement the compile process with the setup extension. The .so produced by project A will also be used by other programs. I think what makes it strange is that project A just compiles the .so but doesn't install it to the system (such as /usr/local/lib/) where normal programs can find it. I plan to change my solution like this: I put a `config.ini` file in my python-binding package. `config.ini` will contain the path to the .so produced by project A. When the user runs setup.py, it will try to find the .so at the path in `config.ini` first. - If the .so is found, the search result will be shown and `config.ini` will be updated accordingly. - If not found, setup.py will fail and prompt the user that `config.ini` must be configured before installation. Would this solution be more elegant in this case? On Fri, Jun 3, 2016 at 11:01 PM, Chris Barker wrote: > First, > > what you have is not all that inelegant -- it is the way to do it :-) > > But there are a few options when you are wrapping a C/C++ lib for python: > > Do you need to access that lib from other extensions or only from the one > extension? If others, then you pretty much need to build a shared lib and > make sure that all your extensions link to it. But if you only need to get > to it from one extension then there are three options: > > 1) don't compile a lib -- rather, build all your C/C++ code into the > extension itself.
you can simply add the files to the "source" list -- for > a straightforward lib, this is the easiest way to go. > > 2) statically link -- build the lib as a static lib, and then link it in > to your extension. Then there is no extra .so to keep track of and ship. At > least on *nix you can bypass the linker by passing the static lib in as > "extra_objects" -- I think. Something like that. > > 3) what you did -- build the .so and ship it with the extension. > > HTH, > > -Chris > > > > > > On Thu, Jun 2, 2016 at 7:35 PM, Young Yang wrote: > >> Hi, >> >> My current solution is like this: >> >> I get the source code of project A. And use `cmdclass={"install": >> my_install},` in my setup function in setup.py. >> >> my_install is a subclass of `from setuptools.command.install import >> install` >> ``` >> class my_install(install): >> def run(self): >> # Do something I want, such as compiling the code of project A >> and copying its output (i.e. the .so file) to my binding folder >> install.run(self) >> ``` >> >> At last I add these options to my setup function in setup.py to include >> the shared library in the installed package. >> ``` >> package_dir={'my_binding_package': 'my_binding_folder'}, >> package_data={ >> 'my_binding_package': ['Shared_lib.so'], >> }, >> include_package_data=True, >> ``` >> >> But I think there should be better ways to achieve this. >> Could anyone give me any elegant examples that achieve the same goal? >> >> Thanks in advance >> >> >> On Thu, Jun 2, 2016 at 11:05 AM, Young Yang wrote: >> >>> hi, >>> >>> I'm writing a python-binding for project A. >>> >>> My python-binding depends on the compile output of project A (it is a .so >>> file), and project A is not installed in the system (so we can't find >>> the .so files in the system library paths). >>> >>> What's the elegant way to package my python-binding, so that I can >>> install everything by running `python setup.py`?
>>> >>> Any suggestions and comments will be appreciated :) >>> >>> -- >>> Best wishes, >>> Young >>> >> >> >> >> -- >> Best wishes, >> Young Yang >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -- Best wishes, ?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From afe.young at gmail.com Tue Jun 7 08:58:44 2016 From: afe.young at gmail.com (Young Yang) Date: Tue, 7 Jun 2016 20:58:44 +0800 Subject: [Distutils] How to register my python package which relys on the output of other project on pypi Message-ID: Hi, I'm writing a python-binding project for project A, written in C++. Project A is on GitHub. It supports compiling itself to produce a .so on Linux or a .dll on Windows. My python-binding project depends on the .dll and .so files. Now I want to register my package on PyPI, so that users can install it just by running `pip install XXXX`. I have to support both Windows and Linux. The only solution I can figure out is to include both the .dll and the .so in my package. This will end up with both installed on every platform. It sounds dirty. Are there any better ways to achieve my goal? PS: The compile process of project A is a little complicated, so I can't use a Python Extension to build my dynamic library. This question follows on from https://mail.python.org/pipermail/distutils-sig/2016-June/029059.html Thanks in advance :) -- Best wishes, Young Yang -------------- next part -------------- An HTML attachment was scrubbed...
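One standard way to avoid shipping both binaries everywhere is a PEP 508 environment marker on a conditional dependency: publish the Windows DLLs as their own wheel and depend on it only when installing on Windows. A minimal sketch, where the project name my-binding and the DLL-only wheel mylib-dlls are hypothetical placeholders, not names from this thread:

```python
# setup.py sketch: a ':marker' key in extras_require acts as a
# conditional install requirement -- "mylib-dlls" (hypothetical, a wheel
# containing only the Windows DLLs) is pulled in on Windows alone.
from setuptools import setup

setup(
    name="my-binding",        # hypothetical project name
    version="0.1",
    packages=["my_binding"],  # hypothetical package
    extras_require={
        # Applied only when the environment marker is true at install time.
        ':sys_platform == "win32"': ["mylib-dlls"],
    },
)
```

Recent setuptools also accepts the marker inline, e.g. `install_requires=['mylib-dlls; sys_platform == "win32"']`; for Linux, prebuilt .so files are instead distributed inside manylinux1 wheels (PEP 513).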
URL: From dholth at gmail.com Tue Jun 7 11:05:05 2016 From: dholth at gmail.com (Daniel Holth) Date: Tue, 07 Jun 2016 15:05:05 +0000 Subject: [Distutils] How to register my python package which relys on the output of other project on pypi In-Reply-To: References: Message-ID: My pysdl2-cffi project has a dependency ':sys_platform=="win32"': ['sdl2_lib'] meaning 'depends on sdl2_lib only on Windows' (see its setup.py). sdl2_lib is a specially made wheel that only contains DLLs for Windows. On other platforms we don't try to install sdl2_lib, assuming you have already installed SDL2 some other way. If I wanted to distribute the Linux .so's on PyPI I could upload a second wheel with the 'manylinux1' tag, and pip would choose the right one. Distributing Linux binaries is more complicated than for Windows; see PEP 513 and https://pypi.python.org/pypi/auditwheel On Tue, Jun 7, 2016 at 10:32 AM Young Yang wrote: > Hi, > > I'm writing a python-binding project for project A, written in C++. > Project A is on GitHub. It supports compiling itself to produce a .so on > Linux or a .dll on Windows. > My python-binding project depends on the .dll and .so files. > > Now I want to register my package on PyPI, so that users can install my > package just by running `pip install XXXX`. > > I have to support both Windows and Linux. The only solution I can figure > out is to include both the .dll and the .so in my package. This will end up with > both installed on every platform. It sounds dirty. > > Are there any better ways to achieve my goal? > > > PS: The compile process of project A is a little complicated, so I can't > use a Python Extension to build my dynamic library.
> > This question follows on from > https://mail.python.org/pipermail/distutils-sig/2016-June/029059.html > > Thanks in advance :) > > > -- > Best wishes, > Young Yang > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Jun 7 15:59:01 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 7 Jun 2016 15:59:01 -0400 Subject: [Distutils] Deprecating PyPI as an OpenID Provider Message-ID: Just a heads up, I do not intend to implement the "Use PyPI as an OpenID Provider" functionality in Warehouse. Looking at the actual usage of this, it appears that very few people have ever taken advantage of it (a total of 201 (user, site) combinations in the database of what people have set to "Allow Always") and I don't believe it's worth continuing. Longer term, Python should get a centralized sign-on across all its web properties, which could re-implement this feature for folks who want it. If you're using this for anything, I recommend migrating off of it onto something else. Warehouse Issue: https://github.com/pypa/warehouse/issues/60 -- Donald Stufft From graffatcolmingov at gmail.com Tue Jun 7 16:24:08 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 7 Jun 2016 15:24:08 -0500 Subject: [Distutils] Deprecating PyPI as an OpenID Provider In-Reply-To: References: Message-ID: On Tue, Jun 7, 2016 at 2:59 PM, Donald Stufft wrote: > Longer term, Python should get a centralized sign-on across all its web properties, which could re-implement this feature for folks who want it. I agree. This should be a goal for the PSF. This would help with elections and a number of other things as well. I think there are a few solutions like FreeIPA if anyone on this list is interested in looking into helping the PSF implement this.
Cheers, Ian From tseaver at palladion.com Tue Jun 7 16:33:46 2016 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 7 Jun 2016 16:33:46 -0400 Subject: [Distutils] Deprecating PyPI as an OpenID Provider In-Reply-To: References: Message-ID: <57572FAA.90308@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 06/07/2016 04:24 PM, Ian Cordasco wrote: > On Tue, Jun 7, 2016 at 2:59 PM, Donald Stufft > wrote: >> Longer term, Python should get a centralized Sign on across all it?s >> web properties, which could re-implement this feature for folks who >> want it. > > I agree. This is should be a goal for the PSF. This would help with > elections and a number of other things as well. I think there are a > few solutions like FreeIPA if anyone on this list is interested in > looking into helping the PSF implement this. IIRC, Christian Heimes works on FreeIPA in his day job for RedHat. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXVy+lAAoJEPKpaDSJE9HYIBcQAImzTHtAzENNR//GswKjxrCT zYxSQzdsQtBjpWOB279roeHfpZRSf6xhy9HjUhOWGIQ+nDgEcIgDUIU1BIg2ZPgJ zQcMcjV0wiOAeyeVdgRtKXw3o4UW4XijuS43C29s+rxHvGPLPziQGQLWk7SZdzjf YAi12Us96rj2m0uuXjpOPVZjrztI0ABdcA2yRJuDYiMtVONH/4TP86n6NHnUznO1 RCoPsfLTn7BbU1hMDOdrGKOED4QYNskiDAiB5qtL/uB8zDgKRLlIhxTHq9/d2T7D 0836zTvNwoo/TwBwfUFn1Op8CXydls7yGfpRq43NNqgBNwhWxzImpfVTxB+D8MHm NCd4tPvgSJdyov2loYqToCytH3d+ueKdL77PG9ag7GCuOcrTggbBLhG5aaNknOSo LN/yZgF+geuGp81WlHNwl3duo0JgapX/ilQg96IZSy19DzZa4/mihJu1SurGB53u Jj7bpz10wCBMK5NJqzFYJ/gcAEqPQ7E6jzFQps65Eqs3HQTb33xOryptpmc7kZSR fT/NJG+0ctwjdAbr7pZX0Jyg8G2hWEOsi+pfY/22Ejs4gxmNnMWqqVcLzBm7glB2 sDc4sm573WtDRM6+dkCyyjKZMYwvWSbsIlWiIo27pdHXOTx4fGKxiJgqt4cD1IhC DN02co6A9A+t5Meory24 =fMAO -----END PGP SIGNATURE----- From ncoghlan at gmail.com Tue Jun 7 17:48:13 2016 From: ncoghlan at gmail.com 
(Nick Coghlan) Date: Tue, 7 Jun 2016 14:48:13 -0700 Subject: [Distutils] Deprecating PyPI as an OpenID Provider In-Reply-To: <57572FAA.90308@palladion.com> References: <57572FAA.90308@palladion.com> Message-ID: On 7 June 2016 at 13:33, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 06/07/2016 04:24 PM, Ian Cordasco wrote: >> On Tue, Jun 7, 2016 at 2:59 PM, Donald Stufft >> wrote: >>> Longer term, Python should get a centralized sign-on across all its >>> web properties, which could re-implement this feature for folks who >>> want it. >> >> I agree. This should be a goal for the PSF. This would help with >> elections and a number of other things as well. I think there are a >> few solutions like FreeIPA if anyone on this list is interested in >> looking into helping the PSF implement this. > > IIRC, Christian Heimes works on FreeIPA in his day job for Red Hat. He does, and improving the identity management situation is also on the PSF Infrastructure Manager's TODO list - even properly documenting all our identity silos and access management requirements will take a bit of work (as in addition to the community-facing services, there are also PSF staff services and backend infrastructure to take into account). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From sylvain.corlay at gmail.com Wed Jun 8 13:38:24 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 8 Jun 2016 13:38:24 -0400 Subject: [Distutils] Distutils improvements regarding header installation and building C extension modules In-Reply-To: References: Message-ID: Hi all, I am following up on this: - Is there any chance to get the has_flag and has_functions patches in? - Would there be any interest in a patch adding the *pip.locations.distutils_scheme* functionality to distutils? - Same question for the install_headers command item mentioned earlier.
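For context on the has_flag item: the idea (a pattern later popularized by pybind11's setup helpers) is to probe whether the compiler accepts a flag by test-compiling a trivial file. A rough sketch of such a helper written against the public distutils CCompiler API -- an illustration of the pattern, not the actual patch under discussion:

```python
# Probe the default C compiler: try to compile an empty main() with the
# candidate flag and report whether the compilation succeeded.
import os
import tempfile
from distutils.ccompiler import new_compiler
from distutils.errors import CompileError
from distutils.sysconfig import customize_compiler

def has_flag(flag):
    """Return True if the default C compiler accepts `flag`."""
    compiler = new_compiler()
    customize_compiler(compiler)  # pick up the usual CC/CFLAGS settings
    with tempfile.TemporaryDirectory() as tmpdir:
        src = os.path.join(tmpdir, "probe.c")
        with open(src, "w") as f:
            f.write("int main(void) { return 0; }\n")
        try:
            # extra_postargs carries the flag being probed; a rejected
            # flag surfaces as a CompileError.
            compiler.compile([src], output_dir=tmpdir, extra_postargs=[flag])
        except CompileError:
            return False
    return True
```

A setup.py could then do, for example, `extra_compile_args=['-fvisibility=hidden'] if has_flag('-fvisibility=hidden') else []`.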
Thanks, Sylvain On Wed, May 25, 2016 at 3:07 PM, Sylvain Corlay wrote: > On Wed, May 25, 2016 at 8:35 PM, Tim Smith wrote: >> >> As a Homebrew maintainer this sounds like something that Homebrew >> could influence. Are there any packages in the wild that use this >> mechanism? It seems that headers are mostly installed beneath >> site-packages. I don't have strong feelings about whether Homebrew >> should have better support for install_headers or whether that would >> be straightforward to implement but IIRC we've had no prior reports of >> this causing trouble. >> >> Thanks, >> Tim > > Thanks Tim, > > The OS X Python install is the only one I know of where headers installed > with the install_headers command are not placed in a subdirectory of > `sysconfig.get_path('include')`. Besides, `pip.locations.distutils_scheme` returns > the right include directory even in the case of Homebrew. > > My point here was that this pip.locations function should probably be a > feature of distutils itself. Although I would probably not have discovered > the need for it if Homebrew was placing extra headers in the same place as > everyone else! > > I don't think that using package_data to place headers under > site-packages is the right thing to do in general. Then you need to rely on > some Python function to return the include directory... > > Sylvain > > > -------------- next part -------------- An HTML attachment was scrubbed...
Although IRC has served us well, there are systems now with clear feature advantages, which are crucial to my continuous participation: - always-on experience; even if one's device is suspended or otherwise offline. - mobile support -- the in-cloud experience is essential for low-power and intermittently connected devices. - push notifications allow a project leader to remain largely inactive in a channel, but have their attention raised promptly when users make a relevant mention. - continuous, integrated logging for catching up on the conversation. Both Gitter and Slack offer the experience I'm after, with Gitter feeling like a better fit for open-source projects (or groups of them). I've tried using IRCCloud, and it provides a similar, suitable experience on the same IRC infrastructure, with one big difference. While Gitter and Slack offer the above features for free, IRCCloud requires a $5/user/month subscription (otherwise, connections are dropped after two hours). I did reach out to them to see if they could offer some professional consideration for contributors, but I haven't heard from them. Furthermore, IRCCloud requires an additional account on top of the account required for Freenode. In addition to the critical features above, Gitter and Slack offer other advantages: - For Gitter, single sign-on using the same GitHub account for authentication and authorization means no extra accounts. Slack requires one new account. - An elegant web-based interface as a first-class feature, a lower barrier of entry for users. - Zero-install or config. - Integration with source code and other systems. It's because of the limitations of these systems that I find myself rarely in IRC, only joining when I have a specific issue, even though I'd like to be permanently present.
Donald has offered to run an IRC bouncer for me, but such a bouncer is only a half-solution, not providing the push notifications, mobile apps (IRC apps exist, but just get disconnected, and often fail to connect on mobile provider networks), or integrated logging. I note that both Gitter and Slack offer IRC interfaces, so those users who prefer their IRC workflow can continue to use that if they so choose. I know there are other alternatives, like self-hosted solutions, but I'd like to avoid adding the burden of administering such a system. If someone wanted to take on that role, I'd be open to that alternative. I'd like to propose we move #pypa-dev to /pypa/dev and #pypa to /pypa/support in Gitter. Personally, the downsides to moving to Gitter (other than enacting the move itself) seem negligible. What do you think? What downsides am I missing? From tritium-list at sdamon.com Fri Jun 10 09:53:00 2016 From: tritium-list at sdamon.com (Alex Walters) Date: Fri, 10 Jun 2016 09:53:00 -0400 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: <042501d1c31f$67565a50$36030ef0$@sdamon.com> Other than the fact that we would now need to redirect the 2000 (at any given time) users of #python, and 200 (roughly) users of #pypa to an inherently commercial and, frankly, hard-to-discover service? It's tantamount to discontinuing chat support; if that's what you want to do, so be it, but that's what it is. (I have no argument about moving -dev) > -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig-bounces+tritium- > list=sdamon.com at python.org] On Behalf Of Jason R.
Coombs > Sent: Friday, June 10, 2016 9:22 AM > To: distutils-sig at python.org > Subject: [Distutils] Switch PyPA from IRC to Gitter or similar > > In #pypa-dev, I raised the possibility of moving our PyPA support channels > from IRC to another hosted solution that enables persistence. Although IRC > has served us well, there are systems now with clear feature advantages, > which are crucial to my continuous participation: > > - always-on experience; even if one's device is suspended or otherwise > offline. > - mobile support -- the in-cloud experience is essential for low power and > intermittently connected devices. > - push notifications allow a project leader to remain largely inactive in a > channel, but attention raised promptly when users make a relevant mention. > - continuous, integrated logging for catching up on the conversation. > > Both Gitter and Slack offer the experience I'm after, with Gitter feeling like a > better fit for open-source projects (or groups of them). > > I've tried using IRCCloud, and it provides a similar, suitable experience on the > same IRC infrastructure, with one big difference. While Gitter and Slack offer > the above features for free, IRCCloud requires a $5/user/month subscription > (otherwise, connections are dropped after two hours). I did reach out to > them to see if they could offer some professional consideration for > contributors, but I haven't heard from them. Furthermore, IRCCloud requires > an additional account on top of the account required for Freenode. > > In addition to the critical features above, Gitter and Slack offer other > advantages: > > - For Gitter, single sign-on using the same GitHub account for authentication > and authorization means no extra accounts. Slack requires one new account. > - An elegant web-based interface as a first-class feature, a lower barrier of > entry for users. > - Zero-install or config. > - Integration with source code and other systems.
> > It's because of the limitations of these systems that I find myself rarely in > IRC, only joining when I have a specific issue, even though I'd like to be > permanently present. > > Donald has offered to run an IRC bouncer for me, but such a bouncer is only a > half-solution, not providing the push notifications, mobile apps (IRC apps > exist, but just get disconnected, and often fail to connect on mobile provider > networks), or integrated logging. > > I note that both Gitter and Slack offer IRC interfaces, so those users who > prefer their IRC workflow can continue to use that if they so choose. > > I know there are other alternatives, like self-hosted solutions, but I'd like to > avoid adding the burden of administering such a system. If someone wanted > to take on that role, I'd be open to that alternative. > > I'd like to propose we move #pypa-dev to /pypa/dev and #pypa to > /pypa/support in Gitter. > > Personally, the downsides to moving to Gitter (other than enacting the move > itself) seem negligible. What do you think? What downsides am I missing? > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From graffatcolmingov at gmail.com Fri Jun 10 10:24:46 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 10 Jun 2016 09:24:46 -0500 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, Jun 10, 2016 at 8:22 AM, Jason R. Coombs wrote: > In #pypa-dev, I raised the possibility of moving our PyPA support channels from IRC to another hosted solution that enables persistence. Although IRC has served us well, there are systems now with clear feature advantages, which are crucial to my continuous participation: I'm choosing not to read this as a threat.
> - always-on experience; even if one's device is suspended or otherwise offline. > - mobile support -- the in-cloud experience is essential for low power and intermittently connected devices. > - push notifications allow a project leader to remain largely inactive in a channel, but attention raised promptly when users make a relevant mention. > - continuous, integrated logging for catching up on the conversation. So here's a question: Why are these crucial to you? You've explained potential benefits but not why they're crucial to you and should be crucial to anyone else. Why do you need an "always-on experience"? Why do you feel required to always be on? Do other people tell you that you need to always be on chat? Push notifications allow for prompt attention to mentions, but are all mentions worthy of a push notification? Do we all need to be herded to platforms that will spam us because someone mentioned us by nick or name? I personally see this as a net negative. I don't need an email or push notification to my phone because someone said my name in a channel. That's a distraction. It prevents me from working on things because it creates a false sense of alarm. Continuous logging is on for #pypa and #pypa-dev as I understand it. Surely it's not "integrated" into your chat client, but it's not as if the logging doesn't exist. > Both Gitter and Slack offer the experience I'm after, with Gitter feeling like a better fit for open-source projects (or groups of them). I've tried using Gitter several times in the past. Unless they've fixed their bugs related to sending me emails every day about activity in a channel I spoke in once and left, I think they should be eliminated. Slack has also had several outages lately that should also disqualify it (besides the fact that it's incredibly closed source and will be expensive to maintain logs in). > I've tried using IRCCloud, and it provides a similar, suitable experience on the same IRC infrastructure, with one big difference.
While Gitter and Slack offer the above features for free, IRCCloud requires a $5/user/month subscription (otherwise, connections are dropped after two hours). I did reach out to them to see if they could offer some professional consideration for contributors, but I haven't heard from them. Furthermore, IRCCloud requires an additional account on top of the account required for Freenode. > > In addition to the critical features above, Gitter and Slack offer other advantages: > > - For Gitter, single sign-on using the same GitHub account for authentication and authorization means no extra accounts. Slack requires one new account. IRC requires one new account. > - An elegant web-based interface as a first-class feature, a lower barrier of entry for users. webchat.freenode.net may not be elegant, but it is first-class. > - Zero-install or config. Slack pesters you to install their desktop client, and if you don't want constant channel notifications you do have to configure it. webchat.freenode.net offers no config. > - Integration with source code and other systems. Do you mean things like GitHub? GitHub already integrates with IRC. What special kind of integration do you think Gitter and Slack have that GitHub's IRC integration doesn't? > It's because of the limitations of these systems that I find myself rarely in IRC, only joining when I have a specific issue, even though I'd like to be permanently present. > > Donald has offered to run an IRC bouncer for me, but such a bouncer is only a half-solution, not providing the push notifications, mobile apps (IRC apps exist, but just get disconnected, and often fail to connect on mobile provider networks), or integrated logging. > > I note that both Gitter and Slack offer IRC interfaces, so those users who prefer their IRC workflow can continue to use that if they so choose.
They're very poor IRC interfaces, making people who want to use a simple, free standard into second-class citizens (which is par for the course as far as open-source projects go). > I know there are other alternatives, like self-hosted solutions, but I'd like to avoid adding the burden of administering such a system. If someone wanted to take on that role, I'd be open to that alternative. > > I'd like to propose we move #pypa-dev to /pypa/dev and #pypa to /pypa/support in Gitter. > > Personally, the downsides to moving to Gitter (other than enacting the move itself) seem negligible. What do you think? What downsides am I missing? With IRC we can run our own logging solution. Gitter used to have a similar model to Slack, where you had to pay to access all of the logs. Further, to allow anyone to use Slack, we would have to set up and maintain a separate webapp (which can be deployed to Heroku, but for the kind of traffic we would expect, would actively cost us money to run there). From fred at fdrake.net Fri Jun 10 12:36:20 2016 From: fred at fdrake.net (Fred Drake) Date: Fri, 10 Jun 2016 12:36:20 -0400 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, Jun 10, 2016 at 10:24 AM, Ian Cordasco wrote: > So here's a question: Why are these crucial to you? You've explained > potential benefits but not why they're crucial to you and should be > crucial to anyone else. For people who want an always-on presence, there's also Quassel IRC: http://quassel-irc.org/ Not sure if that fits Jason's goals, but avoids impacting other users. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind."
--Albert Einstein From waynejwerner at gmail.com Fri Jun 10 13:28:06 2016 From: waynejwerner at gmail.com (Wayne Werner) Date: Fri, 10 Jun 2016 12:28:06 -0500 (CDT) Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, 10 Jun 2016, Jason R. Coombs wrote: > Personally, the downsides to moving to Gitter (other than enacting the move itself) seem negligible. What do you think? What downsides am I missing? That may be the main issue. Given the responses thus far, it seems like most people are happy with the IRC experience. I know that I mostly prefer the IRC experience - indeed, I actually access *my* Gitter rooms through their IRC bridge. Here's a question - why not run something like Hubot or https://gist.github.com/RobertSzkutak/1326452 or just use logging with irssi? If you're using a bot, it's pretty trivial to have your bot send you emails, or set up an Arduino/Raspberry Pi and a servo so that when you're mentioned you *literally* get pushed :) You could probably configure irssi to work the same way, though I've never bothered to do such things. You should remember that the political effort of changing the status quo *is* an effort, and there is always a cost associated. Sometimes it's worth the effort (Python 3 and Unicode), and sometimes it's not. Personally, I'm already connected to both the Gitter bridge and Freenode through irssi, though I'm not present in the #pypa room. I could, and thinking about it I probably *should*, join the #python and #pypa rooms and configure irssi to join those rooms when I join Freenode. If you really just prefer the Gitter interface, you *could* always just set up your bot to relay messages from IRC to your own Gitter room, and vice versa.
Unless that violates someone's TOS, I really don't know, because that's not something that I'm interested in :) -W From njs at pobox.com Fri Jun 10 13:54:02 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 10 Jun 2016 10:54:02 -0700 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Jun 10, 2016 07:25, "Ian Cordasco" wrote: > [...] > I've tried using Gitter several times in the past. Unless they've > fixed their bugs related to sending me emails every day about activity > in a channel I spoke in once and left, I think they should be > eliminated. As a point of information, there's definitely a toggle you can flip to turn this off. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Fri Jun 10 14:30:43 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Fri, 10 Jun 2016 11:30:43 -0700 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: Chef is in the process of navigating an IRC->Slack migration. https://github.com/chef/chef-rfc/blob/master/rfc074-community-slack.md is the document I wrote up on the pros and cons of various options. Gitter has a better UX for new users compared to Slack because it was built for public use from the start, but their actual chat UI/UX isn't as polished as Slack's. --Noah > On Jun 10, 2016, at 6:22 AM, Jason R. Coombs wrote: > > In #pypa-dev, I raised the possibility of moving our PyPA support channels from IRC to another hosted solution that enables persistence. Although IRC has served us well, there are systems now with clear feature advantages, which are crucial to my continuous participation: > > - always-on experience; even if one's device is suspended or otherwise offline. > - mobile support --
the in-cloud experience is essential for low power and intermittently connected devices. > - push notifications allow a project leader to remain largely inactive in a channel, but have attention raised promptly when users make a relevant mention. > - continuous, integrated logging for catching up on the conversation. > > Both Gitter and Slack offer the experience I'm after, with Gitter feeling like a better fit for open-source projects (or groups of them). > > I've tried using IRCCloud, and it provides a similar, suitable experience on the same IRC infrastructure, with one big difference. While Gitter and Slack offer the above features for free, IRCCloud requires a $5/user/month subscription (otherwise, connections are dropped after two hours). I did reach out to them to see if they could offer some professional consideration for contributors, but I haven't heard from them. Furthermore, IRCCloud requires an additional account on top of the account required for Freenode. > > In addition to the critical features above, Gitter and Slack offer other advantages: > > - For Gitter, single sign-on using the same Github account for authentication and authorization means no extra accounts. Slack requires one new account. > - An elegant web-based interface as a first-class feature, a lower barrier of entry for users. > - Zero-install or config. > - Integration with source code and other systems. > > It's because of the limitations of these systems that I find myself rarely in IRC, only joining when I have a specific issue, even though I'd like to be permanently present. > > Donald has offered to run an IRC bouncer for me, but such a bouncer is only a half-solution, not providing the push notifications, mobile apps (IRC apps exist, but just get disconnected, and often fail to connect on mobile provider networks), or integrated logging. > > I note that both Gitter and Slack offer IRC interfaces, so those users who prefer their IRC workflow can continue to use that if they so choose.
> > I know there are other alternatives, like self-hosted solutions, but I'd like to avoid adding the burden of administering such a system. If someone wanted to take on that role, I'd be open to that alternative. > > I'd like to propose we move #pypa-dev to /pypa/dev and #pypa to /pypa/support in gitter. > > Personally, the downsides to moving to Gitter (other than enacting the move itself) seem negligible. What do you think? What downsides am I missing? > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From graffatcolmingov at gmail.com Fri Jun 10 14:47:01 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 10 Jun 2016 13:47:01 -0500 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, Jun 10, 2016 at 12:54 PM, Nathaniel Smith wrote: > On Jun 10, 2016 07:25, "Ian Cordasco" wrote: >> > [...] >> I've tried using Gitter several times in the past. Unless they've >> fixed their bugs related to sending me emails every day about activity >> in a channel I spoke in once and left, I think they should be >> eliminated. > > As a point of information, there's definitely a toggle you can flip to turn > this off. At the time I did flip that toggle and the only thing that worked was removing my account from Gitter. If that toggle actually does something now, they at least eventually fix bugs.
From robertc at robertcollins.net Fri Jun 10 16:42:01 2016 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 11 Jun 2016 08:42:01 +1200 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On 11 June 2016 at 01:22, Jason R. Coombs wrote: > In #pypa-dev, I raised the possibility of moving our PyPA support channels from IRC to another hosted solution that enables persistence. Although IRC has served us well, there are systems now with clear feature advantages, which are crucial to my continuous participation: FWIW, I won't be switching to gitter or slack - I find the push notifications and always-on experience exhausting. It gets in the way of thinking or doing anything useful. I often leave my phone in priority-only mode to shut facebook and twitter and such things up for hours - or days - at a time. That's presuming there is an IRC bridge in whatever the community as a whole does. If there isn't, then I'd set up an account for use from a browser when desired, but from the sounds of it I'd need to blacklist their email address to keep noise down. -Rob From glyph at twistedmatrix.com Fri Jun 10 17:53:30 2016 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 10 Jun 2016 14:53:30 -0700 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: > On Jun 10, 2016, at 6:22 AM, Jason R. Coombs wrote: > > In #pypa-dev, I raised the possibility of moving our PyPA support channels from IRC to another hosted solution that enables persistence. Opinions about free software vs. federation vs. user experience aside, I don't think that this is a worthwhile or even realistic goal.
There are many people who are going to ask for support on IRC, and there are many who are willing to provide it there. Therefore "support channels" on IRC will continue to exist in some form or fashion. It seems fine to me that you would start one on Gitter or Slack - other folks seem to like those channels as well. There might be a useful discussion buried in here about where official real-time _development_ discussion should go - but then, any weighty decisions should be discussed on the mailing list anyway to ensure that people from different time zones can see the discussion and participate in it asynchronously. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Fri Jun 10 17:58:37 2016 From: glyph at twistedmatrix.com (Glyph) Date: Fri, 10 Jun 2016 14:58:37 -0700 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: <9E38FA0C-BD4C-4954-8ADC-A7928B2F26CF@twistedmatrix.com> > On Jun 10, 2016, at 1:42 PM, Robert Collins wrote: > > FWIW, I won't be switching to gitter or slack - I find the push > notifications and always-on experience exhausting. It gets in the way > of thinking or doing anything useful. I often leave my phone in > priority-only mode to shut facebook and twitter and such things up for > hours - or days - at a time. If you read my blog, you'll know this already -- I also find Slack tiring in exactly this way. I acknowledge the usability benefits over IRC, but the effect of those benefits is mostly just prompting me to spend more time staring at chat. Which is not what I want to be doing. -glyph -------------- next part -------------- An HTML attachment was scrubbed...
URL: From graffatcolmingov at gmail.com Fri Jun 10 18:03:18 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 10 Jun 2016 17:03:18 -0500 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: <9E38FA0C-BD4C-4954-8ADC-A7928B2F26CF@twistedmatrix.com> References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> <9E38FA0C-BD4C-4954-8ADC-A7928B2F26CF@twistedmatrix.com> Message-ID: On Fri, Jun 10, 2016 at 4:58 PM, Glyph wrote: > > On Jun 10, 2016, at 1:42 PM, Robert Collins > wrote: > > FWIW, I won't be switching to gitter or slack - I find the push > notifications and always-on experience exhausting. It gets in the way > of thinking or doing anything useful. I often leave my phone in > priority-only mode to shut facebook and twitter and such things up for > hours - or days - at a time. > > > If you read my blog, you'll know this already -- I also find Slack tiring in > exactly this way. I acknowledge the usability benefits over IRC, but the > effect of those benefits is mostly just prompting me to spend more time > staring at chat. Which is not what I want to be doing. It's in the service's best interest to constantly grab your attention because it provides a false sense of dependence. It's why Gitter spammed me, it's why Slack emails me every time someone says @channel, and it's why I've uninstalled Twitter from my phone (so I don't get 2 or more notifications for someone trying to interact with me there).
From annaraven at gmail.com Fri Jun 10 18:07:01 2016 From: annaraven at gmail.com (Anna Ravenscroft) Date: Fri, 10 Jun 2016 15:07:01 -0700 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, Jun 10, 2016 at 2:53 PM, Glyph wrote: > There might be a useful discussion buried in here about where official > real-time _development_ discussion should go - but then, any weighty > decisions should be discussed on the mailing list anyway to ensure that > people from different time zones can see the discussion and participate in > it asynchronously. > +1 for asynchronous discussion and participation. -- cordially, Anna -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jun 11 14:38:30 2016 From: brett at python.org (Brett Cannon) Date: Sat, 11 Jun 2016 18:38:30 +0000 Subject: [Distutils] Switch PyPA from IRC to Gitter or similar In-Reply-To: References: <6C367582-6AA5-487D-A3BD-1913D1546520@jaraco.com> Message-ID: On Fri, 10 Jun 2016 at 07:25 Ian Cordasco wrote: > On Fri, Jun 10, 2016 at 8:22 AM, Jason R. Coombs > wrote: > > In #pypa-dev, I raised the possibility of moving our PyPA support > channels from IRC to another hosted solution that enables persistence. > Although IRC has served us well, there are systems now with clear feature > advantages, which are crucial to my continuous participation: > > I'm choosing not to read this as a threat. > I don't think it was a threat to begin with. For it to be a threat it would somehow need to affect you personally. I think all Jason was trying to do was to point out this is not some idle conversation to him, but in fact impacts how he participates in communications regarding setuptools and PyPA. > > > - always-on experience; even if one's device is suspended or otherwise > offline. > > - mobile support --
the in-cloud experience is essential for low power > and intermittently connected devices. > > - push notifications allow a project leader to remain largely inactive > in a channel, but attention raised promptly when users make a relevant > mention. > > - continuous, integrated logging for catching up on the conversation. > > So here's a question: Why are these crucial to you? You've explained > potential benefits but not why they're crucial to you and should be > crucial to anyone else. > > Why do you need an "always-on experience"? Why do you feel required to > always be on? Do other people tell you that you need to always be on > chat? > > Push notifications allow for prompt attention to mentions, but are all > mentions push notification worthy? Do we all need to be herded to > platforms that will spam us because someone mentioned us by nick or > name? I personally see this as a net negative. I don't need an email > or push notification to my phone because someone said my name in a > channel. That's a distraction. It prevents me from working on things > because it creates a false sense of alarm. > I think there's a difference between getting a push notification that rings your phone and one that simply sits in your notification bar. For the former that could get annoying if abused, but for the latter it could be handy if e.g. you are working on a bug and you happen to know that Jason could help speed things up for you by answering a question if he happens to be available. To me there's three levels of engagement: (1) I'm willing to be interrupted, (2) if I'm available I'll notice, else I will ignore it, and (3) just collect all the messages and I will check the next time I explicitly log in. You solve (1) by simply being logged into IRC all the time, but I don't know how to get (2) or (3) to work in IRC w/o setting up something like a reflector or something fancy (#3 can be covered by email, and maybe #2 with the right setup). 
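The "collect all the messages" mode (#3) mostly amounts to recognizing mentions in raw IRC traffic so they can be read later. A minimal, hypothetical sketch of just that piece (nicks, channel, and messages are invented; a real reflector would also hold the connection open):

```python
import re

def collect_mentions(raw_lines, nick):
    """Pick out PRIVMSG lines that mention `nick`, for reading later."""
    mentions = []
    for line in raw_lines:
        # IRC wire format: ":sender!user@host PRIVMSG #channel :message text"
        m = re.match(r':([^!]+)\S* PRIVMSG (\S+) :(.*)', line)
        if m and re.search(r'\b%s\b' % re.escape(nick), m.group(3)):
            mentions.append((m.group(1), m.group(2), m.group(3)))
    return mentions

lines = [
    ':alice!a@example PRIVMSG #pypa :brettcannon: ping about the wheel bug',
    ':bob!b@example PRIVMSG #pypa :unrelated chatter',
]
print(collect_mentions(lines, 'brettcannon'))
# -> [('alice', '#pypa', 'brettcannon: ping about the wheel bug')]
```

Feeding this from a bouncer or logging bot would give the single place to catch up from any device.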
> > Continuous logging is on for #pypa and #pypa-dev as I understand it. > Surely it's not "integrated" into your chat client, but it's not as if > the logging doesn't exist. > For me the constant connection allows for collecting mentions into a single notification area (#3 in the engagement level list I mentioned above). I personally have 4 devices I connect to the internet with and none of them short of my phone is on constantly. With some kind of constant connection I could then have all of the mentions I have not seen collected into a single place so I could address them no matter what device I choose to check in with. Otherwise I have to restrict myself to only using a device which has an IRC client that can reconcile where I left off and pick up on all the messages that mentioned me since I last logged in. And as for mobile access, that's just a matter of occasional convenience. I don't think that messaging all the time from my mobile phone is the best option, but it is one that I can do while e.g. sitting on the bus on the way to work so that I can spend my time at home doing other things potentially. > > > Both Gitter and Slack offer the experience I'm after, with Gitter > feeling like a better fit for open-source projects (or groups of them). > > I've tried using Gitter several times in the past. Unless they've > fixed their bugs related to sending me emails every day about activity > in a channel I spoke in once and left, I think they should be > eliminated. > > Slack has also had several outages lately that should also disqualify > it (besides the fact that it's incredibly closed source and will be > expensive to maintain logs in). > > > I've tried using IRCCloud, and it provides a similar, suitable > experience on the same IRC infrastructure, with one big difference. While > Gitter and Slack offer the above features for free, IRCCloud requires a > $5/user/month subscription (otherwise, connections are dropped after two > hours).
I did reach out to them to see if they could offer some > professional consideration for contributors, but I haven't heard from them. > Furthermore, IRCCloud requires an additional account on top of the account > required for Freenode. > > > > In addition to the critical features above, Gitter and Slack offer other > advantages: > > > > - For Gitter, single sign-on using the same Github account for > authentication and authorization means no extra accounts. Slack requires > one new account. > > IRC requires one new account. > > > - An elegant web-based interface as a first-class feature, a lower > barrier of entry for users. > > webchat.freenode.net may not be elegant, but it is first-class. > Does it track where you left off since you last logged in? I just tried it and it didn't look sophisticated enough to. > > > - Zero-install or config. > > Slack pesters you to install their desktop client and if you don't > want constant channel notifications you do have to configure it. > webchat.freenode.net offers no config. > > > - Integration with source code and other systems. > > Do you mean things like GitHub? GitHub already integrates with IRC. > What special kind of integration do you think Gitter and Slack > have that GitHub's IRC integration doesn't? > > > It's because of the limitations of these systems that I find myself > rarely in IRC, only joining when I have a specific issue, even though I'd > like to be permanently present. > > > > Donald has offered to run an IRC bouncer for me, but such a bouncer is > only a half-solution, not providing the push notifications, mobile apps > (IRC apps exist, but just get disconnected, and often fail to connect on > mobile provider networks), or integrated logging. > > > > I note that both Gitter and Slack offer IRC interfaces, so those users > who prefer their IRC workflow can continue to use that if they so choose.
> > They're very poor IRC interfaces, making people who want to use a > simple, free standard into second-class citizens (which is par for the > course as far as open source projects go). > > > I know there are other alternatives, like self-hosted solutions, but I'd > like to avoid adding the burden of administering such a system. If someone > wanted to take on that role, I'd be open to that alternative. > > > > I'd like to propose we move #pypa-dev to /pypa/dev and #pypa to > /pypa/support in gitter. > > > > Personally, the downsides to moving to Gitter (other than enacting the > move itself) seem negligible. What do you think? What downsides am I > missing? > > With IRC we can run our own logging solution. Gitter used to have a > similar model to Slack where you had to pay to access all of the logs. > Further, to allow anyone to use Slack, we have to set up and maintain > a separate webapp (which can be deployed to Heroku, but for the kind > of traffic we would expect, would actively cost us money to run > there). > I think the key point in that paragraph is "your own". At least in my case (and probably in Jason's), I don't want to run my own logging solution that I have to keep up and running. I too considered irccloud but also balked at having to spend $5/month in order to be more engaged with IRC. I'm afraid that for real-time chat there's no good solution. For random tech support, just popping into #pypa is good enough if you happen to think about it. But for real-time discussion w/ fellow developers when you need to hash something out, it will simply require scheduling ahead of time when to either be on IRC or use something like Hangouts like various other chats spawning from distutils-sig discussions have done (or use WebRTC solutions). Personally I would rather give Discourse a try. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From m.van.rees at zestsoftware.nl Sat Jun 11 19:48:40 2016 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Sun, 12 Jun 2016 01:48:40 +0200 Subject: [Distutils] Uploading to Warehouse In-Reply-To: <21875485-AC31-4C02-9440-B60F3E76C588@stufft.io> References: <21875485-AC31-4C02-9440-B60F3E76C588@stufft.io> Message-ID: On 05/06/16 at 15:42, Donald Stufft wrote: > Hey all, > > As anyone here who has uploaded anything to PyPI recently is aware of, PyPI's > uploading routine has been in a semi broken state for awhile now. It will > regularly raise a 500 error (but will often still record the release, but > sometimes only a partial release). > > Well, the good news is, that Warehouse's upload routines should generally be > good enough to use now. This will be more or less the same as if you uploaded > to PyPI itself (they share a backing data store) but hitting the newer, better > code base instead of the slowly decaying legacy code base. I had problems uploading to PyPI recently, getting an error although the upload seemed to have gone fine in reality. I did 8 uploads of Plone packages today using the warehouse url. All have gone fine. Thanks a lot, Donald! -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl From afe.young at gmail.com Sun Jun 12 10:59:38 2016 From: afe.young at gmail.com (Young Yang) Date: Sun, 12 Jun 2016 22:59:38 +0800 Subject: [Distutils] How to register my python package which relys on the output of other project on pypi In-Reply-To: References: Message-ID: Thanks for your advice :) Finally, I decided not to create a PyPI package containing only binaries built by another program (not a setup.py Extension). Because I think my binding-project is an extension for project A, I should assume the user has installed A successfully. Otherwise I will give some information to guide users to install project A.
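Such guidance can be as simple as a check at import time of the binding package. A hypothetical sketch (the library name and hint text are invented, not from project A):

```python
import ctypes.util

def require_native_lib(name, install_hint):
    """Fail fast, with install guidance, when a required shared library is absent."""
    if ctypes.util.find_library(name) is None:
        raise ImportError(
            "Could not find the %r shared library. %s" % (name, install_hint))

# e.g. at the top of the binding package's __init__.py:
# require_native_lib("projecta",
#                    "Please build and install project A first; see its README.")
```

This keeps the package itself platform-neutral while still pointing users at the missing prerequisite.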
On Tue, Jun 7, 2016 at 11:05 PM, Daniel Holth wrote: > My pysdl2-cffi project has a dependency ':sys_platform=="win32"': > ['sdl2_lib'] meaning 'depends on sdl2_lib only on Windows' (see its > setup.py). sdl2_lib is a specially made wheel that only contains DLL's for > Windows. On other platforms we don't try to install sdl2_lib, assuming you > have already installed SDL2 some other way. > > If I wanted to distribute the Linux so's on PyPI I could upload a second > wheel with the 'manylinux1' tag, and pip would choose the right one. > Distributing Linux binaries is more complicated than for Windows, see PEP > 513 and https://pypi.python.org/pypi/auditwheel > > > On Tue, Jun 7, 2016 at 10:32 AM Young Yang wrote: >> Hi, >> >> I'm writing a python-binding project for project A written in C++. >> Project A is on github. It supports compiling itself to produce .so in >> linux or .dll in windows. >> My python-binding project depends on the .dll and .so files. >> >> Now I want to register my package on pypi. So that users can install my >> package with only running `pip install XXXX`. >> >> I have to support both windows and linux. The only solution I can figure >> out is to include both .dll and .so in my package. This will end up with >> both .so and .dll installed on all platforms. It sounds dirty. >> >> Are there any better ways to achieve my goal? >> >> >> PS: The compile process of Project A is a little complicated, so I can't >> use python Extension to build my dynamic library. >> >> >> This question comes after this question >> https://mail.python.org/pipermail/distutils-sig/2016-June/029059.html >> >> Thanks in advance :) >> >> >> -- >> Best wishes, >> Young Yang >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > -- Best wishes, Young Yang -------------- next part -------------- An HTML attachment was scrubbed...
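The platform-conditional dependency Daniel describes lives in `extras_require` under a key that starts with a colon. A minimal sketch of how it would appear in a setup.py (the version number is invented; `setup()` is shown but deliberately not invoked here):

```python
# Conditional-dependency table as it would appear in setup.py; the key's
# environment marker makes pip pull in sdl2_lib only when installing on
# Windows -- elsewhere SDL2 is assumed to be installed some other way.
setup_kwargs = dict(
    name='pysdl2-cffi',
    version='0.1',
    extras_require={
        ':sys_platform=="win32"': ['sdl2_lib'],
    },
)

# In a real setup.py:
#   from setuptools import setup
#   setup(**setup_kwargs)
```

The same marker syntax works for any platform-specific wheel, not just DLL bundles.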
URL: From fungi at yuggoth.org Mon Jun 13 13:29:13 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 13 Jun 2016 17:29:13 +0000 Subject: [Distutils] Uploading to Warehouse In-Reply-To: References: <21875485-AC31-4C02-9440-B60F3E76C588@stufft.io> Message-ID: <20160613172912.GJ15295@yuggoth.org> On 2016-06-12 01:48:40 +0200 (+0200), Maurits van Rees wrote: [...] > I had problems uploading to PyPI recently, getting an error although the > upload seemed to have gone fine in reality. > I did 8 uploads of Plone packages today using the warehouse url. > All have gone fine. > > Thanks a lot, Donald! Agreed. I switched OpenStack's release upload automation to hit Warehouse a week ago. Before that we were frequently getting HTTP 500 errors uploading releases of our various projects directly to PyPI, and (aside from one day last week where a PyPI denial of service attack was impacting uploads) the transition has gone very smoothly (some 50+ releases so far). Thrilled to see Warehouse coming along! -- Jeremy Stanley From matthew.brett at gmail.com Mon Jun 13 14:51:07 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 13 Jun 2016 11:51:07 -0700 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: Hi, On Thu, May 26, 2016 at 11:47 AM, Donald Stufft wrote: > >> On May 26, 2016, at 2:41 PM, Matthew Brett wrote: >> >> On Thu, May 26, 2016 at 2:28 PM, Daniel Holth wrote: >>> Maybe there could be a way to say "the most recent release that has a wheel >>> for my platform". That would help with the problem of binaries not being >>> available concurrently with a new source distribution. >> >> Yes, that would certainly help get over some of the immediate problems.
>> >> Sorry for my ignorance - but does ``--only-binary`` search for an >> earlier release with a binary or just bomb out if the latest release >> does not have a binary? It would also be good to have a flag to say >> "if this is pure Python go ahead and use the source, otherwise error". >> Otherwise I guess we'd have to rely on everyone with a pure Python >> package generating wheels. > > I believe it would find the latest version that has a wheel available, > I could be misremembering though. Reflecting a bit more on this - how easy would it be to add a flag ``--prefer-binary`` that would have the effect of: * Installing a binary wheel for current release if available, otherwise; * Checking previous release for binary wheel, installing if present, otherwise; * Install from sdist I think that would help a great deal in reducing surprise in wheel installs while we get better at putting up wheels at the same time as sdists. Cheers, Matthew From pradyunsg at gmail.com Tue Jun 14 01:58:47 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Tue, 14 Jun 2016 05:58:47 +0000 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: (this is my first mail to the list, hopefully, this goes through) Hey Matthew, FYI, the --prefer-binary flag that you mention has come up in earlier discussions and already has an issue over at pip's github repo ( https://github.com/pypa/pip/issues/3785). Regards, Pradyun On Tue, 14 Jun 2016 at 00:21 Matthew Brett wrote: > Hi, > > On Thu, May 26, 2016 at 11:47 AM, Donald Stufft wrote: > > > >> On May 26, 2016, at 2:41 PM, Matthew Brett > wrote: > >> > >> On Thu, May 26, 2016 at 2:28 PM, Daniel Holth wrote: > >>> Maybe there could be a way to say "the most recent release that has a > wheel
That would help with the problem of binaries not > being >>> available concurrently with a new source distribution. > >> > >> Sorry for my ignorance - but does ``--only-binary`` search for an > >> earlier release with a binary or just bomb out if the latest release > >> does not have a binary? It would also be good to have a flag to say > >> "if this is pure Python go ahead and use the source, otherwise error". > >> Otherwise I guess we'd have to rely on everyone with a pure Python > >> package generating wheels. > > > > I believe it would find the latest version that has a wheel available, > > I could be misremembering though. > > Reflecting a bit more on this - how easy would it be to add a flag > ``--prefer-binary`` that would have the effect of: > > * Installing a binary wheel for current release if available, otherwise; > * Checking previous release for binary wheel, installing if present, > otherwise; > * Install from sdist > > I think that would help a great deal in reducing surprise in wheel > installs while we get better at putting up wheels at the same time as > sdists. > > Cheers, > > Matthew > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Wed Jun 15 05:07:21 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Jun 2016 11:07:21 +0200 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install Message-ID: Hi, Buzzword bingo in the subject... Situation: I'm experimenting with docker, mostly in combination with buildout. But it also applies to pip/virtualenv. I build a docker container with a Dockerfile: install some .deb packages, add the current directory as /code/, run buildout (or pip), ready. Works fine.
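A minimal sketch of the kind of Dockerfile described (the base image, .deb names, and build commands are assumptions, not Reinout's actual setup):

```dockerfile
FROM debian:jessie
# hypothetical .deb dependencies
RUN apt-get update && apt-get install -y python-pip python-dev libxml2-dev
# bake the project into the image
ADD . /code
WORKDIR /code
# either pip...
RUN pip install -r requirements.txt
# ...or buildout:
# RUN python bootstrap.py && bin/buildout
```

The `ADD . /code` layer is exactly what a development bind-mount of the current directory later shadows.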
Now local development: it is normal to mount the current directory as /code/, so that now is overlaid over the originally-added-to-the-docker /code/. This means that anything done inside /code/ is effectively discarded in development. So a "bin/buildout" run has to be done again, because the bin/, parts/, eggs/ etc directories are gone. Same problem with a virtualenv. *Not* though when you run pip directly and let it install packages globally! Those are installed outside of /code in /usr/local/somewhere. A comment and a question: - Comment: "everybody" uses virtualenv, but be aware that it is apparently normal *not* to use virtualenv when building dockers. - Question: buildout, like virtualenv+pip, installs everything in the current directory. Would an option to install it globally instead make sense? I don't know if it is possible. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From sylvain.corlay at gmail.com Wed Jun 15 05:30:19 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 15 Jun 2016 05:30:19 -0400 Subject: [Distutils] Removal of wheels deleting more than the data files Message-ID: I discovered a quite serious bug in wheels ( http://bugs.python.org/issue27317) When specifying an empty list for the list of data_files in a given directory, the entire directory is being deleted on uninstall of the wheel, even if it contained other resources from other packages. Example: from setuptools import setup > setup(name='remover', data_files=[('share/plugins', [])]) The expected behavior is that only the specified list of files is removed (which is empty in that case). When the list is not empty, the behavior is as expected.
For example, from setuptools import setup > setup(name='remover', data_files=[('share/plugins', ['foobar.json'])]) will only remove `foobar.json` on uninstall and the `plugins` directory will not be removed if it is not empty. Thanks, Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Wed Jun 15 05:31:01 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Wed, 15 Jun 2016 12:31:01 +0300 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID: On Wed, Jun 15, 2016 at 12:07 PM, Reinout van Rees wrote: > Now local development: it is normal to mount the current directory as > /code/, so that now is overlaid over the originally-added-to-the-docker > /code/. > > This means that anything done inside /code/ is effectively discarded in > development. So a "bin/buildout" run has to be done again, because the > bin/, parts/, eggs/ etc directories are gone. > Think of it like this: if you don't mount ./ as /code in the container then you practically have to rebuild the image all the time. This is fine for toy projects, but bigger ones have lots of files (eg: templates, images, css, tons of js 1-line modules etc). That means you get a big context that is slow to send (exacerbated in situations that have remote docker daemons like osx/windows docker clients). Another compound problem: if you rebuild the image for every code change the new context will invalidate the docker image cache - everything will be slow, all the time. Ideally you'd have a stack of images, like a base image that takes big context (with code/assets) + smaller images that build from that but have very small contexts. I wrote some ideas about that here if you have patience (it's a bit of a long read).
Basically what I'm saying is that you've got no choice but to mount stuff in a development scenario :-) Regarding the buildout problem, I suspect a src-layout can solve the problem: - you only mount ./src inside container - you can happily run buildout inside the container (assuming it doesn't touch any of the code in src) I suspect you don't need to run buildout for every code change so it's fine if you stick that into the image building. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimfulton.info Wed Jun 15 07:53:59 2016 From: jim at jimfulton.info (Jim Fulton) Date: Wed, 15 Jun 2016 07:53:59 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID: On Wed, Jun 15, 2016 at 5:07 AM, Reinout van Rees wrote: > Hi, > > Buzzword bingo in the subject... > > Situation: I'm experimenting with docker, mostly in combination with > buildout. But it also applies to pip/virtualenv. > > I build a docker container with a Dockerfile: install some .deb packages, > add the current directory as /code/, run buildout (or pip), ready. Works > fine. > > Now local development: it is normal to mount the current directory as > /code/, so that now is overlaid over the originally-added-to-the-docker > /code/. > > > > This means that anything done inside /code/ is effectively discarded in > development. So a "bin/buildout" run has to be done again, because the bin/, > parts/, eggs/ etc directories are gone. > > Same problem with a virtualenv. *Not* though when you run pip directly and > let it install packages globally! Those are installed outside of /code in > /usr/local/somewhere. > > > > A comment and a question: > > - Comment: "everybody" uses virtualenv, but be aware that it is apparently > normal *not* to use virtualenv when building dockers.
>
> - Question: buildout, like virtualenv+pip, installs everything in the
> current directory. Would an option to install it globally instead make
> sense? I don't know if it is possible.

So, a number of comments. I'm not going to address your points directly.

1. Creating production docker images works best when all you're doing is installing a bunch of binary artifacts (e.g. .debs, eggs, wheels). If you actually build programs as part of image building, then your image contains build tools, leading to image bloat and potentially security problems, as the development tools provide a greater attack surface. You can uninstall the development tools after building, but that does nothing to reduce the bloat -- it just increases it, and increases build time.

2. #1 is a shame, as it dilutes the simplicity of using docker. :(

3. IMO, you're better off separating development and image-building workflows, with development feeding image building. Unfortunately, AFAIK, there aren't standard docker workflows for this, although I haven't been paying close attention to this lately. There are standard docker images for building system packages (debs, rpms), but, for myself, part of the benefit of docker is not having to build system packages anymore. (Building system packages, especially applications, for use in docker could be a lot easier than normal, as you could be a lot sloppier about the ways that packages are built, thus simpler packaging files...)

4. I've tried to come up with good buildout-based workflows for building docker images in the past. One approach was to build in a dev environment and then, when building the production images, use eggs built in development, or built on a CI server. At the time my attempts were stymied by the fact that setuptools prefers source distributions over eggs, so if it saw the download cache, it would try to build from source.
Looking back, I'm not sure why this wasn't surmountable -- probably just lack of priority and attention at the time.

5. As far as setting up a build environment goes, just mounting a host volume is simple, works well, and even works with docker-machine. It's a little icky to have linux build artifacts in my Mac directories, but it's a minor annoyance. (I've also created images with ssh servers that I could log into and do development in. Emacs makes this easy for me and provides all the development capabilities I normally use, so this setup works well for me, but probably doesn't work for non-emacs users. :) )

Jim

-- Jim Fulton http://jimfulton.info

From donald at stufft.io Wed Jun 15 07:57:38 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 15 Jun 2016 07:57:38 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

> On Jun 15, 2016, at 7:53 AM, Jim Fulton wrote:
>
> If you actually build programs as part of image building, then your
> image contains build tools, leading to image bloat and potentially
> security problems as the development tools provide a greater attack
> surface.

This isn't strictly true: the layering in Docker works on a per-RUN-command basis, so if you compose a single command that installs the build tools, builds the thing, installs the thing, and uninstalls the build tools (and cleans up any cache), then that's roughly equivalent to installing a single binary (except, of course, in the time it takes).
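[Editor's note: the single-RUN approach described above might look roughly like this; the base image and package names are illustrative only, not from the thread.]

```dockerfile
# One RUN layer: install build tools, build/install, then remove the
# tools, all in a single step so none of it persists in the final image.
FROM debian:jessie
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        build-essential python-dev python-pip \
 && pip install some-c-extension \
 && apt-get purge -y build-essential python-dev \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*
```

Splitting the same commands across several RUN lines would instead bake the build tools into an intermediate layer, and purging them later could not shrink the image.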
Donald Stufft

From jim at jimfulton.info Wed Jun 15 08:12:43 2016 From: jim at jimfulton.info (Jim Fulton) Date: Wed, 15 Jun 2016 08:12:43 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On Wed, Jun 15, 2016 at 7:57 AM, Donald Stufft wrote:
>
>> On Jun 15, 2016, at 7:53 AM, Jim Fulton wrote:
>>
>> If you actually build programs as part of image building, then your
>> image contains build tools, leading to image bloat and potentially
>> security problems as the development tools provide a greater attack
>> surface.
>
> This isn't strictly true, the layering in Docker works on a per RUN
> command basis, so if you compose a single command that installs the build
> tools, builds the thing, installs the thing, and uninstalls the build tools
> (and cleans up any cache), then that's roughly equivalent to installing a
> single binary (except of course, in the time it takes).

OK, fair enough. People would typically start from an image that had the build tools installed already. But as you point out, you could have a single step that installed the build tools, built, and then uninstalled the build tools. You'd avoid the bloat, but have extremely long build times.

Jim

-- Jim Fulton http://jimfulton.info

From njs at pobox.com Wed Jun 15 13:38:37 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 15 Jun 2016 10:38:37 -0700 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

For a lot of good general information on these subjects, I recommend Glyph's talk at pycon this year: https://www.youtube.com/watch?v=5BqAeN-F9Qs

One point that's discussed is why you definitely should use virtualenv inside your containers :-)

-n

On Jun 15, 2016 2:07 AM, "Reinout van Rees" wrote:
> Hi,
>
> Buzzword bingo in the subject...
>
> Situation: I'm experimenting with docker, mostly in combination with
> buildout. But it also applies to pip/virtualenv.
>
> I build a docker container with a Dockerfile: install some .deb packages,
> add the current directory as /code/, run buildout (or pip), ready. Works
> fine.
>
> Now local development: it is normal to mount the current directory as
> /code/, so that now is overlayed over the originally-added-to-the-docker
> /code/.
>
> This means that anything done inside /code/ is effectively discarded in
> development. So a "bin/buildout" run has to be done again, because the
> bin/, parts/, eggs/ etc directories are gone.
>
> Same problem with a virtualenv. *Not* though when you run pip directly and
> let it install packages globally! Those are installed outside of /code in
> /usr/local/somewhere.
>
> A comment and a question:
>
> - Comment: "everybody" uses virtualenv, but be aware that it is apparently
> normal *not* to use virtualenv when building dockers.
>
> - Question: buildout, like virtualenv+pip, installs everything in the
> current directory. Would an option to install it globally instead make
> sense? I don't know if it is possible.
>
> Reinout
>
> --
> Reinout van Rees http://reinout.vanrees.org/
> reinout at vanrees.org http://www.nelen-schuurmans.nl/
> "Learning history by destroying artifacts is a time-honored atrocity"
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From chris.jerdonek at gmail.com Wed Jun 15 15:29:49 2016 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Wed, 15 Jun 2016 12:29:49 -0700 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On Wed, Jun 15, 2016 at 2:07 AM, Reinout van Rees wrote:
> Situation: I'm experimenting with docker, mostly in combination with
> buildout. But it also applies to pip/virtualenv.
> ...
> Now local development: it is normal to mount the current directory as
> /code/, so that now is overlayed over the originally-added-to-the-docker
> /code/.
>
> This means that anything done inside /code/ is effectively discarded in
> development. So a "bin/buildout" run has to be done again, because the
> bin/, parts/, eggs/ etc directories are gone.

In my experience, the only point of friction is the code being worked on, which I pip-install in a virtualenv inside the container in "editable" mode. This lets code changes on your local machine be reflected immediately in the applications running inside the container. I asked for advice on this scenario here: https://mail.python.org/pipermail/distutils-sig/2016-March/028478.html

If your virtualenv is outside of "/code/", then your third-party dependencies shouldn't need to be reinstalled. Only the "editable" installs should need to be reinstalled after mounting your directory, which is the scenario I asked about above.

--Chris

From reinout at vanrees.org Wed Jun 15 17:28:36 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Jun 2016 23:28:36 +0200 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On 15-06-16 at 11:31, Ionel Cristian Mărieș via Distutils-SIG wrote:
> ... I wrote some ideas about that here
> if you have patience (it's a bit of a long read).

I definitely have the patience for such a long read. Reading it definitely also made sure I won't ever see Docker as a potential magic bullet :-)

> Basically what I'm saying is that you've got no choice but to mount stuff
> in a development scenario :-)
>
> Regarding the buildout problem, I suspect a src-layout can solve the
> problem:
>
> * you only mount ./src inside container
> * you can happily run buildout inside the container (assuming it doesn't
> touch any of the code in src)

src/* for the mr.developer checkouts and my_package_dir/ for the actual code.
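[Editor's note: the mount-only-the-source layout discussed here could be expressed in a docker-compose file along these lines; the service name and paths are hypothetical.]

```yaml
# Only the source tree is mounted from the host; buildout's bin/,
# parts/ and eggs/ directories live inside the image and are not
# shadowed by the mount.
version: '2'
services:
  app:
    build: .
    volumes:
      - ./src:/code/src
```

With this layout, editing files under ./src is reflected in the running container, while a bin/buildout run done at image-build time survives.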
And take care when you're downloading/developing javascript stuff, too, as that might also be in another directory. Ok... I'll have to watch out. Good to have a potential working solution, though.

Reinout

-- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity"

From reinout at vanrees.org Wed Jun 15 17:39:00 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 15 Jun 2016 23:39:00 +0200 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On 15-06-16 at 13:53, Jim Fulton wrote:
> 1. Creating production docker images works best when all you're doing
> is installing a bunch of binary artifacts (e.g. .debs, eggs, wheels).

That's where pip and its "everything is in requirements.txt anyway" approach has an advantage. The buildouts I use always have most/all dependencies mentioned in the setup.py, with version pins in the buildout config. So I need the setup.py, and so I need my full source code, as the setup.py won't work otherwise.

I *do* like to specify all "install_requires" items in my setup.py, so this is something I need to think about.

> 3. IMO, you're better off separating development and image-building
> workflows, with development feeding image building. Unfortunately,
> AFAIK, there aren't standard docker workflows for this, although I
> haven't been paying close attention to this lately.

docker-compose helps a little bit here in that you can "compose" your docker into several scenarios. It *does* mean you have to watch carefully what you're doing, as one of the goals of docker, as far as I see it, is to have a large measure of "production/development parity" as per the "12 factor app" manifesto (http://12factor.net/)

> 5. As far as setting up a build environment goes, just mounting a host
> volume is simple and works well and even works with docker-machine.
> It's a little icky to have linux build artifacts in my Mac
> directories, but it's a minor annoyance. (I've also created images
> with ssh servers that I could log into and do development in. Emacs
> makes this easy for me and provides all the development capabilities I
> normally use, so this setup works well for me, but probably doesn't
> work for non-emacs users. :) )

I, like any deranged (and effective) human being, have memorized lots of emacs shortcuts and will use it till the day I die. Sadly I have colleagues that are a little bit more sane :-)

Reinout

-- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity"

From reinout at vanrees.org Wed Jun 15 18:52:03 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Thu, 16 Jun 2016 00:52:03 +0200 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On 15-06-16 at 19:38, Nathaniel Smith wrote:
> For a lot of good general information on these subjects, I recommend
> Glyph's talk at pycon this year:
> https://www.youtube.com/watch?v=5BqAeN-F9Qs

That's the most valuable youtube video I've seen this year!

> One point that's discussed is why you definitely should use virtualenv
> inside your containers :-)

Seems like a very valid point. It's a very valid point that also seems to point solidly at wheels. Because if you use a virtualenv, you don't have access to system python packages. And that's one of the main reasons why the company I work for (still) uses buildout. Buildout has the "syseggrecipe" package that makes it easy to use specific system packages. So.... numpy, psycopg2, matplotlib, mapnik, scipy, hard-to-install packages like that.

So once you use a virtualenv, you're either in a hell because you're compiling all those hard-to-compile packages from scratch, or you're using binary wheels.
Once you've chucked everything into a wheel, there's suddenly a bit less of a necessity for docker. At least, that might be the case. On the other hand, I could imagine having a base 16.04 docker that we build to instead of a manylinux1 machine that we build to.

Aaargh, that Glyph talk gave me a *lot* of food for thought. And that's a good thing.

Reinout

-- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity"

From donald at stufft.io Wed Jun 15 19:10:49 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 15 Jun 2016 19:10:49 -0400 Subject: [Distutils] Notice: PyPI APIs now return 403 when accessed via HTTP Message-ID: <8A627727-627F-4B80-B628-3F1CE69B1491@stufft.io>

As part of an ongoing effort to improve the security of PyPI, instead of redirecting (or silently allowing) requests made over HTTP to PyPI APIs, these APIs will now return a 403 and require people to make the initial request over HTTPS. This does not affect the UI portions of the site that are designed to be used by humans; for these we will still redirect (which will cause the browser to see the HSTS header and force the user to use HTTPS from then on out).

Thanks!

-- Donald Stufft

From njs at pobox.com Wed Jun 15 19:19:08 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 15 Jun 2016 16:19:08 -0700 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On Wed, Jun 15, 2016 at 3:52 PM, Reinout van Rees wrote: [...]
> It is a very valid point that seems to point solidly at wheels. As, if you
> use a virtualenv, you don't have access to system python packages. And
> that's one of the main reasons why the company I work for (still) uses
> buildout. Buildout has the "syseggrecipe" package that makes it easy to use
> specific system packages. So....
> numpy, psycopg2, matplotlib, mapnik, scipy,
> hard-to-install packages like that.
>
> So once you use a virtualenv, you're either in a hell because you're
> compiling all those hard-to-compile packages from scratch, or you're using
> binary wheels.

Of your list above, I know numpy and scipy now ship manylinux1 wheels, so you don't need to compile them. matplotlib has had manylinux1 wheels posted to their mailing list for testing for a few weeks now -- I suspect they'll upload them for real any day now. Maybe you should nag the psycopg2 and mapnik developers to get with the program (or offer to help) ;-).

> Once you've chucked everything into a wheel, there's suddenly a bit less of
> a necessity for docker. At least, that might be the case.
>
> On the other hand, I could imagine having a base 16.04 docker that we build
> to instead of a manylinux1 machine that we build to.

Using a manylinux1 image for building is necessary if you want to distribute wheels on PyPI, but if you're just building wheels to deploy to your own controlled environment then it's unnecessary. You can install manylinux1 wheels into a 16.04 docker together with your home-built 16.04 wheels.

-n

-- Nathaniel J. Smith -- https://vorpus.org

From contact at ionelmc.ro Wed Jun 15 19:54:25 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 16 Jun 2016 02:54:25 +0300 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On Thu, Jun 16, 2016 at 1:52 AM, Reinout van Rees wrote:
> Aaargh, that Glyph talk gave me a *lot* of food for thought. And that's a
> good thing.

His main argument is that not using a virtualenv can lead to version conflicts. E.g. an app you installed with apt will probably break if it depends on packages installed with pip. Not to mention the metadata and uninstall mess that comes with that.

A sound argument. However, none of that applies to a Docker image.
You'd have a single set of dependencies, so what's the point of using unnecessary tools and abstractions? :-)

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From donald at stufft.io Wed Jun 15 19:57:07 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 15 Jun 2016 19:57:07 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io>

> On Jun 15, 2016, at 7:54 PM, Ionel Cristian Mărieș via Distutils-SIG wrote:
>
> A sound argument. However, none of that applies to a Docker image. You'd
> have a single set of dependencies, so what's the point of using unnecessary
> tools and abstractions? :-)

Of course it still applies to Docker. You still have an operating system inside that container, and unless you install zero Python-using packages from the system, all of that can still conflict with your own application's dependencies.

-- Donald Stufft
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From contact at ionelmc.ro Wed Jun 15 20:10:16 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 16 Jun 2016 03:10:16 +0300 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io> References: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io> Message-ID:

On Thu, Jun 16, 2016 at 2:57 AM, Donald Stufft wrote:
> Of course it still applies to Docker. You still have an operating system
> inside that container and unless you install zero Python using packages
> from the system then all of that can still conflict with your own
> application's dependencies.

You're correct, theoretically. But in reality it is best not to stick a dozen services or apps in a single docker image.
What's the point of using docker if you have a single container with everything in it? E.g. something smells if you need to run a supervisor inside the container.

If we're talking about python packages managed by the os package manager ... there are very few situations where it makes sense to use them (e.g. pysvn is especially hard to build), but other than that? Plus it's easier to cache some wheels than to cache some apt packages when building images, way easier.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From donald at stufft.io Wed Jun 15 20:29:52 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 15 Jun 2016 20:29:52 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io> Message-ID: <592A51CA-F782-46B2-8B1B-A49BCA690608@stufft.io>

> On Jun 15, 2016, at 8:10 PM, Ionel Cristian Mărieș wrote:
>
> You're correct, theoretically. But in reality it is best not to stick a
> dozen services or apps in a single docker image.

It's not really about services as it is about things that you might need *for the service*. For instance, maybe you want node installed so you can build some static files using node.js tooling, but node requires gyp, which requires Python. Now you install a new version of setuptools that breaks the OS-installed gyp, and suddenly you can't build your static files anymore.

A virtual environment costs very little and protects you from these things. Of course, if you're not using the system Python (for instance, the official Python images install their own Python into /usr/local), then there's no reason to use a virtual environment because you're already isolated from the system.

-- Donald Stufft
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From njs at pobox.com Wed Jun 15 20:33:01 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 15 Jun 2016 17:33:01 -0700 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io> Message-ID:

On Wed, Jun 15, 2016 at 5:10 PM, Ionel Cristian Mărieș wrote:
>
> On Thu, Jun 16, 2016 at 2:57 AM, Donald Stufft wrote:
>>
>> Of course it still applies to Docker. You still have an operating system
>> inside that container and unless you install zero Python using packages from
>> the system then all of that can still conflict with your own application's
>> dependencies.
>
> You're correct, theoretically. But in reality it is best not to stick a dozen
> services or apps in a single docker image. What's the point of using docker
> if you have a single container with everything in it? E.g. something
> smells if you need to run a supervisor inside the container.
>
> If we're talking about python packages managed by the os package manager ...
> there are very few situations where it makes sense to use them (e.g. pysvn is
> especially hard to build), but other than that? Plus it's easier to cache
> some wheels than to cache some apt packages when building images, way
> easier.

The problem is that the bits of the OS that you use inside the container might themselves be written in Python. Probably the most obvious example is that on Fedora, if you want to install a system package, then the apt equivalent (dnf) is written in Python, so sudo pip install could break your package manager. Debian-derived distros also use Python for core system stuff, and might use it more in the future. So sure, you might get away with this, depending on the details of your container and your base image and if you religiously follow other best-practices for containerization and ... -- but why risk it?
Using a virtualenv is cheap and means you don't have to know or care about these potential problems, so it's what we should recommend as best practice.

-n

-- Nathaniel J. Smith -- https://vorpus.org

From contact at ionelmc.ro Wed Jun 15 20:52:25 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Thu, 16 Jun 2016 03:52:25 +0300 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: <592A51CA-F782-46B2-8B1B-A49BCA690608@stufft.io> References: <93773A48-75E4-4829-BCD4-5ED6CFD399D3@stufft.io> <592A51CA-F782-46B2-8B1B-A49BCA690608@stufft.io> Message-ID:

On Thu, Jun 16, 2016 at 3:29 AM, Donald Stufft wrote:
> Now you install a new version of setuptools that breaks the OS installed
> gyp and suddenly now you can't build your static files anymore.

gyp and node-gyp don't depend on python-setuptools, at least not on Ubuntu. Are you referring to some concrete pkg_resources issue, assuming gyp uses the entry-points api, and that api changed in some setuptools version?

On Thu, Jun 16, 2016 at 3:33 AM, Nathaniel Smith wrote:
> Using a virtualenv is cheap

Virtualenvs broke down on me and had mindboggling bugs one too many times for me to consider them "cheap" to use. They make things weird (ever tried to load debug symbols for a python from a virtualenv in gdb?). I'm not saying they aren't useful.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jim at jimfulton.info Thu Jun 16 08:01:12 2016 From: jim at jimfulton.info (Jim Fulton) Date: Thu, 16 Jun 2016 08:01:12 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On Wed, Jun 15, 2016 at 5:39 PM, Reinout van Rees wrote:
> On 15-06-16 at 13:53, Jim Fulton wrote:
>>
>> 1.
Creating production docker images works best when all you're doing
>> is installing a bunch of binary artifacts (e.g. .debs, eggs, wheels).
>
> That's where pip and its "everything is in requirements.txt anyway" has an
> advantage. The buildouts I use always have most/all dependencies mentioned
> in the setup.py with version pins in the buildout config. So I need the
> setup.py and so I need my full source code as the setup.py won't work
> otherwise.

No. If you had a built version of your project, the requirement information in the setup.py would still be available and used by buildout. How you specify requirements is beside the point. Whether you use buildout or pip, you need built versions of your non-pure-Python dependencies available and used.

> I *do* like to specify all "install_requires" items in my setup.py, so this
> is something I need to think about.

I don't see how this is a problem. If your project is pure python, you don't need build tools to build it, so building it as part of building a docker image isn't a problem. If it isn't pure Python, then you'll need to have a built version available, and the meta-data in that built version will provide the necessary requirements information.

I'm not trying here to plug buildout vs pip. I'm just saying that the pip vs buildout (or both) decision is irrelevant to building a docker image or docker workflow.

>> 3. IMO, you're better off separating development and image-building
>> workflows, with development feeding image building. Unfortunately,
>> AFAIK, there aren't standard docker workflows for this, although I
>> haven't been paying close attention to this lately.
>
> docker-compose helps a little bit here in that you can "compose" your docker
> into several scenarios.
It *does* mean you have to watch carefully what
> you're doing, as one of the goals of docker, as far as I see it, is to have
> a large measure of "production/development parity" as per the "12 factor
> app" manifesto (http://12factor.net/)

I think this only helps to the extent that it encourages you to decompose applications into lots of separately deployed bits.

I'm a fan of docker, but it seems to me that build workflow is an unsolved problem if you need build tools that you don't want to be included at run time.

Jim

-- Jim Fulton http://jimfulton.info

From taochengwei at emar.com Thu Jun 16 03:48:17 2016 From: taochengwei at emar.com (taochengwei at emar.com) Date: Thu, 16 Jun 2016 15:48:17 +0800 Subject: [Distutils] Fw: Re: [Webmaster] Upload the package to the Pypi problem Message-ID: <2016061615481729559111@emar.com>

From: Tim Golden Date: 2016-06-16 15:26 To: taochengwei at emar.com; webmaster Subject: Re: [Webmaster] Upload the package to the Pypi problem

Please repost this to the distutils-sig mailing list: https://mail.python.org/mailman/listinfo/distutils-sig

TJG

On 16/06/2016 07:23, taochengwei at emar.com wrote:
> I run `python setup.py register sdist bdist_egg upload`,
> """
> running upload
> Submitting dist/SpliceURL-0.1.tar.gz to http://pypi.python.org/pypi
> Upload failed (403): Must access using HTTPS instead of HTTP
> Submitting dist/SpliceURL-0.1-py2.7.egg to http://pypi.python.org/pypi
> Upload failed (403): Must access using HTTPS instead of HTTP
> """
> My system:
> python 2.6.6
> setuptools (23.0.0)
>
> My .pypirc
> """
> [distutils]
> index-servers =
>     pypi
>
> [pypi]
> repository=https://pypi.python.org/pypi
> username=saintic
> password=zzz
> """
> I changed to a CentOS 7 system; it is the same (no pypirc, python2.7).
>
> ------------------------------------------------------------------------
> _______________________________________________
> Webmaster mailing list
> Webmaster at python.org
> https://mail.python.org/mailman/listinfo/webmaster
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From donald at stufft.io Thu Jun 16 08:36:35 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 16 Jun 2016 08:36:35 -0400 Subject: [Distutils] Fw: Re: [Webmaster] Upload the package to the Pypi problem In-Reply-To: <2016061615481729559111@emar.com> References: <2016061615481729559111@emar.com> Message-ID:

Heya, I'm still writing up the documentation for this, but you should use Twine (https://pypi.python.org/pypi/twine) to upload instead of ``setup.py upload``.

> On Jun 16, 2016, at 3:48 AM, taochengwei at emar.com wrote:
>
> From: Tim Golden
> Date: 2016-06-16 15:26
> To: taochengwei at emar.com; webmaster
> Subject: Re: [Webmaster] Upload the package to the Pypi problem
> Please repost this to the distutils-sig mailing list:
>
> https://mail.python.org/mailman/listinfo/distutils-sig
>
> TJG
>
> On 16/06/2016 07:23, taochengwei at emar.com wrote:
> > I run `python setup.py register sdist bdist_egg upload`,
> > """
> > running upload
> > Submitting dist/SpliceURL-0.1.tar.gz to http://pypi.python.org/pypi
> > Upload failed (403): Must access using HTTPS instead of HTTP
> > Submitting dist/SpliceURL-0.1-py2.7.egg to http://pypi.python.org/pypi
> > Upload failed (403): Must access using HTTPS instead of HTTP
> > """
> > My system:
> > python 2.6.6
> > setuptools (23.0.0)
> >
> > My .pypirc
> > """
> > [distutils]
> > index-servers =
> >     pypi
> >
> > [pypi]
> > repository=https://pypi.python.org/pypi
> > username=saintic
> > password=zzz
> > """
> > I changed to a CentOS 7 system; it is the same (no pypirc, python2.7).
> >
> > ------------------------------------------------------------------------
> > _______________________________________________
> > Webmaster mailing list
> > Webmaster at python.org
> > https://mail.python.org/mailman/listinfo/webmaster
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-- Donald Stufft
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ncoghlan at gmail.com Thu Jun 16 20:47:58 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 16 Jun 2016 17:47:58 -0700 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID:

On 16 June 2016 at 05:01, Jim Fulton wrote:
> I'm a fan of docker, but it seems to me that build workflow is an
> unsolved problem if you need build tools that you don't want to be
> included at run time.

For OpenShift's source-to-image (which combines builder images with a git repo to create your deployment image [1]), they tackled that problem by setting up a multi-step build pipeline where the first builder image created the binary artifact (they use a WAR file in their example, but it can be an arbitrary tarball, since you're not doing anything with it except handing it to the next stage of the pipeline), and then a second builder image that provides the required runtime environment and also unpacks the tarball into the right place.

For pure Python (Ruby, JavaScript, etc) projects you'll often be able to get away without the separate builder image, but I suspect that kind of multi-step model would be a better fit for the way buildout works.

Cheers, Nick.
[1] https://github.com/openshift/source-to-image -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jim at jimfulton.info Fri Jun 17 07:29:27 2016 From: jim at jimfulton.info (Jim Fulton) Date: Fri, 17 Jun 2016 07:29:27 -0400 Subject: [Distutils] Docker, development, buildout, virtualenv, local/global install In-Reply-To: References: Message-ID: On Thu, Jun 16, 2016 at 8:47 PM, Nick Coghlan wrote: > On 16 June 2016 at 05:01, Jim Fulton wrote: >> I'm a fan of docker, but it seems to me that build workflow is an >> unsolved problem if you need build tools that you don't want to be >> included at run time. > > For OpenShift's source-to-image (which combines builder images with a > git repo to create your deployment image [1]), they tackled that > problem by setting up a multi-step build pipeline where the first > builder image created the binary artifact (they use a WAR file in > their example, but it can be an arbitrary tarball since you're not > doing anything with it except handing it to the next stage of the > pipeline), and then a second builder image that provides the required > runtime environment and also unpacks the tarball into the right place. > > For pure Python (Ruby, JavaScript, etc) projects you'll often be able > to get away without the separate builder image, but I suspect that > kind of multi-step model would be a better fit for the way buildout > works. Or pip. I don't think this is really about buildout or pip. I agree that a multi-step process is needed if builds are required. I suggested that in my original response. When I said this was an unsolved problem, I didn't mean that there weren't solutions, but that there didn't seem to be widely accepted workflows for this within the docker community (or at least docs). The common assumption is that images are defined by a single build file, to the point that images are often "published" as git repositories with build files, rather than through a docker repository. 
Anyway, getting back to Reinout's original question, I don't think docker build should be used for development. Build should assemble (again, using buildout, pip, or whatever) existing resources without actually building anything. (Of course, docker can be used for development, and docker build could and probably should be used to create build environments.) Jim -- Jim Fulton http://jimfulton.info From sylvain.corlay at gmail.com Mon Jun 20 12:56:06 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Mon, 20 Jun 2016 18:56:06 +0200 Subject: [Distutils] Removal of wheels deleting more than the data files In-Reply-To: References: Message-ID: FYI, this could probably be a security issue with wheel: a wheel package that has an empty list of data files in any important subdirectory of sys.prefix can delete all the content of that directory upon uninstall or update. Thanks, Sylvain On Wed, Jun 15, 2016 at 11:30 AM, Sylvain Corlay wrote: > I discovered a quite serious bug in wheels ( > http://bugs.python.org/issue27317) > > When specifying an empty list for the list of data_files in a given > directory, the entire directory is being deleted on uninstall of the wheel, > even if it contained other resources from other packages. > > Example: > > from setuptools import setup >> setup(name='remover', data_files=[('share/plugins', [])]) > > > The expected behavior is that only the specified list of files is removed, > (which is empty in that case). > > When the list is not empty, the behavior is the one expected. For example, > > from setuptools import setup >> setup(name='remover', data_files=[('share/plugins', ['foobar.json'])]) > > > will only remove `foobar.json` on uninstall and the `plugins` directory > will not be removed if it is not empty. > > Thanks, > > Sylvain > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Mon Jun 20 13:47:21 2016 From: dholth at gmail.com (Daniel Holth) Date: Mon, 20 Jun 2016 17:47:21 +0000 Subject: [Distutils] Removal of wheels deleting more than the data files In-Reply-To: References: Message-ID: It looks like this is a pip and setuptools bug. I was only able to reproduce by running "pip install ." in the package directory, in which case 'remover-0.0.0-py2.7.egg-info/installed-files.txt' contains ../../share/plugins dependency_links.txt PKG-INFO SOURCES.txt top_level.txt Installing in this way pip has invoked 'setup.py install' for us. Uninstall will remove share/plugins and its contents but not share. However running 'setup.py bdist_wheel' and then installing said wheel leaves no record of '../share/plugins' in 'remover-0.0.0.dist-info/RECORD'. On Mon, Jun 20, 2016 at 12:56 PM Sylvain Corlay wrote: > FYI, this could probably be a security issue with wheel: a wheel package > that has an empty list of data files in any important subdirectory of > sys.prefix can delete all the content of that directory upon uninstall or > update. > > Thanks, > > Sylvain > > On Wed, Jun 15, 2016 at 11:30 AM, Sylvain Corlay > wrote: > >> I discovered a quite serious bug in wheels ( >> http://bugs.python.org/issue27317) >> >> When specifying an empty list for the list of data_files in a given >> directory, the entire directory is being deleted on uninstall of the wheel, >> even if it contained other resources from other pacakges. >> >> Example: >> >> from setuptools import setup >>> setup(name='remover', data_files=[('share/plugins', [])]) >> >> >> The expected behavior is that only the specified list of files is >> removed, (which is empty in that case). >> >> When the list is not empty, the behavior is the one expected. 
For example, >> >> from setuptools import setup >>> setup(name='remover', data_files=[('share/plugins', ['foobar.json'])]) >> >> >> will only remove `foobar.json` on uninstall and the `plugins` directory >> will not be removed if it is not empty. >> >> Thanks, >> >> Sylvain >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vkruglikov at numenta.com Tue Jun 21 21:55:55 2016 From: vkruglikov at numenta.com (Vitaly Kruglikov) Date: Wed, 22 Jun 2016 01:55:55 +0000 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files Message-ID: There have been no updates in over a year. Has this effort died, or transitioned to another medium? Thx -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Jun 22 11:51:46 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 22 Jun 2016 08:51:46 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: References: Message-ID: I believe the status is that there's general consensus that something like this would be useful, but there's no one who is currently actively working on it. On Jun 22, 2016 5:53 AM, "Vitaly Kruglikov" wrote: There have been no updates in over a year. Has this effort died, or transitioned to another medium? Thx _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Wed Jun 22 12:28:34 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 22 Jun 2016 09:28:34 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: References: Message-ID: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> Manylinux has mostly replaced it as that covers the platforms 99% of people worry about. The tooling for manylinux is more complex than this would have been, but sunk cost etc etc and now that we have it might as well save everyone some headache. --Noah > On Jun 22, 2016, at 8:51 AM, Nathaniel Smith wrote: > > I believe the status is that there's general consensus that something like this would be useful, but there's no one who is currently actively working in it. > > On Jun 22, 2016 5:53 AM, "Vitaly Kruglikov" wrote: > There have been no updates in over a year. Has this effort died, or transitioned to another medium? Thx > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Wed Jun 22 15:21:11 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 22 Jun 2016 12:21:11 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> Message-ID: There are still use cases for distro-specific wheels, though -- some examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine Linux wheels (manylinux1 is glibc only), internal deploys that want to build on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of generating manylinux-style binaries but would like a more meaningful platform tag than "linux", and for everyone who wants to extend wheel metadata to allow dependencies on external distro packages then having distro-specific wheels is probably a necessary first step. -n On Jun 22, 2016 09:49, "Noah Kantrowitz" wrote: Manylinux has mostly replaced it as that covers the platforms 99% of people worry about. The tooling for manylinux is more complex than this would have been, but sunk cost etc etc and now that we have it might as well save everyone some headache. --Noah > On Jun 22, 2016, at 8:51 AM, Nathaniel Smith wrote: > > I believe the status is that there's general consensus that something like this would be useful, but there's no one who is currently actively working in it. > > On Jun 22, 2016 5:53 AM, "Vitaly Kruglikov" wrote: > There have been no updates in over a year. Has this effort died, or transitioned to another medium? 
Thx > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Wed Jun 22 17:38:35 2016 From: glyph at twistedmatrix.com (Glyph) Date: Wed, 22 Jun 2016 14:38:35 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> Message-ID: <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> > On Jun 22, 2016, at 12:21, Nathaniel Smith wrote: > There are still use cases for distro-specific wheels, though -- some examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine Linux wheels (manylinux1 is glibc only), internal deploys that want to build on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of generating manylinux-style binaries but would like a more meaningful platform tag than "linux", and for everyone who wants to extend wheel metadata to allow dependencies on external distro packages then having distro-specific wheels is probably a necessary first step. > If we want to treat distros as first-class deployment targets I think being able to use their platform features in a way that's compatible with PyPI is an important next step. 
However, wheel tags might be insufficient here; the main appeal of creating distro-specific wheels is being able to use distro-specific features, but those typically come along with specific package dependencies as well, and we don't have a way to express those yet. I think this is worthwhile to do, but figuring out a way to do it is probably going to take a lot of discussion. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jun 22 17:42:27 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 22 Jun 2016 17:42:27 -0400 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> Message-ID: <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> > On Jun 22, 2016, at 5:38 PM, Glyph wrote: > > >> On Jun 22, 2016, at 12:21, Nathaniel Smith > wrote: >> There are still use cases for distro-specific wheels, though -- some examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine Linux wheels (manylinux1 is glibc only), internal deploys that want to build on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of generating manylinux-style binaries but would like a more meaningful platform tag than "linux", and for everyone who wants to extend wheel metadata to allow dependencies on external distro packages then having distro-specific wheels is probably a necessary first step. >> > > If we want to treat distros as first-class deployment targets I think being able to use their platform features in a way that's compatible with PyPI is an important next step. 
However, wheel tags might be insufficient here; the main appeal of creating distro-specific wheels is being able to use distro-specific features, but those typically come along with specific package dependencies as well, and we don't have a way to express those yet. I don't think these two things need to be bound together. People can already today depend on platform specific things just by not publishing wheels. Adding these tags naturally follows that, where people would need to manually install items from the OS before they used them. Adding some mechanism for automating this would be a good, further addition, but I think they are separately useful (and even more useful and powerful when combined). > > I think this is worthwhile to do, but figuring out a way to do it is probably going to take a lot of discussion. > > -glyph > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Wed Jun 22 17:52:57 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 23 Jun 2016 09:52:57 +1200 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> Message-ID: They're definitely separate; there's a draft PEP Tennessee and I started at PyCon AU 2014 which would make a good basis for such an effort. 
-Rob On 23 June 2016 at 09:42, Donald Stufft wrote: > > On Jun 22, 2016, at 5:38 PM, Glyph wrote: > > > On Jun 22, 2016, at 12:21, Nathaniel Smith wrote: > > There are still use cases for distro-specific wheels, though -- some > examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine > Linux wheels (manylinux1 is glibc only), internal deploys that want to build > on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of generating > manylinux-style binaries but would like a more meaningful platform tag than > "linux", and for everyone who wants to extend wheel metadata to allow > dependencies on external distro packages then having distro-specific wheels > is probably a necessary first step. > > If we want to treat distros as first-class deployment targets I think being > able to use their platform features in a way that's compatible with PyPI is > an important next step. However, wheel tags might be insufficient here; the > main appeal of creating distro-specific wheels is being able to use > distro-specific features, but those typically come along with specific > package dependencies as well, and we don't have a way to express those yet. > > > I don?t think these two things need to be bound together. People can already > today depend on platform specific things just by not publishing wheels. > Adding these tags naturally follows that, where people would need to > manually install items from the OS before they used them. Adding some > mechanism for automating this would be a good, further addition, but I think > they are separately useful (and even more useful and powerful when > combined). > > > I think this is worthwhile to do, but figuring out a way to do it is > probably going to take a lot of discussion. > > -glyph > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ? 
> Donald Stufft > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From glyph at twistedmatrix.com Wed Jun 22 18:04:21 2016 From: glyph at twistedmatrix.com (Glyph) Date: Wed, 22 Jun 2016 15:04:21 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> Message-ID: <8F6278ED-DDC4-4C78-884A-B699C066FE31@twistedmatrix.com> > On Jun 22, 2016, at 14:42, Donald Stufft wrote: > > I don?t think these two things need to be bound together. People can already today depend on platform specific things just by not publishing wheels. Adding these tags naturally follows that, where people would need to manually install items from the OS before they used them. Adding some mechanism for automating this would be a good, further addition, but I think they are separately useful (and even more useful and powerful when combined). OK, fair point. I don't disagree with any of that. As long as everyone has a clear idea that this is part of a phased approach so the wheel tags should be ultimately upwards compatible with something that does dependencies, this all sounds pretty good (and an incremental approach with benefits at each stage is more likely to succeed anyhow). -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Wed Jun 22 17:44:28 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 22 Jun 2016 14:44:28 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> References: <808CB562-342D-44FB-847A-62E36C042D98@coderanger.net> <5838A617-0FB5-4D02-8D1B-5A1E25CB3B13@twistedmatrix.com> <74F97012-056D-4BA2-BC71-D0E943CF25B1@stufft.io> Message-ID: <06DF3AA8-E516-41F9-9249-120D39D67C40@coderanger.net> > On Jun 22, 2016, at 2:42 PM, Donald Stufft wrote: > > >> On Jun 22, 2016, at 5:38 PM, Glyph wrote: >> >> >>> On Jun 22, 2016, at 12:21, Nathaniel Smith wrote: >>> There are still use cases for distro-specific wheels, though -- some examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine Linux wheels (manylinux1 is glibc only), internal deploys that want to build on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of generating manylinux-style binaries but would like a more meaningful platform tag than "linux", and for everyone who wants to extend wheel metadata to allow dependencies on external distro packages then having distro-specific wheels is probably a necessary first step. >>> >> If we want to treat distros as first-class deployment targets I think being able to use their platform features in a way that's compatible with PyPI is an important next step. However, wheel tags might be insufficient here; the main appeal of creating distro-specific wheels is being able to use distro-specific features, but those typically come along with specific package dependencies as well, and we don't have a way to express those yet. > > I don?t think these two things need to be bound together. People can already today depend on platform specific things just by not publishing wheels. Adding these tags naturally follows that, where people would need to manually install items from the OS before they used them. 
Adding some mechanism for automating this would be a good, further addition, but I think they are separately useful (and even more useful and powerful when combined). I could see an argument for maybe building support into Pip but disallowing them on PyPI until we feel comfortable with the UX. That doesn't add much over existing private index support though. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Jun 22 21:53:39 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 22 Jun 2016 18:53:39 -0700 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files In-Reply-To: References: Message-ID: On 22 June 2016 at 08:51, Nathaniel Smith wrote: > I believe the status is that there's general consensus that something like > this would be useful, but there's no one who is currently actively working > in it. 
Checking https://wheels.galaxyproject.org/packages/ suggests that the Galaxy Project folks did make some progress with it (scroll down to the psycopg2 packages), so it may be worth pinging Nate Coraor directly to see how far they got with the tooling support. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vkruglikov at numenta.com Wed Jun 22 21:00:07 2016 From: vkruglikov at numenta.com (Vitaly Kruglikov) Date: Thu, 23 Jun 2016 01:00:07 +0000 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files Message-ID: Thanks everyone for following up on my questions. This helps a lot. In particular, thank you for pointing me to manylinux1. https://www.python.org/dev/peps/pep-0513 appears to be incredibly well written, with excellent insights. I will likely adopt the manylinux1 approach. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vkruglikov at numenta.com Thu Jun 23 14:44:45 2016 From: vkruglikov at numenta.com (Vitaly Kruglikov) Date: Thu, 23 Jun 2016 18:44:45 +0000 Subject: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files Message-ID: On 22 June 2016 at 08:51, Nathaniel Smith wrote: > I believe the status is that there's general consensus that something like > this would be useful, but there's no one who is currently actively working > in it. Thanks everyone for following up on my questions. This helps a lot. In particular, thank you for pointing me to manylinux1. https://www.python.org/dev/peps/pep-0513 appears to be incredibly well written, with excellent insights. I will likely adopt the manylinux1 approach. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From count-python.org at flatline.de Fri Jun 24 07:55:09 2016 From: count-python.org at flatline.de (Andreas Kotes) Date: Fri, 24 Jun 2016 11:55:09 +0000 (UTC) Subject: [Distutils] Notice: PyPI APIs now return 403 when accessed via HTTP References: <8A627727-627F-4B80-B628-3F1CE69B1491@stufft.io> Message-ID: Hello Donald, Donald Stufft <donald at stufft.io> writes: > In part of an ongoing effort to improve the security of PyPI, instead of redirecting (or silently allowing) > requests made over HTTP to PyPI APIs, these APIs will now return a 403 and require people to make the initial > request over HTTPS. > > This does not affect the UI portions of the site that are designed to be used by humans, for these we will still > redirect (which will cause the browser to see the HSTS header and force the user to use HTTPS from then on out). I have to kindly request this change to be reverted, or at least to be exempt for the SimpleRPC call. There's an installed base of tens of thousands of Puppet installations installing pip modules via a fscked up pip provider that's hardcoded to work against the http-based SimpleRPC endpoint, all of which are broken now :( cURL equivalent of an example call they are making: curl -v -X POST http://pypi.python.org/pypi -H 'Content-type: text/xml' -d "<?xml version='1.0'?><methodCall><methodName>package_releases</methodName><params><param><value><string>pip</string></value></param></params></methodCall>" fix they've done on their side: https://github.com/puppetlabs/puppet/commit/152299cc859fc74343c697841848086d4e41b6f8 related Jira issue on their side: https://tickets.puppetlabs.com/browse/PUP-6120 as this change is only included in the very latest Puppet release (4.5) and means crossing one major and multiple minor releases for almost everyone using that code, I see no option but to plead to revert (the relevant part) of this on behalf of the affected admins and systems. thank you for your consideration, count -- Andreas 'count' Kotes Taming computers for humans since 1990. "Don't ask what the world needs. Ask what makes you come alive, and go do it. 
Because what the world needs is people who have come alive." -- Howard Thurman From pav at iki.fi Fri Jun 24 14:49:39 2016 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 24 Jun 2016 18:49:39 +0000 (UTC) Subject: [Distutils] Benchmark regression feeds Message-ID: Hi, In case someone is interested in getting notifications of performance regressions in the Numpy and Scipy benchmarks, this is available as Atom feeds at: https://pv.github.io/numpy-bench/regressions.xml https://pv.github.io/scipy-bench/regressions.xml From pav at iki.fi Fri Jun 24 14:51:58 2016 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 24 Jun 2016 18:51:58 +0000 (UTC) Subject: [Distutils] Benchmark regression feeds Message-ID: Sorry for the unrelated noise --- sent to the wrong mailing list :/ From pradyunsg at gmail.com Sat Jun 25 06:25:59 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Sat, 25 Jun 2016 10:25:59 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install Message-ID: Hello List! There is currently a proposal to change the behaviour of pip install so that it upgrades a package that is passed even if it is already installed. This behaviour change is accompanied with a change in the upgrade strategy - pip would stop "eagerly" upgrading dependencies and would become more conservative, upgrading a dependency only when it doesn't meet the lower constraints of the newer version of a parent. Moreover, the behaviour of pip install --target would also be changed so that --upgrade no longer affects it. A PEP-style write-up (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) documents the reasoning behind this proposed change, what the change is and some alternate proposals that were rejected in the discussions. This is a request for comments on the pull-request implementing this proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts and possible concerns related to it. Cheers, Pradyun Gedam 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Sat Jun 25 12:31:30 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Sat, 25 Jun 2016 11:31:30 -0500 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On Sat, Jun 25, 2016 at 5:25 AM, Pradyun Gedam wrote: > Hello List! > > There is currently a proposal to change the behaviour to pip install to > upgrade a package that is passed even if it is already installed. > > This behaviour change is accompanied with a change in the upgrade strategy - > pip would stop "eagerly" upgrading dependencies and would become more > conservative, upgrading a dependency only when it doesn't meet lower > constraints of the newer version of a parent. Moreover, the behaviour of pip > install --target would also be changed so that --upgrade no longer affects > it. > > A PEP-style write-up > (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) > documents the reasoning behind this proposed change, what the change is and > some alternate proposals that were rejected in the discussions. > > This is a request for comments on the pull-request implementing this > proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts > and possible concerns related to it. Having `pip install` implicitly upgrade a package will break the way tooling like ansible, puppet, salt, and chef all work. Most expect that if you do `pip install requests` twice you won't get two different versions. At the very least, there should be an option added that implies that if the version of the specified package is already installed and satisfactory, that it is not upgraded. That said, this will also break those tools' understanding of how `pip install --upgrade` works if instead `-U/--upgrade` is kept. 
People who are automating deployments using pip (perhaps in virtualenvs or however else, it really doesn't matter) will now be forced to periodically increase their lower bounds or list out *every* dependency to ensure they upgrade if that's what they want rather than being able to rely on pip to do that. They'll now have to specify their requirements list in more than one place, and this will not be something that's an improvement to the tooling in the community. Will it make some things more explicit? Maybe. Will it cause people trying to deploy software to waste hours of their time? Definitely. If you want to change the upgrade behaviour of pip, I suggest not modifying the existing `pip install` and instead deprecating that in favor of a new command. Silently pulling the rug out from under people seems like an incredibly bad idea. From donald at stufft.io Sat Jun 25 13:16:51 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 25 Jun 2016 13:16:51 -0400 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> > On Jun 25, 2016, at 12:31 PM, Ian Cordasco wrote: > > On Sat, Jun 25, 2016 at 5:25 AM, Pradyun Gedam wrote: >> Hello List! >> >> There is currently a proposal to change the behaviour to pip install to >> upgrade a package that is passed even if it is already installed. >> >> This behaviour change is accompanied with a change in the upgrade strategy - >> pip would stop "eagerly" upgrading dependencies and would become more >> conservative, upgrading a dependency only when it doesn't meet lower >> constraints of the newer version of a parent. Moreover, the behaviour of pip >> install --target would also be changed so that --upgrade no longer >> affects it.
>> >> A PEP-style write-up >> (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) >> documents the reasoning behind this proposed change, what the change is and >> some alternate proposals that were rejected in the discussions. >> >> This is a request for comments on the pull-request implementing this >> proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts >> and possible concerns related to it. > > Having `pip install` implicitly upgrade a package will break the way > tooling like ansible, puppet, salt, and chef all work. Most expect > that if you do `pip install requests` twice you won't get two > different versions. At the very least, there should be an option added > that implies that if the version of the specified package is already > installed and satisfactory, that it is not upgraded. So this may be true, but it's also not particularly hard to work around, similarly to what they do already for system package managers: they can do ``pip list`` or ``pip freeze`` to see if something is already installed.
> > If you want to change the upgrade behaviour of pip, I suggest not > modifying the existing `pip install` and instead deprecating that in > favor of a new command. Silently pulling the rug out from under people > seems like an incredibly bad idea. > This is going to break things, no matter what we do... and the current behavior is also broken in ways that are causing other projects to need to do broken things. The current solution I think breaks *fewer* things and once the initial pain of breakage is over will end up with a much nicer situation. Importantly the vast bulk of the breakage will be centralized in things like Chef where one group can make a change and fix it for thousands. Donald Stufft From dholth at gmail.com Sat Jun 25 13:41:44 2016 From: dholth at gmail.com (Daniel Holth) Date: Sat, 25 Jun 2016 17:41:44 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: Are you suggesting that my current vagrant provisioning script "ensure x is installed":

ensure_x.sh:
#!/bin/sh
pip install x

Which IIUC does not currently check the network if x is already installed, is no longer idempotent, and will permanently brick my development environment as soon as x is upgraded on pypi? Do I have to include logic for checking the current version of pip, and then decide how to upgrade? Do I just pin all dependencies always? I just spent 3 weeks fixing a nodejs deployment that had been upgraded when I was not ready, and I would rather not have to do the same in pip. Why don't we just implement something like pip install foobar@latest if you want the upgrade? pip upsert? The idea of always upgrading when you specify a concrete dependency, a file or a URL, is a good one.
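[Editorial note: the "ensure x is installed" pattern above can stay idempotent regardless of which way pip's default goes, by checking locally before shelling out to pip. A rough sketch under stated assumptions - the helper name is made up, and the importable module name is assumed to match the distribution name, which is not always true:]

```python
import importlib.util
import subprocess
import sys

def ensure_installed(module_name, pip_name=None):
    """Run pip only when module_name is not already importable.

    Because the check is local, nothing here hits the network when the
    package is already present, so a provisioning script using this
    stays idempotent even if `pip install` starts upgrading by default.
    """
    if importlib.util.find_spec(module_name) is not None:
        return "already installed"
    # Only reached when the module is missing; installs pip_name
    # (or module_name if they happen to coincide).
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", pip_name or module_name]
    )
    return "installed"

# "json" ships with the standard library, so this never invokes pip:
print(ensure_installed("json"))  # already installed
```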
On Sat, Jun 25, 2016 at 1:17 PM Donald Stufft wrote: > > > On Jun 25, 2016, at 12:31 PM, Ian Cordasco > wrote: > > > > On Sat, Jun 25, 2016 at 5:25 AM, Pradyun Gedam > wrote: > >> Hello List! > >> > >> There is currently a proposal to change the behaviour to pip install to > >> upgrade a package that is passed even if it is already installed. > >> > >> This behaviour change is accompanied with a change in the upgrade > strategy - > >> pip would stop "eagerly" upgrading dependencies and would become more > >> conservative, upgrading a dependency only when it doesn't meet lower > >> constraints of the newer version of a parent. Moreover, the behaviour > of pip > >> install --target would also be changed so that --upgrade no longer > affects > >> it. > >> > >> A PEP-style write-up > >> (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) > >> documents the reasoning behind this proposed change, what the change is > and > >> some alternate proposals that were rejected in the discussions. > >> > >> This is a request for comments on the pull-request implementing this > >> proposal (https://github.com/pypa/pip/pull/3806) for your views, > thoughts > >> and possible concerns related to it. > > > > Having `pip install` implicitly upgrade a package will break the way > > tooling like ansible, puppet, salt, and chef all work. Most expect > > that if you do `pip install requests` twice you won't get two > > different versions. At the very least, there should be an option added > > that implies that if the version of the specified package is already > > installed and satisfactory, that it is not upgraded. > > So this may be true, but it's also not particularly hard to work around > similarly to what they do already for system package managers, they can > do ``pip list`` or ``pip freeze`` to see if something is already installed.
> Not only will this work across versions, but it'll also work *better* > because it won't involve hitting the network to determine this information. > > > > > That said, this will also break those tools understanding of how `pip > > install --upgrade` works if instead `-U/--upgrade` is kept. People who > > are automating deployments using pip (perhaps in virtualenvs or > > however else, it really doesn't matter) will now be forced to > > periodically increase their lower bounds or list out *every* > > dependency to ensure they upgrade if that's what they want rather than > > being able to rely on pip to do that. They'll now have to specify > > their requirements list in more than one place, and this will not be > > something that's an improvement to the tooling in the community. Will > > it make some things more explicit? Maybe. Will it cause people trying > > to deploy software to waste hours of their time? Definitely. > > > > If you want to change the upgrade behaviour of pip, I suggest not > > modifying the existing `pip install` and instead deprecating that in > > favor of a new command. Silently pulling the rug out from under people > > seems like an incredibly bad idea. > > > > This is going to break things, no matter what we do... and the current > behavior is also broken in ways that are causing other projects to need > to do broken things. > > The current solution I think breaks *less* things and once the initial > pain of breakage is over will end up with a much nicer situation. > Importantly > the vast bulk of the breakage will be centralized in things like Chef where > one group can make a change and fix it for thousands. > > Donald Stufft > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed...
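[Editorial note: Donald's suggestion of checking installed state via ``pip freeze`` instead of hitting the network is straightforward for configuration-management tools to adopt. A minimal sketch of parsing that output - the function is illustrative, not an existing tool, and it deliberately skips editable/VCS lines, which ``pip freeze`` renders differently:]

```python
def parse_freeze(output):
    """Turn `pip freeze`-style "name==version" lines into a dict.

    Simplified: editable installs (-e ...) and anything without a
    pinned "==" are skipped.
    """
    installed = {}
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith("-e") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        installed[name.lower()] = version
    return installed

# A tool can now decide locally whether `pip install requests` would
# change anything, without any network round-trip:
freeze_output = "requests==2.10.0\nsix==1.10.0\n-e git+https://example.com/repo#egg=dev\n"
print(parse_freeze(freeze_output))  # {'requests': '2.10.0', 'six': '1.10.0'}
```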
URL: From ncoghlan at gmail.com Sat Jun 25 17:31:19 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Jun 2016 14:31:19 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On 25 June 2016 at 03:25, Pradyun Gedam wrote: > Hello List! > > There is currently a proposal to change the behaviour to pip install to > upgrade a package that is passed even if it is already installed. > > This behaviour change is accompanied with a change in the upgrade strategy - > pip would stop "eagerly" upgrading dependencies and would become more > conservative, upgrading a dependency only when it doesn't meet lower > constraints of the newer version of a parent. Moreover, the behaviour of pip > install --target would also be changed so that --upgrade no longer affects > it. > > A PEP-style write-up > (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) > documents the reasoning behind this proposed change, what the change is and > some alternate proposals that were rejected in the discussions. > > This is a request for comments on the pull-request implementing this > proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts > and possible concerns related to it. Hi Pradyun, Thanks for putting that together. The assertion in the write-up that the proposed behaviour matches that of operating system level package managers doesn't sound right to me: $ sudo dnf install -q python Package python-2.7.11-5.fc24.x86_64 is already installed, skipping.
(My system actually has a Python update pending, so the "-q" option is suppressing the output telling me about that, but either way, it doesn't make any local changes unless I use the update or upgrade subcommand or supply the "--best" upgrade strategy option to the install command) As far as I am aware, apt-get install behaves the same way - if you only give a package name, and that package is already installed on the system, it won't do anything, even if a newer version of that component is available, and you need to use the "upgrade" subcommand instead to say "upgrade to the latest available version". Switching pip's "--upgrade" command line option to a non-eager upgrade strategy sounds good to me, but making "install" upgrade the requested component by default when given just a name (rather than a specific file or URL reference) would be very surprising and problematic. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sat Jun 25 18:22:07 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 25 Jun 2016 18:22:07 -0400 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: > On Jun 25, 2016, at 5:31 PM, Nick Coghlan wrote: > > On 25 June 2016 at 03:25, Pradyun Gedam wrote: >> Hello List! >> >> There is currently a proposal to change the behaviour to pip install to >> upgrade a package that is passed even if it is already installed. >> >> This behaviour change is accompanied with a change in the upgrade strategy - >> pip would stop "eagerly" upgrading dependencies and would become more >> conservative, upgrading a dependency only when it doesn't meet lower >> constraints of the newer version of a parent. Moreover, the behaviour of pip >> install --target would also be changed so that --upgrade no longer affects >> it.
>> >> A PEP-style write-up >> (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) >> documents the reasoning behind this proposed change, what the change is and >> some alternate proposals that were rejected in the discussions. >> >> This is a request for comments on the pull-request implementing this >> proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts >> and possible concerns related to it. > > Hi Pradyun, > > Thanks for putting that together. The assertion in the write-up that > the proposed behaviour matches that of operating system level package > managers doesn't sound right to me: > > $ sudo dnf install -q python > Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. > > (My system actually has a Python update pending, so the "-q" option is > suppressing the output telling me about that, but either way, it > doesn't make any local changes unless I use the update or upgrade > subcommand or supply the "--best" upgrade strategy option to the > install command) Yum behaves as indicated, see: https://s.caremad.io/0E3gYYBWFP/ It appears that maybe DNF decided to be an odd duck like pip currently is and put this behind a flag (``dnf install --best``). Although it arguably makes more sense for a distro package manager to use pip's current behavior, since they don't tend to do major upgrades inside of a single distro (so you're likely to only get bug fixes and security fixes available) and they tend to have dedicated "upgrade all the things" commands that people run regularly. > > As far as I am aware, apt-get install behaves the same way - if you > only give a package name, and that package is already installed on the > system, it won't do anything, even if a newer version of that > component is available, and you need to use the "upgrade" subcommand > instead to say "upgrade to the latest available version".
No, apt-get install upgrades if it is already installed, see: https://askubuntu.com/questions/44122/how-to-upgrade-a-single-package-using-apt-get > > Switching pip's "--upgrade" command line option to a non-eager upgrade > strategy sounds good to me, but making "install" upgrade the requested > component by default when given just a name (rather than a specific > file or URL reference) would be very surprising and problematic. I think the current behavior is surprising and problematic TBH and has led projects to tell people they should do ``pip install -U `` instead of just ``pip install ``. It's a bit weird that you get different versions of installed based on what version you already have installed when you run ``pip install ``. The proposed behavior matches that of all of the other language package managers I tried as well. Donald Stufft From njs at pobox.com Sat Jun 25 18:40:37 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 25 Jun 2016 15:40:37 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On Sat, Jun 25, 2016 at 2:31 PM, Nick Coghlan wrote: > Thanks for putting that together. The assertion in the write-up that > the proposed behaviour matches that of operating system level package > managers doesn't sound right to me: > > $ sudo dnf install -q python > Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. > > (My system actually has a Python update pending, so the "-q" option is > suppressing the output telling me about that, but either way, it > doesn't make any local changes unless I use the update or upgrade > subcommand or supply the "--best" upgrade strategy option to the > install command) Right -- this is partly my error, because I missed this earlier in the discussion (didn't yum use to at least offer an interactive prompt to upgrade or something instead? I guess it doesn't matter).
> As far as I am aware, apt-get install behaves the same way - if you > only give a package name, and that package is already installed on the > system, it won't do anything, even if a newer version of that > component is available, and you need to use the "upgrade" subcommand > instead to say "upgrade to the latest available version". No, FWIW, apt-get acts like the proposed pip install. From the apt-get man page (my emphasis): install install is followed by one or more packages desired for installation *or upgrading* [...] This is also the target to use if you want to upgrade one or more already-installed packages without upgrading every package you have on your system. Unlike the "upgrade" target, which installs the newest version of all currently installed packages, "install" will install the newest version of only the package(s) specified. Simply provide the name of the package(s) you wish to upgrade, and if a newer version is available, it (and its dependencies, as described above) will be downloaded and installed. Personally I think this is better UX than the dnf approach. Let's take a Bayesian approach :-). What people really want is software that reads their mind and does what they mean. We can't read the user's mind, but if they type 'pip install foo' and 'foo' is already installed, then their mental state must have somehow prompted them to think that this was a good thing to do, so we can make some inferences about what they must be thinking. I think there are two main mental states that might have led them to type this strange thing: - they didn't know 'foo' was installed, so they expected 'pip install foo' to install it from scratch, and leave them with the latest version. - they knew 'foo' was installed, and they (mistakenly or not) believed that 'pip install' acts like 'apt-get install' and this is the way to upgrade to the latest version. 
(Maybe they believe this because they are Ubuntu users, maybe because 'pip install' is the only command they know and they are making a guess, whatever), Either way, having 'pip install' go ahead and do the upgrade gives the user what they were expecting. It also reduces the state space: if 'pip install foo' has the post-condition that it always leaves you with the latest version, then that's much simpler to reason about then the version where it might leave you with the latest version or it might not depending on what other commands you might have run a year ago. And you don't have to remember that if you really want to make sure you have the latest version regardless of your starting state then that's 'pip install foo || pip upgrade foo', or some other special incantation. And of course if you write the wrong thing it will work fine on your machine, but then when your users try following your tutorial then it goes wrong... Of course, there is a third mental state that the user might have been in: that they didn't know whether 'foo' is installed, and they wanted to guarantee that some version of 'foo' is installed, but they genuinely didn't care what version that is, *and* they'd prefer to keep an old version rather than upgrade. That's a fairly odd and complicated mental state to be in, but I guess it does come up sometimes (like in Ian's use case of writing automated sysadmin scripts). I don't have any objection to making those semantics *available*, but I think it's a bad idea to attach them to 'pip install', since that's the obvious thing that new and non-expert users will reach for, and new and non-expert users by definition do not have that kind of complicated mental state. But it would make sense to me to provide this functionality under an opt-in command specifically for experts like 'pip require foo', or 'pip install --prefer-current foo'. -n -- Nathaniel J. 
Smith -- https://vorpus.org From dholth at gmail.com Sat Jun 25 20:41:21 2016 From: dholth at gmail.com (Daniel Holth) Date: Sun, 26 Jun 2016 00:41:21 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: The problem is that users expect to install things but scripts expect to ensure something is installed. Simple: just check for isatty() to see if you should upgrade. On Sat, Jun 25, 2016, 18:45 Nathaniel Smith wrote: > On Sat, Jun 25, 2016 at 2:31 PM, Nick Coghlan wrote: > > Thanks for putting that together. The assertion in the write-up that > > the proposed behaviour matches that of operating system level package > > managers doesn't sound right to me: > > > > $ sudo dnf install -q python > > Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. > > > > (My system actually has a Python update pending, so the "-q" option is > > suppressing the output telling me about that, but either way, it > > doesn't make any local changes unless I use the update or upgrade > > subcommand or supply the "--best" upgrade strategy option to the > > install command) > > Right -- this is partly my error, because I missed this earlier in the > discussion (didn't yum use to at least offer an interactive prompt to > upgrade or something instead? I guess it doesn't matter). > > > As far as I am aware, apt-get install behaves the same way - if you > > only give a package name, and that package is already installed on the > > system, it won't do anything, even if a newer version of that > > component is available, and you need to use the "upgrade" subcommand > > instead to say "upgrade to the latest available version". > > No, FWIW, apt-get acts like the proposed pip install. From the apt-get > man page (my emphasis): > > install > install is followed by one or more packages desired for > installation *or upgrading* > [...]
> This is also the target to use if you want to upgrade one or > more > already-installed packages without upgrading every package you > have > on your system. Unlike the "upgrade" target, which installs the > newest version of all currently installed packages, "install" > will > install the newest version of only the package(s) specified. > Simply > provide the name of the package(s) you wish to upgrade, and if a > newer version is available, it (and its dependencies, as > described > above) will be downloaded and installed. > > Personally I think this is better UX than the dnf approach. Let's take > a Bayesian approach :-). What people really want is software that > reads their mind and does what they mean. We can't read the user's > mind, but if they type 'pip install foo' and 'foo' is already > installed, then their mental state must have somehow prompted them to > think that this was a good thing to do, so we can make some inferences > about what they must be thinking. I think there are two main mental > states that might have led them to type this strange thing: > > - they didn't know 'foo' was installed, so they expected 'pip install > foo' to install it from scratch, and leave them with the latest > version. > > - they knew 'foo' was installed, and they (mistakenly or not) believed > that 'pip install' acts like 'apt-get install' and this is the way to > upgrade to the latest version. (Maybe they believe this because they > are Ubuntu users, maybe because 'pip install' is the only command they > know and they are making a guess, whatever), > > Either way, having 'pip install' go ahead and do the upgrade gives the > user what they were expecting. 
> > It also reduces the state space: if 'pip install foo' has the > post-condition that it always leaves you with the latest version, then > that's much simpler to reason about then the version where it might > leave you with the latest version or it might not depending on what > other commands you might have run a year ago. And you don't have to > remember that if you really want to make sure you have the latest > version regardless of your starting state then that's 'pip install foo > || pip upgrade foo', or some other special incantation. And of course > if you write the wrong thing it will work fine on your machine, but > then when your users try following your tutorial then it goes wrong... > > Of course, there is a third mental state that the user might have been > in: that they didn't know whether 'foo' is installed, and they wanted > to guarantee that some version of 'foo' is installed, but they > genuinely didn't care what version that is, *and* they'd prefer to > keep an old version rather than upgrade. That's a fairly odd and > complicated mental state to be in, but I guess it does come up > sometimes (like in Ian's use case of writing automated sysadmin > scripts). I don't have any objection to making those semantics > *available*, but I think it's a bad idea to attach them to 'pip > install', since that's the obvious thing that new and non-expert users > will reach for, and new and non-expert users by definition do not have > that kind of complicated mental state. But it would make sense to me > to provide this functionality under an opt-in command specifically for > experts like 'pip require foo', or 'pip install --prefer-current foo'. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dana.powers at gmail.com Sat Jun 25 19:15:34 2016 From: dana.powers at gmail.com (Dana Powers) Date: Sat, 25 Jun 2016 16:15:34 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: Should note that the 'upgrade' behavior of apt-get install requires that you've also run `apt-get update` (I believe?) -- that seems like a pretty big difference since the user has control over when they update their upstream package lists. A quick and dirty survey of other package manager behavior (very quick... may be wrong!): gem install is not idempotent, supports --conservative flag to alter. non-eager. npm install also not idempotent, and I don't see a flag to alter this. eager, but sub-dependencies are isolated per package -Dana On Sat, Jun 25, 2016 at 3:40 PM, Nathaniel Smith wrote: > On Sat, Jun 25, 2016 at 2:31 PM, Nick Coghlan wrote: >> Thanks for putting that together. The assertion in the write-up that >> the proposed behaviour matches that of operating system level package >> managers doesn't sound right to me: >> >> $ sudo dnf install -q python >> Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. >> >> (My system actually has a Python update pending, so the "-q" option is >> suppressing the output telling me about that, but either way, it >> doesn't make any local changes unless I use the update or upgrade >> subcommand or supply the "--best" upgrade strategy option to the >> install command) > > Right -- this is partly my error, because I missed this earlier in the > discussion (didn't yum use to at least offer an interactive prompt to > upgrade or something instead? I guess it doesn't matter). 
> >> As far as I am aware, apt-get install behaves the same way - if you >> only give a package name, and that package is already installed on the >> system, it won't do anything, even if a newer version of that >> component is available, and you need to use the "upgrade" subcommand >> instead to say "upgrade to the latest available version". > > No, FWIW, apt-get acts like the proposed pip install. From the apt-get > man page (my emphasis): > > install > install is followed by one or more packages desired for > installation *or upgrading* > [...] > This is also the target to use if you want to upgrade one or more > already-installed packages without upgrading every package you have > on your system. Unlike the "upgrade" target, which installs the > newest version of all currently installed packages, "install" will > install the newest version of only the package(s) specified. Simply > provide the name of the package(s) you wish to upgrade, and if a > newer version is available, it (and its dependencies, as described > above) will be downloaded and installed. > > Personally I think this is better UX than the dnf approach. Let's take > a Bayesian approach :-). What people really want is software that > reads their mind and does what they mean. We can't read the user's > mind, but if they type 'pip install foo' and 'foo' is already > installed, then their mental state must have somehow prompted them to > think that this was a good thing to do, so we can make some inferences > about what they must be thinking. I think there are two main mental > states that might have led them to type this strange thing: > > - they didn't know 'foo' was installed, so they expected 'pip install > foo' to install it from scratch, and leave them with the latest > version. > > - they knew 'foo' was installed, and they (mistakenly or not) believed > that 'pip install' acts like 'apt-get install' and this is the way to > upgrade to the latest version. 
(Maybe they believe this because they > are Ubuntu users, maybe because 'pip install' is the only command they > know and they are making a guess, whatever), > > Either way, having 'pip install' go ahead and do the upgrade gives the > user what they were expecting. > > It also reduces the state space: if 'pip install foo' has the > post-condition that it always leaves you with the latest version, then > that's much simpler to reason about then the version where it might > leave you with the latest version or it might not depending on what > other commands you might have run a year ago. And you don't have to > remember that if you really want to make sure you have the latest > version regardless of your starting state then that's 'pip install foo > || pip upgrade foo', or some other special incantation. And of course > if you write the wrong thing it will work fine on your machine, but > then when your users try following your tutorial then it goes wrong... > > Of course, there is a third mental state that the user might have been > in: that they didn't know whether 'foo' is installed, and they wanted > to guarantee that some version of 'foo' is installed, but they > genuinely didn't care what version that is, *and* they'd prefer to > keep an old version rather than upgrade. That's a fairly odd and > complicated mental state to be in, but I guess it does come up > sometimes (like in Ian's use case of writing automated sysadmin > scripts). I don't have any objection to making those semantics > *available*, but I think it's a bad idea to attach them to 'pip > install', since that's the obvious thing that new and non-expert users > will reach for, and new and non-expert users by definition do not have > that kind of complicated mental state. But it would make sense to me > to provide this functionality under an opt-in command specifically for > experts like 'pip require foo', or 'pip install --prefer-current foo'. > > -n > > -- > Nathaniel J. 
Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Sun Jun 26 00:44:44 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Jun 2016 21:44:44 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On 25 June 2016 at 15:40, Nathaniel Smith wrote: > On Sat, Jun 25, 2016 at 2:31 PM, Nick Coghlan wrote: >> Thanks for putting that together. The assertion in the write-up that >> the proposed behaviour matches that of operating system level package >> managers doesn't sound right to me: >> >> $ sudo dnf install -q python >> Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. >> >> (My system actually has a Python update pending, so the "-q" option is >> suppressing the output telling me about that, but either way, it >> doesn't make any local changes unless I use the update or upgrade >> subcommand or supply the "--best" upgrade strategy option to the >> install command) > > Right -- this is partly my error, because I missed this earlier in the > discussion (didn't yum use to at least offer an interactive prompt to > upgrade or something instead? I guess it doesn't matter). > >> As far as I am aware, apt-get install behaves the same way - if you >> only give a package name, and that package is already installed on the >> system, it won't do anything, even if a newer version of that >> component is available, and you need to use the "upgrade" subcommand >> instead to say "upgrade to the latest available version". > > No, FWIW, apt-get acts like the proposed pip install. From the apt-get > man page (my emphasis): > > install > install is followed by one or more packages desired for > installation *or upgrading* > [...] 
> This is also the target to use if you want to upgrade one or more > already-installed packages without upgrading every package you have > on your system. Unlike the "upgrade" target, which installs the > newest version of all currently installed packages, "install" will > install the newest version of only the package(s) specified. Simply > provide the name of the package(s) you wish to upgrade, and if a > newer version is available, it (and its dependencies, as described > above) will be downloaded and installed. > > Personally I think this is better UX than the dnf approach. Let's take > a Bayesian approach :-). What people really want is software that > reads their mind and does what they mean. We can't read the user's > mind, but if they type 'pip install foo' and 'foo' is already > installed, then their mental state must have somehow prompted them to > think that this was a good thing to do, so we can make some inferences > about what they must be thinking. I think there are two main mental > states that might have led them to type this strange thing: > > - they didn't know 'foo' was installed, so they expected 'pip install > foo' to install it from scratch, and leave them with the latest > version. > > - they knew 'foo' was installed, and they (mistakenly or not) believed > that 'pip install' acts like 'apt-get install' and this is the way to > upgrade to the latest version. (Maybe they believe this because they > are Ubuntu users, maybe because 'pip install' is the only command they > know and they are making a guess, whatever), No, it's an idempotent assertion about the system state: "Make sure X is available, installing it if necessary". Maybe Debian folks are used to system packages stripping out the pip metadata, so pip has no idea what's installed, even if the system site-packages is configured to be visible in the venv? 
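The idempotent "make sure X is available" semantics described here can be sketched as a pure decision rule (a toy sketch only; `plan_action` is a hypothetical helper for illustration, not a pip feature):

```python
# Sketch of an idempotent "ensure installed" policy, as described above:
# install only when the package is absent, never upgrade what is there.
# (plan_action is a hypothetical helper, not part of pip.)

def plan_action(package, installed_versions):
    """Decide what an idempotent installer should do.

    installed_versions maps package name -> version already on the system.
    Returns "skip" when any version is present, else "install".
    """
    if package.lower() in installed_versions:
        return "skip"     # idempotent: some version is good enough
    return "install"      # absent: install the latest available

# A config-management run would feed this from the real environment;
# the mapping here is illustrative sample data.
state = {"requests": "2.10.0", "six": "1.10.0"}
print(plan_action("requests", state))  # skip
print(plan_action("Django", state))    # install
```

Note that this is exactly the post-condition tools like ansible and chef rely on: running the same step twice changes nothing.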
Python is not Linux, and it definitely isn't just Debian, so "apt does it that way" is not a good argument for changing behaviour that isn't broken. Changing "--upgrade" to the non-eager update strategy for dependencies makes perfect sense, I just want "pip install" without the --upgrade flag to continue to be treated the same way that dependency updates are going to be treated in the "--upgrade" case. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Sun Jun 26 00:59:01 2016 From: robertc at robertcollins.net (Robert Collins) Date: Sun, 26 Jun 2016 16:59:01 +1200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: On 26 June 2016 at 05:16, Donald Stufft wrote: > >> On Jun 25, 2016, at 12:31 PM, Ian Cordasco wrote: >> >> On Sat, Jun 25, 2016 at 5:25 AM, Pradyun Gedam wrote: >>> Hello List! >>> >>> There is currently a proposal to change the behaviour to pip install to >>> upgrade a package that is passed even if it is already installed. >>> >>> This behaviour change is accompanied with a change in the upgrade strategy - >>> pip would stop 'eagerly' upgrading dependencies and would become more >>> conservative, upgrading a dependency only when it doesn't meet lower >>> constraints of the newer version of a parent. Moreover, the behaviour of pip >>> install --target would also be changed so that --upgrade no longer affects >>> it. >>> >>> A PEP-style write-up >>> (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) >>> documents the reasoning behind this proposed change, what the change is and >>> some alternate proposals that were rejected in the discussions.
>>> >>> This is a request for comments on the pull-request implementing this >>> proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts >>> and possible concerns related to it. >> >> Having `pip install` implicitly upgrade a package will break the way >> tooling like ansible, puppet, salt, and chef all work. Most expect >> that if you do `pip install requests` twice you won't get two >> different versions. At the very least, there should be an option added >> that implies that if the version of the specified package is already >> installed and satisfactory, that it is not upgraded. > > So this may be true, but it's also not particularly hard to work around > similarly to what they do already for system package managers, they can > do ``pip list`` or ``pip freeze`` to see if something is already installed. > Not only will this work across versions, but it'll also work *better* > because it won't involve hitting the network to determine this information. Would we wait, before making this change, for the existing config systems to implement that workaround and get it into all their stable versions? If not, then it doesn't matter whether it's easy to work around - we're causing countless installs of those systems to break if we make the change. >> If you want to change the upgrade behaviour of pip, I suggest not >> modifying the existing `pip install` and instead deprecating that in >> favor of a new command. Silently pulling the rug out from under people >> seems like an incredibly bad idea. +1 ^ > This is going to break things, no matter what we do... and the current > behavior is also broken in ways that are causing other projects to need > to do broken things. How so? I haven't actually seen a bug or issue describing what folk are doing. > The current solution I think breaks *less* things and once the initial > pain of breakage is over will end up with a much nicer situation.
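The ``pip list``/``pip freeze`` check suggested above for tools like ansible, puppet, and chef might look like this (a sketch; the freeze text is illustrative sample data, and real freeze output also contains editable installs and other lines this toy parser simply skips):

```python
# Sketch: decide whether a package needs installing by inspecting
# `pip freeze`-style output, without hitting the network.
# (The freeze text below is illustrative sample data.)

def parse_freeze(freeze_output):
    """Map package name -> pinned version from `pip freeze`-style lines."""
    installed = {}
    for line in freeze_output.splitlines():
        line = line.strip()
        if not line or "==" not in line:
            continue  # skip editable installs, comments, blank lines
        name, _, version = line.partition("==")
        installed[name.lower()] = version
    return installed

def needs_install(package, freeze_output):
    """True if `package` is absent from the freeze listing."""
    return package.lower() not in parse_freeze(freeze_output)

sample = "requests==2.10.0\nsix==1.10.0\n"
print(needs_install("requests", sample))  # False
print(needs_install("Django", sample))    # True
```

A configuration tool would capture the real `pip freeze` output once and consult it for every package it manages, which is why this avoids repeated network round-trips.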
Importantly > the vast bulk of the breakage will be centralized in things like Chef where > one group can make a change and fix it for thousands. I think the primary outcome of the proposed plan is that we won't hear about breakage until it shows up on 'haveibeenpwned.com'. There are I think two discussions to have: - what should we be giving users as a UX - how to get there For the how-to-get-there aspect, I see absolutely no reason to make incompatible changes. We can add new options -O or whatever and leave the existing ones alone. Changing the behaviour for 'install NAME' I could potentially see, but I'd hesitate on that too. Our users don't react well to instability. For the what-should-we-aim-for question, that's more tricky. There are I think several different aspects that all interact, and make comparisons to dnf/apt etc quite meaningless. Firstly, those systems have automated cron-like systems to keep them secure; they can afford to not worry about security issues in their dependency chains because everything will be pushed out automatically. venvs have no such mechanism today - pip doesn't even have a suitable command today (though you can of course write one around it). Without one of these systems, I expect that venvs will accrete *years*-old versions of software, and we'll see the exact inverse breakage that this proposed change to -U is about: lower minimums in PyPI are often useless. Secondly, apt/dnf etc have a presumption that all versions of all software either work together or explicitly do not (via a combination of conflicts and dependencies) - because they're dealing with an order of magnitude less software than we are, and the system is an integrated system, not a federated system like pip's data model.
pip doesn't have that presumption - hence this whole proposal - but the consequence is that rather than giving users the most *likely* to work combination of software, we're going to give them the *least* likely to work combination [because there are billions of combinations of latest-X + older-Y, but only one latest-X + latest-Y]. Lastly, by defaulting to non-recursive upgrades we're putting the burden on our users to identify sensitive components and manage them much more carefully. Wearing my user hat, while I hate it when pip breaks something, I really want pip to do as much thinking for me as possible, and this seems like a really weird combination of do-more-thinking (upstall named things) and do-less-thinking-for-me (leave everything else at random versions). -Rob From ncoghlan at gmail.com Sun Jun 26 01:29:20 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Jun 2016 22:29:20 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: On 25 June 2016 at 21:59, Robert Collins wrote: > Lastly, by defaulting to non-recursive upgrades we're putting the > burden on our users to identify sensitive components and manage them > much more carefully. Huh, true. I was looking at this proposal from the point of view of container build processes where the system packages are visible from the venv, and there the "only install what I tell you, and only upgrade what you absolutely have to" behaviour is useful (especially since you're mainly doing it in the context of generating a new requirements.txt that is used to do the *actual* build).
However, I now think Robert's right that that's the wrong way to look at it - I am *not* a suitable audience for the defaults, since we can adapt our automation pipeline to whatever combination of settings pip tells us we need to get the behaviour we want (as long as we're given suitable deprecation periods to adjust to any behavioural changes, and we can largely control that ourselves by controlling when we upgrade pip, including patching it if absolutely necessary). By contrast, for folks that *aren't* using something like VersionEye or requires.io to stay on top of security updates, "always run the latest version of everything, and try to keep up with that upgrade treadmill" really is the safest way to go, and that's what the current eager upgrade behaviour provides. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pradyunsg at gmail.com Sun Jun 26 02:27:32 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Sun, 26 Jun 2016 06:27:32 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: I think it's useful to see what other tools and package managers do. Doing something like them because they do it is not a good reason. Doing it because it's better UX is a good reason. I like what git does, printing a (possibly long) message in a version cycle to warn users that the behavior would be changing in a future version and how they can/should act on it. It has a clean transition period that allows users to make educated decisions. I, personally, am more in favor of providing a `--upgrade-strategy (eager/non-eager)` flag (currently defaulting to eager) with `--upgrade`, with the warning message printed as above followed by a switch to non-eager. These two combined are IMO the best way to do a transition to non-eager upgrades by default.
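The eager and non-eager strategies being debated here can be sketched as toy decision logic (this is not pip's resolver; versions are compared as integer tuples and constraints are bare minimum versions, purely for illustration):

```python
# Toy sketch of the two upgrade strategies under discussion.
# NOT pip's resolver: versions are compared as integer tuples and
# constraints are plain minimum versions, purely for illustration.

def parse_version(text):
    return tuple(int(part) for part in text.split("."))

def deps_to_upgrade(strategy, installed, dep_minimums):
    """Return the dependencies an upgrade of the parent would touch.

    installed:    {dep_name: installed_version}
    dep_minimums: {dep_name: minimum version required by the new parent}
    """
    if strategy == "eager":
        # Current behaviour: recursively upgrade every dependency.
        return sorted(dep_minimums)
    if strategy == "non-eager":
        # Proposed behaviour: upgrade a dependency only when the
        # installed version no longer satisfies the parent's minimum.
        return sorted(
            dep
            for dep, minimum in dep_minimums.items()
            if dep not in installed
            or parse_version(installed[dep]) < parse_version(minimum)
        )
    raise ValueError("unknown strategy: %r" % strategy)

installed = {"six": "1.10.0", "requests": "2.5.0"}
minimums = {"six": "1.9.0", "requests": "2.9.0"}
print(deps_to_upgrade("eager", installed, minimums))      # ['requests', 'six']
print(deps_to_upgrade("non-eager", installed, minimums))  # ['requests']
```

The example shows the difference in one line: under "non-eager", six stays at 1.10.0 because it already satisfies the parent's minimum, while requests is upgraded because 2.5.0 does not.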
That said, a change to non-eager upgrades could potentially cause a security vulnerability because a security package is kept at an older version. This raises the question of whether we should default to non-eager upgrades at all. This is definitely something that should be addressed first, before we even talk about the transition. I'm by no means in a position to make a proper response and decision on this but someone else on this thread probably is. On Sun, 26 Jun 2016 at 10:59 Nick Coghlan wrote: > On 25 June 2016 at 21:59, Robert Collins > wrote: > > Lastly, by defaulting to non-recursive upgrades we're putting the > > burden on our users to identify sensitive components and manage them > > much more carefully. > > Huh, true. I was looking at this proposal from the point of view of > container build processes where the system packages are visible from > the venv, and there the "only install what I tell you, and only > upgrade what you absolutely have to" behaviour is useful (especially > since you're mainly doing it in the context of generating a new > requirements.txt that is used to to the *actual* build). > > However, I now think Robert's right that that's the wrong way to look > at it - I am *not* a suitable audience for the defaults, since we can > adapt our automation pipeline to whatever combination of settings pip > tells us we need to get the behaviour we want (as long as we're given > suitable deprecation periods to adjust to any behavioural changes, and > we can largely control that ourselves by controlling when we upgrade > pip, including patching it if absolutely necessary). > > By contrast, for folks that *aren't* using something like VersionEye > or requires.io to stay on top of security updates, "always run the > latest version of everything, and try to keep up with that upgrade > treadmill" really is the safest way to go, and that's what the current > eager upgrade behaviour provides. > > Cheers, > Nick.
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 26 05:40:18 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 26 Jun 2016 11:40:18 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: @Pradyun thanks a lot for trying to get some movement in this issue again! On Sun, Jun 26, 2016 at 8:27 AM, Pradyun Gedam wrote: > I think it's useful to see what other tools and package managers do. Doing > something like them because they do it is not a good reason. Doing it > because it's better UX is a good reason. > > I like what git does, printing a (possibly long) message in a version > cycle to warn users that the behavior would be changing in a future version > and how they can/should act on it. It has a clean transition period that > allows users to make educated decisions. > > I, personally, am more in the favor of providing a `--upgrade-strategy > (eager/non-eager)` flag (currently defaulting to eager) with `--upgrade` > with the the warning message printing as above followed by a switch to > non-eager. > > These two combined is IMO the best way to do a transition to non-eager > upgrades by default. > > That said, a change to non-eager upgrades be could potentially cause a > security vulnerability because a security package is kept at an older > version. This asks if we should default to non-eager upgrades. This > definitely something that should be addressed first, before we even talk > about the transition. I'm by no means in a position to make an proper > response and decision on this but someone else on this thread probably is. 
> This was addressed, many times over. On the main issue [1], on the pypa-dev mailing list [2], on this list [3]. The decision that this is going to happen is even documented in the pip docs [4]. The PR was explicitly asked for after all that discussion [5], and was already submitted last year with all concrete review comments addressed. There's also a reason that issue [1] is one of the most "+1"-ed issues I've come across on GitHub - the current upgrade behavior is absolutely horrible (see [3] for why). > On Sun, 26 Jun 2016 at 10:59 Nick Coghlan wrote: > >> On 25 June 2016 at 21:59, Robert Collins >> wrote: >> > Lastly, by defaulting to non-recursive upgrades we're putting the >> > burden on our users to identify sensitive components and manage them >> > much more carefully. >> > That's why there's also a plan to add an update-all command (see [1]). And there's still "update --recursive" as well. > Huh, true. I was looking at this proposal from the point of view of >> container build processes where the system packages are visible from >> the venv, and there the "only install what I tell you, and only >> upgrade what you absolutely have to" behaviour is useful (especially >> since you're mainly doing it in the context of generating a new >> requirements.txt that is used to to the *actual* build). >> > That's not the main reason it's useful. Just wanting no unexpected upgrades and no failing upgrades of working packages which contain compiled code when you do a simple "pip install -U smallpurepythonpackage" are much more important. See [3] for a more eloquent explanation.
Ralf [1] https://github.com/pypa/pip/issues/59 [2] https://groups.google.com/forum/#!searchin/pypa-dev/upgrade/pypa-dev/vVLmo1PevTg/oBkHCPBLb9YJ [3] http://article.gmane.org/gmane.comp.python.distutils.devel/24218 [4] https://pip.pypa.io/en/stable/user_guide/#only-if-needed-recursive-upgrade [5] http://thread.gmane.org/gmane.comp.python.scientific.user/36377 [6] https://github.com/pypa/pip/pull/3194 > However, I now think Robert's right that that's the wrong way to look >> at it - I am *not* a suitable audience for the defaults, since we can >> adapt our automation pipeline to whatever combination of settings pip >> tells us we need to get the behaviour we want (as long as we're given >> suitable deprecation periods to adjust to any behavioural changes, and >> we can largely control that ourselves by controlling when we upgrade >> pip, including patching it if absolutely necessary). >> >> By contrast, for folks that *aren't* using something like VersionEye >> or requires.io to stay on top of security updates, "always run the >> latest version of everything, and try to keep up with that upgrade >> treadmill" really is the safest way to go, and that's what the current >> eager upgrade behaviour provides. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheus235 at gmail.com Sat Jun 25 19:10:12 2016 From: prometheus235 at gmail.com (Nick Timkovich) Date: Sat, 25 Jun 2016 18:10:12 -0500 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: Jaded dev-ops person in me says that "pip install requests" in production is crazy anyways; thou shalt pin versions. If pip install with an implicit "--upgrade" could ever break something, you're just trading six for a half dozen because you're not guaranteed to be at a consistent state either way. I assume "pip install django==1.9.7" or ...django>=1.9,<1.10" (and --requirements) would still work as expected? 
In the latter case the implicit upgrade might even be kinda handy. On Sat, Jun 25, 2016 at 5:40 PM, Nathaniel Smith wrote: > On Sat, Jun 25, 2016 at 2:31 PM, Nick Coghlan wrote: > > Thanks for putting that together. The assertion in the write-up that > > the proposed behaviour matches that of operating system level package > > managers doesn't sound right to me: > > > > $ sudo dnf install -q python > > Package python-2.7.11-5.fc24.x86_64 is already installed, skipping. > > > > (My system actually has a Python update pending, so the "-q" option is > > suppressing the output telling me about that, but either way, it > > doesn't make any local changes unless I use the update or upgrade > > subcommand or supply the "--best" upgrade strategy option to the > > install command) > > Right -- this is partly my error, because I missed this earlier in the > discussion (didn't yum use to at least offer an interactive prompt to > upgrade or something instead? I guess it doesn't matter). > > > As far as I am aware, apt-get install behaves the same way - if you > > only give a package name, and that package is already installed on the > > system, it won't do anything, even if a newer version of that > > component is available, and you need to use the "upgrade" subcommand > > instead to say "upgrade to the latest available version". > > No, FWIW, apt-get acts like the proposed pip install. From the apt-get > man page (my emphasis): > > install > install is followed by one or more packages desired for > installation *or upgrading* > [...] > This is also the target to use if you want to upgrade one or > more > already-installed packages without upgrading every package you > have > on your system. Unlike the "upgrade" target, which installs the > newest version of all currently installed packages, "install" > will > install the newest version of only the package(s) specified. 
> Simply > provide the name of the package(s) you wish to upgrade, and if a > newer version is available, it (and its dependencies, as > described > above) will be downloaded and installed. > > Personally I think this is better UX than the dnf approach. Let's take > a Bayesian approach :-). What people really want is software that > reads their mind and does what they mean. We can't read the user's > mind, but if they type 'pip install foo' and 'foo' is already > installed, then their mental state must have somehow prompted them to > think that this was a good thing to do, so we can make some inferences > about what they must be thinking. I think there are two main mental > states that might have led them to type this strange thing: > > - they didn't know 'foo' was installed, so they expected 'pip install > foo' to install it from scratch, and leave them with the latest > version. > > - they knew 'foo' was installed, and they (mistakenly or not) believed > that 'pip install' acts like 'apt-get install' and this is the way to > upgrade to the latest version. (Maybe they believe this because they > are Ubuntu users, maybe because 'pip install' is the only command they > know and they are making a guess, whatever), > > Either way, having 'pip install' go ahead and do the upgrade gives the > user what they were expecting. > > It also reduces the state space: if 'pip install foo' has the > post-condition that it always leaves you with the latest version, then > that's much simpler to reason about then the version where it might > leave you with the latest version or it might not depending on what > other commands you might have run a year ago. And you don't have to > remember that if you really want to make sure you have the latest > version regardless of your starting state then that's 'pip install foo > || pip upgrade foo', or some other special incantation. 
And of course > if you write the wrong thing it will work fine on your machine, but > then when your users try following your tutorial then it goes wrong... > > Of course, there is a third mental state that the user might have been > in: that they didn't know whether 'foo' is installed, and they wanted > to guarantee that some version of 'foo' is installed, but they > genuinely didn't care what version that is, *and* they'd prefer to > keep an old version rather than upgrade. That's a fairly odd and > complicated mental state to be in, but I guess it does come up > sometimes (like in Ian's use case of writing automated sysadmin > scripts). I don't have any objection to making those semantics > *available*, but I think it's a bad idea to attach them to 'pip > install', since that's the obvious thing that new and non-expert users > will reach for, and new and non-expert users by definition do not have > that kind of complicated mental state. But it would make sense to me > to provide this functionality under an opt-in command specifically for > experts like 'pip require foo', or 'pip install --prefer-current foo'. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Sun Jun 26 07:11:16 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 26 Jun 2016 12:11:16 +0100 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On 25 June 2016 at 23:40, Nathaniel Smith wrote: > Of course, there is a third mental state that the user might have been > in: that they didn't know whether 'foo' is installed, and they wanted > to guarantee that some version of 'foo' is installed, but they > genuinely didn't care what version that is, *and* they'd prefer to > keep an old version rather than upgrade. That's a fairly odd and > complicated mental state to be in, but I guess it does come up > sometimes (like in Ian's use case of writing automated sysadmin > scripts). It's not *that* strange a mental state. Windows users often have issues installing packages, either because they don't have a compiler, or because dependencies are hard to get right. So "don't do any non-essential install steps in case they go wrong" is an entirely reasonable viewpoint. And then, "pip install foo" meaning "install a copy if it's not there, otherwise leave me with my working version" seems to me to be a perfectly sensible expectation. Actually, that's more general than just windows. Wanting to have foo available, but not wanting to risk the possibility of a failed install for *whatever* reason, seems reasonable. Maybe it's just that Windows users are more used to installs failing (before wheels became common)? Paul From donald at stufft.io Sun Jun 26 13:26:56 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 26 Jun 2016 13:26:56 -0400 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: > On Jun 25, 2016, at 7:10 PM, Nick Timkovich wrote: > > I assume "pip install django==1.9.7" or ...django>=1.9,<1.10" (and --requirements) would still work as expected? 
In the latter case the implicit upgrade might even be kinda handy. Assuming you expect the first to install Django 1.9.7 and the second to install the latest in the 1.9 series, then yes. -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jun 26 13:32:47 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 26 Jun 2016 13:32:47 -0400 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> > On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: > > There is currently a proposal to change the behaviour to pip install to upgrade a package that is passed even if it is already installed. > > This behaviour change is accompanied with a change in the upgrade strategy - pip would stop 'eagerly' upgrading dependencies and would become more conservative, upgrading a dependency only when it doesn't meet lower constraints of the newer version of a parent. Moreover, the behaviour of pip install --target would also be changed so that --upgrade no longer affects it. > I think bundling these two changes (and I think I might have been the one that originally suggested it) is making this discussion harder than it needs to be as folks are having to fight on multiple different fronts at once. I think the change to the default behavior of pip install is dependent on the change to --upgrade, so I suggest we focus on the change to --upgrade first, changing from a 'recursive' to a 'conservative' strategy. Once we get that change figured out and landed then we can worry about what to do with pip install. I'm not going to repeat the entire post, but I just made a fairly lengthy comment at https://github.com/pypa/pip/issues/3786#issuecomment-228611906 but to try and boil it down to a few points: * ``pip install --upgrade`` is not a good security mechanism, relying on it is inconsistent at best.
If we want to support trying to keep people on secure versions of software we need a better mechanism than this anyways, so we shouldn't let it influence our choice here. * For the general case, it's not going to matter a lot which way we go, but not upgrading has the greatest chance of not breaking *already installed software*. * For the hard-to-upgrade case, the current behavior is so bad that people are outright attempting to subvert the way pip typically behaves, *AND* advocating for others to do the same, in an attempt to escape that behavior. I think that this is not a good place to be in. -- Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sun Jun 26 14:59:17 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 26 Jun 2016 11:59:17 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: Hi, On Sun, Jun 26, 2016 at 10:32 AM, Donald Stufft wrote: > > On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: > > There is currently a proposal to change the behaviour to pip install to > upgrade a package that is passed even if it is already installed. > > This behaviour change is accompanied with a change in the upgrade strategy - > pip would stop 'eagerly' upgrading dependencies and would become more > conservative, upgrading a dependency only when it doesn't meet lower > constraints of the newer version of a parent. Moreover, the behaviour of pip > install --target would also be changed so that --upgrade no longer affects > it. > > > I think bundling these two changes (and I think I might have been the one > that originally suggested it) is making this discussion harder than it needs
I > think the change to the default behavior of pip install is dependent on the > change to --upgrade, so I suggest we focus on the change to --upgrade first, > changing from a 'recursive' to a 'conservative' strategy. Once we get that > change figured out and landed then we can worry about what to do with pip > install. > > I'm not going to repeat the entire post, but I just made a fairly lengthy > comment at https://github.com/pypa/pip/issues/3786#issuecomment-228611906 > but to try and boil it down to a few points: > > * ``pip install --upgrade`` is not a good security mechanism, relying on it > is inconsistent at best. If we want to support trying to keep people on > secure versions of software we need a better mechanism than this anyways, so > we shouldn't let it influence our choice here. > * For the general case, it's not going to matter a lot which way we go, but > not upgrading has the greatest chance of not breaking *already installed > software*. > * For the hard-to-upgrade case, the current behavior is so bad that people > are outright attempting to subvert the way pip typically behaves, *AND* > advocating for others to do the same, in an attempt to escape that > behavior. I think that this is not a good place to be in. I wonder whether it is worth going back to the proposal [1] to add pip upgrade To anyone who hasn't read [1], this would have the behavior proposed (always upgrades named packages, does not do recursive upgrade). Meanwhile `pip install` stays as is, but deprecates the `--upgrade` flag in favor of the new command. The cost of the new command, which duplicates some behavior of `install`, seems rather small, and we could always deprecate it later, once people had got used to the new behavior.
Cheers, Matthew [1] https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73#add-a-pip-upgrade-command From njs at pobox.com Sun Jun 26 15:03:04 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 26 Jun 2016 12:03:04 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: On Sat, Jun 25, 2016 at 10:29 PM, Nick Coghlan wrote: [...] > By contrast, for folks that *aren't* using something like VersionEye > or requires.io to stay on top of security updates, "always run the > latest version of everything, and try to keep up with that upgrade > treadmill" really is the safest way to go, and that's what the current > eager upgrade behaviour provides. It's really not, though :-(. I am *incredibly* sympathetic to the idea that we should be doing whatever we can to nudge users into keeping up to date. If there was a button I could push that would enable Android-style updates (= "hey the elves upgraded everything while you were sleeping, hope you like it") by default, then I would push that button (as long as there was an option to opt-out). In numpy-land we have a really damaging feedback loop where users don't upgrade numpy, so downstream packages insist on supporting old numpy's b/c users have them, so downstream packages insist on working around numpy limitations instead of fixing them because fixes will only be in new versions, and then the teetering pile of workarounds further rusts-over numpy's brokenness, which makes it more likely that changes break things, so users don't want to upgrade, ... it's bad.
But, given pip and its context, the right way to do this is: - make explicit upgrades like 'pip install -U foo' non-recursive - provide a 'pip upgrade-all' command (under whatever spelling) - provide messaging and hints to encourage people to use it ("pip install foo" -> "okay done, and fyi you have 12 out-of-date packages, you should run pip upgrade-all") The advantage of this is that it puts the user in control. When I want to install or upgrade a specific package, then I can do that. When I want to upgrade everything, I can do that. Everything is predictable, and does what it says on the tin. Each command addresses one specific problem that users understand. Pip is my friend who works with me to help me accomplish my goals. The current 'pip install -U' is none of these things. I say "I want to upgrade foo", and then pip looks at that like "ah-HAH I really want to upgrade all the things, it's for your own good, and you just gave me permission to do that, or at least you gave me permission to do something *like* that, close enough that I can pretend, so I'm just going to go ahead and do the most that I think I can get away with, don't worry, you'll totally appreciate this someday, and anyway, I'm just doing what you told me to do (kinda)". This is, like... just rude and disrespectful. It takes away my agency as a user, with a bit of gaslighting on top. Obviously the context is totally different, I'm not going to take this next analogy any further, but notice that this is literally the same basic interactional pattern as men who are like "oh that woman nodded at me while passing in the subway aisle, I'm going to assume that that means she wants to have a long conversation with me for the rest of the ride and nothing will convince me otherwise, I'm a really awesome guy, she'll see that eventually, and anyway, she totally asked me for it". Yes, sure, pip upgrading packages is for my own good, but users hate being condescended to by computers. 
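The split Nathaniel sketches above could look like this at the command line (a hypothetical UX sketch -- `upgrade-all` is not a real pip command, and the hint text is invented for illustration):

```shell
# Explicit, non-recursive upgrade of one named package:
pip install -U foo      # would upgrade foo only; dependencies left alone

# Upgrading everything becomes its own explicit command:
pip upgrade-all         # hypothetical spelling

# A plain install nudges instead of acting on the user's behalf;
# the message below is an invented example of such a hint:
pip install foo
# "Installed foo-1.2. Note: 12 packages are out of date; run 'pip upgrade-all'."
```

Each command then does exactly one thing, and the eager behaviour is something the user opts into rather than something pip infers.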
And as a user I can't predict what's going to happen ("I asked for a new version of Pyramid and it's upgrading setuptools?"), it's not what I asked for, and compared to a real 'upgrade-all' command the end result is *still* a haphazard mix of up-to-date and non-up-to-date packages, so even in the best case it's a lousy piece of social engineering that doesn't accomplish the stated goal. And then it pisses users off so much that they implement elaborate workarounds to take control back from pip: http://article.gmane.org/gmane.comp.python.distutils.devel/24218 and the basic relationship between users and pip becomes adversarial rather than cooperative. tl;dr: +100 on finding ways to keep users up to date on package versions, but having recursive upgrades by default is an ineffective mechanism that causes lots of collateral damage, we should find a different mechanism that works better and doesn't make users hate us. -n -- Nathaniel J. Smith -- https://vorpus.org From matthew.brett at gmail.com Sun Jun 26 15:11:51 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 26 Jun 2016 12:11:51 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Sun, Jun 26, 2016 at 11:59 AM, Matthew Brett wrote: > Hi, > > On Sun, Jun 26, 2016 at 10:32 AM, Donald Stufft wrote: >> >> On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: >> >> There is currently a proposal to change the behaviour of pip install to >> upgrade a package that is passed even if it is already installed. >> >> This behaviour change is accompanied with a change in the upgrade strategy - >> pip would stop "eagerly" upgrading dependencies and would become more >> conservative, upgrading a dependency only when it doesn't meet lower >> constraints of the newer version of a parent.
Moreover, the behaviour of pip >> install --target would also be changed so that --upgrade no longer affects >> it. >> >> >> I think bundling these two changes (and I think I might have been the one >> that originally suggested it) is making this discussion harder than it needs >> to be as folks are having to fight on multiple different fronts at once. I >> think the change to the default behavior of pip install is dependent on the >> change to --upgrade, so I suggest we focus on the change to --upgrade first, >> changing from a "recursive" to a "conservative" strategy. Once we get that >> change figured out and landed then we can worry about what to do with pip >> install. >> >> I'm not going to repeat the entire post, but I just made a fairly lengthy >> comment at https://github.com/pypa/pip/issues/3786#issuecomment-228611906 >> but to try and boil it down to a few points: >> >> * ``pip install --upgrade`` is not a good security mechanism, relying on it >> is inconsistent at best. If we want to support trying to keep people on >> secure versions of software we need a better mechanism than this anyways, so >> we shouldn't let it influence our choice here. >> * For the general case, it's not going to matter a lot which way we go, but >> not upgrading has the greatest chance of not breaking *already installed >> software*. >> * For the hard-to-upgrade case, the current behavior is so bad that people >> are outright attempting to subvert the way pip typically behaves, *AND* >> advocating for others to do the same, in an attempt to escape that >> behavior. I think that this is not a good place to be in. > > I wonder whether it is worth going back to the proposal [1] to add > > pip upgrade > > To anyone who hasn't read [1], this would have the behavior proposed > (always upgrades named packages, does not do recursive upgrade). > Meanwhile `pip install` stays as is, but deprecates the `--upgrade` > flag in favor of the new command.
> > The cost of the new command, which duplicates some behavior of > `install`, seems rather small - and we could always deprecate it > later, once people had got used to the new behavior. And for the specific case of `pip install pkg` always upgrading, that seems like a bad idea. There are two sensible things that `pip install pkg` could do, one is checking and not upgrading (current behavior), the other is upgrading (proposed change). I think it would piss a lot of people off if we change from one sensible behavior to another without a significant degree of prior warning. Cheers, Matthew From njs at pobox.com Sun Jun 26 15:33:19 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 26 Jun 2016 12:33:19 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: Message-ID: On Sat, Jun 25, 2016 at 9:44 PM, Nick Coghlan wrote: > On 25 June 2016 at 15:40, Nathaniel Smith wrote: [...] >> - they didn't know 'foo' was installed, so they expected 'pip install >> foo' to install it from scratch, and leave them with the latest >> version. >> >> - they knew 'foo' was installed, and they (mistakenly or not) believed >> that 'pip install' acts like 'apt-get install' and this is the way to >> upgrade to the latest version. (Maybe they believe this because they >> are Ubuntu users, maybe because 'pip install' is the only command they >> know and they are making a guess, whatever), > > No, it's an idempotent assertion about the system state: "Make sure X > is available, installing it if necessary". > > Maybe Debian folks are used to system packages stripping out the pip > metadata, so pip has no idea what's installed, even if the system > site-packages is configured to be visible in the venv? > > Python is not Linux, and it definitely isn't just Debian, so "apt does > it that way" is not a good argument for changing behaviour that isn't > broken. Okay, I know how to put this more succinctly :-).
I think I remember you saying at PyCon how we need to move to a model where our defaults are optimized for non-experts, with the expert stuff available but non-default? My argument isn't "Debian does it this way so we should do", my argument is "if you know what 'idempotent' means then you're an expert". -n -- Nathaniel J. Smith -- https://vorpus.org From chris.jerdonek at gmail.com Sun Jun 26 16:36:18 2016 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 26 Jun 2016 13:36:18 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <24596AF3-C0FB-403C-B626-5886EBF11DA1@stufft.io> Message-ID: On Sat, Jun 25, 2016 at 10:41 AM, Daniel Holth wrote: > Are you suggesting that my current vagrant provisioning script "ensure x is installed": > > ensure_x.sh: > #!/bin/sh > pip install x > > Which IIUC does not currently check the network if x is already installed, is no longer idempotent, and will permanently brick my development environment as soon as x is upgraded on pypi? Do I have to include logic for checking the current version of pip, and then decide how to upgrade? Do I just pin all dependencies always? I would say yes, you should always pin dependencies for environment setup and provisioning to known versions (including dev environments). By the way, the "bricking" argument would still apply for the first time you install something if you don't pin. --Chris > I just spent 3 weeks fixing a nodejs deployment that had been upgraded when I was not ready, and I would rather not have to do the same in pip. > > Why don't we just implement something like pip install foobar@latest if you want the upgrade? > > pip upsert? > > The idea of always upgrading when you specify a concrete dependency, a file or a URL, is a good one.
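Chris's pinning advice applied to the `ensure_x.sh` example above might look like this (a sketch; the package name and version are invented):

```shell
#!/bin/sh
# ensure_x.sh, pinned variant: provisioning installs only versions you
# have actually tested, so a new release on PyPI cannot change the result.
# requirements.txt contains exact pins, e.g.:
#   x==1.4.2
pip install -r requirements.txt
```

Re-running it gives the same environment regardless of which upgrade strategy pip defaults to, because pip never has to choose a version.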
> > On Sat, Jun 25, 2016 at 1:17 PM Donald Stufft wrote: >> >> >> > On Jun 25, 2016, at 12:31 PM, Ian Cordasco wrote: >> > >> > On Sat, Jun 25, 2016 at 5:25 AM, Pradyun Gedam wrote: >> >> Hello List! >> >> >> >> There is currently a proposal to change the behaviour of pip install to >> >> upgrade a package that is passed even if it is already installed. >> >> >> >> This behaviour change is accompanied with a change in the upgrade strategy - >> >> pip would stop "eagerly" upgrading dependencies and would become more >> >> conservative, upgrading a dependency only when it doesn't meet lower >> >> constraints of the newer version of a parent. Moreover, the behaviour of pip >> >> install --target would also be changed so that --upgrade no longer affects >> >> it. >> >> >> >> A PEP-style write-up >> >> (https://gist.github.com/pradyunsg/4c9db6a212239fee69b429c96cdc3d73) >> >> documents the reasoning behind this proposed change, what the change is and >> >> some alternate proposals that were rejected in the discussions. >> >> >> >> This is a request for comments on the pull-request implementing this >> >> proposal (https://github.com/pypa/pip/pull/3806) for your views, thoughts >> >> and possible concerns related to it. >> > >> > Having `pip install` implicitly upgrade a package will break the way >> > tooling like ansible, puppet, salt, and chef all work. Most expect >> > that if you do `pip install requests` twice you won't get two >> > different versions. At the very least, there should be an option added >> > that implies that if the version of the specified package is already >> > installed and satisfactory, that it is not upgraded. >> >> So this may be true, but it's also not particularly hard to work around >> similarly to what they do already for system package managers, they can >> do ``pip list`` or ``pip freeze`` to see if something is already installed.
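Checking locally, as Donald suggests, doesn't even require shelling out to ``pip list`` -- a minimal sketch using `pkg_resources` (shipped with setuptools), which reads installed metadata from disk without touching the network:

```python
import pkg_resources

def is_satisfied(requirement):
    """Return True if an already-installed distribution satisfies the
    requirement string (e.g. 'requests' or 'requests>=2.0')."""
    try:
        pkg_resources.require(requirement)
        return True
    except (pkg_resources.DistributionNotFound,
            pkg_resources.VersionConflict):
        return False
```

A tool like ansible or chef could call something like this first and skip invoking pip entirely when the requirement is already met.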
>> Not only will this work across versions, but it'll also work *better* >> because it won't involve hitting the network to determine this information. >> >> > >> > That said, this will also break those tools' understanding of how `pip >> > install --upgrade` works if instead `-U/--upgrade` is kept. People who >> > are automating deployments using pip (perhaps in virtualenvs or >> > however else, it really doesn't matter) will now be forced to >> > periodically increase their lower bounds or list out *every* >> > dependency to ensure they upgrade if that's what they want rather than >> > being able to rely on pip to do that. They'll now have to specify >> > their requirements list in more than one place, and this will not be >> > something that's an improvement to the tooling in the community. Will >> > it make some things more explicit? Maybe. Will it cause people trying >> > to deploy software to waste hours of their time? Definitely. >> > >> > If you want to change the upgrade behaviour of pip, I suggest not >> > modifying the existing `pip install` and instead deprecating that in >> > favor of a new command. Silently pulling the rug out from under people >> > seems like an incredibly bad idea. >> > >> >> This is going to break things, no matter what we do - and the current >> behavior is also broken in ways that are causing other projects to need >> to do broken things. >> >> The current solution I think breaks *less* things and once the initial >> pain of breakage is over will end up with a much nicer situation. Importantly >> the vast bulk of the breakage will be centralized in things like Chef where >> one group can make a change and fix it for thousands. >> >> -
>> Donald Stufft >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From dholth at gmail.com Sun Jun 26 16:45:52 2016 From: dholth at gmail.com (Daniel Holth) Date: Sun, 26 Jun 2016 20:45:52 +0000 Subject: [Distutils] enscons, a prototype SCons-powered wheel & sdist builder Message-ID: I've been working on a prototype Python packaging system powered by SCons called enscons. https://bitbucket.org/dholth/enscons . It is designed to be an easier way to experiment with packaging compared to hacking on distutils or setuptools which are notoriously difficult to extend. Now it is at the very earliest state where it might be interesting to others who are less familiar with the details of pip and Python package formats. Enscons is able to create wheels and source distributions that can be installed by pip. Source distributions built with enscons use pyproject.toml from PEP 518 ( https://www.python.org/dev/peps/pep-0518/ ) to install enscons before setup.py is run, and since pip does not yet implement PEP 518, include a setup.py shim for PEP 518. The same pyproject.toml includes most of the arguments normally passed to setuptools.setup(), which are used to create the metadata files needed by pip. Enscons converts pip's setup.py arguments to SCons ones and then invokes SCons, and the project's SConstruct does the rest. For a normal from-pypi installation, enscons generates just enough egg_info to let pip build a wheel, then pip builds and installs the wheel which has more complete metadata. Of course SCons is more interesting for more complicated projects. I've been working on building my pysdl2-cffi binding with SCons. 
It includes a custom Python code generation step that distutils knew nothing about. It's practically a one-liner to add my custom build step to SCons, and it knows how to rebuild each part of the project only when its dependencies have changed. I don't know how to do that with setuptools. It is still a very early prototype. Don't expect it to be very good. Here are some of its limitations: - There is no error checking. - The sdists can only be built inside virtualenvs; otherwise pip complains that --user and --target cannot be used together. - It also doesn't work if one of the pyproject.toml build requirements conflicts with something that's already installed. - 'pip install .' doesn't work; it still tries to invoke 'setup.py install', which is not implemented. - 'setup.py develop' is not implemented. Try PYTHONPATH. - I am not an experienced SCons user. It also relies on what I'm sure will be a short-lived repackaging of SCons called 'import_scons' that makes 'python -m SCons' work, and includes a Python 3 version of SCons from github.com/timj/scons that seems to work well enough for my purposes. On the plus side, it's short. If you're interested in implementing Python packaging in SCons, or are interested in the straightforward but tedious task of copying just enough distutils trickery to implement a robust Python C extension compiler in SCons, then perhaps you should check out enscons. 
Here is a SConstruct for enscons itself:

import pytoml as toml
import enscons

metadata = dict(toml.load(open('pyproject.toml')))['tool']['enscons']

env = Environment(tools=['default', 'packaging', enscons.generate],
                  PACKAGE_METADATA=metadata,
                  WHEEL_TAG='py2.py3-none-any',
                  ROOT_IS_PURELIB=True)

py_source = Glob('enscons/*.py')

sdist = env.Package(
    NAME=env['PACKAGE_NAME'],
    VERSION=env['PACKAGE_METADATA']['version'],
    PACKAGETYPE='src_zip',
    target=['dist/' + env['PACKAGE_NAME'] + '-' + env['PACKAGE_VERSION']],
    source=FindSourceFiles() + ['PKG-INFO', 'setup.py'],
)
env.NoClean(sdist)
env.Alias('sdist', sdist)

env.Whl('purelib', py_source, root='.')

Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 26 17:40:48 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 26 Jun 2016 23:40:48 +0200 Subject: [Distutils] enscons, a prototype SCons-powered wheel & sdist builder In-Reply-To: References: Message-ID: On Sun, Jun 26, 2016 at 10:45 PM, Daniel Holth wrote: > I've been working on a prototype Python packaging system powered by SCons > called enscons. https://bitbucket.org/dholth/enscons . It is designed to > be an easier way to experiment with packaging compared to hacking on > distutils or setuptools which are notoriously difficult to extend. Now it > is at the very earliest state where it might be interesting to others who > are less familiar with the details of pip and Python package formats. > Interesting, thanks Daniel. This does immediately bring back memories of the now deceased Numscons: https://github.com/cournape/numscons. David Cournapeau wrote quite a bit about it on his blog: https://cournape.wordpress.com/?s=numscons Are you aware of it? It was able to build numpy and scipy, so maybe there's still something worth stealing from it (it's 6 years old by now though). Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dholth at gmail.com Sun Jun 26 18:50:33 2016 From: dholth at gmail.com (Daniel Holth) Date: Sun, 26 Jun 2016 22:50:33 +0000 Subject: [Distutils] enscons, a prototype SCons-powered wheel & sdist builder In-Reply-To: References: Message-ID: I hadn't seen it. It looks very thorough. One difference is that mine is not a distutils extension. It might have some good tricks. On Sun, Jun 26, 2016, 17:40 Ralf Gommers wrote: > On Sun, Jun 26, 2016 at 10:45 PM, Daniel Holth wrote: > >> I've been working on a prototype Python packaging system powered by SCons >> called enscons. https://bitbucket.org/dholth/enscons . It is designed to >> be an easier way to experiment with packaging compared to hacking on >> distutils or setuptools which are notoriously difficult to extend. Now it >> is at the very earliest state where it might be interesting to others >> who are less familiar with the details of pip and Python package formats. >> > > Interesting, thanks Daniel. > > This does immediately bring back memories of the now deceased Numscons: > https://github.com/cournape/numscons. David Cournapeau wrote quite a bit > about it on his blog: https://cournape.wordpress.com/?s=numscons > Are you aware of it? It was able to build numpy and scipy, so maybe > there's still something worth stealing from it (it's 6 years old by now > though). > > Cheers, > Ralf > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradyunsg at gmail.com Mon Jun 27 02:36:28 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Mon, 27 Jun 2016 06:36:28 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: Hello List! I feel it's fine to hold back the other changes for later but the upgrade-strategy change should get shipped out to the world as quickly as possible. Even how the change is exposed to the user can also be discussed later.
I request the list members to focus on *only* the change of the default upgrade strategy to be non-eager. Does anyone have any concerns regarding the change of the default upgrade strategy to be non-eager? If not, let's get *just* that shipped out as soon as possible. Cheers, Pradyun Gedam On Mon, 27 Jun 2016 at 12:02 Pradyun Gedam pradyunsg at gmail.com wrote: On Sun, 26 Jun 2016 at 23:02 Donald Stufft wrote: > >> >> On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: >> >> There is currently a proposal to change the behaviour of pip install to >> upgrade a package that is passed even if it is already installed. >> >> This behaviour change is accompanied with a change in the upgrade >> strategy - pip would stop "eagerly" upgrading dependencies and would become >> more conservative, upgrading a dependency only when it doesn't meet lower >> constraints of the newer version of a parent. Moreover, the behaviour of pip >> install --target would also be changed so that --upgrade no longer >> affects it. >> >> I think bundling these two changes (and I think I might have been the one >> that originally suggested it) is making this discussion harder than it >> needs to be as folks are having to fight on multiple different fronts at >> once. I think the change to the default behavior of pip install is >> dependent on the change to --upgrade, so I suggest we focus on the change to >> --upgrade first, changing from a "recursive" to a "conservative" strategy. >> Once we get that change figured out and landed then we can worry about what >> to do with pip install. >> > > You were. In fact, the majority swayed in favour of changing the behaviour > of pip install post one of your comments on Github. > > I'll be happier *only* seeing in change the behaviour of --upgrade and not > --target or pip install. It reduces the number of things that changes from > 3 to 1. Much easier to discuss about.
> > I'm not going to repeat the entire post, but I just made a fairly lengthy >> comment at https://github.com/pypa/pip/issues/3786#issuecomment-228611906 but >> to try and boil it down to a few points: >> > > Thanks for this. > > >> * ``pip install --upgrade`` is not a good security mechanism, relying on >> it is inconsistent at best. If we want to support trying to keep people on >> secure versions of software we need a better mechanism than this anyways, >> so we shouldn't let it influence our choice here. >> > > AFAIK, this was the only outstanding concern raised against having a > non-eager (conservative) upgrade strategy. > > * For the general case, it's not going to matter a lot which way we go, >> but not upgrading has the greatest chance of not breaking *already >> installed software*. >> > > I strongly agree with this. Another thing worth a mention is that it's > easier to get the lower bounds of your requirements correct, rather than > upper bounds. > > >> * For the hard-to-upgrade case, the current behavior is so bad that >> people are outright attempting to subvert the way pip typically behaves, >> *AND* advocating for others to do the same, in an attempt to escape that >> behavior. I think that this is not a good place to be in. >> > > Ditto. > > - >> >> Donald Stufft >> > > Happy-to-see-Donald's-response-ly, > Pradyun Gedam > - -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradyunsg at gmail.com Mon Jun 27 02:38:56 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Mon, 27 Jun 2016 06:38:56 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: I'll be happier *only* seeing in change s/*only* seeing/seeing *only*/
URL: From ralf.gommers at gmail.com Mon Jun 27 02:51:31 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 27 Jun 2016 08:51:31 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Mon, Jun 27, 2016 at 8:36 AM, Pradyun Gedam wrote: > Hello List! > > I feel it's fine to hold back the other changes for later but the > upgrade-strategy change should get shipped out to the world as quickly as > possible. Even how the change is exposed to the user can also be discussed > later. > What do you mean by "ship" if you say the behavior can still be changed later? > I request the list members to focus on *only* the change of the default > upgrade strategy to be non-eager. > > Does anyone have any concerns regarding the change of the default upgrade > strategy to be non-eager? If not, let's get *just* that shipped out as > soon as possible. > The concerns were always with how to change it, one of: (1) add "pip upgrade" (2) change behavior of "pip install --upgrade" (3) change behavior of "pip install" Your sentence above suggests you're asking for agreement on (2), but I think you want agreement on (3) right? At least that was the conclusion of your PEP-style writeup. Personally I don't have a preference anymore, as long as a choice is made so we don't remain stuck where we are now. Ralf > Cheers, > Pradyun Gedam > > On Mon, 27 Jun 2016 at 12:02 Pradyun Gedam pradyunsg at gmail.com > wrote: > > On Sun, 26 Jun 2016 at 23:02 Donald Stufft wrote: >> >>> >>> On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: >>> >>> There is currently a proposal to change the behaviour of pip install to >>> upgrade a package that is passed even if it is already installed. >>> >>> This behaviour change is accompanied with a change in the upgrade >>> strategy - pip would stop "eagerly"
upgrading dependencies and would become >>> more conservative, upgrading a dependency only when it doesn't meet lower >>> constraints of the newer version of a parent. Moreover, the behaviour of >>> pip install --target would also be changed so that --upgrade no longer >>> affects it. >>> >>> I think bundling these two changes (and I think I might have been the >>> one that originally suggested it) is making this discussion harder than it >>> needs to be as folks are having to fight on multiple different fronts at >>> once. I think the change to the default behavior of pip install is >>> dependent on the change to --upgrade, so I suggest we focus on the change to >>> --upgrade first, changing from a "recursive" to a "conservative" strategy. >>> Once we get that change figured out and landed then we can worry about what >>> to do with pip install. >>> >> >> You were. In fact, the majority swayed in favour of changing the >> behaviour of pip install post one of your comments on Github. >> >> I'll be happier *only* seeing in change the behaviour of --upgrade and >> not --target or pip install. It reduces the number of things that changes >> from 3 to 1. Much easier to discuss about. >> >> I'm not going to repeat the entire post, but I just made a fairly lengthy >>> comment at >>> https://github.com/pypa/pip/issues/3786#issuecomment-228611906 but to >>> try and boil it down to a few points: >>> >> >> Thanks for this. >> >> >>> * ``pip install --upgrade`` is not a good security mechanism, relying on >>> it is inconsistent at best. If we want to support trying to keep people on >>> secure versions of software we need a better mechanism than this anyways, >>> so we shouldn't let it influence our choice here. >>> >> >> AFAIK, this was the only outstanding concern raised against having a >> non-eager (conservative) upgrade strategy.
>> >> * For the general case, it's not going to matter a lot which way we go, >>> but not upgrading has the greatest chance of not breaking *already >>> installed software*. >>> >> >> I strongly agree with this. Another thing worth a mention is that it's >> easier to get the lower bounds of your requirements correct, rather than >> upper bounds. >> >> >>> * For the hard-to-upgrade case, the current behavior is so bad that >>> people are outright attempting to subvert the way pip typically behaves, >>> *AND* advocating for others to do the same, in an attempt to escape that >>> behavior. I think that this is not a good place to be in. >>> >> >> Ditto. >> >> - >>> >>> Donald Stufft >>> >> >> Happy-to-see-Donald's-response-ly, >> Pradyun Gedam >> > - > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradyunsg at gmail.com Mon Jun 27 02:32:09 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Mon, 27 Jun 2016 06:32:09 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Sun, 26 Jun 2016 at 23:02 Donald Stufft wrote: > > On Jun 25, 2016, at 6:25 AM, Pradyun Gedam wrote: > > There is currently a proposal to change the behaviour of pip install to > upgrade a package that is passed even if it is already installed. > > This behaviour change is accompanied with a change in the upgrade strategy > - pip would stop "eagerly" upgrading dependencies and would become more > conservative, upgrading a dependency only when it doesn't meet lower > constraints of the newer version of a parent. Moreover, the behaviour of pip > install --target would also be changed so that --upgrade no longer > affects it.
> > I think bundling these two changes (and I think I might have been the one > that originally suggested it) is making this discussion harder than it > needs to be as folks are having to fight on multiple different fronts at > once. I think the change to the default behavior of pip install is > dependent on the change to --upgrade, so I suggest we focus on the change to > --upgrade first, changing from a "recursive" to a "conservative" strategy. > Once we get that change figured out and landed then we can worry about what > to do with pip install. > You were. In fact, the majority swayed in favour of changing the behaviour of pip install post one of your comments on Github. I'll be happier *only* seeing in change the behaviour of --upgrade and not --target or pip install. It reduces the number of things that changes from 3 to 1. Much easier to discuss about. I'm not going to repeat the entire post, but I just made a fairly lengthy > comment at https://github.com/pypa/pip/issues/3786#issuecomment-228611906 but > to try and boil it down to a few points: > Thanks for this. > * ``pip install --upgrade`` is not a good security mechanism, relying on it > is inconsistent at best. If we want to support trying to keep people on > secure versions of software we need a better mechanism than this anyways, > so we shouldn't let it influence our choice here. > AFAIK, this was the only outstanding concern raised against having a non-eager (conservative) upgrade strategy. * For the general case, it's not going to matter a lot which way we go, but > not upgrading has the greatest chance of not breaking *already installed > software*. > I strongly agree with this. Another thing worth a mention is that it's easier to get the lower bounds of your requirements correct, rather than upper bounds.
> * For the hard-to-upgrade case, the current behavior is so bad that people > are outright attempting to subvert the way pip typically behaves, *AND* > advocating for others to do the same, in an attempt to escape that > behavior. I think that this is not a good place to be in. > Ditto. - > > Donald Stufft > Happy-to-see-Donald's-response-ly, Pradyun Gedam -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Jun 27 03:27:31 2016 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 27 Jun 2016 19:27:31 +1200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: I've replied in the github issue; I don't really want to carry out *two* conversations, so perhaps we can pick one forum and stick there? -Rob From pradyunsg at gmail.com Mon Jun 27 03:29:14 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Mon, 27 Jun 2016 07:29:14 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Mon, 27 Jun 2016 at 12:21 Ralf Gommers wrote: > On Mon, Jun 27, 2016 at 8:36 AM, Pradyun Gedam > wrote: > >> Hello List! >> >> I feel it's fine to hold back the other changes for later but the >> upgrade-strategy change should get shipped out to the world as quickly as >> possible. Even how the change is exposed to the user can also be discussed >> later. >> > What do you mean by "ship" if you say the behavior can still be changed > later? > Sorry for the confusion and a useless-mail! Allow me to re-phrase that statement... I think it's fine if we hold back the changes to install-as-upstall and --target for later. The upgrade-strategy change should get released to the world as quickly as possible.
If no one has any outstanding issues with switching over to non-eager upgrades by default, may we start a discussion on how the change is to be exposed to the user? > >> I request the list members to focus on *only* the change of the default >> upgrade strategy to be non-eager. >> >> Does anyone have any concerns regarding the change of the default upgrade >> strategy to be non-eager? If not, let's get *just* that shipped out as >> soon as possible. >> > The concerns were always with how to change it > There were security-related concerns raised by Robert which were addressed [1] by Donald. > one of: > (1) add "pip upgrade" > (2) change behavior of "pip install --upgrade" > (3) change behavior of "pip install" > > Your sentence above suggests you're asking for agreement on (2), but I > think you want agreement on (3) right? At least that was the conclusion of > your PEP-style writeup. > > As it stands, we currently have (3) implemented (as proposed in that write-up) since that was what was extensively discussed and decided over at GitHub. If no one has issues with it, let's go ahead with it. If anyone has issues, I'm fine with going with (2) as well. I do not like (1) due to the weird model it creates, providing two ways to do what most people would see as one thing. Basically, if we were voting: -1 for (1), +0.5 for (2), +1 for (3). Personally I don't have a preference anymore, as long as a choice is made > so we don't remain stuck where we are now. > > Ralf > [1] There's probably a better word for this than "addressed". -- Pradyun Gedam -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradyunsg at gmail.com Mon Jun 27 03:30:50 2016 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Mon, 27 Jun 2016 07:30:50 +0000 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: Indeed.
Let's move the further discussion over to https://github.com/pypa/pip/issues/3786. On Mon, 27 Jun 2016 at 12:57 Robert Collins wrote: > I've replied in the GitHub issue; I don't really want to carry out > *two* conversations, so perhaps we can pick one forum and stick there? > > -Rob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at spvi.com Mon Jun 27 07:45:08 2016 From: steve at spvi.com (Steve Spicklemire) Date: Mon, 27 Jun 2016 07:45:08 -0400 Subject: [Distutils] pip behavior Message-ID: <20B4ABA4-5DD0-4399-A4B8-6E26D60FA987@spvi.com> Hi Distutils-SIG Folks, If this is not the best place for a "pip" question, please direct me to a better destination. I have several virtual environments. As far as I know they should all be about the same WRT the python binary that was used to create them (home-brew python2.7.11/OSX). However they have different behavior WRT pip install. I'm trying to track down the cues that pip uses to choose a binary wheel, or a source/build install. I've been using "pip -v" to get some hints, but it's still not clear. I've saved two log files that illustrate the issue: https://dl.dropboxusercontent.com/u/20562746/pydev_pip_log.txt https://dl.dropboxusercontent.com/u/20562746/pytest_pip_log.txt So my question is: How can I determine why pip is choosing "wheel" in some cases and "source" in others when I have the same python build in both cases? thanks! -steve From robertc at robertcollins.net Mon Jun 27 19:57:53 2016 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 28 Jun 2016 11:57:53 +1200 Subject: [Distutils] pip behavior In-Reply-To: <20B4ABA4-5DD0-4399-A4B8-6E26D60FA987@spvi.com> References: <20B4ABA4-5DD0-4399-A4B8-6E26D60FA987@spvi.com> Message-ID: It's almost certainly the version of pip within each environment. -Rob On 27 June 2016 at 23:45, Steve Spicklemire wrote: > Hi Distutils-SIG Folks, > > If this is not the best place for a "pip"
question, please direct me to a better destination. > > I have several virtual environments. As far as I know they should all be about the same WRT the python binary that was used to create them (home-brew python2.7.11/OSX). However they have different behavior WRT pip install. I'm trying to track down the cues that pip uses to choose a binary wheel, or a source/build install. I've been using "pip -v" to get some hints, but it's still not clear. I've saved two log files that illustrate the issue: > > https://dl.dropboxusercontent.com/u/20562746/pydev_pip_log.txt > > https://dl.dropboxusercontent.com/u/20562746/pytest_pip_log.txt > > So my question is: How can I determine why pip is choosing "wheel" in some cases and "source" in others when I have the same python build in both cases? > > thanks! > -steve > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From steve at spvi.com Tue Jun 28 07:23:49 2016 From: steve at spvi.com (Steve Spicklemire) Date: Tue, 28 Jun 2016 07:23:49 -0400 Subject: [Distutils] pip behavior In-Reply-To: References: <20B4ABA4-5DD0-4399-A4B8-6E26D60FA987@spvi.com> Message-ID: <5B16230C-A066-44D3-862A-4CC1986882BE@spvi.com> Indeed! I've typed "pip install -U pip" so many times I thought they were all up to date. thanks, -steve > On Jun 27, 2016, at 7:57 PM, Robert Collins wrote: > > It's almost certainly the version of pip within each environment. > > -Rob > > On 27 June 2016 at 23:45, Steve Spicklemire wrote: >> Hi Distutils-SIG Folks, >> >> If this is not the best place for a "pip" question, please direct me to a better destination. >> >> I have several virtual environments. As far as I know they should all be about the same WRT the python binary that was used to create them (home-brew python2.7.11/OSX). However they have different behavior WRT pip install.
I'm trying to track down the cues that pip uses to choose a binary wheel, or a source/build install. I've been using "pip -v" to get some hints, but it's still not clear. I've saved two log files that illustrate the issue: >> >> https://dl.dropboxusercontent.com/u/20562746/pydev_pip_log.txt >> >> https://dl.dropboxusercontent.com/u/20562746/pytest_pip_log.txt >> >> So my question is: How can I determine why pip is choosing "wheel" in some cases and "source" in others when I have the same python build in both cases? >> >> thanks! >> -steve >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Tue Jun 28 18:16:31 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Jun 2016 15:16:31 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: > > Hello List! > > I feel it's fine to hold back the other changes for later but the > upgrade-strategy change should get shipped out to the world as quickly as > possible. Even how the change is exposed to the user can also be discussed later. > > I request the list members to focus on only the change of the default > upgrade strategy to be non-eager. > > Does anyone have any concerns regarding the change of the default upgrade > strategy to be non-eager? If not, let's get just that shipped out as soon as possible. Pairing that change with an explicit "pip upgrade-all" command would get a +1 from me, especially if there was a printed warning when the new upgrade strategy skips packages the old one would have updated. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Tue Jun 28 18:38:28 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 29 Jun 2016 00:38:28 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan wrote: > > On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: > > > > Hello List! > > > > I feel it's fine to hold back the other changes for later but the > > upgrade-strategy change should get shipped out to the world as quickly as > > possible. Even how the change is exposed to the user can also be discussed > later. > > > > I request the list members to focus on only the change of the default > > upgrade strategy to be non-eager. > > > > Does anyone have any concerns regarding the change of the default upgrade > > strategy to be non-eager? If not, let's get just that shipped out as > soon as possible. > > Pairing that change with an explicit "pip upgrade-all" command would get a > +1 from me, especially if there was a printed warning when the new upgrade > strategy skips packages the old one would have updated. > Please do not mix upgrade with upgrade-all. The latter has been blocked by the lack of a SAT solver for a long time, and at the current pace that status may not change for another couple of years. Also mixing these up is unnecessary, and it was discussed last year on this list already to move ahead with upgrade: http://article.gmane.org/gmane.comp.python.distutils.devel/24219 Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Tue Jun 28 18:39:47 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 29 Jun 2016 00:39:47 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Wed, Jun 29, 2016 at 12:38 AM, Ralf Gommers wrote: > > > > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan wrote: > >> >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: >> > >> > Hello List! >> > >> > I feel it's fine to hold back the other changes for later but the >> > upgrade-strategy change should get shipped out to the world as quickly >> as >> > possible. Even how the change is exposed to the user can also be discussed >> later. >> > >> > I request the list members to focus on only the change of the default >> > upgrade strategy to be non-eager. >> > >> > Does anyone have any concerns regarding the change of the default >> upgrade >> > strategy to be non-eager? If not, let's get just that shipped out as >> soon as possible. >> >> Pairing that change with an explicit "pip upgrade-all" command would get >> a +1 from me, especially if there was a printed warning when the new >> upgrade strategy skips packages the old one would have updated. >> > Please do not mix upgrade with upgrade-all. The latter is blocked by lack > of a SAT solver for a long time, and at the current pace that status may > not change for another couple of years. Also mixing these up is > unnecessary, and it was discussed last year on this list already to move > ahead with upgrade: > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 > And, at Robert's request, all discussion was moved to https://github.com/pypa/pip/issues/3786, so we probably should not continue this thread. Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robertc at robertcollins.net Tue Jun 28 18:45:25 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 29 Jun 2016 10:45:25 +1200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On 29 June 2016 at 10:38, Ralf Gommers wrote: > > > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan wrote: >> >> >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: >> > >> > Hello List! >> > >> > I feel it's fine to hold back the other changes for later but the >> > upgrade-strategy change should get shipped out to the world as quickly >> > as >> > possible. Even how the change is exposed to the user can also be discussed >> > later. >> > >> > I request the list members to focus on only the change of the default >> > upgrade strategy to be non-eager. >> > >> > Does anyone have any concerns regarding the change of the default >> > upgrade >> > strategy to be non-eager? If not, let's get just that shipped out as >> > soon as possible. >> >> Pairing that change with an explicit "pip upgrade-all" command would get a >> +1 from me, especially if there was a printed warning when the new upgrade >> strategy skips packages the old one would have updated. > > Please do not mix upgrade with upgrade-all. The latter is blocked by lack of > a SAT solver for a long time, and at the current pace that status may not > change for another couple of years. Also mixing these up is unnecessary, and > it was discussed last year on this list already to move ahead with upgrade: > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 I realise the consensus on the ticket is that it's blocked, but I don't actually agree. Yes, you can't do it *right* without a full resolver, but you can do an approximation that would be a lot better than nothing (just narrow the specifiers given across all requirements).
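For what it is worth, the narrowing idea can be sketched in a few lines. This is only my reading of the suggestion, with invented package names, integer versions, and lambda predicates standing in for real version specifiers; crucially it has no backtracking, which is exactly why it is an approximation rather than a real resolver:

```python
# Sketch of "narrow the specifiers across all requirements" as an
# approximate upgrade-all. All data below is invented for illustration.

AVAILABLE = {                  # package -> versions on the index, ascending
    "six": [9, 10, 11],
    "requests": [1, 2, 3],
}
# Each installed project contributes (dependency, allowed?) constraints.
REQUIREMENTS = [
    ("six", lambda v: v >= 10),    # e.g. some library needs six>=10
    ("six", lambda v: v < 11),     # e.g. another library needs six<11
    ("requests", lambda v: v >= 2),
]

def upgrade_all():
    """For every package, pick the newest version allowed by the
    intersection of all known constraints. No backtracking, so a
    conflict simply leaves a package out instead of being resolved."""
    chosen = {}
    for name, versions in AVAILABLE.items():
        constraints = [ok for dep, ok in REQUIREMENTS if dep == name]
        candidates = [v for v in versions if all(ok(v) for ok in constraints)]
        if candidates:
            chosen[name] = max(candidates)
    return chosen

print(upgrade_all())  # {'six': 10, 'requests': 3}
```

Note how `six` lands on 10 rather than the latest 11, because the narrowed specifier set (`>=10` intersected with `<11`) excludes it; that is the "better than nothing" behaviour Robert describes, without the combinatorial search a SAT-style resolver would do.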
That is actually reasonable when you're dealing with a presumed-good set of versions (which install doesn't deal with). -Rob From ralf.gommers at gmail.com Tue Jun 28 20:03:58 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 29 Jun 2016 02:03:58 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins wrote: > On 29 June 2016 at 10:38, Ralf Gommers wrote: > > > > > > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan > wrote: > >> > >> > >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: > >> > > >> > Hello List! > >> > > >> > I feel it's fine to hold back the other changes for later but the > >> > upgrade-strategy change should get shipped out to the world as quickly > >> > as > >> > possible. Even how the change is exposed to the user can also be > discussed > >> > later. > >> > > >> > I request the list members to focus on only the change of the default > >> > upgrade strategy to be non-eager. > >> > > >> > Does anyone have any concerns regarding the change of the default > >> > upgrade > >> > strategy to be non-eager? If not, let's get just that shipped out as > >> > soon as possible. > >> > >> Pairing that change with an explicit "pip upgrade-all" command would > get a > >> +1 from me, especially if there was a printed warning when the new > upgrade > >> strategy skips packages the old one would have updated. > > > > Please do not mix upgrade with upgrade-all. The latter is blocked by > lack of > > a SAT solver for a long time, and at the current pace that status may not > > change for another couple of years.
Also mixing these up is unnecessary, > and > > it was discussed last year on this list already to move ahead with > upgrade: > > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 > > I realise the consensus on the ticket is that it's blocked, but I don't > actually agree. > > Yes, you can't do it *right* without a full resolver, but you can do > an approximation that would be a lot better than nothing (just narrow > the specifiers given across all requirements). That is actually > reasonable when you're dealing with a presumed-good set of versions > (which install doesn't deal with). > Honestly, not sure how to respond. You may be right, I don't have a technical opinion on an approximate upgrade-all now. Don't really want to have one either - when N core PyPA devs have been in consensus for a couple of years and then dev N+1 comes along at the very last moment to challenge that consensus, plus make it blocking for something we agreed was unrelated, that just feels frustrating (especially because it's becoming a pattern). Mixing separate discussions/implementations up together does seem to be a good way to make the whole thing stall again though, so I'll first try repeating "this is unnecessary, please do not mix upgrade and upgrade-all". Here's an alternative for the small minority that values the current upgrade behavior: 1. add a --recursive flag to keep that behavior accessible. 2. add the printed warning that Nick suggests above. That way we can have better defaults soon (Pradyun's PR seems to be in decent shape), and add upgrade-all either when someone implements the full resolver or when there's agreement on your approximate version. Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From annaraven at gmail.com Tue Jun 28 22:35:08 2016 From: annaraven at gmail.com (Anna Ravenscroft) Date: Tue, 28 Jun 2016 19:35:08 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: I find the whole discussion quite unnerving, honestly. pip install should do just that. Notifying me if I have a version and an upgrade is available for *pip* makes sense. Doing anything *else* is scary. I expect --upgrade for upgrading things. THEN it should upgrade to the latest version, unless I use a flag to specify otherwise. Note - I'm talking from the "naive user" viewpoint. On Tue, Jun 28, 2016 at 5:03 PM, Ralf Gommers wrote: > > > On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins < > robertc at robertcollins.net> wrote: > >> On 29 June 2016 at 10:38, Ralf Gommers wrote: >> > >> > >> > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan >> wrote: >> >> >> >> >> >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: >> >> > >> >> > Hello List! >> >> > >> >> > I feel it's fine to hold back the other changes for later but the >> >> > upgrade-strategy change should get shipped out to the world as >> quickly >> >> > as >> >> > possible. Even how the change is exposed to the user can also be >> discussed >> >> > later. >> >> > >> >> > I request the list members to focus on only the change of the default >> >> > upgrade strategy to be non-eager. >> >> > >> >> > Does anyone have any concerns regarding the change of the default >> >> > upgrade >> >> > strategy to be non-eager? If not, let's get just that shipped out as >> >> > soon as possible. >> >> >> >> Pairing that change with an explicit "pip upgrade-all" command would >> get a >> >> +1 from me, especially if there was a printed warning when the new >> upgrade >> >> strategy skips packages the old one would have updated. >> > >> > Please do not mix upgrade with upgrade-all.
The latter is blocked by >> lack of >> > a SAT solver for a long time, and at the current pace that status may >> not >> > change for another couple of years. Also mixing these up is >> unnecessary, and >> > it was discussed last year on this list already to move ahead with >> upgrade: >> > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 >> >> I realise the consensus on the ticket is that it's blocked, but I don't >> actually agree. >> >> Yes, you can't do it *right* without a full resolver, but you can do >> an approximation that would be a lot better than nothing (just narrow >> the specifiers given across all requirements). That is actually >> reasonable when you're dealing with a presumed-good set of versions >> (which install doesn't deal with). >> > > Honestly, not sure how to respond. You may be right, I don't have a > technical opinion on an approximate upgrade-all now. Don't really want to > have one either - when N core PyPA devs have been in consensus for a couple > of years and then dev N+1 comes along at the very last moment to > challenge that consensus, plus make it blocking for something we agreed was > unrelated, that just feels frustrating (especially because it's becoming a > pattern). > > Mixing separate discussions/implementations up together does seem to be a > good way to make the whole thing stall again though, so I'll first try > repeating "this is unnecessary, please do not mix upgrade and upgrade-all". > Here's an alternative for the small minority that values the current > upgrade behavior: > 1. add a --recursive flag to keep that behavior accessible. > 2. add the printed warning that Nick suggests above. > That way we can have better defaults soon (Pradyun's PR seems to be in > decent shape), and add upgrade-all either when someone implements the full > resolver or when there's agreement on your approximate version.
> > Ralf > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- cordially, Anna -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Jun 29 01:46:08 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Jun 2016 22:46:08 -0700 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On 28 Jun 2016 5:04 pm, "Ralf Gommers" wrote: > > > > On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins < robertc at robertcollins.net> wrote: >> >> On 29 June 2016 at 10:38, Ralf Gommers wrote: >> > >> > >> > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan wrote: >> >> >> >> >> >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: >> >> > >> >> > Hello List! >> >> > >> >> > I feel it's fine to hold back the other changes for later but the >> >> > upgrade-strategy change should get shipped out to the world as quickly >> >> > as >> >> > possible. Even how the change is exposed to the user can also be discussed >> >> > later. >> >> > >> >> > I request the list members to focus on only the change of the default >> >> > upgrade strategy to be non-eager. >> >> > >> >> > Does anyone have any concerns regarding the change of the default >> >> > upgrade >> >> > strategy to be non-eager? If not, let's get just that shipped out as >> >> > soon as possible. >> >> >> >> Pairing that change with an explicit "pip upgrade-all" command would get a >> >> +1 from me, especially if there was a printed warning when the new upgrade >> >> strategy skips packages the old one would have updated. >> > >> > Please do not mix upgrade with upgrade-all. The latter is blocked by lack of >> > a SAT solver for a long time, and at the current pace that status may not >> > change for another couple of years.
Also mixing these up is unnecessary, and >> > it was discussed last year on this list already to move ahead with upgrade: >> > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 >> >> I realise the consensus on the ticket is that it's blocked, but I don't >> actually agree. >> >> Yes, you can't do it *right* without a full resolver, but you can do >> an approximation that would be a lot better than nothing (just narrow >> the specifiers given across all requirements). That is actually >> reasonable when you're dealing with a presumed-good set of versions >> (which install doesn't deal with). > > > Honestly, not sure how to respond. You may be right, I don't have a technical opinion on an approximate upgrade-all now. Don't really want to have one either - when N core PyPA devs have been in consensus for a couple of years and then dev N+1 comes along at the very last moment to challenge that consensus, plus make it blocking for something we agreed was unrelated, that just feels frustrating (especially because it's becoming a pattern). "yum upgrade" has worked well enough for years without a proper SAT solver, and the package set in a typical Linux install is much larger than that in a typical virtual environment (although distro curation does reduce the likelihood of conflicting requirements arising in the first place). That said, rerunning pip-compile and then doing a pip-sync is already a functional equivalent of an upgrade-all operation (as is destroying and recreating a venv), so I agree there's no need to couple the question of supporting bulk upgrades in baseline pip with changing the behaviour of upgrading named components. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Wed Jun 29 03:15:19 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 29 Jun 2016 09:15:19 +0200 Subject: [Distutils] Request for comment: Proposal to change behaviour of pip install In-Reply-To: References: <182C763B-BFB6-4711-AF99-AD7718D9B171@stufft.io> Message-ID: On Wed, Jun 29, 2016 at 7:46 AM, Nick Coghlan wrote: > > On 28 Jun 2016 5:04 pm, "Ralf Gommers" wrote: > > > > > > > > On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins < > robertc at robertcollins.net> wrote: > >> > >> On 29 June 2016 at 10:38, Ralf Gommers wrote: > >> > > >> > > >> > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan > wrote: > >> >> > >> >> > >> >> On 26 Jun 2016 23:37, "Pradyun Gedam" wrote: > >> >> > > >> >> > Hello List! > >> >> > > >> >> > I feel it's fine to hold back the other changes for later but the > >> >> > upgrade-strategy change should get shipped out to the world as > quickly > >> >> > as > >> >> > possible. Even how the change is exposed to the user can also be > discussed > >> >> > later. > >> >> > > >> >> > I request the list members to focus on only the change of the > default > >> >> > upgrade strategy to be non-eager. > >> >> > > >> >> > Does anyone have any concerns regarding the change of the default > >> >> > upgrade > >> >> > strategy to be non-eager? If not, let's get just that shipped out > as > >> >> > soon as possible. > >> >> > >> >> Pairing that change with an explicit "pip upgrade-all" command would > get a > >> >> +1 from me, especially if there was a printed warning when the new > upgrade > >> >> strategy skips packages the old one would have updated. > >> > > >> > Please do not mix upgrade with upgrade-all. The latter is blocked by > lack of > >> > a SAT solver for a long time, and at the current pace that status may > not > >> > change for another couple of years.
Also mixing these up is > unnecessary, and > >> > it was discussed last year on this list already to move ahead with > upgrade: > >> > http://article.gmane.org/gmane.comp.python.distutils.devel/24219 > >> > >> I realise the consensus on the ticket is that it's blocked, but I don't > >> actually agree. > >> > >> Yes, you can't do it *right* without a full resolver, but you can do > >> an approximation that would be a lot better than nothing (just narrow > >> the specifiers given across all requirements). That is actually > >> reasonable when you're dealing with a presumed-good set of versions > >> (which install doesn't deal with). > > > > > > Honestly, not sure how to respond. You may be right, I don't have a > technical opinion on an approximate upgrade-all now. Don't really want to > have one either - when N core PyPA devs have been in consensus for a couple > of years and then dev N+1 comes along at the very last moment to > challenge that consensus, plus make it blocking for something we agreed was > unrelated, that just feels frustrating (especially because it's becoming a > pattern). > > "yum upgrade" has worked well enough for years without a proper SAT > solver, and the package set in a typical Linux install is much larger than > that in a typical virtual environment (although distro curation does reduce > the likelihood of conflicting requirements arising in the first place). > Interesting. Issue https://github.com/pypa/pip/issues/59 is now dedicated to upgrade-all (https://github.com/pypa/pip/issues/3786 is for upgrade), so I'll copy the comments of Robert and you there. > That said, rerunning pip-compile and then doing a pip-sync is already a > functional equivalent of an upgrade-all operation (as is destroying and > recreating a venv), so I agree there's no need to couple the question of > supporting bulk upgrades in baseline pip with changing the behaviour of > upgrading named components. > Thank you Nick.
Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL:
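To close the loop on the pip-compile / pip-sync equivalence Nick mentions in the thread above: conceptually, a sync step diffs a freshly compiled (pinned) requirement set against what is currently installed and applies the difference. A toy sketch of that diff, not pip-tools' actual code; all package names and versions below are invented:

```python
def sync_plan(installed, pinned):
    """What a pip-sync-like step would do: (re)install anything pinned at
    a version that is missing or different, and remove anything installed
    that the pinned set no longer mentions."""
    to_install = {name: ver for name, ver in pinned.items()
                  if installed.get(name) != ver}
    to_remove = sorted(set(installed) - set(pinned))
    return to_install, to_remove

installed = {"requests": "2.9.1", "six": "1.9.0", "leftover": "0.1"}
pinned = {"requests": "2.10.0", "six": "1.9.0"}  # e.g. pip-compile output
print(sync_plan(installed, pinned))  # ({'requests': '2.10.0'}, ['leftover'])
```

Rerunning the compile step against refreshed index data and then applying this diff upgrades everything at once, which is why the workflow can stand in for an upgrade-all command even before pip grows a full resolver.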