From cosimo at anthrotype.com  Sun Jan 7 05:12:35 2018
From: cosimo at anthrotype.com (Cosimo Lupo)
Date: Sun, 7 Jan 2018 10:12:35 +0000
Subject: [Distutils] Should abi3 tag be usable on Windows?
Message-ID: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark>

Hello,

CFFI has recently added support for Py_LIMITED_API extension modules built for CPython 3. The wheel module since version 0.30.0 also supports passing --py-limited-api cp3X to the bdist_wheel command to allow the generated .whl to be installed on all CPython versions equal to or greater than the one specified.

Yesterday I was trying to apply this to a cffi-built extension module, and it worked for Linux and macOS but failed for Windows:

https://github.com/hynek/argon2_cffi/pull/32

The AssertionError from wheel.pep425tags complains that a tag with abi3 would be unsupported for the target platform.

Alex Gronholm commented

> imp.get_suffixes() does not seem to contain any ABI3 suffixes, but I'm not sure if this is even applicable on Windows.

Incidentally, I noticed one specific package, PyQt5, that distributes both abi3-tagged wheels for Mac and manylinux, and Windows wheels for a range of cp35.cp36.cp37 but with the abi tag set as "none", and they do seem to work.

So, can one make such py_limited_api wheels work on Windows with the current state of the tooling, and if so, how?

Thank you in advance

Cosimo Lupo

From cosimo at anthrotype.com  Sun Jan 7 07:39:44 2018
From: cosimo at anthrotype.com (Cosimo Lupo)
Date: Sun, 07 Jan 2018 12:39:44 +0000
Subject: [Distutils] Should abi3 tag be usable on Windows?
In-Reply-To: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark>
References: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark>
Message-ID:

I just found this related pip issue:
https://github.com/pypa/pip/issues/4445

(I myself as @anthrotype commented on that thread back in April last year but then completely forgot...)

After re-reading it now, it's still not clear to me what the resolution of that issue was.

On Sun, Jan 7, 2018 at 10:12 AM Cosimo Lupo wrote:

> Hello,
>
> CFFI has recently added support for Py_LIMITED_API extension modules built for CPython 3. The wheel module since version 0.30.0 also supports passing --py-limited-api cp3X to the bdist_wheel command to allow the generated .whl to be installed on all CPython versions equal to or greater than the one specified.
>
> Yesterday I was trying to apply this to a cffi-built extension module, and it worked for Linux and macOS but failed for Windows:
>
> https://github.com/hynek/argon2_cffi/pull/32
>
> The AssertionError from wheel.pep425tags complains that a tag with abi3 would be unsupported for the target platform.
>
> Alex Gronholm commented
>
> > imp.get_suffixes() does not seem to contain any ABI3 suffixes, but I'm not sure if this is even applicable on Windows.
>
> Incidentally, I noticed one specific package, PyQt5, that distributes both abi3-tagged wheels for Mac and manylinux, and Windows wheels for a range of cp35.cp36.cp37 but with the abi tag set as "none", and they do seem to work.
>
> So, can one make such py_limited_api wheels work on Windows with the current state of the tooling, and if so, how?
>
> Thank you in advance
>
> Cosimo Lupo

--
Cosimo Lupo
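To make the setup under discussion concrete, here is a minimal setup.py sketch of this kind of build; the module name, source file, and version bound are illustrative assumptions, not taken from the argon2_cffi project:

    from setuptools import setup, Extension

    ext = Extension(
        "_example",                       # hypothetical module name
        sources=["src/_example.c"],
        # Restrict the C code to the stable ABI for CPython >= 3.5
        # (this is the macro Steve Dower mentions later in the thread):
        define_macros=[("Py_LIMITED_API", "0x03050000")],
        py_limited_api=True,              # ask setuptools for abi3 tagging
    )

    setup(name="example", version="0.1", ext_modules=[ext])

Building with "python setup.py bdist_wheel --py-limited-api cp35" should then produce something like example-0.1-cp35-abi3-linux_x86_64.whl on Linux; the Windows build is where the AssertionError described above appears.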
From cosimo at anthrotype.com  Sun Jan 7 08:40:29 2018
From: cosimo at anthrotype.com (Cosimo Lupo)
Date: Sun, 07 Jan 2018 13:40:29 +0000
Subject: [Distutils] Should abi3 tag be usable on Windows?
In-Reply-To:
References: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark>
Message-ID:

It turns out setuptools is still linking to PYTHON36.DLL instead of PYTHON3.DLL even when py_limited_api=True:

https://github.com/pypa/setuptools/issues/1248

This is a different, though somewhat related, problem from the one concerning wheel/pip pep425tags disallowing the `abi3` tag on Windows.

On Sun, Jan 7, 2018 at 12:39 PM Cosimo Lupo wrote:

> I just found this related pip issue:
> https://github.com/pypa/pip/issues/4445
>
> (I myself as @anthrotype commented on that thread back in April last year but then completely forgot...)
>
> After re-reading it now, it's still not clear to me what the resolution of that issue was.
>
> On Sun, Jan 7, 2018 at 10:12 AM Cosimo Lupo wrote:
>
>> Hello,
>>
>> CFFI has recently added support for Py_LIMITED_API extension modules built for CPython 3. The wheel module since version 0.30.0 also supports passing --py-limited-api cp3X to the bdist_wheel command to allow the generated .whl to be installed on all CPython versions equal to or greater than the one specified.
>>
>> Yesterday I was trying to apply this to a cffi-built extension module, and it worked for Linux and macOS but failed for Windows:
>>
>> https://github.com/hynek/argon2_cffi/pull/32
>>
>> The AssertionError from wheel.pep425tags complains that a tag with abi3 would be unsupported for the target platform.
>>
>> Alex Gronholm commented
>>
>> > imp.get_suffixes() does not seem to contain any ABI3 suffixes, but I'm not sure if this is even applicable on Windows.
>>
>> Incidentally, I noticed one specific package, PyQt5, that distributes both abi3-tagged wheels for Mac and manylinux, and Windows wheels for a range of cp35.cp36.cp37 but with the abi tag set as "none", and they do seem to work.
>>
>> So, can one make such py_limited_api wheels work on Windows with the current state of the tooling, and if so, how?
>>
>> Thank you in advance
>>
>> Cosimo Lupo
>
> --
> Cosimo Lupo

--
Cosimo Lupo

From steve.dower at python.org  Sun Jan 7 20:31:18 2018
From: steve.dower at python.org (Steve Dower)
Date: Mon, 8 Jan 2018 12:31:18 +1100
Subject: [Distutils] Should abi3 tag be usable on Windows?
In-Reply-To:
References: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark>
Message-ID:

There is a solution to that problem on the linked issue. Basically, you need to declare Py_LIMITED_API in your code or as an extra preprocessor variable.

Windows doesn't use a filename suffix for python3.dll-linked extensions, as it will be handled at load time. The tag for the wheel is outside of my area, so hopefully someone can chime in on that for you.

Cheers,
Steve

Top-posted from my Windows phone

From: Cosimo Lupo
Sent: Monday, January 8, 2018 10:32
To: distutils-sig at python.org
Subject: Re: [Distutils] Should abi3 tag be usable on Windows?

It turns out setuptools is still linking to PYTHON36.DLL instead of PYTHON3.DLL even when py_limited_api=True:

https://github.com/pypa/setuptools/issues/1248

This is a different, though somewhat related, problem from the one concerning wheel/pip pep425tags disallowing the `abi3` tag on Windows.
On Sun, Jan 7, 2018 at 12:39 PM Cosimo Lupo wrote:

I just found this related pip issue: https://github.com/pypa/pip/issues/4445

(I myself as @anthrotype commented on that thread back in April last year but then completely forgot...)

After re-reading it now, it's still not clear to me what the resolution of that issue was.

On Sun, Jan 7, 2018 at 10:12 AM Cosimo Lupo wrote:

Hello,

CFFI has recently added support for Py_LIMITED_API extension modules built for CPython 3. The wheel module since version 0.30.0 also supports passing --py-limited-api cp3X to the bdist_wheel command to allow the generated .whl to be installed on all CPython versions equal to or greater than the one specified.

Yesterday I was trying to apply this to a cffi-built extension module, and it worked for Linux and macOS but failed for Windows:

https://github.com/hynek/argon2_cffi/pull/32

The AssertionError from wheel.pep425tags complains that a tag with abi3 would be unsupported for the target platform.

Alex Gronholm commented

> imp.get_suffixes() does not seem to contain any ABI3 suffixes, but I'm not sure if this is even applicable on Windows.

Incidentally, I noticed one specific package, PyQt5, that distributes both abi3-tagged wheels for Mac and manylinux, and Windows wheels for a range of cp35.cp36.cp37 but with the abi tag set as "none", and they do seem to work.

So, can one make such py_limited_api wheels work on Windows with the current state of the tooling, and if so, how?

Thank you in advance

Cosimo Lupo
--
Cosimo Lupo
--
Cosimo Lupo

From vano at mail.mipt.ru  Tue Jan 9 16:02:48 2018
From: vano at mail.mipt.ru (Ivan Pozdeev)
Date: Wed, 10 Jan 2018 00:02:48 +0300
Subject: [Distutils] Allow to debug extension modules
Message-ID: <81b9894e-4d13-5206-59ad-bdf8e415d235@mail.mipt.ru>

In the https://mail.python.org/pipermail/python-ideas/2018-January/048558.html thread, it was discovered that, surprisingly enough, it seems not to be possible to debug extension modules!

* For a release Python, it's not even possible to build an extension without optimizations

* For a debug Python, it's possible to build but not possible to install it, or any other package that depends on it

Moreover, the feedback says that it makes no sense to build an extension with _DEBUG against a release Python and vice versa. But distutils' build_ext logic does (or at least tries to do, since that breaks) just this, as it looks at the --debug option to `build' instead of checking the type of the running Python.

Am I missing something crucial here, or do I need to fix distutils now to be able to debug a problem in my extension module?

--
Regards,
Ivan

From vano at mail.mipt.ru  Tue Jan 9 17:57:48 2018
From: vano at mail.mipt.ru (Ivan Pozdeev)
Date: Wed, 10 Jan 2018 01:57:48 +0300
Subject: [Distutils] Add processor generation, Windows version (and possibly other things) to wheel platform tag
Message-ID:

This is a follow-up to this thread (the 2nd link is the continuation):

https://mail.python.org/pipermail/distutils-sig/2017-October/031750.html
https://mail.python.org/pipermail/distutils-sig/2017-November/031759.html

After discovering https://www.python.org/dev/peps/pep-0425/ , I see that what I was asking for is to add more data to the wheel's platform tag.
Specifically:

* Processor generation, if it's higher than what Python officially supports
* Windows version, if it's higher than what Python officially supports (like it's already done for macOS)

In fact, the references to other libraries that the wheel expects on the system, which Nick Coghlan proposed in https://www.python.org/dev/peps/pep-0459/ , can also be implemented in the platform tag. They belong there with the current infrastructure because:

* Anything external to a Python installation that is expected on the system, combined, _is_ the platform.
* A wheel's name has to be unique and descriptive (it needs to include everything required for pip to pick the best download).

Sure, this can make the name very long, but the more there is to describe, the longer the description.

--
Regards,
Ivan

From vano at mail.mipt.ru  Tue Jan 9 18:16:02 2018
From: vano at mail.mipt.ru (Ivan Pozdeev)
Date: Wed, 10 Jan 2018 02:16:02 +0300
Subject: [Distutils] Allow to debug extension modules
In-Reply-To: <81b9894e-4d13-5206-59ad-bdf8e415d235@mail.mipt.ru>
References: <81b9894e-4d13-5206-59ad-bdf8e415d235@mail.mipt.ru>
Message-ID: <83e8f1c2-4f54-0b2f-3098-70957702fad8@mail.mipt.ru>

On 10.01.2018 0:02, Ivan Pozdeev via Distutils-SIG wrote:
> In the https://mail.python.org/pipermail/python-ideas/2018-January/048558.html thread, it was discovered that, surprisingly enough, it seems not to be possible to debug extension modules!
>
> * For a release Python, it's not even possible to build an extension without optimizations
>
> * For a debug Python, it's possible to build but not possible to install it, or any other package that depends on it

I see now that it's possible to write `<...>\cpython\python.bat setup.py build --debug install' to install a single package containing an extension. pip is still broken though: try e.g. to install `scandir' (a dependency of IPython) with it.

> Moreover, the feedback says that it makes no sense to build an extension with _DEBUG against a release Python and vice versa. But distutils' build_ext logic does (or at least tries to do, since that breaks) just this, as it looks at the --debug option to `build' instead of checking the type of the running Python.
>
> Am I missing something crucial here, or do I need to fix distutils now to be able to debug a problem in my extension module?

--
Regards,
Ivan

From thomas at kluyver.me.uk  Wed Jan 10 08:44:36 2018
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Wed, 10 Jan 2018 13:44:36 +0000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
Message-ID: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com>

I hope everyone has had a good break. :-)

I'd like to see PEP 566 move forwards. From the last thread, I think people were mostly happy with it as it stands, but there were some proposals to introduce further metadata changes. I'd suggest that we finalise the PEP as it currently stands, and save up other changes for a future metadata revision.

One change I would like to make to the current text is to make it more explicit that the format of several fields allowing version specifiers (Requires-Dist, Provides-Dist, Obsoletes-Dist, Requires-External) has changed, as PEP 508 removes the need for parentheses around the version specifier.

The section on version specifiers currently refers to PEP 440, which doesn't mention the parentheses at all, and says they are otherwise unchanged from PEP 345, which says parentheses are required.
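For concreteness, the difference amounts to the following two spellings of the same dependency (using the same illustrative requirement as the proposed text below):

    Requires-Dist: requests (>=2.8.1)    PEP 345 style, parentheses required
    Requires-Dist: requests >=2.8.1      PEP 508 style, parentheses optional

Both denote the same thing; only the recommended form changes.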
I would like to add some text to that section, such as:

"Following PEP 508, version specifiers no longer need to be surrounded by parentheses in the fields Requires-Dist, Provides-Dist, Obsoletes-Dist or Requires-External, so e.g. `requests >= 2.8.1` is now a valid value. The recommended format is without parentheses, but tools parsing metadata should also be able to handle version specifiers in parentheses."

Thanks,
Thomas

From dholth at gmail.com  Wed Jan 10 09:54:08 2018
From: dholth at gmail.com (Daniel Holth)
Date: Wed, 10 Jan 2018 14:54:08 +0000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com>
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com>
Message-ID:

AFAICT the only missing feature from old-Metadata-2.0 is "description as message body", which places readable description text after the key/value pairs.

On Wed, Jan 10, 2018 at 8:45 AM Thomas Kluyver wrote:

> I hope everyone has had a good break. :-)
>
> I'd like to see PEP 566 move forwards. From the last thread, I think people were mostly happy with it as it stands, but there were some proposals to introduce further metadata changes. I'd suggest that we finalise the PEP as it currently stands, and save up other changes for a future metadata revision.
>
> One change I would like to make to the current text is to make it more explicit that the format of several fields allowing version specifiers (Requires-Dist, Provides-Dist, Obsoletes-Dist, Requires-External) has changed, as PEP 508 removes the need for parentheses around the version specifier.
>
> The section on version specifiers currently refers to PEP 440, which doesn't mention the parentheses at all, and says they are otherwise unchanged from PEP 345, which says parentheses are required. I would like to add some text to that section, such as:
>
> "Following PEP 508, version specifiers no longer need to be surrounded by parentheses in the fields Requires-Dist, Provides-Dist, Obsoletes-Dist or Requires-External, so e.g. `requests >= 2.8.1` is now a valid value. The recommended format is without parentheses, but tools parsing metadata should also be able to handle version specifiers in parentheses."
>
> Thanks,
> Thomas

From ncoghlan at gmail.com  Wed Jan 10 18:42:09 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 11 Jan 2018 09:42:09 +1000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com>
Message-ID:

+1 from me for the adjustment Thomas suggested, since that's in the intended vein of "properly document the status quo".

On 11 January 2018 at 00:54, Daniel Holth wrote:
> AFAICT the only missing feature from old-Metadata-2.0 is "description as message body", which places readable description text after the key/value pairs.

Do pip/PyPI/et al currently support that? If yes, then I think it would be good to include. On the other hand, if it's a genuine enhancement, then we probably want to defer it to metadata 1.4 in order to clearly separate the "Update the specification to match reality" step from the "Further improve the usability of the specification" step.

Cheers,
Nick.
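For readers unfamiliar with the layout, "description as message body" means metadata like the following sketch (the field values are made up), which the stdlib email parser can already round-trip:

    import textwrap
    from email import message_from_string

    METADATA = textwrap.dedent("""\
        Metadata-Version: 1.3
        Name: example
        Version: 0.1
        Summary: An example distribution

        This free-form text after the blank line is the long description,
        rather than a folded Description: header.
        """)

    msg = message_from_string(METADATA)
    print(msg["Name"])        # -> example
    print(msg.get_payload())  # -> the long description text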
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From thomas at kluyver.me.uk  Fri Jan 12 10:26:40 2018
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Fri, 12 Jan 2018 15:26:40 +0000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com>
Message-ID: <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>

On Wed, Jan 10, 2018, at 11:42 PM, Nick Coghlan wrote:
> On 11 January 2018 at 00:54, Daniel Holth wrote:
> > AFAICT the only missing feature from old-Metadata-2.0 is "description as message body", which places readable description text after the key/value pairs.
>
> Do pip/PyPI/et al currently support that?

It looks like twine supports it, at least for wheels:
https://github.com/pypa/twine/blob/f74eae5506300387572c65c9dbfe240d927788c2/twine/wheel.py#L99

I don't think pip needs to support it (does pip do anything with descriptions?). I haven't looked at PyPI's code, but I'd guess it uses the metadata sent with the upload by tools like twine and flit.

Thomas

From alex.gronholm at nextday.fi  Fri Jan 12 11:00:21 2018
From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=)
Date: Fri, 12 Jan 2018 18:00:21 +0200
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To: <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

On the same note, wheel currently writes "2.0" as its metadata version. Shouldn't this be changed to 1.3 (along with ditching metadata.json)?

Thomas Kluyver wrote on 12.01.2018 at 17:26:
> On Wed, Jan 10, 2018, at 11:42 PM, Nick Coghlan wrote:
> > On 11 January 2018 at 00:54, Daniel Holth wrote:
> > > AFAICT the only missing feature from old-Metadata-2.0 is "description as message body", which places readable description text after the key/value pairs.
> > Do pip/PyPI/et al currently support that?
> It looks like twine supports it, at least for wheels:
> https://github.com/pypa/twine/blob/f74eae5506300387572c65c9dbfe240d927788c2/twine/wheel.py#L99
>
> I don't think pip needs to support it (does pip do anything with descriptions?). I haven't looked at PyPI's code, but I'd guess it uses the metadata sent with the upload by tools like twine and flit.
>
> Thomas

From dholth at gmail.com  Fri Jan 12 11:02:58 2018
From: dholth at gmail.com (Daniel Holth)
Date: Fri, 12 Jan 2018 16:02:58 +0000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

Yes, after the PEP is prep'd.

On Fri, Jan 12, 2018 at 11:00 AM Alex Grönholm wrote:

> On the same note, wheel currently writes "2.0" as its metadata version. Shouldn't this be changed to 1.3 (along with ditching metadata.json)?
>
> Thomas Kluyver wrote on 12.01.2018 at 17:26:
> > On Wed, Jan 10, 2018, at 11:42 PM, Nick Coghlan wrote:
> >> On 11 January 2018 at 00:54, Daniel Holth wrote:
> >>> AFAICT the only missing feature from old-Metadata-2.0 is "description as message body", which places readable description text after the key/value pairs.
> >> Do pip/PyPI/et al currently support that?
> > It looks like twine supports it, at least for wheels:
> > https://github.com/pypa/twine/blob/f74eae5506300387572c65c9dbfe240d927788c2/twine/wheel.py#L99
> >
> > I don't think pip needs to support it (does pip do anything with descriptions?). I haven't looked at PyPI's code, but I'd guess it uses the metadata sent with the upload by tools like twine and flit.
> >
> > Thomas

From donald at stufft.io  Fri Jan 12 15:51:23 2018
From: donald at stufft.io (Donald Stufft)
Date: Fri, 12 Jan 2018 15:51:23 -0500
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
Message-ID:

As folks are likely aware, legacy PyPI currently supports logging in using OpenID and Google Auth while Warehouse does not. After much deliberation, I've decided that Warehouse will not be implementing OpenID or Google logins, and once we shut down legacy PyPI, OpenID and Google logins to PyPI will no longer be possible.

This decision was made for a few reasons:

* Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.
* Regardless of how you log into PyPI (Password or Google/OpenID) you're required to have a password added to your account to actually upload anything to PyPI. This negates much of the benefit of federated authentication for PyPI as it stands.
* Keeping these requires ongoing maintenance to deal with any changes in the specification or to update as Google deprecates/changes things.
* Adding support for them to Warehouse requires additional work that could better be used elsewhere, where it would have a higher impact.

- Donald

From cosimo at anthrotype.com  Sat Jan 13 13:00:44 2018
From: cosimo at anthrotype.com (Cosimo Lupo)
Date: Sat, 13 Jan 2018 18:00:44 +0000
Subject: [Distutils] Should abi3 tag be usable on Windows?
In-Reply-To: <5a52c9ea.e495500a.d4117.7fc0SMTPIN_ADDED_MISSING@mx.google.com>
References: <86bead53-3fd6-43c7-a176-ddd498ed98b0@Spark> <5a52c9ea.e495500a.d4117.7fc0SMTPIN_ADDED_MISSING@mx.google.com>
Message-ID:

Thank you, Steve.

Ok so, setuptools can successfully build extension modules with Py_LIMITED_API on Windows, with a generic *.pyd suffix and no extra tags. However, there are still two problems before I can distribute them:

1) It's not clear how wheels containing such extension modules should be tagged on Windows, given that the `abi3` tag is not recognized for that platform. The function imp.get_suffixes(), used by both pip and wheel to get the list of supported tags, does not return any abi3 suffixes on Windows, and for this reason the bdist_wheel command assumes that Windows does not support extensions built with Py_LIMITED_API, and gives an error. Some have resorted to tagging their Windows wheels with abi "none" to work around this; however, "none" seems to imply that the extension does not use the Python ABI, which is not true.
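To illustrate the check being described (a sketch; the outputs shown are what the thread reports for each platform, not re-verified here):

    import imp

    ext_suffixes = [suffix for suffix, _, kind in imp.get_suffixes()
                    if kind == imp.C_EXTENSION]
    print(ext_suffixes)
    # CPython 3.6 on Linux includes an abi3 entry, e.g.:
    #   ['.cpython-36m-x86_64-linux-gnu.so', '.abi3.so', '.so']
    # CPython 3.6 on Windows reports only:
    #   ['.pyd']
    # With no abi3 suffix to find, wheel's pep425tags concludes that
    # abi3 is unsupported and bdist_wheel raises the AssertionError
    # mentioned above.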
Alex Grönholm (current maintainer of the pypa/wheel package) pointed me to this other related discussion on distutils-sig (from 2013) where Paul Moore explains what the problem is:

https://mail.python.org/pipermail/distutils-sig/2013-February/020022.html

Has anything happened since then to address this, either in the PEPs and/or the tooling?

2) The current virtualenv module (not the built-in venv) does not copy over the required PYTHON3.DLL, and thus importing such extension modules from within the virtual environment fails. The latter issue was already fixed in virtualenv's master branch, but virtualenv hasn't seen a release since then (the last release, 15.1.0, was in November 2016).

https://github.com/pypa/virtualenv/pull/1083

I would love to take advantage of the CPython limited API feature, and be able to distribute one or two wheels per platform instead of four (multiply that by two for 32- vs 64-bit arch), and not have to rebuild one for each new Python 3.x release. It works fine for Linux and macOS; I hope that Windows Python users will also be able to enjoy this soon. I am willing to contribute my time to help solve this issue.

Thanks,
Cosimo

On Mon, Jan 8, 2018 at 1:31 AM Steve Dower wrote:

> There is a solution to that problem on the linked issue. Basically, you need to declare Py_LIMITED_API in your code or as an extra preprocessor variable.
>
> Windows doesn't use a filename suffix for python3.dll-linked extensions, as it will be handled at load time. The tag for the wheel is outside of my area, so hopefully someone can chime in on that for you.
>
> Cheers,
> Steve
>
> Top-posted from my Windows phone
>
> From: Cosimo Lupo
> Sent: Monday, January 8, 2018 10:32
> To: distutils-sig at python.org
> Subject: Re: [Distutils] Should abi3 tag be usable on Windows?
>
> It turns out setuptools is still linking to PYTHON36.DLL instead of PYTHON3.DLL even when py_limited_api=True
>
> https://github.com/pypa/setuptools/issues/1248
>
> This is a different, though somewhat related, problem from the one concerning wheel/pip pep425tags disallowing the `abi3` tag on Windows.
>
> On Sun, Jan 7, 2018 at 12:39 PM Cosimo Lupo wrote:
>
> I just found this related pip issue: https://github.com/pypa/pip/issues/4445
>
> (I myself as @anthrotype commented on that thread back in April last year but then completely forgot...)
>
> After re-reading it now, it's still not clear to me what the resolution of that issue was.
>
> On Sun, Jan 7, 2018 at 10:12 AM Cosimo Lupo wrote:
>
> Hello,
>
> CFFI has recently added support for Py_LIMITED_API extension modules built for CPython 3. The wheel module since version 0.30.0 also supports passing --py-limited-api cp3X to the bdist_wheel command to allow the generated .whl to be installed on all CPython versions equal to or greater than the one specified.
>
> Yesterday I was trying to apply this to a cffi-built extension module, and it worked for Linux and macOS but failed for Windows:
>
> https://github.com/hynek/argon2_cffi/pull/32
>
> The AssertionError from wheel.pep425tags complains that a tag with abi3 would be unsupported for the target platform.
>
> Alex Gronholm commented
>
> > imp.get_suffixes() does not seem to contain any ABI3 suffixes, but I'm not sure if this is even applicable on Windows.
> Incidentally, I noticed one specific package, PyQt5, that distributes both abi3-tagged wheels for Mac and manylinux, and Windows wheels for a range of cp35.cp36.cp37 but with the abi tag set as "none", and they do seem to work.
>
> So, can one make such py_limited_api wheels work on Windows with the current state of the tooling, and if so, how?
>
> Thank you in advance
>
> Cosimo Lupo
>
> --
> Cosimo Lupo

--
Cosimo Lupo

From bussonniermatthias at gmail.com  Sat Jan 13 13:39:44 2018
From: bussonniermatthias at gmail.com (Matthias Bussonnier)
Date: Sat, 13 Jan 2018 19:39:44 +0100
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To:
References:
Message-ID:

> * Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.

Not opposing the OpenID/Google-ID removal, but I would love to log in with Google; though because I already have an account (and can't associate my Google account with the PyPI one), I'm not using login-with-Google. Also, it did not work for as long as I can remember. So the low number of people actually _using_ it might not reflect the number of people who would like to use it. Maybe look at the number of people trying and failing?

--
M

On 12 January 2018 at 21:51, Donald Stufft wrote:
> As folks are likely aware, legacy PyPI currently supports logging in using OpenID and Google Auth while Warehouse does not. After much deliberation, I've decided that Warehouse will not be implementing OpenID or Google logins, and once we shut down legacy PyPI, OpenID and Google logins to PyPI will no longer be possible.
>
> This decision was made for a few reasons:
>
> * Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.
> * Regardless of how you log into PyPI (Password or Google/OpenID) you're required to have a password added to your account to actually upload anything to PyPI. This negates much of the benefit of federated authentication for PyPI as it stands.
> * Keeping these requires ongoing maintenance to deal with any changes in the specification or to update as Google deprecates/changes things.
> * Adding support for them to Warehouse requires additional work that could better be used elsewhere, where it would have a higher impact.
>
> - Donald
From wes.turner at gmail.com  Sat Jan 13 13:55:30 2018
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 13 Jan 2018 13:55:30 -0500
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To:
References:
Message-ID:

python-social-auth supports OAuth 1, OAuth 2, OpenID, and SAML, with many auth providers and Python frameworks, including Pyramid, BitBucket, Google, GitHub, GitLab:

https://python-social-auth.readthedocs.io/en/latest/
http://python-social-auth.readthedocs.io/en/latest/backends/
https://github.com/python-social-auth/social-app-pyramid/

There's likely someone with more experience with a different authentication abstraction API?

https://github.com/uralbash/awesome-pyramid/#authentication lists quite a few authentication and authorization systems, which may also be useful for implementing TUF?

On Friday, January 12, 2018, Donald Stufft wrote:

> As folks are likely aware, legacy PyPI currently supports logging in using OpenID and Google Auth while Warehouse does not. After much deliberation, I've decided that Warehouse will not be implementing OpenID or Google logins, and once we shut down legacy PyPI, OpenID and Google logins to PyPI will no longer be possible.
>
> This decision was made for a few reasons:
>
> * Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.
> * Regardless of how you log into PyPI (Password or Google/OpenID) you're required to have a password added to your account to actually upload anything to PyPI. This negates much of the benefit of federated authentication for PyPI as it stands.
> * Keeping these requires ongoing maintenance to deal with any changes in the specification or to update as Google deprecates/changes things.
> * Adding support for them to Warehouse requires additional work that could better be used elsewhere, where it would have a higher impact.
>
> - Donald

From thomas at kluyver.me.uk  Sat Jan 13 13:59:48 2018
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Sat, 13 Jan 2018 18:59:48 +0000
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To:
References:
Message-ID: <1515869988.720882.1234315968.3EBF05CE@webmail.messagingengine.com>

On Sat, Jan 13, 2018, at 6:39 PM, Matthias Bussonnier wrote:
> Not opposing the OpenID/Google-ID removal, but I would love to log in with Google; though because I already have an account (and can't associate my Google account with the PyPI one), I'm not using login-with-Google. Also, it did not work for as long as I can remember. So the low number of people actually _using_ it might not reflect the number of people who would like to use it. Maybe look at the number of people trying and failing?

On the other hand, I created my account using OpenID years ago, and now I always log in with a password.

Based on the numbers Donald gave, I don't think it's worth spending more time investigating the demand. If there really is demand, people will make it known on issues etc., and it could be considered for Warehouse further down the line.
For now, passwords are working, and there are more important things to build and maintain.

Thomas

From ncoghlan at gmail.com  Sun Jan 14 09:29:42 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 15 Jan 2018 00:29:42 +1000
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To: <1515869988.720882.1234315968.3EBF05CE@webmail.messagingengine.com>
References: <1515869988.720882.1234315968.3EBF05CE@webmail.messagingengine.com>
Message-ID:

On 14 January 2018 at 04:59, Thomas Kluyver wrote:
> Based on the numbers Donald gave, I don't think it's worth spending more time investigating the demand. If there really is demand, people will make it known on issues etc., and it could be considered for Warehouse further down the line. For now, passwords are working, and there are more important things to build and maintain.

Something else I'll note here is that there are a few things we want to explore post-Warehouse migration that require PyPI to serve as its own identity provider:

- two-factor authentication support
- orgs, teams, and role-based access control
- revocable CLI (and other app) token support

Adding back social auth some time post-migration will likely still be an option (similar to the way Atlassian allow social auth logins to establish and authenticate for BitBucket accounts); it would just be lower priority than the above (and leaving it out for the time being should simplify the development of the above capabilities).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chris at withers.org  Mon Jan 15 03:00:49 2018
From: chris at withers.org (Chris Withers)
Date: Mon, 15 Jan 2018 08:00:49 +0000
Subject: [Distutils] pipenv best practices?
Message-ID:

Hi All,

Couldn't find an obviously better discussion forum than this for pipenv stuff, but please point me to where the rest of the discussions are happening...

So, with pipenv, what files do we version control for a project? Both Pipfile and Pipfile.lock?

Hopefully one I missed from the docs: with the correct files source-controlled, how do I reproduce the environment on another machine?

Continuing on the deployment theme: I'm used to being able to put /path/to/some/ve/bin/some_script in crontabs and the like, where some_script comes from a setuptools entry point. What's the canonical way of doing this with pipenv?

Last up, how should pipenv be used for library development? (i.e. where dependencies and minimal constraints are expressed in setup.py and where you want to test against as many Python versions and library combinations as you support).

Other than sadness around the capitalisation of the config file names, it looks really exciting :-)

cheers,
Chris

From j.orponen at 4teamwork.ch  Mon Jan 15 02:03:05 2018
From: j.orponen at 4teamwork.ch (Joni Orponen)
Date: Mon, 15 Jan 2018 08:03:05 +0100
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To:
References:
Message-ID:

On Fri, Jan 12, 2018 at 9:51 PM, Donald Stufft wrote:

> As folks are likely aware, legacy PyPI currently supports logging in using OpenID and Google Auth while Warehouse does not. After much deliberation, I've decided that Warehouse will not be implementing OpenID or Google logins, and once we shut down legacy PyPI, OpenID and Google logins to PyPI will no longer be possible.
> This decision was made for a few reasons:
>
> * Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.

For reference: OpenID has never worked for me, and I think content blockers rip out the Google option for me.

> * Regardless of how you log into PyPI (Password or Google/OpenID) you're required to have a password added to your account to actually upload anything to PyPI. This negates much of the benefit of federated authentication for PyPI as it stands.

OAuth app tokens are a possible way to achieve this as well, and might suit various release pipelines better.

> * Keeping these requires ongoing maintenance to deal with any changes in the specification or to update as Google deprecates/changes things.
> * Adding support for them to Warehouse requires additional work that could better be used elsewhere, where it would have a higher impact.

All that said, +1 for not bothering with it. If it ever is tackled, I'm sure this day and age will bring more, more visible, and more direct feedback on it working or not working for users than the previous iteration.

--
Joni Orponen

From brett at python.org  Mon Jan 15 14:46:09 2018
From: brett at python.org (Brett Cannon)
Date: Mon, 15 Jan 2018 19:46:09 +0000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References:
Message-ID:

On Mon, 15 Jan 2018 at 00:33 Chris Withers wrote:

> Hi All,
>
> Couldn't find an obviously better discussion forum than this for pipenv stuff, but please point me to where the rest of the discussions are happening...

Not here from what I can tell. :) Probably your best bet is to ask on the pipenv GitHub project and file an issue so the pipenv docs can give you guidance. I know for me personally it's getting a bit hazy between this list and pypa-dev as to where stuff should go (I'm personally starting to shift towards pypa-dev, and considering this actually for distutils proper).

But I happen to know the answer to your pipenv questions, Chris, so I'll answer them here.

> So, with pipenv, what files do we version control for a project? Both Pipfile and Pipfile.lock?

Yes.

> Hopefully one I missed from the docs: with the correct files source-controlled, how do I reproduce the environment on another machine?

pipenv install --ignore-pipfile

> Continuing on the deployment theme: I'm used to being able to put /path/to/some/ve/bin/some_script in crontabs and the like, where some_script comes from a setuptools entry point. What's the canonical way of doing this with pipenv?

Beats me. I think there's a command to get back to the venv that pipenv created on your behalf.

> Last up, how should pipenv be used for library development? (i.e. where dependencies and minimal constraints are expressed in setup.py and where you want to test against as many Python versions and library combinations as you support).

That doesn't fall under pipenv's purview. Think of pipenv as trying to make venv + pip easier to work with. Since you don't use pip to express dependencies for your library, you shouldn't with pipenv either. Or, put another way, think of your Pipfile as just a different format for a requirements.txt file, not a replacement for flit or setuptools.
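To make that last point concrete, a sketch of the split (the package names are illustrative): the Pipfile records only your direct requirements, e.g.

    [packages]
    requests = "*"

    [dev-packages]
    pytest = "*"

while Pipfile.lock pins the full resolved dependency set, and "pipenv install --ignore-pipfile" on another machine recreates the environment from the lock file alone.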
From sh at changeset.nyc  Mon Jan 15 22:33:09 2018
From: sh at changeset.nyc (Sumana Harihareswara)
Date: Mon, 15 Jan 2018 22:33:09 -0500
Subject: [Distutils] Fwd: Warehouse update: recent progress
In-Reply-To:
References:
Message-ID:

Forwarding from pypa-dev.

-------- Forwarded Message --------
Subject: Warehouse update: recent progress
Date: Mon, 15 Jan 2018 22:32:45 -0500
From: Sumana Harihareswara
To: pypa-dev at googlegroups.com

Happy new year! After the late-December holiday season, the Warehouse team had a meeting and followup chats last week to discuss the first milestone, the MVP for package maintainers to try Warehouse and give feedback.[0] Full notes from a January 10th meeting and its followups are on the PSF wiki.[1]

Highlights from that meeting and related work in the last week and change:

* A number of items were blocked on Nicole Harris's availability -- she's the designer working on our front-end views. We sorted out things she needs and are making better progress on those now.
* Donald wrote to distutils-sig[2] about removing OpenID/Google login support for PyPI.[3]
* Ernest and Nicole are working on a maintainer views meta-issue to "Classify, describe, and share the *current* functionality implemented by PyPI to assist contributors while developing warehouse's maintainer UI."[4]
* The Warehouse roadmap, which previously only lived in GitHub milestone descriptions, is now all in one wiki page for easier reading and linking.[5]
* Dustin has been working on adding the ability for a project owner to add/remove roles for their projects.[6]
* Ernest has been continuing to work on deployment infrastructure, which is getting more robust and resilient.
* We've created, reviewed, improved, and/or merged several other pull requests to Warehouse and related projects. In particular, development is now getting easier, as we have an SMTP service for development[7] and a better Makefile[8], and are getting better testing instructions.[9]

We are incrementally getting a clearer picture of what work is left in this milestone so I can get you an estimate of when the Maintainer MVP will be. And Warehouse continues to get easier to hack on; feel free to join us on Freenode in case you want any help as you poke around the Warehouse code![10]

[0] https://github.com/pypa/warehouse/milestone/8
[1] https://wiki.python.org/psf/PackagingWG/2018-01-10-Warehouse
[2] https://mail.python.org/pipermail/distutils-sig/2018-January/031855.html
[3] https://github.com/pypa/warehouse/issues/61
[4] https://github.com/pypa/warehouse/issues/2734
[5] https://wiki.python.org/psf/WarehouseRoadmap
[6] https://github.com/pypa/warehouse/pull/2705
[7] https://github.com/pypa/warehouse/pull/2709
[8] https://github.com/pypa/warehouse/pull/2728
[9] https://github.com/pypa/warehouse/pull/2758
[10] https://webchat.freenode.net/?channels=%23pypa-dev

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc

From ncoghlan at gmail.com  Mon Jan 15 23:24:54 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 16 Jan 2018 14:24:54 +1000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References:
Message-ID:

On 16 January 2018 at 05:46, Brett Cannon wrote:
> On Mon, 15 Jan 2018 at 00:33 Chris Withers wrote:
>> Hi All,
>>
>> Couldn't find an obviously better discussion forum than this for pipenv stuff, but please point me to where the rest of the discussions are happening...
> Not here from what I can tell. :) Probably your best bet is to ask on the pipenv GitHub project and file an issue so the pipenv docs can give you guidance. I know for me personally it's getting a bit hazy between this list and pypa-dev as to where stuff should go (I'm personally starting to shift towards pypa-dev, and considering this actually for distutils proper).

Personally, I don't think mailing lists in general are a great medium for asking usage and support questions.

Anyway, we genuinely don't have a clear answer to this question, so I've posted a meta-issue on the pipenv tracker to decide how we want to handle it: https://github.com/pypa/pipenv/issues/1305

>> Continuing on the deployment theme: I'm used to being able to put /path/to/some/ve/bin/some_script in crontabs and the like, where some_script comes from a setuptools entry point. What's the canonical way of doing this with pipenv?
>
> Beats me. I think there's a command to get back to the venv that pipenv created on your behalf.

"pipenv --venv" gives the path for the current project, so what I tend to do is run "ls -s $(pipenv --venv) .venv" from the directory containing Pipfile and Pipfile.lock in order to create a deterministic path name.

>> Last up, how should pipenv be used for library development? (i.e. where dependencies and minimal constraints are expressed in setup.py and where you want to test against as many Python versions and library combinations as you support).
>
> That doesn't fall under pipenv's purview. Think of pipenv as trying to make venv + pip easier to work with. Since you don't use pip to express dependencies for your library, you shouldn't with pipenv either. Or, put another way, think of your Pipfile as just a different format for a requirements.txt file, not a replacement for flit or setuptools.

pipenv does cover this case to some degree - the trick is that it involves thinking of your library's *test suite* as the application you want to deploy. (Hence the reference to "development and testing environments" at the end of the intro to https://packaging.python.org/tutorials/managing-dependencies/)

For that case, you don't put *any* of your dependencies from `setup.py` into `Pipfile`, you just run "pipenv install -e src", which will give you a Pipfile entry like:

"<hash>" = {editable = true, path = "src"}

I personally then edit that entry to replace the commit hash with the actual project name.

All the testing and linting dependencies then go under [dev-packages].

The part you can't fully rely on yet is the lock file, since that aims to lock down the version of Python as well, not just the versions of the Python-level dependencies. So if you're testing across multiple versions (e.g. with tox), the best current approach is to use "pipenv lock --requirements" to export just the library dependencies from Pipfile.lock.

That's an area that could definitely use some UX improvement, but it isn't clear to me at this point what that would actually look like (short of adding a notion of named "install profiles" to the lock file format, which could then be aligned with tox test environments).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Mon Jan 15 23:25:59 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 16 Jan 2018 14:25:59 +1000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References:
Message-ID:

On 16 January 2018 at 14:24, Nick Coghlan wrote:
> On 16 January 2018 at 05:46, Brett Cannon wrote:
>> On Mon, 15 Jan 2018 at 00:33 Chris Withers wrote:
>>> Continuing on the deployment theme: I'm used to being able to put /path/to/some/ve/bin/some_script in crontabs and the like, where some_script comes from a setuptools entry point. What's the canonical way of doing this with pipenv?
>
> "pipenv --venv" gives the path for the current project, so what I tend to do is run "ls -s $(pipenv --venv) .venv" from the directory containing Pipfile and Pipfile.lock in order to create a deterministic path name.

Oops, that should have been "ln -s $(pipenv --venv) .venv".

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chris at withers.org  Tue Jan 16 03:08:06 2018
From: chris at withers.org (Chris Withers)
Date: Tue, 16 Jan 2018 08:08:06 +0000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References:
Message-ID: <6b8db301-8ac7-8537-c162-4cace0ac2b48@withers.org>

On 15/01/2018 19:46, Brett Cannon wrote:
> But I happen to know the answer to your pipenv questions, Chris, so I'll answer them here.

Continuing here then :-)

>> So, with pipenv, what files do we version control for a project? Both Pipfile and Pipfile.lock?
>
> Yes.

great, thanks! How does Pipfile.lock work in the context of a project which may be installed on multiple operating systems with different final package requirements?

>> Hopefully one I missed from the docs: with the correct files source-controlled, how do I reproduce the environment on another machine?
>
> pipenv install --ignore-pipfile

That's a surprising spelling. Why does Pipfile need to be ignored? Surely Pipfile.lock will be consulted if present and used for the explicit requirements?

>> Last up, how should pipenv be used for library development? (i.e. where dependencies and minimal constraints are expressed in setup.py and where you want to test against as many Python versions and library combinations as you support).
>
> That doesn't fall under pipenv's purview. Think of pipenv as trying to make venv + pip easier to work with. Since you don't use pip to express dependencies for your library, you shouldn't with pipenv either. Or, put another way, think of your Pipfile as just a different format for a requirements.txt file, not a replacement for flit or setuptools.

Well, kinda; pipenv is shaping up to be the "replacement" for pip+virtualenv, where the latter becomes just an implementation detail. Would be great if it had a good story for that use case :-)

cheers,
Chris

From chris at withers.org  Tue Jan 16 03:16:25 2018
From: chris at withers.org (Chris Withers)
Date: Tue, 16 Jan 2018 08:16:25 +0000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References:
Message-ID: <5c7f2499-32e5-0cd0-237d-fb1e15aa53fd@withers.org>
>> > "pipenv --venv" gives the path for the current project, so what I tend > to do is run "ln -s $(pipenv --venv) .venv" from the directory > containing Pipfile and Pipfile.lock in order to create a deterministic > path name. Hmm, that feels hacky. It doesn't feel like there's consensus on this issue, and there's enough dupes coming in that it feels a shame that https://github.com/pypa/pipenv/issues/1049 has been closed. I actually want both use cases in different scenarios. For dev, I typically want one venv per python version (how do I do that?) somewhere central. For deployment, I want a ./.venv or ./ve right next to the Pipfile. >>> Last up, how should pipenv be used for library development? (ie: where >>> dependencies and minimal constraints are expressed in setup.py and where >>> you want to test against as many python versions and library >>> combinations as you support). >> > pipenv does cover this case to some degree - the trick is that it > involves thinking of your library's *test suite* as the application > you want to deploy. (Hence the reference to "development and testing > environments" at the end of the intro to > https://packaging.python.org/tutorials/managing-dependencies/) As an aside: I find the doc split difficult. Even as a relatively experience python dev, knowing that I wanted to figure out what pipenv does and how to use it, I started at http://pipenv.readthedocs.io/en/latest/ which has no links to the tutorial(s). > For that case, you don't put *any* of your dependencies from > `setup.py` into `Pipfile`, you just run "pipenv install -e src", which > will give you a Pipfile entry like: > > "" = {editable = true, path = "src"} > > I personally then edit that entry to replace the commit hash with the > actual project name I'm afraid I don't know enough about pipenv internals to understand the implications of the above, or why you might want to change the commit has to the project name, and indeed, what the implications of that might be. > All the testing and linting dependencies then go under [dev-packages]. Where can I find out more about [dev-packages]? > The part you can't fully rely on yet is the lock file, since that aims > to lock down the version of Python as well, not just the versions of > the Python level dependencies. So if you're testing across multiple > versions (e.g. with tox), ...or a .travis.yml axis for CI, or just separate venvs for local dev, in my case... > the best current approach is to use "pipenv > lock --requirements" to export just the library dependencies from > Pipfile.lock. ...but what do I do with Pipfile.lock then? .gitigore? cheers, Chris From p.f.moore at gmail.com Tue Jan 16 04:47:18 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jan 2018 09:47:18 +0000 Subject: [Distutils] pipenv best practices? In-Reply-To: <6b8db301-8ac7-8537-c162-4cace0ac2b48@withers.org> References: <6b8db301-8ac7-8537-c162-4cace0ac2b48@withers.org> Message-ID: On 16 January 2018 at 08:08, Chris Withers wrote: >> That doesn't fall under pipenv's purview. Think of pipenv as trying to >> make venv + pip easier to work with. Since you don't use pip to express >> dependencies for your library then you shouldn't with pipenv either. Or put >> another way, think of your pipfile as just a different format for a >> requirements.txt file, not a replacement for flit or setuptools. > > > Well, kinda, pipenv is shaping up to be the "replacement" for > pip+virtualenv, where the latter becomes just an implementation detail. 
> Would be great if it had a good story for that use case :-)

I think the big issue is that it's not clear what use cases pipenv is *trying* to address. Your description of it being a "replacement" for pip+virtualenv is more or less what I originally assumed it was intended to be. But I tripped over this a while back, when I found it very difficult to use pipenv on a project where I needed to:

1. Reference an unreleased version of a project from GitHub
2. Have one of its dependencies be satisfied from a locally (within the project directory) stored copy of a wheel, as there was no suitable wheel on PyPI.

I can't recall the details - they are in various issues on the pipenv tracker - but it was clear that I was trying to do something that wasn't considered "expected use" of pipenv, and I was getting very frustrated by the workarounds I needed to use to support my requirements. The guys on the pipenv tracker were very helpful, but when you're hitting design limitations, you can't expect quick fixes, so I ended up going back to pew + requirements.txt.

I think that if the pipenv docs had some better guidance on what use cases it was intended to cover (and what it wasn't, in relation to the broader range of pip+virtualenv use cases), that would help people better understand its place in the ecosystem.

Paul

From ncoghlan at gmail.com  Tue Jan 16 05:03:13 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 16 Jan 2018 20:03:13 +1000
Subject: [Distutils] pipenv best practices?
In-Reply-To:
References: <6b8db301-8ac7-8537-c162-4cace0ac2b48@withers.org>
Message-ID:

On 16 January 2018 at 19:47, Paul Moore wrote:
> I think that if the pipenv docs had some better guidance on what use cases it was intended to cover (and what it wasn't, in relation to the broader range of pip+virtualenv use cases), that would help people better understand its place in the ecosystem.

That's fair, and making the PyPUG tutorial specifically about *application* dependency management was an initial step in that direction - for the library development use case, you're generally going to step out of pipenv's world as soon as you try to run your tests across multiple versions.

The basic usage constraint is that pipenv expects you to control your target Python version, and it expects you to have exactly one - to go beyond that (as is needed for multi-version library support), you'll still need to drop down to the lower-level tooling, either skipping the use of pipenv entirely, or else using `pipenv lock --requirements` to shift levels.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com  Tue Jan 16 05:22:36 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 16 Jan 2018 10:22:36 +0000
Subject: [Distutils] pipenv use cases
Message-ID:

On 16 January 2018 at 10:03, Nick Coghlan wrote:
> On 16 January 2018 at 19:47, Paul Moore wrote:
>> I think that if the pipenv docs had some better guidance on what use cases it was intended to cover (and what it wasn't, in relation to the broader range of pip+virtualenv use cases), that would help people better understand its place in the ecosystem.
>
> That's fair, and making the PyPUG tutorial specifically about *application* dependency management was an initial step in that direction - for the library development use case, you're generally going to step out of pipenv's world as soon as you try to run your tests across multiple versions.
> > The basic usage constraint is that pipenv expects you to control your > target Python version, and it expects you to have exactly one - to go > beyond that (as is needed for multi-version library support), you'll > still need to drop down to the lower level tooling, either skipping > the use of pipenv entirely, or else by using `pipenv lock > --requirements` to shift levels. New subject as I don't want to hijack the original thread to rehash my old issue, but I do want to make a couple of points on this (and I agree, it's hard to find a good forum to discuss things like this as it stands). 1. "pipenv expects you to control your target Python version" - I'm not 100% sure what that means, but if it's saying "pipenv is only really for code that will only be used with a single Python version" then that's basically excluding a huge chunk of Python development. Specifically, library development of all forms. Admittedly my experience of what's "typical" is biased strongly by the communities I work with, but conversely the "writing standalone apps in Python" community doesn't really seem to have a web presence that we can promote pipenv through, so it's becoming visible via the "general Python development" community, which is quite biased to libraries, and so is not the right target audience, from what you say. 2. My specific issues weren't outside that constraint - I *was* writing code that would only ever need to run under Python 3.6. My problem was the need for "local" build dependencies, which seems entirely within that use case. In fairness, the pipenv devs weren't totally against the functionality I needed, it's just not something they had considered important. Maybe the problem here is that there isn't a good set of "developing apps in Python" best practices (as opposed to the library development situation - use install_requires, test with tox, ...), so I didn't know the "recommended" solution to my problem, that pipenv would have been expecting me to use. Maybe it's a chicken-and-egg situation - the "best practice" is to use pipenv, but until pipenv gets to encounter all the various use cases, that "best practice" doesn't properly cover every situation... Paul From ncoghlan at gmail.com Tue Jan 16 05:44:38 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jan 2018 20:44:38 +1000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On 16 January 2018 at 20:22, Paul Moore wrote: > On 16 January 2018 at 10:03, Nick Coghlan wrote: >> On 16 January 2018 at 19:47, Paul Moore wrote: >>> I think that if the pipenv docs had some better guidance on what use >>> cases it was intended to cover (and what it wasn't, in relation to the >>> broader range of pip+virtualenv use cases) that would help people >>> better understand its place in the ecosystem. >> >> That's fair, and making the PyPUG tutorial specifically about >> *application* dependency management was an initial step in that >> direction - for the library development use case, you're generally >> going to step out of pipenv's world as soon as you try to run your >> tests across multiple versions. >> >> The basic usage constraint is that pipenv expects you to control your >> target Python version, and it expects you to have exactly one - to go >> beyond that (as is needed for multi-version library support), you'll >> still need to drop down to the lower level tooling, either skipping >> the use of pipenv entirely, or else by using `pipenv lock >> --requirements` to shift levels.
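[Editor's note: a minimal sketch of the "shift levels" escape hatch described above, assuming a tox setup that installs from a plain requirements file:]

    # export just the locked Python-level dependencies from Pipfile.lock
    pipenv lock --requirements > requirements.txt

    # tox.ini (fragment) - each target interpreter installs from that export
    [testenv]
    deps = -rrequirements.txt
    commands = pytest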
> > New subject as I don't want to hijack the original thread to rehash my > > old issue, but I do want to make a couple of points on this (and I > > agree, it's hard to find a good forum to discuss things like this as > > it stands). > > > > 1. "pipenv expects you to control your target Python version" - I'm > > not 100% sure what that means, but if it's saying "pipenv is only > > really for code that will only be used with a single Python version" > > then that's basically excluding a huge chunk of Python development. > > Specifically, library development of all forms. Yes, that's deliberate. We want to target app developers as our initial audience, since library and framework developers have different needs (and for folks just starting out with library development, pipenv + the latest version of Python is actually fine - matrix testing only comes into play once folks actually want to support older versions, perhaps because the version they started out with is no longer the latest one). > Admittedly my > experience of what's "typical" is biased strongly by the communities I > work with, but conversely the "writing standalone apps in Python" > community doesn't really seem to have a web presence that we can > promote pipenv through, so it's becoming visible via the "general > Python development" community, which is quite biased to libraries, and so > is not the right target audience, from what you say. As I noted in my original comment, pipenv only applies to library and framework development insofar as you can treat your test suite as an "app" to be deployed to a developer's machine (the deployment mechanism is "git clone", but it's still a deployment). Where that currently breaks down is as soon as you're testing across multiple versions, where the combination of wheel files and environment markers means the current lock file format runs into trouble (but so do requirements files). > 2. My specific issues weren't outside that constraint - I *was* > writing code that would only ever need to run under Python 3.6. My > problem was the need for "local" build dependencies, which seems > entirely within that use case. In fairness, the pipenv devs weren't > totally against the functionality I needed, it's just not something > they had considered important. Maybe the problem here is that there > isn't a good set of "developing apps in Python" best practices (as > opposed to the library development situation - use install_requires, > test with tox, ...), so I didn't know the "recommended" solution to my > problem, that pipenv would have been expecting me to use. Maybe it's a > chicken-and-egg situation - the "best practice" is to use pipenv, but > until pipenv gets to encounter all the various use cases, that "best > practice" doesn't properly cover every situation... Local build dependencies are within scope, but pipenv doesn't magically fix the development resource constraint problem :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Jan 16 06:21:26 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jan 2018 11:21:26 +0000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On 16 January 2018 at 10:44, Nick Coghlan wrote: > Yes, that's deliberate.
We want to target app developers as our > initial audience, since library and framework developers have > different needs (and for folks just starting out with library > development, pipenv + the latest version of Python is actually fine - > matrix testing only comes into play once folks actually want to > support older versions, perhaps because the version they started out > with is no longer the latest one). Having that up front in the pipenv docs/webpage would probably help communicate the intention better. At the moment, it's pretty hard to find. And the general "pipenv is cool!" enthusiasm (which I agree with - it is :-)) tends to encourage people to try it out for whatever their use case is (hence Chris' original post). Also, there's a quite common recommendation around to "build your app as a library and have its main program as a console script entry point" - tools like pip, tox, pytest, pew, pipenv itself, etc take that approach, for example. So separating "app developers" and "library developers" still leaves a fairly large grey area. > Local build dependencies are within scope, but pipenv doesn't > magically fix the development resource constraint problem :) Understood - as I said, the pipenv devs were very helpful, it just took a bit of time to establish what I was trying to do, and that it *was* something that they saw the need to support. Actually implementing that support can take as long as it needs - I appreciate the "so much to do, so little time" problem. Better publicised "Python application development workflow" best practices might have helped save a little of our time in establishing I had a valid use case and they intended to support it but didn't yet. That's all I'm really saying. Paul From thomas at kluyver.me.uk Tue Jan 16 06:42:29 2018 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Tue, 16 Jan 2018 11:42:29 +0000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: <1516102949.2240487.1236971072.112C0C72@webmail.messagingengine.com> Allow me to prod this topic again. ;-) I'm happy with PEP 566 as it stands. Do we want to specify 'email body is long description' in this PEP? It appears to have at least some real world support, but I'm not familiar enough with the email metadata format to write a proper description of it. Thanks, Thomas On Fri, Jan 12, 2018, at 4:02 PM, Daniel Holth wrote: > Yes, after the PEP is prep'd. > > On Fri, Jan 12, 2018 at 11:00 AM Alex Grönholm > wrote: >> On the same note, wheel currently writes "2.0" as its metadata > version. >> Shouldn't this be changed into 1.3 (along with ditching > metadata.json)? >> >> >> Thomas Kluyver kirjoitti 12.01.2018 klo 17:26: >> > On Wed, Jan 10, 2018, at 11:42 PM, Nick Coghlan wrote: >> >> On 11 January 2018 at 00:54, Daniel Holth >> >> wrote: >> >>> AFAICT the only missing feature from old-Metadata-2.0 is >> >>> "description as >> >>> message body", which places readable description text after the >> >>> key/value >> >>> pairs. >> >> Do pip/PyPI/et al currently support that? >> > It looks like twine supports it, at least for wheels: >> > https://github.com/pypa/twine/blob/f74eae5506300387572c65c9dbfe240d927788c2/twine/wheel.py#L99 >> > >> > I don't think pip needs to support it (does pip do anything with >> > descriptions?).
I haven't looked at PyPI's code, but I'd guess it >> > uses the metadata sent with the upload by tools like twine and >> > flit. >> > >> > Thomas >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > _________________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From dholth at gmail.com Tue Jan 16 10:44:25 2018 From: dholth at gmail.com (Daniel Holth) Date: Tue, 16 Jan 2018 15:44:25 +0000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: <1516102949.2240487.1236971072.112C0C72@webmail.messagingengine.com> References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> <1516102949.2240487.1236971072.112C0C72@webmail.messagingengine.com> Message-ID: This is the old text. Describing the distribution =========================== The distribution metadata should include a longer description of the distribution that may run to several paragraphs. Software that deals with metadata should not assume any maximum size for the description. The recommended location for the description is in the metadata payload, separated from the header fields by at least one completely blank line (that is, two successive line separators with no other characters between them, not even whitespace). Alternatively, the description may be provided in the `Description`__ metadata header field. Providing both a ``Description`` field and a payload is an error. __ `Description (optional, deprecated)`_ The distribution description can be written using reStructuredText markup [1]_. For programs that work with the metadata, supporting markup is optional; programs may also display the contents of the field as plain text without any special formatting. This means that authors should be conservative in the markup they use. On Tue, Jan 16, 2018 at 6:44 AM Thomas Kluyver wrote: > Allow me to prod this topic again. ;-) > > I'm happy with PEP 566 as it stands. > > Do we want to specify 'email body is long description' in this PEP? It > appears to have at least some real world support, but I'm not familiar > enough with the email metadata format to write a proper description of it. > > Thanks, > Thomas > > On Fri, Jan 12, 2018, at 4:02 PM, Daniel Holth wrote: > > Yes, after the PEP is prep'd. > > On Fri, Jan 12, 2018 at 11:00 AM Alex Grönholm > wrote: > > On the same note, wheel currently writes "2.0" as its metadata version. > Shouldn't this be changed into 1.3 (along with ditching metadata.json)?
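[Editor's note: the "description as message body" layout quoted in the old text above means a PKG-INFO/METADATA file shaped like an email message, e.g. (all field values invented):]

    Metadata-Version: 2.0
    Name: example-dist
    Version: 1.0
    Summary: An example distribution

    This is the long description. It sits after the key/value headers,
    separated from them by one completely blank line, and may run to
    several paragraphs of reStructuredText.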
> > Thomas Kluyver kirjoitti 12.01.2018 klo 17:26: > > On Wed, Jan 10, 2018, at 11:42 PM, Nick Coghlan wrote: > >> On 11 January 2018 at 00:54, Daniel Holth wrote: > >>> AFAICT the only missing feature from old-Metadata-2.0 is "description > as > >>> message body", which places readable description text after the > key/value > >>> pairs. > >> Do pip/PyPI/et al currently support that? > > It looks like twine supports it, at least for wheels: > > > https://github.com/pypa/twine/blob/f74eae5506300387572c65c9dbfe240d927788c2/twine/wheel.py#L99 > > > > I don't think pip needs to support it (does pip do anything with > descriptions?). I haven't looked at PyPI's code, but I'd guess it uses the > metadata sent with the upload by tools like twine and flit. > > > > Thomas > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From njs at pobox.com Tue Jan 16 11:58:02 2018 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Jan 2018 08:58:02 -0800 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: On Jan 12, 2018 8:00 AM, "Alex Grönholm" wrote: On the same note, wheel currently writes "2.0" as its metadata version. Shouldn't this be changed into 1.3 (along with ditching metadata.json)? Should wheel change to emit 1.3, or should the PEP change to become 2.0? I know there were great hopes for "metadata 2.0", but given that there are bazillions of packages out there with a metadata version of 2.0, we're never going to be able to meaningfully use that version number for anything else, and it's confusing if when reading package metadata the ordering is 1.2 < 2.0 == 1.3 < 1.4. So maybe we should declare that this update is 2.0 or 2.1, the next one will be 2.1 or 2.2, etc., and if anyone asks why the major version bump, well, it's hardly the strangest thing we've done for compatibility :-). (And in the unlikely event that PEP 426 lurches back to life, we can make it 3.0.) -n From alex.gronholm at nextday.fi Tue Jan 16 12:07:59 2018 From: alex.gronholm at nextday.fi (Alex Grönholm) Date: Tue, 16 Jan 2018 19:07:59 +0200 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: Whichever we choose, the metadata version should match the PEP version, which it currently does not. Nathaniel Smith kirjoitti 16.01.2018 klo 18:58: > On Jan 12, 2018 8:00 AM, "Alex Grönholm" > wrote: > > On the same note, wheel currently writes "2.0" as its metadata > version.
Shouldn't this be changed into 1.3 (along with ditching > metadata.json)? > > > Should wheel change to emit 1.3, or should the PEP change to become > 2.0? I know there were great hopes for "metadata 2.0", but given that > there are bazillions of packages out there with a metadata version of > 2.0, we're > never going to be able to meaningfully use that version > number for anything else, and it's confusing if when reading package > metadata the ordering is > 1.2 < 2.0 == 1.3 < 1.4. So maybe we should > declare that this update is 2.0 or 2.1, the next one will be 2.1 or > 2.2, etc., and if anyone asks why the major version bump, well, it's > hardly the strangest thing we've done for compatibility :-). (And in > the unlikely event that PEP 426 lurches back to life, we can make it 3.0.) > > -n From nathaniel at google.com Tue Jan 16 10:28:01 2018 From: nathaniel at google.com (Nathaniel Manista) Date: Tue, 16 Jan 2018 07:28:01 -0800 Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI In-Reply-To: References: Message-ID: On Sat, Jan 13, 2018 at 10:39 AM, Matthias Bussonnier < bussonniermatthias at gmail.com> wrote: > > * Very few people actually are using OpenID or Google logins as it is. > In one month we had ~15k logins using the web form, ~5k using basic auth, > and 62 using Google and 7 using OpenID. This is a several orders of > magnitude difference. > > I'm not opposed to open-id/Google-ID removal, but I would love to > login-with-google, though because I already have an account (and can't > associate my google account with the PyPI one) I'm not using login with > google. Also it did not work for as long as I can remember. So the low > number of people actually _using_ it might not reflect people who would > like to use it. Maybe look at the number of people trying and failing? > I also am a user who has always wanted PyPI's OpenID and Google logins to work and has for years never seen them actually work. :-( -Nathaniel From brett at python.org Tue Jan 16 12:36:28 2018 From: brett at python.org (Brett Cannon) Date: Tue, 16 Jan 2018 17:36:28 +0000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On Tue, 16 Jan 2018 at 02:45 Nick Coghlan wrote: > On 16 January 2018 at 20:22, Paul Moore wrote: > > On 16 January 2018 at 10:03, Nick Coghlan wrote: > >> On 16 January 2018 at 19:47, Paul Moore wrote: > >>> I think that if the pipenv docs had some better guidance on what use > >>> cases it was intended to cover (and what it wasn't, in relation to the > >>> broader range of pip+virtualenv use cases) that would help people > >>> better understand its place in the ecosystem. > >> > >> That's fair, and making the PyPUG tutorial specifically about > >> *application* dependency management was an initial step in that > >> direction - for the library development use case, you're generally > >> going to step out of pipenv's world as soon as you try to run your > >> tests across multiple versions.
> >> > >> The basic usage constraint is that pipenv expects you to control your > >> target Python version, and it expects you to have exactly one - to go > >> beyond that (as is needed for multi-version library support), you'll > >> still need to drop down to the lower level tooling, either skipping > >> the use of pipenv entirely, or else by using `pipenv lock > >> --requirements` to shift levels. > > > > New subject as I don't want to hijack the original thread to rehash my > > old issue, but I do want to make a couple of points on this (and I > > agree, it's hard to find a good forum to discuss things like this as > > it stands). > > > > 1. "pipenv expects you to control your target Python version" - I'm > > not 100% sure what that means, but if it's saying "pipenv is only > > really for code that will only be used with a single Python version" > > then that's basically excluding a huge chunk of Python development. > > Specifically, library development of all forms. > > Yes, that's deliberate. We want to target app developers as our > initial audience, since library and framework developers have > different needs (and for folks just starting out with library > development, pipenv + the latest version of Python is actually fine - > matrix testing only comes into play once folks actually want to > support older versions, perhaps because the version they started out > with is no longer the latest one). > Is there a library developer workflow that's being promoted then somewhere that I'm just not finding? Or does that need to be written for packaging.python.org (which I *might* be willing to write; see below for motivation)? At least from a VS Code perspective it would be great to have a target of supporting the workflows as documented at packaging.python.org so people know how they should generally structure things as well as making sure VS Code always supports the modern workflows. And being able to say "your workflow is not normal" is always helpful when dealing with feature requests. :) -Brett > > > Admittedly my > > experience of what's "typical" is biased strongly by the communities I > > work with, but conversely the "writing standalone apps in Python" > > community doesn't really seem to have a web presence that we can > > promote pipenv through, so it's becoming visible via the "general > > Python development" community, which is quite biased to libraries, and so > > is not the right target audience, from what you say. > > As I noted in my original comment, pipenv only applies to library and > framework development insofar as you can treat your test suite as an > "app" to be deployed to a developer's machine (the deployment mechanism > is "git clone", but it's still a deployment). > > Where that currently breaks down is as soon as you're testing across > multiple versions, where the combination of wheel files and > environment markers means the current lock file format runs into > trouble (but so do requirements files). > > > 2. My specific issues weren't outside that constraint - I *was* > > writing code that would only ever need to run under Python 3.6. My > > problem was the need for "local" build dependencies, which seems > > entirely within that use case. In fairness, the pipenv devs weren't > > totally against the functionality I needed, it's just not something > > they had considered important.
Maybe the problem here is that there > > isn't a good set of "developing apps in Python" best practices (as > > opposed to the library development situation - use install_requires, > > test with tox, ...), so I didn't know the "recommended" solution to my > > problem, that pipenv would have been expecting me to use. Maybe it's a > > chicken-and-egg situation - the "best practice" is to use pipenv, but > > until pipenv gets to encounter all the various use cases, that "best > > practice" doesn't properly cover every situation... > > Local build dependencies are within scope, but pipenv doesn't > magically fix the development resource constraint problem :) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From dholth at gmail.com Tue Jan 16 13:14:24 2018 From: dholth at gmail.com (Daniel Holth) Date: Tue, 16 Jan 2018 18:14:24 +0000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: 5 On Tue, Jan 16, 2018 at 12:08 PM Alex Grönholm wrote: > Whichever we choose, the metadata version should match the PEP version, > which it currently does not. > > Nathaniel Smith kirjoitti 16.01.2018 klo 18:58: > > On Jan 12, 2018 8:00 AM, "Alex Grönholm" wrote: > > On the same note, wheel currently writes "2.0" as its metadata version. > Shouldn't this be changed into 1.3 (along with ditching metadata.json)? > > > Should wheel change to emit 1.3, or should the PEP change to become 2.0? I > know there were great hopes for "metadata 2.0", but given that there are > bazillions of packages out there with a metadata version of 2.0, we're > never going to be able to meaningfully use that version number for anything > else, and it's confusing if when reading package metadata the ordering is > 1.2 < 2.0 == 1.3 < 1.4. So maybe we should declare that this update is 2.0 > or 2.1, the next one will be 2.1 or 2.2, etc., and if anyone asks why the > major version bump, well, it's hardly the strangest thing we've done for > compatibility :-). (And in the unlikely event that PEP 426 lurches back to > life, we can make it 3.0.) > > -n > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Tue Jan 16 13:42:58 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jan 2018 18:42:58 +0000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On 16 January 2018 at 17:36, Brett Cannon wrote: > Is there a library developer workflow that's being promoted then somewhere > that I'm just not finding? Or does that need to be written for > packaging.python.org (which I might be willing to write; see below for > motivation)? Possibly not as such, but there's a fairly broad consensus over such things as: * Use tox to test your code against the versions you want to use * Declare dependencies with install_requires and leave the version requirements loose Maybe some other things I'm not thinking of right now.
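[Editor's note: a minimal sketch of the second bullet above — declaring loose dependencies in setup.py; the project and requirement names are placeholders:]

    from setuptools import setup, find_packages

    setup(
        name="example-lib",
        version="1.0",
        packages=find_packages(),
        # set floors you have actually tested against,
        # but don't pin exact versions in a library
        install_requires=[
            "requests>=2.0",
            "attrs>=17.1",
        ],
    )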
But basically most projects I see use a broadly similar structure, and they are all libraries. There are nowhere near as many good examples of Python applications that I know of, and little obvious consensus (for example, there's nothing I've ever seen that suggests a "standard" method of deploying applications - the nearest thing I know of is "write the app as a library and use a console entry point", which is pretty much the opposite of a separate app development standard :-) > At least from a VS Code perspective it would be great to have a target of > supporting the workflows as documented at packaging.python.org so people > know how they should generally structure things as well as making sure VS > Code always supports the modern workflows. And being able to say "your > workflow is not normal" is always helpful when dealing with feature > requests. :) That's my feeling too - if we have reasonable community consensus on "normal workflows", it's a lot easier to define boundaries on what applications like pipenv or VS Code will support. Consensus is hard to come by, though... Paul From chris at withers.org Tue Jan 16 14:01:24 2018 From: chris at withers.org (Chris Withers) Date: Tue, 16 Jan 2018 19:01:24 +0000 Subject: [Distutils] library development patterns In-Reply-To: References: Message-ID: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> Okay, so let's be up front: pipenv is not for libraries or reusable apps, it's for deployments of re-usable apps or development of single-use application code. I think that's a great aim and covers *all* the end use cases of Python at its extreme. However, library devs, and I'd lump reusable app devs in that too... On 16/01/2018 17:36, Brett Cannon wrote: > > Is there a library developer workflow that's being promoted then > somewhere that I'm just not finding? Or does that need to be written for > packaging.python.org (which I /might/ be > willing to write; see below for motivation)? Well, I can only speak from my experience as a maintainer of lots of small libraries that are occasionally heavily used... What's worked well for me is specifying dependency ranges in setup.py and using 'build' and 'test' extras to indicate libraries that are needed to build artifacts for readthedocs or pypi, the latter for running automated tests. I generally use pip install -e . in a checkout to set up a development environment but beyond this I think things branch out a lot: How do you do axis development? (often python version, but can be a major version of a dependency such as django, or operating system, or for the lucky masochists out there: a dot product of each of those...) For me, I use travis-ci coupled with a few local virtualenvs for canary versions. Some people like tox here, I never got on with it. Then it's "what testrunner do you use?", I'm gradually moving to pytest, but I still have a lot of nose 1. As far as build and deployment, again, travis's tag-based deployment model that pushes artifacts to pypi, coupled with readthedocs pull-and-publish works for the things I care about. Then I guess you could talk about issue trackers and, indeed, community discussion channels ;-) I wonder how much of this makes sense to put in a how-to for library developers and where it branches out into areas where there are multiple legitimate choices?
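[Editor's note: a sketch of the 'build'/'test' extras arrangement Chris describes — the packages listed are typical choices for docs/release tooling and test runs, not necessarily his:]

    from setuptools import setup, find_packages

    setup(
        name="example-lib",
        version="1.0",
        packages=find_packages(),
        extras_require={
            # tooling needed to build docs and release artifacts
            "build": ["sphinx", "wheel", "twine"],
            # everything the automated test run needs
            "test": ["pytest", "coverage"],
        },
    )

Installing into a development checkout then looks like "pip install -e .[test]".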
cheers, Chris From brett at python.org Tue Jan 16 14:21:35 2018 From: brett at python.org (Brett Cannon) Date: Tue, 16 Jan 2018 19:21:35 +0000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On Tue, 16 Jan 2018 at 10:43 Paul Moore wrote: > On 16 January 2018 at 17:36, Brett Cannon wrote: > > Is there a library developer workflow that's being promoted then > somewhere > > that I'm just not finding? Or does that need to be written for > > packaging.python.org (which I might be willing to write; see below for > > motivation)? > > Possibly not as such, but there's a fairly broad consensus over such > things as: > > * Use tox to test your code against the versions you want to use > * Declare dependencies with install_requires and leave the version > requirements loose > > Maybe some other things I'm not thinking of right now. But basically > most projects I see use a broadly similar structure, and they are all > libraries. There are nowhere near as many good examples of Python > applications that I know of, and little obvious consensus (for > example, there's nothing I've ever seen that suggests a "standard" > method of deploying applications - the nearest thing I know of is > "write the app as a library and use a console entry point", which is > pretty much the opposite of a separate app development standard :-) > > > > At least from a VS Code perspective it would be great to have a target of > > supporting the workflows as documented at packaging.python.org so people > > know how they should generally structure things as well as making sure VS > > Code always supports the modern workflows. And being able to say "your > > workflow is not normal" is always helpful when dealing with feature > > requests. :) > > That's my feeling too - if we have reasonable community consensus on > "normal workflows", it's a lot easier to define boundaries on what > applications like pipenv or VS Code will support. Consensus is hard to > come by, though... > Well, technically *we* need consensus so we agree that what goes up on packaging.python.org makes sense; I'm not worried about the whole world. :) I mean I'm not even after specifically recommended testing practices and such since that seems outside the realm of packaging.python.org. I'm more interested in e.g. how should people handle venvs when they support more than one version of Python? Documenting not pinning your dependency versions. How do you specify what is required for testing and such so that you can generically know how to get those dependencies installed in your venv if you don't want your eventual wheel to include those test-only dependencies? Basically I'm after a tutorial on how to handle packaging while developing a library. I view "use tox, use pytest" as something under TIP's purview to create a tutorial/guide for. From sh at changeset.nyc Tue Jan 16 14:34:02 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Tue, 16 Jan 2018 14:34:02 -0500 Subject: [Distutils] barriers to Warehouse contribution Message-ID: <82dfb926-d5de-44af-85b3-47ffbad8fbc4@changeset.nyc> I've sent a question to pypa-dev https://groups.google.com/forum/#!topic/pypa-dev/U6NO55bExbE which can be summarized: what should we fix or set up to make it easier for you to hack on Warehouse, especially with a goal of a Warehouse/packaging sprint at PyCon in May?
I'd appreciate discussion on pypa-dev to keep it in one place (that list tends to get ~15 posts/month), but if you aren't subscribed there and don't want to be, go ahead & reply here. :) -- Sumana Harihareswara Changeset Consulting https://changeset.nyc From brett at python.org Tue Jan 16 14:13:31 2018 From: brett at python.org (Brett Cannon) Date: Tue, 16 Jan 2018 19:13:31 +0000 Subject: [Distutils] library development patterns In-Reply-To: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> Message-ID: On Tue, 16 Jan 2018 at 11:00 Chris Withers wrote: > Okay, so let's be up front: pipenv is not for libraries or reusable apps, > it's for deployments of re-usable apps or development of single-use > application code. I think that's a great aim and covers *all* the end > use cases of Python at its extreme. > > However, library devs, and I'd lump reusable app devs in that too... > > On 16/01/2018 17:36, Brett Cannon wrote: > > > > Is there a library developer workflow that's being promoted then > > somewhere that I'm just not finding? Or does that need to be written for > > packaging.python.org (which I /might/ be > > willing to write; see below for motivation)? > > Well, I can only speak from my experience as a maintainer of lots of > small libraries that are occasionally heavily used... > > What's worked well for me is specifying dependency ranges in setup.py > and using 'build' and 'test' extras to indicate libraries that are > needed to build artifacts for readthedocs or pypi, the latter for > running automated tests. > This works if you use setuptools, but e.g. flit doesn't support extras dependencies so this isn't a general solution yet. > > I generally use pip install -e . in a checkout to set up a development > environment but beyond this I think things branch out a lot: > > How do you do axis development? (often python version, but can be a > major version of a dependency such as django, or operating system, or > for the lucky masochists out there: a dot product of each of those...) > > For me, I use travis-ci coupled with a few local virtualenvs for canary > versions. Some people like tox here, I never got on with it. > This is part of what I would want us to come to a consensus on. For example, do people just create a venv per Python version they want to test/support, do they use pew or some other tool I don't know about? For VS Code we need to know how to detect what Python interpreters you might want to use with your workspace/folder so we know what interpreters to present to you to select from (and you *have* to select one for when you do things like want to execute a file or run tests). > > Then it's "what testrunner do you use?", I'm gradually moving to pytest, > but I still have a lot of nose 1. > > As far as build and deployment, again, travis's tag-based deployment > model that pushes artifacts to pypi, coupled with readthedocs > pull-and-publish works for the things I care about. Then I guess you > could talk about issue trackers and, indeed, community discussion > channels ;-) > > > I wonder how much of this makes sense to put in a how-to for library > developers and where it branches out into areas where there are multiple > legitimate choices? > I would assume that what is discussed on packaging.python.org would be scoped to what https://packaging.python.org/tutorials/managing-dependencies/ covers, which is just stuff related to dependency management. What test runner to use, etc.
seems out-of-scope for there and starts to fall into blog posts and such (e.g. if TIP wants to start a testing.python.org and have tutorials there they probably could, but otherwise there is no official testing location like there is for packaging). From fungi at yuggoth.org Tue Jan 16 15:33:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 16 Jan 2018 20:33:07 +0000 Subject: [Distutils] library development patterns In-Reply-To: References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> Message-ID: <20180116203307.twqsdvtnwtsp63ap@yuggoth.org> On 2018-01-16 19:13:31 +0000 (+0000), Brett Cannon wrote: > On Tue, 16 Jan 2018 at 11:00 Chris Withers wrote: [...] > > I generally use pip install -e . in a checkout to set up a development > > environment but beyond this I think things branch out a lot: > > > > How do you do axis development? (often python version, but can be a > > major version of a dependency such as django, or operating system, or > > for the lucky masochists out there: a dot product of each of those...) > > > > For me, I use travis-ci coupled with a few local virtualenvs for canary > > versions. Some people like tox here, I never got on with it. > > > > This is part of what I would want us to come to a consensus on. For > example, do people just create a venv per Python version they want to > test/support, do they use pew or some other tool I don't know about? For VS > Code we need to know how to detect what Python interpreters you might want > to use with your workspace/folder so we know what interpreters to present > to you to select from (and you *have *to select one for when you do things > like want to execute a file or run tests). [...] At least with tox you get this more or less automagically (I know plenty of people aren't tox fans, but it still merits pointing out). For those unfamiliar, it has implicit environments defined for minor (in the SemVer sense) Python versions so your project can define a list of which versions it's intended to support and then anyone running it by default gets tests executed under each of those for which they have a viable interpreter on their path. My local dev environment includes from-source builds of the latest point release for each minor Python version my projects intend to support plus the most recent alpha/beta/rc for the next unreleased Python 3.x, though I'll admit that keeping up with the various build-deps and compile-time optimizations for each of them is mildly time-consuming (as is bootstrapping a new dev environment when the need arises). -- Jeremy Stanley From ncoghlan at gmail.com Tue Jan 16 21:51:15 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jan 2018 12:51:15 +1000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On 17 January 2018 at 05:21, Brett Cannon wrote: > Well, technically we need consensus so we agree that what goes up on > packaging.python.org makes sense; I'm not worried about the whole world. :) > I mean I'm not even after specifically recommended testing practices and > such since that seems outside the realm of packaging.python.org. I'm more > interested in e.g. how should people handle venvs when they support more > than one version of Python? Documenting not pinning your dependency > versions.
How do you specify what is required for testing and such so that > you can generically know how to get those dependencies installed in your > venv if you don't want to have your eventual wheel to include those > test-only dependencies? Basically I'm after a tutorial on how to handle > packaging while developing a library. I view "use tox, use pytest" as > something under TIP's purview to create a tutorial/guide for. Right, one of the things we worked out when Jon wrote the pipenv-based application dependency management guide was that "Install this library to my user directory so that I, the human Nick Coghlan, can use it in my personal scripts" is a different task from declaring application dependencies, and the subsequent feedback we've received (including this thread) shows that library/framework dependency management is slightly different again (and dependency management for portable applications that work across multiple Python versions will look more like the latter than the former). I think tox provides a good precedent for what's needed when it comes to effectively targeting multiple environments: you want a venv per target environment per project, not just a venv per project. However, while that's a configuration I'd personally like to see us better *enable* in pipenv, I wouldn't want it to come at the expense of the core notion of a "preferred deployment environment" as represented by Pipfile.lock, and I'd also prefer that it didn't come at the expense of making the lock file format itself more complex. One way we could do that is to add a "--lockfile" option to both "pipenv lock" and the "pipenv sync" subcommand proposed in https://github.com/pypa/pipenv/issues/1255, and a "--venv" option to the latter, such that the workflow for libraries/frameworks/portable applications would be: * use Pipfile/Pipfile.lock as normal for your "recommended development environment" (e.g. the most recent supported Python version for access to the most up to date developer tools, or perhaps the oldest supported Python version to minimise the risk of breaking compatibility). The "interactive" pipenv subcommands (install, uninstall, update, run, shell) would all continue to work specifically on the default lock file and venv. * use "pipenv lock --python TARGET --lockfile PATH_TO_TARGET_LOCKFILE" to generate target specific lockfiles for other environments * use "pipenv sync --lockfile PATH_TO_TARGET_LOCKFILE --venv PATH_TO_TARGET_VENV" to configure virtual environments based on those alternate lock files Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jan 16 22:01:28 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jan 2018 13:01:28 +1000 Subject: [Distutils] library development patterns In-Reply-To: <20180116203307.twqsdvtnwtsp63ap@yuggoth.org> References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org> Message-ID: On 17 January 2018 at 06:33, Jeremy Stanley wrote: > On 2018-01-16 19:13:31 +0000 (+0000), Brett Cannon wrote: >> This is part of what I would want us to come to a consensus on. For >> example, do people just create a venv per Python version they want to >> test/support, do they use pew or some other tool I don't know about? 
For VS >> Code we need to know how to detect what Python interpreters you might want >> to use with your workspace/folder so we know what interpreters to present >> to you to select from (and you *have *to select one for when you do things >> like want to execute a file or run tests). > [...] > > At least with tox you get this more or less automagically (I know > plenty of people aren't tox fans, but it still merits pointing out). > For those unfamiliar, it has implicit environments defined for minor > (in the SemVer sense) Python versions so your project can define a > list of which versions it's intended to support and then anyone > running it by default gets tests executed under each of those for > which they have a viable interpreter on their path. The tox model is the one we decided to natively support in Fedora as well - while there's only ever one "full" Python 3 stack in the main repos (with all the distro API bindings, etc), there are also interpreter-only packages for other still supported and/or still popular Python X.Y branches, and "dnf install tox" will bring in all of them as weak dependencies. Hence my preference for where I think it would make sense to take pipenv in this regard: better *enable* the tox model, without *duplicating* the tox model. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jan 16 23:55:08 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jan 2018 14:55:08 +1000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: On 17 January 2018 at 02:58, Nathaniel Smith wrote: > Should wheel change to emit 1.3, or should the PEP change to become 2.0? I > know there were great hopes for "metadata 2.0", but given that there are > bazillions of packages out there with a metadata version of 2.0, we're never > going to be able to meaningfully use that version number for anything else, > and it's confusing if when reading package metadata the ordering is 1.2 < > 2.0 == 1.3 < 1.4. So maybe we should declare that this update is 2.0 or 2.1, > the next one will be 2.1 or 2.2, etc., and if anyone asks why the major > version bump, well, it's hardly the strangest thing we've done for > compatibility :-). (And in the unlikely event that PEP 426 lurches back to > life, we can make it 3.0.) While I never changed the title, PEP 426 actually already specifies 3.0 in https://www.python.org/dev/peps/pep-0426/#metadata-version for exactly this reason :) The reason for *not* making PEP 566 a major version bump is in case anyone actually implemented this draft requirement from PEP 426: "Automated tools consuming metadata SHOULD warn if metadata_version is greater than the highest version they support, and MUST fail if metadata_version has a greater major version than the highest version they support (as described in PEP 440, the major version is the value before the first dot)." Metadata 1.3 is backwards compatible with metadata 1.2, so it should keep the same major version number. 
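[Editor's note: the draft PEP 426 rule quoted above, rendered as straight-line Python — a sketch of the intended behaviour, not any tool's actual implementation:]

    import warnings

    # highest metadata version this hypothetical tool understands
    SUPPORTED = (1, 3)

    def check_metadata_version(value):
        major, minor = (int(part) for part in value.split(".")[:2])
        if major > SUPPORTED[0]:
            # "MUST fail" on a greater major version
            raise ValueError("unsupported metadata version: " + value)
        if (major, minor) > SUPPORTED:
            # "SHOULD warn" on anything newer within the same major version
            warnings.warn("metadata version %s is newer than supported" % value)

    check_metadata_version("1.2")  # fine
    check_metadata_version("1.4")  # warns
    # check_metadata_version("2.0") would raise ValueError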
It's also an open question at this point whether or not there will ever *be* a metadata 2.0, since we've never worked out a way to resolve the chicken-and-egg adoption problem that arises from proposing to publish metadata in a format that existing client tools don't know how to read (hence the change in PEP 566 to treat the JSON form as something to be *derived* from the line-oriented key/value form, rather than as a replacement for it). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Wed Jan 17 00:32:01 2018 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Jan 2018 21:32:01 -0800 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: On Tue, Jan 16, 2018 at 8:55 PM, Nick Coghlan wrote: > The reason for *not* making PEP 566 a major version bump is in case > anyone actually implemented this draft requirement from PEP 426: > "Automated tools consuming metadata SHOULD warn if metadata_version is > greater than the highest version they support, and MUST fail if > metadata_version has a greater major version than the highest version > they support (as described in PEP 440, the major version is the value > before the first dot)." From a quick glance at 'git annotate', it appears that every wheel built between 2013 and now has used metadata_version=2.0. So I think we can be pretty sure that no-one is implementing this recommendation! Or if they are, then they've coded their tools to assume that they *do* understand metadata_version=2.0, which is even worse. That's the advantage of bumping to 2.0 now -- it keeps our ordering linear, so that we have the option of implementing a rule like the one from PEP 426 in the future without breaking existing packages. Otherwise we're in this weird position where we have to teach our tools that just because they understand 1.3 and 2.0 doesn't mean they understand 1.4. -n -- Nathaniel J. Smith -- https://vorpus.org From di at di.codes Wed Jan 17 19:22:11 2018 From: di at di.codes (Dustin Ingram) Date: Wed, 17 Jan 2018 18:22:11 -0600 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: Hi all, Thanks very much for all your suggestions and feedback. I want to take a moment to summarize & respond to some outstanding issues from this thread & previous threads. [0][1][2] First, I'd like to reiterate that the goal of this PEP is to: 1. Define the Core Metadata document as the source for the specification; 2. Motivate, describe and formalize any new fields already in this spec; 3. Resolve any differences between this specification and other accepted PEPs. With that in mind: > How should multiple author-email and maintainer-email addresses be specified? These fields already accept legal RFC-822 style headers, which include the possibility for multiple comma-separated addresses. Note, however, that Warehouse currently incorrectly rejects such fields. [3] > Should we add security-email and/or security-disclosure-instructions? Let's defer this to a future version as it is not included in the current spec. > Version specifiers and OR clauses Let's defer this to a future version as it is not included in the current spec.
> Replacing hyphens with underscores when transforming to JSON-compatible metadata This change was added to the draft. > Differences with parentheses in version specifiers between PEP 345 and PEP 508 This change was added to the draft. > AFAICT the only missing feature from old-Metadata-2.0 is "description as > message body", which places readable description text after the key/value > pairs. Since twine does not currently support this for anything other than wheels, and since it wouldn't be backwards-compatible, I'm inclined to defer this to a future version. > Metadata 1.3 vs Metadata 2.0 I agree with Nick here that since this version is backwards-compatible, it should remain Metadata 1.3. In addition, I think we should avoid overloading the already-in-use "2.0" version as possibly being either a "PEP 566 flavor 2.0" or a "PEP 426 flavor 2.0". > Conclusion I'm happy with this PEP as it stands, and I think it's ready to be formally submitted for Daniel's review. Thanks, D. [0]: https://mail.python.org/pipermail/distutils-sig/2017-December/031788.html [1]: https://mail.python.org/pipermail/distutils-sig/2017-December/031805.html [2]: https://mail.python.org/pipermail/distutils-sig/2017-December/031814.html [3]: https://github.com/pypa/warehouse/issues/2679 From ncoghlan at gmail.com Wed Jan 17 23:10:37 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jan 2018 14:10:37 +1000 Subject: [Distutils] pipenv use cases In-Reply-To: References: Message-ID: On 17 January 2018 at 12:51, Nick Coghlan wrote: > I think tox provides a good precedent for what's needed when it comes > to effectively targeting multiple environments: you want a venv per > target environment per project, not just a venv per project. After double-checking the status quo, I'll note that simply doing `pipenv install --dev` should already work pretty well for testing use cases (even if you use environment markers): https://docs.pipenv.org/advanced/#testing-projects Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Jan 17 23:58:54 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jan 2018 14:58:54 +1000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: On 18 January 2018 at 10:22, Dustin Ingram wrote: >> Metadata 1.3 vs Metadata 2.0 > > I agree with Nick here that since this version is backwards-compatible, it > should remain Metadata 1.3. > > In addition, I think we should avoid overloading the already-in-use "2.0" > version as possibly being either a "PEP 566 flavor 2.0" or a "PEP 426 flavor > 2.0". Nathaniel raised a good point, which is that client tools have already been accepting "bdist_wheel-flavoured metadata 2.0" for years, so we can be confident no client tools are actually rejecting metadata versions that start with "2.x". Given that, I think it would be reasonable to finally Withdraw PEP 426 (rather than continuing to defer it), and have PEP 566 define metadata version 2.1, so that it's unambiguously the latest metadata version. I wouldn't hold up the PEP over that (since I think calling it metadata 1.3 is also fine), but I just wanted to note that my original objection to the idea was ill-founded. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From j.orponen at 4teamwork.ch Thu Jan 18 09:14:02 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Thu, 18 Jan 2018 15:14:02 +0100 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: On Thu, Jan 18, 2018 at 5:58 AM, Nick Coghlan wrote: > On 18 January 2018 at 10:22, Dustin Ingram wrote: > >> Metadata 1.3 vs Metadata 2.0 > > > > I agree with Nick here that since this version is backwards-compatible, > it > > should remain Metadata 1.3. > > > > In addition, I think we should avoid overloading the already-in-use "2.0" > > version as possibly being either a "PEP 566 flavor 2.0" or a "PEP 426 > flavor > > 2.0". > > Nathaniel raised a good point, which is that client tools have already > been accepting "bdist_wheel-flavoured metadata 2.0" for years, so we > can be confident no client tools are actually rejecting metadata > versions that start with "2.x". > > Given that, I think it would be reasonable to finally Withdraw PEP 426 > (rather than continuing to defer it), and have PEP 566 define metadata > version 2.1, so that it's unambiguously the latest metadata version. > Jump straight to 3.0 to clear out any confusion and/or ambiguity on the next backwards-incompatible one? -- Joni Orponen From di at di.codes Thu Jan 18 11:37:09 2018 From: di at di.codes (Dustin Ingram) Date: Thu, 18 Jan 2018 10:37:09 -0600 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: > Given that, I think it would be reasonable to finally Withdraw PEP 426 > (rather than continuing to defer it), and have PEP 566 define metadata > version 2.1, so that it's unambiguously the latest metadata version. I'm amenable to any version number that is > 1.2 and not 2.0. D. From dholth at gmail.com Thu Jan 18 11:56:29 2018 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jan 2018 16:56:29 +0000 Subject: [Distutils] PEP 566 - metadata 1.3 changes In-Reply-To: References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com> Message-ID: >1.2 and not 2.0 is correct. I took a look at pkg_resources. It doesn't read Metadata-Version at all. It only cares about Version, and in wheels Requires-Dist and Provides-Extra. Everything else is ignored. So PEP 566 won't break anything there as long as someone checks that pkg_resources can handle the optional parentheses which seems likely. On Thu, Jan 18, 2018 at 11:37 AM Dustin Ingram wrote: > > Given that, I think it would be reasonable to finally Withdraw PEP 426 > > (rather than continuing to defer it), and have PEP 566 define metadata > > version 2.1, so that it's unambiguously the latest metadata version. > > I'm amenable to any version number that is > 1.2 and not 2.0. > > D. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig >
URL:

From di at di.codes Thu Jan 18 12:23:29 2018
From: di at di.codes (Dustin Ingram)
Date: Thu, 18 Jan 2018 11:23:29 -0600
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

It does, it's using the `packaging` module under the hood:

>>> from pkg_resources import Requirement
>>> Requirement.parse("requests >= 2.8.1") == Requirement.parse("requests (>= 2.8.1)")
True

D.

On Thu, Jan 18, 2018 at 10:56 AM, Daniel Holth wrote:
>>1.2 and not 2.0 is correct.
>
> I took a look at pkg_resources. It doesn't read Metadata-Version at all. It only cares about Version, and in wheels Requires-Dist and Provides-Extra. Everything else is ignored. So PEP 566 won't break anything there as long as someone checks that pkg_resources can handle the optional parentheses, which seems likely.
>
> On Thu, Jan 18, 2018 at 11:37 AM Dustin Ingram wrote:
>> > Given that, I think it would be reasonable to finally Withdraw PEP 426 (rather than continuing to defer it), and have PEP 566 define metadata version 2.1, so that it's unambiguously the latest metadata version.
>>
>> I'm amenable to any version number that is > 1.2 and not 2.0.
>>
>> D.
>> _______________________________________________
>> Distutils-SIG maillist - Distutils-SIG at python.org
>> https://mail.python.org/mailman/listinfo/distutils-sig

From barry at python.org Thu Jan 18 14:07:56 2018
From: barry at python.org (Barry Warsaw)
Date: Thu, 18 Jan 2018 11:07:56 -0800
Subject: [Distutils] library development patterns
In-Reply-To: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org>
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org>
Message-ID:

Chris Withers wrote:
> For me, I use travis-ci coupled with a few local virtualenvs for canary versions. Some people like tox here, I never got on with it.

For me, tox is transformative. While there are a couple of usability issues that my clone army seems to be remiss in fixing, for the most part tox is awesome. In fact, I usually don't put any of my test dependencies in setup.py any more; these all go in my tox.ini. It integrates well with flake8, and lets me define all the various testing matrices I care about (e.g. all the Python versions w/ and w/o coverage, "qa" environments for flake8, and a "docs" environment for ensuring the docs build (and building them as a nice side benefit). tox of course manages all the venvs nicely, so it's easy to drop into them when necessary.

tox doesn't yet play as nicely with flit/pyproject.toml, and I wish tox supported environment groups and null generated environments, but I've mostly found ways around those minor problems.

> Then it's "what testrunner do you use?", I'm gradually moving to pytest, but I still have a lot of nose 1.

I have a ton of nose2 tests, which I really like, but I'm gradually moving to pytest too. I'm getting over my cosmetic complaints about pytest, since it really is a very robust framework and does seem to have the most momentum.

> As far as build and deployment, again, travis's tag-based deployment model that pushes artifacts to pypi, coupled with readthedocs pull-and-publish works for the things I care about. Then I guess you could talk about issue trackers and, indeed, community discussion channels ;-)

Yep, RTD is pretty darn awesome; I'm a fan of GitLab for various reasons that I suppose are off topic for this thread.
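(Coming back to tox for a second: here's a minimal sketch of the shape of tox.ini I'm describing above -- the env names and tool choices below are invented for illustration, not lifted from a real project of mine.)

[tox]
envlist = py27,py35,py36,qa,docs

[testenv]
; test deps live here rather than in setup.py
deps =
    pytest
    coverage
commands = pytest {posargs}

[testenv:qa]
deps = flake8
commands = flake8 src tests

[testenv:docs]
deps = sphinx
; -W turns Sphinx warnings into errors, so doc breakage fails the run
commands = sphinx-build -W docs docs/_build/html

Running plain `tox` builds every venv in envlist; `tox -e qa` or `tox -e docs` runs just the one environment.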
> I wonder how much of this makes sense to put in a how-to for library developers and where it branches out into areas where there are multiple legitimate choices?

Variation and competition are good; they keep pushing the state of the art even higher. Some day we'll all be amazed that we used setup.py files for a couple of decades in Python's early history. I think that means it's worth at least describing in blog posts and such, but there's not going to be a one-size-fits-all solution. I do love hearing about how others do it, because it gives me the ootz I need to try other alternatives I might like better.

-Barry

From barry at python.org Thu Jan 18 14:12:32 2018
From: barry at python.org (Barry Warsaw)
Date: Thu, 18 Jan 2018 11:12:32 -0800
Subject: [Distutils] library development patterns
In-Reply-To:
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org>
Message-ID:

Nick Coghlan wrote:
> The tox model is the one we decided to natively support in Fedora as well - while there's only ever one "full" Python 3 stack in the main repos (with all the distro API bindings, etc), there are also interpreter-only packages for other still supported and/or still popular Python X.Y branches, and "dnf install tox" will bring in all of them as weak dependencies.
>
> Hence my preference for where I think it would make sense to take pipenv in this regard: better *enable* the tox model, without *duplicating* the tox model.

I'm a big fan of the tox model. It works great on Debian/Ubuntu where you can have multiple Python 3 interpreters (with some shared infrastructure) during transitions, and macOS development where you might have multiple versions of Python installed from brew/fink/macports and from-source installations, including the current Python development versions. It also works well for things like https://gitlab.com/python-devs/ci-images/tree/master

tox provides a nice, easy to invoke and remember CLI, good separation of concerns (e.g. runtime deps in setup.py, test deps in tox.ini), and convenient management of venvs.

Cheers,
-Barry

From barry at python.org Thu Jan 18 14:20:02 2018
From: barry at python.org (Barry Warsaw)
Date: Thu, 18 Jan 2018 11:20:02 -0800
Subject: [Distutils] Deprecating/Removing OpenID/Google login support for PyPI
In-Reply-To:
References:
Message-ID:

Donald Stufft wrote:
>
> * Very few people actually are using OpenID or Google logins as it is. In one month we had ~15k logins using the web form, ~5k using basic auth, and 62 using Google and 7 using OpenID. This is a several orders of magnitude difference.
> * Regardless of how you log into PyPI (Password or Google/OpenID) you're required to have a password added to your account to actually upload anything to PyPI. This negates much of the benefit of a federated authentication for PyPI as it stands.
> * Keeping these requires ongoing maintenance to deal with any changes in the specification or to update as Google deprecates/changes things.
> * Adding support for them to Warehouse requires additional work that could better be used elsewhere, where it would have a higher impact.

I'm one of those 7, but I really can't argue for you to keep supporting it just for *me* :). Have you considered allowing developers to use their GitHub, GitLab, Bitbucket logins? Those three probably cover a large majority of package authors on PyPI. I don't know how hard that would be to support though.
-Barry

From chris.jerdonek at gmail.com Thu Jan 18 15:57:05 2018
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Thu, 18 Jan 2018 20:57:05 +0000
Subject: [Distutils] library development patterns
In-Reply-To:
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org>
Message-ID:

I haven't yet seen pyenv mentioned in this discussion. Having the ability to switch between Python versions for interactive exploration seems like an important piece for

On Thu, Jan 18, 2018 at 11:18 AM Barry Warsaw wrote:
> Nick Coghlan wrote:
> > The tox model is the one we decided to natively support in Fedora as well - while there's only ever one "full" Python 3 stack in the main repos (with all the distro API bindings, etc), there are also interpreter-only packages for other still supported and/or still popular Python X.Y branches, and "dnf install tox" will bring in all of them as weak dependencies.
> >
> > Hence my preference for where I think it would make sense to take pipenv in this regard: better *enable* the tox model, without *duplicating* the tox model.
>
> I'm a big fan of the tox model. It works great on Debian/Ubuntu where you can have multiple Python 3 interpreters (with some shared infrastructure) during transitions, and macOS development where you might have multiple versions of Python installed from brew/fink/macports and from-source installations, including the current Python development versions. It also works well for things like https://gitlab.com/python-devs/ci-images/tree/master
>
> tox provides a nice, easy to invoke and remember CLI, good separation of concerns (e.g. runtime deps in setup.py, test deps in tox.ini), and convenient management of venvs.
>
> Cheers,
> -Barry
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.jerdonek at gmail.com Thu Jan 18 16:04:55 2018
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Thu, 18 Jan 2018 13:04:55 -0800
Subject: [Distutils] library development patterns
In-Reply-To:
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org>
Message-ID:

[Oops, my phone weirdly sent that email prematurely.]

I haven't yet seen pyenv mentioned in this discussion. Having the ability to switch between Python versions for interactive exploration seems like an important piece for library development, and pyenv makes this really easy. My only complaint is that I've had problems with tox and pyenv playing nice together. (IIRC, you need to create a pyenv virtualenv a certain way for tox to be able to use it, and I think there are some pyenv issues about this that have always been marked as "wontfix.")

--Chris

On Thu, Jan 18, 2018 at 12:57 PM, Chris Jerdonek wrote:
>
> I haven't yet seen pyenv mentioned in this discussion.
> Having the ability to switch between Python versions for interactive exploration seems like an important piece for
>
> On Thu, Jan 18, 2018 at 11:18 AM Barry Warsaw wrote:
>> Nick Coghlan wrote:
>> > The tox model is the one we decided to natively support in Fedora as well - while there's only ever one "full" Python 3 stack in the main repos (with all the distro API bindings, etc), there are also interpreter-only packages for other still supported and/or still popular Python X.Y branches, and "dnf install tox" will bring in all of them as weak dependencies.
>> >
>> > Hence my preference for where I think it would make sense to take pipenv in this regard: better *enable* the tox model, without *duplicating* the tox model.
>>
>> I'm a big fan of the tox model. It works great on Debian/Ubuntu where you can have multiple Python 3 interpreters (with some shared infrastructure) during transitions, and macOS development where you might have multiple versions of Python installed from brew/fink/macports and from-source installations, including the current Python development versions. It also works well for things like https://gitlab.com/python-devs/ci-images/tree/master
>>
>> tox provides a nice, easy to invoke and remember CLI, good separation of concerns (e.g. runtime deps in setup.py, test deps in tox.ini), and convenient management of venvs.
>>
>> Cheers,
>> -Barry
>>
>> _______________________________________________
>> Distutils-SIG maillist - Distutils-SIG at python.org
>> https://mail.python.org/mailman/listinfo/distutils-sig

From chris.jerdonek at gmail.com Thu Jan 18 16:13:10 2018
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Thu, 18 Jan 2018 13:13:10 -0800
Subject: [Distutils] library development patterns
In-Reply-To:
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org>
Message-ID:

PS - this is the pyenv / tox compatibility issue I had in mind:
https://github.com/pyenv/pyenv-virtualenv/issues/202

And this I have found is the simplest workaround:
https://github.com/pyenv/pyenv-virtualenv/issues/202#issuecomment-284728205

On Thu, Jan 18, 2018 at 1:04 PM, Chris Jerdonek wrote:
> [Oops, my phone weirdly sent that email prematurely.]
>
> I haven't yet seen pyenv mentioned in this discussion. Having the ability to switch between Python versions for interactive exploration seems like an important piece for library development, and pyenv makes this really easy. My only complaint is that I've had problems with tox and pyenv playing nice together. (IIRC, you need to create a pyenv virtualenv a certain way for tox to be able to use it, and I think there are some pyenv issues about this that have always been marked as "wontfix.")
>
> --Chris
>
> On Thu, Jan 18, 2018 at 12:57 PM, Chris Jerdonek wrote:
>>
>> I haven't yet seen pyenv mentioned in this discussion. Having the ability to switch between Python versions for interactive exploration seems like an important piece for
>>
>> On Thu, Jan 18, 2018 at 11:18 AM Barry Warsaw wrote:
>>> Nick Coghlan wrote:
>>> > The tox model is the one we decided to natively support in Fedora as well - while there's only ever one "full" Python 3 stack in the main repos (with all the distro API bindings, etc), there are also interpreter-only packages for other still supported and/or still popular Python X.Y branches, and "dnf install tox" will bring in all of them as weak dependencies.
>>> >
>>> > Hence my preference for where I think it would make sense to take pipenv in this regard: better *enable* the tox model, without *duplicating* the tox model.
>>>
>>> I'm a big fan of the tox model. It works great on Debian/Ubuntu where you can have multiple Python 3 interpreters (with some shared infrastructure) during transitions, and macOS development where you might have multiple versions of Python installed from brew/fink/macports and from-source installations, including the current Python development versions. It also works well for things like https://gitlab.com/python-devs/ci-images/tree/master
>>>
>>> tox provides a nice, easy to invoke and remember CLI, good separation of concerns (e.g. runtime deps in setup.py, test deps in tox.ini), and convenient management of venvs.
>>>
>>> Cheers,
>>> -Barry
>>>
>>> _______________________________________________
>>> Distutils-SIG maillist - Distutils-SIG at python.org
>>> https://mail.python.org/mailman/listinfo/distutils-sig

From j.orponen at 4teamwork.ch Thu Jan 18 16:54:13 2018
From: j.orponen at 4teamwork.ch (Joni Orponen)
Date: Thu, 18 Jan 2018 22:54:13 +0100
Subject: [Distutils] library development patterns
In-Reply-To:
References: <43c2c049-0b0f-76bd-4462-46d1ed11ab4b@withers.org> <20180116203307.twqsdvtnwtsp63ap@yuggoth.org>
Message-ID:

On Thu, Jan 18, 2018 at 10:13 PM, Chris Jerdonek wrote:
> PS - this is the pyenv / tox compatibility issue I had in mind:
> https://github.com/pyenv/pyenv-virtualenv/issues/202
>
> And this I have found is the simplest workaround:
> https://github.com/pyenv/pyenv-virtualenv/issues/202#issuecomment-284728205

I find it simpler to just define multiple Pythons in a project-specific .python-version for pyenv, as it simply injects them into your PATH-esque shim resolution mechanism in order.

The content of one example .python-version of mine:

2.7.14
2.7.14/envs/detox
3.4.7
3.5.4
3.6.4

This way the python2.7 that tox finds first is the clean one, and it thus creates a clean, functional virtualenv from it; likewise for the Python 3 variants.

The detox virtualenv is there so the pyenv-provided shim for tox/detox resolves to that as the first hit. I like for my tox tests to run in parallel, but this also works for just plain tox (and then you can choose which Python version you want to install tox for - detox is Python 2.7 only for now).

The detox virtualenv can be initially created via pyenv virtualenv 2.7.14 detox. Then I usually pyenv shell 2.7.14/envs/detox and pip install detox. Or this can also be done via a requirements file, if a project pins it for reproducibility, in which case I'll probably create a project-specific detox/tox virtualenv via pyenv and add that to the project .python-version file, thus fully isolating things from each other and having everything pinned with explicit configuration per project.

And I usually also have a separate 3.6.4/envs/projectname for actually running the project / for interactive work with the project, which I enable via pyenv shell 3.6.4/envs/projectname.

To me this seems the intended way to use pyenv - relying on its core mechanisms instead of piling up layers of workarounds.
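Spelled out as shell commands, the whole setup above is roughly the following (version numbers as in my example; adjust to taste):

# one-time: a dedicated virtualenv for the test runner
pyenv virtualenv 2.7.14 detox
pyenv shell 2.7.14/envs/detox
pip install detox

# per-project: list the interpreters the shims should resolve, in order
cd yourproject
printf '%s\n' 2.7.14 2.7.14/envs/detox 3.4.7 3.5.4 3.6.4 > .python-version

# the detox shim now resolves to the dedicated env, and it finds clean
# python2.7/python3.x interpreters to build its test virtualenvs from
detox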
A simpler setup might be:

3.6.4
3.6.4/envs/projectname
3.5.4
3.4.7
2.7.14

But one needs to remember to pyenv shell 3.6.4/envs/projectname in order to pip install anything to the correct environment, since pip3.6 would resolve to the global one (and one should try to keep that pristine if trying to do things this way, even if the point of virtualenv is to isolate you from the system site-packages). At least the way pyenv modifies the shell prompt keeps things helpful enough for me to get by, but it is one more layer to keep track of.

--
Joni Orponen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Fri Jan 19 22:07:27 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 20 Jan 2018 13:07:27 +1000
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

On 19 January 2018 at 00:14, Joni Orponen wrote:
> On Thu, Jan 18, 2018 at 5:58 AM, Nick Coghlan wrote:
>> Given that, I think it would be reasonable to finally Withdraw PEP 426 (rather than continuing to defer it), and have PEP 566 define metadata version 2.1, so that it's unambiguously the latest metadata version.
>
> Jump straight to 3.0 to clear out any confusion and/or ambiguity on the next backwards-incompatible one?

While we could do that, wheel's use of "2.0" actually stems from early drafts of PEP 426, and PEP 566 *is* backwards compatible with that.

So I like 2.1 - higher than everything previously used, but an incremental update to the early versions of 426 before we/I started imagining a ground up redesign of the metadata definition.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From alex.gronholm at nextday.fi Sat Jan 20 05:58:06 2018
From: alex.gronholm at nextday.fi (Alex Grönholm)
Date: Sat, 20 Jan 2018 12:58:06 +0200
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

+1 from me. While I dislike the fact that "2.0" was put to use prematurely, using "2.1" is still less confusing than going from 2.0 to 1.3.

Nick Coghlan wrote on 20.01.2018 at 05:07:
> On 19 January 2018 at 00:14, Joni Orponen wrote:
>> On Thu, Jan 18, 2018 at 5:58 AM, Nick Coghlan wrote:
>>> Given that, I think it would be reasonable to finally Withdraw PEP 426 (rather than continuing to defer it), and have PEP 566 define metadata version 2.1, so that it's unambiguously the latest metadata version.
>> Jump straight to 3.0 to clear out any confusion and/or ambiguity on the next backwards-incompatible one?
> While we could do that, wheel's use of "2.0" actually stems from early drafts of PEP 426, and PEP 566 *is* backwards compatible with that.
>
> So I like 2.1 - higher than everything previously used, but an incremental update to the early versions of 426 before we/I started imagining a ground up redesign of the metadata definition.
>
> Cheers,
> Nick.
From di at di.codes Mon Jan 22 21:42:52 2018
From: di at di.codes (Dustin Ingram)
Date: Mon, 22 Jan 2018 20:42:52 -0600
Subject: [Distutils] PEP 566 - metadata 1.3 changes
In-Reply-To:
References: <1515591876.1977781.1230551776.08931300@webmail.messagingengine.com> <1515770800.3215267.1233239040.16CD87E7@webmail.messagingengine.com>
Message-ID:

I've updated the PEP to use "2.1" as the version: https://www.python.org/dev/peps/pep-0566/

D.

From sh at changeset.nyc Tue Jan 23 22:36:12 2018
From: sh at changeset.nyc (Sumana Harihareswara)
Date: Tue, 23 Jan 2018 22:36:12 -0500
Subject: [Distutils] Fwd: Warehouse update: role management & welcoming first-time contributors
In-Reply-To:
References:
Message-ID: <2c6f7e31-4b56-b218-3f09-9e12a7e03296@changeset.nyc>

Forwarding from pypa-dev.

-------- Forwarded Message --------
Subject: Warehouse update: role management & welcoming first-time contributors
Date: Tue, 23 Jan 2018 22:33:46 -0500
From: Sumana Harihareswara
To: pypa-dev at googlegroups.com

In the past week, the Warehouse team's continued making progress despite a few of us getting sick. The biggest news is that the master branch now includes the foundation for a bunch of useful UI for maintainers. Several people collaborated on a role management feature[0] so a project Owner can add and remove Maintainer and Owner roles for their projects. This enables us to work on further release management features. We made progress on more improvements, including to developer experience, that you'll see in future updates. And thanks to Srinivas Garlapati for starting a password reset feature PR that we were able to finish up and merge.[1]

We've turned a number of umbrella issues into more specific issues for the maintainer MVP milestone[2] which we continue working on. And if you're looking for a good first issue as you start contributing to Warehouse, there's one in our current milestone we'd love help with: "Valid `Author-email` and `Maintainer-email` fields are rejected".[3] If you are or know someone who wants to be a first-time contributor, check out Ernest's offer of neat stickers and mentorship time![4]

As we get closer to the maintainer MVP milestone, we're preparing to publicize it and future milestones, including to developers who don't usually watch distribution and packaging discussions. So we're making lists of places to post notices, and we're using PyPI data and libraries.io to find projects and maintainers to personally contact. And we're working on future announcement channels, e.g., banners and a special announcement mailing list.[5]

Once again, thanks to Mozilla for their support for this project.[6] More next week!

[0] https://github.com/pypa/warehouse/pull/2705
[1] https://github.com/pypa/warehouse/pull/2764
[2] https://github.com/pypa/warehouse/milestone/8
[3] https://github.com/pypa/warehouse/issues/2679
[4] https://twitter.com/EWDurbin/status/955413628408205313
[5] https://github.com/python/psf-infra-meta/issues/1
[6] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc

From pradyunsg at gmail.com Fri Jan 26 10:11:17 2018
From: pradyunsg at gmail.com (Pradyun Gedam)
Date: Fri, 26 Jan 2018 15:11:17 +0000
Subject: [Distutils] Installed Extras Metadata
Message-ID:

Hello! I hope everyone's had a great start to 2018! :)

A few months back, while working on pip, I noticed an oddity about extras.
Installing a package with extras would not store information about the fact that the extras were requested. This means, later, it is not possible to know which extra-based optional dependencies of a package have to be considered when verifying that the packages are compatible with each other. This information is relevant for resolution/validation since without it, it is not possible to know which extra-requirements to care about.

As an example, installing ``requests[security]`` and then uninstalling ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was asked for but there's no way to detect that either.

Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.

To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.

Cheers!
Pradyun
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex.gronholm at nextday.fi Fri Jan 26 10:39:18 2018
From: alex.gronholm at nextday.fi (Alex Grönholm)
Date: Fri, 26 Jan 2018 17:39:18 +0200
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID: <385d4900-d4fa-01c0-5874-0301ca57a624@nextday.fi>

Pradyun Gedam wrote on 26.01.2018 at 17:11:
> Hello! I hope everyone's had a great start to 2018! :)
>
> A few months back, while working on pip, I noticed an oddity about extras.
>
> Installing a package with extras would not store information about the fact that the extras were requested. This means, later, it is not possible to know which extra-based optional dependencies of a package have to be considered when verifying that the packages are compatible with each other. This information is relevant for resolution/validation since without it, it is not possible to know which extra-requirements to care about.
>
> As an example, installing ``requests[security]`` and then uninstalling ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was asked for but there's no way to detect that either.

What here is specific to extras really? "pip uninstall" does not consider dependencies anyway and will happily let you uninstall whatever you want, even if it's a dependency of some still-installed distribution.

> Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.
>
> To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.
>
> Cheers!
> Pradyun
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Fri Jan 26 10:51:05 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 26 Jan 2018 15:51:05 +0000
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On 26 January 2018 at 15:11, Pradyun Gedam wrote:
> Installing a package with extras would not store information about the fact that the extras were requested.
> This means, later, it is not possible to know which extra-based optional dependencies of a package have to be considered when verifying that the packages are compatible with each other. This information is relevant for resolution/validation since without it, it is not possible to know which extra-requirements to care about.
>
> As an example, installing ``requests[security]`` and then uninstalling ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was asked for but there's no way to detect that either.

1. pip uninstall doesn't check validity, so there's no issue for the uninstall
2. pip check needs this information if it's to complain, and I believe that's the key point in your question

I think that if we want pip check to validate this situation, we need to store the data when we install. Where we store it needs to be decided - I'd assume it would go in the dist-info directory for requests somewhere, and that's the bit of metadata that needs agreeing.

Is there any other place where current functionality needs this information, *apart* from pip check? Are there any proposed additional features that might need it?

> Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.
>
> To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.

Looks OK to me.

But I don't know how important it is to satisfy this use case. I've never needed this feature (I don't think I've ever used pip check, and I've very rarely used extras) so I won't comment on that.

Paul

PS I know we talked a bit off-list about this and I said I didn't have any opinion. I'd misunderstood what you were suggesting a bit, mostly because the conversation veered off into uninstalling requests[security] and what that means... So now that you've restated the issue, I have a bit of an opinion (although still not much :-))

From pradyunsg at gmail.com Fri Jan 26 11:07:53 2018
From: pradyunsg at gmail.com (Pradyun Gedam)
Date: Fri, 26 Jan 2018 16:07:53 +0000
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On Fri, 26 Jan 2018, 21:21 Paul Moore, wrote:
> On 26 January 2018 at 15:11, Pradyun Gedam wrote:
> > Installing a package with extras would not store information about the fact that the extras were requested. [...]
>
> 1. pip uninstall doesn't check validity, so there's no issue for the uninstall
> 2. pip check needs this information if it's to complain, and I believe that's the key point in your question

Indeed. Thanks for adding this clarification. I guess this answers Alex's question as well.

> I think that if we want pip check to validate this situation, we need to store the data when we install.
> Where we store it needs to be decided - I'd assume it would go in the dist-info directory for requests somewhere, and that's the bit of metadata that needs agreeing.
>
> Is there any other place where current functionality needs this information, *apart* from pip check? Are there any proposed additional features that might need it?

Yes. I currently have a branch where during pip install, after the packages to be installed are selected, it does a pip check on what would be the installed packages if the installation is successful. This would mean that if we do select an incompatible version for installation, pip install would be able to print a warning. In other words, a pip check would be done during every pip install run.

> > Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.
> >
> > To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.
>
> Looks OK to me.
>
> But I don't know how important it is to satisfy this use case. I've never needed this feature (I don't think I've ever used pip check, and I've very rarely used extras) so I won't comment on that.
>
> Paul
>
> PS I know we talked a bit off-list about this and I said I didn't have any opinion. I'd misunderstood what you were suggesting a bit, mostly because the conversation veered off into uninstalling requests[security] and what that means... So now that you've restated the issue, I have a bit of an opinion (although still not much :-))

Hehe. No problems. :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Fri Jan 26 11:46:01 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 26 Jan 2018 16:46:01 +0000
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On 26 January 2018 at 16:07, Pradyun Gedam wrote:
>> Is there any other place where current functionality needs this information, *apart* from pip check? Are there any proposed additional features that might need it?
>
> Yes. I currently have a branch where during pip install, after the packages to be installed are selected, it does a pip check on what would be the installed packages if the installation is successful. This would mean that
This would mean > that > > if we do select an incompatible version for installation, pip install > would > > be able to print a warning. > > I'd rather see us get a proper solver that would mean this should > never happen :-) > > Yep, I am working on that. This is more of a short-term thing that I'm thinking of, just in case the proper solver isn't ready in time for pip 10. :) Aside, I've just posted an update over at #988 -- https://github.com/pypa/pip/issues/988#issuecomment-360846457 -- about the state of the resolver work and the aforementioned branch. > Paul > -- Pradyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradyunsg at gmail.com Fri Jan 26 12:44:59 2018 From: pradyunsg at gmail.com (Pradyun Gedam) Date: Fri, 26 Jan 2018 17:44:59 +0000 Subject: [Distutils] Installed Extras Metadata In-Reply-To: References: Message-ID: On Fri, 26 Jan 2018 at 23:10 Pradyun Gedam wrote: > Resending coz I clicked the wrong button. > > On Fri, 26 Jan 2018 at 22:16 Paul Moore wrote: > >> On 26 January 2018 at 16:07, Pradyun Gedam wrote: >> >> Is there any other place where current functionality needs this >> >> information, *apart* from pip check? Are there any proposed additional >> >> features that might need it? >> > >> > Yes. I currently have a branch where during pip install, after the >> packages >> > to be installed are selected, it does a pip check on what would be the >> > installed packages if the installation is successful. This would mean >> that >> > if we do select an incompatible version for installation, pip install >> would >> > be able to print a warning. >> >> I'd rather see us get a proper solver that would mean this should >> never happen :-) > > >> > Yep, I am working on that. This is more of a short-term thing that I'm > thinking of, just in case the proper solver isn't ready in time for pip 10. > :) > And the resolver will need this extras-installed metadata as well. > Aside, I've just posted an update over at #988 -- > https://github.com/pypa/pip/issues/988#issuecomment-360846457 -- about > the state of the resolver work and the aforementioned branch. > > >> Paul >> > > -- > > Pradyun > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jan 26 21:36:59 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 27 Jan 2018 12:36:59 +1000 Subject: [Distutils] Installed Extras Metadata In-Reply-To: References: Message-ID: On 27 January 2018 at 03:44, Pradyun Gedam wrote: > > On Fri, 26 Jan 2018 at 23:10 Pradyun Gedam wrote: > >> Resending coz I clicked the wrong button. >> >> On Fri, 26 Jan 2018 at 22:16 Paul Moore wrote: >> >>> On 26 January 2018 at 16:07, Pradyun Gedam wrote: >>> >> Is there any other place where current functionality needs this >>> >> information, *apart* from pip check? Are there any proposed additional >>> >> features that might need it? >>> > >>> > Yes. I currently have a branch where during pip install, after the >>> packages >>> > to be installed are selected, it does a pip check on what would be the >>> > installed packages if the installation is successful. This would mean >>> that >>> > if we do select an incompatible version for installation, pip install >>> would >>> > be able to print a warning. >>> >>> I'd rather see us get a proper solver that would mean this should >>> never happen :-) >> >> >>> >> Yep, I am working on that. This is more of a short-term thing that I'm >> thinking of, just in case the proper solver isn't ready in time for pip 10. 
>> :)
>
> And the resolver will need this extras-installed metadata as well.

`pipenv lock` could likely also benefit from this (although in our case we can also check `Pipfile` to see if any extras have been explicitly requested).

From the point of view of post-install-metadata-readability, something else we don't currently record is the *result* of environment marker expressions at the time of installation. This means that there's no particularly straightforward way to check the portability of an entire virtual environment, whereas if we recorded the environment marker expressions and their outcomes somewhere, then we'd be able to extract a consolidated list of the environmental expectations for that venv.

Ideally, we'd also be able to record somewhere which dependencies had been satisfied from *outside* the venv, again with the goal of extracting a consolidated list of the expectations the venv is placing on the host Python environment.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Fri Jan 26 22:46:44 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 26 Jan 2018 19:46:44 -0800
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On Fri, Jan 26, 2018 at 7:11 AM, Pradyun Gedam wrote:
> Hello! I hope everyone's had a great start to 2018! :)
>
> A few months back, while working on pip, I noticed an oddity about extras.
>
> Installing a package with extras would not store information about the fact that the extras were requested. This means, later, it is not possible to know which extra-based optional dependencies of a package have to be considered when verifying that the packages are compatible with each other. This information is relevant for resolution/validation since without it, it is not possible to know which extra-requirements to care about.
>
> As an example, installing ``requests[security]`` and then uninstalling ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was asked for but there's no way to detect that either.

Another important use case is upgrades: if requests[security] v1 just depends on pyopenssl, and then requests[security] v2 adds a dependency on certifi, and I do

    pip install "requests[security]==1"
    pip upgrade

then upgrade should give me requests[security] == 2, and thus install certifi. But this doesn't work if you don't have any record that 'requests[security]' is even installed :-).

> Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.
>
> To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.

I'm going to put in another plug here for my "reified extras" idea:
https://mail.python.org/pipermail/distutils-sig/2015-October/027364.html

Essentially, the idea is to promote extras to full packages -- normally ones that contain no files, just metadata like dependencies, though that's not a necessary requirement, it's just how we'd interpret existing extras specifications.

Then installing 'requests[security]' would install the 'requests[security]' package, which depends on both 'requests' and 'pyopenssl', and we have a 'requests[security]-$VERSION.dist-info' directory recording that we installed it.
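To make that concrete, the generated 'requests[security]' dist-info could carry a METADATA file along these lines (the version and pins below are invented for illustration, not requests' actual ones):

    Metadata-Version: 2.1
    Name: requests[security]
    Version: 2.18.4
    Requires-Dist: requests (==2.18.4)
    Requires-Dist: pyOpenSSL (>=0.14)
    Requires-Dist: cryptography (>=1.3.4)

and nothing else on disk, since the package itself installs no files.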
The advantages are:

- it's a simpler way to record the information you want here, without adding more special cases to dist-info: most code doesn't even have to know what 'extras' are, just what packages are

- it opens the door to lots more advanced features, like 'foo[test]' being a package that actually contains foo's tests, or build variants like 'numpy[mkl]' being numpy built against the MKL library, or maybe making it possible to track which version of numpy's ABI different packages use. (The latter two cases need some kind of provides: support, which is impossible right now because we don't want to allow random-other-package to say 'provides-dist: cryptography'; but it would be okay if 'numpy[mkl]' said 'provides-dist: numpy', because we know 'numpy[mkl]' and 'numpy' are maintained by the same people.)

I know there's a lot of precedent for this kind of clever use of metadata-only packages in Debian (e.g. search for "metapackages"), and I guess the RPM world probably has similar tricks.

-n

--
Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Fri Jan 26 23:37:36 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 27 Jan 2018 14:37:36 +1000
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On 27 January 2018 at 13:46, Nathaniel Smith wrote:
> The advantages are:
>
> - it's a simpler way to record the information you want here, without adding more special cases to dist-info: most code doesn't even have to know what 'extras' are, just what packages are
>
> - it opens the door to lots more advanced features, like 'foo[test]' being a package that actually contains foo's tests, or build variants like 'numpy[mkl]' being numpy built against the MKL library, or maybe making it possible to track which version of numpy's ABI different packages use. (The latter two cases need some kind of provides: support, which is impossible right now because we don't want to allow random-other-package to say 'provides-dist: cryptography'; but it would be okay if 'numpy[mkl]' said 'provides-dist: numpy', because we know 'numpy[mkl]' and 'numpy' are maintained by the same people.)
>
> I know there's a lot of precedent for this kind of clever use of metadata-only packages in Debian (e.g. search for "metapackages"), and I guess the RPM world probably has similar tricks.

While I agree with this idea in principle, I'll note that RPM makes it relatively straightforward to have a single SRPM emit multiple RPMs, so defining a metapackage is just a few extra lines in a spec file. (I'm not sure how Debian's metapackages work, but I believe they're similarly simple on the publisher's side.)

We don't currently have a comparable mechanism to readily allow a single source project to expand to multiple package index entries that all share a common sdist, but include different subsets in their respective wheel files (defining one would definitely be possible, it's just a tricky migration problem to work out).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Sun Jan 28 00:44:51 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 28 Jan 2018 15:44:51 +1000
Subject: [Distutils] PEP 508, environment markers & the possibility of Python 3.10
Message-ID:

Hi folks,

In https://github.com/python/peps/issues/560, a user pointed out that the current definition of python_version in PEP 508 assumes single-digit major and minor version numbers:

    platform.python_version()[:3]

There's a reasonable chance we'll see 3.10 rather than 4.0 in a few years' time, at which point that definition would break.

The suggested fix is to amend that definition to be:

    ".".join(platform.python_version_tuple()[:2])

This seems like a good suggestion to me, so my inclination is to handle this in a similar way to https://www.python.org/dev/peps/pep-0440/#summary-of-changes-to-pep-440: fix it in place, and add a section at the end of the PEP listing the post-publication changes.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From njs at pobox.com Sun Jan 28 01:27:35 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 27 Jan 2018 22:27:35 -0800
Subject: [Distutils] PEP 508, environment markers & the possibility of Python 3.10
In-Reply-To:
References:
Message-ID:

+1 to all of that.

On Sat, Jan 27, 2018 at 9:44 PM, Nick Coghlan wrote:
> Hi folks,
>
> In https://github.com/python/peps/issues/560, a user pointed out that the current definition of python_version in PEP 508 assumes single-digit major and minor version numbers:
>
>     platform.python_version()[:3]
>
> There's a reasonable chance we'll see 3.10 rather than 4.0 in a few years' time, at which point that definition would break.
>
> The suggested fix is to amend that definition to be:
>
>     ".".join(platform.python_version_tuple()[:2])
>
> This seems like a good suggestion to me, so my inclination is to handle this in a similar way to https://www.python.org/dev/peps/pep-0440/#summary-of-changes-to-pep-440: fix it in place, and add a section at the end of the PEP listing the post-publication changes.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

--
Nathaniel J. Smith -- https://vorpus.org

From pradyunsg at gmail.com Sun Jan 28 06:28:08 2018
From: pradyunsg at gmail.com (Pradyun Gedam)
Date: Sun, 28 Jan 2018 11:28:08 +0000
Subject: [Distutils] PEP 508, environment markers & the possibility of Python 3.10
In-Reply-To:
References:
Message-ID:

On Sun, 28 Jan 2018 at 11:15 Nick Coghlan wrote:
> Hi folks,
>
> In https://github.com/python/peps/issues/560, a user pointed out that the current definition of python_version in PEP 508 assumes single-digit major and minor version numbers:
>
>     platform.python_version()[:3]
>
> There's a reasonable chance we'll see 3.10 rather than 4.0 in a few years' time, at which point that definition would break.
>
> The suggested fix is to amend that definition to be:
>
>     ".".join(platform.python_version_tuple()[:2])
>
> This seems like a good suggestion to me, so my inclination is to handle this in a similar way to https://www.python.org/dev/peps/pep-0440/#summary-of-changes-to-pep-440: fix it in place, and add a section at the end of the PEP listing the post-publication changes.
>
> Cheers,
> Nick.
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

+1 from me too! :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex.gronholm at nextday.fi Sun Jan 28 07:12:37 2018
From: alex.gronholm at nextday.fi (Alex Grönholm)
Date: Sun, 28 Jan 2018 14:12:37 +0200
Subject: [Distutils] PEP 508, environment markers & the possibility of Python 3.10
In-Reply-To:
References:
Message-ID: <1881d537-1e41-c78b-6893-e1b69ab23f3e@nextday.fi>

Hasn't Guido said publicly that 4.0 would be the next version from 3.9 (since he hates double digits)?

Nick Coghlan wrote on 28.01.2018 at 07:44:
> Hi folks,
>
> In https://github.com/python/peps/issues/560, a user pointed out that the current definition of python_version in PEP 508 assumes single-digit major and minor version numbers:
>
>     platform.python_version()[:3]
>
> There's a reasonable chance we'll see 3.10 rather than 4.0 in a few years' time, at which point that definition would break.
>
> The suggested fix is to amend that definition to be:
>
>     ".".join(platform.python_version_tuple()[:2])
>
> This seems like a good suggestion to me, so my inclination is to handle this in a similar way to https://www.python.org/dev/peps/pep-0440/#summary-of-changes-to-pep-440: fix it in place, and add a section at the end of the PEP listing the post-publication changes.
>
> Cheers,
> Nick.

From ncoghlan at gmail.com Sun Jan 28 08:28:15 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 28 Jan 2018 23:28:15 +1000
Subject: [Distutils] PEP 508, environment markers & the possibility of Python 3.10
In-Reply-To: <1881d537-1e41-c78b-6893-e1b69ab23f3e@nextday.fi>
References: <1881d537-1e41-c78b-6893-e1b69ab23f3e@nextday.fi>
Message-ID:

On 28 January 2018 at 22:12, Alex Grönholm wrote:
> Hasn't Guido said publicly that 4.0 would be the next version from 3.9 (since he hates double digits)?

No, he overcame that aversion when we published 2.7.10.

So while the assumption was that Guido wouldn't allow 3.10 back when I wrote https://developers.redhat.com/blog/2014/09/17/why-python-4-0-wont-be-like-python-3-0/, the current status is that we won't make a firm decision either way until after the 3.9 maintenance branch is created. There are downsides either way, and weighing them up depends on external factors (like the state of Linux distro migrations) where we won't have access to good information until the decision needs to be made.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From pradyunsg at gmail.com Sun Jan 28 10:01:55 2018
From: pradyunsg at gmail.com (Pradyun Gedam)
Date: Sun, 28 Jan 2018 15:01:55 +0000
Subject: [Distutils] Installed Extras Metadata
In-Reply-To:
References:
Message-ID:

On Sat, 27 Jan 2018 at 09:16 Nathaniel Smith wrote:
> On Fri, Jan 26, 2018 at 7:11 AM, Pradyun Gedam wrote:
> > Hello! I hope everyone's had a great start to 2018! :)
> >
> > A few months back, while working on pip, I noticed an oddity about extras.
> >
> > Installing a package with extras would not store information about the fact that the extras were requested. This means, later, it is not possible to know which extra-based optional dependencies of a package have to be considered when verifying that the packages are compatible with each other.
> > This information is relevant for resolution/validation since without it, it is not possible to know which extra-requirements to care about.
> >
> > As an example, installing ``requests[security]`` and then uninstalling ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was asked for but there's no way to detect that either.
>
> Another important use case is upgrades: if requests[security] v1 just depends on pyopenssl, and then requests[security] v2 adds a dependency on certifi, and I do
>
>     pip install "requests[security]==1"
>     pip upgrade
>
> then upgrade should give me requests[security] == 2, and thus install certifi. But this doesn't work if you don't have any record that 'requests[security]' is even installed :-).

Yes! Essentially, if there's a situation where a package may be modified, we should care about having this information, to ensure it still satisfies the extra's requirements, which may themselves change when the base package changes.

> > Thus, obviously, I'm interested in making pip able to store this information. As I understand, this needs to be specified in a PEP and/or on PyPUG's specification page.
> >
> > To that end, here's a seed proposal for the discussion: a new `extras-requested.txt` file in the .dist-info directory, storing the extra names in a one-per-line format.
>
> I'm going to put in another plug here for my "reified extras" idea:
> https://mail.python.org/pipermail/distutils-sig/2015-October/027364.html
>
> Essentially, the idea is to promote extras to full packages -- normally ones that contain no files, just metadata like dependencies, though that's not a necessary requirement, it's just how we'd interpret existing extras specifications.
>
> Then installing 'requests[security]' would install the 'requests[security]' package, which depends on both 'requests' and 'pyopenssl', and we have a 'requests[security]-$VERSION.dist-info' directory recording that we installed it.

I like this. This is how I'm modelling extras within the resolver currently, by just considering extras as just-another-requirement and having them depend on the base package and the extra dependencies. Prof. Justin Cappos had suggested this to me. I imagine this'll result in simplification somewhere due to this consistency between what the resolver consumes and what's on the disk.

I think if we go this way, we should probably aim for just the equivalent of Debian's metapackages for now. The rest of the advanced features can be brought in at a later stage.

> The advantages are:
>
> - it's a simpler way to record the information you want here, without adding more special cases to dist-info: most code doesn't even have to know what 'extras' are, just what packages are
>
> - it opens the door to lots more advanced features, like 'foo[test]' being a package that actually contains foo's tests, or build variants like 'numpy[mkl]' being numpy built against the MKL library, or maybe making it possible to track which version of numpy's ABI different packages use. (The latter two cases need some kind of provides: support, which is impossible right now because we don't want to allow random-other-package to say 'provides-dist: cryptography'; but it would be okay if 'numpy[mkl]' said 'provides-dist: numpy', because we know 'numpy[mkl]' and 'numpy' are maintained by the same people.)
> I know there's a lot of precedent for this kind of clever use of metadata-only packages in Debian (e.g. search for "metapackages"), and I guess the RPM world probably has similar tricks.
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From barney.gale at gmail.com Sun Jan 28 15:31:12 2018
From: barney.gale at gmail.com (Barney Gale)
Date: Sun, 28 Jan 2018 20:31:12 +0000
Subject: [Distutils] Any interest in Perforce support in pip?
Message-ID:

Hi all,

Is there any interest in support for Perforce in pip? Perforce is a commercial version control system used in mostly corporate environments. It's also used for FreeBSD development. I have a pull request against the pypa/pip project (#4857) that allows the following:

    pip install p4+p4://host/depot/path

... but as the pip developers lack experience with Perforce, I've been directed to this mailing list to gauge whether this is a feature worth adding. I'm aware that Perforce is used in the game and film industries, but beyond a github issue requesting support (#2754) I'm not sure who else might use this! I'd appreciate any feedback. Thanks.

Barney
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robertc at robertcollins.net Sun Jan 28 20:54:30 2018
From: robertc at robertcollins.net (Robert Collins)
Date: Mon, 29 Jan 2018 14:54:30 +1300
Subject: [Distutils] Any interest in Perforce support in pip?
In-Reply-To:
References:
Message-ID:

It won't be testable by most developers; I'd be inclined to say that making the VCS layer a supported plugin point would be a better patch to merge. That would be testable.

On 29 Jan. 2018 13:57, "Barney Gale" wrote:
> Hi all,
>
> Is there any interest in support for Perforce in pip? Perforce is a commercial version control system used in mostly corporate environments. It's also used for FreeBSD development. I have a pull request against the pypa/pip project (#4857) that allows the following:
>
>     pip install p4+p4://host/depot/path
>
> ... but as the pip developers lack experience with Perforce, I've been directed to this mailing list to gauge whether this is a feature worth adding. I'm aware that Perforce is used in the game and film industries, but beyond a github issue requesting support (#2754) I'm not sure who else might use this! I'd appreciate any feedback. Thanks.
>
> Barney
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Sun Jan 28 23:45:44 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 29 Jan 2018 14:45:44 +1000
Subject: [Distutils] Any interest in Perforce support in pip?
In-Reply-To:
References:
Message-ID:

On 29 January 2018 at 11:54, Robert Collins wrote:
> It won't be testable by most developers; I'd be inclined to say that making the VCS layer a supported plugin point would be a better patch to merge. That would be testable.

Right, I suspect without that, the Perforce support would end up intermittently broken, since pre-merge CI wouldn't be able to cover it.
By contrast, if there was a patch that managed a registry of VCS
scheme handlers, and the existing scheme handlers (git, Mercurial,
bzr, svn) were migrated to that, then:

- pip's own pre-merge CI would ensure that the plugin API itself always
  worked
- a separate "pip-vcs-p4" project could require Perforce availability
  to run its CI

As a sketch of a way that could work, pip could define a new "pip.vcs"
entry point group, where the keys were scheme names, and the values
were object references pointing to handlers for those schemes.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From p.f.moore at gmail.com  Mon Jan 29 04:35:30 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 29 Jan 2018 09:35:30 +0000
Subject: [Distutils] Any interest in Perforce support in pip?
In-Reply-To:
References:
Message-ID:

On 29 January 2018 at 04:45, Nick Coghlan wrote:
> On 29 January 2018 at 11:54, Robert Collins wrote:
>> It won't be testable by most developers, so I'd be inclined to say that
>> making the VCS layer a supported plugin point would be a better patch
>> to merge. That would be testable.
>
> Right, I suspect without that, the Perforce support would end up
> intermittently broken, since pre-merge CI wouldn't be able to cover
> it.

The existing PR does actually include tests that install and verify
Perforce functionality. I don't follow all the details, but the intent
is definitely there - Barney has done a good job with the PR.

> By contrast, if there was a patch that managed a registry of VCS
> scheme handlers, and the existing scheme handlers (git, Mercurial,
> bzr, svn) were migrated to that, then:
>
> - pip's own pre-merge CI would ensure that the plugin API itself
>   always worked
> - a separate "pip-vcs-p4" project could require Perforce availability
>   to run its CI

The worst problem here is that without a supported pip API, pip-vcs-p4
has to rely on APIs that could potentially change at short notice,
making plugin versioning a non-trivial problem to solve. I'm personally
fine with plugins relying on internal APIs (on a "you deal with the
risks" basis) but we'd need to give plugins a way to say "I only work
with versions X, Y and Z of pip, as an API I use changed". Maybe
existing dependency methods can handle this, but I've not thought that
through fully.

> As a sketch of a way that could work, pip could define a new "pip.vcs"
> entry point group, where the keys were scheme names, and the values
> were object references pointing to handlers for those schemes.

If that could work (see the comments about coupling to pip versions I
mentioned above) then I think it would be a reasonable option. The
usual caveat applies: it's easy to discuss ideas like this, but much
harder to find anyone to work them through to completion. There's been
at least one previous iteration of this idea (the only one I can find
at the moment is https://github.com/pypa/pip/pull/4146, but that seems
focused on the pip API issue; I think there were others that covered
other aspects of the problem, but my github-fu is failing me right now).

Paul

PS Although I'm accepting the idea of 3rd party VCS plugins working with
pip's internal APIs above, I'm saying this purely on the basis of
looking for a practical way out of the current problem, where we don't
want to bring code to handle reasonable use cases into pip core, but we
don't offer any way for interested parties to maintain it outside of
pip. I fully expect "you changed this internal API" and "VCS plugins
use it, why can't we?" issues to make us regret this decision if we
make it...
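To make the entry-point sketch concrete, a hypothetical out-of-tree
backend might register itself like this. The "pip.vcs" group name, the
"pip-vcs-p4" project, and the Perforce handler class are all assumptions
taken from the sketch above, not an existing pip API:

    # setup.py for a hypothetical pip-vcs-p4 plugin
    from setuptools import setup

    setup(
        name="pip-vcs-p4",
        version="0.1",
        py_modules=["pip_vcs_p4"],
        entry_points={
            "pip.vcs": [
                # URL scheme -> handler object reference
                "p4 = pip_vcs_p4:Perforce",
            ],
        },
    )

pip could then discover every installed handler with the standard
entry-point machinery:

    import pkg_resources

    handlers = {ep.name: ep.load()
                for ep in pkg_resources.iter_entry_points("pip.vcs")}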
From sujeckib116 at gmail.com  Mon Jan 29 09:32:19 2018
From: sujeckib116 at gmail.com (Brian Sujecki)
Date: Mon, 29 Jan 2018 09:32:19 -0500
Subject: [Distutils] Begin to Code With Python
Message-ID:

To whom it may concern:

My name is Brian Sujecki and I am currently reading Begin to Code With
Python. I am unable to import the library snaps even after downloading
the library from the command shell on my Mac. It's as if the shell hasn't
updated the library. Can you help me? It's driving me crazy; I can't
move on.

Brian Sujecki

From brett at python.org  Mon Jan 29 11:38:29 2018
From: brett at python.org (Brett Cannon)
Date: Mon, 29 Jan 2018 16:38:29 +0000
Subject: [Distutils] Begin to Code With Python
In-Reply-To:
References:
Message-ID:

Probably the best place to ask for help with this is the python-tutor
mailing list.

On Mon, Jan 29, 2018, 06:33 Brian Sujecki, wrote:

> To whom it may concern:
> My name is Brian Sujecki and I am currently reading Begin to Code With
> Python. I am unable to import the library snaps even after downloading
> the library from the command shell on my Mac. It's as if the shell
> hasn't updated the library. Can you help me? It's driving me crazy; I
> can't move on.
> Brian Sujecki

From shimizukawa at gmail.com  Mon Jan 29 17:14:22 2018
From: shimizukawa at gmail.com (Takayuki Shimizukawa)
Date: Mon, 29 Jan 2018 22:14:22 +0000
Subject: [Distutils] Current status of ``setup.py register --list-classifiers``
Message-ID:

Hi,

I tried getting a list of Trove Classifiers by running the ``setup.py
register --list-classifiers`` command, but the HTML of
https://upload.pypi.org/legacy/ was displayed on the console instead.
Does this mean that the ``--list-classifiers`` option was broken when the
PyPI endpoint changed? Or was this an intentional change? Either way, I
could not find such a report or announcement.

I confirmed the execution of ``setup.py register --list-classifiers``
with the following combinations.

Python-3.5 + distutils setup: OK
Python-3.5 + setuptools-26 setup: OK
Python-3.5 + setuptools-27 setup: NG
Python-3.5 + setuptools-38 setup: NG

Python-3.6 + distutils setup: NG
Python-3.6 + setuptools-26 setup: NG
Python-3.6 + setuptools-27 setup: NG
Python-3.6 + setuptools-38 setup: NG

Also, in any of the above combinations, it was OK if I specified
https://pypi.python.org/pypi as the repository.

(BTW, although the documentation says "The --repository or -r option
lets you specify a PyPI server different from the default", if there is
no URL specified in ``.pypirc`` it will display "ValueError: not found
in .pypirc")

So, in my understanding, ``register --list-classifiers`` does not work
as expected.

Regards,
--
Takayuki Shimizukawa

From di at di.codes  Mon Jan 29 17:51:30 2018
From: di at di.codes (Dustin Ingram)
Date: Mon, 29 Jan 2018 16:51:30 -0600
Subject: [Distutils] Current status of ``setup.py register --list-classifiers``
In-Reply-To:
References:
Message-ID:

Yes, it seems broken.
With the default repository of:

https://upload.pypi.org/legacy/

this would look at:

https://upload.pypi.org/legacy/?:action=list_classifiers

but the correct location would be:

https://pypi.org/pypi?:action=list_classifiers

Given that the `register` command is unnecessary for pypi.org, it
might make more sense to deprecate it than attempt to fix it.

D.

On Mon, Jan 29, 2018 at 4:14 PM, Takayuki Shimizukawa wrote:
> Hi,
>
> I tried getting a list of Trove Classifiers by running the ``setup.py
> register --list-classifiers`` command, but the HTML of
> https://upload.pypi.org/legacy/ was displayed on the console instead.
> Does this mean that the ``--list-classifiers`` option was broken when
> the PyPI endpoint changed? Or was this an intentional change? Either
> way, I could not find such a report or announcement.
>
> I confirmed the execution of ``setup.py register --list-classifiers``
> with the following combinations.
>
> Python-3.5 + distutils setup: OK
> Python-3.5 + setuptools-26 setup: OK
> Python-3.5 + setuptools-27 setup: NG
> Python-3.5 + setuptools-38 setup: NG
>
> Python-3.6 + distutils setup: NG
> Python-3.6 + setuptools-26 setup: NG
> Python-3.6 + setuptools-27 setup: NG
> Python-3.6 + setuptools-38 setup: NG
>
> Also, in any of the above combinations, it was OK if I specified
> https://pypi.python.org/pypi as the repository.
>
> (BTW, although the documentation says "The --repository or -r option
> lets you specify a PyPI server different from the default", if there
> is no URL specified in ``.pypirc`` it will display "ValueError: not
> found in .pypirc")
>
> So, in my understanding, ``register --list-classifiers`` does not work
> as expected.
>
> Regards,
> --
> Takayuki Shimizukawa

From shimizukawa at gmail.com  Mon Jan 29 18:10:01 2018
From: shimizukawa at gmail.com (Takayuki Shimizukawa)
Date: Mon, 29 Jan 2018 23:10:01 +0000
Subject: [Distutils] Current status of ``setup.py register --list-classifiers``
In-Reply-To:
References:
Message-ID:

I see. I have no objection to deprecating the register command.

In the future (already?), that means we cannot get the Trove Classifiers
list on the command line with setup.py (I do not know how much this
command was used...). Either way, things are clearer to me now. Thanks.

Regards,

On Tue, Jan 30, 2018 at 7:51 AM Dustin Ingram wrote:

> Yes, it seems broken. With the default repository of:
>
> https://upload.pypi.org/legacy/
>
> this would look at:
>
> https://upload.pypi.org/legacy/?:action=list_classifiers
>
> but the correct location would be:
>
> https://pypi.org/pypi?:action=list_classifiers
>
> Given that the `register` command is unnecessary for pypi.org, it
> might make more sense to deprecate it than attempt to fix it.
>
> D.
>
> On Mon, Jan 29, 2018 at 4:14 PM, Takayuki Shimizukawa wrote:
> > Hi,
> >
> > I tried getting a list of Trove Classifiers by running the ``setup.py
> > register --list-classifiers`` command, but the HTML of
> > https://upload.pypi.org/legacy/ was displayed on the console instead.
> > Does this mean that the ``--list-classifiers`` option was broken when
> > the PyPI endpoint changed? Or was this an intentional change? Either
> > way, I could not find such a report or announcement.
> >
> > I confirmed the execution of ``setup.py register --list-classifiers``
> > with the following combinations.
> >
> > Python-3.5 + distutils setup: OK
> > Python-3.5 + setuptools-26 setup: OK
> > Python-3.5 + setuptools-27 setup: NG
> > Python-3.5 + setuptools-38 setup: NG
> >
> > Python-3.6 + distutils setup: NG
> > Python-3.6 + setuptools-26 setup: NG
> > Python-3.6 + setuptools-27 setup: NG
> > Python-3.6 + setuptools-38 setup: NG
> >
> > Also, in any of the above combinations, it was OK if I specified
> > https://pypi.python.org/pypi as the repository.
> >
> > (BTW, although the documentation says "The --repository or -r option
> > lets you specify a PyPI server different from the default", if there
> > is no URL specified in ``.pypirc`` it will display "ValueError: not
> > found in .pypirc")
> >
> > So, in my understanding, ``register --list-classifiers`` does not
> > work as expected.
> >
> > Regards,
> > --
> > Takayuki Shimizukawa

From ncoghlan at gmail.com  Mon Jan 29 22:56:06 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 30 Jan 2018 13:56:06 +1000
Subject: [Distutils] Any interest in Perforce support in pip?
In-Reply-To:
References:
Message-ID:

On 29 January 2018 at 19:35, Paul Moore wrote:
> On 29 January 2018 at 04:45, Nick Coghlan wrote:
>> On 29 January 2018 at 11:54, Robert Collins wrote:
>>> It won't be testable by most developers, so I'd be inclined to say
>>> that making the VCS layer a supported plugin point would be a better
>>> patch to merge. That would be testable.
>>
>> Right, I suspect without that, the Perforce support would end up
>> intermittently broken, since pre-merge CI wouldn't be able to cover
>> it.
>
> The existing PR does actually include tests that install and verify
> Perforce functionality. I don't follow all the details, but the intent
> is definitely there - Barney has done a good job with the PR.

If the functionality can be tested in Travis, then I'd say go for it,
and don't worry about gating it behind defining a stable plugin API
that 3rd party projects can implement.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Mon Jan 29 23:05:28 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 30 Jan 2018 14:05:28 +1000
Subject: [Distutils] Current status of ``setup.py register --list-classifiers``
In-Reply-To:
References:
Message-ID:

On 30 January 2018 at 09:10, Takayuki Shimizukawa wrote:
> I see. I have no objection to deprecating the register command.
> In the future (already?), that means we cannot get the Trove
> Classifiers list on the command line with setup.py (I do not know how
> much this command was used...).

Right, but you can instead get them directly from the PyPI API with

$ python3 -c "from urllib.request import urlopen;
print(urlopen('https://pypi.org/pypi?:action=list_classifiers').read().decode())"

(curl/wget/etc will also work, but the advantage of the above version
is that it only requires the Python 3 standard library)

Cheers,
Nick.
--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From shimizukawa at gmail.com  Mon Jan 29 23:28:52 2018
From: shimizukawa at gmail.com (Takayuki Shimizukawa)
Date: Tue, 30 Jan 2018 04:28:52 +0000
Subject: [Distutils] Current status of ``setup.py register --list-classifiers``
In-Reply-To:
References:
Message-ID:

> $ python3 -c "from urllib.request import urlopen;
> print(urlopen('https://pypi.org/pypi?:action=list_classifiers').read().decode())"

Yeah, it is a bit long... anyway, thanks!

--
Takayuki Shimizukawa

On Tue, Jan 30, 2018 at 1:05 PM Nick Coghlan wrote:

> On 30 January 2018 at 09:10, Takayuki Shimizukawa wrote:
> > I see. I have no objection to deprecating the register command.
> > In the future (already?), that means we cannot get the Trove
> > Classifiers list on the command line with setup.py (I do not know
> > how much this command was used...).
>
> Right, but you can instead get them directly from the PyPI API with
>
> $ python3 -c "from urllib.request import urlopen;
> print(urlopen('https://pypi.org/pypi?:action=list_classifiers').read().decode())"
>
> (curl/wget/etc will also work, but the advantage of the above version
> is that it only requires the Python 3 standard library)
>
> Cheers,
> Nick.

From kevinurban at comcast.net  Mon Jan 29 12:46:36 2018
From: kevinurban at comcast.net (Kevin Urban)
Date: Mon, 29 Jan 2018 10:46:36 -0700
Subject: [Distutils] print module
Message-ID: <981448FF-34EB-42EA-8657-80CFB268E5E5@comcast.net>

Just starting out; unable to use the print function on a Mac. Which
module do I import to use print?

Kevin Urban

From tritium-list at sdamon.com  Tue Jan 30 08:28:01 2018
From: tritium-list at sdamon.com (Alex Walters)
Date: Tue, 30 Jan 2018 08:28:01 -0500
Subject: [Distutils] print module
In-Reply-To: <981448FF-34EB-42EA-8657-80CFB268E5E5@comcast.net>
References: <981448FF-34EB-42EA-8657-80CFB268E5E5@comcast.net>
Message-ID: <036f01d399ce$27554a10$75ffde30$@sdamon.com>

This is not the correct list to ask about problems with your code (it is
very likely that your program redefined the name print; python-list would
be the list to ask). There is no module to import to get the print
function. There is a special import that turns the print statement into
the print function in Python 2.7 (from __future__ import print_function).
There is no third-party module for this. Even if there were, this list is
for the packaging infrastructure of Python, not module support.

From: Distutils-SIG
[mailto:distutils-sig-bounces+tritium-list=sdamon.com at python.org] On
Behalf Of Kevin Urban
Sent: Monday, January 29, 2018 12:47 PM
To: distutils-sig at python.org
Subject: [Distutils] print module

Just starting out; unable to use the print function on a Mac. Which
module do I import to use print?

Kevin Urban
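(For reference, a minimal example of the ``__future__`` import mentioned
above; this is standard Python, nothing specific to packaging:)

    # Python 2.7: replace the print statement with the Python 3 style
    # print function for the rest of this module.
    from __future__ import print_function

    print("Hello", "world", sep=", ")  # -> Hello, world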
From wes.turner at gmail.com  Tue Jan 30 08:41:54 2018
From: wes.turner at gmail.com (Wes Turner)
Date: Tue, 30 Jan 2018 08:41:54 -0500
Subject: [Distutils] print module
In-Reply-To: <981448FF-34EB-42EA-8657-80CFB268E5E5@comcast.net>
References: <981448FF-34EB-42EA-8657-80CFB268E5E5@comcast.net>
Message-ID:

https://www.google.com/search?q=Which+module+do+I+import+to+use+print%3F
(the StackOverflow result answers your question)

https://www.google.com/search?q=python+print+function
(first result)

https://docs.python.org/3/search.html?q=Print
(second result -> print function)

https://docs.python.org/2/search.html?q=Print
(second result -> print statement)

https://docs.python.org/3/library/__future__.html?highlight=print
(what is a PEP?)

This list is for distutils-related discussions:
https://mail.python.org/mailman/listinfo/distutils-sig

This list is for Python language and standard library questions:
https://mail.python.org/mailman/listinfo/tutor

This forum is also for similar questions:
https://www.reddit.com/r/learnpython
https://www.reddit.com/r/learnpython/search?q=print+function

On Monday, January 29, 2018, Kevin Urban wrote:

> Just starting out; unable to use the print function on a Mac. Which
> module do I import to use print?
>
> Kevin Urban

From sh at changeset.nyc  Tue Jan 30 13:39:41 2018
From: sh at changeset.nyc (Sumana Harihareswara)
Date: Tue, 30 Jan 2018 13:39:41 -0500
Subject: [Distutils] Packaging/Warehouse sprint at PyCon 2018
Message-ID: <9c6f5ec0-e598-3753-3b0e-8e423fc5bc66@changeset.nyc>

In case you're planning your PyCon Cleveland travel: we are planning to
hold a Warehouse/packaging sprint at PyCon (the sprints are Monday, May
14th - Thursday, May 17th, 2018). We welcome package maintainers, backend
and frontend web developers, infrastructure administrators, technical
writers, and testers to help us make the new PyPI, and the packaging
ecosystem more generally, as usable and robust as possible.

I took the liberty of updating
https://us.pycon.org/2018/community/sprints/ to say so. Once we're closer
to the sprints I'll work on a more detailed list of things we'll work on
in Cleveland.

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc

From sh at changeset.nyc  Tue Jan 30 14:44:46 2018
From: sh at changeset.nyc (Sumana Harihareswara)
Date: Tue, 30 Jan 2018 14:44:46 -0500
Subject: [Distutils] Fwd: Warehouse update: Estimate for 1st milestone, new announce list
In-Reply-To: <880ec585-faac-6d47-b686-c29a0d3c1874@changeset.nyc>
References: <880ec585-faac-6d47-b686-c29a0d3c1874@changeset.nyc>
Message-ID: <4a70fc04-a8d4-0242-ded2-cd66ae89fbde@changeset.nyc>

Our weekly Warehouse update just went out to pypa-dev:
https://groups.google.com/forum/#!topic/pypa-dev/es_-fC-sdpk and is
included below.

-------- Forwarded Message --------
Subject: Warehouse update: Estimate for 1st milestone, new announce list
Date: Tue, 30 Jan 2018 14:33:36 -0500
From: Sumana Harihareswara
To: pypa-dev at googlegroups.com

The big news in Warehouse world is that we tentatively believe we can get
our first milestone[0] out by the end of February. So around then you can
expect emails and other announcements asking package maintainers to test
pypi.org and give us bug reports.

Speaking of announcements, we now have a new PyPI-announce mailing
list.[1] I encourage you to subscribe. It'll be low-traffic and we'll
only post there with major PyPI news.
Ernest is nearing the end of his concentrated stretch of infrastructure
work, especially on Cabotage[2], which helps manage our Kubernetes
security.[3] Nicole and Dustin are steadily finishing views and design
for maintainer features (e.g., the project edit button[4]). And Laura
and Sumana are preparing for the publicity push around the Maintainer
MVP, and have updated some docs to improve the developer experience
(e.g., twine[5], the Warehouse testing instructions[6], and the PyPA
roadmap[7]). Details are in our meeting notes from yesterday.[8]

Thanks to alanbato for improving Warehouse's error messages![9]

If you want to get started contributing to Warehouse, Ernest wants to
help you and give you stickers, and has 30-minute 1:1 slots
available.[10]

Thanks to Mozilla for their support for the PyPI & Warehouse work![11]

[0] https://github.com/pypa/warehouse/milestone/8
[1] https://mail.python.org/mm3/mailman3/lists/pypi-announce.python.org/
[2] https://github.com/cabotage/cabotage-app
[3] https://github.com/python/pypi-infra/pull/3
[4] https://github.com/pypa/warehouse/pull/2823
[5] https://github.com/pypa/twine/pull/292
[6] https://github.com/pypa/warehouse/pull/2758
[7] https://github.com/pypa/pypa.io/pull/23
[8] https://wiki.python.org/psf/PackagingWG/2018-01-29-Warehouse
[9] https://github.com/pypa/warehouse/pull/2767
[10] https://twitter.com/EWDurbin/status/955415184339849217
[11] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html

--
Sumana Harihareswara
Warehouse project manager
Changeset Consulting
https://changeset.nyc

From mrw at enotuniq.org  Wed Jan 31 19:01:24 2018
From: mrw at enotuniq.org (Mark Williams)
Date: Wed, 31 Jan 2018 16:01:24 -0800
Subject: [Distutils] draft PEP: manylinux2
Message-ID: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com>

Hi everyone!

The manylinux1 platform tag has been tremendously useful, but
unfortunately it's showing its age:

https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html
https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html

Nathaniel identified a list of things to do for its successor,
manylinux2:

https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html

Please find below a draft PEP for manylinux2 that attempts to address
these issues. I've also opened a PR against python/peps:

https://github.com/python/peps/pull/565

Docker images for x86_64 (and soon i686) are available to test drive:

https://hub.docker.com/r/markrwilliams/manylinux2/tags/

Thanks!

----

PEP: 9999
Title: The manylinux2 Platform Tag
Version: $Revision$
Last-Modified: $Date$
Author: Mark Williams
BDFL-Delegate: Nick Coghlan
Discussions-To: Distutils SIG
Status: Active
Type: Informational
Content-Type: text/x-rst
Created:
Post-History:
Resolution:

Abstract
========

This PEP proposes the creation of a ``manylinux2`` platform tag to
succeed the ``manylinux1`` tag introduced by PEP 513 [1]_.  It also
proposes that PyPI and ``pip`` both be updated to support uploading,
downloading, and installing ``manylinux2`` distributions on compatible
platforms.

Rationale
=========

True to its name, the ``manylinux1`` platform tag has made the
installation of binary extension modules a reality on many Linux
systems.  Libraries like ``cryptography`` [2]_ and ``numpy`` [3]_ are
more accessible to Python developers now that their installation on
common architectures does not depend on fragile development
environments and build toolchains.
``manylinux1`` wheels achieve their portability by allowing the
extension modules they contain to link against only a small set of
system-level shared libraries that export versioned symbols old enough
to benefit from backwards-compatibility policies.  Extension modules in
a ``manylinux1`` wheel that rely on ``glibc``, for example, must be
built against version 2.5 or earlier; they may then be run on systems
that provide a more recent ``glibc`` version that still exports the
required symbols at version 2.5.

PEP 513 drew its whitelisted shared libraries and their symbol versions
from CentOS 5.11, which was the oldest supported CentOS release at the
time of its writing.  Unfortunately, CentOS 5.11 reached its end-of-life
on March 31st, 2017 with a clear warning against its continued use. [4]_
No further updates, such as security patches, will be made available.
This means that its packages will remain at obsolete versions that
hamper the efforts of Python software packagers who use the
``manylinux1`` Docker image.

CentOS 6.9 is now the oldest supported CentOS release, and will receive
maintenance updates through November 30th, 2020. [5]_  We propose that a
new PEP 425-style [6]_ platform tag called ``manylinux2`` be derived
from CentOS 6.9 and that the ``manylinux`` toolchain, PyPI, and ``pip``
be updated to support it.

The ``manylinux2`` policy
=========================

The following criteria determine a ``linux`` wheel's eligibility for the
``manylinux2`` tag:

1. The wheel may only contain binary executables and shared objects
   compiled for one of the two architectures supported by CentOS 6.9:
   x86_64 or i686. [5]_

2. The wheel's binary executables or shared objects may not link
   against externally-provided libraries except those in the following
   whitelist::

       libgcc_s.so.1
       libstdc++.so.6
       libm.so.6
       libdl.so.2
       librt.so.1
       libcrypt.so.1
       libc.so.6
       libnsl.so.1
       libutil.so.1
       libpthread.so.0
       libresolv.so.2
       libX11.so.6
       libXext.so.6
       libXrender.so.1
       libICE.so.6
       libSM.so.6
       libGL.so.1
       libgobject-2.0.so.0
       libgthread-2.0.so.0
       libglib-2.0.so.0

   This list is identical to the externally-provided libraries
   whitelisted for ``manylinux1``, minus ``libncursesw.so.5`` and
   ``libpanelw.so.5``. [7]_  ``libpythonX.Y`` remains ineligible for
   inclusion for the same reasons outlined in PEP 513.
   On Debian-based systems, these libraries are provided by the
   packages:

   ============ ======================================================
   Package      Libraries
   ============ ======================================================
   libc6        libdl.so.2, libresolv.so.2, librt.so.1, libc.so.6,
                libpthread.so.0, libm.so.6, libutil.so.1,
                libcrypt.so.1, libnsl.so.1
   libgcc1      libgcc_s.so.1
   libgl1       libGL.so.1
   libglib2.0-0 libgobject-2.0.so.0, libgthread-2.0.so.0,
                libglib-2.0.so.0
   libice6      libICE.so.6
   libsm6       libSM.so.6
   libstdc++6   libstdc++.so.6
   libx11-6     libX11.so.6
   libxext6     libXext.so.6
   libxrender1  libXrender.so.1
   ============ ======================================================

   On RPM-based systems, they are provided by these packages:

   ============ ======================================================
   Package      Libraries
   ============ ======================================================
   glib2        libglib-2.0.so.0, libgthread-2.0.so.0,
                libgobject-2.0.so.0
   glibc        libresolv.so.2, libutil.so.1, libnsl.so.1, librt.so.1,
                libcrypt.so.1, libpthread.so.0, libdl.so.2, libm.so.6,
                libc.so.6
   libICE       libICE.so.6
   libX11       libX11.so.6
   libXext      libXext.so.6
   libXrender   libXrender.so.1
   libgcc       libgcc_s.so.1
   libstdc++    libstdc++.so.6
   mesa         libGL.so.1
   ============ ======================================================

3. If the wheel contains binary executables or shared objects linked
   against any whitelisted libraries that also export versioned
   symbols, they may only depend on the following maximum versions::

       GLIBC_2.12
       CXXABI_1.3.3
       GLIBCXX_3.4.13
       GCC_4.3.0

   As an example, ``manylinux2`` wheels may include binary artifacts
   that require ``glibc`` symbols at version ``GLIBC_2.4``, because
   this is an earlier version than the maximum of ``GLIBC_2.12``.

4. If a wheel is built for any version of CPython 2 or CPython
   versions 3.0 up to and including 3.2, it *must* include a CPython
   ABI tag indicating its Unicode ABI.  A ``manylinux2`` wheel built
   against Python 2, then, must include either the ``cpy27mu`` tag
   indicating it was built against an interpreter with the UCS-4 ABI
   or the ``cpy27m`` tag indicating an interpreter with the UCS-2
   ABI. *[Citation for UCS ABI tags?]*

5. A wheel *must not* require the ``PyFPE_jbuf`` symbol.  This is
   achieved by building it against a Python compiled *without* the
   ``--with-fpectl`` ``configure`` flag.

Compilation of Compliant Wheels
===============================

Like ``manylinux1``, the ``auditwheel`` tool adds ``manylinux2``
platform tags to ``linux`` wheels built by ``pip wheel`` or
``bdist_wheel`` inside a ``manylinux2`` Docker container.

Docker Images
-------------

``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are
provided for building binary ``linux`` wheels that can reliably be
converted to ``manylinux2`` wheels. [8]_  These images come with a full
compiler suite installed (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as
well as the latest releases of Python and ``pip``.

Compatibility with kernels that lack ``vsyscall``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A Docker container assumes that its userland is compatible with its
host's kernel.  Unfortunately, an increasingly common kernel
configuration breaks this assumption for x86_64 CentOS 6.9 Docker
images.

Versions 2.14 and earlier of ``glibc`` require that the kernel provide
an archaic system call optimization known as ``vsyscall`` on x86_64.
[9]_  To effect the optimization, the kernel maps a read-only page of
frequently-called system calls -- most notably ``time(2)`` -- into
each process at a fixed memory location.
``glibc`` then invokes these system calls by dereferencing a function
pointer to the appropriate offset into the ``vsyscall`` page and
calling it.  This avoids the overhead associated with invoking the
kernel that affects normal system call invocation.  ``vsyscall`` has
long been deprecated in favor of an equivalent mechanism known as vDSO,
or "virtual dynamic shared object", in which the kernel instead maps a
relocatable virtual shared object containing the optimized system calls
into each process. [10]_

The ``vsyscall`` page has serious security implications because it does
not participate in address space layout randomization (ASLR).  Its
predictable location and contents make it a useful source of gadgets
used in return-oriented programming attacks. [11]_  At the same time,
its elimination breaks the x86_64 ABI, because ``glibc`` versions that
depend on ``vsyscall`` suffer from segmentation faults when attempting
to dereference a system call pointer into a non-existent page.  As a
compromise, Linux 3.1 implemented an "emulated" ``vsyscall`` that
reduced the executable code, and thus the material for ROP gadgets,
mapped into the process. [12]_  ``vsyscall=emulate`` has been the
default configuration in most distributions' kernels for many years.

Unfortunately, ``vsyscall`` emulation still exposes predictable code at
a reliable memory location, and continues to be useful for
return-oriented programming. [13]_  Because most distributions have now
upgraded to ``glibc`` versions that do not depend on ``vsyscall``, they
are beginning to ship kernels that do not support ``vsyscall`` at
all. [14]_

CentOS 5.11 and 6.9 both include versions of ``glibc`` that depend on
the ``vsyscall`` page (2.5 and 2.12.2 respectively), so containers
based on either cannot run under kernels provided with many
distributions' upcoming releases. [15]_  Continuum Analytics faces a
related problem with its conda software suite, and as they point out,
this will pose a significant obstacle to using these tools in hosted
services. [16]_  If Travis CI, for example, begins running jobs under a
kernel that does not provide the ``vsyscall`` interface, Python
packagers will not be able to use our Docker images there to build
``manylinux`` wheels. [17]_

We have derived a patch from the ``glibc`` git repository that
backports the removal of all dependencies on ``vsyscall`` to the
version of ``glibc`` included with our ``manylinux2`` image. [18]_
Rebuilding ``glibc``, and thus building the ``manylinux2`` image
itself, still requires a host kernel that provides the ``vsyscall``
mechanism, but the resulting image can run both on hosts that provide
it and on those that do not.  Because the ``vsyscall`` interface is an
optimization that is only applied to running processes, the
``manylinux2`` wheels built with this modified image should be
identical to those built on an unmodified CentOS 6.9 system.  Also, the
``vsyscall`` problem applies only to x86_64; it is not part of the i686
ABI.

Auditwheel
----------

The ``auditwheel`` tool has also been updated to produce ``manylinux2``
wheels. [19]_  Its behavior and purpose are otherwise unchanged from
PEP 513.

Platform Detection for Installers
=================================

Platforms may define a ``manylinux2_compatible`` boolean attribute on
the ``_manylinux`` module described in PEP 513.  A platform is
considered incompatible with ``manylinux2`` if the attribute is
``False``.
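For illustration, an installer might combine this attribute check with
a ``glibc`` version probe along the lines of PEP 513's ``manylinux1``
detection logic.  This is only a sketch; the fallback heuristic shown
here is an assumption, not part of the policy above::

    import distutils.util

    def is_manylinux2_compatible():
        # Only Linux, and only x86_64 / i686, can be manylinux2
        # platforms.
        if distutils.util.get_platform() not in ("linux-x86_64",
                                                 "linux-i686"):
            return False
        # Honour an explicit declaration from the platform, if present.
        try:
            import _manylinux
            return bool(_manylinux.manylinux2_compatible)
        except (ImportError, AttributeError):
            pass
        # Otherwise fall back to a heuristic: glibc 2.12 or newer.
        return have_compatible_glibc(2, 12)

    def have_compatible_glibc(major, minimum_minor):
        import ctypes
        process_namespace = ctypes.CDLL(None)
        try:
            gnu_get_libc_version = process_namespace.gnu_get_libc_version
        except AttributeError:
            return False  # probably not glibc
        gnu_get_libc_version.restype = ctypes.c_char_p
        version_str = gnu_get_libc_version()
        if not isinstance(version_str, str):
            version_str = version_str.decode("ascii")
        pieces = version_str.split(".")
        return int(pieces[0]) == major and int(pieces[1]) >= minimum_minor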
Backwards compatibility with ``manylinux1`` wheels
==================================================

As explained in PEP 513, the specified symbol versions for
``manylinux1`` whitelisted libraries constitute an *upper bound*.  The
same is true for the symbol versions defined for ``manylinux2`` in this
PEP.  As a result, ``manylinux1`` wheels are also considered
``manylinux2`` wheels.  A ``pip`` that recognizes the ``manylinux2``
platform tag will thus install ``manylinux1`` wheels for ``manylinux2``
platforms -- even when ``manylinux2`` is explicitly set as the target
platform -- when no ``manylinux2`` wheels are available. [20]_

PyPI Support
============

PyPI should permit wheels containing the ``manylinux2`` platform tag to
be uploaded in the same way that it permits ``manylinux1``.  It should
not attempt to verify the compatibility of ``manylinux2`` wheels.

References
==========

.. [1] PEP 513 -- A Platform Tag for Portable Linux Built Distributions
   (https://www.python.org/dev/peps/pep-0513/)
.. [2] pyca/cryptography
   (https://cryptography.io/)
.. [3] numpy
   (https://numpy.org)
.. [4] CentOS 5.11 EOL announcement
   (https://lists.centos.org/pipermail/centos-announce/2017-April/022350.html)
.. [5] CentOS Product Specifications
   (https://web.archive.org/web/20180108090257/https://wiki.centos.org/About/Product)
.. [6] PEP 425 -- Compatibility Tags for Built Distributions
   (https://www.python.org/dev/peps/pep-0425/)
.. [7] ncurses 5 -> 6 transition means we probably need to drop some
   libraries from the manylinux whitelist
   (https://github.com/pypa/manylinux/issues/94)
.. [8] manylinux2 Docker images
   (https://hub.docker.com/r/markrwilliams/manylinux2/)
.. [9] On vsyscalls and the vDSO
   (https://lwn.net/Articles/446528/)
.. [10] vdso(7)
   (http://man7.org/linux/man-pages/man7/vdso.7.html)
.. [11] Framing Signals -- A Return to Portable Shellcode
   (http://www.cs.vu.nl/~herbertb/papers/srop_sp14.pdf)
.. [12] ChangeLog-3.1
   (https://www.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.1)
.. [13] Project Zero: Three bypasses and a fix for one of Flash's
   Vector.<*> mitigations
   (https://googleprojectzero.blogspot.com/2015/08/three-bypasses-and-fix-for-one-of.html)
.. [14] linux: activate CONFIG_LEGACY_VSYSCALL_NONE ?
   (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852620)
.. [15] [Wheel-builders] Heads-up re: new kernel configurations
   breaking the manylinux docker image
   (https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html)
.. [16] Due to glibc 2.12 limitation, static executables that use
   time(), cpuinfo() and maybe a few others cannot be run on systems
   that do not support or use `vsyscall=emulate`
   (https://github.com/ContinuumIO/anaconda-issues/issues/8203)
.. [17] Travis CI
   (https://travis-ci.org/)
.. [18] remove-vsyscall.patch
   (https://github.com/markrwilliams/manylinux/commit/e9493d55471d153089df3aafca8cfbcb50fa8093#diff-3eda4130bdba562657f3ec7c1b3f5720)
.. [19] auditwheel manylinux2 branch
   (https://github.com/markrwilliams/auditwheel/tree/manylinux2)
.. [20] pip manylinux2 branch
   (https://github.com/markrwilliams/pip/commits/manylinux2)

Copyright
=========

This document has been placed into the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

--
  Mark Williams
  mrw at enotuniq.org