From mrw at enotuniq.org Thu Feb 1 20:03:40 2018
From: mrw at enotuniq.org (Mark Williams)
Date: Thu, 01 Feb 2018 17:03:40 -0800
Subject: [Distutils] draft PEP: manylinux2
Message-ID: <1517533420.3096327.1256665624.58AB46B1@webmail.messagingengine.com>

Hi everyone!

The manylinux1 platform tag has been tremendously useful, but unfortunately it's showing its age:

https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html
https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html

Nathaniel identified a list of things to do for its successor, manylinux2:

https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html

Please find below a draft PEP for manylinux2 that attempts to address these issues.  I've also opened a PR against python/peps:

https://github.com/python/peps/pull/565

Docker images for x86_64 and i686 are available to test drive:

https://hub.docker.com/r/markrwilliams/manylinux2/tags/

Thanks!

----

PEP: 9999
Title: The manylinux2 Platform Tag
Version: $Revision$
Last-Modified: $Date$
Author: Mark Williams
BDFL-Delegate: Nick Coghlan
Discussions-To: Distutils SIG
Status: Active
Type: Informational
Content-Type: text/x-rst
Created:
Post-History:
Resolution:

Abstract
========

This PEP proposes the creation of a ``manylinux2`` platform tag to succeed the ``manylinux1`` tag introduced by PEP 513 [1]_.  It also proposes that PyPI and ``pip`` both be updated to support uploading, downloading, and installing ``manylinux2`` distributions on compatible platforms.

Rationale
=========

True to its name, the ``manylinux1`` platform tag has made the installation of binary extension modules a reality on many Linux systems.  Libraries like ``cryptography`` [2]_ and ``numpy`` [3]_ are more accessible to Python developers now that their installation on common architectures does not depend on fragile development environments and build toolchains.
``manylinux1`` wheels achieve their portability by allowing the extension modules they contain to link against only a small set of system-level shared libraries that export versioned symbols old enough to benefit from backwards-compatibility policies.  Extension modules in a ``manylinux1`` wheel that rely on ``glibc``, for example, must be built against version 2.5 or earlier; they may then be run on systems that provide a more recent ``glibc`` version, since those systems still export the required symbols at version 2.5.

PEP 513 drew its whitelisted shared libraries and their symbol versions from CentOS 5.11, which was the oldest supported CentOS release at the time of its writing.  Unfortunately, CentOS 5.11 reached its end-of-life on March 31st, 2017 with a clear warning against its continued use. [4]_  No further updates, such as security patches, will be made available.  This means that its packages will remain at obsolete versions that hamper the efforts of Python software packagers who use the ``manylinux1`` Docker image.

CentOS 6.9 is now the oldest supported CentOS release, and will receive maintenance updates through November 30th, 2020. [5]_  We propose that a new PEP 425-style [6]_ platform tag called ``manylinux2`` be derived from CentOS 6.9 and that the ``manylinux`` toolchain, PyPI, and ``pip`` be updated to support it.

The ``manylinux2`` policy
=========================

The following criteria determine a ``linux`` wheel's eligibility for the ``manylinux2`` tag:

1. The wheel may only contain binary executables and shared objects compiled for one of the two architectures supported by CentOS 6.9: x86_64 or i686. [5]_

2.
The wheel's binary executables or shared objects may not link against externally-provided libraries except those in the following whitelist::

       libgcc_s.so.1
       libstdc++.so.6
       libm.so.6
       libdl.so.2
       librt.so.1
       libcrypt.so.1
       libc.so.6
       libnsl.so.1
       libutil.so.1
       libpthread.so.0
       libresolv.so.2
       libX11.so.6
       libXext.so.6
       libXrender.so.1
       libICE.so.6
       libSM.so.6
       libGL.so.1
       libgobject-2.0.so.0
       libgthread-2.0.so.0
       libglib-2.0.so.0

This list is identical to the externally-provided libraries whitelisted for ``manylinux1``, minus ``libncursesw.so.5`` and ``libpanelw.so.5``. [7]_  ``libpythonX.Y`` remains ineligible for inclusion for the same reasons outlined in PEP 513.

On Debian-based systems, these libraries are provided by the packages:

============ =======================================================
Package      Libraries
============ =======================================================
libc6        libdl.so.2, libresolv.so.2, librt.so.1, libc.so.6,
             libpthread.so.0, libm.so.6, libutil.so.1,
             libcrypt.so.1, libnsl.so.1
libgcc1      libgcc_s.so.1
libgl1       libGL.so.1
libglib2.0-0 libgobject-2.0.so.0, libgthread-2.0.so.0,
             libglib-2.0.so.0
libice6      libICE.so.6
libsm6       libSM.so.6
libstdc++6   libstdc++.so.6
libx11-6     libX11.so.6
libxext6     libXext.so.6
libxrender1  libXrender.so.1
============ =======================================================

On RPM-based systems, they are provided by these packages:

============ =======================================================
Package      Libraries
============ =======================================================
glib2        libglib-2.0.so.0, libgthread-2.0.so.0,
             libgobject-2.0.so.0
glibc        libresolv.so.2, libutil.so.1, libnsl.so.1, librt.so.1,
             libcrypt.so.1, libpthread.so.0, libdl.so.2, libm.so.6,
             libc.so.6
libICE       libICE.so.6
libX11       libX11.so.6
libXext      libXext.so.6
libXrender   libXrender.so.1
libgcc       libgcc_s.so.1
libstdc++    libstdc++.so.6
mesa         libGL.so.1
============ =======================================================

3.
If the wheel contains binary executables or shared objects linked against any whitelisted libraries that also export versioned symbols, they may only depend on the following maximum versions::

       GLIBC_2.12
       CXXABI_1.3.3
       GLIBCXX_3.4.13
       GCC_4.3.0

As an example, ``manylinux2`` wheels may include binary artifacts that require ``glibc`` symbols at version ``GLIBC_2.4``, because this is an earlier version than the maximum of ``GLIBC_2.12``.

4. If a wheel is built for any version of CPython 2 or CPython versions 3.0 up to and including 3.2, it *must* include a CPython ABI tag indicating its Unicode ABI.  A ``manylinux2`` wheel built against Python 2, then, must include either the ``cpy27mu`` tag indicating it was built against an interpreter with the UCS-4 ABI or the ``cpy27m`` tag indicating an interpreter with the UCS-2 ABI. *[Citation for UCS ABI tags?]*

5. A wheel *must not* require the ``PyFPE_jbuf`` symbol.  This is achieved by building it against a Python compiled *without* the ``--with-fpectl`` ``configure`` flag.

Compilation of Compliant Wheels
===============================

Like ``manylinux1``, the ``auditwheel`` tool adds ``manylinux2`` platform tags to ``linux`` wheels built by ``pip wheel`` or ``bdist_wheel`` in a ``manylinux2`` Docker container.

Docker Images
-------------

``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are provided for building binary ``linux`` wheels that can reliably be converted to ``manylinux2`` wheels. [8]_  These images come with a full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as well as the latest releases of Python and ``pip``.

Compatibility with kernels that lack ``vsyscall``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A Docker container assumes that its userland is compatible with its host's kernel.  Unfortunately, an increasingly common kernel configuration breaks this assumption for x86_64 CentOS 6.9 Docker images.
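As an illustrative aside (not part of the policy itself), whether a given host kernel still maps the legacy page can be checked from userland by looking for a ``[vsyscall]`` entry in ``/proc/self/maps``.  A minimal diagnostic sketch, assuming a Linux ``/proc`` filesystem; the helper name is ours, not from any library:

```python
# Diagnostic sketch: does the running kernel map the legacy vsyscall
# page into this process?  The entry is x86_64-specific and is absent
# on kernels configured with vsyscall=none (and on other platforms).
def kernel_maps_vsyscall_page():
    try:
        with open("/proc/self/maps") as maps:
            return any(line.rstrip().endswith("[vsyscall]") for line in maps)
    except OSError:
        return False  # no /proc available; cannot observe the mapping

print(kernel_maps_vsyscall_page())
```

On a kernel booted with ``vsyscall=none``, this returns ``False``, which is exactly the situation in which an unpatched CentOS 6.9 container's ``glibc`` would crash.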
Versions 2.14 and earlier of ``glibc`` require that the kernel provide an archaic system call optimization known as ``vsyscall`` on x86_64. [9]_  To effect the optimization, the kernel maps a read-only page of frequently-called system calls -- most notably ``time(2)`` -- into each process at a fixed memory location.  ``glibc`` then invokes these system calls by dereferencing a function pointer to the appropriate offset into the ``vsyscall`` page and calling it.  This avoids the overhead associated with invoking the kernel that affects normal system call invocation.

``vsyscall`` has long been deprecated in favor of an equivalent mechanism known as vDSO, or "virtual dynamic shared object", in which the kernel instead maps a relocatable virtual shared object containing the optimized system calls into each process. [10]_

The ``vsyscall`` page has serious security implications because it does not participate in address space layout randomization (ASLR).  Its predictable location and contents make it a useful source of gadgets used in return-oriented programming attacks. [11]_  At the same time, its elimination breaks the x86_64 ABI, because ``glibc`` versions that depend on ``vsyscall`` suffer from segmentation faults when attempting to dereference a system call pointer into a non-existent page.  As a compromise, Linux 3.1 implemented an "emulated" ``vsyscall`` that reduced the executable code, and thus the material for ROP gadgets, mapped into the process. [12]_  ``vsyscall=emulated`` has been the default configuration in most distributions' kernels for many years.

Unfortunately, ``vsyscall`` emulation still exposes predictable code at a reliable memory location, and continues to be useful for return-oriented programming. [13]_  Because most distributions have now upgraded to ``glibc`` versions that do not depend on ``vsyscall``, they are beginning to ship kernels that do not support ``vsyscall`` at all.
[14]_  CentOS 5.11 and 6.9 both include versions of ``glibc`` that depend on the ``vsyscall`` page (2.5 and 2.12.2 respectively), so containers based on either cannot run under kernels provided with many distributions' upcoming releases. [15]_  Continuum Analytics faces a related problem with its conda software suite, and as they point out, this will pose a significant obstacle to using these tools in hosted services. [16]_  If Travis CI, for example, begins running jobs under a kernel that does not provide the ``vsyscall`` interface, Python packagers will not be able to use our Docker images there to build ``manylinux`` wheels. [17]_

We have derived a patch from the ``glibc`` git repository that backports the removal of all dependencies on ``vsyscall`` to the version of ``glibc`` included with our ``manylinux2`` image. [18]_  Rebuilding ``glibc``, and thus building the ``manylinux2`` image itself, still requires a host kernel that provides the ``vsyscall`` mechanism, but the resulting image can be run both on hosts that provide it and on those that do not.  Because the ``vsyscall`` interface is an optimization that is only applied to running processes, the ``manylinux2`` wheels built with this modified image should be identical to those built on an unmodified CentOS 6.9 system.  Also, the ``vsyscall`` problem applies only to x86_64; it is not part of the i686 ABI.

Auditwheel
----------

The ``auditwheel`` tool has also been updated to produce ``manylinux2`` wheels. [19]_  Its behavior and purpose are otherwise unchanged from PEP 513.

Platform Detection for Installers
=================================

Platforms may define a ``manylinux2_compatible`` boolean attribute on the ``_manylinux`` module described in PEP 513.  A platform is considered incompatible with ``manylinux2`` if the attribute is ``False``.
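The detection hook described above can be sketched in the style of PEP 513's ``is_manylinux1_compatible()`` example.  The glibc probe below follows PEP 513's ``ctypes`` approach; treat the whole thing as illustrative rather than normative (the platform/architecture checks that an installer would also perform are omitted for brevity):

```python
import ctypes


def have_compatible_glibc(major, minimum_minor):
    # Ask the C library loaded into this process for its version.
    # Only glibc exports gnu_get_libc_version, so any other libc is
    # treated as incompatible.
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (AttributeError, OSError):
        return False
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str = gnu_get_libc_version()
    if not isinstance(version_str, str):
        version_str = version_str.decode("ascii")
    found = tuple(int(piece) for piece in version_str.split(".")[:2])
    return found >= (major, minimum_minor)


def is_manylinux2_compatible():
    # An explicit declaration by the platform takes precedence.
    try:
        import _manylinux
        return bool(_manylinux.manylinux2_compatible)
    except (ImportError, AttributeError):
        pass
    # Otherwise fall back to a heuristic: CentOS 6 ships glibc 2.12,
    # so any glibc at least that new can run manylinux2 extensions.
    return have_compatible_glibc(2, 12)
```

An installer would combine this with the usual platform checks (Linux, x86_64 or i686) before offering ``manylinux2`` wheels.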
Backwards compatibility with ``manylinux1`` wheels
==================================================

As explained in PEP 513, the specified symbol versions for ``manylinux1`` whitelisted libraries constitute an *upper bound*.  The same is true for the symbol versions defined for ``manylinux2`` in this PEP.  As a result, ``manylinux1`` wheels are considered ``manylinux2`` wheels.  A ``pip`` that recognizes the ``manylinux2`` platform tag will thus install ``manylinux1`` wheels for ``manylinux2`` platforms -- even when that tag is explicitly requested -- when no ``manylinux2`` wheels are available. [20]_

PyPI Support
============

PyPI should permit wheels containing the ``manylinux2`` platform tag to be uploaded in the same way that it permits ``manylinux1``.  It should not attempt to verify the compatibility of ``manylinux2`` wheels.

References
==========

.. [1] PEP 513 -- A Platform Tag for Portable Linux Built Distributions (https://www.python.org/dev/peps/pep-0513/)
.. [2] pyca/cryptography (https://cryptography.io/)
.. [3] numpy (https://numpy.org)
.. [4] CentOS 5.11 EOL announcement (https://lists.centos.org/pipermail/centos-announce/2017-April/022350.html)
.. [5] CentOS Product Specifications (https://web.archive.org/web/20180108090257/https://wiki.centos.org/About/Product)
.. [6] PEP 425 -- Compatibility Tags for Built Distributions (https://www.python.org/dev/peps/pep-0425/)
.. [7] ncurses 5 -> 6 transition means we probably need to drop some libraries from the manylinux whitelist (https://github.com/pypa/manylinux/issues/94)
.. [8] manylinux2 Docker images (https://hub.docker.com/r/markrwilliams/manylinux2/)
.. [9] On vsyscalls and the vDSO (https://lwn.net/Articles/446528/)
.. [10] vdso(7) (http://man7.org/linux/man-pages/man7/vdso.7.html)
.. [11] Framing Signals -- A Return to Portable Shellcode (http://www.cs.vu.nl/~herbertb/papers/srop_sp14.pdf)
.. [12] ChangeLog-3.1 (https://www.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.1)
..
[13] Project Zero: Three bypasses and a fix for one of Flash's Vector.<*> mitigations (https://googleprojectzero.blogspot.com/2015/08/three-bypasses-and-fix-for-one-of.html)
.. [14] linux: activate CONFIG_LEGACY_VSYSCALL_NONE ? (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852620)
.. [15] [Wheel-builders] Heads-up re: new kernel configurations breaking the manylinux docker image (https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html)
.. [16] Due to glibc 2.12 limitation, static executables that use time(), cpuinfo() and maybe a few others cannot be run on systems that do not support or use `vsyscall=emulate` (https://github.com/ContinuumIO/anaconda-issues/issues/8203)
.. [17] Travis CI (https://travis-ci.org/)
.. [18] remove-vsyscall.patch (https://github.com/markrwilliams/manylinux/commit/e9493d55471d153089df3aafca8cfbcb50fa8093#diff-3eda4130bdba562657f3ec7c1b3f5720)
.. [19] auditwheel manylinux2 branch (https://github.com/markrwilliams/auditwheel/tree/manylinux2)
.. [20] pip manylinux2 branch (https://github.com/markrwilliams/pip/commits/manylinux2)

Copyright
=========

This document has been placed into the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

--
Mark Williams
mrw at enotuniq.org
From njs at pobox.com Sat Feb 3 03:11:51 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 3 Feb 2018 00:11:51 -0800
Subject: [Distutils] draft PEP: manylinux2
In-Reply-To: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com>
References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com>
Message-ID:

On Wed, Jan 31, 2018 at 4:01 PM, Mark Williams wrote:
> Hi everyone!
>
> The manylinux1 platform tag has been tremendously useful, but unfortunately it's showing its age:
>
> https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html
> https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html
>
> Nathaniel identified a list of things to do for its successor, manylinux2:
>
> https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html
>
> Please find below a draft PEP for manylinux2 that attempts to address these issues. I've also opened a PR against python/peps:
>
> https://github.com/python/peps/pull/565
>
> Docker images for x86_64 (and soon i686) are available to test drive:
>
> https://hub.docker.com/r/markrwilliams/manylinux2/tags/

Huzzah! This is an amazing bit of work, and I'm glad you got that weird email problem sorted out :-). I have a few minor comments below, but overall this all looks fine and sensible to me.
Also, I think we should try to move quickly on this if we can, because the manylinux1 images are currently in the process of collapsing into unmaintainability. (For example: the openssl that CentOS 5 ships with is now so old that you can no longer use it to connect to the openssl web site to download a newer version.)

> 4. If a wheel is built for any version of CPython 2 or CPython
> versions 3.0 up to and including 3.2, it *must* include a CPython
> ABI tag indicating its Unicode ABI. A ``manylinux2`` wheel built
> against Python 2, then, must include either the ``cpy27mu`` tag
> indicating it was built against an interpreter with the UCS-4 ABI
> or the ``cpy27m`` tag indicating an interpeter with the UCS-2
> ABI. *[Citation for UCS ABI tags?]*

For the citation: maybe PEP 3149? Or just https://github.com/pypa/pip/pull/3075

> Compilation of Compliant Wheels
> ===============================
>
> Like ``manylinux1``, the ``auditwheel`` tool adds ```manylinux2``
> platform tags to ``linux`` wheels built by ``pip wheel`` or
> ``bdist_wheel`` a ``manylinux2`` Docker container.

Missing word: "*in* a"

> Docker Images
> -------------
>
> ``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are
> provided for building binary ``linux`` wheels that can reliably be
> converted to ``manylinux2`` wheels. [8]_ These images come with a
> full compiler suite installed (``gcc``, ``g++``, and ``gfortran``
> 4.8.2) as well as the latest releases of Python and ``pip``.

We can and should use newer compiler versions than that, and probably upgrade them again over the course of the image's lifespan, so let's just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/ as well for the same reason.)

> Compatibility with kernels that lack ``vsyscall``
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section is maybe not *strictly* necessary in the PEP but I think we might as well keep it; maybe someone will find it useful.
> Backwards compatibility with ``manylinux1`` wheels > ================================================== > > As explained in PEP 513, the specified symbol versions for > ``manylinux1`` whitelisted libraries constitute an *upper bound*. The > same is true for the symbol versions defined for ``manylinux2`` in > this PEP. As a result, ``manylinux1`` wheels are considered > ``manylinux2`` wheels. A ``pip`` that recognizes the ``manylinux2`` > platform tag will thus install ``manylinux1`` wheels for > ``manylinux2`` platforms -- even when explicitly set -- when no > ``manylinux2`` wheels are available. [20]_ I'm a little confused about what this section is trying to say (especially the words "even when explicitly set"). Should we maybe just say something like: In general, systems that can use manylinux2 wheels can also use manylinux1 wheels; pip and similar installers should prefer manylinux2 wheels where available. -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sun Feb 4 21:15:50 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 5 Feb 2018 12:15:50 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: On 1 February 2018 at 10:01, Mark Williams wrote: > Hi everyone! > > The manylinux1 platform tag has been tremendously useful, but unfortunately it's showing its age: > > https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html > https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html > > Nathaniel identified a list of things to do for its successor, manylinux2: > > https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html > > Please find below a draft PEP for manylinux2 that attempts to address these issues. Thanks for this! 
Something we've discussed in the past is switching manylinux over to a variant of CalVer, where the manylinux version number inherently conveys the era of operating system compatibility that each variant is targeting. In the case of this PEP, that would be `manylinux2010`, since the RHEL/CentOS 6 ABI was formally set with the release of RHEL 6 in November 2010. The intended benefit of that is that it would allow folks to go ahead and propose newer manylinux variants that allow for ppc64le and aarch64 support as needed, without having to guess where those definitions should come in a sequential series. Would a manylinux2 -> manylinux2010 version numbering switch significantly complicate implementation of the PEP? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vano at mail.mipt.ru Mon Feb 5 03:03:01 2018 From: vano at mail.mipt.ru (Ivan Pozdeev) Date: Mon, 5 Feb 2018 11:03:01 +0300 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> On 05.02.2018 5:15, Nick Coghlan wrote: > On 1 February 2018 at 10:01, Mark Williams wrote: >> Hi everyone! >> >> The manylinux1 platform tag has been tremendously useful, but unfortunately it's showing its age: >> >> https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html >> https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html >> >> Nathaniel identified a list of things to do for its successor, manylinux2: >> >> https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html >> >> Please find below a draft PEP for manylinux2 that attempts to address these issues. > Thanks for this! > > Something we've discussed in the past is switching manylinux over to a > variant of CalVer, where the manylinux version number inherently > conveys the era of operating system compatibility that each variant is > targeting. 
In the case of this PEP, that would be `manylinux2010`, > since the RHEL/CentOS 6 ABI was formally set with the release of RHEL > 6 in November 2010. > > The intended benefit of that is that it would allow folks to go ahead > and propose newer manylinux variants that allow for ppc64le and > aarch64 support as needed, without having to guess where those > definitions should come in a sequential series. IMO this will bring forth more confusion than it'll solve. Technically, the ABI is linked to kernel and library versions rather than dates. Since Linux, unlike commercial products, is not locked into a particular vendor and thus doesn't have a single product life cycle forced upon it, those vary wildly between distributions and running installations. > Would a manylinux2 -> manylinux2010 version numbering switch > significantly complicate implementation of the PEP? > > Cheers, > Nick. > -- Regards, Ivan From ncoghlan at gmail.com Mon Feb 5 08:51:44 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 5 Feb 2018 23:51:44 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: On 5 February 2018 at 18:03, Ivan Pozdeev via Distutils-SIG wrote: > On 05.02.2018 5:15, Nick Coghlan wrote: >> The intended benefit of that is that it would allow folks to go ahead >> and propose newer manylinux variants that allow for ppc64le and >> aarch64 support as needed, without having to guess where those >> definitions should come in a sequential series. > > IMO this will bring forth more confusion than it'll solve. Technically, the > ABI is linked to kernel and library versions rather than dates.
> Since Linux, unlike commercial products, is not locked into a particular > vendor and thus doesn't have a single product life cycle forced upon it, > those vary wildly between distributions and running installations. We pick the library API & ABI versions based on "compatible with most distributions released since year <X>", though (that's the "many" in "manylinux"). As an illustrative example, manylinux1 was essentially manylinux2007, and it's now running into problems precisely because that baseline is more than a decade old. That's not obvious if all you know is the sequential number "1", but it makes intuitive sense once you realise the effective baseline year is back in 2007. Similarly, neither manylinux1 nor the proposed manylinux2[010] will ever support ppc64le or aarch64, because those instruction set architectures are too new relative to the API/ABI definitions. With the sequential numbering, any of that kind of reasoning based on relative dates requires looking up the PEP that defined that version, finding which version of RHEL/CentOS we used as a baseline, and then finding when the corresponding x.0 major version of RHEL was released. Or, we can just put the year directly in the version number, so that publishers can go "I'm happy to target manylinux2010, because I'm fine with users of distros that are more than 7 years old needing to compile from source". Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From j.orponen at 4teamwork.ch Mon Feb 5 09:35:37 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Mon, 5 Feb 2018 15:35:37 +0100 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: On Mon, Feb 5, 2018 at 2:51 PM, Nick Coghlan wrote: > As an illustrative example, manylinux1 was essentially manylinux2007, > and it's now running into problems precisely because that baseline is > more than a decade old. That's not obvious if all you know is the > sequential number "1", but it makes intuitive sense once you realise > the effective baseline year is back in 2007. > The 2007 baseline of a fairly conservative enterprise Linux distribution, which relatively liberally backports features in point releases over the lifespan. As discussed, the year does not ultimately mean all that much. Just going with sequential version numbers exposes and/or hides just enough for the end user. Is there a particular reason for not picking RHEL 7 as the base for manylinux2 at this point? -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... 
From vano at mail.mipt.ru Mon Feb 5 09:38:21 2018 From: vano at mail.mipt.ru (Ivan Pozdeev) Date: Mon, 5 Feb 2018 17:38:21 +0300 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: <1dc8774f-156c-9a8d-450f-fe74049ece96@mail.mipt.ru> On 05.02.2018 16:51, Nick Coghlan wrote: > On 5 February 2018 at 18:03, Ivan Pozdeev via Distutils-SIG > wrote: >> On 05.02.2018 5:15, Nick Coghlan wrote: >>> The intended benefit of that is that it would allow folks to go ahead >>> and propose newer manylinux variants that allow for ppc64le and >>> aarch64 support as needed, without having to guess where those >>> definitions should come in a sequential series. >> IMO this will bring forth more confusion than it'll solve. Technically, the >> ABI is linked to kernel and library versions rather than dates. >> Since Linux, unlike commercial products, is not locked into a particular >> vendor and thus doesn't have a single product life cycle forced upon it, >> those vary wildly between distributions and running installations. > We pick the library API & ABI versions based on "compatible with most > distributions released since year <X>", though (that's the "many" in > "manylinux"). > > As an illustrative example, manylinux1 was essentially manylinux2007, > and it's now running into problems precisely because that baseline is > more than a decade old. That's not obvious if all you know is the > sequential number "1", but it makes intuitive sense once you realise > the effective baseline year is back in 2007. > > Similarly, neither manylinux1 nor the proposed manylinux2[010] will > ever support ppc64le or aarch64, because those instruction set > architectures are too new relative to the API/ABI definitions.
With > the sequential numbering, any of that kind of reasoning based on > relative dates requires looking up the PEP that defined that version, > finding which version of RHEL/CentOS we used as a baseline, and then > finding when the corresponding x.0 major version of RHEL was released. > > Or, we can just put the year directly in the version number, so that > publishers can go "I'm happy to target manylinux2010, because I'm fine > with users of distros that are more than 7 years old needing to > compile from source". The point is, a year has negative informativity in this case. The very reasoning "compatible with most distributions released since year <X>" is flawed 'cuz it's vague and nonintuitive. Which is "most" distributions? Which part of the year X? Does that mean <distro of version Y> is included or not? How do I even know all that without checking the spec? (Normally, a year in an entity's name means that entity's release year.) That, provided I even remember the relevant years -- since compatibility is governed by other things, I have absolutely no reason to. A year would thus add confusion and/or encourage people to use that "easy way out" reasoning and not actually check what they're signing up for -- with the ensuing landmines to step on. -- Regards, Ivan From thomas at kluyver.me.uk Mon Feb 5 09:59:21 2018 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Mon, 05 Feb 2018 14:59:21 +0000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <1dc8774f-156c-9a8d-450f-fe74049ece96@mail.mipt.ru> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> <1dc8774f-156c-9a8d-450f-fe74049ece96@mail.mipt.ru> Message-ID: <1517842761.766067.1259997960.07424B47@webmail.messagingengine.com> On Mon, Feb 5, 2018, at 2:38 PM, Ivan Pozdeev via Distutils-SIG wrote: > The point is, a year has negative informativity in this case.
> > The very reasoning "compatible with most distributions released since > year <X>" is flawed 'cuz it's vague and nonintuitive. Which is "most" > distributions? Which part of the year X? Does that mean <distro of version Y> is included or not? How do I even know all that without > checking the spec? (Normally, a year in an entity's name means that > entity's release year.) That, provided I even remember the relevant > years -- since compatibility is governed by other things, I have > absolutely no reason to. > > A year would thus add confusion and/or encourage people to use that > "easy way out" reasoning and not actually check what they're signing up > for -- with the ensuing landmines to step on. People are going to take the easy way out anyway. I've made manylinux1 wheels, and I've never gone through and checked what distributions they are meant to be compatible with - I just assume that it will probably be most ones that I care about. I think the key question is how informative the year is. If there's an 80%+ chance that a distribution released in 2011 or 2012 supports manylinux2010, then I think it's helpful to make the year more obvious, even if there are a few counterexamples. From ncoghlan at gmail.com Mon Feb 5 16:01:19 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Feb 2018 07:01:19 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: On 6 February 2018 at 00:35, Joni Orponen wrote: > On Mon, Feb 5, 2018 at 2:51 PM, Nick Coghlan wrote: >> >> As an illustrative example, manylinux1 was essentially manylinux2007, >> and it's now running into problems precisely because that baseline is >> more than a decade old. That's not obvious if all you know is the >> sequential number "1", but it makes intuitive sense once you realise >> the effective baseline year is back in 2007.
> > The 2007 baseline of a fairly conservative enterprise Linux distribution, > which relatively liberally backports features in point releases over the > lifespan. Red Hat only backports features that don't break ABI, so the year still sets the ABI baseline. > As discussed, the year does not ultimately mean all that much. It does, it drives the entire process, as we want to maintain compatibility with a broad range of environments, and the simplest metric for ensuring that is "How old is the baseline?". The fact you're not aware of this is problematic, since it means we're not conveying that clearly. > Just going with sequential version numbers exposes and/or hides just enough > for the end user. It doesn't though, since once we have a few versions out there, it conveys *no* information about which potential users and deployment environments are being excluded by targeting a particular manylinux version. Compare: - manylinux1 vs manylinux2 vs manylinux3 - manylinux2007 vs manylinux2010 vs manylinux2014 In the first one, you have to go look at the PEPs defining each version to get any sense whatsoever of who you might be excluding by targeting each variant. In the second, it's pretty clear that you'd be excluding users of pre-2007 distros, pre-2010 distros, and pre-2014 distros respectively. It's a heuristic, not a precise guideline (you'll still need to compare PEPs to distro ABIs to get the exact details), but it conveys a lot more useful information than the plain sequential numbering does. > Is there a particular reason for not picking RHEL 7 as the base for > manylinux2 at this point? Yes, it's too new - it would set the baseline at around 2014, which cuts out a lot of end user environments that are still in the process of being upgraded to newer alternatives (most notably RHEL/CentOS 6, since they're still supported, but other LTS distros still tend to linger past their nominal end of life dates). Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Feb 5 17:42:25 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Feb 2018 08:42:25 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <1dc8774f-156c-9a8d-450f-fe74049ece96@mail.mipt.ru> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> <1dc8774f-156c-9a8d-450f-fe74049ece96@mail.mipt.ru> Message-ID: On 6 February 2018 at 00:38, Ivan Pozdeev wrote: > On 05.02.2018 16:51, Nick Coghlan wrote: >> Or, we can just put the year directly in the version number, so that >> publishers can go "I'm happy to target manylinux2010, because I'm fine >> with users of distros that are more than 7 years old needing to >> compile from source". > > The point is, a year has negative informativity in this case. > > The very reasoning "compatible with most distributions released since year > <X>" is flawed 'cuz it's vague and nonintuitive. > > Which is "most" > distributions? Which part of the year X? Does that mean <distro of version Y> is included or not? How do I even know all that without checking > the spec? For most distribution decisions, that level of detail is going to be irrelevant: what's relevant is being able to get a rough sense of how exclusive you're being. Choosing manylinux2010 in 2018+ means "not very exclusive", as if someone is in a sufficiently conservative deployment environment that the base platform isn't even keeping within 8 years of upstream development, the fact they can't use community provided precompiled binaries is likely to be among the least of their concerns. By contrast, choosing manylinux2014 in 2018 *would* risk excluding quite a few common deployment environments, since it's still in that 5-7 year window where a lot of conservative environments are grudgingly admitting that they really do need to update their base platform.
> (Normally, a year in an entity's name means that entity's release > year.) The year we happen to define any given manylinux version is pretty close to entirely irrelevant to anything (e.g. manylinux1 was defined in 2016, but the platform interface it describes has been available since 2007). The only reason the definition year isn't *completely* irrelevant is because pip et al can only start supporting it once we define it, so folks that aren't following our "always use a recent version of the installation toolchain, regardless of target platform" guidance may end up needing to care about when a particular version was defined. > That, provided I even remember the relevant years -- since > compatibility is governed by other things, I have absolutely no reason to. The year informs the versions of libraries that we pick to include in the baseline: every ABI in manylinux2[010] will have been available to distros in 2010 (when RHEL 6 was released), just as everything in manylinux1 was around in 2007 (when RHEL 5 was released). The fuzziness is then conveyed by the "many" in manylinux, since we do things like assuming the use of glibc, which isn't a universally accurate assumption (e.g. the default Alpine Linux based images published by Docker use musl libc rather than glibc, so they don't provide a manylinux compatible platform). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jhelmus at anaconda.com Mon Feb 5 16:17:55 2018 From: jhelmus at anaconda.com (Jonathan Helmus) Date: Mon, 5 Feb 2018 15:17:55 -0600 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: On 02/03/2018 02:11 AM, Nathaniel Smith wrote: >> Docker Images >> ------------- >> >> ``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are >> provided for building binary ``linux`` wheels that can reliably be >> converted to ``manylinux2`` wheels.
[8]_ These images come with a >> full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` >> 4.8.2) as well as the latest releases of Python and ``pip``. > We can and should use newer compiler versions than that, and probably > upgrade them again over the course of the image's lifespan, so let's > just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/ > as well for the same reason.) > Moving to GCC 5 and above will introduce the new libstdc++ ABI. [1] The manylinux2 standard needs to define which ABI compiled libraries should be compiled against, as older versions of libstdc++ will not support the new ABI. From what I recall the devtoolset packages for CentOS can only target the older, _GLIBCXX_USE_CXX11_ABI=0, ABI. Cheers, - Jonathan Helmus [1] https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html -------------- next part -------------- An HTML attachment was scrubbed... From mrw at twistedmatrix.com Mon Feb 5 23:51:03 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Mon, 5 Feb 2018 20:51:03 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: <20180206045103.GA26538@alakwan> On Sat, Feb 03, 2018 at 12:11:51AM -0800, Nathaniel Smith wrote: > > Huzzah! This is an amazing bit of work, and I'm glad you got that > weird email problem sorted out :-). Me too! I'd rather deal with Linux ABI fussiness than email any day of the week. > > ABI. *[Citation for UCS ABI tags?]* > > For the citation: maybe PEP 3149? Or just https://github.com/pypa/pip/pull/3075 Check. > > > Compilation of Compliant Wheels > > =============================== > > > > Like ``manylinux1``, the ``auditwheel`` tool adds ```manylinux2`` > > platform tags to ``linux`` wheels built by ``pip wheel`` or > > ``bdist_wheel`` a ``manylinux2`` Docker container. > > Missing word: "*in* a" Check. > > > Docker Images > > ------------- > > ...
[8]_ These images come with a full compiler suite installed > > (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as well as the latest > > releases of Python and ``pip``. > > We can and should use newer compiler versions than that, and probably > upgrade them again over the course of the image's lifespan, so let's > just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/ > as well for the same reason.) Wouldn't upgrading compiler versions potentially imply a change in libgcc symbol versions? If so, that would either require the PEP be updated for each new compiler, or the removal of libgcc from the library whitelist. I may be overly paranoid about this. Inspecting the manylinux2 image as it stands now reveals that devtoolset-2 installs an archive of libgcc into /opt/rh/devtoolset-2/root and not a shared object. Inasmuch as NumPy's extension modules depend on code in libgcc, they appear to statically link it in from this archive. If that's a reliable consequence of using devtoolset, then I think we should remove libgcc from the whitelist, which would certainly allow the PEP to remove any mention of compiler versions. Can anybody with GCC or even devtoolset expertise weigh in? In the meantime I'll attempt to build an executable with gcc 7 that depends on libgcc_s.so and run it against a gcc 4 installation. I think it's a good idea to change CentOS 6.9 to CentOS 6, though! It looks like CentOS can release Update Sets (where .9 is the current Update Set) even into the Maintenance phase of a release's lifetime: https://web.archive.org/web/20180108090257/https://wiki.centos.org/About/Product#fndef-5c004257e64c45d272f04727feda64ddb9de47b9-0 https://web.archive.org/web/20180108090257/https://wiki.centos.org/About/Product#fndef-a91b3c0c287c782f9af063daff9e64b566d648c7-1 I've made that change in the attached PEP. 
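One way to check whether a binary or extension module dynamically links ``libgcc_s.so.1``, rather than statically linking code from devtoolset's ``libgcc.a`` archive, is to read its ``DT_NEEDED`` entries -- the same information ``readelf -d`` or ``objdump -p`` prints. The following pure-Python reader is an illustrative sketch, limited to little-endian ELF64 files, and is not part of ``auditwheel`` or any tool discussed here:

```python
import struct

PT_LOAD, PT_DYNAMIC = 1, 2
DT_NULL, DT_NEEDED, DT_STRTAB = 0, 1, 5

def dt_needed(path):
    """Return the DT_NEEDED entries (dynamically linked sonames) of a
    little-endian 64-bit ELF file, or [] if it has no dynamic section."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:4] != b"\x7fELF" or data[4] != 2 or data[5] != 1:
        raise ValueError("not a little-endian ELF64 file")
    e_phoff, = struct.unpack_from("<Q", data, 0x20)
    e_phentsize, e_phnum = struct.unpack_from("<2H", data, 0x36)
    loads, dyn = [], None
    for i in range(e_phnum):                        # walk the program headers
        base = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from("<I", data, base)
        p_offset, p_vaddr = struct.unpack_from("<2Q", data, base + 8)
        p_filesz, = struct.unpack_from("<Q", data, base + 0x20)
        if p_type == PT_LOAD:
            loads.append((p_vaddr, p_offset, p_filesz))
        elif p_type == PT_DYNAMIC:
            dyn = (p_offset, p_filesz)
    if dyn is None:
        return []                                   # statically linked
    needed, strtab_vaddr = [], None
    for off in range(dyn[0], dyn[0] + dyn[1], 16):  # Elf64_Dyn entries
        d_tag, d_val = struct.unpack_from("<qQ", data, off)
        if d_tag == DT_NULL:
            break
        if d_tag == DT_NEEDED:
            needed.append(d_val)
        elif d_tag == DT_STRTAB:
            strtab_vaddr = d_val
    # DT_STRTAB holds a virtual address; translate it back to a file
    # offset via the PT_LOAD segment that contains it.
    for vaddr, foff, filesz in loads:
        if strtab_vaddr is not None and vaddr <= strtab_vaddr < vaddr + filesz:
            strtab = strtab_vaddr - vaddr + foff
            return [data[strtab + n:data.index(b"\x00", strtab + n)].decode()
                    for n in needed]
    return []
```

Checking a wheel's extension module then reduces to, e.g., ``"libgcc_s.so.1" in dt_needed("some_ext.cpython-36m-x86_64-linux-gnu.so")`` (a hypothetical filename).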
> > Compatibility with kernels that lack ``vsyscall`` > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > This section is maybe not *strictly* necessary in the PEP but I think > we might as well keep it; maybe someone will find it useful. When I wrote this, building the manylinux2 Docker image involved patching glibc, so I felt it was appropriate to include it in the PEP. Now that I've moved that process to its own Docker image I'm not so sure (see https://github.com/pypa/manylinux/pull/152#issuecomment-363289829). It'd be nice to keep around for posterity's sake, especially if I got something dreadfully wrong, but I'd be OK with removing it in favor of a footnote that links to this: https://github.com/markrwilliams/manylinux/blob/8b61bcb999bd064f0f0fd0cf9d279a69ddb8a2be/docker/glibc/README.rst > > Backwards compatibility with ``manylinux1`` wheels > > ================================================== > > ... > > I'm a little confused about what this section is trying to say > (especially the words "even when explicitly set"). Should we maybe > just say something like: I wanted to make it clear that even if you do this: pip install --platform=manylinux2 ... You might still get a manylinux1 wheel if no manylinux2 wheels are available, as demonstrated by this test case: https://github.com/markrwilliams/pip/blob/5f1ad7935050ee10f6aae0248144f033eb5e949d/tests/functional/test_download.py#L353-L373 That seemed right to me given that manylinuxes specify an upper bound for library symbols; a manylinux1 wheel that depends only on GLIBC symbols at 2.5, for example, satisfies manylinux2's requirement of GLIBC <= 2.12. Maybe that's not a fair inference to make, especially since manylinux1 can depend on an installation of ncurses but manylinux2 can't. > In general, systems that can use manylinux2 wheels can also use > manylinux1 wheels; pip and similar installers should prefer manylinux2 > wheels where available.
That's more succinct - do you think it makes the point about --platform clear enough? > -n > > -- > Nathaniel J. Smith -- https://vorpus.org (Changes are also available on GitHub: https://github.com/markrwilliams/peps/commits/manylinux2) ----- PEP: 9999 Title: The manylinux2 Platform Tag Version: $Revision$ Last-Modified: $Date$ Author: Mark Williams BDFL-Delegate: Nick Coghlan Discussions-To: Distutils SIG Status: Active Type: Informational Content-Type: text/x-rst Created: Post-History: Resolution: Abstract ======== This PEP proposes the creation of a ``manylinux2`` platform tag to succeed the ``manylinux1`` tag introduced by PEP 513 [1]_. It also proposes that PyPI and ``pip`` both be updated to support uploading, downloading, and installing ``manylinux2`` distributions on compatible platforms. Rationale ========= True to its name, the ``manylinux1`` platform tag has made the installation of binary extension modules a reality on many Linux systems. Libraries like ``cryptography`` [2]_ and ``numpy`` [3]_ are more accessible to Python developers now that their installation on common architectures does not depend on fragile development environments and build toolchains. ``manylinux1`` wheels achieve their portability by allowing the extension modules they contain to link against only a small set of system-level shared libraries that export versioned symbols old enough to benefit from backwards-compatibility policies. Extension modules in a ``manylinux1`` wheel that rely on ``glibc``, for example, must be built against version 2.5 or earlier; they may then be run on systems that provide a more recent ``glibc`` version that still exports the required symbols at version 2.5. PEP 513 drew its whitelisted shared libraries and their symbol versions from CentOS 5.11, which was the oldest supported CentOS release at the time of its writing. Unfortunately, CentOS 5.11 reached its end-of-life on March 31st, 2017 with a clear warning against its continued use.
[4]_ No further updates, such as security patches, will be made available. This means that its packages will remain at obsolete versions that hamper the efforts of Python software packagers who use the ``manylinux1`` Docker image. CentOS 6 is now the oldest supported CentOS release, and will receive maintenance updates through November 30th, 2020. [5]_ We propose that a new PEP 425-style [6]_ platform tag called ``manylinux2`` be derived from CentOS 6 and that the ``manylinux`` toolchain, PyPI, and ``pip`` be updated to support it. The ``manylinux2`` policy ========================= The following criteria determine a ``linux`` wheel's eligibility for the ``manylinux2`` tag: 1. The wheel may only contain binary executables and shared objects compiled for one of the two architectures supported by CentOS 6: x86_64 or i686. [5]_ 2. The wheel's binary executables or shared objects may not link against externally-provided libraries except those in the following whitelist: ::

    libgcc_s.so.1
    libstdc++.so.6
    libm.so.6
    libdl.so.2
    librt.so.1
    libcrypt.so.1
    libc.so.6
    libnsl.so.1
    libutil.so.1
    libpthread.so.0
    libresolv.so.2
    libX11.so.6
    libXext.so.6
    libXrender.so.1
    libICE.so.6
    libSM.so.6
    libGL.so.1
    libgobject-2.0.so.0
    libgthread-2.0.so.0
    libglib-2.0.so.0

This list is identical to the externally-provided libraries whitelisted for ``manylinux1``, minus ``libncursesw.so.5`` and ``libpanelw.so.5``. [7]_ ``libpythonX.Y`` remains ineligible for inclusion for the same reasons outlined in PEP 513.
On Debian-based systems, these libraries are provided by the packages:

============ =======================================================
Package      Libraries
============ =======================================================
libc6        libdl.so.2, libresolv.so.2, librt.so.1, libc.so.6,
             libpthread.so.0, libm.so.6, libutil.so.1,
             libcrypt.so.1, libnsl.so.1
libgcc1      libgcc_s.so.1
libgl1       libGL.so.1
libglib2.0-0 libgobject-2.0.so.0, libgthread-2.0.so.0,
             libglib-2.0.so.0
libice6      libICE.so.6
libsm6       libSM.so.6
libstdc++6   libstdc++.so.6
libx11-6     libX11.so.6
libxext6     libXext.so.6
libxrender1  libXrender.so.1
============ =======================================================

On RPM-based systems, they are provided by these packages:

============ =======================================================
Package      Libraries
============ =======================================================
glib2        libglib-2.0.so.0, libgthread-2.0.so.0,
             libgobject-2.0.so.0
glibc        libresolv.so.2, libutil.so.1, libnsl.so.1, librt.so.1,
             libcrypt.so.1, libpthread.so.0, libdl.so.2, libm.so.6,
             libc.so.6
libICE       libICE.so.6
libX11       libX11.so.6
libXext      libXext.so.6
libXrender   libXrender.so.1
libgcc       libgcc_s.so.1
libstdc++    libstdc++.so.6
mesa         libGL.so.1
============ =======================================================

3. If the wheel contains binary executables or shared objects linked against any whitelisted libraries that also export versioned symbols, they may only depend on the following maximum versions::

    GLIBC_2.12
    CXXABI_1.3.3
    GLIBCXX_3.4.13
    GCC_4.3.0

As an example, ``manylinux2`` wheels may include binary artifacts that require ``glibc`` symbols at version ``GLIBC_2.4``, because this is an earlier version than the maximum of ``GLIBC_2.12``. 4. If a wheel is built for any version of CPython 2 or CPython versions 3.0 up to and including 3.2, it *must* include a CPython ABI tag indicating its Unicode ABI.
A ``manylinux2`` wheel built against Python 2, then, must include either the ``cp27mu`` tag indicating it was built against an interpreter with the UCS-4 ABI or the ``cp27m`` tag indicating an interpreter with the UCS-2 ABI. [8]_ [9]_ 5. A wheel *must not* require the ``PyFPE_jbuf`` symbol. This is achieved by building it against a Python compiled *without* the ``--with-fpectl`` ``configure`` flag. Compilation of Compliant Wheels =============================== Like ``manylinux1``, the ``auditwheel`` tool adds ``manylinux2`` platform tags to ``linux`` wheels built by ``pip wheel`` or ``bdist_wheel`` in a ``manylinux2`` Docker container. Docker Images ------------- ``manylinux2`` Docker images based on CentOS 6 x86_64 and i686 are provided for building binary ``linux`` wheels that can reliably be converted to ``manylinux2`` wheels. [10]_ These images come with a full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as well as the latest releases of Python and ``pip``. Compatibility with kernels that lack ``vsyscall`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A Docker container assumes that its userland is compatible with its host's kernel. Unfortunately, an increasingly common kernel configuration breaks this assumption for x86_64 CentOS 6 Docker images. Versions 2.14 and earlier of ``glibc`` require that the kernel provide an archaic system call optimization known as ``vsyscall`` on x86_64. [11]_ To effect the optimization, the kernel maps a read-only page of frequently-called system calls -- most notably ``time(2)`` -- into each process at a fixed memory location. ``glibc`` then invokes these system calls by dereferencing a function pointer to the appropriate offset into the ``vsyscall`` page and calling it. This avoids the overhead associated with invoking the kernel that affects normal system call invocation.
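(As an illustrative aside, not part of the PEP text: whether a host kernel still provides the ``vsyscall`` page can be observed from ``/proc/self/maps``. A minimal Linux-only sketch:)

```python
def vsyscall_mapping():
    """Return the address range of the vsyscall page, or None if the
    kernel was built or booted without it (e.g. vsyscall=none)."""
    try:
        with open("/proc/self/maps") as maps:
            for line in maps:
                if "[vsyscall]" in line:
                    # With vsyscall=emulate, this is the fixed x86_64 range
                    # ffffffffff600000-ffffffffff601000.
                    return line.split()[0]
    except OSError:
        pass  # not Linux, or /proc unavailable
    return None

print(vsyscall_mapping())
```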
``vsyscall`` has long been deprecated in favor of an equivalent mechanism known as vDSO, or "virtual dynamic shared object", in which the kernel instead maps a relocatable virtual shared object containing the optimized system calls into each process. [12]_ The ``vsyscall`` page has serious security implications because it does not participate in address space layout randomization (ASLR). Its predictable location and contents make it a useful source of gadgets used in return-oriented programming attacks. [13]_ At the same time, its elimination breaks the x86_64 ABI, because ``glibc`` versions that depend on ``vsyscall`` suffer from segmentation faults when attempting to dereference a system call pointer into a non-existent page. As a compromise, Linux 3.1 implemented an "emulated" ``vsyscall`` that reduced the executable code, and thus the material for ROP gadgets, mapped into the process. [14]_ ``vsyscall=emulate`` has been the default configuration in most distributions' kernels for many years. Unfortunately, ``vsyscall`` emulation still exposes predictable code at a reliable memory location, and continues to be useful for return-oriented programming. [15]_ Because most distributions have now upgraded to ``glibc`` versions that do not depend on ``vsyscall``, they are beginning to ship kernels that do not support ``vsyscall`` at all. [16]_ CentOS 5.11 and 6 both include versions of ``glibc`` that depend on the ``vsyscall`` page (2.5 and 2.12.2 respectively), so containers based on either cannot run under kernels provided with many distributions' upcoming releases. [17]_ Continuum Analytics faces a related problem with its conda software suite, and as they point out, this will pose a significant obstacle to using these tools in hosted services. [18]_ If Travis CI, for example, begins running jobs under a kernel that does not provide the ``vsyscall`` interface, Python packagers will not be able to use our Docker images there to build ``manylinux`` wheels.
[19]_ We have derived a patch from the ``glibc`` git repository that backports the removal of all dependencies on ``vsyscall`` to the version of ``glibc`` included with our ``manylinux2`` image. [20]_ Rebuilding ``glibc``, and thus building the ``manylinux2`` image itself, still requires a host kernel that provides the ``vsyscall`` mechanism, but the resulting image can run both on hosts that provide it and on those that do not. Because the ``vsyscall`` interface is an optimization that is only applied to running processes, the ``manylinux2`` wheels built with this modified image should be identical to those built on an unmodified CentOS 6 system. Also, the ``vsyscall`` problem applies only to x86_64; it is not part of the i686 ABI.

Auditwheel
----------

The ``auditwheel`` tool has also been updated to produce ``manylinux2`` wheels. [21]_ Its behavior and purpose are otherwise unchanged from PEP 513.

Platform Detection for Installers
=================================

Platforms may define a ``manylinux2_compatible`` boolean attribute on the ``_manylinux`` module described in PEP 513. A platform is considered incompatible with ``manylinux2`` if the attribute is ``False``.

Backwards compatibility with ``manylinux1`` wheels
==================================================

As explained in PEP 513, the specified symbol versions for ``manylinux1`` whitelisted libraries constitute an *upper bound*. The same is true for the symbol versions defined for ``manylinux2`` in this PEP. As a result, ``manylinux1`` wheels are considered ``manylinux2`` wheels. A ``pip`` that recognizes the ``manylinux2`` platform tag will thus install ``manylinux1`` wheels for ``manylinux2`` platforms -- even when explicitly set -- when no ``manylinux2`` wheels are available. [22]_

PyPI Support
============

PyPI should permit wheels containing the ``manylinux2`` platform tag to be uploaded in the same way that it permits ``manylinux1``.
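As a non-normative sketch, installer-side detection might combine the ``_manylinux`` override described under "Platform Detection for Installers" with a ``glibc`` version heuristic, following the shape of the detection code in PEP 513 (the helper names here are illustrative, not part of this PEP):

```python
import ctypes


def glibc_version_string():
    """Return the glibc version string, or None if not linked against glibc."""
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return None  # e.g. musl, Windows, macOS
    gnu_get_libc_version.restype = ctypes.c_char_p
    return gnu_get_libc_version().decode("ascii")


def is_manylinux2_compatible():
    # The optional _manylinux module, if present, always takes precedence.
    try:
        import _manylinux
        return bool(_manylinux.manylinux2_compatible)
    except (ImportError, AttributeError):
        pass  # fall through to the heuristic check
    # Heuristic: manylinux2 requires glibc 2.12 or newer.
    version = glibc_version_string()
    if version is None:
        return False
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (2, 12)
```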
PyPI should not attempt to verify the compatibility of ``manylinux2`` wheels.

References
==========

.. [1] PEP 513 -- A Platform Tag for Portable Linux Built Distributions
   (https://www.python.org/dev/peps/pep-0513/)
.. [2] pyca/cryptography
   (https://cryptography.io/)
.. [3] numpy
   (https://numpy.org)
.. [4] CentOS 5.11 EOL announcement
   (https://lists.centos.org/pipermail/centos-announce/2017-April/022350.html)
.. [5] CentOS Product Specifications
   (https://web.archive.org/web/20180108090257/https://wiki.centos.org/About/Product)
.. [6] PEP 425 -- Compatibility Tags for Built Distributions
   (https://www.python.org/dev/peps/pep-0425/)
.. [7] ncurses 5 -> 6 transition means we probably need to drop some
   libraries from the manylinux whitelist
   (https://github.com/pypa/manylinux/issues/94)
.. [8] PEP 3149
   (https://www.python.org/dev/peps/pep-3149/)
.. [9] SOABI support for Python 2.X and PyPy
   (https://github.com/pypa/pip/pull/3075)
.. [10] manylinux2 Docker images
   (https://hub.docker.com/r/markrwilliams/manylinux2/)
.. [11] On vsyscalls and the vDSO
   (https://lwn.net/Articles/446528/)
.. [12] vdso(7)
   (http://man7.org/linux/man-pages/man7/vdso.7.html)
.. [13] Framing Signals -- A Return to Portable Shellcode
   (http://www.cs.vu.nl/~herbertb/papers/srop_sp14.pdf)
.. [14] ChangeLog-3.1
   (https://www.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.1)
.. [15] Project Zero: Three bypasses and a fix for one of Flash's
   Vector.<*> mitigations
   (https://googleprojectzero.blogspot.com/2015/08/three-bypasses-and-fix-for-one-of.html)
.. [16] linux: activate CONFIG_LEGACY_VSYSCALL_NONE ?
   (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852620)
.. [17] [Wheel-builders] Heads-up re: new kernel configurations breaking
   the manylinux docker image
   (https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html)
..
[18] Due to glibc 2.12 limitation, static executables that use time(),
   cpuinfo() and maybe a few others cannot be run on systems that do not
   support or use `vsyscall=emulate`
   (https://github.com/ContinuumIO/anaconda-issues/issues/8203)
.. [19] Travis CI
   (https://travis-ci.org/)
.. [20] remove-vsyscall.patch
   (https://github.com/markrwilliams/manylinux/commit/e9493d55471d153089df3aafca8cfbcb50fa8093#diff-3eda4130bdba562657f3ec7c1b3f5720)
.. [21] auditwheel manylinux2 branch
   (https://github.com/markrwilliams/auditwheel/tree/manylinux2)
.. [22] pip manylinux2 branch
   (https://github.com/markrwilliams/pip/commits/manylinux2)

Copyright
=========

This document has been placed into the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From mrw at twistedmatrix.com Tue Feb 6 01:05:22 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Mon, 5 Feb 2018 22:05:22 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: <20180206060522.GA22396@alakwan> On Mon, Feb 05, 2018 at 12:15:50PM +1000, Nick Coghlan wrote: > > Thanks for this! > > Something we've discussed in the past is switching manylinux over to a > variant of CalVer, where the manylinux version number inherently > conveys the era of operating system compatibility that each variant is > targeting. In the case of this PEP, that would be `manylinux2010`, > since the RHEL/CentOS 6 ABI was formally set with the release of RHEL > 6 in November 2010. > > The intended benefit of that is that it would allow folks to go ahead > and propose newer manylinux variants that allow for ppc64le and > aarch64 support as needed, without having to guess where those > definitions should come in a sequential series. That seems reasonable.
I'll admit a bias towards CalVer, though :) As a counterpoint: presumably a `manylinux` standard that supports those architectures will require a PEP, in which case the author(s) will have read the preceding `manylinux` PEPs, either to actively borrow as much as possible or to understand arcane but necessary details at the behest of this list. In that case the PEPs' numbers will determine the next `manylinux`: `manylinux1` was PEP 513; if it's accepted, this will be PEP N > 513; and so a `manylinux` that supports ppc64le or aarch64 will be PEP M > N. This doesn't account for the simultaneous proposal of independent PEPs for ppc64le and aarch64, but in that case I imagine they'd be merged into a single PEP. Given that `manylinux` PEP numbers determine their sequence number, I don't see how CalVer would change the situation. A bigger issue is that `manylinux` isn't really one-dimensional. Lots of things happened in 2014; for example, IBM shipped the first POWER8 systems and glibc 2.19 and 2.20 were released. But RHEL 7, and thus CentOS 7, ship glibc 2.17. Why should `manylinux2014` support ppc64le but not glibc 2.19? Since the definition of `manylinux` depends on the state of RHEL and CentOS, maybe we should change the sequence number to match the underlying major release of RHEL/CentOS. That would have `manylinux2` become `manylinux6`, and its successor `manylinux7`. If we require that each `manylinux` support all the platforms its RHEL/CentOS supports, implementers and users could simply refer to that release to know what they're in for. I think I'm +1 for `manylinux6`, +0 for `manylinux2010`, and -1 for `manylinux2`, which now seems to be the worst alternative. > Would a manylinux2 -> manylinux2010 version numbering switch > significantly complicate implementation of the PEP? Certainly not! It'll take a little bit more time to adjust `auditwheel` and `pip`, but I don't consider that to be onerous.
Once we settle on the appropriate versioning scheme I'll be happy to update everything! If I may, a quick question about procedure: do I continue to include updates to the PEP in my responses here? Or do I link to my branch on GitHub? -- Mark Williams mrw at twistedmatrix.com From njs at pobox.com Tue Feb 6 01:41:22 2018 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 5 Feb 2018 22:41:22 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: On Mon, Feb 5, 2018 at 1:17 PM, Jonathan Helmus wrote: > Moving to GCC 5 and above will introduce the new libstdc++ ABI. [1] The > manylinux2 standard needs to define which ABI compiled libraries should be > compiled against as older versions of libstdc++ will not support the new ABI. > From what I recall the devtoolset packages for CentOS can only target the > older, _GLIBCXX_USE_CXX11_ABI=0, ABI. We're stuck on the devtoolset packages, but it doesn't really matter for manylinux purposes. None of the libraries you're allowed to assume exist expose a C++ ABI, and everything else you have to ship yourself. -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Tue Feb 6 02:55:36 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Feb 2018 17:55:36 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <20180206060522.GA22396@alakwan> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> Message-ID: On 6 February 2018 at 16:05, Mark Williams wrote: > As a counterpoint: presumably a `manylinux` standard that supports > those architectures will require a PEP, in which case the author(s) > will have read the preceding `manylinux` PEPs, either to actively > borrow as much as possible or to understand arcane but necessary > details at the behest of this list.
In that case the PEPs' numbers > will determine the next `manylinux`: `manylinux1` was PEP 513; if it's > accepted, this will be PEP N > 513; and so a `manylinux` that supports > ppc64le or aarch64 will be PEP M > N. This doesn't account for the > simultaneous proposal of independent PEPs for ppc64le and aarch64, but > in that case I imagine they'd be merged into a single PEP. The CalVer idea first came up in the context of skipping ahead in the numbering sequence to go straight to a baseline that supported ppc64le and/or aarch64. Even 2014 would likely be too old for that, since CentOS 7 didn't support those at launch, and neither did Ubuntu 14.04. While such a PEP hasn't actually been written yet, the kinds of numbers we were looking at for a suitable baseline year were around 2015 or 2016, as that's when support for them started showing up in mainline Linux distros. > Given that `manylinux` PEP numbers determine their sequence number, I > don't see how CalVer would change the situation. It lets us deterministically skip numbers if we decide we want to enable access to things that older platforms just straight up don't support (like new instruction set architectures). > A bigger issue is that `manylinux` isn't really one dimensional. Lots > of things happened in 2014; for example, IBM shipped the first POWER8 > systems and glibc 2.19 and 2.20 were released. But RHEL 7 and thus > CentOS ship glibc 2.17. Why should `manylinux2014` support ppc64le > but not glibc 2.19? Mainly because we aim for "oldest version still used in new releases that year", but it's also why each version still needs a PEP that maps out the actual platform ABI as specific library versions. > Since the definition of `manylinux` depends on the state of RHEL and > CentOS, maybe we should change the sequence number to match the > underlying major release of RHEL/CentOS. That would have `manylinux2` > become `manylinux6`, and its successor `manylinux7`. 
If we require > that each `manylinux` support all the platforms its RHEL/CentOS > supports, implementers and users could simply refer to that release to > know what they're in for. We discussed that too, and one key reason for not doing it is that we only build off Red Hat's platform definitions as a matter of convenience, and because they currently have the longest support lifecycles. In the future, we could instead decide that a particular version of Ubuntu LTS or Debian stable (or even some other LTS distro) was a more suitable baseline for a given manylinux version, depending on how the relative timing works out. For non-RHEL/CentOS users, the RHEL/CentOS version is also just as arbitrary a sequence number as 1-based indexing. By contrast, year-based CalVer maintains distro-neutrality, while also giving a good sense of the maximum age of compatible target platforms. (e.g. given "manylinux2010", it's a pretty safe guess that Ubuntu 12.04, 14.04 and 16.04 are all expected to be compatible, while that isn't as clear given "manylinux2" or "manylinux6") > I think I'm +1 for `manylinux6`, +0 for `manylinux2010`, and -1 for > `manylinux2`, which now seems to be the worst alternative. > >> Would a manylinux2 -> manylinux2010 version numbering switch >> significantly complicate implementation of the PEP? > > Certainly not! It'll take a little bit more time to adjust > `auditwheel` and `pip`, but I don't consider that to be onerous. Once > we settle on the appropriate versioning scheme I'll be happy to update > everything! > > If I may, a quick question about procedure: do I continue to include > updates to the PEP in my responses here? Or do I link to my branch on > GitHub? Ah, thanks for the reminder - I'll get your initial PR merged, then we can track any further changes in the main PEPs repo as additional PRs. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Feb 6 03:00:11 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Feb 2018 18:00:11 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> Message-ID: On 6 February 2018 at 17:55, Nick Coghlan wrote: > On 6 February 2018 at 16:05, Mark Williams wrote: >> If I may, a quick question about procedure: do I continue to include >> updates to the PEP in my responses here? Or do I link to my branch on >> GitHub? > > Ah, thanks for the reminder - I'll get your initial PR merged, then we > can track any further changes in the main PEPs repo as additional PRs. OK, I've allocated PEP 571 for this, and it will show up at https://www.python.org/dev/peps/pep-0571/ once the 404 page caching expires in Fastly. In the meantime, the initially merged version can be seen at https://github.com/python/peps/blob/master/pep-0571.rst Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From j.orponen at 4teamwork.ch Tue Feb 6 07:19:32 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Tue, 6 Feb 2018 13:19:32 +0100 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: On Mon, Feb 5, 2018 at 10:01 PM, Nick Coghlan wrote: > On 6 February 2018 at 00:35, Joni Orponen wrote: > > On Mon, Feb 5, 2018 at 2:51 PM, Nick Coghlan wrote: > >> > >> As an illustrative example, manylinux1 was essentially manylinux2007, > >> and it's now running into problems precisely because that baseline is > >> more than a decade old. That's not obvious if all you know is the > >> sequential number "1", but it makes intuitive sense once you realise > >> the effective baseline year is back in 2007.
> > > > > > The 2007 baseline of a fairly conservative enterprise Linux distribution, > > which relatively liberally backports features in point releases over the > > lifespan. > > Red Hat only backports features that don't break ABI, so the year > still sets the ABI baseline. > I'm not convinced all the dependencies of Python and especially of eggs out there actually fall within compatibility level 2. https://access.redhat.com/articles/rhel-abi-compatibility > As discussed, the year does not ultimately mean all that much. > > It does, it drives the entire process, as we want to maintain > compatibility with a broad range of environments, and the simplest > metric for ensuring that is "How old is the baseline?". Unless the name conveys the tie to RHEL, the easier assumption to make is an Ubuntu LTS release as they brand versions with the year semantics. I'd prefer a sequential number and an associated compatibility table. It doesn't though, since once we have a few versions out there, it > conveys *no* information about which potential users and deployment > environments are being excluded by targeting a particular manylinux > version. > > Compare: > > - manylinux1 vs manylinux2 vs manylinux3 > - manylinux2007 vs manylinux2010 vs manylinux2014 > > In the first one, you have to go look at the PEPs defining each > version to get any sense whatsoever of who you might be excluding by > targeting each variant. > > In the second, it's pretty clear that you'd be excluding users of > pre-2007 distros, pre-2010 distros, and pre-2014 distros respectively. > It's a heuristic, not a precise guideline (you'll still need to > compare PEPs to distro ABIs to get the exact details), but it conveys > a lot more useful information than the plain sequential numbering > does. I'm not sharing the view of packagers being as systems oriented or aware of packaging PEPs to catch this. 
What's happening within my bubble of associates is "this is how you manylinux" and that'd be followed by "this is how you manylinux2 now". The effort of providing the Docker images and such makes this so conveniently opaque that people do not have to care. This is a sign of a job very well done. Congratulations. > Is there a particular reason for not picking RHEL 7 as the base for > > manylinux2 at this point? > > Yes, it's too new - it would set the baseline at around 2014, which > cuts out a lot of end user environments that are still in the process > of being upgraded to newer alternatives (most notably RHEL/CentOS 6, > since they're still supported, but other LTS distros still tend to > linger past their nominal end of life dates). These users would still be catered to by manylinux1. I've provided an opinion and there is no need to seek consensus with me beyond this exchange. Ultimately I'm nitpicking on semantics, which is not very meaningful for the larger topic at hand. -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at pyup.io Tue Feb 6 04:33:32 2018 From: jay at pyup.io (Jannis Gebauer) Date: Tue, 6 Feb 2018 10:33:32 +0100 Subject: [Distutils] Building a Python package build service for warehouse Message-ID: Hi! I'm currently working on a package build server. My goal is to produce useful additional metadata for all packages available on PyPI.

This includes:

- Transitive dependencies
- Is the package installable under Python 3?
- Various automated "code quality" tests like pylint, pyflakes, pep8, mccabe etc.
- Automated security tests
- (possibly changelogs, commit logs)
- Licenses!

The main idea is to run the build process in a restricted "sandbox" docker container that pulls the package from PyPI, installs it and runs a couple of tools on it. Code is still pretty rough, nothing to look at at the moment I'm afraid. Is there any interest in working on this together?
Maybe even with the goal to make it an open API that can be consumed by warehouse et al.? Interested in any thoughts on this! Cheers, Jannis P.S: I'm currently crunching through the data on a 96 CPU cluster. There's an API available, but it's sitting behind HTTP Basic Auth as it is basically an endpoint for remote code execution (and throws lots of 500s :D). Send me a mail to jay at pyup.io if you want to play around with it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Tue Feb 6 08:19:07 2018 From: alex.gronholm at nextday.fi (Alex Grönholm) Date: Tue, 6 Feb 2018 14:19:07 +0100 Subject: [Distutils] Building a Python package build service for warehouse In-Reply-To: References: Message-ID: <0d7ce91b-ee52-b2fc-a177-67531de87636@nextday.fi> I'd be all for it if I wasn't buried under a ton of other things to do. Happy hacking and good luck! Jannis Gebauer wrote on 06.02.2018 at 10:33: > Hi! > > I'm currently working on a package build server. My goal is to produce > useful additional metadata for all packages available on PyPI. > > This includes: > > - Transitive dependencies > - Is the package installable under Python 3? > - Various automated "code quality" tests like pylint, pyflakes, pep8, > mccabe etc. > - Automated security tests > - (possibly changelogs, commit logs) > - Licenses! > > The main idea is to run the build process in a restricted "sandbox" > docker container that pulls the package from PyPI, installs it and > runs a couple of tools on it. Code is still pretty rough, nothing to > look at at the moment I'm afraid. > > Is there any interest in working on this together? Maybe even with the > goal to make it an open API that can be consumed by warehouse et al.? > > Interested in any thoughts on this! > > Cheers, > > Jannis > > P.S: I'm currently crunching through the data on a 96 CPU cluster.
> There's an API available, but it's sitting behind HTTP Basic Auth as > it is basically an endpoint for remote code execution (and throws lots > of 500s :D). Send me a mail to jay at pyup.io if you > want to play around with it. > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From janzert at janzert.com Tue Feb 6 15:33:24 2018 From: janzert at janzert.com (Janzert) Date: Tue, 6 Feb 2018 15:33:24 -0500 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <4e8c5b4a-9e02-2a26-7f43-310aa980c17c@mail.mipt.ru> Message-ID: <0b776c25-2d2d-f997-e963-18be45fd9602@janzert.com> On 2/5/2018 16:01, Nick Coghlan wrote: > Compare: > > - manylinux1 vs manylinux2 vs manylinux3 > - manylinux2007 vs manylinux2010 vs manylinux2014 > I'll leave this just as a data point (anecdote) from someone that isn't heavily involved with linux sysadmin or python packaging. Feel free to make of it what you like. I generally run debian stable and occasionally ubuntu lts on servers and the latest ubuntu for my workstation. If I were looking to install a package and one of the binaries available is manylinux2010 I probably completely pass over that option and don't even attempt using it. My assumption would be anything that old probably isn't going to work on my 2016 or newer OS. Whereas manylinux(1, 2, 3) I would think has a good chance of working on any reasonably modern linux. If not for having read the discussion here I would have interpreted a date, especially a date that's the better part of a decade in the past, completely the wrong way.
Janzert From sh at changeset.nyc Tue Feb 6 22:11:29 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Tue, 6 Feb 2018 22:11:29 -0500 Subject: [Distutils] Fwd: Warehouse update: still on track, new features In-Reply-To: <94818e62-a356-39ce-9125-0730cb5d9aae@changeset.nyc> References: <94818e62-a356-39ce-9125-0730cb5d9aae@changeset.nyc> Message-ID: <90a5a412-6e6f-2725-0ee4-0a93e3cfa08c@changeset.nyc> Our weekly Warehouse update just went out to pypa-dev and is included below. -------- Forwarded Message -------- Subject: Warehouse update: still on track, new features Date: Tue, 6 Feb 2018 22:09:12 -0500 From: Sumana Harihareswara To: pypa-dev Here's your weekly update on Warehouse, powering the new PyPI.[0] You can see some noticeable improvements to Warehouse right now compared to last week. There's a mobile UI for managing projects[1], and a project owner can now delete a project.[2] We also have several CSS tweaks and other continuing design improvements -- we're lucky to be working with Nicole on this.[3] Less visibly, we have further Kubernetes security work by Ernest in cabotage[4] and Dustin's work on a generic token service[5]. 
We're still on track to hit the Maintainer MVP milestone at the end of this month.[6] On the documentation and outreach side, Laura and I have been preparing to contact very active maintainers when we hit that milestone, and we've been improving the packaging user guide,[7] and working a bit on Twine (e.g., documentation for using python-keyring with Twine to avoid having to use a .pypirc).[8] Thanks to Jon Wayne Parrott for fixing an issue Dustin spotted[9] so that pypa.io gets fresh updates again.[10] In PEP progress, PEP 541 is moving forward again, with a pull request for a change in BDFL-Delegate.[11] As usual, meeting notes from our weekly discussion are on the wiki.[12] And if you want to get started contributing to Warehouse, Ernest wants to help you and give you stickers, and has 30-minute 1:1 slots available.[13] Right now we have eleven open issues marked as good for newcomers.[14] Thanks to Mozilla for their support for the PyPI & Warehouse work, and thanks to the PSF for facilitating and supporting this work![15][16] [0] https://pypi.org/ [1] https://github.com/pypa/warehouse/pull/2865 [2] https://github.com/pypa/warehouse/pull/2821 [3] http://whoisnicoleharris.com/warehouse/ [4] https://github.com/cabotage/cabotage-app/commits/master [5] https://github.com/pypa/warehouse/pull/2864 [6] https://github.com/pypa/warehouse/milestone/8 [7] https://github.com/pypa/python-packaging-user-guide/pull/426 [8] https://github.com/pypa/python-packaging-user-guide/issues/297#issuecomment-362426940 [9] https://groups.google.com/forum/#!topic/pypa-dev/jzXR3A3E-dw [10] https://www.pypa.io/en/latest/roadmap/ [11] https://github.com/python/peps/pull/566 [12] https://wiki.python.org/psf/PackagingWG/2018-02-05-Warehouse [13] https://twitter.com/EWDurbin/status/955415184339849217 [14] https://github.com/pypa/warehouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 [15] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html [16] 
https://blog.mozilla.org/blog/2018/01/23/moss-q4-supporting-python-ecosystem/ -- Sumana Harihareswara Warehouse project manager Changeset Consulting https://changeset.nyc -- Sumana Harihareswara Changeset Consulting https://changeset.nyc From tritium-list at sdamon.com Wed Feb 7 00:21:32 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Wed, 7 Feb 2018 00:21:32 -0500 Subject: [Distutils] draft PEP: manylinux2 Message-ID: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> -----Original Message----- From: Alex Walters [mailto:tritium-list at sdamon.com] Sent: Wednesday, February 7, 2018 12:21 AM To: 'Janzert' Subject: RE: [Distutils] draft PEP: manylinux2 > -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig-bounces+tritium- > list=sdamon.com at python.org] On Behalf Of Janzert > Sent: Tuesday, February 6, 2018 3:33 PM > To: Distutils-Sig at Python.Org > Subject: Re: [Distutils] draft PEP: manylinux2 > > On 2/5/2018 16:01, Nick Coghlan wrote: > > Compare: > > > > - manylinux1 vs manylinux2 vs manylinux3 > > - manylinux2007 vs manylinux2010 vs manylinux2014 > > > > I'll leave this just as a data point (anecdote) from someone that isn't > heavily involved with linux sysadmin or python packaging. Feel free to > make of it what you like. I generally run debian stable and occasionally > ubuntu lts on servers and the latest ubuntu for my workstation. > > If I were looking to install a package and one of the binaries available > is manylinux2010 I probably completely pass over that option and don't > even attempt using it. My assumption would be anything that old probably > isn't going to work on my 2016 or newer OS. Whereas manylinux(1, 2, 3) I > would think has a good chance of working on any reasonably modern linux. > > If not for having read the discussion here I would have interpreted a > date, especially a date that's the better part of a decade in the past, > completely the wrong way. 
> > Having said that, I'm pretty sure that pip should in general be handling > this decision for me and doing the right thing anyway? So it probably > doesn't matter too much. > > Janzert > This is a really good point. Since pip is the main interface to packages for end users anyways, we can call it manylinux8675309 and it wouldn't really matter to users - the name only really matters to package maintainers, not users. And because of that, manylinux2010, manylinux2014, etc makes more sense. A package maintainer is expected to be more educated about these matters, and that naming scheme is more useful to them. "What's the oldest linux system my code will run on?" is a very likely question a maintainer would have when building binary packages, and the year-based naming scheme is the logical answer. +1 to manylinux2010, -0 manylinux2 > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Wed Feb 7 00:50:08 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Feb 2018 15:50:08 +1000 Subject: [Distutils] Moving PEP 541 (PyPI's name management policy) forward Message-ID: Hi folks, As Sumana mentioned in her latest summary email, we've been looking at PEP 541 again, and figuring out how best to manage the process for getting that approved so that the folks handling name management requests can start relying on it. The key challenge with it has been the fact that this document is essentially defining a new operational policy for the Python Software Foundation (as the legal entity behind PyPI), and given that it relates to naming things on the internet, we can reasonably expect it to prove potentially contentious (and perhaps even turn litigious if someone *really* doesn't like a decision that was made in accordance with the policy).
Approving that kind of policy would be a significant burden to place on the shoulders of a single volunteer, so we discussed it in the PSF's Packaging working group [1] (which decides how best to allocate any funding the PSF receives or allocates specifically to support the packaging infrastructure), and amended the approval process for this particular PEP as follows: - the BDFL-Delegate is now Mark Mangoba, in his role as the PSF's IT manager - rather than approving the PEP directly, Mark will instead put it forward for a vote in the Packaging Working Group (similar to the way we handle funding allocation decisions) - input from the PSF's General Counsel will be explicitly requested prior to the vote The commit making those changes can be found here: https://github.com/python/peps/commit/6c356f0472b38e69e8a59a2dee1f275b61aeafab distutils-sig will remain the mailing list for discussing the actual contents of the policy (and any future amendments to it) - the change in approach just makes it clearer that the policy needs to be explicitly adopted by the PSF as an entity, rather than by an individual volunteer. Regards, Nick. [1] https://wiki.python.org/psf/PackagingWG -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 7 01:00:09 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Feb 2018 16:00:09 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> References: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> Message-ID: On 7 February 2018 at 15:21, Alex Walters wrote: > This is a really good point. Since pip is the main interface to packages > for end users anyways, we can call it manylinux8675309 and it wouldn't > really matter to users - the name only really matters to package > maintainers, not users. And because of that, manylinux2010, manylinux2014, > etc makes more sense. 
A package maintainer is expected to be more educated > about these matters, and that naming scheme is more useful to them. "Whats > the oldest linux system my code will run on?" is a very likely question a > maintainer would have when building binary packages, and the year-based > naming scheme is the logical answer. Exactly :) Knowing the baseline year gives publishers a clear set of "almost certainly won't work" environments: anything released prior to the baseline year (since the library versions included in the baseline either won't have existed yet, or may not have been broadly adopted). This is most likely to become important if we end up defining a newer platform variant for the sake of ppc64le and aarch64: targeting a compatibility baseline like "manylinux2017" would make it clear that it excludes things like Ubuntu 16.04 and RHEL/CentOS 7.0 (even if it ends up including a later RHEL/CentOS 7.x point release, or the mid-LTS opt-in platform upgrades that Canonical publishes) in a way that manylinuxN doesn't. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 7 02:09:03 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Feb 2018 17:09:03 +1000 Subject: [Distutils] Building a Python package build service for warehouse In-Reply-To: References: Message-ID: On 6 February 2018 at 19:33, Jannis Gebauer wrote: > The main idea is to run the build process in a restricted 'sandbox' docker > container that pulls the package from PyPI, installs it and runs a couple of > tools on it. Code is still pretty rough, nothing to look at at the moment > I'm afraid. Very cool!
While the language-independent nature of it likely makes it more complex than what you'd need for a more Python-centric approach, it may be worth your while to poke around at https://github.com/fabric8-analytics/fabric8-analytics-worker/blob/master/docs/worker_ecosystem_support.md which pursues a similar model based on running analyses in celery worker nodes running on Kubernetes. Unfortunately, the public design documentation for fabric8-analytics is minimal to nonexistent, so it can be rather hard to follow as a newcomer to the code base :( Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robin at reportlab.com Wed Feb 7 04:58:08 2018 From: robin at reportlab.com (Robin Becker) Date: Wed, 7 Feb 2018 09:58:08 +0000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> References: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> Message-ID: <22086f06-4bbc-c043-a389-d9ff6cbae36c@chamonix.reportlab.co.uk> On 07/02/2018 05:21, Alex Walters wrote: > ........... > This is a really good point. Since pip is the main interface to packages > for end users anyways, we can call it manylinux8675309 and it wouldn't > really matter to users - the name only really matters to package > maintainers, not users. And because of that, manylinux2010, manylinux2014, > etc makes more sense. A package maintainer is expected to be more educated > about these matters, and that naming scheme is more useful to them. "Whats > the oldest linux system my code will run on?" is a very likely question a ........ I dispute the fact that package maintainers should be more educated about these matters. The package maintainer usually knows about one or a few packages (in my case reportlab etc). I know very little about the architectures and platforms that people are using with reportlab today. Nor do I know (or need to know) about multiple linux distributions and what libraries they supported and in what year. 
I do agree that the name of the available packages shouldn't really matter. Provided there is information in the name that allows the requesting pip to decide on the appropriate package to use (or lack thereof) that should suffice. Is pip clever enough to decide this or will we have to rely on the mysterious _manylinux module? -- Robin Becker From ncoghlan at gmail.com Wed Feb 7 05:41:06 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Feb 2018 20:41:06 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <22086f06-4bbc-c043-a389-d9ff6cbae36c@chamonix.reportlab.co.uk> References: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> <22086f06-4bbc-c043-a389-d9ff6cbae36c@chamonix.reportlab.co.uk> Message-ID: On 7 February 2018 at 19:58, Robin Becker wrote: > On 07/02/2018 05:21, Alex Walters wrote: >> This is a really good point. Since pip is the main interface to packages >> for end users anyways, we can call it manylinux8675309 and it wouldn't >> really matter to users - the name only really matters to package >> maintainers, not users. And because of that, manylinux2010, >> manylinux2014, >> etc makes more sense. A package maintainer is expected to be more >> educated >> about these matters, and that naming scheme is more useful to them. >> "Whats >> the oldest linux system my code will run on?" is a very likely question a > > ........ > I dispute the fact that package maintainers should be more educated about > these matters. The package maintainer usually knows about one or a few > packages (in my case reportlab etc). I know very little about the > architectures and platforms that people are using with reportlab today. Nor > do I know (or need to know) about multiple linux distributions and what > libraries they supported and in what year. Aye, there will still be guidance on packaging.python.org for folks that just want an opinionated recommendation on what binary platforms are best to target when producing wheel files. 
For that scenario, whether we call it "manylinux2" or "manylinux2010" shouldn't matter too much (since folks will just be copying it from the recommendation, or using a helper tool like cibuildwheel). > I do agree that the name of the available packages shouldn't really matter. > Provided there is information in the name that allows the requesting pip to > decide on the appropriate package to use (or lack thereof) that should > suffice. Is pip clever enough to decide this or will we have to rely on the > mysterious _manylinux module? Hmm, that question prompted me to notice a flaw in the current wording of https://www.python.org/dev/peps/pep-0571/#platform-detection-for-installers. The way that's currently worded suggests that "bool(_manylinux.manylinux2010_compatible)" would be the only way to identify whether or not a manylinux2010 wheel should be considered for installation. That isn't the case: installers checking for manylinux2010 compatibility should fall back to "have_compatible_glibc(2, 12)" if there's no `_manylinux` module, or if that module doesn't include a "_manylinux.manylinux2010_compatible" attribute. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mrw at twistedmatrix.com Thu Feb 8 01:12:19 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Wed, 7 Feb 2018 22:12:19 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <0fbb01d39fd3$842bdfa0$8c839ee0$@sdamon.com> <22086f06-4bbc-c043-a389-d9ff6cbae36c@chamonix.reportlab.co.uk> Message-ID: <20180208061219.GA16156@alakwan> On Wed, Feb 07, 2018 at 08:41:06PM +1000, Nick Coghlan wrote: > > Hmm, that question prompted me to notice a flaw in the current wording > of https://www.python.org/dev/peps/pep-0571/#platform-detection-for-installers. > > The way that's currently worded suggests that > "bool(_manylinux.manylinux2010_compatible)" would be the only way to > identify whether or not a manylinux2010 wheel should be considered for > installation. 
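In rough Python, the detection order Nick describes could look like the sketch below. The names here are illustrative rather than pip's actual API: `manylinux_module` stands in for the optional `_manylinux` module (None when it cannot be imported), and `glibc_version` stands in for the runtime glibc probe that `have_compatible_glibc()` performs.

```python
def manylinux2010_compatible(manylinux_module, glibc_version):
    """Sketch of the corrected check: an explicit
    _manylinux.manylinux2010_compatible attribute is authoritative when
    present; otherwise fall back to the equivalent of
    have_compatible_glibc(2, 12)."""
    if manylinux_module is not None:
        declared = getattr(manylinux_module, "manylinux2010_compatible", None)
        if declared is not None:
            return bool(declared)
    # No module, or a module without the attribute: use the glibc check.
    return glibc_version >= (2, 12)
```

The point of the fallback is that most systems ship no `_manylinux` module at all, and they should still be considered eligible whenever their glibc is 2.12 or newer.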
> > That isn't the case: installers checking for manylinux2010 > compatibility should fall back to "have_compatible_glibc(2, 12)" if > there's no `_manylinux` module, or if that module doesn't include a > "_manylinux.manylinux2010_compatible" attribute. Agh, thank you! Fortunately that's exactly what the draft pip implementation does: https://github.com/pypa/pip/pull/5008/files#diff-542f0dc2284dcb0cb6a0382dfeeb8ed2R160 I've pushed a new branch that includes this change: https://github.com/markrwilliams/peps/commit/4476f9c77b5adb6df4dcc00829303a5613ec7d9d -- Mark Williams mrw at twistedmatrix.com From mrw at twistedmatrix.com Fri Feb 9 17:09:01 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Fri, 9 Feb 2018 14:09:01 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <20180206045103.GA26538@alakwan> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206045103.GA26538@alakwan> Message-ID: <20180209220900.GA20178@alakwan> On Mon, Feb 05, 2018 at 08:51:03PM -0800, Mark Williams wrote: > On Sat, Feb 03, 2018 at 12:11:51AM -0800, Nathaniel Smith wrote: > > > > We can and should use newer compiler versions than that, and probably > > upgrade them again over the course of the image's lifespan, so let's > > just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/ > > as well for the same reason.) > > Wouldn't upgrading compiler versions potentially imply a change in > libgcc symbol versions? If so, that would either require the PEP be > updated for each new compiler, or the removal of libgcc from the > library whitelist. > > I may be overly paranoid about this. I was overly paranoid about it :) Geoffrey Thomas helped to confirm that devtoolset-7 does the Right Thing. 
Its libgcc_s.so is actually the following linker script:

# cat /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/libgcc_s.so
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf64-x86-64)
GROUP ( /lib64/libgcc_s.so.1 libgcc.a )

The GROUP command instructs ld to search for GCC symbols in the system's libgcc_s.so first; if they're present there, the resulting binary will load them from it at runtime, and the binary will match the ABI policy described in the PEP. If the binary requires newer symbols that aren't present in the system's libgcc_s.so, ld will statically link them in from its internal libgcc.a. This result will also match the ABI policy, in that it will either depend only on the subset of symbols available in CentOS 6's default libgcc_s.so, or on none at all. See the ld documentation for an explanation of GROUP: https://sourceware.org/binutils/docs/ld/File-Commands.html -- Mark Williams mrw at twistedmatrix.com From mrw at twistedmatrix.com Fri Feb 9 17:18:46 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Fri, 9 Feb 2018 14:18:46 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> Message-ID: <20180209221846.GB20178@alakwan> On Mon, Feb 05, 2018 at 03:17:55PM -0600, Jonathan Helmus wrote: > On 02/03/2018 02:11 AM, Nathaniel Smith wrote: > > > Docker Images > > > ------------- > > > > > > ``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are > > > provided for building binary ``linux`` wheels that can reliably be > > > converted to ``manylinux2`` wheels. [8]_ These images come with a > > > full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` > > > 4.8.2) as well as the latest releases of Python and ``pip``.
> > We can and should use newer compiler versions than that, and probably > > upgrade them again over the course of the image's lifespan, so let's > > just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/ > > as well for the same reason.) > > > > Moving to GCC 5 and above will introduce the new libstdc++ ABI. [1] The > manylinux2 standard needs to define which ABI compiled libraries should be > compiled against, as older versions of libstdc++ will not support the new > ABI. From what I recall the devtoolset packages for CentOS can only target > the older, _GLIBCXX_USE_CXX11_ABI=0, ABI. Geoffrey Thomas helped to confirm that devtoolset-7 does the Right Thing here, as much as that's possible. Its libstdc++.so is actually the following linker script:

# cat /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/libstdc++.so
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf64-x86-64)
INPUT ( /usr/lib64/libstdc++.so.6 -lstdc++_nonshared )

The INPUT command instructs ld to search for GCC symbols in the system's libstdc++.so first; if they're present there, the resulting binary will load them from it at runtime, and the binary will match the ABI policy described in the PEP. If the binary requires newer symbols that aren't present in the system's libstdc++.so, ld will statically link them in from its internal libstdc++_nonshared.a. This result will also match the ABI policy, in that it will either depend only on the subset of symbols available in CentOS 6's default libstdc++.so, or on none at all. See the ld documentation for an explanation of INPUT: https://sourceware.org/binutils/docs/ld/File-Commands.html Geoffrey did point out that there's a potential issue if two C++ libraries end up with their own statically-linked implementations of std::string or std::list, whose ABIs changed with GCC 5.
If instances of these classes are allocated in a library built with pre-GCC 5 versions and freed in another built with post-GCC 5 versions, something bad might happen (at least, that's my understanding). I'm unsure how serious this issue will be in practice but it's worth discussion! -- Mark Williams mrw at twistedmatrix.com From mrw at twistedmatrix.com Sat Feb 10 01:03:59 2018 From: mrw at twistedmatrix.com (Mark Williams) Date: Fri, 9 Feb 2018 22:03:59 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> Message-ID: <20180210060359.GA13556@alakwan> On Tue, Feb 06, 2018 at 05:55:36PM +1000, Nick Coghlan wrote: > The CalVer idea first came up in the context of skipping ahead in the > numbering sequence to go straight to a baseline that supported ppc64le > and/or aarch64. Even 2014 would likely be too old for that, since > CentOS 7 didn't support those at launch, and neither did Ubuntu 14.04. > While such a PEP hasn't actually been written yet, the kinds of > numbers we were looking at for a suitable baseline year were around > 2015 or 2016, as that's when support for them started showing up in > mainline Linux distros.
> > Mainly because we aim for "oldest version still used in new releases > that year", but it's also why each version still needs a PEP that maps > out the actual platform ABI as specific library versions. > > > Since the definition of `manylinux` depends on the state of RHEL and > > CentOS, maybe we should change the sequence number to match the > > underlying major release of RHEL/CentOS. That would have `manylinux2` > > become `manylinux6`, and its successor `manylinux7`. If we require > > that each `manylinux` support all the platforms its RHEL/CentOS > > supports, implementers and users could simply refer to that release to > > know what they're in for. > > We discussed that too, and one key reason for not doing it is that we > only build off Red Hat's platform definitions as a matter of > convenience, and because they currently have the longest support > lifecycles. > > In the future, we could instead decide that a particular version of > Ubuntu LTS or Debian stable (or even some other LTS distro) was a more > suitable baseline for a given manylinux version, depending on how the > relative timing works out. > > For non-RHEL/CentOS users, the RHEL/CentOS version is also just as > arbitrary a sequence number as 1-based indexing. > > By contrast, year-based CalVer maintains distro-neutrality, while also > giving a good sense of the maximum age of compatible target platforms. > (e.g. given "manylinux2010", it's a pretty safe guess that Ubuntu > 12.04, 14.04 and 16.04 are all expected to be compatible, while that > isn't as clear given "manylinux2" or "manylinux6") I'm convinced we should use CalVer. I'm still skeptical of the utility of CalVer here. Debian 6.0 (squeeze), for example, was released in 2011 but is incompatible with `manylinux2010` wheels because it uses glibc 2.11. I'm concerned that the sooner `manylinux2015` is defined, the more likely it is to describe too fuzzy an ABI era for CalVer to convey meaningful information to the LTS audience. 
What makes it worth it is the ability to skip and backfill versions. As you pointed out, it would be a strange version scheme that had an architecture that gained wide support in 2015 become `manylinux3` and one that gained wide support in 2014 `manylinux4`. In particular, Geoffrey Thomas pointed out that it should be possible to produce nearly-`manylinux1` compliant wheels with a much newer toolchain: https://mail.python.org/pipermail/wheel-builders/2017-July/000283.html We may decide that an update to `manylinux1` is worthwhile, and by switching to CalVer, backfilling that version as `manylinux2008` would be straightforward. -- Mark Williams mrw at twistedmatrix.com From njs at pobox.com Sun Feb 11 04:17:33 2018 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 11 Feb 2018 01:17:33 -0800 Subject: [Distutils] Installed Extras Metadata In-Reply-To: References: Message-ID: On Fri, Jan 26, 2018 at 8:37 PM, Nick Coghlan wrote: > On 27 January 2018 at 13:46, Nathaniel Smith wrote: >> >> The advantages are: >> >> - it's a simpler way to record the information you want >> here, without adding more special cases to dist-info: most code >> doesn't even have to know what 'extras' are, just what packages are >> >> - it opens the door to lots of more advanced features, like >> 'foo[test]' being a package that actually contains foo's tests, or >> build variants like 'numpy[mkl]' being numpy built against the MKL >> library, or maybe making it possible to track which version of numpy's >> ABI different packages use. (The latter two cases need some kind of >> provides: support, which is impossible right now because we don't want >> to allow random-other-package to say 'provides-dist: cryptography'; >> but, it would be okay if 'numpy[mkl]' said 'provides-dist: numpy', >> because we know 'numpy[mkl]' and 'numpy' are maintained by the same >> people.) >> >> I know there's a lot of precedent for this kind of clever use of >> metadata-only packages in Debian (e.g.
search for "metapackages"), and >> I guess the RPM world probably has similar tricks. > > > While I agree with this idea in principle, I'll note that RPM makes it > relatively straightforward to have a single SRPM emit multiple RPMs, so > defining a metapackage is just a few extra lines in a spec file. (I'm not > sure how Debian's metapackages work, but I believe they're similarly simple > on the publisher's side). > > We don't currently have a comparable mechanism to readily allow a single > source project to expand to multiple package index entries that all share a > common sdist, but include different subsets in their respective wheel files > (defining one would definitely be possible, it's just a tricky migration > problem to work out). Yeah, the migration is indeed the tricky part. Here's one possible approach. First, figure out what exactly an "extra" should become in the new system. I think it's: if package $PACKAGE version $VERSION defines an extra $EXTRA, then that corresponds to a wheel named "$PACKAGE[$EXTRA]" (the brackets become part of the package name), version $VERSION, and it has Requires-Dist: $PACKAGE = $VERSION, as well as whatever requirements were originally part of the extra. Now, if we didn't have to worry about migrations, we'd extend setuptools/bdist_wheel so that when they see the current syntax for defining an extra, they generate extra wheels following the formula above. (So 'setup.py bdist_wheel' generates N+1 wheels for a package with N extras.) And we'd teach PyPI that packages named like "$PACKAGE[$EXTRA]" should be collected together with packages named "$PACKAGE" (e.g. the same access control apply to both, and probably you want to display them together in the UI when their versions match). And we'd teach pip that square brackets are legal in package names. And that'd be about it. 
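As a sketch of that formula (the function name and dict layout here are illustrative, not an agreed metadata API), the mapping from an extra to its synthetic wheel metadata is mechanical:

```python
def extra_as_wheel_metadata(package, version, extra, extra_requires):
    """Expand one extra into standalone wheel metadata per the formula
    above: "$PACKAGE[$EXTRA]" at the same version, depending on the
    base package pinned to that version plus the extra's own
    requirements."""
    return {
        "Name": "%s[%s]" % (package, extra),
        "Version": version,
        "Requires-Dist": ["%s (== %s)" % (package, version)]
                         + list(extra_requires),
    }
```

So a package with N extras would yield N of these synthetic entries alongside the base wheel, which is exactly why `setup.py bdist_wheel` would produce N+1 wheels.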
Of course, we do have to worry about migration, and in the first instance, what we care about is making pip's database of installed packages properly record these new wheels. So my proposal is:

- Requirements like 'requests[security,socks]' need to be expanded to 'requests[security], requests[socks]'. More specifically, when pip processes a requirement like '$PACKAGE[$EXTRA1,$EXTRA2,...] $OP1 $VERSION1, $OP2 $VERSION2, ...', it expands it to multiple packages and then applies the constraints to each of them: ['$PACKAGE[$EXTRA1] $OP1 $VERSION1, $OP2 $VERSION2 ...', '$PACKAGE[$EXTRA2] $OP1 $VERSION1, $OP2 $VERSION2 ...', ...].

- When pip needs to find a wheel like 'requests[security]', then it first checks to see if this exact wheel (with the brackets) is available on PyPI (or whatever package sources it has available). If so, it uses that. If not, then it falls back to looking for a 'requests' wheel, and if it finds one, and that wheel has 'extra' metadata, then it *uses that metadata to generate a wheel on the spot*, and then carries on as if it had found it on PyPI.

- Special case: when hash-checking mode is enabled and pip ends up doing this fallback, then pip always checks the hash against the wheel it found on PyPI, so 'requests[security] --hash=...' checks the hash of requests.whl, not the auto-generated requests[security].whl.

(There is some question to discuss here about how sdists should be handled: in many cases, possibly all of them, it doesn't really make sense to have separate sdists for different square-bracket packages. 'requests[security]' will probably always be generated from the requests source tree, and for build variants like 'numpy[mkl]' you definitely want to build that from numpy.tar.gz, with some special flag telling it to use the "mkl" configuration. So maybe when pip fails to find a wheel it should always go straight to the un-adorned sdist like requests.tar.gz, instead of checking for requests[security].tar.gz.
But this isn't going to make-or-break a v1 implementation.) If we implement just these two things, then I think that's enough for pip to start immediately tracking all the proper metadata for existing extras packages, and also provide a smooth onramp to the eventual features. Once this was working, we could enable uploading square-bracket packages to PyPI, and pip would start automatically picking them up where present. Then we could flip the switch for setuptools to start generating such packages. We'd probably also want to tweak PEP 517 so that the build backend is informed of exactly which package pip is looking for, to handle the case where numpy.tar.gz is expected to produce numpy[mkl].whl. And after that we could enable real Provides-Dist: metadata... -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sun Feb 11 07:15:25 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 11 Feb 2018 22:15:25 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <20180210060359.GA13556@alakwan> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> Message-ID: On 10 February 2018 at 16:03, Mark Williams wrote: > On Tue, Feb 06, 2018 at 05:55:36PM +1000, Nick Coghlan wrote: >> By contrast, year-based CalVer maintains distro-neutrality, while also >> giving a good sense of the maximum age of compatible target platforms. >> (e.g. given "manylinux2010", it's a pretty safe guess that Ubuntu >> 12.04, 14.04 and 16.04 are all expected to be compatible, while that >> isn't as clear given "manylinux2" or "manylinux6") > > I'm convinced we should use CalVer. > > I'm still skeptical of the utility of CalVer here. Debian 6.0 > (squeeze), for example, was released in 2011 but is incompatible with > `manylinux2010` wheels because it uses glibc 2.11.
I'm concerned that > the sooner `manylinux2015` is defined, the more likely it is to > describe too fuzzy an ABI era for CalVer to convey meaningful > information to the LTS audience. Yeah, I'd agree with that - there's a fuzzy multi-year period from when libraries are available to when they become ubiquitous, so given a "manylinux2010", it would be surprising if a 2012 release like Ubuntu 12.04 didn't support it, but for distros released in 2010 or 2011 you'd still need to check the details. And even after that adoption period, there are always going to be distros that make other choices (like Alpine deciding glibc was too large). > What makes it worth it is the ability to skip and backfill versions. > As you you pointed out, it would be a strange version scheme that had > an architecture that gained wide support in 2015 become `manylinux3` > and one that gained wide support in 2014 `manylinux4`. > > In particular, Geoffrey Thomas pointed out that it should be possible > to produce nearly-`manylinux1` compliant wheels with a much newer > toolchain: > > https://mail.python.org/pipermail/wheel-builders/2017-July/000283.html > > We may decide that an update to `manylinux1` is worthwhile, and by > switching to CalVer, backfilling that version as `manylinux2008` would > be straight forward. Indeed, that concrete pragmatic benefit provides a more compelling rationale for switching the numbering scheme. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Feb 11 07:59:43 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 11 Feb 2018 22:59:43 +1000 Subject: [Distutils] Installed Extras Metadata In-Reply-To: References: Message-ID: On 11 February 2018 at 19:17, Nathaniel Smith wrote: [snip a potential plan for migrating to extras-as-virtual-packages] > - When pip needs to find a wheel like 'requests[security]', then it > first checks to see if this exact wheel (with the brackets) is > available on PyPI (or whatever package sources it has available). If > so, it uses that. If not, then it falls back to looking for a > 'requests' wheel, and if it finds one, and that wheel has 'extra' > metadata, then it *uses that metadata to generate a wheel on the > spot*, and then carries on as if it had found it on PyPI. It occurs to me that this synthetic metadata generation aspect could be pursued first, since these "installed extras" pseudo-packages should be relatively straightforward to create:

- dist-info directory derived from the corresponding install package name plus the extra name
- a generated METADATA file using the "name[extra]" pseudo-name and amended dependency spec
- an appropriate RECORD file
- INSTALLER and REQUESTED files as defined in PEP 376

While we could potentially use a different suffix for the synthetic pseudo-packages (e.g. "{name}[{extra}]-{version}.extra-info"), such that anything looking only for "*.dist-info" packages would ignore that, we could also decide that the non-standard name structure was enough of a distinguishing marker. Once we had install-time generation of those, then we could *separately* consider if we wanted to allow extras to expand beyond their current "optional dependencies" capabilities to also cover the installation of additional files that aren't in the base distribution, without requiring those files to be moved out to a separate dependency.
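A minimal sketch of generating such a synthetic directory follows. The helper name, the choice of the ".dist-info" suffix with the bracketed pseudo-name, and the metadata version are illustrative assumptions, and RECORD generation is elided:

```python
import os

def write_extra_dist_info(site_packages, name, version, extra, extra_requires):
    """Generate the synthetic dist-info directory sketched above: a
    "name[extra]" pseudo-package whose METADATA depends on the base
    distribution plus the extra's requirements, with INSTALLER and
    REQUESTED files as defined in PEP 376."""
    dist_info = os.path.join(
        site_packages, "%s[%s]-%s.dist-info" % (name, extra, version))
    os.makedirs(dist_info)
    lines = [
        "Metadata-Version: 1.2",
        "Name: %s[%s]" % (name, extra),
        "Version: %s" % version,
        # The pseudo-package always depends on its base distribution...
        "Requires-Dist: %s (== %s)" % (name, version),
    ]
    # ...plus whatever the extra itself pulled in.
    lines += ["Requires-Dist: %s" % req for req in extra_requires]
    with open(os.path.join(dist_info, "METADATA"), "w") as f:
        f.write("\n".join(lines) + "\n")
    with open(os.path.join(dist_info, "INSTALLER"), "w") as f:
        f.write("pip\n")
    # An empty REQUESTED file marks a directly-requested install.
    open(os.path.join(dist_info, "REQUESTED"), "w").close()
    return dist_info
```

Anything that walks installed distributions by globbing "*.dist-info" would then see the pseudo-package automatically, which is the distinguishing-marker trade-off discussed above.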
The benefit of that split approach is that the "Database of installed extras" aspect would only requires enhancements to pip and other installers, and to tools that read the installation metadata, while the full proposal would also require changes to PyPI and to publishing tools. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tritium-list at sdamon.com Sun Feb 11 10:53:05 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Sun, 11 Feb 2018 10:53:05 -0500 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> Message-ID: <010201d3a350$68764950$3962dbf0$@sdamon.com> Just out of curiosity, I did a little experiment. I explained this thread to my mother. My mother is a wonderful woman, but she wouldn't know a byte from a bite. I explained it as follows: "There is a tool that can make software run on a lot of different computers, but only if you build it for an ancient computer. The tool is a little complicated - you have to learn how to get it and to use it with any success. The people who make it are considering changing the way they name it. The new naming scheme is the bare minimum year the computer running the code can be from. The old naming scheme is just a sequence - 1, 2, 3. Would you be confused by the new naming scheme? Do you think people using it would be confused?" Her response, which I will paraphrase because my lovely mother likes four letter words, "If it's complicated to use already, then changing the name isn't any more confusing." Not exactly scientific, but based on that, I don't think CalVer will be that confusing to library developers. 
> -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig-bounces+tritium- > list=sdamon.com at python.org] On Behalf Of Nick Coghlan > Sent: Sunday, February 11, 2018 7:15 AM > To: Mark Williams > Cc: Geoffrey Thomas ; DistUtils mailing list > ; Mark Williams > Subject: Re: [Distutils] draft PEP: manylinux2 > > On 10 February 2018 at 16:03, Mark Williams > wrote: > > On Tue, Feb 06, 2018 at 05:55:36PM +1000, Nick Coghlan wrote: > >> By contrast, year-based CalVer maintains distro-neutrality, while also > >> giving a good sense of the maximum age of compatible target platforms. > >> (e.g. given "manylinux2010", it's a pretty safe guess that Ubuntu > >> 12.04, 14.04 and 16.04 are all expected to be compatible, while that > >> isn't as clear given "manylinux2" or "manylinux6") > > > > I'm convinced we should use CalVer. > > > > I'm still skeptical of the utility of CalVer here. Debian 6.0 > > (squeeze), for example, was released in 2011 but is incompatible with > > `manylinux2010` wheels because it uses glibc 2.11. I'm concerned that > > the sooner `manylinux2015` is defined, the more likely it is to > > describe too fuzzy an ABI era for CalVer to convey meaningful > > information to the LTS audience. > > Yeah, I'd agree with that - there's a fuzzy multi-year period from > when libraries are available to when they become ubiquitous, so given > a "manylinux2010", it would be surprising if a 2012 release like > Ubuntu 12.04 didn't support it, but for distros released in 2010 or > 2011 you'd still need to check the details. And even after that > adoption period, there are always going to be distros that make other > choices (like Alpine deciding glibc was too large). > > > What makes it worth it is the ability to skip and backfill versions. > > As you you pointed out, it would be a strange version scheme that had > > an architecture that gained wide support in 2015 become `manylinux3` > > and one that gained wide support in 2014 `manylinux4`. 
> > > > In particular, Geoffrey Thomas pointed out that it should be possible > > to produce nearly-`manylinux1` compliant wheels with a much newer > > toolchain: > > > > https://mail.python.org/pipermail/wheel-builders/2017-July/000283.html > > > > We may decide that an update to `manylinux1` is worthwhile, and by > > switching to CalVer, backfilling that version as `manylinux2008` would > > be straight forward. > > Indeed, that concrete pragmatic benefit provides a more compelling > rationale for switching the numbering scheme. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From matthew.brett at gmail.com Sun Feb 11 16:34:07 2018 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 11 Feb 2018 13:34:07 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <010201d3a350$68764950$3962dbf0$@sdamon.com> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> Message-ID: On Sun, Feb 11, 2018 at 7:53 AM, Alex Walters wrote: > Just out of curiosity, I did a little experiment. I explained this thread > to my mother. My mother is a wonderful woman, but she wouldn't know a byte > from a bite. I explained it as follows: > > "There is a tool that can make software run on a lot of different computers, > but only if you build it for an ancient computer. The tool is a little > complicated - you have to learn how to get it and to use it with any > success. The people who make it are considering changing the way they name > it. The new naming scheme is the bare minimum year the computer running the > code can be from. 
I think the problem is that the whole discussion turns on whether we should care about the fact that it's more complicated than the last sentence would suggest. Cheers, Matthew From ncoghlan at gmail.com Mon Feb 12 18:53:06 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 13 Feb 2018 09:53:06 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> Message-ID: On 12 February 2018 at 07:34, Matthew Brett wrote: > On Sun, Feb 11, 2018 at 7:53 AM, Alex Walters wrote: >> "There is a tool that can make software run on a lot of different computers, >> but only if you build it for an ancient computer. The tool is a little >> complicated - you have to learn how to get it and to use it with any >> success. The people who make it are considering changing the way they name >> it. The new naming scheme is the bare minimum year the computer running the >> code can be from. > > I think the problem is that the whole discussion turns on whether we > should care about the fact that it's more complicated than the last > sentence would suggest. It isn't really - what "manylinux2010" tells you is that distros released before that year will almost certainly *not* comply with that variant of the spec, since at least some of the relevant library versions weren't available yet. While it also conveys some fuzzier signals (such as "distros released in 2012 or later will *probably* be compatible, unless they make some unconventional library choices"), I agree with Mark's last email explaining that we shouldn't view that as the primary benefit of switching to CalVer based numbering: the primary practical benefit is the fact that CalVer based numbering will let us backfill specs for arbitrary years whenever it suits us to do so (e.g. 
a manylinux2008 as the oldest base platform that more recent compilers are able to target) We're also not assessing the CalVer numbering scheme in a vacuum, we're assessing it relative to: * numbering in order of definition (which would make backfilling intermediate years confusing) * numbering with arbitrary gaps (which allows backfilling up to a point, but means having to explain the gaps [1]) * numbering based on RHEL/CentOS version (which makes it difficult to ever choose a different baseline distro and still doesn't solve the backfilling problem) And from that point of view, we can see that if we assume CalVer as the recommended path forward, then we wouldn't have a compelling argument for switching away from it to any of the other plausible schemes. Cheers, Nick. [1] Ah, that would bring back memories of BASIC line numbering :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robin at reportlab.com Tue Feb 13 04:59:13 2018 From: robin at reportlab.com (Robin Becker) Date: Tue, 13 Feb 2018 09:59:13 +0000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> Message-ID: <93867ce8-66f3-c011-15c1-f8ca6ff2089d@chamonix.reportlab.co.uk> I am a bit confused about the meaning of 'backfilling'. Does it mean that a particular manylinux will evolve in time so an early manylinux2010 wheel will differ from a later one? 
-- Robin Becker From chris.jerdonek at gmail.com Tue Feb 13 05:07:19 2018 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 13 Feb 2018 02:07:19 -0800 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: <93867ce8-66f3-c011-15c1-f8ca6ff2089d@chamonix.reportlab.co.uk> References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> <93867ce8-66f3-c011-15c1-f8ca6ff2089d@chamonix.reportlab.co.uk> Message-ID: On Tue, Feb 13, 2018 at 1:59 AM, Robin Becker wrote: > I am a bit confused about the meaning of 'backfilling'. Does it mean that a > particular manylinux will evolve in time so an early manylinux2010 wheel > will differ from a later one? I think it just means that, say, manylinux2008 could be released after manylinux2010. So the version numbers wouldn't need to increase with each release as it would if the numbering scheme were, say, manylinux1, manylinux2, manylinux3, etc. --Chris > -- > Robin Becker > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From robin at reportlab.com Tue Feb 13 06:03:08 2018 From: robin at reportlab.com (Robin Becker) Date: Tue, 13 Feb 2018 11:03:08 +0000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> <93867ce8-66f3-c011-15c1-f8ca6ff2089d@chamonix.reportlab.co.uk> Message-ID: On 13/02/2018 10:07, Chris Jerdonek wrote: > On Tue, Feb 13, 2018 at 1:59 AM, Robin Becker wrote: >> I am a bit confused about the meaning of 'backfilling'. Does it mean that a >> particular manylinux will evolve in time so an early manylinux2010 wheel >> will differ from a later one? 
> > I think it just means that, say, manylinux2008 could be released after > manylinux2010. So the version numbers wouldn't need to increase with > each release as it would if the numbering scheme were, say, > manylinux1, manylinux2, manylinux3, etc. > > --Chris thanks-- Robin Becker From ncoghlan at gmail.com Tue Feb 13 12:28:41 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Feb 2018 03:28:41 +1000 Subject: [Distutils] draft PEP: manylinux2 In-Reply-To: References: <1517443284.3610393.1255271800.11F2434B@webmail.messagingengine.com> <20180206060522.GA22396@alakwan> <20180210060359.GA13556@alakwan> <010201d3a350$68764950$3962dbf0$@sdamon.com> <93867ce8-66f3-c011-15c1-f8ca6ff2089d@chamonix.reportlab.co.uk> Message-ID: On 13 February 2018 at 20:07, Chris Jerdonek wrote: > On Tue, Feb 13, 2018 at 1:59 AM, Robin Becker wrote: >> I am a bit confused about the meaning of 'backfilling'. Does it mean that a >> particular manylinux will evolve in time so an early manylinux2010 wheel >> will differ from a later one? > > I think it just means that, say, manylinux2008 could be released after > manylinux2010. So the version numbers wouldn't need to increase with > each release as it would if the numbering scheme were, say, > manylinux1, manylinux2, manylinux3, etc. Yep, exactly this (the idea originally came from the fact we're going to need a manylinux variant with a baseline year around 2014 or 2015, or potentially even later, if we want to support aarch64 and/or ppc64le). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From sh at changeset.nyc Tue Feb 13 23:22:25 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Tue, 13 Feb 2018 23:22:25 -0500 Subject: [Distutils] Packaging/Warehouse sprint at PyCon 2018 In-Reply-To: <9c6f5ec0-e598-3753-3b0e-8e423fc5bc66@changeset.nyc> References: <9c6f5ec0-e598-3753-3b0e-8e423fc5bc66@changeset.nyc> Message-ID: Reminder: this Thursday, Feb. 
15th, is the last day to request financial aid to attend PyCon https://us.pycon.org/2018/financial-assistance/ and thus the sprints. If money's a reason you're assuming you can't come join us and improve Warehouse and other Python packaging/distribution tools, I hope you'll apply for financial assistance. On 01/30/2018 01:39 PM, Sumana Harihareswara wrote: > In case you're planning your PyCon Cleveland travel: we are planning to > hold a Warehouse/packaging sprint at PyCon (the sprints are Monday, May > 14th - Thursday, May 17th 2018). > > We welcome package maintainers, backend and frontend web developers, > infrastructure administrators, technical writers, and testers to help us > make the new PyPI, and the packaging ecosystem more generally, as usable > and robust as possible. I took the liberty of updating > https://us.pycon.org/2018/community/sprints/ to say so. > > Once we're closer to the sprints I'll work on a more detailed list of > things we'll work on in Cleveland. > -- Sumana Harihareswara Changeset Consulting https://changeset.nyc From sh at changeset.nyc Tue Feb 13 23:16:17 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Tue, 13 Feb 2018 23:16:17 -0500 Subject: [Distutils] Fwd: Warehouse: package manager features & question about advertising In-Reply-To: References: Message-ID: <88a38385-3fc1-38d3-de15-bbff1877517b@changeset.nyc> Forwarded from pypa-dev https://groups.google.com/forum/#!topic/pypa-dev/xQb5RvDb5rc - the weekly Warehouse update. 
-------- Forwarded Message -------- Subject: Warehouse: package manager features & question about advertising Date: Tue, 13 Feb 2018 23:15:10 -0500 From: Sumana Harihareswara Here's your weekly update on Warehouse, powering the new PyPI.[0] Perhaps the biggest news is that the pace of our progress is making us optimistic; we expect to finish all the issues in the first milestone next week, which means Warehouse will have all the essential features package maintainers need.[1] When we get there, we'll be asking some active maintainers to take some time and poke at the site (in the browser and using the APIs) to let us know of any bugs or confusion. In the past week, we've made a ton of progress on, for instance, viewing releases[2] and managing user emails.[3] You can try those out right now at the pre-production site.[4] And the PyPI footer has various policies properly linked in the footer now -- thanks for your advice, PSF![5] Plus, a fix to human-friendly time indicators.[6] Also: Ever wonder how Twine is structured?[7] How does core metadata with multiple email addresses look?[8] And we continued our work on making our credentials handling for Kubernetes more robust.[9] Part of our work is setting up Warehouse on a good foundation for future work, so we spent some time sorting out stuff like: what API documentation do we need?[10] There's a new GitHub label for issues that ask: what APIs do we need?[11] And we restarted the discussion: How much work should we put into Warehouse localisation?[12] Luke Sneeringer volunteered to work on two-factor auth and PyPI API keys, which is great![13] As usual, the notes from our weekly meeting are on the Packaging Working Group wiki.[14] We've also introduced an overview of Warehouse's near-term progress using the GitHub "Projects" feature[15], in case you want to see what we're working on and what's next in a bit more detail than the roadmap.[16] Folks who want to help: we have several good first contribution issues[17] 
and a guide to getting started[18]. Also, as we prepare for future publicity pushes, please let me know (replying offlist is probably best): where should we advertise to reach occasional and non-Anglophone programmers?[19] Thanks to Mozilla and the PSF for their support for the PyPI & Warehouse work![20][21] [0] https://github.com/pypa/warehouse/ [1] https://github.com/pypa/warehouse/milestone/8 [2] https://github.com/pypa/warehouse/pull/2879 [3] https://github.com/pypa/warehouse/pull/2904 [4] https://pypi.org/ [5] https://github.com/pypa/warehouse/issues/1989 [6] https://github.com/pypa/warehouse/pull/2924 [7] https://github.com/pypa/twine/pull/296 [8] https://github.com/pypa/python-packaging-user-guide/pull/429 [9] https://github.com/cabotage/cabotage-app/commits/master [10] https://github.com/pypa/warehouse/issues/2913 [11] https://github.com/pypa/warehouse/labels/APIs%2Ffeeds [12] https://github.com/pypa/warehouse/issues/1453 [13] https://github.com/pypa/warehouse/issues/994 [14] https://wiki.python.org/psf/PackagingWG/2018-02-12-Warehouse [15] https://github.com/pypa/warehouse/projects/1 [16] https://wiki.python.org/psf/WarehouseRoadmap [17] https://github.com/pypa/warehouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 [18] https://warehouse.readthedocs.io/development/getting-started/ [19] https://ask.metafilter.com/319055/How-do-I-reach-occasional-and-non-Anglophone-Python-programmers [20] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html [21] https://blog.mozilla.org/blog/2018/01/23/moss-q4-supporting-python-ecosystem/ -- Sumana Harihareswara Warehouse project manager Changeset Consulting https://changeset.nyc From hlaz at hs-lausitz.de Tue Feb 13 15:19:20 2018 From: hlaz at hs-lausitz.de (Heiko L.) 
Date: Tue, 13 Feb 2018 21:19:20 +0100 Subject: [Distutils] pypi.python Error 403 Forbidden Message-ID: <8a941dd96af393470cbae39924873fd3.squirrel@webmail.fh-lausitz.de> hallo, I try to install a software from pypi.python.org and receive following errmsg: urllib2.HTTPError: HTTP Error 403: Forbidden and $ wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg HTTP request sent, awaiting response... 403 SSL is required After a few hours I found the following article: https://mail.python.org/pipermail/distutils-sig/2017-October/031712.html ...you can no longer access /simple/ and /packages/ over HTTP and you will have to directly go to HTTPS.... - It is true? - Is this the right place if I have any questions? regards Heiko From marius at gedmin.as Wed Feb 14 08:56:21 2018 From: marius at gedmin.as (Marius Gedminas) Date: Wed, 14 Feb 2018 15:56:21 +0200 Subject: [Distutils] pypi.python Error 403 Forbidden In-Reply-To: <8a941dd96af393470cbae39924873fd3.squirrel@webmail.fh-lausitz.de> References: <8a941dd96af393470cbae39924873fd3.squirrel@webmail.fh-lausitz.de> Message-ID: <20180214135621.ypdukpqc4dcjneqq@platonas> On Tue, Feb 13, 2018 at 09:19:20PM +0100, Heiko L. wrote: > I try to install a software from pypi.python.org and receive following errmsg: > urllib2.HTTPError: HTTP Error 403: Forbidden > > and > > $ wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg > HTTP request sent, awaiting response... 403 SSL is required > > After a few hours I found the following article: > https://mail.python.org/pipermail/distutils-sig/2017-October/031712.html > ...you can no longer access /simple/ and /packages/ over HTTP and you will have to directly go to HTTPS.... > > - It is true? Yes. > - Is this the right place if I have any questions? Yes. HTH, Marius Gedminas -- Any sufficiently advanced technology is indistinguishable from a rigged demo. 
- Andy Finkel, computer guy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From matthew.brett at gmail.com Wed Feb 14 09:40:46 2018 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 14 Feb 2018 14:40:46 +0000 Subject: [Distutils] pypi.python Error 403 Forbidden In-Reply-To: <8a941dd96af393470cbae39924873fd3.squirrel@webmail.fh-lausitz.de> References: <8a941dd96af393470cbae39924873fd3.squirrel@webmail.fh-lausitz.de> Message-ID: Hi, On Tue, Feb 13, 2018 at 8:19 PM, Heiko L. wrote: > hallo, > > I try to install a software from pypi.python.org and receive following errmsg: > urllib2.HTTPError: HTTP Error 403: Forbidden > > and > > $ wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg > HTTP request sent, awaiting response... 403 SSL is required > > > After a few hours I found the following article: > https://mail.python.org/pipermail/distutils-sig/2017-October/031712.html > ...you can no longer access /simple/ and /packages/ over HTTP and you will have to directly go to HTTPS.... > > - It is true? > - Is this the right place if I have any questions? Can you use pip to install your software instead? What are you trying to install, exactly? Is it just this old version of setuptools? Why do you need that? Cheers, Matthew From phildini at phildini.net Wed Feb 14 13:44:44 2018 From: phildini at phildini.net (Philip James) Date: Wed, 14 Feb 2018 10:44:44 -0800 Subject: [Distutils] Documenting project_urls to suggesting thanking or funding mechanisms Message-ID: Hello distutils-sig! This email comes to you by way of an issue I filed ( https://github.com/pypa/setuptools/issues/1276), the tl:dr; of which is: I would like it to be easier for packages and package maintainers to indicate how they want to be thanked or funded for their work. 
I created https://pypi.org/project/thanks/ to help with this, but right now it's reading from a JSON blob maintained by the library. I would prefer to read from the packages themselves, and the `project_urls` dict seems like a good place to do so. Here is an example from the thanks project: https://github.com/phildini/thanks/commit/a4e549338eb3e3c70b1dd5628b38dcbdbf63443a What I am planning to do: - Update the documentation and sample project with a `Project-URL: Funding` item, to encourage maintainers to add this information. What I would love to know from this list: - What do we think of the name "Funding"? Would you prefer "Thanks"? Cheers, and thank you for your time. Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From waynejwerner at gmail.com Thu Feb 15 09:36:23 2018 From: waynejwerner at gmail.com (Wayne Werner) Date: Thu, 15 Feb 2018 08:36:23 -0600 Subject: [Distutils] Documenting project_urls to suggesting thanking or funding mechanisms In-Reply-To: References: Message-ID: FWIW, some people aren't in it for the money, but would enjoy gratitude . I'd personally do `{'funding': 'https://fundme.example.com', 'thanks': ' https://saythanks.io/to/waynew '}` =========================================================== I welcome VSRE emails. Learn more at http://vsre.info/ =========================================================== On Wed, Feb 14, 2018 at 12:44 PM, Philip James wrote: > Hello distutils-sig! This email comes to you by way of an issue I filed ( > https://github.com/pypa/setuptools/issues/1276), the tl:dr; of which is: > > I would like it to be easier for packages and package maintainers to > indicate how they want to be thanked or funded for their work. I created > https://pypi.org/project/thanks/ to help with this, but right now it's > reading from a JSON blob maintained by the library. I would prefer to read > from the packages themselves, and the `project_urls` dict seems like a good > place to do so. 
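Philip's `project_urls` idea maps directly onto core metadata: each entry in the dict passed to `setuptools.setup()` is emitted as a `Project-URL: <label>, <url>` header in the package's PKG-INFO/METADATA, which a tool like `thanks` could read back instead of maintaining its own JSON blob. A minimal sketch of that round trip (the label names and URLs here are illustrative, not a fixed convention):

```python
# Sketch: how a project_urls mapping (as passed to setuptools.setup)
# is rendered into metadata headers, and how a consumer could recover
# the links. Labels and URLs are made up for illustration.
project_urls = {
    "Funding": "https://fundme.example.com",
    "Thanks": "https://saythanks.io/to/example",
}

# Each entry becomes a "Project-URL: <label>, <url>" metadata header:
headers = ["Project-URL: %s, %s" % (label, url)
           for label, url in sorted(project_urls.items())]
for line in headers:
    print(line)

# A tool could then parse installed metadata to find the funding link:
parsed = dict(h[len("Project-URL: "):].split(", ", 1) for h in headers)
print(parsed["Funding"])
```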
> > Here is an example from the thanks project: > https://github.com/phildini/thanks/commit/a4e549338eb3e3c70b1dd5628b38dc > bdbf63443a > > What I am planning to do: > - Update the documentation and sample project with a `Project-URL: > Funding` item, to encourage maintainers to add this information. > > What I would love to know from this list: > - What do we think of the name "Funding"? Would you prefer "Thanks"? > > Cheers, and thank you for your time. > > Philip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phildini at phildini.net Thu Feb 15 10:30:34 2018 From: phildini at phildini.net (Philip James) Date: Thu, 15 Feb 2018 07:30:34 -0800 Subject: [Distutils] Documenting project_urls to suggesting thanking or funding mechanisms In-Reply-To: References: Message-ID: That works for me. I'm happy to document both. On Thu, Feb 15, 2018 at 6:36 AM, Wayne Werner wrote: > FWIW, some people aren't in it for the money, but would enjoy gratitude > . > > I'd personally do `{'funding': 'https://fundme.example.com', 'thanks': ' > https://saythanks.io/to/waynew '}` > > > > =========================================================== > I welcome VSRE emails. Learn more at http://vsre.info/ > =========================================================== > > On Wed, Feb 14, 2018 at 12:44 PM, Philip James > wrote: > >> Hello distutils-sig! This email comes to you by way of an issue I filed ( >> https://github.com/pypa/setuptools/issues/1276), the tl:dr; of which is: >> >> I would like it to be easier for packages and package maintainers to >> indicate how they want to be thanked or funded for their work. I created >> https://pypi.org/project/thanks/ to help with this, but right now it's >> reading from a JSON blob maintained by the library. 
I would prefer to read >> from the packages themselves, and the `project_urls` dict seems like a good >> place to do so. >> >> Here is an example from the thanks project: >> https://github.com/phildini/thanks/commit/a4e549338eb3e3c70b >> 1dd5628b38dcbdbf63443a >> >> What I am planning to do: >> - Update the documentation and sample project with a `Project-URL: >> Funding` item, to encourage maintainers to add this information. >> >> What I would love to know from this list: >> - What do we think of the name "Funding"? Would you prefer "Thanks"? >> >> Cheers, and thank you for your time. >> >> Philip >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at kluyver.me.uk Thu Feb 15 11:09:02 2018 From: thomas at kluyver.me.uk (Thomas Kluyver) Date: Thu, 15 Feb 2018 16:09:02 +0000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 Message-ID: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> I'd like to once again prod this PEP towards completion: https://www.python.org/dev/peps/pep-0566/ The version numbering question has been decided in favour of calling it 2.1. The remaining question I'm aware of is whether to make the body text (in the email format of the metadata file) officially represent the package long description. I'm in favour of doing so: at least twine and flit already use this for metadata in wheels. Thomas From dholth at gmail.com Thu Feb 15 13:31:57 2018 From: dholth at gmail.com (Daniel Holth) Date: Thu, 15 Feb 2018 18:31:57 +0000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: I agree but have simply not had time. 
Edit it to add something like "Instead of a description header, the description may be provided in the message body, e.g. after a completely blank line to end the headers, followed by the long description with no indentation or other special formatting needed". Write something about putting the body back into a description key in the json version. Just delete the example parsing code which doesn't parse message bodies. I don't recall any other issues that would prevent approval. On Thu, Feb 15, 2018 at 11:14 AM Thomas Kluyver wrote: > I'd like to once again prod this PEP towards completion: > > https://www.python.org/dev/peps/pep-0566/ > > The version numbering question has been decided in favour of calling it > 2.1. > > The remaining question I'm aware of is whether to make the body text (in > the email format of the metadata file) officially represent the package > long description. I'm in favour of doing so: at least twine and flit > already use this for metadata in wheels. > > Thomas > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Feb 15 14:19:37 2018 From: dholth at gmail.com (Daniel Holth) Date: Thu, 15 Feb 2018 19:19:37 +0000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: Doing fine. Hashes do not belong in this PEP, which is intended to do just a little more than document the status quo. The document does provide for future enhancements to the spec without using the PEP process. Personally I am not a fan of putting concrete requirements or hashes of specific archives at this level. 
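The body-as-description convention Daniel describes is straightforward to handle with the stdlib email parser, including the conversion back to a `description` key in the JSON form. A minimal sketch, using a made-up metadata payload:

```python
# Parse a PKG-INFO/METADATA file in which the long description lives in
# the message body (as twine and flit already emit it), then fold the
# body into a "description" key as the PEP 566 JSON conversion suggests.
# The sample payload is made up for illustration.
from email.parser import Parser

SAMPLE = """\
Metadata-Version: 2.1
Name: example
Version: 0.0.1
Summary: An example package

This is the long description. It follows a completely blank line
after the headers, with no indentation or special formatting.
"""

msg = Parser().parsestr(SAMPLE)
metadata = {key.lower(): value for key, value in msg.items()}

# The message body, if present, becomes the "description" key.
body = msg.get_payload()
if body:
    metadata["description"] = body

print(metadata["name"])
print(metadata["description"].splitlines()[0])
```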
On Thu, Feb 15, 2018 at 1:44 PM Trishank Kuppusamy < trishank.kuppusamy at datadoghq.com> wrote: > Hi Daniel, long time no speak, how you doing? :) > > Maybe slightly off-topic, but I wonder if it the PEP allows for specifies > hashes of external requirements? Given a good copy of hashes, this would be > useful to survive a compromise of any package index. > > Does this make sense? Please let me know if you have questions, and thanks! > > On Thu, Feb 15, 2018 at 1:31 PM, Daniel Holth wrote: > >> I agree but have simply not had time. Edit it to add something like >> "Instead of a description header, the description may be provided in the >> message body, e.g. after a completely blank line to end the headers, >> followed by the long description with no indentation or other special >> formatting needed". Write something about putting the body back into a >> description key in the json version. Just delete the example parsing code >> which doesn't parse message bodies. I don't recall any other issues that >> would prevent approval. >> >> On Thu, Feb 15, 2018 at 11:14 AM Thomas Kluyver >> wrote: >> >>> I'd like to once again prod this PEP towards completion: >>> >>> https://www.python.org/dev/peps/pep-0566/ >>> >>> The version numbering question has been decided in favour of calling it >>> 2.1. >>> >>> The remaining question I'm aware of is whether to make the body text (in >>> the email format of the metadata file) officially represent the package >>> long description. I'm in favour of doing so: at least twine and flit >>> already use this for metadata in wheels. 
>>> >>> Thomas >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From di at di.codes Thu Feb 15 15:52:31 2018 From: di at di.codes (Dustin Ingram) Date: Thu, 15 Feb 2018 14:52:31 -0600 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: Sounds good. I've made the following edits, does this suffice? @ pep-0566.rst:84 @ Name The specification for the format of this field is now identical to the distribution name specification defined in PEP 508. +Description +::::::::::: +In addition to the ``Description`` header field, the distributions +description may instead be provided in the message body (i.e., after a +completely blank line following the headers, with no indentation or other +special formatting necessary). Version Specifiers ================== @ pep-0566.rst:135 @ as follows: single list containing all the original values for the given key; #. The ``Keywords`` field should be converted to a list by splitting the original value on whitespace characters; +#. The message body, if present, should be set to the value of the + ``description`` key. #. The result should be stored as a string-keyed dictionary. Summary of Differences From PEP 345 Thanks, D. On Thu, Feb 15, 2018 at 1:19 PM, Daniel Holth wrote: > Doing fine. > > Hashes do not belong in this PEP, which is intended to do just a little more > than document the status quo. The document does provide for future > enhancements to the spec without using the PEP process. 
> > Personally I am not a fan of putting concrete requirements or hashes of > specific archives at this level. > > On Thu, Feb 15, 2018 at 1:44 PM Trishank Kuppusamy > wrote: >> >> Hi Daniel, long time no speak, how you doing? :) >> >> Maybe slightly off-topic, but I wonder if it the PEP allows for specifies >> hashes of external requirements? Given a good copy of hashes, this would be >> useful to survive a compromise of any package index. >> >> Does this make sense? Please let me know if you have questions, and >> thanks! >> >> On Thu, Feb 15, 2018 at 1:31 PM, Daniel Holth wrote: >>> >>> I agree but have simply not had time. Edit it to add something like >>> "Instead of a description header, the description may be provided in the >>> message body, e.g. after a completely blank line to end the headers, >>> followed by the long description with no indentation or other special >>> formatting needed". Write something about putting the body back into a >>> description key in the json version. Just delete the example parsing code >>> which doesn't parse message bodies. I don't recall any other issues that >>> would prevent approval. >>> >>> On Thu, Feb 15, 2018 at 11:14 AM Thomas Kluyver >>> wrote: >>>> >>>> I'd like to once again prod this PEP towards completion: >>>> >>>> https://www.python.org/dev/peps/pep-0566/ >>>> >>>> The version numbering question has been decided in favour of calling it >>>> 2.1. >>>> >>>> The remaining question I'm aware of is whether to make the body text (in >>>> the email format of the metadata file) officially represent the package long >>>> description. I'm in favour of doing so: at least twine and flit already use >>>> this for metadata in wheels. 
>>>> Thomas >>>> _______________________________________________ >>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From hlaz at hs-lausitz.de Thu Feb 15 16:20:58 2018 From: hlaz at hs-lausitz.de (Heiko L.) Date: Thu, 15 Feb 2018 22:20:58 +0100 Subject: [Distutils] pypi.python Error 403 Forbidden Message-ID: Hallo Marius, >> After a few hours I found the following article: >> https://mail.python.org/pipermail/distutils-sig/2017-October/031712.html >> ...you can no longer access /simple/ and /packages/ over HTTP and you will have to directly go to HTTPS.... >> >> - It is true? > > Yes. > >> - Is this the right place if I have any questions? > > Yes. > I have not found an official website. I would have expected a note on the page https://pypi.python.org/pypi. Where can I find an official statement? Apparently there are programs on https://pypi.python.org which use http, but pypi.python.org does not allow http. It's not easy. A user should be able to decide for himself whether to use HTTP or HTTPS. regards Heiko. From hlaz at hs-lausitz.de Thu Feb 15 16:22:07 2018 From: hlaz at hs-lausitz.de (Heiko L.) Date: Thu, 15 Feb 2018 22:22:07 +0100 Subject: [Distutils] pypi.python Error 403 Forbidden Message-ID: <4b594205a2e3559db158541e508303d7.squirrel@webmail.fh-lausitz.de> hallo Matthew, > Can you use pip to install your software instead? What are you no. > trying to install, exactly? Is it just this old version of https://www2.fh-lausitz.de/launic/comp/tmp/180212.install_python_setuptools_tour.pdf > setuptools? Why do you need that? > installation prog without issue. 
regards Heiko From p.f.moore at gmail.com Thu Feb 15 18:11:06 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 15 Feb 2018 23:11:06 +0000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: On 15 February 2018 at 20:52, Dustin Ingram wrote: > Sounds good. I've made the following edits, does this suffice? > > @ pep-0566.rst:84 @ Name > The specification for the format of this field is now identical to the > distribution name specification defined in PEP 508. > > +Description > +::::::::::: > > +In addition to the ``Description`` header field, the distributions Minor typo: distribution's > +description may instead be provided in the message body (i.e., after a > +completely blank line following the headers, with no indentation or other > +special formatting necessary). > > Version Specifiers > ================== > > @ pep-0566.rst:135 @ as follows: > single list containing all the original values for the given key; > #. The ``Keywords`` field should be converted to a list by splitting the > original value on whitespace characters; > +#. The message body, if present, should be set to the value of the > + ``description`` key. > #. The result should be stored as a string-keyed dictionary. > > Summary of Differences From PEP 345 > > Thanks, > D. > > On Thu, Feb 15, 2018 at 1:19 PM, Daniel Holth wrote: >> Doing fine. >> >> Hashes do not belong in this PEP, which is intended to do just a little more >> than document the status quo. The document does provide for future >> enhancements to the spec without using the PEP process. >> >> Personally I am not a fan of putting concrete requirements or hashes of >> specific archives at this level. >> >> On Thu, Feb 15, 2018 at 1:44 PM Trishank Kuppusamy >> wrote: >>> >>> Hi Daniel, long time no speak, how you doing? 
:) >>> >>> Maybe slightly off-topic, but I wonder if it the PEP allows for specifies >>> hashes of external requirements? Given a good copy of hashes, this would be >>> useful to survive a compromise of any package index. >>> >>> Does this make sense? Please let me know if you have questions, and >>> thanks! >>> >>> On Thu, Feb 15, 2018 at 1:31 PM, Daniel Holth wrote: >>>> >>>> I agree but have simply not had time. Edit it to add something like >>>> "Instead of a description header, the description may be provided in the >>>> message body, e.g. after a completely blank line to end the headers, >>>> followed by the long description with no indentation or other special >>>> formatting needed". Write something about putting the body back into a >>>> description key in the json version. Just delete the example parsing code >>>> which doesn't parse message bodies. I don't recall any other issues that >>>> would prevent approval. >>>> >>>> On Thu, Feb 15, 2018 at 11:14 AM Thomas Kluyver >>>> wrote: >>>>> >>>>> I'd like to once again prod this PEP towards completion: >>>>> >>>>> https://www.python.org/dev/peps/pep-0566/ >>>>> >>>>> The version numbering question has been decided in favour of calling it >>>>> 2.1. >>>>> >>>>> The remaining question I'm aware of is whether to make the body text (in >>>>> the email format of the metadata file) officially represent the package long >>>>> description. I'm in favour of doing so: at least twine and flit already use >>>>> this for metadata in wheels. 
>>>>> >>>>> Thomas >>>>> _______________________________________________ >>>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>>> >>>> _______________________________________________ >>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Thu Feb 15 19:49:24 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 16 Feb 2018 10:49:24 +1000 Subject: [Distutils] pypi.python Error 403 Forbidden In-Reply-To: References: Message-ID: On 16 February 2018 at 07:20, Heiko L. wrote: > A user should be able to decide for himself whether to use HTTP or HTTPS. No, as without any other form of package or metadata signing, we're currently relying heavily on transport layer security to ensure that the information that the server sends is the information that the end user receives. Any access over HTTP can be transparently intercepted and altered to include a malicious payload (and there were a number of in-the-wild proofs-of-concept for this when using shared wireless networks before the service switched to HTTPS only). Regards, Nick. 
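Nick's point — that without package or metadata signing, transport-layer security is the only thing guaranteeing the bytes you receive are the bytes the server sent — is the same reason pip offers an out-of-band integrity check via its hash-checking mode (``--require-hashes``). The following is an illustrative sketch of that idea, not code from this thread:

```python
# Illustrative sketch: without TLS or signing, the only way to detect a
# download altered in transit is to compare it against a hash obtained
# out of band, which is what pip's hash-checking mode relies on.
import hashlib

def verify_sha256(payload: bytes, expected_hex: str) -> bool:
    """Return True only if payload matches the expected SHA-256 digest."""
    return hashlib.sha256(payload).hexdigest() == expected_hex

wheel = b"fake wheel contents"
digest = hashlib.sha256(wheel).hexdigest()

assert verify_sha256(wheel, digest)
# A man-in-the-middle flipping even a single byte is detected:
assert not verify_sha256(wheel + b"\x00", digest)
```

Over plain HTTP with no such check, that tampered payload would be installed without complaint — hence the HTTPS-only policy.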
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From vano at mail.mipt.ru Fri Feb 16 02:38:51 2018
From: vano at mail.mipt.ru (Ivan Pozdeev)
Date: Fri, 16 Feb 2018 10:38:51 +0300
Subject: [Distutils] pypi.python Error 403 Forbidden
In-Reply-To: <4b594205a2e3559db158541e508303d7.squirrel@webmail.fh-lausitz.de>
References: <4b594205a2e3559db158541e508303d7.squirrel@webmail.fh-lausitz.de>
Message-ID: <1ddce7c0-0774-ce6f-7aca-54cefce1a406@mail.mipt.ru>

On 16.02.2018 0:22, Heiko L. wrote:
> https://www2.fh-lausitz.de/launic/comp/tmp/180212.install_python_setuptools_tour.pdf

Those instructions are hacky, unsupported and obsolete. See
https://packaging.python.org/tutorials/installing-packages/

--
Regards,
Ivan

From mattbju2013 at gmail.com Fri Feb 16 17:39:06 2018
From: mattbju2013 at gmail.com (Matt Gieger)
Date: Fri, 16 Feb 2018 17:39:06 -0500
Subject: [Distutils] Invalid Packages
Message-ID: 

I would like to see a clause added to the "Invalid Package" section of PEP 541 that allows some mechanism for other PyPI users to mark a package as spam. Every day I see more spam packages added to PyPI, and currently the only way to get them removed is to create an issue on GitHub.

-Meichthys
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From trishank.kuppusamy at datadoghq.com Thu Feb 15 13:44:16 2018
From: trishank.kuppusamy at datadoghq.com (Trishank Kuppusamy)
Date: Thu, 15 Feb 2018 13:44:16 -0500
Subject: [Distutils] PEP 566 - Package metadata version 2.1
In-Reply-To: 
References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com>
Message-ID: 

Hi Daniel, long time no speak, how you doing? :)

Maybe slightly off-topic, but I wonder if the PEP allows for specifying hashes of external requirements? Given a good copy of the hashes, this would be useful to survive a compromise of any package index.

Does this make sense? Please let me know if you have questions, and thanks!
On Thu, Feb 15, 2018 at 1:31 PM, Daniel Holth wrote: > I agree but have simply not had time. Edit it to add something like > "Instead of a description header, the description may be provided in the > message body, e.g. after a completely blank line to end the headers, > followed by the long description with no indentation or other special > formatting needed". Write something about putting the body back into a > description key in the json version. Just delete the example parsing code > which doesn't parse message bodies. I don't recall any other issues that > would prevent approval. > > On Thu, Feb 15, 2018 at 11:14 AM Thomas Kluyver > wrote: > >> I'd like to once again prod this PEP towards completion: >> >> https://www.python.org/dev/peps/pep-0566/ >> >> The version numbering question has been decided in favour of calling it >> 2.1. >> >> The remaining question I'm aware of is whether to make the body text (in >> the email format of the metadata file) officially represent the package >> long description. I'm in favour of doing so: at least twine and flit >> already use this for metadata in wheels. >> >> Thomas >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From trishank.kuppusamy at datadoghq.com Thu Feb 15 15:09:08 2018 From: trishank.kuppusamy at datadoghq.com (Trishank Kuppusamy) Date: Thu, 15 Feb 2018 15:09:08 -0500 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: On Thu, Feb 15, 2018 at 2:19 PM, Daniel Holth wrote: > > Hashes do not belong in this PEP, which is intended to do just a little > more than document the status quo. The document does provide for future > enhancements to the spec without using the PEP process. > > Personally I am not a fan of putting concrete requirements or hashes of > specific archives at this level. > Fair enough, just throwing the idea out there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From waynejwerner at gmail.com Fri Feb 16 21:03:56 2018 From: waynejwerner at gmail.com (Wayne Werner) Date: Fri, 16 Feb 2018 20:03:56 -0600 Subject: [Distutils] Invalid Packages In-Reply-To: References: Message-ID: I don't know if it would be worth the effort, but I wonder if a Stack Overflow-esque rep system for packages would work. In a perfect world, I'm sure, but maybe not so much in ours. -W On Feb 16, 2018 6:24 PM, "Matt Gieger" wrote: > I would like to see a clause added to the "Ivalid Package" section of > PEP541 that allows some mechanism for other pypi users to mark a package as > spam. Every day i see more spam packages added to pypi and currently the > only way to get them removed is to create an issue in github. > > -Meichthys > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From njs at pobox.com Sat Feb 17 04:08:56 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 17 Feb 2018 01:08:56 -0800
Subject: [Distutils] Invalid Packages
In-Reply-To: 
References: 
Message-ID: 

On Fri, Feb 16, 2018 at 2:39 PM, Matt Gieger wrote:
> I would like to see a clause added to the "Ivalid Package" section of PEP541
> that allows some mechanism for other pypi users to mark a package as spam.
> Every day i see more spam packages added to pypi and currently the only way
> to get them removed is to create an issue in github.

The purpose of PEP 541 is to define which packages can/can't be
removed/reassigned. Actually finding those packages is a separate
question; that could just be a feature request on a warehouse.

What do you mean by a "spam package"? I guess it might be covered
under this section: https://www.python.org/dev/peps/pep-0541/#invalid-projects

-n

--
Nathaniel J. Smith -- https://vorpus.org

From lele at metapensiero.it Sat Feb 17 12:48:54 2018
From: lele at metapensiero.it (Lele Gaifax)
Date: Sat, 17 Feb 2018 18:48:54 +0100
Subject: [Distutils] Invalid Packages
References: 
Message-ID: <87efljik1l.fsf@metapensiero.it>

Nathaniel Smith writes:

> What do you mean by a "spam package"? I guess it might be covered
> under this section:
> https://www.python.org/dev/peps/pep-0541/#invalid-projects
>
> -n

Today lots of packages like the following appeared on PyPI:

https://pypi.python.org/pypi/Kim-Kardashian-Hollywood-Hack-Cheats-tars-Cash-Energy-Genearator-Online-2018/1.1.2

Sooner or later we should find a solution, otherwise the index will become a
rubbish receptacle.

ciao, lele.
--
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele at metapensiero.it | -- Fortunato Depero, 1929.
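The crowdsourced or automated flagging being asked for could start from something as simple as a keyword heuristic over project names. The marker words and threshold below are invented for illustration — this is not how PyPI actually classifies projects:

```python
# Purely illustrative heuristic (not any real PyPI mechanism): flag a
# project name that combines several spam-typical words, like the
# "...Hack-Cheats...Online-2018" example quoted above, for human review.
SPAM_MARKERS = {"hack", "cheats", "generator", "online", "free", "coins"}

def looks_like_spam(project_name: str) -> bool:
    """Flag a name when two or more marker words appear in it."""
    words = {w.lower() for w in project_name.split("-")}
    return len(words & SPAM_MARKERS) >= 2

assert looks_like_spam(
    "Kim-Kardashian-Hollywood-Hack-Cheats-tars-Cash-Energy-Genearator-Online-2018"
)
assert not looks_like_spam("requests")
```

A real system would of course need rate data, reporter reputation, and human review on top of anything this crude — which is why the thread treats it as a Warehouse feature question rather than a policy one.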
From ncoghlan at gmail.com Sun Feb 18 03:06:21 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 18 Feb 2018 18:06:21 +1000 Subject: [Distutils] Invalid Packages In-Reply-To: <87efljik1l.fsf@metapensiero.it> References: <87efljik1l.fsf@metapensiero.it> Message-ID: On 18 February 2018 at 03:48, Lele Gaifax wrote: > Nathaniel Smith writes: > >> What do you mean by a "spam package"? I guess it might be covered >> under this section: >> https://www.python.org/dev/peps/pep-0541/#invalid-projects >> >> -n > > Today lots of packages like the following appeared on PyPI: > > https://pypi.python.org/pypi/Kim-Kardashian-Hollywood-Hack-Cheats-tars-Cash-Energy-Genearator-Online-2018/1.1.2 > > Sooner or later we should find a solution, otherwise the index will become a > rubbish receptacle. The incident report (and response status updates) for the current spam attack can be found here: https://status.python.org/incidents/mgjw1g5yjy5j While we have some ideas for tools and techniques to help crowdsource discovery of problematic packages (e.g. https://github.com/pypa/warehouse/issues/2268), that's a design & implementation question for PyPI as a service, rather than something that needs to be captured in a PSF policy document (and PEP 541 is the latter, hence the slightly modified approval process that involves the PSF more explicitly). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From di at di.codes Sun Feb 18 17:04:15 2018 From: di at di.codes (Dustin Ingram) Date: Sun, 18 Feb 2018 16:04:15 -0600 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: I've updated the PEP to include the message body as Description. https://www.python.org/dev/peps/pep-0566/ I don't think there are any other outstanding issues, so this should be ready for Daniel's review. Thanks, D. 
On Thu, Feb 15, 2018 at 2:09 PM, Trishank Kuppusamy wrote: > On Thu, Feb 15, 2018 at 2:19 PM, Daniel Holth wrote: >> >> >> Hashes do not belong in this PEP, which is intended to do just a little >> more than document the status quo. The document does provide for future >> enhancements to the spec without using the PEP process. >> >> Personally I am not a fan of putting concrete requirements or hashes of >> specific archives at this level. > > > Fair enough, just throwing the idea out there! > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From ncoghlan at gmail.com Mon Feb 19 08:47:03 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Feb 2018 23:47:03 +1000 Subject: [Distutils] Invalid Packages In-Reply-To: References: <87efljik1l.fsf@metapensiero.it> Message-ID: On 18 February 2018 at 18:06, Nick Coghlan wrote: > On 18 February 2018 at 03:48, Lele Gaifax wrote: >> Nathaniel Smith writes: >> >>> What do you mean by a "spam package"? I guess it might be covered >>> under this section: >>> https://www.python.org/dev/peps/pep-0541/#invalid-projects >>> >>> -n >> >> Today lots of packages like the following appeared on PyPI: >> >> https://pypi.python.org/pypi/Kim-Kardashian-Hollywood-Hack-Cheats-tars-Cash-Energy-Genearator-Online-2018/1.1.2 >> >> Sooner or later we should find a solution, otherwise the index will become a >> rubbish receptacle. > > The incident report (and response status updates) for the current spam > attack can be found here: > https://status.python.org/incidents/mgjw1g5yjy5j While this is still the right link to monitor for updates on this particular incident, folks interested in PyPI's spam handling in general may want to subscribe to https://github.com/pypa/warehouse/issues/2982 Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 21 06:25:38 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Feb 2018 21:25:38 +1000 Subject: [Distutils] PEPs 426 & 459 (metadata 2.0) have been Withdrawn Message-ID: With PEP 566 (metadata 2.1) nearing final review and acceptance, I've updated the PEPs repo to officially withdraw PEPs 426 and 459 in favour of PEP 566: https://github.com/python/peps/commit/0977d33b02920d4619c024b64e35a693220cc3cf I also reverted any mentions of metadata 3.0 in PEP 426: that's now back to referring solely to the never-adopted metadata 2.0. So while we may still revisit some of the specific ideas in 426 and 459 in future PEPs, any such proposals will share PEP 566's focus on ease of implementation and adoption. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From sh at changeset.nyc Wed Feb 21 16:35:27 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Wed, 21 Feb 2018 16:35:27 -0500 Subject: [Distutils] Fwd: Warehouse: essential maintainer features & next steps In-Reply-To: <8d8d0384-50db-cd95-cbd2-5ce7cd3d3cfc@changeset.nyc> References: <8d8d0384-50db-cd95-cbd2-5ce7cd3d3cfc@changeset.nyc> Message-ID: <59bb1f84-cf5a-c5f1-9f9a-09583a7f36e5@changeset.nyc> Forwarding from pypa-dev. -------- Forwarded Message -------- Subject: Warehouse: essential maintainer features & next steps Date: Wed, 21 Feb 2018 16:29:31 -0500 From: Sumana Harihareswara To: pypa-dev The big Warehouse news: we're now at the Maintainer Minimum Viable Product milestone! To quote our roadmap[0]: > give package maintainers a solid chance to try out Warehouse and report critical bugs early So we've started asking some package maintainers to test pypi.org, and probably later this week we'll broadcast that announcement and request more widely. 
Depending on the bugs we find as we reach out to project maintainers, and on some infrastructure work, we may hit Milestone 2 next week, which means we'd reach out to a lot of non-package-maintainer users, and start redirecting a portion of `pip` traffic to Warehouse. More on that in our weekly meeting notes.[1] Our team improved or added email management, account management including deletion, better password management and email confirmation of changed passwords to Warehouse last week.[2] We also continued to improve developer documentation[3] and API docs[4]. And we continued our cabotage work[5] and worked on some further improvements to Twine documentation.[6] I also want to highlight some work that Ernest W. Durbin III and Dustin Ingram have done on their own time, as volunteers, that help PyPI. Dustin's continuing work[7] on PEP 566[8] moves us closer to Markdown support for README files[9]. And Ernest put a BUNCH of time into spam-fighting on PyPI this past weekend. Thank you both. Thanks to Volcyy, waseem18, alanbato, zooba, alex, and HndrkMkt for their pull requests which we merged in the last week![10] In the past month, Warehouse has merged 72 pull requests from 11 distinct authors (excluding pyup-bot), and has closed 63 issues (and opened only 26 new ones).[11][12] We have 3 remaining issues between us and the next milestone (the End User MVP), and then ten more issues till we widely publicize the beta.[13] So, we're chugging along. What you can do: You can help improve Warehouse; we have seven open "good first contribution" issues[14] and a guide to getting started[15]. Ernest wants to help you dive in, and to give you stickers, and has 30-minute 1:1 slots available.[16] Please watch your email for a "hey please help us test" email to this very mailing list. 
Please file general packaging and distribution confusions, peeves, and suggestions in the packaging-problems issue repo.[17] Thanks to Mozilla's Open Source Support grant for funding this PyPI & Warehouse work![18][19] [0] https://wiki.python.org/psf/WarehouseRoadmap [1] https://wiki.python.org/psf/PackagingWG/2018-02-20-Warehouse [2] https://github.com/pypa/warehouse/milestone/8?closed=1 [3] https://warehouse.readthedocs.io/application/ [4] https://warehouse.readthedocs.io/api-reference/ [5] https://github.com/cabotage/cabotage-app/commits/master [6] https://github.com/pypa/twine/pull/297 [7] https://mail.python.org/pipermail/distutils-sig/2018-February/031997.html [8] https://www.python.org/dev/peps/pep-0566/ [9] https://github.com/pypa/warehouse/issues/869 [10] https://github.com/pypa/warehouse/pulls?utf8=%E2%9C%93&q=2948+2971+2975+2968+2984+2922+2985+2919+2917 [11] https://github.com/pypa/warehouse/pulse/monthly [12] https://github.com/pypa/warehouse/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Amerged+-author%3Apyup-bot+updated%3A%3E%3D2018-01-20+sort%3Aupdated-asc+ [13] https://github.com/pypa/warehouse/milestones [14] https://github.com/pypa/warehouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 [15] https://warehouse.readthedocs.io/development/getting-started/ [16] https://twitter.com/EWDurbin/status/955415184339849217 [17] https://github.com/pypa/packaging-problems [18] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html [19] https://blog.mozilla.org/blog/2018/01/23/moss-q4-supporting-python-ecosystem/ -- Sumana Harihareswara Changeset Consulting https://changeset.nyc From dholth at gmail.com Fri Feb 23 19:36:02 2018 From: dholth at gmail.com (Daniel Holth) Date: Sat, 24 Feb 2018 00:36:02 +0000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: I accept PEP 566. Thank you for doing this work. 
Daniel On Sun, Feb 18, 2018 at 5:05 PM Dustin Ingram wrote: > I've updated the PEP to include the message body as Description. > > https://www.python.org/dev/peps/pep-0566/ > > I don't think there are any other outstanding issues, so this should > be ready for Daniel's review. > > Thanks, > D. > > On Thu, Feb 15, 2018 at 2:09 PM, Trishank Kuppusamy > wrote: > > On Thu, Feb 15, 2018 at 2:19 PM, Daniel Holth wrote: > >> > >> > >> Hashes do not belong in this PEP, which is intended to do just a little > >> more than document the status quo. The document does provide for future > >> enhancements to the spec without using the PEP process. > >> > >> Personally I am not a fan of putting concrete requirements or hashes of > >> specific archives at this level. > > > > > > Fair enough, just throwing the idea out there! > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Feb 23 22:42:58 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 24 Feb 2018 13:42:58 +1000 Subject: [Distutils] PEP 566 - Package metadata version 2.1 In-Reply-To: References: <1518710942.986270.1272015760.6FC8593E@webmail.messagingengine.com> Message-ID: On 24 February 2018 at 10:36, Daniel Holth wrote: > I accept PEP 566. Thank you for doing this work. > Huzzah! I've recorded the acceptance in the PEP: https://github.com/python/peps/commit/c83974875dcf14f8ef798a5893438a31b7f6cf4e I've also reviewed the open PR to check what still needs to be added to bring it up to date with the accepted version of the PEP: https://github.com/pypa/python-packaging-user-guide/pull/412#issuecomment-368196414 Cheers, Nick. 
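The body-as-description rule that the accepted PEP 566 codifies — headers parsed as an email message, with the message body, if present, stored under a ``description`` key when converting to JSON — can be sketched with the stdlib ``email`` parser. This is a simplified sketch that ignores multiple-use fields:

```python
# Sketch of the PEP 566 conversion rule discussed in this thread: keys
# are lowercased with dashes replaced by underscores, and a message
# body (the long description) becomes the "description" key. Parsing
# itself is just the stdlib email module; multiple-use fields and the
# Keywords split are omitted for brevity.
from email.parser import Parser

METADATA = """\
Metadata-Version: 2.1
Name: example-dist
Version: 1.0

This is the long description,
given in the message body.
"""

def metadata_to_dict(text):
    msg = Parser().parsestr(text)
    result = {k.lower().replace("-", "_"): v for k, v in msg.items()}
    body = msg.get_payload()
    if body:
        result["description"] = body
    return result

meta = metadata_to_dict(METADATA)
assert meta["metadata_version"] == "2.1"
assert meta["description"].startswith("This is the long description")
```

Note that, per the thread, no indentation or other special formatting is needed in the body — a blank line after the headers is enough.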
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peter at mybetterlife.co.uk Sun Feb 25 11:49:22 2018
From: peter at mybetterlife.co.uk (peter at mybetterlife.co.uk)
Date: Sun, 25 Feb 2018 16:49:22 -0000
Subject: [Distutils] Python Install
Message-ID: <02a501d3ae58$99a8adc0$ccfa0940$@mybetterlife.co.uk>

I am new to Python but would like to install it to play with some Fast Artificial Neural Network Library routines.

However, I had a v2.7 installed but could not access pip. I uninstalled 2.7 and then installed the latest download, which I understand should include pip.

However, nothing appears to work.

Just need some pointers to get to first base.

Hope you can help.

The following is a dialogue from the Python terminal:

Python 3.7.0a4 (v3.7.0a4:07c9d85, Jan 9 2018, 07:07:02) [MSC v.1900 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> pip install wheel
SyntaxError: invalid syntax
>>>
=============================== RESTART: Shell ===============================
>>> pip
Traceback (most recent call last):
  File "", line 1, in 
    pip
NameError: name 'pip' is not defined
>>> python -m pip install SomePackage
SyntaxError: invalid syntax
>>>
>>> python -m ensurepip --default-pip
SyntaxError: invalid syntax
>>>

Regards
Peter H. Williams

Try BULB for great GREEN energy saving deals ~ a real alternative to the BIG six
113 Skelmorlie Castle Road
Skelmorlie
Ayrshire
PA17 5AL
TEL:- 01475 529946
Mobile:- 07773 348117
Skype:- peterhwuk

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From prometheus235 at gmail.com Sun Feb 25 13:46:18 2018 From: prometheus235 at gmail.com (Nick Timkovich) Date: Sun, 25 Feb 2018 12:46:18 -0600 Subject: [Distutils] Python Install In-Reply-To: <02a501d3ae58$99a8adc0$ccfa0940$@mybetterlife.co.uk> References: <02a501d3ae58$99a8adc0$ccfa0940$@mybetterlife.co.uk> Message-ID: You need to run pip from a normal shell, e.g. bash/cmd.exe/PowerShell, not the Python REPL (read-eval print loop) "shell". On Sun, Feb 25, 2018 at 10:49 AM, wrote: > I am new to Python but would like to install to play with some Fast > Artificial Neural Network Library routines. > > However I ha a V.7 installed but could not access pip. I uninstalled 2.7 > an then installed latest download which I understand should include pip. > > However nothing appears to work. > > Just need some poineters to get to first base. > > Hope you can help. > > > > The following is dialogue from Python Terminal > > > > Python 3.7.0a4 (v3.7.0a4:07c9d85, Jan 9 2018, 07:07:02) [MSC v.1900 64 > bit (AMD64)] on win32 > > Type "copyright", "credits" or "license()" for more information. > > >>> pip install wheel > > SyntaxError: invalid syntax > > >>> > > =============================== RESTART: Shell > =============================== > > >>> pip > > Traceback (most recent call last): > > File "", line 1, in > > pip > > NameError: name 'pip' is not defined > > >>> python -m pip install SomePackage > > SyntaxError: invalid syntax > > >>> > > >>> python -m ensurepip --default-pip > > SyntaxError: invalid syntax > > >>> > > > > > > *Regards* > > *Peter H. 
Williams* > > > > *Try **BULB ** for great **GREEN > ** energy saving deals ~ a real > alternative to the BIG six* > > *113 Skelmorlie Castle Road* > > *Skelmorlie* > > *Ayrshire* > > *PA17 5AL* > > *TEL:- 01475 529946* > > *Mobile:- 07773 348117* > > *Skype:- peterhwuk* > > > > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drsalists at gmail.com Sun Feb 25 21:14:30 2018 From: drsalists at gmail.com (Dan Stromberg) Date: Sun, 25 Feb 2018 18:14:30 -0800 Subject: [Distutils] Python Install In-Reply-To: References: <02a501d3ae58$99a8adc0$ccfa0940$@mybetterlife.co.uk> Message-ID: Also, you might be better off with 3.6. 3.7 is still a beta. You have an alpha of 3.7, which is less trustworthy than a beta. On Sun, Feb 25, 2018 at 10:46 AM, Nick Timkovich wrote: > You need to run pip from a normal shell, e.g. bash/cmd.exe/PowerShell, not > the Python REPL (read-eval print loop) "shell". > > On Sun, Feb 25, 2018 at 10:49 AM, wrote: >> >> I am new to Python but would like to install to play with some Fast >> Artificial Neural Network Library routines. >> >> However I ha a V.7 installed but could not access pip. I uninstalled 2.7 >> an then installed latest download which I understand should include pip. >> >> However nothing appears to work. >> >> Just need some poineters to get to first base. >> >> Hope you can help. >> >> >> >> The following is dialogue from Python Terminal >> >> >> >> Python 3.7.0a4 (v3.7.0a4:07c9d85, Jan 9 2018, 07:07:02) [MSC v.1900 64 >> bit (AMD64)] on win32 >> >> Type "copyright", "credits" or "license()" for more information. 
>> >> >>> pip install wheel >> >> SyntaxError: invalid syntax >> >> >>> >> >> =============================== RESTART: Shell >> =============================== >> >> >>> pip >> >> Traceback (most recent call last): >> >> File "", line 1, in >> >> pip >> >> NameError: name 'pip' is not defined >> >> >>> python -m pip install SomePackage >> >> SyntaxError: invalid syntax >> >> >>> >> >> >>> python -m ensurepip --default-pip >> >> SyntaxError: invalid syntax >> >> >>> >> >> >> >> >> >> Regards >> >> Peter H. Williams >> >> >> >> Try BULB for great GREEN energy saving deals ~ a real alternative to the >> BIG six >> >> 113 Skelmorlie Castle Road >> >> Skelmorlie >> >> Ayrshire >> >> PA17 5AL >> >> TEL:- 01475 529946 >> >> Mobile:- 07773 348117 >> >> Skype:- peterhwuk >> >> >> >> >> >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From sh at changeset.nyc Mon Feb 26 23:07:37 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Mon, 26 Feb 2018 23:07:37 -0500 Subject: [Distutils] Package maintainers, please help test PyPI Message-ID: <695e7861-c9bb-5771-8896-ccdcebe3a741@changeset.nyc> The Warehouse team has improved the new PyPI, available for you to check out at https://pypi.org/ , to the point where we would love for package maintainers to try it out, test it, and give us bug reports. https://wiki.python.org/psf/WarehousePackageMaintainerTesting has guidelines, things to test (like user registration and project removal), and how to contact Warehouse developers. 
We're hosting four livechat hours this week where Warehouse maintainers will be in IRC, in #pypa-dev on Freenode https://webchat.freenode.net/?channels=#pypa-dev , and specifically available to talk about problems you run into, or about how to hack on Warehouse. Tuesday Feb 27th: 1700 UTC / noon-1pm EST Tuesday Feb 27th: 2300 UTC / 6pm-7pm EST Thursday March 1st: 1700 UTC / noon-1pm EST Thursday March 1st: 2300 UTC / 6pm-7pm EST This isn't the big public beta yet, where we really push the message widely to get non-package-maintainer users to test the site. Since Warehouse must be a reimplementation of the existing PyPI, please focus initially on any differences, missing features, or incorrect behavior that pypi.org exhibits that affect your workflows for account management and package maintainership. We'll be soliciting feedback on other concerns soon! Feedback on user experience, accessibility, and overall ease of use are welcome. Thanks, -- Sumana Harihareswara Warehouse project manager Changeset Consulting https://changeset.nyc From sh at changeset.nyc Tue Feb 27 15:53:13 2018 From: sh at changeset.nyc (Sumana Harihareswara) Date: Tue, 27 Feb 2018 15:53:13 -0500 Subject: [Distutils] Warehouse update: a week of testing, polish, & infrastructure Message-ID: <51e1a47e-a10c-235f-9be3-ae1883fb200d@changeset.nyc> This week we're publicizing pypi.org to package maintainers and asking them/you to test it, and to spread the word to other package maintainers you know.[1][2] And we're probably within two or three weeks of the big public beta, given the existing open Warehouse issues in the next milestone,[3] the new issues we'll open based on user testing this week, and the infrastructure work in play. 
You can learn more in our weekly meeting notes.[4]

We've been doing lots of performance and memory consumption work as we harden our infrastructure and codebase to make them production-ready, and lots of polish as we get more testers and their feedback. Examples:

* reduce columns pulled for main project view in admin[5]
* Guard against all tuples in metadata upload[6]
* Add table style to project description, clean up titles[7]
* Add SQLAlchemy error fix to troubleshooting docs[8]
* Update PyPI migration info in packaging guide[9]

And thanks to volunteers:

* alex for "Don't install g++ from a PPA in travis"[10]
* wasim for "Added help entry for File already exists error"[11]
* HndrkMkt for "Redirect authenticated user from reset pw pages to index"[12]

Our infrastructure work included more improvements to cabotage[13] -- see Ernest's demo[14]:

> It's super rough... but here's first light of an end to end deployment on this thing I been building. still plenty of work to do, but already chock full of automated end-to-end TLS, secure storage of secrets with Vault, a bucket of Kubernetes, enough docker to make your head spin...

And our infrastructure work included a restart of Nicole's user testing, both with the broad publicity to package maintainers and with Nicole leading folks through one-on-one exercises and data-gathering sessions. More about Nicole's current design process is in her blog update.[15]

Some issue discussion this week that you might find relevant:

* Which 3rd party services should we contact about the new pypi.org domain?[16]
* APIs/feeds issues got a bit more sorted[17] -- and if there's anything you need in our API docs[18] that isn't in there, please let us know.
* What should we show as the default search result on https://pypi.org/search/ ?[19] * What do new developers need in architecture documentation?[20] * Should we rename the "/legacy" URL?[21] As usual, you can get an overview of Warehouse development at our GitHub rollout board[22] And if you want to help out, this week, please do test the site, come to our IRC livechat hours, and spread the word. Thanks to Mozilla for their support for the PyPI & Warehouse work, and thanks to the Python Software Foundation for coordinating it![23][24] [1] https://pyfound.blogspot.com/2018/02/python-package-maintainers-help-test.html [2] https://wiki.python.org/psf/WarehousePackageMaintainerTesting#IRC_livechat_hours [3] https://github.com/pypa/warehouse/milestone/10 [4] https://wiki.python.org/psf/PackagingWG/2018-02-26-Warehouse [5] https://github.com/pypa/warehouse/pull/3043 [6] https://github.com/pypa/warehouse/pull/3049 [7] https://github.com/pypa/warehouse/pull/3040 [8] https://github.com/pypa/warehouse/pull/3048 [9] https://github.com/pypa/python-packaging-user-guide/pull/439 [10] https://github.com/pypa/warehouse/pull/3037 [11] https://github.com/pypa/warehouse/pull/2997 [12] https://github.com/pypa/warehouse/pull/2988 [13] https://github.com/cabotage/cabotage-app [14] https://twitter.com/EWDurbin/status/968315460101042176 [15] http://whoisnicoleharris.com/2015/12/31/designing-warehouse-an-overview.html#update-27th-feb-2018 [16] https://github.com/pypa/warehouse/issues/2935 [17] https://github.com/pypa/warehouse/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3AAPIs%2Ffeeds [18] https://warehouse.readthedocs.io/api-reference/ [19] https://github.com/pypa/warehouse/issues/3062 [20] https://github.com/pypa/warehouse/issues/2794 [21] https://github.com/pypa/warehouse/issues/2285 [22] https://github.com/pypa/warehouse/projects/1 [23] https://pyfound.blogspot.com/2017/11/the-psf-awarded-moss-grant-pypi.html [24] 
https://blog.mozilla.org/blog/2018/01/23/moss-q4-supporting-python-ecosystem/

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc

From sh at changeset.nyc Wed Feb 28 08:52:34 2018
From: sh at changeset.nyc (Sumana Harihareswara)
Date: Wed, 28 Feb 2018 08:52:34 -0500
Subject: [Distutils] merger versus separation of publishing and download tools
Message-ID: 

I figured folks here would want to know that there's an ongoing discussion of the distinction between `pip` and `twine`, and related topics (should publishing tools be separate from download/consumption tools?), in:

https://github.com/pypa/packaging-problems/issues/60

I don't want to break up the conversation into different threads, one here and one on GitHub, so please consider this a pointer rather than a fork of the conversation. :)

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc

From porton at narod.ru Wed Feb 28 16:30:02 2018
From: porton at narod.ru (Victor Porton)
Date: Wed, 28 Feb 2018 23:30:02 +0200
Subject: [Distutils] /etc files
Message-ID: <1519853402.2276.16.camel@narod.ru>

How to deal with the files to be placed into /etc or a similar dir?

In the previous email I forgot to say I use setuptools, not distutils.

From porton at narod.ru Wed Feb 28 16:29:12 2018
From: porton at narod.ru (Victor Porton)
Date: Wed, 28 Feb 2018 23:29:12 +0200
Subject: [Distutils] /etc files
Message-ID: <1519853352.2276.15.camel@narod.ru>

How to deal with files to be placed into /etc or a similar dir?
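The question above usually comes down to the ``data_files`` argument that setuptools inherits from distutils: relative target paths are installed under ``sys.prefix``, while absolute paths such as ``/etc`` are written there literally (and generally need root, which is why many projects ship the config file inside the package and leave ``/etc`` to an OS package instead). The project and file names below are hypothetical; this is a sketch of that mechanism, not an answer taken from this thread:

```python
# Hypothetical setup.py sketch: "example-service" and
# "conf/example.conf" are invented for illustration. A relative target
# like "etc/example" lands under <sys.prefix>/etc/example/; an absolute
# "/etc/example" would be written literally to /etc (usually requiring
# root, and making the resulting wheel non-relocatable).
from setuptools import setup

setup(
    name="example-service",
    version="0.1",
    py_modules=["example_service"],
    data_files=[
        ("etc/example", ["conf/example.conf"]),
    ],
)
```

Because of the root-permission and relocatability issues, the usual advice is ``package_data`` plus a post-install or first-run step that copies defaults into ``/etc``, or letting a distro package own the ``/etc`` entry.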