From njs at pobox.com  Mon Apr 11 04:50:48 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 11 Apr 2016 01:50:48 -0700
Subject: [Wheel-builders] [RFC] Proposal for packaging native libraries into Python wheels
Message-ID: 

Hi all,

Here's a first draft of a thing for feedback. In addition to pasting below, I've also posted it as a pull request on the wheel-builders repo, so that there will be some history + a place for any nitpicky line-by-line comments:

    https://github.com/pypa/wheel-builders/pull/2

-----

Proposal: Native libraries in wheels
====================================

(very draft! comments welcome!)

Motivation
----------

Wheels and the ecosystem around them (PyPI, pip, etc.) provide a standard cross-platform way for Python projects to distribute pre-built packages, including pre-built binary extensions. But currently it can be rather awkward to distribute binary extension modules when those modules depend on external native libraries -- for example, ``numpy`` and ``scipy`` depend on ``blas``, and ``cryptography`` and ``psycopg2`` depend on ``openssl``. Currently, there are two main approaches in use:

Option 1 is to include ('vendor') the native libraries in the Python package wheels, so that e.g. ``cryptography`` and ``psycopg2`` each have their own copy of ``openssl``. This creates bloated downloads and memory usage (e.g. BLAS libraries can be very large, so shipping multiple copies quickly adds up), and complicates the process of building, updating, and responding to security problems (every time a new ``openssl`` bug comes out, the distributors of ``cryptography`` and ``psycopg2`` are both separately responsible for rerolling their wheels). OTOH, it's the only approach that currently works consistently and portably across platforms.

Option 2 is to assume/hope that the library will somehow be provided externally by the system. This is mostly only done with sdists on Linux and sometimes OS X, and works about as well as you'd expect, i.e., sorta but not very well, and almost not at all on Windows.

We can do better.

This proposal
-------------

Here we propose the ``pynativelib`` project: a set of conventions and tools for packaging native libraries and native executables into Python wheels, so that we can have wheels like ``openssl-py2.py3-none-win32.whl`` that can be installed/upgraded/uninstalled via ``pip`` and used by projects like ``cryptography`` and ``psycopg2``; this allows them to share a single copy of openssl, and when a security fix comes out, then only the openssl wheel needs to be updated to fix all the dependent packages.

(Rationale for the name: as we'll see below, this name will end up in filenames and metadata attached to native binaries that might become dissociated from the Python context; including "py" in the name gives a hint as to where these files come from. And "nativelib" because we aim to support native libraries, in any language.)

Principles
----------

The pynativelib design should be:

1) Portable: support all platforms that distribute wheels via PyPI (currently Windows, OS X, Linux), and provide consistent behavior across these platforms (e.g. encapsulate differences in how shared libraries work across platforms so that users don't have to learn the grim details).

2) Pythonic: these packages should be located using familiar sys.path-based lookup rules, so e.g. installing these wheels into a virtualenv or anywhere else on PYTHONPATH should Just Work.
3) Play well with others: we expect people will use a mixture of pynativelib wheels, vendored libraries, and external system-provided libraries, possibly in the same package and certainly in the same environment. This should "just work". Pynativelibs shouldn't need any special case treatment from the broader packaging ecosystem; they should just work with the existing system, and without requiring coordination between anyone except the pynativelib package builder and the downstream package that wants to use it. Also, we would like to play nicely with systems like Fedora or Debian that will want to package Python libraries and have them depend on external system libraries, without requiring lots of per-package work by distributors.

The rest of this document outlines my best attempt to meet all of these requirements.

Example
-------

To orient ourselves, let's start with a quick look at a "typical" package ``pypkg``, which contains an extension module ``pypkg._ext`` that is linked to the pynativelib ``libssl`` library.

First, in ``pypkg/__init__.py``::

    # This must be done before importing any code that is linked
    # against the pynativelib openssl libraries. It does whatever
    # platform-specific magic is needed to make the libraries
    # accessible; pypkg doesn't know or care about the details.
    import pynativelib_openssl
    pynativelib_openssl.enable()

    # Now we can import ._ext
    from ._ext import ...

And in ``pypkg``'s ``setup.py``::

    from setuptools import setup, Extension
    import pynativelib_openssl as pnlo

    setup(...
          # This doesn't actually work because of the usual problems with
          # setup_requires, but let's assume that some day it will
          # (see PEP 516/517)
          setup_requires=["pynativelib_openssl >= ..."],
          # ask pynativelib_openssl to tell us which versions are
          # ABI-compatible with the one we are building against
          install_requires=pnlo.install_requires(),
          ext_modules = [
              Extension("pypkg._ext", ["pypkg/_ext.c"],
                        # Ask pynativelib_openssl to tell us how to link to it
                        **pnlo.distutils_extension_kwargs()),
          ],
    )

That's it.

Now, to make this work, let's look at ``pynativelib_openssl``. It might be laid out like::

    pynativelib_openssl/
        __init__.py
        _libs/
            libpynativelib_openssl__ssl.so.1.0.2
        _include/
            openssl.h

(or replace ``libpynativelib_openssl__ssl.so.1.0.2`` with ``libpynativelib_openssl__ssl.1.0.2.dylib`` on OS X and ``pynativelib_openssl__ssl-1.0.2.dll`` on Windows, and on Windows we'll also need a ``pynativelib_openssl__ssl-1.0.2.lib`` import library. The general rule is that whatever the library would normally be called, we rename it to ``<pynativelib package name>__<original library name>``.)

**Rationale for this naming scheme:** (1) It allows tools like ``auditwheel`` to not only recognize that an external dependency of a wheel is a pynativelib library, but also to extract the name of the pynativelib package and check that the wheel has an ``Install-Requires`` entry for it. (2) It is possible that there will be different pynativelib packages that contain different versions of the same library (e.g., on Windows there are multiple C++ ABIs); encoding the pynativelib package name into the library name ensures that these will have distinct names. And the use of a double underscore avoids ambiguity in case we ever end up with a package where the ``<package name>`` and ``<library name>`` portions both contain underscores, on the assumption that no-one will ever use double-underscores on purpose in package names.
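To make the renaming rule concrete, here is a minimal sketch of a helper that computes the renamed file name on each platform; the function name and exact scheme are hypothetical illustrations, not part of the proposal, and a real build script might handle versions differently::

    # Hypothetical sketch only -- illustrates the <package>__<library>
    # renaming convention described above.
    import sys

    def mangled_library_filename(pynativelib_pkg, lib, version):
        # e.g. ("pynativelib_openssl", "ssl", "1.0.2") gives
        # "libpynativelib_openssl__ssl.so.1.0.2" on Linux
        base = "{}__{}".format(pynativelib_pkg, lib)
        if sys.platform.startswith("linux"):
            return "lib{}.so.{}".format(base, version)
        elif sys.platform == "darwin":
            return "lib{}.{}.dylib".format(base, version)
        elif sys.platform == "win32":
            return "{}-{}.dll".format(base, version)
        else:
            raise NotImplementedError(sys.platform)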
And ``pynativelib_openssl/__init__.py`` contains code that looks something like::

    # openssl version 1.0.2 + our release 0
    __version__ = "1.0.2.0"

    import os.path

    # Find our directories
    _root = os.path.abspath(os.path.dirname(__file__))
    _libs = os.path.join(_root, "_libs")
    _include = os.path.join(_root, "_include")

    def enable():
        # platform specific code for adding _libs to search path
        # basically prepending the _libs dir to
        #   LD_LIBRARY_PATH on Linux
        #   DYLD_LIBRARY_PATH on OS X
        #   PATH on Windows
        ...

    def install_requires():
        # For a library that maintains backwards (but not forward)
        # ABI compatibility:
        return ["pynativelib_openssl >= {}".format(__version__)]
        # In real life openssl's API/ABI is a mess, so we might want to
        # have different parallel-installable packages for different
        # openssl versions or other clever things like that.
        # Either way users don't really need to care.

    def library_name():
        # adjust as necessary for the platform
        return "libpynativelib_openssl__ssl.so.1.0.2"

    # -- WARNING --
    # This is not actually how I'd suggest implementing this function
    # in real life, and there are some functions missing as well.
    # This is just to give the general idea:
    def distutils_extension_kwargs():
        return {"include_dirs": [_include],
                "library_dirs": [_libs],
                # adjust as necessary for platform
                "libraries": ["pynativelib_openssl__ssl"],
                }

Of course there are also lots of other possible variations here -- the person putting together the pynativelib package could use whatever conventions they prefer for what exactly to name the library and include directories, and how to handle version numbers in a consistent way cross-platform, etc. The crucial thing here is to settle on an interface between the pynativelib package and the downstream package that uses it.

Does this work?
---------------

Yes! Not only does it allow us to distribute packages that link against openssl and other libraries in a sensible way, but it fulfills our principles: it works on all supported platforms, the lookup happens via ``import`` so it follows standard Python rules, and it plays well with others in that pynativelib-installed libraries can coexist without interference with libraries coming from the external system, or vendored into other projects' wheels, or whatever.

The interaction of the first and last points is somewhat subtle, though: why does it work on all these different platforms? Their linkers work differently in *lots* of ways, but they all (1) provide some way to alter the library search order in a process-wide fashion that also gets passed to child processes spawned with ``os.execv``, and (2) use the library name as a crucial part of their symbol resolution process. On Windows and OS X this is straightforward: they resolve symbols using a "two-level namespace", i.e., binaries don't just say "I need ``SSL_read``", they say "I need ``SSL_read`` from ``libssl``". So if one binary says that, and another says "I need ``SSL_read`` from ``pynativelib_openssl__libssl``", then there's no danger of the two ``SSL_read``'s getting mixed up. On Linux, it's a bit more complicated: on Linux, binaries do just say "I need ``SSL_read``", and then there's a separate list of libraries that get searched.
But, thanks to the magic of ELF scopes (don't ask), every Python extension module gets its own independent list of libraries to be searched -- so if extension module A puts ``pynativelib_openssl__libssl`` into its list, and extension module B puts ``libssl`` into its list, then they'll both end up finding the version of ``SSL_read`` that they wanted.

These platforms' dynamic linkers also provide all kinds of other neat features: ``RPATH`` variants, SxS manifests, etc., etc. But it turns out that these are all unnecessary for our purposes, and are too platform-specific to be useful in any case, so we ignore them and stick to the basics: search paths and unique library names.

Other niceties
..............

**What if I want to link to two pynativelib libraries, and I'm using distutils/setuptools?** Then you should write some ugly thing like::

    from collections import defaultdict

    def combined_extension_kwargs(*pynativelib_objs):
        combined = defaultdict(list)
        for po in pynativelib_objs:
            for k, v in po.distutils_extension_kwargs().items():
                combined[k] += v
        return dict(combined)

    Extension(...,
              **combined_extension_kwargs(pynativelib_openssl, ...))

**Isn't that kinda ugly?** Yeah, because interfacing with distutils is just kinda intrinsically obnoxious -- the interface distutils gives you is over-complicated and yet not rich enough. This is the best I can come up with, and there are some more rationale details below, but really the long-term solution is to modify setuptools and whatever other build systems we care about so that you can just say "here's an object that implements the standard pynativelib interface, pls make that happen", like::

    Extension("pypkg._ext", ["pypkg/_ext.c"],
              pynativelibs=[pynativelib_openssl])

Possibly we can/should ship the ``combined_extension_kwargs`` function above in a helper library too as a stop-gap.

**What if I have two libraries that ship together?** Contrary to the oversimplified example sketched above, openssl actually ships two libraries: ``libssl`` and ``libcrypto``. We need a way to link to just one or the other or both. Solution: well, this is Python, so instead of restricting ourselves to thinking of the "pynativelib interface" as something that only top-level modules can provide, we should think of it as an interface that can be implemented by some arbitrary object, and a single package can provide multiple objects that implement the pynativelib interface. So e.g. the openssl package might decide to expose two objects ``pynativelib_openssl.libssl`` and ``pynativelib_openssl.libcrypto``, and then one could write::

    Extension("pypkg._ext", ["pypkg/_ext.c"],
              pynativelibs=[pynativelib_openssl.libssl])

or::

    Extension("pypkg._ext", ["pypkg/_ext.c"],
              pynativelibs=[pynativelib_openssl.libcrypto])

or, if you want to link to both::

    Extension("pypkg._ext", ["pypkg/_ext.c"],
              pynativelibs=[pynativelib_openssl.libcrypto,
                            pynativelib_openssl.libssl])

(Or if you're using a currently-existing version of setuptools that doesn't yet support ``pynativelibs=``, then you can translate this into the equivalent slightly-more-cumbersome code.)

Another case where this idea of pynativelib interface objects is useful is numpy, which can and should provide an implementation for projects that want to link to its C API, as ``numpy.pynativelib`` or something. (This would also nudge downstream packages to start properly tracking numpy's ABI versions in their ``install_requires``, which is something that everyone currently gets wrong...)
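For concreteness, here is a minimal sketch of what a package exposing multiple pynativelib interface objects (like the ``pynativelib_openssl.libssl`` / ``pynativelib_openssl.libcrypto`` example above) might look like internally; the class name and internal details are hypothetical and only illustrate the shape of the idea::

    # Hypothetical sketch of pynativelib_openssl/__init__.py exposing two
    # pynativelib interface objects; names and details are illustrative only.
    import os.path

    _root = os.path.abspath(os.path.dirname(__file__))
    _libs = os.path.join(_root, "_libs")
    _include = os.path.join(_root, "_include")

    class _PynativelibLib(object):
        def __init__(self, name):
            self._name = name  # e.g. "ssl" or "crypto"

        def enable(self):
            # add _libs to the platform's library search path (idempotent)
            ...

        def library_name(self):
            # adjust as necessary for the platform
            return "libpynativelib_openssl__{}.so.1.0.2".format(self._name)

        def distutils_extension_kwargs(self):
            return {"include_dirs": [_include],
                    "library_dirs": [_libs],
                    "libraries": ["pynativelib_openssl__" + self._name]}

    libssl = _PynativelibLib("ssl")
    libcrypto = _PynativelibLib("crypto")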
**What if I want to access a pynativelib library via ``dlopen``/``LoadLibrary``?** See the ``library_name`` interface below.

**What if my pynativelib library depends on another pynativelib library?** Just make sure that your ``enable`` function calls their ``enable`` function, and everything else should work.

**What about executables, like the ``openssl`` command-line tool?** We can handle this by stashing the executable itself in another hidden directory::

    pynativelib_openssl/
        <...all the stuff above...>
        _bins/
            openssl

and then install a Python dispatch script (using the same mechanisms as we'd use to ship any Python script with wheel), where the script looks like::

    #!python

    import os
    import sys

    import pynativelib_openssl

    pynativelib_openssl.enable()
    os.execv(pynativelib_openssl._openssl_bin_path, sys.argv)

Rejected alternatives
---------------------

**Rationale for having ``enable`` but not ``disable``:** since one of our goals is to avoid accidental collisions with other shared library distribution strategies, I was originally hoping to use something like a context manager approach: ``with pynativelib_openssl.enabled(): import ._ext``. The problem is that you can't effectively remove a library from the search path once it's been loaded on Windows or Linux (and maybe OS X too for all I know -- `the Linux behavior shocked me `_ so I no longer trust docs on this kind of thing). So ``disable`` is impossible to implement correctly and there's no point trying to pretend otherwise; the only viable strategy to avoid collisions is to give libraries unique names.

**Rationale for modifying ``PATH`` on Windows instead of doing something else:** SxS assemblies are `ruinously complex and yet fatally limited `_ -- in particular they can't be nested and it doesn't seem like they can load from arbitrary run-time specified directories. It's not clear if ``AddDllDirectory`` is available on all supported Windows versions. Preloading won't work for running executables (in fact, none of these options will). And so long as the only things we add to ``PATH`` are directories which contain nothing besides libraries, and these libraries all have carefully-chosen unique names, then modifying ``PATH`` should be totally safe.

Detailed specification
----------------------

An object implementing the *pynativelib interface* should provide the following functions as attributes:

Run-time interface:

- ``enable()``: Client libraries must call this before attempting to load any code that was linked against this pynativelib. Must be idempotent, i.e. multiple calls have the same effect as one call.

- ``library_name()``: Returns a string suitable for passing to ``dlopen`` or ``LoadLibrary``. You must call ``enable()`` before actually calling ``dlopen``, though. (This is necessary because the library might itself depend on other pynativelib libraries, and loading those will fail if ``enable`` has not been called. To remind you of this, ``library_name`` implementations should probably avoid returning a full path even when they could, so as to make code that fails to call ``enable`` fail early.)

(FIXME/QUESTION: are there any cases where it would make sense to combine two libraries into a single pynativelib interface object? I can't think of any, but if there are then this interface might be inadequate. Let's impose that as an invariant for now to keep things simple?
Alternatively, any package that has a weird special case can just implement and document a different interface -- I don't expect there will be much generic code calling ``library_name()`` on random pynativelib packages; instead it will mostly be specialized code calling ``library_name()`` on a specific package for use with ``ctypes`` or ``cffi``, so there shouldn't be much problem if a particular pynativelib object needs to deviate.)

Build-time interface:

The tricky thing about build time metadata is that we want to support different compilers simultaneously, and they use different command-line argument syntaxes. To address this problem, distutils `takes the approach `_ of defining an abstract Python-level interface to common operations (and some not-so-common operations), which then get translated into whatever command-line arguments are appropriate for the compiler at hand, plus providing the escape-valve ``extra_compile_args`` and ``extra_link_args``.

``pkg-config`` takes an interestingly different approach. It seems like in practice there are two practically-important styles of passing arguments to compilers: gcc and MSVC. Everyone else makes their command-line arguments match one or the other of these -- at least with regard to the basic arguments that you need to know in order to link against something::

   | Function     | GCC-style | MSVC-style   |
   |--------------+-----------+--------------|
   | Include path | -I...     | -I...        |
   | Define       | -DFOO=1   | -DFOO=1      |
   | Library path | -L...     | /libpath:... |
   | Link to foo  | -lfoo     | foo.lib      |

(MSVC docs: `compiler `_, `linker `_; note that while the canonical option character is ``/``, ``-`` is also accepted.)

So in a ``pkg-config`` file, you just write down all your compile arguments and linker arguments (they call them ``cflags`` and ``libs``) using GCC style, and ``pkg-config`` knows about ``-I`` and ``-L`` and ``-l``, and if you really want just those parts of the command line it can pull them out, and if you want MSVC-style syntax then you pass ``--msvc-syntax`` and it will translate ``-L`` and ``-l`` for you according to the above table. (``pkg-config`` also has some special casing for ``-rpath``, but ``-rpath`` is never useful for us, because we never know where the pynativelib library will be installed on the user filesystem. AFAICT these are the only cases that ``pkg-config`` thinks need special casing.)

What I like about the ``pkg-config`` approach is that as a user you basically just say "give me {GCC, MSVC} {compile, link} flags", and then the gory details are encapsulated. This is strictly more powerful than the distutils approach. Consider the case of a library where if you use it with GCC you need to pass ``-pthread`` as part of your link flags -- but with MSVC you don't want to pass this. In the distutils approach there's simply no way to do this. Therefore, we adopt a more ``pkg-config``-style approach, plus a convenience adaptor to the legacy distutils-style approach.

- ``compile_args(compiler_style)``, ``link_args(compiler_style)``: Mandatory argument ``compiler_style`` should be ``"gcc"`` or ``"msvc"`` (or potentially other styles later if it turns out to be relevant). Returns a list of strings like ``["-Ifoo", "-DFOO"]``.

  Note: Similarly to ``enable``, if successfully linking to pynativelib1 requires some arguments be added to also link to pynativelib2, then these functions should call their counterparts in pynativelib2 and merge them into the results.
- ``distutils_extension_kwargs()``: returns a dict of kwargs to pass to `the distutils ``Extension`` constructor `_, or raises an error if this is impossible (e.g. if there are necessary arguments that cannot be translated into the distutils interface). This should only be used as a workaround for build systems where we can't use the ``{compile,link}_args`` API.

``pkg-config`` also provides some options for pulling out particular parts of the args (e.g. ``--cflags-only-I`` + ``--cflags-only-other`` = ``--cflags``). I'm not sure why this is needed. We can add it later if it turns out to be necessary.

FIXME: in some cases we might not be able to convince some code to link against a specific library just by controlling the arguments passed to the compiler; for example, there's no argument you can pass to ``gfortran`` to tell it "please link against ``pynativelib_libgfortran__libgfortran`` instead of ``libgfortran``". Maybe we need some additional function in the pynativelib interface that the build system is required to call after it's finished building each binary, so that the pynativelib code gets a chance to do post-hoc patching of the binary? However, integrating this into a distutils-based build system would be non-trivial (and probably likewise for lots of other build systems)...

Tracking ABI compatibility
--------------------------

Most pynativelib packages will have multiple build variants, so we need to make sure that the right variant is found at install- and run-time. The most fundamental example of this problem is distinguishing between a pynativelib wheel containing Windows ``.dll`` files, and a pynativelib wheel containing Linux ``.so`` files. This basic problem can be solved directly by use of the wheel format's existing ABI tags. E.g., a typical pynativelib wheel will have a name like::

    $(PKG)-$(VERSION)-py2.py3-none-$(PLATFORM TAG).whl

where ``py2.py3`` indicates that it can be used with any version of Python, ``none`` indicates that it does not depend on any particular version of the Python C API, and the platform tag will be something like ``manylinux1_x86_64`` to indicate that this wheel contains binaries for Linux x86-64.

The first complication on top of this is that on Windows, there are multiple versions of the C runtime (CRT) with incompatible ABIs, and some libraries have a public API that is dependent on the CRT ABI (e.g. because they have public APIs that take or return stdio ``FILE*`` or file descriptors, or where they return ``malloc``'ed memory and expect their caller to call ``free``). In practice, each CPython release picks and sticks with a particular version of MSVC and its corresponding CRT. So, for pynativelib libraries where the choice of CRT is important, we can use a cute trick: by tagging the wheel with the CPython version, we also pick out the right CRT version::

    # MSVC 2008, 32-bit
    $(PKG)-$(VERSION)-cp26.cp27.cp32.pp2-none-win32.whl
    # MSVC 2010, 32-bit
    $(PKG)-$(VERSION)-cp33.cp34-none-win32.whl
    # MSVC 2015 or greater ("universal" CRT), 32-bit
    $(PKG)-$(VERSION)-cp35-none-win32.whl

    # MSVC 2008, 64-bit
    $(PKG)-$(VERSION)-cp26.cp27.cp32.pp2-none-win_amd64.whl
    # MSVC 2010, 64-bit
    $(PKG)-$(VERSION)-cp33.cp34-none-win_amd64.whl
    # MSVC 2015 or greater ("universal" CRT), 64-bit
    $(PKG)-$(VERSION)-cp35-none-win_amd64.whl

Notes:

1) ``pp2`` is "any version of PyPy 2"; for compatibility with CPython they also use `MSVC 2008 for Windows builds `_, and this seems unlikely to change.
PyPy 3 is not widely used and may well change compiler versions at some point, so we leave PyPy 3 out for now.

2) This works well for existing versions of CPython, but (as alluded to above) doesn't handle non-CPython builds as well, and it doesn't handle future CPython releases well either (when 3.6 is released we'll need to update our wheel tags). So in the long run we may want to come up with a better way of encoding this information (e.g., by adding a new ABI tag that directly indicates which CRT is in use, so that one could write ``py2.py3-msvc2008-win32`` or similar).

Then, there are all the other ways that ABIs can vary. This may be especially an issue for libraries that expose C++ APIs. E.g., on Windows, MSVC and gcc have incompatible C++ ABIs, so a C++ pynativelib package on Windows might want to provide two versions. (Or possibly even more, if one wants to support the different mingw-w64 ABI variants.) Or, on all platforms using gcc, APIs that use ``std::string`` or related objects will end up with different ABIs depending on whether they were built with g++ <5.1 or >=5.1 (`details `_). In these cases, it's plausible that a single Python environment might want multiple variants of these pynativelib packages installed simultaneously, and our job is to make sure that downstream libraries get run using the same variant of the pynativelib library as they were built against. Therefore, the solution in these cases will be to build multiple packages with different names, e.g. ``pynativelib_mycpplib_msvc`` versus ``pynativelib_mycpplib_mingww64``, so that they can be installed and imported separately.

Practical challenges / notes towards a todo list
------------------------------------------------

- We're going to want a support library (maybe just ``import pynativelib``) that handles common boilerplate. Probably the thing to do is to start implementing some pynativelib packages and then see what helper functions are useful, but I can guess already that we're going to want:

  - A generic function for adding a directory to the library search path that does the right thing on different platforms.

  - Some code to handle the annoyances around compiler/library paths. Probably we should follow ``pkg-config``'s lead, and have pynativelib package authors provide the GCC-style options, and then the support library can automatically translate these into the various other representations that we care about.

- Most library build systems won't have a flag that tells them to generate a library called ``libpynativelib_openssl__ssl`` instead of ``libssl`` or whatever. So we need a way to rename libraries. Strategy:

  - On Linux, .so files do know what their name is, and this is crucial, because if two libraries both think they have the same name then you hit `glibc bug #19884 `_. However, ``patchelf --set-soname`` lets you rename a shared library and avoid this problem.

  - On OS X, shared libraries do know what their name is, but I'm not sure to what extent it affects things; however, my impression is that their linker is weirdly concerned about such things (FIXME). However, ``install_name_tool -id`` lets you change the name of a shared library and avoid any potential problems.

  - On Windows, .dll files do know what their name is, but I don't *think* that anything cares? FIXME: we should check this. (If they do care, then we can easily teach `redll `_ to modify the embedded name strings, similar to ``patchelf --set-soname``.)
  A bigger problem, though, is that on Windows, you don't link directly against .dll files; instead, you pass the linker a .lib "import library", and the .lib file says which .dll to link to. So if you rename a .dll, you also need some way to modify all the references to it inside the .lib file.

  *Possible strategy 1:* rename the .dll files and then regenerate the .lib files using something like `dlltool `_.

  *Possible strategy 2:* rename the .dll files and then patch the .lib file to refer to the new name.

  *Possible strategy 3:* leave the .lib file alone, and patch the resulting binary using something like `redll `_.

  Complicating strategies 1 & 2 is that there are actually two different formats for .lib files -- the simple format that MSVC traditionally uses, and the .a format traditionally used by mingw-based compilers. The .lib format is very simple and it would be easy to write a tool to patch it; also, it is the more widespread format, and mingw-based compilers can nowadays understand it perfectly well, so that's probably what we want to be standardizing on. Also, dlltool only knows how to generate the .a format. Given all this, and the complexity/uncertainty associated with post-hoc patching (strategy 3), I'm leaning towards strategy 2: require everyone to use MSVC-style .lib files and write a tool to patch them.

  This also requires some way to actually generate .lib files when using the mingwpy toolchain. This could use the new ``genlib`` program that's just been committed to ``mingw-w64`` upstream, or it actually wouldn't be terribly hard to implement our own .lib file writer (the format is very simple -- just an ``ar`` archive of ``ILF`` files, `details are here <http://www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx>`_.). This extra work is unfortunate, but we may need to do it anyway if we want to allow for people using MSVC to link to mingwpy-compiled libraries, and mingw-w64 upstream is already moving in this direction.

- We need some scripted way to build these wheels, which will involve running the underlying build system (e.g. openssl's Makefile-based system), patching the resulting binaries, and packing them up into a wheel. It's not 100% clear what the best way to do that is -- maybe some hack on top of (dist/setup)utils, maybe some ad hoc thing where we just generate the wheels directly (maybe using `flit `_ for the packing).

- Practically speaking, we're probably going to want some shared infrastructure to make it easier to maintain the collection of pynativelib packages. `conda-forge `_ provides an interesting model here: they have a `shared github organization `_, with one repo per project. And there's some `shared tooling `_ for automatically building this using Appveyor + CircleCI + Travis to cover Windows / Linux / OS X respectively, and automatically push new versions into the conda channel. We'll need to figure out how to do this.

Unanswered questions
--------------------

- **The "gfortran problem" (also known as "the libstdc++ problem"):** Compiler runtimes are particularly challenging to ship as pynativelib wheels, because:

  - There's no straightforward way to tell the linker to link against the renamed versions of these libraries.

  - It may not be easy to even figure out what version of the runtime is needed.
    For GCC's runtimes you generally need to make sure you have a runtime available that's at least as new as the one used to build your code -- so you need some way to figure out what was used to build the code, which may require some sort of non-trivial build system integration.

  - It's not even 100% clear that anyone promises to maintain backwards compatibility for GCC runtimes on Windows or OS X. (On Linux they use ELF symbol versioning; on Windows and OS X there's... no such thing as ELF symbol versioning, and no formal promises I can find in the documentation.) But after some investigation, it looks like we're probably OK here in practice -- libgcc and libgfortran have never actually depended on symbol versioning, and the only time libstdc++ did was back in the ancient past. In particular, for the big c++11 ABI-breaking transition, they used a strategy (different mangling for the old and new symbols) that basically does the same thing as ELF symbol versioning but that is more portable.

  All in all it seems like we'll need to handle these specially somehow. In the short term, auditwheel-style vendoring works fine. In the long run, maybe it will work to follow an auditwheel-style strategy where we provide a special tool that postprocesses a built distro to convert it into one that depends on a pynativelib package (instead of just vendoring the runtime library)? Anyway, we can defer this for the moment...

  (Right now this mostly affects GCC-based toolchains, but now that MS has switched to a more traditional approach we may run into the problem with MSVC in the future too. While MSVC 2015 moved all the crucial C runtime stuff out of the compiler runtime and into the operating system, `there still is a runtime library `_ (``vcruntime140.dll`` in MSVC 2015). CPython ships ``vcruntime140.dll``, so we don't need to. But if MSVC 2016-built code needs ``vcruntime150.dll``, and you want to use MSVC 2016 to build a package for CPython 3.5, then we'll need a way to ship ``vcruntime150.dll``.)

- **How can we make this play better with other distribution mechanisms?** E.g. Debian or conda package builders who, when building ``cryptography`` or ``numpy``, actually *want* them to link to an external non-pynativelib version of libssl and libblas.

  This is the one place where I'm most uncertain about the design described above.

  The minimal answer is "well, Debian/conda/... can carry a little patch to their copy of ``cryptography``/``numpy``", and that's not wrong, but it would be really nice if we could do better than that. I'm not sure how. The biggest issue is that in the design above, downstream packages like ``numpy`` have a hardcoded ``import pynativelib_...`` in their ``__init__.py``, which should be removed when linked against a non-pynativelib version of the library. What would be very nice is if we could make the public pynativelib interface be entirely a build-time thing -- so that instead of a hard-coded runtime import, the build-time interface would provide the code that needs to be called at run-time. (Perhaps it could drop it into a special ``.py`` file as part of the build process, and then the package ``__init__.py`` would have to import that.) This would also potentially be useful in other cases, like separation of development and runtime libraries and header-only libraries, and maybe if we had some sort of general machinery for saying "run this code when you're imported" then it would also help with vendoring libraries on Windows.
  Unfortunately, build-time configuration is something that distutils is extremely terrible at...

Copyright
---------

This document has been placed into the public domain.

--
Nathaniel J. Smith -- https://vorpus.org

From donald at stufft.io  Mon Apr 11 08:03:49 2016
From: donald at stufft.io (Donald Stufft)
Date: Mon, 11 Apr 2016 08:03:49 -0400
Subject: [Wheel-builders] [RFC] Proposal for packaging native libraries into Python wheels
In-Reply-To: 
References: 
Message-ID: 

> On Apr 11, 2016, at 4:50 AM, Nathaniel Smith wrote: > > **What about executables, like the ``openssl`` command-line tool?** We > can handle this by stashing the executable itself in another hidden > directory:: > > pynativelib_openssl/ > <...all the stuff above...> > _bins/ > openssl > > and then install a Python dispatch script (using the same mechanisms > as we'd use to ship any Python script with wheel), where the script > looks like:: > > #!python > > import os > import sys > > import pynativelib_openssl > > pynativelib_openssl.enable() > os.execv(pynativelib_openssl._openssl_bin_path, sys.argv) You don't need to do this indirection, distutils has a scripts= option which allows you to pass arbitrary files that will get installed into the bin dir with executable bit set. > > > - **How can we make this play better with other distribution > mechanisms?** E.g. Debian or conda package builders who when > building ``cryptography`` or ``numpy`` actually *want* them to link > to an external non-pynativelib version of libssl and libblas. > > This is the one place where I'm most uncertain about the design > described above. > > The minimal answer is "well, Debian/conda/... can carry a little > patch to their copy of ``cryptography``/``numpy``, and that's not > wrong, but it would be really nice if we could do better than > that. I'm not sure how. The biggest issue is that in the design > above, downstream packages like ``numpy`` have a hardcoded ``import > pynativelib_...`` in their ``__init__.py``, which should be removed > when linked against a non-pynativelib version of the library. What > would be very nice is if we could make the public pynativelib > interface be entirely a build-time thing -- so that instead of a > hard-coded runtime import, the build-time interface would provide > the code that needed to be called at run-time. (Perhaps it could > drop it into a special ``.py`` file as part of the build process, > and then the package ``__init__.py`` would have to import that.) > This would also potentially be useful in other cases too like > separation of development and runtime libraries and header-only > libraries, and maybe if we had some sort of general machinery for > saying "run this code when you're imported" then it would also help > with vendoring libraries on Windows. Unfortunately, build-time > configuration is something that distutils is extremely terrible > at... > I think this is important, not because of Debian/conda who can pretty easily carry a patch, but because of people who want to install on those platforms using something like pip, but depend on the platform OpenSSL (or whatever). Right now those people can get that by doing: pip install --no-binary cryptography and I think it would be a regression to lose that. Perhaps we could leverage some other mechanism for the runtime bits. One idea that springs to mind (which may be terrible) is using a .pth file to enable the environment modifications instead of needing people to do it at runtime in their library code.
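For instance (a hypothetical sketch -- the file and package names here are made up), the wheel could ship a one-line .pth file that site.py would execute at interpreter startup:

    # pynativelib_openssl.pth -- hypothetical sketch; lines in a .pth file
    # that start with "import" are executed by site.py at startup
    import pynativelib_openssl; pynativelib_openssl.enable()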
This would mean that it gets an implicit enable() call for every Python process regardless of whether it needs it or not, which may be a bad thing I?m not sure. If it is we could possibly use sys.meta_path to register an import hook which only enabled on import of a using lib and otherwise just did nothing. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Mon Apr 11 09:06:14 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 11 Apr 2016 06:06:14 -0700 Subject: [Wheel-builders] [RFC] Proposal for packaging native libraries into Python wheels In-Reply-To: References: Message-ID: On Apr 11, 2016 05:04, "Donald Stufft" wrote: > > > > On Apr 11, 2016, at 4:50 AM, Nathaniel Smith wrote: > > > > **What about executables, like the ``openssl`` command-line tool?** We > > can handle this by stashing the executable itself in another hidden > > directory:: > > > > pynativelib_openssl/ > > <...all the stuff above...> > > _bins/ > > openssl > > > > and then install a Python dispatch script (using the same mechanisms > > as we'd use to ship any Python script with wheel), where the script > > looks like:: > > > > #!python > > > > import os > > import sys > > > > import pynativelib_openssl > > > > pynativelib_openssl.enable() > > os.execv(pynativelib_openssl._openssl_bin_path, sys.argv) > > > You don?t need to do this indirection, distutils has a scripts= option > which allows you to pass arbitrary files that will get installed into > the bin dir with executable bit set. The indirection isn't needed to make it executable, it's needed so that we have a chance to munge LD_LIBRARY_PATH before loading the real binary. > > > > > > - **How can we make this play better with other distribution > > mechanisms?** E.g. Debian or conda package builders who when > > building ``cryptography`` or ``numpy`` actually *want* them to link > > to an external non-pynativelib version of libssl and libblas. > > > > This is the one place where I'm most uncertain about the design > > described above. > > > > The minimal answer is "well, Debian/conda/... can carry a little > > patch to their copy of ``cryptography``/``numpy``, and that's not > > wrong, but it would be really nice if we could do better than > > that. I'm not sure how. The biggest issue is that in the design > > above, downstream packages like ``numpy`` have a hardcoded ``import > > pynativelib_...`` in their ``__init__.py``, which should be removed > > when linked against a non-pynativelib version of the library. What > > would be very nice is if we could make the public pynativelib > > interface be entirely a build-time thing -- so that instead of a > > hard-coded runtime import, the build-time interface would provide > > the code that needed to be called at run-time. (Perhaps it could > > drop it into a special ``.py`` file as part of the build process, > > and then the package ``__init__.py`` would have to import that.) > > This would also potentially be useful in other cases too like > > separation of development and runtime libraries and header-only > > libraries, and maybe if we had some sort of general machinery for > > saying "run this code when you're imported" then it would also help > > with vendoring libraries on Windows. 
Unfortunately, build-time > > configuration is something that distutils is extremely terrible > > at... > > > > > I think this is important, not because of Debian/conda who can pretty > easily carry a patch, but because of people who want to install on > those platforms using something like pip, but depend on the platform > OpenSSL (or whatever). Right now those people can get that by doing: > > pip install ?no-binary cryptography > > and I think it would be a regression to lose that. Perhaps we could > leverage some other mechanism for the runtime bits. One idea that > springs to mind (which may be terrible) is using a .pth file to > enable the environment modifications instead of needing people to do > it at runtime in their library code. This would mean that it gets an > implicit enable() call for every Python process regardless of whether > it needs it or not, which may be a bad thing I?m not sure. Huh, that's an interesting and slightly terrifying idea. Can a wheel drop a .pth file into site-packages? I mean, does that even work technically? > If it is > we could possibly use sys.meta_path to register an import hook which > only enabled on import of a using lib and otherwise just did nothing. I think this hits the bootstrap problem: who's going to modify sys.meta_path before our package starts loading? The alternative also is probably not *that* bad. It's just bad enough that I decided I needed to stop fiddling and get this posted and more eyes on it before trying to solve it properly :-). But basically during your build you'd ask the pynativelib object to drop some generated file into your source, and then modify your __init__.py to do something like import ._pynativelib_openssl_enable There are a lot of fiddly variations on this idea and I'm not sure which is the cleanest. And making this work with distutils requires some nasty hack, because everything in distutils requires some nasty hack, but it's basically the same nasty hack as all the other cases where you have to auto generate a source file during the build -- e.g. every project using cython already does something morally equivalent to sneak the Cython-generated .c files into place before distutils tries compiling them. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Apr 11 09:10:28 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 11 Apr 2016 09:10:28 -0400 Subject: [Wheel-builders] [RFC] Proposal for packaging native libraries into Python wheels In-Reply-To: References: Message-ID: > On Apr 11, 2016, at 9:06 AM, Nathaniel Smith > wrote: > > The indirection isn't needed to make it executable, it's needed so that we have a chance to munge LD_LIBRARY_PATH before loading the real binary. > Oh right, I missed that part. I blame just waking up. > > Huh, that's an interesting and slightly terrifying idea. Can a wheel drop a .pth file into site-packages? I mean, does that even work technically? > > Yes. A .pth file is no different than any other file as far as wheels are concerned. If you have a pynativelib.pth in your wheel then it?ll get installed. > > I think this hits the bootstrap problem: who's going to modify sys.meta_path before our package starts loading? > > The .pth file that the pynativelib wheel ships. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Apr 13 10:38:01 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 14 Apr 2016 00:38:01 +1000 Subject: [Wheel-builders] [RFC] Proposal for packaging native libraries into Python wheels In-Reply-To: References: Message-ID: On 11 April 2016 at 22:03, Donald Stufft wrote: > I think this is important, not because of Debian/conda who can pretty > easily carry a patch, but because of people who want to install on > those platforms using something like pip, but depend on the platform > OpenSSL (or whatever). Right now those people can get that by doing: > > pip install --no-binary cryptography > > and I think it would be a regression to lose that. I missed the start of this thread, but yes - once you're no longer supporting linking with system packages, you've become conda (a complete binary platform in its own right) rather than pip (a plugin manager for Python runtimes). > Perhaps we could > leverage some other mechanism for the runtime bits. One idea that > springs to mind (which may be terrible) is using a .pth file to > enable the environment modifications instead of needing people to do > it at runtime in their library code. Aside from slowing down Python startup time, the other main challenge with .pth files is that they don't get run if Python is started without running the site module (which is standard practice for Linux system scripts, for example). > This would mean that it gets an > implicit enable() call for every Python process regardless of whether > it needs it or not, which may be a bad thing I'm not sure. If it is > we could possibly use sys.meta_path to register an import hook which > only enabled on import of a using lib and otherwise just did nothing.
A meta_path hook should work fine, and it can be pure Python to address the bootstrapping problem (as long as whatever native libs you need are loaded before the affected extension module starts loading there shouldn't be a problem). Pure extension modules with no Python component would potentially face a problem, but the Python-module-with-C-accelerator pattern could potentially address those. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From max_linke at gmx.de Sun Apr 17 17:38:49 2016 From: max_linke at gmx.de (Max Linke) Date: Sun, 17 Apr 2016 23:38:49 +0200 Subject: [Wheel-builders] manylinux vagrant box Message-ID: <57140269.6080401@gmx.de> Hi Has anybody build a vagrant manylinux box yet? There has been a short discussion about this already on the older google groups mailing list [1] Vagrant is a tool to easily create and share virtual machines similar to docker. Vagrant spins up a whole virtual machine using VirtualBox, KVM, etc instead of just a container. Personally I think vagrant is easier to use for beginners and for development since it is designed as a way to easily share development environments. I want to create a vagrant box to build conda packages anyway. I'm willing to also make the box so that wheels can be build (I don't have much experience with them yet) and share and maintain the scripts to build the boxes with you if you are interested. best Max [1] https://groups.google.com/forum/#!topic/manylinux-discuss/5T3DVvL50tA From msarahan at gmail.com Sun Apr 17 18:19:39 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Sun, 17 Apr 2016 22:19:39 +0000 Subject: [Wheel-builders] manylinux vagrant box In-Reply-To: <57140269.6080401@gmx.de> References: <57140269.6080401@gmx.de> Message-ID: Hi Max, I think Vagrant is one conceptual level higher than docker images. IIRC, it can output docker containers if you so choose. The major perk of vagrant to me would be if you could have one consolidated recipe for obtaining requirements on several platforms. I'm not sure about the benefit over docker for a single OS - but that's just my opinion. Maybe it gives you a GUI, whereas docker is more command-line oriented? For conda, I have done some work on Windows with vagrant and salt: https://github.com/ContinuumIO/vagrant-images/pull/1 - and there's also a linux docker image much like the one for manylinux: https://github.com/ContinuumIO/docker-images/tree/master/conda_builder_linux If either of those are something you'd like to build on, I'd be happy to talk with you. Best, Michael On Sun, Apr 17, 2016 at 4:38 PM Max Linke wrote: > Hi > > Has anybody build a vagrant manylinux box yet? There has been a short > discussion about this already on the older google groups mailing list [1] > > Vagrant is a tool to easily create and share virtual machines similar to > docker. Vagrant spins up a whole virtual machine using VirtualBox, KVM, > etc instead of just a container. > > Personally I think vagrant is easier to use for beginners and for > development since it is designed as a way to easily share development > environments. > > I want to create a vagrant box to build conda packages anyway. I'm > willing to also make the box so that wheels can be build (I don't have > much experience with them yet) and share and maintain the scripts to > build the boxes with you if you are interested. 
> > best Max > > [1] https://groups.google.com/forum/#!topic/manylinux-discuss/5T3DVvL50tA > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sun Apr 17 19:51:15 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 17 Apr 2016 16:51:15 -0700 Subject: [Wheel-builders] manylinux vagrant box In-Reply-To: <57140269.6080401@gmx.de> References: <57140269.6080401@gmx.de> Message-ID: On Sun, Apr 17, 2016 at 2:38 PM, Max Linke wrote: > Hi > > Has anybody build a vagrant manylinux box yet? There has been a short > discussion about this already on the older google groups mailing list [1] > > Vagrant is a tool to easily create and share virtual machines similar to > docker. Vagrant spins up a whole virtual machine using VirtualBox, KVM, etc > instead of just a container. > > Personally I think vagrant is easier to use for beginners and for > development since it is designed as a way to easily share development > environments. Docker is pretty easy too, at least from linux -- spinning up a manylinux environment is just "sudo apt install docker.io && sudo docker run -ti quay.io/pypa/manylinux1_x86_64 bash". But I haven't tried using docker on other platforms (though I know it's possible to install docker somehow), and I haven't tried using vagrant at all :-). Given my ignorance, I'd be interested to hear more about what makes you excited about vagrant! > I want to create a vagrant box to build conda packages anyway. I'm willing > to also make the box so that wheels can be build (I don't have much > experience with them yet) and share and maintain the scripts to build the > boxes with you if you are interested. The more the merrier :-). And the conda linux and manylinux build environments are *very* similar. AFAIK the only real difference between them is that when you're targeting conda you can assume that the conda version of libstdc++ is already present, and conda's libstdc++ is newer than the version of libstdc++ you're allowed to assume is present when distributing manylinux wheels (which is just whatever is installed on the system). This almost never matters though: the compilers that ship with the manylinux image can handle any version of C++ up to C++11, and will work fine for both conda and manylinux packages. Or, if you really need support for C++14, then you can always use a newer compiler and then use auditwheel to bundle your newer version of libstdc++ in with your wheel. -n -- Nathaniel J. Smith -- https://vorpus.org From max_linke at gmx.de Mon Apr 18 03:25:02 2016 From: max_linke at gmx.de (Max Linke) Date: Mon, 18 Apr 2016 09:25:02 +0200 Subject: [Wheel-builders] manylinux vagrant box In-Reply-To: References: <57140269.6080401@gmx.de> Message-ID: <57148BCE.5000204@gmx.de> On 04/18/2016 01:51 AM, Nathaniel Smith wrote: > Docker is pretty easy too, at least from linux -- spinning up a > manylinux environment is just "sudo apt install docker.io && sudo > docker run -ti quay.io/pypa/manylinux1_x86_64 bash". But I haven't > tried using docker on other platforms (though I know it's possible to > install docker somehow), and I haven't tried using vagrant at all :-). > > Given my ignorance, I'd be interested to hear more about what makes > you excited about vagrant! I can build the images myself starting from verified distribution images. 
Thanks to the packer [1] tool this is very easy to setup. Docker images are hard to verify and AFAIK there is no signing of images or updates to them done. Building your own docker images from the ground up also isn't that easy. It is also super easy to change where images are stored my machine using one environment variable. This is important to me because I usually have small root partitions and a separate home partition. Vagrant should run anywhere virtualbox/kvm and ruby are available. But according to the docker docs it can now run on all platforms as well. Otherwise in the usage they are very similar. > The more the merrier :-). And the conda linux and manylinux build > environments are *very* similar. Good to know. Then I can also look into building wheels after the conda packages are done. [1] https://www.packer.io/ From olivier.grisel at ensta.org Wed Apr 20 05:14:35 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 20 Apr 2016 11:14:35 +0200 Subject: [Wheel-builders] [RFC] Use UPX to compress libraries in wheels? Message-ID: This project seems interesting: http://upx.sourceforge.net/ This could potentially reduce the size of our wheels significantly to both reduce bandwidth usage and speed up install times with pip. We could add an option to auditwheel repair to enable UPX compaction on the compiled extensions and the grafted libraries. Another benefit would be to reduce on-disk size of the typical virtualenv holding the full scipy stack. This can be important to deploy it in serverless environments such as AWS lambda which has size restrictions (50MB) for the hosted code. For instance see this blog post on this topic (it does not use UPX, only stripped binaries): https://serverlesscode.com/post/deploy-scikitlearn-on-lamba/ Note that this second use case (venv compaction) could easily be addressed by a tool not related to wheels but I thought that using UPX upstream in the build & packaging process might be of general interest (primarily to save bandwidth and speedup installs). -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From njs at pobox.com Wed Apr 20 09:12:19 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 20 Apr 2016 06:12:19 -0700 Subject: [Wheel-builders] [RFC] Use UPX to compress libraries in wheels? In-Reply-To: References: Message-ID: I thought upx only works on executables, not shared libraries? The pynativelib stuff could also significantly shrink our venvs compared to the current situation. -n On Apr 20, 2016 2:14 AM, "Olivier Grisel" wrote: > This project seems interesting: > > http://upx.sourceforge.net/ > > This could potentially reduce the size of our wheels significantly to > both reduce bandwidth usage > and speed up install times with pip. We could add an option to > auditwheel repair to enable UPX compaction on the compiled extensions > and the grafted libraries. > > Another benefit would be to reduce on-disk size of the typical > virtualenv holding the full scipy stack. > This can be important to deploy it in serverless environments such as > AWS lambda which has size restrictions (50MB) for the hosted code. 
For > instance see this blog post on this topic (it does not use UPX, only > stripped binaries): > > https://serverlesscode.com/post/deploy-scikitlearn-on-lamba/ > > Note that this second use case (venv compaction) could easily be > addressed by a tool not related to wheels but I thought that using UPX > upstream in the build & packaging process might be of general interest > (primarily to save bandwidth and speedup installs). > > -- > Olivier > http://twitter.com/ogrisel - http://github.com/ogrisel > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From olivier.grisel at ensta.org Thu Apr 21 04:49:37 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 21 Apr 2016 10:49:37 +0200 Subject: [Wheel-builders] [RFC] Use UPX to compress libraries in wheels? In-Reply-To: References: Message-ID: You are right, UPX cannot compress shared libraries: https://sourceforge.net/p/upx/feature-requests/53/ And +1 for pynativelib. -- Olivier
From hafner87 at gmail.com Thu Apr 21 04:29:07 2016 From: hafner87 at gmail.com (Matthias Hafner) Date: Thu, 21 Apr 2016 08:29:07 +0000 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 Message-ID: Hi guys, I'm first of all very happy about this project/PEP. Many thanks to you for making this happen. This is what I've tried so far in the manylinux1_i686 container: bash-3.2# python/cp27-cp27mu/bin/pip wheel scipy Collecting scipy Downloading scipy-0.17.0.tar.gz (12.4MB) 100% |████████████████████████████████| 12.4MB 102kB/s Collecting numpy>=1.6.2 (from scipy) Downloading numpy-1.11.0.tar.gz (4.2MB) 100% |████████████████████████████████| 4.2MB 196kB/s Building wheels for collected packages: scipy, numpy Running setup.py bdist_wheel for scipy ... error Complete output from command /opt/_internal/cpython-2.7.11-ucs4/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-EW8Dw4/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmp6hYt6Xpip-wheel-: Traceback (most recent call last): File "", line 1, in File "/tmp/pip-build-EW8Dw4/scipy/setup.py", line 265, in setup_package() File "/tmp/pip-build-EW8Dw4/scipy/setup.py", line 253, in setup_package from numpy.distutils.core import setup ImportError: No module named numpy.distutils.core ---------------------------------------- Failed building wheel for scipy Running setup.py clean for scipy Running setup.py bdist_wheel for numpy ... done Stored in directory: /opt Successfully built numpy Failed to build scipy ERROR: Failed to build one or more wheels bash-3.2# python/cp27-cp27mu/bin/pip install _internal/ numpy-1.11.0-cp27-cp27mu-linux_x86_64.whl python/ rh/ bash-3.2# python/cp27-cp27mu/bin/pip install numpy-1.11.0-cp27-cp27mu-linux_x86_64.whl numpy-1.11.0-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform. Thanks for any help! Matthias -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From olivier.grisel at ensta.org Thu Apr 21 09:36:11 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 21 Apr 2016 15:36:11 +0200 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: This is a problem related to how the scipy build is set up: you need to install numpy before you can build scipy; it is not really related to the manylinux docker image itself. /path/to/pip install numpy /path/to/pip wheel scipy -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel
From njs at pobox.com Thu Apr 21 09:49:35 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 21 Apr 2016 06:49:35 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Note also FYI that there are now some carefully tested, official manylinux builds of numpy and scipy posted on pypi. If you want to make your own wheels instead then there's no reason why you shouldn't, it's all free software :-). But if you just want to get some wheels that work then downloading those might save you some time. (The tricky part is BLAS.) -n On Apr 21, 2016 6:32 AM, "Matthias Hafner" wrote: > [snip - Matthias's original report, quoted in full above] -------------- next part -------------- An HTML attachment was scrubbed...
URL: From olivier.grisel at ensta.org Thu Apr 21 09:57:22 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 21 Apr 2016 15:57:22 +0200 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Matthew did no upload the i686 (32 bit Linux) variant, only the more common x86_64 (64 bit Linux) variant. -- Olivier From matthew.brett at gmail.com Thu Apr 21 14:46:11 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 21 Apr 2016 11:46:11 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Hi, On Thu, Apr 21, 2016 at 6:57 AM, Olivier Grisel wrote: > Matthew did no upload the i686 (32 bit Linux) variant, only the more > common x86_64 (64 bit Linux) variant. That's right - although it looks as if it is not hard to build i686 wheel, using https://github.com/matthew-brett/manylinux-builds docker run --rm -v $PWD:/io quay.io/pypa/manylinux1_i686 /io/build_openblas.sh docker run --rm -e NUMPY_VERSIONS=1.11.0 -e PYTHON_VERSIONS=2.7 -v $PWD:/io quay.io/pypa/manylinux1_i686 /io/build_numpies.sh It does make a wheel with an extra underscore though: Fixed-up wheel written to /io/wheelhouse/numpy-1.11.0-cp27-cp27mu-linux_x86_64.manylinux1__i686.whl We could add this to the build matrix for the manylinux-builds travis script, if there was some interest. Cheers, Matthew From matthew.brett at gmail.com Thu Apr 21 16:57:44 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 21 Apr 2016 13:57:44 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: On Thu, Apr 21, 2016 at 11:46 AM, Matthew Brett wrote: > Hi, > > On Thu, Apr 21, 2016 at 6:57 AM, Olivier Grisel > wrote: >> Matthew did no upload the i686 (32 bit Linux) variant, only the more >> common x86_64 (64 bit Linux) variant. > > That's right - although it looks as if it is not hard to build i686 > wheel, using https://github.com/matthew-brett/manylinux-builds > > docker run --rm -v $PWD:/io quay.io/pypa/manylinux1_i686 /io/build_openblas.sh > > docker run --rm -e NUMPY_VERSIONS=1.11.0 -e PYTHON_VERSIONS=2.7 -v > $PWD:/io quay.io/pypa/manylinux1_i686 /io/build_numpies.sh > > It does make a wheel with an extra underscore though: > > Fixed-up wheel written to > /io/wheelhouse/numpy-1.11.0-cp27-cp27mu-linux_x86_64.manylinux1__i686.whl The underscore problem presumably fixed by https://github.com/pypa/auditwheel/pull/27 Cheers, Matthew From hafner87 at gmail.com Thu Apr 21 17:13:40 2016 From: hafner87 at gmail.com (Matthias Hafner) Date: Thu, 21 Apr 2016 21:13:40 +0000 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Hi Matthew, Thanks for pointing me to your scripts, that was maybe what I missed. However, if you could add i686 to Travis, I guess that means it would be uploaded to pypi? That would be awesome - otherwise I'd have to automate that myself. Thanks Matthias Matthew Brett schrieb am Do., 21. Apr. 2016, 22:58: > On Thu, Apr 21, 2016 at 11:46 AM, Matthew Brett > wrote: > > Hi, > > > > On Thu, Apr 21, 2016 at 6:57 AM, Olivier Grisel > > wrote: > >> Matthew did no upload the i686 (32 bit Linux) variant, only the more > >> common x86_64 (64 bit Linux) variant. 
> > > > That's right - although it looks as if it is not hard to build i686 > > wheel, using https://github.com/matthew-brett/manylinux-builds > > > > docker run --rm -v $PWD:/io quay.io/pypa/manylinux1_i686 > /io/build_openblas.sh > > > > docker run --rm -e NUMPY_VERSIONS=1.11.0 -e PYTHON_VERSIONS=2.7 -v > > $PWD:/io quay.io/pypa/manylinux1_i686 /io/build_numpies.sh > > > > It does make a wheel with an extra underscore though: > > > > Fixed-up wheel written to > > /io/wheelhouse/numpy-1.11.0-cp27-cp27mu-linux_x86_64.manylinux1__i686.whl > > The underscore problem presumably fixed by > https://github.com/pypa/auditwheel/pull/27 > > Cheers, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Apr 21 18:04:18 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 21 Apr 2016 15:04:18 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: On Thu, Apr 21, 2016 at 2:13 PM, Matthias Hafner wrote: > Hi Matthew, > > Thanks for pointing me to your scripts, that was maybe what I missed. > However, if you could add i686 to Travis, I guess that means it would be > uploaded to pypi? That would be awesome - otherwise I'd have to automate > that myself. They don't get uploaded to pypi automatically, and they would be a bit more difficult to test, but it would certainly make it easier to provide them... Cheers, Matthew From matthew.brett at gmail.com Fri Apr 22 00:22:09 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 21 Apr 2016 21:22:09 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Hi, On Thu, Apr 21, 2016 at 3:04 PM, Matthew Brett wrote: > On Thu, Apr 21, 2016 at 2:13 PM, Matthias Hafner wrote: >> Hi Matthew, >> >> Thanks for pointing me to your scripts, that was maybe what I missed. >> However, if you could add i686 to Travis, I guess that means it would be >> uploaded to pypi? That would be awesome - otherwise I'd have to automate >> that myself. > > They don't get uploaded to pypi automatically, and they would be a bit > more difficult to test, but it would certainly make it easier to > provide them... I built i686 cython, numpy, scipy wheels. You can test with something like: python -m pip install --upgrade pip # to get latest pip pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy Cheers, Matthew From hafner87 at gmail.com Fri Apr 22 04:17:17 2016 From: hafner87 at gmail.com (Matthias Hafner) Date: Fri, 22 Apr 2016 08:17:17 +0000 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Thank you very much Matthew, they work on our wheezy-i686 systems. Interestingly, I got two failures reported from scipy.test(), not sure if I should maybe report that somewhere. 
I'll paste it here in case somebody is interested: ====================================================================== FAIL: test_data.test_boost(,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/testsys/test/tc/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_data.py", line 481, in _test_factory test.check(dtype=dtype) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/_testutils.py", line 292, in check assert_(False, "\n".join(msg)) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", line 71, in assert_ raise AssertionError(smsg) AssertionError: Max |adiff|: 7.90146e-13 Max |rdiff|: 8.34163e-07 Bad results (1 out of 1210) for the following points (in output 0): 0.00022049853578209877 107380.34375 0.135563462972641 => 6.193264077428975e-293 != 6.193269243624209e-293 (rdiff 8.341628679530342e-07) ====================================================================== FAIL: test_orthogonal.test_la_roots ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/testsys/test/tc/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_orthogonal.py", line 722, in test_la_roots vgq(rootf(50), evalf(50), weightf(50), 0., np.inf, 100, atol=1e-13) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_orthogonal.py", line 286, in verify_gauss_quad assert_allclose(vv, np.eye(N), rtol, atol) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", line 1391, in assert_allclose verbose=verbose, header=header) File "/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", line 733, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-15, atol=1e-13 (mismatch 0.08%) x: array([[ 1.000000e+00, 4.552139e-16, -4.564824e-16, ..., 1.224186e-16, 9.222704e-17, -1.530426e-17], [ 4.552139e-16, 1.000000e+00, 1.328770e-15, ...,... y: array([[ 1., 0., 0., ..., 0., 0., 0.], [ 0., 1., 0., ..., 0., 0., 0.], [ 0., 0., 1., ..., 0., 0., 0.],... ---------------------------------------------------------------------- Ran 20344 tests in 167.768s FAILED (KNOWNFAIL=98, SKIP=1700, failures=2) ~/test$ pip freeze Error [Errno 2] No such file or directory while executing command git rev-parse backports-abc==0.4 backports.ssl-match-hostname==3.5.0.1 bokeh==0.11.1 certifi==2016.2.28 check-manifest==0.31 devpi-client==2.4.1 devpi-common==2.0.8 futures==3.0.5 Jinja2==2.8 MarkupSafe==0.23 ngc-decoder==1.2.1 nose==1.3.7 numpy==1.11.0 pkginfo==1.2.1 pluggy==0.3.1 py==1.4.31 python-dateutil==2.5.3 PyYAML==3.11 requests==2.9.1 scipy==0.17.0 singledispatch==3.4.0.3 six==1.10.0 tornado==4.3 tox==2.3.1 virtualenv==15.0.1 ~/test$ uname -a Linux am2-nur-d732-02 3.2.0-4-686-pae #1 SMP Debian 3.2.57-3 i686 GNU/Linux ~/test$ lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 7.5 (wheezy) Release: 7.5 Codename: wheezy Cheers, Matthias Matthew Brett schrieb am Fr., 22. Apr. 
2016 um 06:22 Uhr: > Hi, > > On Thu, Apr 21, 2016 at 3:04 PM, Matthew Brett > wrote: > > On Thu, Apr 21, 2016 at 2:13 PM, Matthias Hafner > wrote: > >> Hi Matthew, > >> > >> Thanks for pointing me to your scripts, that was maybe what I missed. > >> However, if you could add i686 to Travis, I guess that means it would be > >> uploaded to pypi? That would be awesome - otherwise I'd have to automate > >> that myself. > > > > They don't get uploaded to pypi automatically, and they would be a bit > > more difficult to test, but it would certainly make it easier to > > provide them... > > I built i686 cython, numpy, scipy wheels. You can test with something > like: > > python -m pip install --upgrade pip # to get latest pip > pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy > > Cheers, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Fri Apr 22 13:39:52 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 22 Apr 2016 10:39:52 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Hi, On Fri, Apr 22, 2016 at 1:17 AM, Matthias Hafner wrote: > Thank you very much Matthew, they work on our wheezy-i686 systems. > Interestingly, I got two failures reported from scipy.test(), not sure if I > should maybe report that somewhere. I'll paste it here in case somebody is > interested: > > ====================================================================== > FAIL: test_data.test_boost( ibeta_inv_data_ipp-ibeta_inv_data>,) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/nose/case.py", line > 197, in runTest > self.test(*self.arg) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_data.py", > line 481, in _test_factory > test.check(dtype=dtype) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/_testutils.py", > line 292, in check > assert_(False, "\n".join(msg)) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", > line 71, in assert_ > raise AssertionError(smsg) > AssertionError: > Max |adiff|: 7.90146e-13 > Max |rdiff|: 8.34163e-07 > Bad results (1 out of 1210) for the following points (in output 0): > 0.00022049853578209877 107380.34375 > 0.135563462972641 => 6.193264077428975e-293 != > 6.193269243624209e-293 (rdiff 8.341628679530342e-07) > > ====================================================================== > FAIL: test_orthogonal.test_la_roots > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/nose/case.py", line > 197, in runTest > self.test(*self.arg) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_orthogonal.py", > line 722, in test_la_roots > vgq(rootf(50), evalf(50), weightf(50), 0., np.inf, 100, atol=1e-13) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/scipy/special/tests/test_orthogonal.py", > line 286, in verify_gauss_quad > assert_allclose(vv, np.eye(N), rtol, atol) > File > "/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", > line 1391, in assert_allclose > verbose=verbose, header=header) > File > 
"/home/testsys/test/tc/local/lib/python2.7/site-packages/numpy/testing/utils.py", > line 733, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-15, atol=1e-13 > > (mismatch 0.08%) > x: array([[ 1.000000e+00, 4.552139e-16, -4.564824e-16, ..., > 1.224186e-16, 9.222704e-17, -1.530426e-17], > [ 4.552139e-16, 1.000000e+00, 1.328770e-15, ...,... > y: array([[ 1., 0., 0., ..., 0., 0., 0.], > [ 0., 1., 0., ..., 0., 0., 0.], > [ 0., 0., 1., ..., 0., 0., 0.],... > > ---------------------------------------------------------------------- > Ran 20344 tests in 167.768s > > FAILED (KNOWNFAIL=98, SKIP=1700, failures=2) > > > ~/test$ pip freeze > Error [Errno 2] No such file or directory while executing command git > rev-parse > backports-abc==0.4 > backports.ssl-match-hostname==3.5.0.1 > bokeh==0.11.1 > certifi==2016.2.28 > check-manifest==0.31 > devpi-client==2.4.1 > devpi-common==2.0.8 > futures==3.0.5 > Jinja2==2.8 > MarkupSafe==0.23 > ngc-decoder==1.2.1 > nose==1.3.7 > numpy==1.11.0 > pkginfo==1.2.1 > pluggy==0.3.1 > py==1.4.31 > python-dateutil==2.5.3 > PyYAML==3.11 > requests==2.9.1 > scipy==0.17.0 > singledispatch==3.4.0.3 > six==1.10.0 > tornado==4.3 > tox==2.3.1 > virtualenv==15.0.1 > > ~/test$ uname -a > Linux am2-nur-d732-02 3.2.0-4-686-pae #1 SMP Debian 3.2.57-3 i686 GNU/Linux > > ~/test$ lsb_release -a > No LSB modules are available. > Distributor ID: Debian > Description: Debian GNU/Linux 7.5 (wheezy) > Release: 7.5 > Codename: wheezy Thanks for testing. I get your test_la_roots failure too: http://nipy.bic.berkeley.edu/builders/manylinux-2.7-ubuntu-32/builds/0/steps/shell_11/logs/stdio http://nipy.bic.berkeley.edu/builders/manylinux-3.4-ubuntu-32/builds/0/steps/shell_11/logs/stdio http://nipy.bic.berkeley.edu/builders/manylinux-3.5-ubuntu-32/builds/0/steps/shell_11/logs/stdio I also get this on Pythons 3.4, 3.5: ====================================================================== FAIL: test_qhull.TestUtilities.test_more_barycentric_transforms ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/scipy/spatial/tests/test_qhull.py", line 375, in test_more_barycentric_transforms unit_cube_tol=1500*eps) File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/scipy/spatial/tests/test_qhull.py", line 303, in _check_barycentric_transforms assert_(ok.all(), "%s %s" % (err_msg, np.where(~ok))) File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/numpy/testing/utils.py", line 71, in assert_ raise AssertionError(smsg) AssertionError: ndim=4 (array([11618], dtype=int32),) I guess these are precision errors, and I guess we are also not testing routinely on 32-bit Linux. I think the next step is to see whether we get the same failures on scipy master, with and without openblas 0.2.18, then make an issue for them. 
Cheers, Matthew From matthew.brett at gmail.com Fri Apr 22 18:51:41 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 22 Apr 2016 15:51:41 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: Hi, On Fri, Apr 22, 2016 at 10:39 AM, Matthew Brett wrote: > Hi, > > On Fri, Apr 22, 2016 at 1:17 AM, Matthias Hafner wrote: >> Thank you very much Matthew, they work on our wheezy-i686 systems. >> Interestingly, I got two failures reported from scipy.test(), not sure if I >> should maybe report that somewhere. I'll paste it here in case somebody is >> interested: >> >> [snip - full test output and environment details, quoted in full above] > > Thanks for testing.
I get your test_la_roots failure too: > > http://nipy.bic.berkeley.edu/builders/manylinux-2.7-ubuntu-32/builds/0/steps/shell_11/logs/stdio > > http://nipy.bic.berkeley.edu/builders/manylinux-3.4-ubuntu-32/builds/0/steps/shell_11/logs/stdio > > http://nipy.bic.berkeley.edu/builders/manylinux-3.5-ubuntu-32/builds/0/steps/shell_11/logs/stdio > > I also get this on Pythons 3.4, 3.5: > > ====================================================================== > FAIL: test_qhull.TestUtilities.test_more_barycentric_transforms > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/nose/case.py", > line 198, in runTest > self.test(*self.arg) > File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/scipy/spatial/tests/test_qhull.py", > line 375, in test_more_barycentric_transforms > unit_cube_tol=1500*eps) > File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/scipy/spatial/tests/test_qhull.py", > line 303, in _check_barycentric_transforms > assert_(ok.all(), "%s %s" % (err_msg, np.where(~ok))) > File "/home/buildslave/bongoslave/manylinux-3_4-ubuntu-32/build/venv/lib/python3.4/site-packages/numpy/testing/utils.py", > line 71, in assert_ > raise AssertionError(smsg) > AssertionError: ndim=4 (array([11618], dtype=int32),) I reliably get this error only for the manylinux wheel, not the same code built on the local machine, apparently in the same way (numpy==1.11.0, scipy==0.17.0, openblas etc). I can't replicate it on any docker image I have to hand including the manylinux x86 docker image and: https://hub.docker.com/r/toopher/ubuntu-i386/tags/ - 12.04 https://hub.docker.com/r/f69m/ubuntu32/tags/ - lts So, I don't know what's going on, but it's going to be very hard to debug. Does anyone have any insight into this error? Cheers, Matthew From olivier.grisel at ensta.org Mon Apr 25 07:39:15 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 25 Apr 2016 13:39:15 +0200 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: I can reproduce the exact same error on a fresh ubuntu/trusty32 vagrant box with numpy 1.11.0 and scipy 0.17.0 built from source with system Python 3.4.3, libopenblas-dev 0.2.8-6ubuntu1, liblapack-dev 3.5.0-2ubuntu1 and gfortran 4.8.2-1ubuntu6. https://atlas.hashicorp.com/ubuntu/boxes/trusty32 ``` mkdir trusty32 && cd trusty32 vagrant init ubuntu/trusty32 # edit Vagrantfile to set virtualbox VM memory to 4096 vagrant up --provider virtualbox vagrant ssh ``` So this is not related to the manylinux packaging itself. -- Olivier Grisel From matthew.brett at gmail.com Mon Apr 25 12:19:03 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 25 Apr 2016 09:19:03 -0700 Subject: [Wheel-builders] [manylinux] how to build numpy/scipy wheels for x86 In-Reply-To: References: Message-ID: On Mon, Apr 25, 2016 at 4:39 AM, Olivier Grisel wrote: > I can reproduce the exact same error on a fresh ubuntu/trusty32 > vagrant box with numpy 1.11.0 and scipy 0.17.0 built from source with > system Python 3.4.3, libopenblas-dev 0.2.8-6ubuntu1, liblapack-dev > 3.5.0-2ubuntu1 and gfortran 4.8.2-1ubuntu6. 
> > https://atlas.hashicorp.com/ubuntu/boxes/trusty32 > > ``` > mkdir trusty32 && cd trusty32 > vagrant init ubuntu/trusty32 > # edit Vagrantfile to set virtualbox VM memory to 4096 > vagrant up --provider virtualbox > vagrant ssh > ``` > > So this is not related to the manylinux packaging itself. Thanks for taking the time to help track this one down. I've made an issue at https://github.com/scipy/scipy/issues/6101 Cheers, Matthew From matthew.brett at gmail.com Mon Apr 25 13:37:19 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 25 Apr 2016 10:37:19 -0700 Subject: [Wheel-builders] Manylinux external library test case - matplotlib, tk Message-ID: Hi, I am building matplotlib manylinux wheels: https://github.com/matthew-brett/manylinux-builds/blob/master/build_matplotlibs.sh The problem is, what to do about the default matplotlib tk optional dependency? Normally, matplotlib builds against the installed system tk libs : https://github.com/matplotlib/matplotlib/blob/master/setupext.py#L1522 When building a manylinux wheel, we can only build against the tk libs / headers on the manylinux docker image, by doing `yum install -y tk-devel` before building. This means that the tk that matplotlib is built against, is different from the one that Python uses with `import Tkinter`. I suppose that is what causes the following, when building matplotlib with tk-devel installed: >>> import matplotlib >>> matplotlib.get_backend() u'TkAgg' >>> import matplotlib.pyplot as plt >>> plt.plot(range(10)) [] >>> plt.show() Segmentation fault I don't immediately see a good way to deal with this. Any thoughts? Cheers, Matthew From matthew.brett at gmail.com Mon Apr 25 15:03:26 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 25 Apr 2016 12:03:26 -0700 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: Hi, On Mon, Apr 25, 2016 at 11:58 AM, Thomas Caswell wrote: > My only thought is to just not build the tk backend, shipping wheels that > are known to segfault is no good. :) - no good at all! Incidentally, the wheels at http://nipy.bic.berkeley.edu/manylinux/ do not have the tk backend, and so, do not (are not known to) segfault. > Is there any way to (optionally) do a bit of compilation on installation of > a binary wheel? No, at least not with the `pip install` step. The wheel install mechanism doesn't have post-install hooks. Is there an easy way to only do the tkagg build install, if we worked out a way to trigger that? Cheers, Matthew From tcaswell at gmail.com Mon Apr 25 14:58:16 2016 From: tcaswell at gmail.com (Thomas Caswell) Date: Mon, 25 Apr 2016 18:58:16 +0000 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: My only thought is to just not build the tk backend, shipping wheels that are known to segfault is no good. Is there any way to (optionally) do a bit of compilation on installation of a binary wheel? Tom On Mon, Apr 25, 2016 at 1:38 PM Matthew Brett wrote: > Hi, > > I am building matplotlib manylinux wheels: > > > https://github.com/matthew-brett/manylinux-builds/blob/master/build_matplotlibs.sh > > The problem is, what to do about the default matplotlib tk optional > dependency? 
> > Normally, matplotlib builds against the installed system tk libs : > https://github.com/matplotlib/matplotlib/blob/master/setupext.py#L1522 > > When building a manylinux wheel, we can only build against the tk libs > / headers on the manylinux docker image, by doing `yum install -y > tk-devel` before building. > > This means that the tk that matplotlib is built against, is different > from the one that Python uses with `import Tkinter`. I suppose that > is what causes the following, when building matplotlib with tk-devel > installed: > > >>> import matplotlib > >>> matplotlib.get_backend() > u'TkAgg' > >>> import matplotlib.pyplot as plt > >>> plt.plot(range(10)) > [] > >>> plt.show() > Segmentation fault > > I don't immediately see a good way to deal with this. Any thoughts? > > Cheers, > > Matthew > _______________________________________________ > Matplotlib-devel mailing list > Matplotlib-devel at python.org > https://mail.python.org/mailman/listinfo/matplotlib-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.v.root at gmail.com Mon Apr 25 15:19:31 2016 From: ben.v.root at gmail.com (Benjamin Root) Date: Mon, 25 Apr 2016 15:19:31 -0400 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: What would be better is if we can get rid of the compilation requirement, but from Michael's comments in the past, it seems that this isn't possible at the moment. The problem is that the compiled portion provides the image buffer access, which seems to not be available at the python level. Perhaps it might make sense to pursue submitting our code to the Tk/Tcl project so that we might be able to drop this step at some point? Ben Root On Mon, Apr 25, 2016 at 3:03 PM, Matthew Brett wrote: > Hi, > > On Mon, Apr 25, 2016 at 11:58 AM, Thomas Caswell > wrote: > > My only thought is to just not build the tk backend, shipping wheels that > > are known to segfault is no good. > > :) - no good at all! > > Incidentally, the wheels at http://nipy.bic.berkeley.edu/manylinux/ do > not have the tk backend, and so, do not (are not known to) segfault. > > > Is there any way to (optionally) do a bit of compilation on installation > of > > a binary wheel? > > No, at least not with the `pip install` step. The wheel install > mechanism doesn't have post-install hooks. > > Is there an easy way to only do the tkagg build install, if we worked > out a way to trigger that? > > Cheers, > > Matthew > _______________________________________________ > Matplotlib-devel mailing list > Matplotlib-devel at python.org > https://mail.python.org/mailman/listinfo/matplotlib-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Mon Apr 25 15:42:27 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 25 Apr 2016 21:42:27 +0200 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: 2016-04-25 21:19 GMT+02:00 Benjamin Root : > What would be better is if we can get rid of the compilation requirement, > but from Michael's comments in the past, it seems that this isn't possible > at the moment. The problem is that the compiled portion provides the image > buffer access, which seems to not be available at the python level. 
Perhaps > it might make sense to pursue submitting our code to the Tk/Tcl project so > that we might be able to drop this step at some point? Wouldn't it be possible to rewrite this part using ctypes instead? -- Olivier Grisel From njs at pobox.com Mon Apr 25 16:00:03 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 25 Apr 2016 13:00:03 -0700 Subject: [Wheel-builders] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: On Apr 25, 2016 10:38 AM, "Matthew Brett" wrote: > > Hi, > > I am building matplotlib manylinux wheels: > > https://github.com/matthew-brett/manylinux-builds/blob/master/build_matplotlibs.sh > > The problem is, what to do about the default matplotlib tk optional dependency? > > Normally, matplotlib builds against the installed system tk libs : > https://github.com/matplotlib/matplotlib/blob/master/setupext.py#L1522 > > When building a manylinux wheel, we can only build against the tk libs > / headers on the manylinux docker image, by doing `yum install -y > tk-devel` before building. > > This means that the tk that matplotlib is built against, is different > from the one that Python uses with `import Tkinter`. I suppose that > is what causes the following, when building matplotlib with tk-devel > installed: > > >>> import matplotlib > >>> matplotlib.get_backend() > u'TkAgg' > >>> import matplotlib.pyplot as plt > >>> plt.plot(range(10)) > [] > >>> plt.show() > Segmentation fault > > I don't immediately see a good way to deal with this. Any thoughts? Maybe obvious, but here's a syllogism: If: matplotlib and python have to agree on which tk libs they are using, and: there are inconsistencies between tk libs on different Linux systems such that matplotlib must be built against the same tk as it eventually uses, then: we don't really have any option except to disable tk support in matplotlib manylinux wheels So I guess the question to focus on are whether those two premises are actually true. I guess a backtrace on the segfault would help? Also: how important is tk support? Do we have a fallback? I know that Qt packaging is something of a mess right now too... -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Mon Apr 25 17:52:56 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 25 Apr 2016 23:52:56 +0200 Subject: [Wheel-builders] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: It would be nice to have a manylinux1 wheel for PyQT with embedded QT libs. -- Olivier From njs at pobox.com Mon Apr 25 17:56:57 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 25 Apr 2016 14:56:57 -0700 Subject: [Wheel-builders] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: On Mon, Apr 25, 2016 at 2:52 PM, Olivier Grisel wrote: > It would be nice to have a manylinux1 wheel for PyQT with embedded QT libs. PySide might be an easier initial target -- they almost support installing from wheels already. (You can make and post wheels, but they have a post-install script you have to manually run. Presumably whatever this does could be run from __init__.py on first import instead, at least when installing into a venv...) -n -- Nathaniel J. 
Smith -- https://vorpus.org From matthew.brett at gmail.com Mon Apr 25 19:23:27 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 25 Apr 2016 16:23:27 -0700 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: Hi, On Mon, Apr 25, 2016 at 12:42 PM, Olivier Grisel wrote: > 2016-04-25 21:19 GMT+02:00 Benjamin Root : >> What would be better is if we can get rid of the compilation requirement, >> but from Michael's comments in the past, it seems that this isn't possible >> at the moment. The problem is that the compiled portion provides the image >> buffer access, which seems to not be available at the python level. Perhaps >> it might make sense to pursue submitting our code to the Tk/Tcl project so >> that we might be able to drop this step at some point? > > Wouldn't it be possible to rewrite this part using ctypes instead? I had the same thought - has anyone looked into that? Thanks, Matthew From max_linke at gmx.de Tue Apr 26 08:34:30 2016 From: max_linke at gmx.de (Max Linke) Date: Tue, 26 Apr 2016 14:34:30 +0200 Subject: [Wheel-builders] manylinux vagrant box In-Reply-To: <57148BCE.5000204@gmx.de> References: <57140269.6080401@gmx.de> <57148BCE.5000204@gmx.de> Message-ID: <607a478b-449e-6ac4-a477-f1ebaa5356d3@gmx.de> I have now working templates to create vagrant boxes to build manylinux packages. https://github.com/kain88-de/vagrant-manylinux-template I will use them myself to build MDAnalysis conda packages soon. Others are welcome to use them as well. Also is there a way to test if the conda-packages/wheels are manylinux compatible? best Max On 04/18/2016 09:25 AM, Max Linke wrote: > > > On 04/18/2016 01:51 AM, Nathaniel Smith wrote: > > > Docker is pretty easy too, at least from linux -- spinning up a >> manylinux environment is just "sudo apt install docker.io && sudo >> docker run -ti quay.io/pypa/manylinux1_x86_64 bash". But I haven't >> tried using docker on other platforms (though I know it's possible to >> install docker somehow), and I haven't tried using vagrant at all :-). >> >> Given my ignorance, I'd be interested to hear more about what makes >> you excited about vagrant! > > I can build the images myself starting from verified distribution > images. Thanks to the packer [1] tool this is very easy to setup. > Docker images are hard to verify and AFAIK there is no signing of images > or updates to them done. Building your own docker images from the ground > up also isn't that easy. > > It is also super easy to change where images are stored my machine using > one environment variable. This is important to me because I usually have > small root partitions and a separate home partition. > > Vagrant should run anywhere virtualbox/kvm and ruby are available. But > according to the docker docs it can now run on all platforms as well. > > Otherwise in the usage they are very similar. > >> The more the merrier :-). And the conda linux and manylinux build >> environments are *very* similar. > > Good to know. Then I can also look into building wheels after the conda > packages are done. 
> > > [1] https://www.packer.io/ > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders From msarahan at gmail.com Tue Apr 26 09:00:20 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Tue, 26 Apr 2016 13:00:20 +0000 Subject: [Wheel-builders] manylinux vagrant box In-Reply-To: <607a478b-449e-6ac4-a477-f1ebaa5356d3@gmx.de> References: <57140269.6080401@gmx.de> <57148BCE.5000204@gmx.de> <607a478b-449e-6ac4-a477-f1ebaa5356d3@gmx.de> Message-ID: Max, these look good - nice work. For the conda ones, you don't need to install both miniconda 2 and 3, though. Just one (either one) will suffice, and the pass the python version explicitly at build time: conda build --python=2.7 I hope this saves you some setup time and space in your final images. I am not aware of any auditwheel extensions or equivalents for conda right now. Best, Michael On Tue, Apr 26, 2016, 07:39 Max Linke wrote: > I have now working templates to create vagrant boxes to build manylinux > packages. > > https://github.com/kain88-de/vagrant-manylinux-template > > I will use them myself to build MDAnalysis conda packages soon. Others > are welcome to use them as well. > > Also is there a way to test if the conda-packages/wheels are manylinux > compatible? > > best Max > > > On 04/18/2016 09:25 AM, Max Linke wrote: > > > > > > On 04/18/2016 01:51 AM, Nathaniel Smith wrote: > > > > > Docker is pretty easy too, at least from linux -- spinning up a > >> manylinux environment is just "sudo apt install docker.io && sudo > >> docker run -ti quay.io/pypa/manylinux1_x86_64 bash". But I haven't > >> tried using docker on other platforms (though I know it's possible to > >> install docker somehow), and I haven't tried using vagrant at all :-). > >> > >> Given my ignorance, I'd be interested to hear more about what makes > >> you excited about vagrant! > > > > I can build the images myself starting from verified distribution > > images. Thanks to the packer [1] tool this is very easy to setup. > > Docker images are hard to verify and AFAIK there is no signing of images > > or updates to them done. Building your own docker images from the ground > > up also isn't that easy. > > > > It is also super easy to change where images are stored my machine using > > one environment variable. This is important to me because I usually have > > small root partitions and a separate home partition. > > > > Vagrant should run anywhere virtualbox/kvm and ruby are available. But > > according to the docker docs it can now run on all platforms as well. > > > > Otherwise in the usage they are very similar. > > > >> The more the merrier :-). And the conda linux and manylinux build > >> environments are *very* similar. > > > > Good to know. Then I can also look into building wheels after the conda > > packages are done. > > > > > > [1] https://www.packer.io/ > > _______________________________________________ > > Wheel-builders mailing list > > Wheel-builders at python.org > > https://mail.python.org/mailman/listinfo/wheel-builders > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Thu Apr 28 08:45:15 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 28 Apr 2016 05:45:15 -0700 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: Hi, On Mon, Apr 25, 2016 at 4:23 PM, Matthew Brett wrote: > Hi, > > On Mon, Apr 25, 2016 at 12:42 PM, Olivier Grisel > wrote: >> 2016-04-25 21:19 GMT+02:00 Benjamin Root : >>> What would be better is if we can get rid of the compilation requirement, >>> but from Michael's comments in the past, it seems that this isn't possible >>> at the moment. The problem is that the compiled portion provides the image >>> buffer access, which seems to not be available at the python level. Perhaps >>> it might make sense to pursue submitting our code to the Tk/Tcl project so >>> that we might be able to drop this step at some point? >> >> Wouldn't it be possible to rewrite this part using ctypes instead? > > I had the same thought - has anyone looked into that? I was thinking of looking into this. It would be very useful to unlink matplotlib from the tk compile. It's already a pain to get this right for OSX - for the wheel build, we have to download the activestate tcl/tk, check it's the right version, link to it, and then make sure we haven't shipped any of it : https://github.com/MacPython/matplotlib-wheels/blob/master/run_install.sh#L5 https://github.com/MacPython/matplotlib-wheels/blob/master/check_tcl.py https://github.com/MacPython/matplotlib-wheels/blob/master/mpl_delocate.py I'm sure it's also annoying on Windows. Michael D - are you the right person to ask about this stuff? Do you have any pointers about where the problems will be? Thanks a lot, Matthew From mdboom at gmail.com Thu Apr 28 08:54:17 2016 From: mdboom at gmail.com (Michael Droettboom) Date: Thu, 28 Apr 2016 12:54:17 +0000 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: I'd be open to a ctypes implementation of the tk backend as long as it's fully optional (like the current tk backend) since using ctypes excludes us from use in restricted environments such as Google app engine. Submitting the buffer hook to tcl/tk and/or tkinter would be nice. All other modern Python GUI frameworks have this feature. Mike On Thu, Apr 28, 2016, 8:45 AM Matthew Brett wrote: > Hi, > > On Mon, Apr 25, 2016 at 4:23 PM, Matthew Brett > wrote: > > Hi, > > > > On Mon, Apr 25, 2016 at 12:42 PM, Olivier Grisel > > wrote: > >> 2016-04-25 21:19 GMT+02:00 Benjamin Root : > >>> What would be better is if we can get rid of the compilation > requirement, > >>> but from Michael's comments in the past, it seems that this isn't > possible > >>> at the moment. The problem is that the compiled portion provides the > image > >>> buffer access, which seems to not be available at the python level. > Perhaps > >>> it might make sense to pursue submitting our code to the Tk/Tcl > project so > >>> that we might be able to drop this step at some point? > >> > >> Wouldn't it be possible to rewrite this part using ctypes instead? > > > > I had the same thought - has anyone looked into that? > > I was thinking of looking into this. It would be very useful to > unlink matplotlib from the tk compile. 
It's already a pain to get > this right for OSX - for the wheel build, we have to download the > activestate tcl/tk, check it's the right version, link to it, and then > make sure we haven't shipped any of it : > > > https://github.com/MacPython/matplotlib-wheels/blob/master/run_install.sh#L5 > https://github.com/MacPython/matplotlib-wheels/blob/master/check_tcl.py > https://github.com/MacPython/matplotlib-wheels/blob/master/mpl_delocate.py > > I'm sure it's also annoying on Windows. > > Michael D - are you the right person to ask about this stuff? Do you > have any pointers about where the problems will be? > > Thanks a lot, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sat Apr 30 00:23:41 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 29 Apr 2016 21:23:41 -0700 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: I'm sure I missed the conversation, but as tkInter is included with Python itself, why can't the wheels link against the same version that python ships?? > Submitting the buffer hook to tcl/tk and/or tkinter would be nice. That would be the better solution, yes -- presumable to tkinter -- but that's only going to help with Python 3.6 or greater..... (or maybe it could be back-ported) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Apr 30 03:43:20 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 30 Apr 2016 00:43:20 -0700 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: On Fri, Apr 29, 2016 at 9:23 PM, Chris Barker wrote: > I'm sure I missed the conversation, but as tkInter is included with Python > itself, why can't the wheels link against the same version that python > ships?? The problem is that we're trying to make a single build of matplotlib that works with multiple different builds of Python, which might come with different, incompatible builds of tkInter -- so if we build against the version shipped with python A then it will be broken on python B and vice-versa. Or maybe not -- Matthew's seeing a segfault, and it *might* be due to the existence of fundamental ABI incompatibilities between different tk builds, but it hasn't really been characterized. >> Submitting the buffer hook to tcl/tk and/or tkinter would be nice. > > That would be the better solution, yes -- presumable to tkinter -- but > that's only going to help with Python 3.6 or greater..... > > (or maybe it could be back-ported) It might get backported to 3.5, maybe 3.4 if we're really quick, but it won't get backported to 2.7, so yeah, anything involving changing python upstream isn't going to be a short-term solution. -n -- Nathaniel J. Smith -- https://vorpus.org From ben.v.root at gmail.com Sat Apr 30 11:16:46 2016 From: ben.v.root at gmail.com (Benjamin Root) Date: Sat, 30 Apr 2016 11:16:46 -0400 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: Right. Submitting the patch won't address our near-term needs, but it'll certainly position us better a few years from now. 
I am thinking that for the near-term, we could have a fail-over mechanism. First attempt to access the proposed tk interface (to be forward-compatible), failing that, then attempt a ctypes interface. Failing that, fall back to our current implementation. This gives a way forward for those in the restricted environments to keep doing whatever has been working so far, while giving everyone else an intermediate solution -- a usable GUI backend that will almost always be guaranteed to work no matter in what order the packages were installed. Cheers! Ben Root On Sat, Apr 30, 2016 at 3:43 AM, Nathaniel Smith wrote: [snip - Nathaniel's message, quoted in full above] -------------- next part -------------- An HTML attachment was scrubbed... URL:
From matthew.brett at gmail.com Sat Apr 30 16:40:45 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 30 Apr 2016 16:40:45 -0400 Subject: [Wheel-builders] [Matplotlib-devel] Manylinux external library test case - matplotlib, tk In-Reply-To: References: Message-ID: On Sat, Apr 30, 2016 at 3:43 AM, Nathaniel Smith wrote: > On Fri, Apr 29, 2016 at 9:23 PM, Chris Barker wrote: >> I'm sure I missed the conversation, but as tkInter is included with Python >> itself, why can't the wheels link against the same version that python >> ships?? > > The problem is that we're trying to make a single build of matplotlib > that works with multiple different builds of Python, which might come > with different, incompatible builds of tkInter -- so if we build > against the version shipped with python A then it will be broken on > python B and vice-versa. > > Or maybe not -- Matthew's seeing a segfault, and it *might* be due to > the existence of fundamental ABI incompatibilities between different > tk builds, but it hasn't really been characterized. I'm in Cuba at the moment, so it's very difficult to run stuff remotely. If you have access to benten.dlab.berkeley.edu, the built wheel is in /home/mb312/dev_trees/manylinux-builds/wheelhouse - latest matplotlib 1.5.1 for Python 2.7. Otherwise, the build recipe in manylinux-builds should give a good clue as to how to replicate. Cheers, Matthew
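P.S. On Nathaniel's earlier question about a backtrace: one low-effort option, for whoever can reach that machine first, is the faulthandler module (in the standard library on Python 3, pip-installable as a backport on 2.7). It only prints the Python-level frames, not the C stack, but it would at least confirm exactly which call segfaults. A rough sketch, not yet tried against that wheel:

```
# Minimal reproduction with faulthandler enabled, so the crash at least
# reports the Python frame it happened in (a C-level stack would still
# need gdb on the same machine).
import faulthandler
faulthandler.enable()

import matplotlib
matplotlib.use("TkAgg")   # force the backend under suspicion
import matplotlib.pyplot as plt

plt.plot(range(10))
plt.show()                # expected to dump a traceback if it segfaults
```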