From tony at kelman.net Fri Jan 1 23:03:31 2016
From: tony at kelman.net (Tony Kelman)
Date: Sat, 2 Jan 2016 04:03:31 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Nathaniel Smith <njs at pobox.com> writes:

> (The only exception I know of is msys2's python 2.7, which is built with mingw-w64. This makes a mess, though, and python.org has specifically refused to accept patches to even allow upstream cpython to be built this way -- this build basically only exists so msys2 gdb can link to it and run python gdb plugins.)

This isn't true at all. It exists to provide its users a functioning Python and 50+ packaged python-* modules, all built with open source compilers -- search for yourself at https://github.com/Alexpux/MINGW-packages. That includes working builds of numpy (using openblas), scipy, pandas, and others.

These aren't ABI compatible with arbitrary pypi msvc-built binaries, but given the build tools in msys2 it's often easier to build a library within that system. I've yet to see something that MSVC can build but mingw-w64 can't, but there are endless examples of the other way around. Python-dev's refusal to even review mingw-related patches has forced MSYS2 to essentially maintain a fork of cpython, in the form of 80+ patches in their PKGBUILD file. This fork won't be going away, and they will be continuing to package more modules against it.

Does the MSYS2 approach satisfy scipy's self-imposed requirements for MSVC ABI compatibility? No, but it does satisfy a sizable (and growing) chunk of users' requirements for scipy et al. binary distribution on Windows. Forking cpython and maintaining a separate ecosystem is a more viable option than you give it credit for.

From matthew.brett at gmail.com Sat Jan 2 09:33:31 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 2 Jan 2016 14:33:31 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Hi,

On Sat, Jan 2, 2016 at 4:03 AM, Tony Kelman wrote:
> [...]
> Does the MSYS2 approach satisfy scipy's self-imposed requirements for MSVC ABI compatibility? No, but it does satisfy a sizable (and growing) chunk of users' requirements for scipy et al. binary distribution on Windows. Forking cpython and maintaining a separate ecosystem is a more viable option than you give it credit for.
It would certainly be useful to share effort or at least discussion about this.

Are there any good places to go to understand the logic behind upstream Python's lack of interest in these patches / mingw-w64 Python generally? I'd love to hear how you are dealing with DLL hell and pip on the msys2 system...

Cheers,

Matthew

From tony at kelman.net Sat Jan 2 12:30:31 2016
From: tony at kelman.net (Tony Kelman)
Date: Sat, 2 Jan 2016 17:30:31 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Matthew Brett <matthew.brett at gmail.com> writes:

> It would certainly be useful to share effort or at least discussion about this.
>
> Are there any good places to go to understand the logic behind upstream Python's lack of interest in these patches / mingw-w64 Python generally? I'd love to hear how you are dealing with DLL hell and pip on the msys2 system...

I think the last time this was discussed was in the October 2014 python-dev thread "Status of C compilers for windows": https://mail.python.org/pipermail/python-dev/2014-October/136607.html
My personal conclusion from that was it could be cleaner to turn MSYS2's large patch set into a formal fork that anyone could build (or cross compile) even if not using MSYS2's pkgbuild system, but I didn't get too far there.

My understanding of how the MSYS2 python works is that for pure python modules without C extensions, pip should work fine. There are issues with virtualenv last time I checked, and I don't know how careful the MSYS2 packagers are about checking the status of the test suites of all their C extension builds. If you get all C extensions from MSYS2 then there is no dll hell, but mixing C runtimes can of course cause problems. Some of their patches are in distutils, to also make that recognize and use mingw-w64 compilers, but that would need some firsthand testing to see how well it works on C extension modules that haven't been packaged by MSYS2 yet.

To be clear, I think the current approach of trying to make mingwpy more robust and eventually get it upstreamed makes sense. But you're forking GCC with features (linking against recent msvc runtimes) upstream mingw-w64 doesn't want to use, as MSYS2 is forking cpython with features upstream python-dev doesn't want. It's kind of unfortunate all around.

From josef.pktd at gmail.com Sat Jan 2 12:51:55 2016
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 2 Jan 2016 12:51:55 -0500
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 12:30 PM, Tony Kelman wrote:
> [...]
> To be clear, I think the current approach of trying to make mingwpy more robust and eventually get it upstreamed makes sense. But you're forking GCC with features (linking against recent msvc runtimes) upstream mingw-w64 doesn't want to use, as MSYS2 is forking cpython with features upstream python-dev doesn't want. It's kind of unfortunate all around.
A general question about what I don't understand in this discussion.

What are the incompatibilities of mingwpy compiled C extensions with the official PSF python versions?

I have been using MinGW compiled statsmodels forever, and MinGW-w64 for maybe two years, up to python 3.4, and never ran into problems. I'm now using almost exclusively WinPython and the packaged gcc, which seems to come with all required local DLLs.

(I thought the main or only problems are DLL packaging and Fortran.)

Josef

From matthew.brett at gmail.com Sat Jan 2 14:13:21 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 2 Jan 2016 19:13:21 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Hi Josef,

On Sat, Jan 2, 2016 at 5:51 PM, <josef.pktd at gmail.com> wrote:
> [...]
> A general question about what I don't understand in this discussion.
>
> What are the incompatibilities of mingwpy compiled C extensions with the official PSF python versions?
>
> I have been using MinGW compiled statsmodels forever, and MinGW-w64 for maybe two years, up to python 3.4, and never ran into problems. I'm now using almost exclusively WinPython and the packaged gcc, which seems to come with all required local DLLs.
>
> (I thought the main or only problems are DLL packaging and Fortran.)
There's a fairly detailed summary of the issues over at http://mingwpy.github.io/issues.html

Cheers,

Matthew

From matthew.brett at gmail.com Sat Jan 2 14:23:54 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 2 Jan 2016 19:23:54 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Hi,

On Sat, Jan 2, 2016 at 5:30 PM, Tony Kelman wrote:
> [...]
> My understanding of how the MSYS2 python works is that for pure python modules without C extensions, pip should work fine. There are issues with virtualenv last time I checked, and I don't know how careful the MSYS2 packagers are about checking the status of the test suites of all their C extension builds. If you get all C extensions from MSYS2 then there is no dll hell, but mixing C runtimes can of course cause problems.

The problem we're trying to solve is making a toolchain that will allow any project to build their own Windows .whl installers without having to worry about how other projects were built.

My own feeling is having one central place building all these packages is not going to scale, partly because continuous testing is so important, and this can most efficiently be done and responded to by the projects themselves.

We scientific Python persons are particularly concerned with packages that need compilation. That does put us at risk of DLL hell, because many packages need to link to external libraries, and two packages may need the same library. Or two versions of one package may get installed which have different and incompatible versions of the same library.

> Some of their patches are in distutils, to also make that recognize and use mingw-w64 compilers, but that would need some firsthand testing to see how well it works on C extension modules that haven't been packaged by MSYS2 yet.
>
> To be clear, I think the current approach of trying to make mingwpy more robust and eventually get it upstreamed makes sense. But you're forking GCC with features (linking against recent msvc runtimes) upstream mingw-w64 doesn't want to use, as MSYS2 is forking cpython with features upstream python-dev doesn't want. It's kind of unfortunate all around.

Yes, it is unfortunate; it would be good to find some way of sharing effort, and/or lobbying for the Python / pypi folks. I can well see the attraction of the MinGW-w64-built Python depending only on a single standard MS VC runtime; the current mix of VC runtimes is a tiring burden.

Cheers,

Matthew

From josef.pktd at gmail.com Sat Jan 2 14:57:52 2016
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 2 Jan 2016 14:57:52 -0500
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 2:13 PM, Matthew Brett wrote:
> Hi Josef,
> [...]
> There's a fairly detailed summary of the issues over at http://mingwpy.github.io/issues.html

Thanks for the link. Sounds like something for the experts, and not something I have to worry about myself. (Except maybe that I will never again add a new 32 bit python install.)

Josef

From njs at pobox.com Sat Jan 2 15:06:47 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 12:06:47 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Jan 2, 2016 9:52 AM, <josef.pktd at gmail.com> wrote:
> [...]
> I'm now using almost exclusively WinPython and the packaged gcc, which seems to come with all required local DLLs.

Note that the gcc packaged with WinPython is exactly the prototype version of the mingwpy compiler that Ralf just announced funding for and that we're all hoping will save us.

-n

From josef.pktd at gmail.com Sat Jan 2 15:18:53 2016
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 2 Jan 2016 15:18:53 -0500
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 3:06 PM, Nathaniel Smith wrote:
> Note that the gcc packaged with WinPython is exactly the prototype version of the mingwpy compiler that Ralf just announced funding for and that we're all hoping will save us.

Then I'm a happy user of it so far, compiling almost only statsmodels with it. I saw that the directory name changed from `mingw32` to `mingwpy` between WinPython-64bit-3.4.3.1 and WinPython-64bit-3.4.3.6. I never looked at the details because it "just works" for me. (On my old notebook it "just worked" with older MinGW from python-xy and 32bit Pythons.)

Thanks, I'm happy when someone does the work and I don't have to worry about it, as long as there are no hidden traps that I should pay attention to.

Josef

From njs at pobox.com Sat Jan 2 15:23:29 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 12:23:29 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Jan 2, 2016 9:30 AM, "Tony Kelman" wrote:
> [...]
> To be clear, I think the current approach of trying to make mingwpy more robust and eventually get it upstreamed makes sense. But you're forking GCC with features (linking against recent msvc runtimes) upstream mingw-w64 doesn't want to use, as MSYS2 is forking cpython with features upstream python-dev doesn't want. It's kind of unfortunate all around.

To be clear: the current status here is that there are basically two runtimes that matter: the one used by 2.7, and the one used by 3.5 and future versions. The plan is indeed to fork mingw-w64 to handle the 2.7 runtime, because mingw-w64 is not interested in supporting that (at least at a price we can afford).
Fortunately we know from Carl's work so far that the required patches are fairly small and the 2.7 runtime is not a moving target, but it is unfortunate and the riskiest part of the plan.

But, for any patches needed for supporting 3.5+, including patches that are needed independently of the runtime chosen, we plan to push these upstream, and upstream is interested in taking them. (And in fact we're hoping upstream will do some of the heavy lifting :-).) We're in contact with both them and with Microsoft to figure out how to make modern mingw-w64 more compatible with modern msvc in general. This doesn't help with the 2.7 situation, but the long term situation is not as bleak as you fear.

-n

From tony at kelman.net Sat Jan 2 15:48:35 2016
From: tony at kelman.net (Tony Kelman)
Date: Sat, 2 Jan 2016 20:48:35 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Matthew Brett <matthew.brett at gmail.com> writes:

> The problem we're trying to solve is making a toolchain that will allow any project to build their own Windows .whl installers without having to worry about how other projects were built.

You are making an implicit assumption though that all other projects were built for compatibility with the python.org builds of CPython, and that all users will absolutely want to do this. For people who want to embed Python, e.g. in GDB, Blender, IJulia, etc., this isn't a requirement that adds much value.

> My own feeling is having one central place building all these packages is not going to scale, partly because continuous testing is so important, and this can most efficiently be done and responded to by the projects themselves.

I agree, this is one of my issues with MSYS2. An automatic build service would be ideal, and we heavily leverage the OpenSUSE build service to cross-compile Windows binaries of many libraries for use with Julia packages. (There are other reasons I can get into that we don't use MSYS2's infrastructure and toolchain more heavily for Julia, but it would be slightly off-topic.) Right now the only mature solution for binary distribution of the scientific Python stack on Windows is Anaconda, which is not only "one central place" but also not reproducible by other open source projects - as far as we're aware they are using MKL for NumPy and Intel Fortran for SciPy (and any other Fortran dependencies), correct? Anaconda.org still does not provide hosted automatic Windows build VMs on open source plans, do they?

> We scientific Python persons are particularly concerned with packages that need compilation.
>
> That does put us at risk of DLL hell, because many packages need to link to external libraries, and two packages may need the same library. Or two versions of one package may get installed which have different and incompatible versions of the same library.

Which compiler you're using is another dimension of compatibility here, but I don't see how it changes matters at all with respect to library versioning and API changes.

> Yes, it is unfortunate; it would be good to find some way of sharing effort, and/or lobbying for the Python / pypi folks.

It might be worth pinging a few people who are working on porting the scientific Python stack to ARM/Android, as cross compilation is also an important tool there.
Distutils and many other Python tools like Cython and conda have some pretty deeply ingrained assumptions that the build system equals the runtime system (and Windows equals MSVC). MSYS[2] is fundamentally a path-mangling hack that allows you to use autotools, bash, and gmake within a posix system while pretending that you aren't cross-compiling to a non-posix MinGW target. If your build system knows how to cross compile properly, it can be cleaner to just do that from either Cygwin or Linux (and in the latter case, build times are often substantially faster).

From tony at kelman.net Sat Jan 2 15:58:07 2016
From: tony at kelman.net (Tony Kelman)
Date: Sat, 2 Jan 2016 20:58:07 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Tony Kelman <tony at kelman.net> writes:

> Right now the only mature solution for binary distribution of the scientific Python stack on Windows is Anaconda,

Actually I take that back; it looks like WinPython is essentially prototyping the way forward with mingwpy. What were they using for SciPy prior to the existence of Carl's prototype toolchains?

From njs at pobox.com Sat Jan 2 16:35:02 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 13:35:02 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 12:48 PM, Tony Kelman wrote:
> Matthew Brett <matthew.brett at gmail.com> writes:
>
>> The problem we're trying to solve is making a toolchain that will allow any project to build their own Windows .whl installers without having to worry about how other projects were built.
>
> You are making an implicit assumption though that all other projects were built for compatibility with the python.org builds of CPython, and that all users will absolutely want to do this. For people who want to embed Python, e.g. in GDB, Blender, IJulia, etc., this isn't a requirement that adds much value.

It's also true though that the userbase who cares about running numpy inside a non-standard embedded python is much, much smaller than the userbase trying to run numpy on standard python. We certainly care about all users, but we also have to prioritize, and have extremely limited resources. :-/

It's possible that over the next few years enough patches will materialize and things will settle down enough to allow the programs mentioned above to switch to an MSVC2015-compatible runtime+ABI, which would at least allow them to embed a standard py35. Fingers crossed. Alternatively, the mingw-w64 folks have a plan for how to teach their compiler to generate .dll's that are "runtime agile", i.e. that will adapt themselves at dll-load-time to whichever runtime is in use by the host process, and that would solve this problem in general. But this would take more resources than we can scrounge out of the couch cushions (on the order of ~$100k instead of ~$10k), and while this would be pocket change for Microsoft, it turns out that even after extensive discussion, "the new open-source friendly Microsoft" is not currently interested in spending money to solve the endless pain their platform's brokenness causes for open source devs. So we do what we can.

-n
--
Nathaniel J. Smith -- http://vorpus.org

From tony at kelman.net Sat Jan 2 17:04:28 2016
From: tony at kelman.net (Tony Kelman)
Date: Sat, 2 Jan 2016 22:04:28 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Nathaniel Smith <njs at pobox.com> writes:

> It's also true though that the userbase who cares about running numpy inside a non-standard embedded python is much, much smaller than the userbase trying to run numpy on standard python. We certainly care about all users, but we also have to prioritize, and have extremely limited resources. :-/

Sure. I didn't mean to imply that building CPython itself with MinGW-w64 was solely useful for the embedded use case. In fact the MSYS2 build of CPython works *today* with Python 3.5, NumPy, SciPy, etc., which cannot be said of any other Windows builds of SciPy using freely available compilers. "Standard python" doesn't have to mean "downloaded from python.org" if the users' needs are met, one way or another. I don't think the users who download Python and the scientific stack via Anaconda care much about the details of how it's built, as long as they can get the packages they need to work.

> It's possible that over the next few years enough patches will materialize and things will settle down enough to allow the programs mentioned above to switch to an MSVC2015-compatible runtime+ABI, which would at least allow them to embed a standard py35. Fingers crossed.

I'm not sure that adds much value to all that many projects. If you already prefer MinGW-w64 over MSVC (2015 still hasn't solved inline assembly, Fortran, C99 complex numbers, or use with autotools), there's little benefit to the 2015 runtime. In Julia's case the most likely path forward towards "MSVC compatibility" doesn't involve MSVC at all: it will use Clang for C and C++ and the PGI-donated LLVM Fortran front end (https://www.llnl.gov/news/nnsa-national-labs-team-nvidia-develop-open-source-fortran-compiler-technology) once that's available. I'm sure LLVM will need some assistance in making that compiler support Windows once it gets open-sourced, but that's not expected until "late 2016", so it doesn't help solve SciPy's short term problems. It's on my personal radar though, and it may be worth keeping on yours as well.

> "the new open-source friendly Microsoft" is not currently interested in spending money to solve the endless pain their platform's brokenness causes for open source devs.

Not for the scientific/technical computing community anyway. Maybe nudging some Windows users towards VirtualBox and Ubuntu would be beneficial :)

From njs at pobox.com Sat Jan 2 17:22:19 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 14:22:19 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 2:04 PM, Tony Kelman wrote:
> Nathaniel Smith <njs at pobox.com> writes:
>
>> It's also true though that the userbase who cares about running numpy inside a non-standard embedded python is much, much smaller than the userbase trying to run numpy on standard python. We certainly care about all users, but we also have to prioritize, and have extremely limited resources. :-/
>
> Sure. I didn't mean to imply that building CPython itself with MinGW-w64 was solely useful for the embedded use case. In fact the MSYS2 build of CPython works *today* with Python 3.5, NumPy, SciPy, etc., which cannot be said of any other Windows builds of SciPy using freely available compilers.
"Standard python" doesn't have to mean > "downloaded from python.org" if the users' needs are met, one way or > another. I don't think the users who download Python and the > scientific stack via Anaconda care much about the details of how > it's built, as long as they can get the packages they need to work. That's all great until the msys2 users type "pip install psycopg2" and pip happily installs a package that was built against the standard windows cpython ABI, and breaks in obscure ways when used from msys2 python. My feeling is that while there are some users who might be happy with treating anaconda or whatever like a walled garden and never going outside a small curated set of packages, for most users what they want is the full python ecosystem. And that means interoperating with that ecosystem. Even if parts of it can sometimes be very frustrating to work with :-). >> It's possible that over the next few years enough patches will >> materialize and things will settle down enough to allow the >> programs mentioned above to switch to a MSVC2015-compatible >> runtime+ABI, which would at least allow them to embed a standard >> py35. Fingers crossed. > > I'm not sure that adds much value to all that many projects. If > you already prefer MinGW-w64 over MSVC (2015 still hasn't solved > inline assembly, Fortran, C99 complex numbers, or use with autotools), > there's little benefit to the 2015 runtime. Depends on whether you care about python or not :-). Also one might reasonably want to get off of the 10 year old msvcrt.dll runtime even without caring about MSVC, but YMMV. And I guess there are some weirdos out there who actually like using Visual Studio to write extension code :-). (I can't seem to find the numbers right now, but apparently PTVS gets a ton of downloads.) In any case, the point is just that mingw-w64 itself may well switch to an MSVC-compatible configuration as the main supported target over the next while. > In Julia's case the most > likely path forward towards "MSVC compatibility" doesn't involve MSVC > at all, it will use Clang for C and C++ and the PGI-donated LLVM > Fortran front end (https://www.llnl.gov/news/nnsa-national-labs-team- > nvidia-develop-open-source-fortran-compiler-technology) once that's > available. I'm sure LLVM will need some assistance in making that > compiler support Windows once it gets open-sourced, but that's not > expected until "late 2016" so it doesn't help solve SciPy's short term > problems. It's on my personal radar though, and it may be worth keeping > yon yours as well. I hadn't seen that Fortran announcement -- that is indeed interesting. -n -- Nathaniel J. Smith -- http://vorpus.org From matthew.brett at gmail.com Sat Jan 2 18:28:20 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 2 Jan 2016 23:28:20 +0000 Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release In-Reply-To: References: Message-ID: On Sat, Jan 2, 2016 at 8:48 PM, Tony Kelman wrote: > Matthew Brett gmail.com> writes: > >> The problem we're trying to solve, is making a toolchain that will >> allow any project to build their own Windows .whl installers without >> having to worry about how other projects were built. > > You are making an implicit assumption though that all other projects > were built for compatibility with the python.org builds of CPython, > and that all users will absolutely want to do this. For people who > want to embed Python, e.g. in GDB, Blender, IJulia, etc this isn't > a requirement that adds much value. 
No, you misunderstand me: it's possible to imagine this system built on any easily available Python. I was only saying that it's important that it should be obvious and easy how to build each individual project without having to worry about - for example - the DLL configuration of other projects. If you have one central place that decides what to do - like Anaconda or the MSYS2 packages - then that place can take care that the DLLs do not tread on each other's toes - but once you have distributed package building it becomes a lot harder to do that.

>> My own feeling is having one central place building all these packages is not going to scale, partly because continuous testing is so important, and this can most efficiently be done and responded to by the projects themselves.
>
> I agree, this is one of my issues with MSYS2. An automatic build service would be ideal, and we heavily leverage the OpenSUSE build service to cross-compile Windows binaries of many libraries for use with Julia packages. (There are other reasons I can get into that we don't use MSYS2's infrastructure and toolchain more heavily for Julia, but it would be slightly off-topic.) Right now the only mature solution for binary distribution of the scientific Python stack on Windows is Anaconda, which is not only "one central place" but also not reproducible by other open source projects - as far as we're aware they are using MKL for NumPy and Intel Fortran for SciPy (and any other Fortran dependencies), correct? Anaconda.org still does not provide hosted automatic Windows build VMs on open source plans, do they?

I don't know, maybe someone else can comment? But in any case, there are other reasons, including the ones you give, to build up an alternative solution to packaging using standard tools like pip and wheels.

>> We scientific Python persons are particularly concerned with packages that need compilation.
>>
>> That does put us at risk of DLL hell, because many packages need to link to external libraries, and two packages may need the same library. Or two versions of one package may get installed which have different and incompatible versions of the same library.
>
> Which compiler you're using is another dimension of compatibility here, but I don't see how it changes matters at all with respect to library versioning and API changes.

I'm sorry if I gave that impression, I was only trying to say that these problems get worse when switching from a centralized build to a distributed one.

>> Yes, it is unfortunate; it would be good to find some way of sharing effort, and/or lobbying for the Python / pypi folks.
>
> It might be worth pinging a few people who are working on porting the scientific Python stack to ARM/Android, as cross compilation is also an important tool there. Distutils and many other Python tools like Cython and conda have some pretty deeply ingrained assumptions that the build system equals the runtime system (and Windows equals MSVC). MSYS[2] is fundamentally a path-mangling hack that allows you to use autotools, bash, and gmake within a posix system while pretending that you aren't cross-compiling to a non-posix MinGW target. If your build system knows how to cross compile properly, it can be cleaner to just do that from either Cygwin or Linux (and in the latter case, build times are often substantially faster).

We could surely benefit from cross-compilation - is there a good central place to get going with mingw-w64 cross compiling?
Cheers,

Matthew

From tony at kelman.net Sat Jan 2 19:01:52 2016
From: tony at kelman.net (Tony Kelman)
Date: Sun, 3 Jan 2016 00:01:52 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Nathaniel Smith <njs at pobox.com> writes:

> That's all great until the msys2 users type "pip install psycopg2" and pip happily installs a package that was built against the standard windows cpython ABI, and breaks in obscure ways when used from msys2 python.

I'll have to check whether the msys2 pip is configured in a way that tries to use wheels from pypi, or always falls back to source builds. Both could be minefields for different reasons.

> My feeling is that while there are some users who might be happy with treating anaconda or whatever like a walled garden and never going outside a small curated set of packages, for most users what they want is the full python ecosystem. And that means interoperating with that ecosystem.

It's an interesting question. My impression is that anaconda wouldn't be as popular as it is if more of its users had strong needs to use compiled extensions not yet provided within anaconda.

> Depends on whether you care about python or not :-).

Hence my personal lack of investment in the MinGW-w64 CPython fork... The IJulia installation problem that I cared most about was solved by downloading and driving miniconda through a Conda.jl package.

> Also one might reasonably want to get off of the 10 year old msvcrt.dll runtime even without caring about MSVC, but YMMV.

MinGW-w64 has shims around most of the trouble spots that have come up over the years. Things like the math library are why Julia uses its own independent libm (mostly taken from BSDs, but buildable standalone): https://github.com/JuliaLang/openlibm - feel free to try building that with mingwpy, BTW, it might help. There have been issues with glibc's libm, and I believe on Mac as well, so this isn't uniquely a Windows problem.

> In any case, the point is just that mingw-w64 itself may well switch to an MSVC-compatible configuration as the main supported target over the next while.

This is also likely to happen by way of LLVM before it happens with GCC. Clang, LLD, compiler-rt, and libc++ are rapidly gaining in Windows functionality, including the mingw-w64 developers working on a standalone configuration that builds against the core mingw-w64 headers rather than needing to piggyback on an MSVC/WinSDK installation.

As far as using the ucrt goes, has there been any more recent activity than http://sourceforge.net/p/mingw-w64/mailman/message/34124287/ ? Are the existing mingwpy patches public somewhere?

From insertinterestingnamehere at gmail.com Sat Jan 2 19:28:25 2016
From: insertinterestingnamehere at gmail.com (Ian Henriksen)
Date: Sun, 03 Jan 2016 00:28:25 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 1:23 PM Nathaniel Smith wrote:
> But, for any patches needed for supporting 3.5+, including patches that are needed independently of the runtime chosen, we plan to push these upstream, and upstream is interested in taking them. (And in fact we're hoping upstream will do some of the heavy lifting :-).) We're in contact with both them and with Microsoft to figure out how to make modern mingw-w64 more compatible with modern msvc in general. This doesn't help with the 2.7 situation, but the long term situation is not as bleak as you fear.
Getting a ucrt-compatible version of mingw-w64 would be a huge deal. It wouldn't require us to maintain a separate toolchain, and it drastically improves compiler compatibility overall. Thanks again for getting all of this moving!

Best,

-Ian Henriksen

From tony at kelman.net Sat Jan 2 19:31:35 2016
From: tony at kelman.net (Tony Kelman)
Date: Sun, 3 Jan 2016 00:31:35 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Matthew Brett <matthew.brett at gmail.com> writes:

> No, you misunderstand me: it's possible to imagine this system built on any easily available Python. I was only saying that it's important that it should be obvious and easy how to build each individual project without having to worry about - for example - the DLL configuration of other projects.

I see, sorry for the misunderstanding. This seems like a problem that isn't really feasible to solve in the absolute most general case. Maybe the ucrt will help eventually, that was some of its design motivation, but it's still far too early to tell.

> We could surely benefit from cross-compilation - is there a good central place to get going with mingw-w64 cross compiling?

Not that I know of, I had to pick it up by trying it out and lots of practice. I gave a very short talk at last year's JuliaCon that touched on this, but there wasn't much time for details. I've been meaning to record a screencast type tutorial that would be targeted mainly towards Julia package developers to cover all of the details, starting from getting and using cross-compilers on a number of common Linux distributions. It may also work from OSX, but there isn't a great standard source of OSX-to-MinGW cross compiler binaries right now.

Debian, Ubuntu, Fedora, OpenSUSE, Arch, and Cygwin all have reasonably up-to-date mingw-w64 cross-compiler packages readily available in their repositories. If you pick a library that uses autotools, it can be as simple as configuring with --host=x86_64-w64-mingw32. CMake-built libraries also usually work but require a more verbose set of flags. This covers getting you to a DLL, which is the hardest part for Julia packages that wrap compiled C/Fortran libraries. I'm not very familiar with the additional steps that would be needed to build a Python C extension. You'd probably need an easy way of getting a Windows copy of libpython and Python.h onto the build machine?

I could work through an example of this, maybe pick a Linux distribution to build from and a library to build, and we could put together a Docker container that would have all the steps to produce a Windows DLL.
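To give a flavor of it in the meantime, the smallest end-to-end test I'd start from is something like this (a sketch I haven't run in exactly this form; the file and library names are just placeholders, and the package name is Debian/Ubuntu's):

    /* hello.c -- stand-in for a C library you want as a Windows DLL.
     *
     * Get a cross toolchain (Debian/Ubuntu):
     *   sudo apt-get install gcc-mingw-w64-x86-64
     *
     * Cross-compile the DLL plus an import library:
     *   x86_64-w64-mingw32-gcc -O2 -shared -o hello.dll hello.c \
     *       -Wl,--out-implib,libhello.dll.a
     *
     * For an autotools project the equivalent is just:
     *   ./configure --host=x86_64-w64-mingw32 && make
     */
    __declspec(dllexport) int hello_add(int a, int b)
    {
        return a + b;
    }

You can sanity-check the result with x86_64-w64-mingw32-objdump -x hello.dll, or load it under Wine or on a real Windows box.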
From njs at pobox.com Sat Jan 2 19:34:03 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 16:34:03 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 4:01 PM, Tony Kelman wrote:
> Nathaniel Smith <njs at pobox.com> writes:
>
>> That's all great until the msys2 users type "pip install psycopg2" and pip happily installs a package that was built against the standard windows cpython ABI, and breaks in obscure ways when used from msys2 python.
>
> I'll have to check whether the msys2 pip is configured in a way that tries to use wheels from pypi, or always falls back to source builds. Both could be minefields for different reasons.

The proper solution would be for it to use a different platform tag than standard cpython -- so instead of looking for wheels tagged win32 / win64, it would look for wheels tagged win32msys / win64msys (or whatever). But then you have the problem of convincing every project to provide builds for both platforms, which ugh.

>> My feeling is that while there are some users who might be happy with treating anaconda or whatever like a walled garden and never going outside a small curated set of packages, for most users what they want is the full python ecosystem. And that means interoperating with that ecosystem.
>
> It's an interesting question. My impression is that anaconda wouldn't be as popular as it is if more of its users had strong needs to use compiled extensions not yet provided within anaconda.
>
>> Depends on whether you care about python or not :-).
>
> Hence my personal lack of investment in the MinGW-w64 CPython fork... The IJulia installation problem that I cared most about was solved by downloading and driving miniconda through a Conda.jl package.
>
>> Also one might reasonably want to get off of the 10 year old msvcrt.dll runtime even without caring about MSVC, but YMMV.
>
> MinGW-w64 has shims around most of the trouble spots that have come up over the years. Things like the math library are why Julia uses its own independent libm (mostly taken from BSDs, but buildable standalone): https://github.com/JuliaLang/openlibm - feel free to try building that with mingwpy, BTW, it might help. There have been issues with glibc's libm, and I believe on Mac as well, so this isn't uniquely a Windows problem.

Unfortunately openlibm currently does not work very well if you're using any compiler that follows the MSVC ABI, because the code pools it draws from (like the BSDs) assume that the x87 FPU is configured in "extended precision" mode (which gives you an extra 16 bits of precision for intermediate calculations). This is a valid assumption on every popular x86 ABI *except* MSVC. (This is why the mingwpy page has a bunch of discussion of numerical precision issues -- mingw-w64's runtime is similar to openlibm in this respect.) Fortunately, it turns out that the bionic libc has a high-quality libm implemented using standard-precision SSE (hat tip to Sergey Maidanov at Intel for pointing this out to us) -- you might want to check it out.

>> This is also likely to happen by way of LLVM before it happens with GCC. Clang, LLD, compiler-rt, and libc++ are rapidly gaining in Windows functionality, including the mingw-w64 developers working on a standalone configuration that builds against the core mingw-w64 headers rather than needing to piggyback on an MSVC/WinSDK installation.
>>
>> As far as using the ucrt goes, has there been any more recent activity than http://sourceforge.net/p/mingw-w64/mailman/message/34124287/ ?

Not in public. The picture here should hopefully become much clearer within the next few weeks. (Sorry for vagueness, it's just that I've been pushing on various dominoes and don't know yet where they'll fall :-).)

> Are the existing mingwpy patches public somewhere?

Not yet.
That's "phase 1" in the proposal that was just funded https://mingwpy.github.io/proposal_december2015.html and so I think Carl's going to start working on it, like, tomorrow or something like that. Again, patience is needed, but hopefully not *too* much patience :-). -n -- Nathaniel J. Smith -- http://vorpus.org From tony at kelman.net Sat Jan 2 19:58:13 2016 From: tony at kelman.net (Tony Kelman) Date: Sun, 3 Jan 2016 00:58:13 +0000 (UTC) Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release References: Message-ID: Nathaniel Smith pobox.com> writes: > The proper solution would be for it to use a different platform tag > than standard cpython -- so instead of looking for wheels tagged win32 > / win64, it would look for wheels tagged win32msys / win64msys (or > whatever). But then you have the problem of convincing every project > to provide builds for both platforms, which ugh. Or msys2 would build and upload the wheels for their own platform tag? I suspect they'd rather be like a Linux distribution and manage their packaging via pacman though. > Unfortunately openlibm currently does not work very well if you're > using any compiler that follows the MSVC ABI, because the code pools > it draws from (like the BSDs) assume that the x87 FPU is configured in > "extended precision" mode (which gives you an extra 16 bits of > precision for intermediate calculations). This is a valid assumption > on every popular x86 ABI *except* MSVC. Why isn't "don't use x87" an option? Set a processor floor of Pentium 4 and don't try to use long double? It's not very well-supported or high performance on any x64 architecture anyway. From njs at pobox.com Sat Jan 2 20:29:48 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 2 Jan 2016 17:29:48 -0800 Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release In-Reply-To: References: Message-ID: On Sat, Jan 2, 2016 at 4:58 PM, Tony Kelman wrote: > Nathaniel Smith pobox.com> writes: > >> The proper solution would be for it to use a different platform tag >> than standard cpython -- so instead of looking for wheels tagged win32 >> / win64, it would look for wheels tagged win32msys / win64msys (or >> whatever). But then you have the problem of convincing every project >> to provide builds for both platforms, which ugh. > > Or msys2 would build and upload the wheels for their own platform tag? > I suspect they'd rather be like a Linux distribution and manage their > packaging via pacman though. They might well want that, but users won't put up with it. (Notice how popular anaconda is even on linux, and then how anaconda itself has had to adapt to interoperate better with pip -- users may well be happy to get their baseline stuff from a centralized source with good QA, but sooner or later they will want to pull in the latest version of pandas to get a specific new feature, or some obscure domain-specific package that hasn't made into their distro yet, or set up a virtualenv with old versions to reproduce an old paper, or ...) >> Unfortunately openlibm currently does not work very well if you're >> using any compiler that follows the MSVC ABI, because the code pools >> it draws from (like the BSDs) assume that the x87 FPU is configured in >> "extended precision" mode (which gives you an extra 16 bits of >> precision for intermediate calculations). This is a valid assumption >> on every popular x86 ABI *except* MSVC. > > Why isn't "don't use x87" an option? Set a processor floor of Pentium 4 > and don't try to use long double? 
It's not very well-supported or high performance on any x64 architecture anyway.

From njs at pobox.com Sat Jan 2 20:29:48 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 17:29:48 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 4:58 PM, Tony Kelman wrote:
> Nathaniel Smith <njs at pobox.com> writes:
>
>> The proper solution would be for it to use a different platform tag than standard cpython -- so instead of looking for wheels tagged win32 / win64, it would look for wheels tagged win32msys / win64msys (or whatever). But then you have the problem of convincing every project to provide builds for both platforms, which ugh.
>
> Or msys2 would build and upload the wheels for their own platform tag? I suspect they'd rather be like a Linux distribution and manage their packaging via pacman though.

They might well want that, but users won't put up with it. (Notice how popular anaconda is even on linux, and then how anaconda itself has had to adapt to interoperate better with pip -- users may well be happy to get their baseline stuff from a centralized source with good QA, but sooner or later they will want to pull in the latest version of pandas to get a specific new feature, or some obscure domain-specific package that hasn't made it into their distro yet, or set up a virtualenv with old versions to reproduce an old paper, or ...)

>> Unfortunately openlibm currently does not work very well if you're using any compiler that follows the MSVC ABI, because the code pools it draws from (like the BSDs) assume that the x87 FPU is configured in "extended precision" mode (which gives you an extra 16 bits of precision for intermediate calculations). This is a valid assumption on every popular x86 ABI *except* MSVC.
>
> Why isn't "don't use x87" an option? Set a processor floor of Pentium 4 and don't try to use long double? It's not very well-supported or high performance on any x64 architecture anyway.

"don't use x87" is a great option. It's just that if you want to do that, then someone has to do the work to rewrite the code in openlibm that currently uses x87 :-). E.g. here's openlibm's x86 "sin" implementation, based around the x87 "fsin" instruction: https://github.com/JuliaLang/openlibm/blob/master/i387/s_sin.S

fsin is an interesting instruction... https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/

Compare to the x87-free implementation in bionic (using SSE): https://github.com/android/platform_bionic/blob/master/libm/x86/s_sin.S

Note that I haven't actually tested openlibm in MSVC-compatible mode, so it might be fine; I just wouldn't trust it without testing it first, and I doubt anyone has tried.

-n

--
Nathaniel J. Smith -- http://vorpus.org

From tony at kelman.net Sat Jan 2 20:54:10 2016
From: tony at kelman.net (Tony Kelman)
Date: Sun, 3 Jan 2016 01:54:10 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

Nathaniel Smith <njs at pobox.com> writes:

> "don't use x87" is a great option. It's just that if you want to do that, then someone has to do the work to rewrite the code in openlibm that currently uses x87 :-). E.g. here's openlibm's x86 "sin" implementation, based around the x87 "fsin" instruction: https://github.com/JuliaLang/openlibm/blob/master/i387/s_sin.S

Interesting, but that file isn't getting built in the current makefiles. We're using the C implementation in https://github.com/JuliaLang/openlibm/blob/0036d707347b27c580bf25c28cfc43dcbe412437/src/s_sin.c at the moment. Ref https://github.com/JuliaLang/openlibm/issues/11

> Compare to the x87-free implementation in bionic (using SSE): https://github.com/android/platform_bionic/blob/master/libm/x86/s_sin.S

Benchmarking openlibm vs bionic is certainly worth doing. I'll bug my neighborhood math-functions-implementation-anorak about it some time.

> Note that I haven't actually tested openlibm in MSVC-compatible mode, so it might be fine; I just wouldn't trust it without testing it first, and I doubt anyone has tried.

We should do this. What exactly do you mean here by "MSVC-compatible mode?" Just using mingwpy and linking against a versioned (or universal) crt? If you build with mingwpy you're still using the GNU assembler, aren't you? Translating all of openlibm's assembly into Intel syntax so ml64.exe can understand it has been present but very low on my to-do list.

From njs at pobox.com Sat Jan 2 23:46:21 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 2 Jan 2016 20:46:21 -0800
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release

On Sat, Jan 2, 2016 at 5:54 PM, Tony Kelman wrote:
> Nathaniel Smith <njs at pobox.com> writes:
>
>> "don't use x87" is a great option. It's just that if you want to do that, then someone has to do the work to rewrite the code in openlibm that currently uses x87 :-). E.g. here's openlibm's x86 "sin" implementation, based around the x87 "fsin" instruction: https://github.com/JuliaLang/openlibm/blob/master/i387/s_sin.S
>
> Interesting, but that file isn't getting built in the current makefiles. We're using the C implementation in https://github.com/JuliaLang/openlibm/blob/0036d707347b27c580bf25c28cfc43dcbe412437/src/s_sin.c at the moment. Ref https://github.com/JuliaLang/openlibm/issues/11

Oh interesting!
>> Compare to the x87-free implementation in bionic (using SSE): https://github.com/android/platform_bionic/blob/master/libm/x86/s_sin.S
>
> Benchmarking openlibm vs bionic is certainly worth doing. I'll bug my neighborhood math-functions-implementation-anorak about it some time.
>
>> Note that I haven't actually tested openlibm in MSVC-compatible mode, so it might be fine; I just wouldn't trust it without testing it first, and I doubt anyone has tried.
>
> We should do this. What exactly do you mean here by "MSVC-compatible mode?" Just using mingwpy and linking against a versioned (or universal) crt? If you build with mingwpy you're still using the GNU assembler, aren't you? Translating all of openlibm's assembly into Intel syntax so ml64.exe can understand it has been present but very low on my to-do list.

This is orthogonal to the CRT issue. By default, all mingw-w64-compiled DLLs have some startup code that unconditionally resets the current thread's x87 control word to enable extended precision. Altering global environment state like this is very dubious behavior in a program that uses a mix of mingw-w64-compiled code and non-mingw-w64-compiled code, and can also cause other weird issues. (Example: if you call mingw-w64 compiled code from a thread that was spawned by MSVC-compiled code, then it might give the wrong result, even though the same code works when invoked from the main thread or a thread spawned from mingw-w64 compiled code, because the mingw-w64 thread spawning functions take care to set up 80-bit precision in the x87 control word in the new thread, while the vanilla Windows thread spawning functions don't bother and leave you with the default 64-bit precision. Basically it's a huge headache.)

mingw-w64 does have an upstream-supported way to disable this behavior, to make it act like MSVC: you make sure to link the CRT_fp8.o object (which is included in mingw-w64 toolchains) into your build. Unfortunately I can't give you any more detailed instructions than that, but maybe that's a useful clue to start with :-). (You might also want -mlong-double-64, since mingw-w64's default 80-bit long double is definitely not going to work in this configuration.)
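If anyone wants to check what state a given thread is actually in, the control word is easy to inspect from C. Something like this should do it (a quick sketch, untested, and it uses gcc-style inline asm, so mingw-w64 only):

    /* fpcw.c -- print the x87 precision-control field that the
     * startup code described above fights over.
     * Build: x86_64-w64-mingw32-gcc -o fpcw.exe fpcw.c
     */
    #include <stdio.h>

    int main(void)
    {
        unsigned short cw;
        /* fnstcw stores the x87 control word; bits 8-9 are the
         * precision control field. */
        __asm__ __volatile__ ("fnstcw %0" : "=m" (cw));
        switch ((cw >> 8) & 3) {
        case 0:  puts("24-bit mantissa (single)");                  break;
        case 2:  puts("53-bit mantissa (double, MSVC convention)"); break;
        case 3:  puts("64-bit mantissa (extended, mingw default)"); break;
        default: puts("reserved value");                            break;
        }
        return 0;
    }

If linking in CRT_fp8.o does what it says, the same program built with it should report the 53-bit setting instead of the 64-bit one.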
-n

--
Nathaniel J. Smith -- http://vorpus.org

From matthew.brett at gmail.com Mon Jan 4 09:39:34 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 4 Jan 2016 14:39:34 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
In-Reply-To: References: Message-ID:

On Sun, Jan 3, 2016 at 12:31 AM, Tony Kelman wrote:
> Matthew Brett gmail.com> writes:
>
>> No, you misunderstand me, it's possible to imagine this system built
>> on any easily available Python, I was only saying that it's important
>> that it should be obvious and easy how to build each individual
>> project without having to worry about - for example - the DLL
>> configuration of other projects.
>
> I see, sorry for the misunderstanding. This seems like a problem
> that isn't really feasible to solve in the absolute most general
> case. Maybe the ucrt will help eventually, that was some of its
> design motivation, but it's still far too early to tell.
>
>> We could surely benefit from cross-compilation - is there a good
>> central place to get going with mingw-w64 cross compiling?
>
> Not that I know of, I had to pick it up by trying it out and lots
> of practice. I gave a very short talk at last year's JuliaCon that
> touched on this, but there wasn't much time for details. I've been
> meaning to record a screencast type tutorial that would be targeted
> mainly towards Julia package developers to cover all of the details
> starting from getting and using cross-compilers on a number of common
> Linux distributions. May also work from OSX, but there isn't a great
> standard source of OSX-to-MinGW cross compiler binaries right now.
>
> Debian, Ubuntu, Fedora, OpenSUSE, Arch, and Cygwin all have reasonably
> up-to-date mingw-w64 cross-compiler packages readily available
> in their repositories. If you pick a library that uses autotools,
> it can be as simple as configuring with --host=x86_64-w64-mingw32.
> CMake-built libraries also usually work but require a more verbose
> set of flags. This covers getting you to a DLL, which is the hardest
> part for Julia packages that wrap compiled C/Fortran libraries. I'm
> not very familiar with the additional steps that would be needed to
> build a Python C extension. You'd probably need an easy way of getting
> a Windows copy of libpython and Python.h onto the build machine?
>
> I could work through an example of this, maybe pick a Linux
> distribution to build from and a library to build and we could
> put together a Docker container that would have all the steps to
> produce a Windows DLL.

That docker container would be extremely useful - what can I do to help?

Cheers,

Matthew

From cmkleffner at gmail.com Mon Jan 4 11:03:20 2016
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Mon, 4 Jan 2016 17:03:20 +0100
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
In-Reply-To: References: Message-ID:

Python binary extensions built with mingw-w64 and MSVC-based CPython compatibility in mind should be compiled with a toolchain that has been built with specific configurations and patches.

The most important demands are linking exclusively against the appropriate MS CRT runtime and statically linking the gcc runtime objects. Some other configuration points are handling long double as double (as the MS ABI does not support extended precision), and standard exception and win32 thread handling. Universal runtime support (VS 2015) is planned for the future.

I doubt that these demands can be easily handled with the mingw-w64 cross compilers found in certain Linux distros. Is the idea to build a specialized cross compiler for that purpose?

The mingwpy package together with msys2 can be used instead if autotools have to be used. As mingwpy is not a cross compiler, a windows box or a windows VM is needed. It was also shown that mingwpy works within Linux using Wine to compile python extensions.

carlkl

2016-01-04 15:39 GMT+01:00 Matthew Brett :
> On Sun, Jan 3, 2016 at 12:31 AM, Tony Kelman wrote:
> > Matthew Brett gmail.com> writes:
> >
> >> No, you misunderstand me, it's possible to imagine this system built
> >> on any easily available Python, I was only saying that it's important
> >> that it should be obvious and easy how to build each individual
> >> project without having to worry about - for example - the DLL
> >> configuration of other projects.
> >
> > I see, sorry for the misunderstanding. This seems like a problem
> > that isn't really feasible to solve in the absolute most general
> > case. Maybe the ucrt will help eventually, that was some of its
> > design motivation, but it's still far too early to tell.
> >
> >> We could surely benefit from cross-compilation - is there a good
> >> central place to get going with mingw-w64 cross compiling?
> >
> > Not that I know of, I had to pick it up by trying it out and lots
> > of practice.
> > I gave a very short talk at last year's JuliaCon that
> > touched on this, but there wasn't much time for details. I've been
> > meaning to record a screencast type tutorial that would be targeted
> > mainly towards Julia package developers to cover all of the details
> > starting from getting and using cross-compilers on a number of common
> > Linux distributions. May also work from OSX, but there isn't a great
> > standard source of OSX-to-MinGW cross compiler binaries right now.
> >
> > Debian, Ubuntu, Fedora, OpenSUSE, Arch, and Cygwin all have reasonably
> > up-to-date mingw-w64 cross-compiler packages readily available
> > in their repositories. If you pick a library that uses autotools,
> > it can be as simple as configuring with --host=x86_64-w64-mingw32.
> > CMake-built libraries also usually work but require a more verbose
> > set of flags. This covers getting you to a DLL, which is the hardest
> > part for Julia packages that wrap compiled C/Fortran libraries. I'm
> > not very familiar with the additional steps that would be needed to
> > build a Python C extension. You'd probably need an easy way of getting
> > a Windows copy of libpython and Python.h onto the build machine?
> >
> > I could work through an example of this, maybe pick a Linux
> > distribution to build from and a library to build and we could
> > put together a Docker container that would have all the steps to
> > produce a Windows DLL.
>
> That docker container would be extremely useful - what can I do to help?
>
> Cheers,
>
> Matthew
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com Mon Jan 4 11:07:08 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 4 Jan 2016 16:07:08 +0000
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
In-Reply-To: References: Message-ID:

Hi,

On Mon, Jan 4, 2016 at 4:03 PM, Carl Kleffner wrote:
> Python binary extensions built with mingw-w64 and MSVC-based CPython
> compatibility in mind should be compiled with a toolchain that has been
> built with specific configurations and patches.
>
> The most important demands are linking exclusively against the appropriate
> MS CRT runtime and statically linking the gcc runtime objects. Some other
> configuration points are handling long double as double (as the MS ABI does
> not support extended precision), and standard exception and win32 thread
> handling. Universal runtime support (VS 2015) is planned for the future.
>
> I doubt that these demands can be easily handled with the mingw-w64 cross
> compilers found in certain Linux distros. Is the idea to build a specialized
> cross compiler for that purpose?
>
> The mingwpy package together with msys2 can be used instead if
> autotools have to be used. As mingwpy is not a cross compiler, a windows box
> or a windows VM is needed. It was also shown that mingwpy works within Linux
> using Wine to compile python extensions.

Yes, my idea was that it would be easier for many of us to work on cross-compilers in Linux VMs or docker containers than to work on Windows VMs. We would have to customize the cross compiler. I'm not suggesting this should be the first attack on the problem, only that it could be useful for the medium term.
Cheers, Matthew From tony at kelman.net Mon Jan 4 14:31:43 2016 From: tony at kelman.net (Tony Kelman) Date: Mon, 4 Jan 2016 19:31:43 +0000 (UTC) Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release References: Message-ID: > On Mon, Jan 4, 2016 at 4:03 PM, Carl Kleffner gmail.com> wrote: > > Python binary extensions build with mingw-w64 and MSVC based CPython > > combatibility in mind should be compiled with a toolchain that has to be > > build with specific configurations and patches. > > > > The most important demands are exclusive linking the appropriate MS CRT > > runtime and statically linking the gcc runtime objects. Some other > > configurations are handling long double as double (as MS ABI does not > > support extended precision), standrad exception and win32 thread handling. > > Universal runtime support (VS 2015) is planned for the future. I'd encourage you to place the defaulting to static linking somewhere other than the toolchain build configuration itself if you want your work to be adopted upstream. Even just wrapper scripts might do the trick. Or work on standardizing a location to put the shared runtime libraries. I highly doubt Linux distributions or msys2 are ever going to accept static-by-default mingw-w64 configurations upstream. Throwing C++ exceptions across library boundaries is going to be an issue. > > I doubt that this demands can be easily handled with mingw-w64 cross > > compilers found a certain Linux distros. Is the idea to build a specialized > > cross compiler for that purpose? If needed. Eventually your work would ideally be upstreamed and picked up by the distributions' mingw-w64 packages. In the short term, let's start by educating and familiarizing the Linux-based developer community with how the basics of cross-compiling work, then work on Python-specific issues. This can happen entirely in parallel with what you're doing on the toolchain side. If you'd rather make all of your builds from Windows for ease of testing, go ahead and do so, though I'll say that if you're building GCC itself frequently, you will save *a lot* of time by leveraging cross compilation. The openSUSE build service makes it incredibly easy to build cross-compilers, Windows exe compiler builds (cross-compiling GCC itself with --build=linux, --host=mingw-w64, --target=mingw-w64), and .a and .dll libraries all from a web interface (or a CLI, but you may have to be running opensuse to use it? not positive). It's a bit like github for binaries - I was able to make a custom mingw-w64 build including a patch for a gcc bug via the build service web interface over the weekend. I have a hacky script in the Julia repository that can be used to download these builds including their dependencies on non-RPM-based systems, it would be worth porting that script to Python at some point. > > Using the mingwpy package together with msys2 can be used instead if > > autotools have to be used. As mingwpy is not a cross compiler a windows box > > or a windows VM is needed. This is a common limitation with Python build tools that I think is worth a bit of extra work to avoid inheriting. Matthew Brett gmail.com> writes: > Yes, my idea was, that it would be easier for many of us to work on > cross-compilers in Linux VMs or docker containers than to work on > Windows VMs. We would have to customize the cross compiler. I'm not > suggesting this should be the first attack on the problem, only that > it could be useful for the medium term. 
> That docker container would be extremely useful - what can I do to help?

Name a preferred Linux distribution and a particular library to use as an initial starting example. As one example, I put together a cross compiled build of reference netlib Lapack here https://build.opensuse.org/package/show/windows:mingw:win64/mingw64-lapack that uses Lapack's cmake build system. We could translate that from an opensuse-specific spec file to a more widely usable docker container. If we want to get ambitious, we could do openblas instead, if you have a particular build configuration of openblas (DYNAMIC_ARCH obviously, and maybe including openblas' optimized port of lapack too?) you like.

From cmkleffner at gmail.com Mon Jan 4 15:16:34 2016
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Mon, 4 Jan 2016 21:16:34 +0100
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
In-Reply-To: References: Message-ID:

2016-01-04 20:31 GMT+01:00 Tony Kelman :
> > On Mon, Jan 4, 2016 at 4:03 PM, Carl Kleffner gmail.com> wrote:
> > > Python binary extensions built with mingw-w64 and MSVC-based CPython
> > > compatibility in mind should be compiled with a toolchain that has been
> > > built with specific configurations and patches.
> > >
> > > The most important demands are linking exclusively against the
> > > appropriate MS CRT runtime and statically linking the gcc runtime
> > > objects. Some other configuration points are handling long double as
> > > double (as the MS ABI does not support extended precision), and
> > > standard exception and win32 thread handling.
> > > Universal runtime support (VS 2015) is planned for the future.
>
> I'd encourage you to place the defaulting to static linking somewhere
> other than the toolchain build configuration itself if you want your
> work to be adopted upstream. Even just wrapper scripts might do the
> trick. Or work on standardizing a location to put the shared runtime
> libraries. I highly doubt Linux distributions or msys2 are ever going
> to accept static-by-default mingw-w64 configurations upstream. Throwing
> C++ exceptions across library boundaries is going to be an issue.

I propose different configurations of the toolchains based upon their users: for user programmers, mingwpy wheel packages on PYPI; and for package maintainers, a cross compiler within a Linux docker VM.

mingwpy wheels with a statically configured toolchain give a much better programming experience to python users. Typical use cases are cython or theano based extensions. As a programmer you don't ever expect additional dependency problems for these use cases. And throwing C++ exceptions across library boundaries is not an issue at all for scipy, as there are no library boundaries (this is true for most of the python packages I'm aware of).

If you want python binaries to depend on gcc shared runtime libraries, you get another major problem on CPython for Windows. It is not guaranteed that correct versions of the libraries are found automatically at runtime. And python binaries (pyd's) are spread all around within the site-packages folder and subfolders. There is only one workaround I can think of: explicit preloading of all necessary runtime DLLs into the process space. This could be handled within a dedicated mingw-runtime package. That unfortunately means: an additional windows-only package dependency for all mingw-w64 based python binaries.
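Roughly, the preload step of such a package could look like the following. This is only a sketch: the package name, directory layout and DLL list are hypothetical and depend on the toolchain configuration.

import os
import ctypes

# hypothetical mingw_runtime/__init__.py: pin the gcc runtime DLLs into
# the process before any mingw-w64 built extension module is imported
_dll_dir = os.path.join(os.path.dirname(__file__), "dlls")
for _name in ("libwinpthread-1.dll", "libgcc_s_seh-1.dll", "libstdc++-6.dll"):
    # loading each DLL once means later lookups resolve to these copies
    ctypes.WinDLL(os.path.join(_dll_dir, _name))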
BTW: There is nothing wrong with using an alternative mingw-w64 package with posix threads and gcc shared libraries. wxpython and PyQT are the only two packages that come to mind for this use case. I guess in this case the msys2 mingw-w64 toolchains could do with some specs trickery, i.e. additional specs files that could be activated per command line. (or use a linux cross compiler)

> > I doubt that these demands can be easily handled with the mingw-w64 cross
> > compilers found in certain Linux distros. Is the idea to build a
> > specialized cross compiler for that purpose?
>
> If needed. Eventually your work would ideally be upstreamed and picked
> up by the distributions' mingw-w64 packages. In the short term, let's
> start by educating and familiarizing the Linux-based developer community
> with how the basics of cross-compiling work, then work on Python-specific
> issues. This can happen entirely in parallel with what you're doing on
> the toolchain side. If you'd rather make all of your builds from Windows
> for ease of testing, go ahead and do so, though I'll say that if you're
> building GCC itself frequently, you will save *a lot* of time by
> leveraging cross compilation. The openSUSE build service makes it
> incredibly easy to build cross-compilers, Windows exe compiler builds
> (cross-compiling GCC itself with --build=linux, --host=mingw-w64,
> --target=mingw-w64), and .a and .dll libraries all from a web interface
> (or a CLI, but you may have to be running opensuse to use it? not
> positive). It's a bit like github for binaries - I was able to make a
> custom mingw-w64 build including a patch for a gcc bug via the build
> service web interface over the weekend. I have a hacky script in the
> Julia repository that can be used to download these builds including
> their dependencies on non-RPM-based systems, it would be worth porting
> that script to Python at some point.
>
> > The mingwpy package together with msys2 can be used instead if
> > autotools have to be used. As mingwpy is not a cross compiler a windows box
> > or a windows VM is needed.
>
> This is a common limitation with Python build tools that I think is
> worth a bit of extra work to avoid inheriting.
>
> Matthew Brett gmail.com> writes:
> > Yes, my idea was that it would be easier for many of us to work on
> > cross-compilers in Linux VMs or docker containers than to work on
> > Windows VMs. We would have to customize the cross compiler. I'm not
> > suggesting this should be the first attack on the problem, only that
> > it could be useful for the medium term.
>
> > That docker container would be extremely useful - what can I do to help?
>
> Name a preferred Linux distribution and a particular library to use as
> an initial starting example. As one example, I put together a cross
> compiled build of reference netlib Lapack here
> https://build.opensuse.org/package/show/windows:mingw:win64/mingw64-lapack
> that uses Lapack's cmake build system. We could translate that from an
> opensuse-specific spec file to a more widely usable docker container.
> If we want to get ambitious, we could do openblas instead, if you have
> a particular build configuration of openblas (DYNAMIC_ARCH obviously,
> and maybe including openblas' optimized port of lapack too?) you like.
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tony at kelman.net Mon Jan 4 15:39:31 2016
From: tony at kelman.net (Tony Kelman)
Date: Mon, 4 Jan 2016 20:39:31 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
References: Message-ID:

Carl Kleffner gmail.com> writes:

> I propose different configurations of the toolchains based upon their users:

2 issues here: you're adding more maintenance burden for yourself by having to rebuild your own custom gcc configuration all the time, and mingw-w64 already has a "too many configurations and sources of binaries" problem. Once this all settles down and is finished being upstreamed, it would be best if mingwpy could avoid making that problem worse, is all I'm saying. Wouldn't it be great if mingwpy could just be a wheel repackaging of one of the many existing sources of mingw-w64 binary builds? With some wrappers setting flags for things like msvc crt linkage. Long term target.

> And throwing C++ exceptions across library boundaries is not an issue at all
> for scipy, as there are no library boundaries (this is true for most of the
> python packages I'm aware of).

Even for calling into unmodified libraries via cffi?

> If you want python binaries to depend on gcc shared runtime libraries, you
> get another major problem on CPython for Windows. It is not guaranteed that
> correct versions of the libraries are found automatically at runtime. And
> python binaries (pyd's) are spread all around within the site-packages
> folder and subfolders. There is only one workaround I can think of: explicit
> preloading of all necessary runtime DLLs into the process space. This could
> be handled within a dedicated mingw-runtime package. That unfortunately
> means: an additional windows-only package dependency for all mingw-w64 based
> python binaries.

This seems like the ideal technical solution here. Static linking is good for single file executables, less good for library ecosystems with complicated interdependencies.

From tpudlik at gmail.com Wed Jan 6 21:56:43 2016
From: tpudlik at gmail.com (Ted Pudlik)
Date: Thu, 07 Jan 2016 02:56:43 +0000
Subject: [SciPy-Dev] Reimplementing hypergeometric functions
Message-ID:

Hello,

The current implementation of the confluent hypergeometric function in SciPy has room for improvement---see gh-5349 and the issues referenced there. But as has been discussed before, a recent paper reviews different algorithms for this problem, and its authors even provide implementations (albeit in MATLAB and with an unclear license).

I'm planning to get in touch with the authors and see if they would be willing to provide a copy of their MATLAB code with a BSD license. If so, I'll try to write a Cython translation and attempt to stitch the algorithms together into a general-purpose routine to replace hyp1f1.

Does this sound reasonable? Is anyone else working on reimplementing hyp1f1 at the moment?

Best wishes,
Ted

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Thu Jan 7 13:39:21 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 7 Jan 2016 11:39:21 -0700
Subject: [SciPy-Dev] Numpy 1.10.4 release.
Message-ID:

Hi All,

I'm pleased to announce the Numpy 1.10.4 (bugs stomped) release.
This release was motivated by a reported segfault, but a few additional fixes were made:

* gh-6922 BUG: numpy.recarray.sort segfaults on Windows,
* gh-6937 BUG: busday_offset does the wrong thing with modifiedpreceding roll,
* gh-6949 BUG: Type is lost when slicing a subclass of recarray,

together with one minor enhancement

* gh-6950 BUG: trace is not subclass aware, np.trace(ma) != ma.trace().

The sources and documentation are available on Sourceforge, but no binaries. The usual windows binaries were omitted due to problems with the toolchain. We hope that will be remedied in the future. Mac binaries can be installed from pypi.

"Where is numpy 1.10.3?", you may ask. There were glitches with the uploads to pypi for that version that necessitated a version upgrade. A release manager's life is not a happy one.

Many thanks to Marten van Kerkwijk, Sebastian Seberg, and Mark Wiebe for the quick bug fixes.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josh.craig.wilson at gmail.com Thu Jan 7 15:15:56 2016
From: josh.craig.wilson at gmail.com (Joshua Wilson)
Date: Thu, 7 Jan 2016 14:15:56 -0600
Subject: [SciPy-Dev] Reimplementing hypergeometric functions
In-Reply-To: References: Message-ID:

I was working on reimplementing hyp1f1, but then my efforts stalled. I ran into three primary issues:

(1) Really small b values. (b < 1e-16, IIRC.) For moderately sized a the terms of the Taylor series get very large before the factorial finally kills them, so you get a lot of cancellation. Still, the loss in accuracy was only a couple of digits beyond what was required for passing the current tests, so perhaps this is not worth worrying about. The ideas from the paper for small b (single fraction) did not seem to improve the situation when b was this small.

(2) Arguments close to the imaginary axis. For moderate a the Taylor series has a lot of cancellation again. The current implementation tries to fix this issue by using a recurrence relation to decrease a, but that recurrence relation becomes unstable for numbers with nonpositive real part. In gh-5349 Miller's algorithm was suggested as a possible fix, but in this scenario there is no minimal solution (see the book "Numerical Methods for Special Functions" by Gil, Segura, and Temme), so Miller's algorithm is a no-go. I was planning on using a double recursion to try and fix this, but I never got around to implementing it...

(3) Stitching it all together. Trying to get something that worked for all of the regimes and was still readable/maintainable was tricky.

But I didn't contact the authors! I was just working directly from the paper; maybe they've already solved all of these issues in their implementation!

I would like to see this project completed, so I would be glad to help in whatever way I can. FWIW I implemented a lot of the methods in the paper (in pure Python); let me know if you want any of that code.
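For concreteness, the plain Taylor series that all of the above is measured against is essentially this (a sketch of the naive approach only, not one of the paper's algorithms; tol and maxterms are arbitrary choices here):

def hyp1f1_taylor(a, b, x, tol=1e-16, maxterms=500):
    # naive series: sum over k of (a)_k / (b)_k * x**k / k!
    term = 1.0
    total = 1.0
    for k in range(maxterms):
        # ratio of consecutive terms: (a + k) * x / ((b + k) * (k + 1))
        term *= (a + k) * x / ((b + k) * (k + 1.0))
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

# fine for moderate arguments; for b ~ 1e-16 or x near the imaginary
# axis the partial sums grow before they cancel, and digits are lost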
- Josh

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evgeny.burovskiy at gmail.com Thu Jan 7 16:45:03 2016
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Thu, 7 Jan 2016 21:45:03 +0000
Subject: [SciPy-Dev] Reimplementing hypergeometric functions
In-Reply-To: References: Message-ID:

Hi Ted, Hi Joshua, Hi Nikolay,

First things first: Great to see you guys working on this!

I don't have technical insights to offer; I'd only say that from ten thousand feet it looks like you guys could consider setting up a shared repo for your collaborative efforts. Either one of your scipy forks, or a separate repo, whichever works for you guys. That would reduce the chances of ending up with several more unfinished masterpieces :-).

Cheers,

Evgeni

On Thu, Jan 7, 2016 at 8:15 PM, Joshua Wilson wrote:
> I was working on reimplementing hyp1f1, but then my efforts stalled. I ran
> into three primary issues:
>
> (1) Really small b values. (b < 1e-16, IIRC.) For moderately sized a the
> terms of the Taylor series get very large before the factorial finally kills
> them, so you get a lot of cancellation. Still, the loss in accuracy was only
> a couple of digits beyond what was required for passing the current tests,
> so perhaps this is not worth worrying about. The ideas from the paper for
> small b (single fraction) did not seem to improve the situation when b was
> this small.
>
> (2) Arguments close to the imaginary axis. For moderate a the Taylor series
> has a lot of cancellation again. The current implementation tries to fix
> this issue by using a recurrence relation to decrease a, but that recurrence
> relation becomes unstable for numbers with nonpositive real part. In gh-5349
> Miller's algorithm was suggested as a possible fix, but in this scenario
> there is no minimal solution (see the book "Numerical Methods for Special
> Functions" by Gil, Segura, and Temme), so Miller's algorithm is a no-go. I
> was planning on using a double recursion to try and fix this, but I never
> got around to implementing it...
>
> (3) Stitching it all together. Trying to get something that worked for all
> of the regimes and was still readable/maintainable was tricky.
>
> But I didn't contact the authors! I was just working directly from the
> paper; maybe they've already solved all of these issues in their
> implementation!
>
> I would like to see this project completed, so I would be glad to help in
> whatever way I can. FWIW I implemented a lot of the methods in the paper (in
> pure Python); let me know if you want any of that code.
>
> - Josh
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

From evgeny.burovskiy at gmail.com Thu Jan 7 18:24:48 2016
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Thu, 7 Jan 2016 23:24:48 +0000
Subject: [SciPy-Dev] ANN: second release candidate for scipy 0.17.0
Message-ID:

Hi,

I'm pleased to announce the availability of the second release candidate for Scipy 0.17.0. It's two days ahead of the original schedule: based on typical development patterns, I'd like to have two weekends and a full working week before January 17th, when this rc2 is supposed to become the final release.

Please try this rc and report any issues on the Github tracker or the scipy-dev mailing list. Source tarballs and full release notes are available from Github Releases: https://github.com/scipy/scipy/releases/tag/v0.17.0rc2

Compared to rc1, the following PRs were merged:

- - `#5624 <https://github.com/scipy/scipy/pull/5624>`__: FIX: Fix interpolate
- - `#5625 <https://github.com/scipy/scipy/pull/5625>`__: BUG: msvc9 binaries crash when indexing std::vector of size 0
- - `#5635 <https://github.com/scipy/scipy/pull/5635>`__: BUG: misspelled __dealloc__ in cKDTree.
- - `#5642 <https://github.com/scipy/scipy/pull/5642>`__: STY: minor fixup of formatting of 0.17.0 release notes.
- - `#5643 <https://github.com/scipy/scipy/pull/5643>`__: BLD: fix a build issue in special/Faddeeva.cc with isnan.
- - `#5661 <https://github.com/scipy/scipy/pull/5661>`__: TST: linalg tests used stdlib random instead of numpy.random.

There is a bit of an unusual change in this rc.
In short, in testing rc1 we found that 1) interpolate.interp1d had an undocumented feature of allowing an array-valued fill_value to be broadcast against the data being interpolated, and 2) we broke it in 0.17.0rc1. (See PR 5624 for details.) We now *think* we fixed it so that no existing code is broken. Nevertheless, I would like to encourage everyone to test their code against this rc2, especially all code which uses interp1d. Cheers, Evgeni From nikolay.mayorov at zoho.com Thu Jan 7 18:53:48 2016 From: nikolay.mayorov at zoho.com (Nikolay Mayorov) Date: Thu, 07 Jan 2016 15:53:48 -0800 Subject: [SciPy-Dev] New class for cubic spline interpolation Message-ID: <1521e82b404.b0c5c55b7437.2847097286360496073@zoho.com> Hi, I came up with a PR implementing cubic spline interpolation in terms of polynomial coefficients on each segment https://github.com/scipy/scipy/pull/5653 The idea was to provide a standalone class, which is easier to understand and use than B-splines. Conceptually it fits in the same category as PChipInterpolator and Akima1DInterpolator. If you are interested, please share your feedback here or on github. Nikolay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpudlik at gmail.com Thu Jan 7 21:00:07 2016 From: tpudlik at gmail.com (Ted Pudlik) Date: Fri, 08 Jan 2016 02:00:07 +0000 Subject: [SciPy-Dev] Reimplementing hypergeometric functions In-Reply-To: References: Message-ID: Hi, Thank you all for your responses! Josh, Nikolay, it would be great if you could share what code you already have. Following Evgeni's recommendation, I'll add you as collaborators to my scipy fork and we can take it from there. (Please let me know if another arrangement would work better!) I sent an email to John Pearson, the first author of the aforementioned arXiv 1407.7786, asking for a license-compatible copy of their code and any advice on stitching the algorithms together. I will let you know if I hear back from him. Best wishes, Ted On Thu, Jan 7, 2016 at 4:45 PM Evgeni Burovski wrote: > Hi Ted, Hi Joshua, Hi Nikolay, > > First things first: Great to see you guys working on this! > > I don't have technical insights to offer; I'd only say that from ten > thousand feet it looks like you guys could consider setting up a > shared repo for your collaborative efforts. Either one your scipy > forks, or a separate repo, whichever works for you guys. > That would reduce the chances of ending up with several more > unfinished masterpieces :-). > > Cheers, > > Evgeni > > > > > On Thu, Jan 7, 2016 at 8:15 PM, Joshua Wilson > wrote: > > I was working on reimplementing hyp1f1, but then my efforts stalled. I > ran > > into three primary issues: > > > > (1) Really small b values. (b < 1e-16, IIRC.) For moderately sized a the > > terms of the Taylor series get very large before the factorial finally > kills > > them, so you get a lot of cancellation. Still, the loss in accuracy was > only > > a couple of digits beyond what was required for passing the current > tests, > > so perhaps this is not worth worrying about. The ideas from the paper for > > small b (single fraction) did not seem to improve the situation when b > was > > this small. > > > > (2) Arguments close to the imaginary axis. For moderate a the Taylor > series > > has a lot of cancellation again. The current implementation tries to fix > > this issue by using a recurrence relation to decrease a, but that > recurrence > > relation becomes unstable for numbers with nonpositive real part. 
> > In gh-5349 Miller's algorithm was suggested as a possible fix, but in this
> > scenario there is no minimal solution (see the book "Numerical Methods for
> > Special Functions" by Gil, Segura, and Temme), so Miller's algorithm is a
> > no-go. I was planning on using a double recursion to try and fix this, but
> > I never got around to implementing it...
> >
> > (3) Stitching it all together. Trying to get something that worked for all
> > of the regimes and was still readable/maintainable was tricky.
> >
> > But I didn't contact the authors! I was just working directly from the
> > paper; maybe they've already solved all of these issues in their
> > implementation!
> >
> > I would like to see this project completed, so I would be glad to help in
> > whatever way I can. FWIW I implemented a lot of the methods in the paper
> > (in pure Python); let me know if you want any of that code.
> >
> > - Josh
> >
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at scipy.org
> > https://mail.scipy.org/mailman/listinfo/scipy-dev
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthew.brett at gmail.com Thu Jan 7 22:10:04 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 8 Jan 2016 03:10:04 +0000
Subject: [SciPy-Dev] [Numpy-discussion] ANN: second release candidate for scipy 0.17.0
In-Reply-To: References: Message-ID:

Hi,

On Thu, Jan 7, 2016 at 11:24 PM, Evgeni Burovski wrote:
> Hi,
>
> I'm pleased to announce the availability of the second release
> candidate for Scipy 0.17.0. It's two days ahead of the original
> schedule: based on typical development patterns, I'd like to have two
> weekends and a full working week before January 17th, when this rc2 is
> supposed to become the final release.
>
> Please try this rc and report any issues on the Github tracker or
> the scipy-dev mailing list.
> Source tarballs and full release notes are available from Github
> Releases: https://github.com/scipy/scipy/releases/tag/v0.17.0rc2
>
> Compared to rc1, the following PRs were merged:
>
> - - `#5624 <https://github.com/scipy/scipy/pull/5624>`__: FIX: Fix interpolate
> - - `#5625 <https://github.com/scipy/scipy/pull/5625>`__: BUG: msvc9
> binaries crash when indexing std::vector of size 0
> - - `#5635 <https://github.com/scipy/scipy/pull/5635>`__: BUG:
> misspelled __dealloc__ in cKDTree.
> - - `#5642 <https://github.com/scipy/scipy/pull/5642>`__: STY: minor
> fixup of formatting of 0.17.0 release notes.
> - - `#5643 <https://github.com/scipy/scipy/pull/5643>`__: BLD: fix a
> build issue in special/Faddeeva.cc with isnan.
> - - `#5661 <https://github.com/scipy/scipy/pull/5661>`__: TST: linalg
> tests used stdlib random instead of numpy.random.
>
> There is a bit of an unusual change in this rc. In short, in testing
> rc1 we found that 1) interpolate.interp1d had an undocumented feature
> of allowing an array-valued fill_value to be broadcast against the
> data being interpolated, and 2) we broke it in 0.17.0rc1. (See PR
> 5624 for details.)
> We now *think* we fixed it so that no existing code is broken.
> Nevertheless, I would like to encourage everyone to test their code
> against this rc2, especially all code which uses interp1d.

Thanks for doing this.

Testing builds of OSX wheels via https://github.com/MacPython/scipy-wheels, I found this problem: https://github.com/scipy/scipy/issues/5689

Cheers,

Matthew

From karcher.niklas at gmail.com Fri Jan 8 13:24:19 2016
From: karcher.niklas at gmail.com (Niklas K.)
Date: Fri, 08 Jan 2016 18:24:19 +0000
Subject: [SciPy-Dev] differential_evolution enhancement
Message-ID:

Hi,

I would like to use the differential_evolution algorithm but extend it with a few functionalities:

1. I would like to use the args argument also in the callback method, so that additional arguments can be passed to that method as well as to the objective itself.

2. I would like to use a convex hull as an alternative to the bounds parameter. The type could be checked and the code could behave accordingly. Here the change is a little more difficult, since the methods init_population_lhs and init_population_random need to be adapted to the appropriate bounds type. A first approach could be to keep sampling in the designated way (LHS or random) until enough samples for the population lie in the space defined by the convex hull.

I think that both alterations keep the code backwards compatible and can be done with minimal changes. I hope a lead developer could give some feedback on my ideas.

Best Regards

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From derek at astro.physik.uni-goettingen.de Fri Jan 8 16:15:04 2016
From: derek at astro.physik.uni-goettingen.de (Derek Homeier)
Date: Fri, 8 Jan 2016 22:15:04 +0100
Subject: [SciPy-Dev] ANN: second release candidate for scipy 0.17.0
In-Reply-To: References: Message-ID: <0961E91B-EA55-489F-A7B8-8AAFA25BCBA2@astro.physik.uni-goettingen.de>

Hi,

On 8 Jan 2016, at 12:24 am, Evgeni Burovski wrote:
>
> I'm pleased to announce the availability of the second release
> candidate for Scipy 0.17.0. It's two days ahead of the original
> schedule: based on typical development patterns, I'd like to have two
> weekends and a full working week before January 17th, when this rc2 is
> supposed to become the final release.

I've built it under fink on MacOS X 10.10 and 10.11; all tests are passing now with Python versions 2.7.11, 3.4.4 and 3.5.1, good work!

I found a single error trying to decode as ascii instead of unicode on running the tests for the Python3 versions within the package build system, which occurs in general when LANG is not set, or set to "C" in the environment:

Running unit tests for scipy
NumPy version 1.10.4
NumPy relaxed strides checking option: False
NumPy is installed in /sw/lib/python3.5/site-packages/numpy
SciPy version 0.17.0rc2
SciPy is installed in /scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy
Python version 3.5.1 (default, Jan 8 2016, 14:14:02) [GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)]
nose version 1.3.7
...
......................................................
======================================================================
ERROR: test_fftpack_import (test_import.TestFFTPackImport)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy/fftpack/tests/test_import.py", line 29, in test_fftpack_import
    for line in file),
  File "/scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy/fftpack/tests/test_import.py", line 28, in <genexpr>
    assert_(all(not re.fullmatch(regexp, line)
  File "/sw/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 351: ordinal not in range(128)

----------------------------------------------------------------------
Ran 20182 tests in 279.411s

FAILED (KNOWNFAIL=97, SKIP=1647, errors=1)

Cheers,
Derek

From andyfaff at gmail.com Sun Jan 10 18:29:06 2016
From: andyfaff at gmail.com (Andrew Nelson)
Date: Mon, 11 Jan 2016 10:29:06 +1100
Subject: [SciPy-Dev] differential_evolution enhancement
In-Reply-To: References: Message-ID:

On 9 January 2016 at 05:24, Niklas K. wrote:
> I would like to use the differential_evolution algorithm but extend it
> with a few functionalities:
>
> 1. I would like to use the args argument also in the callback method,
> so that additional arguments can be passed to that method as well as to the
> objective itself.

One of the design goals was to keep the callback function signature the same as the other minimizers in the scipy.optimize stable. What's the rationale for wanting to be able to do this? You should be able to do something like this already, but in your own code. For example you could store the args in a class attribute, the class can be callable:

class Mycallback(object):
    def __init__(self, args):
        self.args = args

    def __call__(self, xk, convergence=None):
        # implement your callback here. You have access to self.args.
        # (differential_evolution invokes this as callback(xk, convergence=val),
        # hence the extra keyword argument.)
        pass

> 2. I would like to use a convex hull as an alternative to the bounds
> parameter. The type could be checked and the code could behave accordingly.
> Here the change is a little more difficult, since the methods
> init_population_lhs and init_population_random need to be adapted to the
> appropriate bounds type. A first approach could be to keep sampling in the
> designated way (LHS or random) until enough samples for the population
> lie in the space defined by the convex hull.

As I understand it a convex hull is the smallest volume that encapsulates a set of points. It's not clear to me how a convex hull is applied to the optimization process here. At the moment the bounds for any given parameter are fixed during the evolution. Does the use of a convex hull mean that the bounds for a given parameter can change depending on the value of another parameter?

> I think that both alterations keep the code backwards compatible and can
> be done with minimal changes. I hope a lead developer could give some
> feedback on my ideas.

Not a lead developer, but an interested party. These suggestions are best dealt with in separate PR's.
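A usage sketch, with a made-up objective (the numbers are arbitrary; the convergence keyword is why __call__ above accepts it):

from scipy.optimize import differential_evolution

def objective(x, scale):
    return scale * (x[0] ** 2 + x[1] ** 2)

cb = Mycallback(args=(2.0,))
result = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)],
                                args=(2.0,), callback=cb)
print(result.x, result.fun)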
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evgeny.burovskiy at gmail.com Tue Jan 12 01:32:05 2016
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Tue, 12 Jan 2016 06:32:05 +0000
Subject: [SciPy-Dev] ANN: second release candidate for scipy 0.17.0
In-Reply-To: <0961E91B-EA55-489F-A7B8-8AAFA25BCBA2@astro.physik.uni-goettingen.de>
References: <0961E91B-EA55-489F-A7B8-8AAFA25BCBA2@astro.physik.uni-goettingen.de>
Message-ID:

Thanks for testing it Derek!

On Fri, Jan 8, 2016 at 9:15 PM, Derek Homeier wrote:
> Hi,
>
> On 8 Jan 2016, at 12:24 am, Evgeni Burovski wrote:
>>
>> I'm pleased to announce the availability of the second release
>> candidate for Scipy 0.17.0. It's two days ahead of the original
>> schedule: based on typical development patterns, I'd like to have two
>> weekends and a full working week before January 17th, when this rc2 is
>> supposed to become the final release.
>
> I've built it under fink on MacOS X 10.10 and 10.11; all tests are passing now with Python versions
> 2.7.11, 3.4.4 and 3.5.1, good work!
>
> I found a single error trying to decode as ascii instead of unicode on running the tests for the
> Python3 versions within the package build system, which occurs in general when LANG is not set,
> or set to "C" in the environment:
>
> Running unit tests for scipy
> NumPy version 1.10.4
> NumPy relaxed strides checking option: False
> NumPy is installed in /sw/lib/python3.5/site-packages/numpy
> SciPy version 0.17.0rc2
> SciPy is installed in /scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy
> Python version 3.5.1 (default, Jan 8 2016, 14:14:02) [GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)]
> nose version 1.3.7
> ...
> ......................................................
> ======================================================================
> ERROR: test_fftpack_import (test_import.TestFFTPackImport)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy/fftpack/tests/test_import.py", line 29, in test_fftpack_import
>     for line in file),
>   File "/scratch.noindex/fink.build/root-scipy-py35-0.17.0rc2-1/sw/lib/python3.5/site-packages/scipy/fftpack/tests/test_import.py", line 28, in <genexpr>
>     assert_(all(not re.fullmatch(regexp, line)
>   File "/sw/lib/python3.5/encodings/ascii.py", line 26, in decode
>     return codecs.ascii_decode(input, self.errors)[0]
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 351: ordinal not in range(128)
>
> ----------------------------------------------------------------------
> Ran 20182 tests in 279.411s
>
> FAILED (KNOWNFAIL=97, SKIP=1647, errors=1)

From a cursory glance the issue seems benign. It seems that the test needs fixing. I opened https://github.com/scipy/scipy/issues/5694 to keep track of it.
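Something along these lines is probably all the test needs (an untested sketch; the pattern and path below are placeholders standing in for whatever the test actually uses):

import re

regexp = r"\s*from\s+numpy\s+import\s+\*"  # placeholder pattern
fname = "scipy/fftpack/basic.py"           # placeholder path

# read the file as utf-8 explicitly instead of relying on the
# locale's preferred encoding (which is ascii under LANG=C)
with open(fname, encoding="utf-8") as file:
    assert all(not re.fullmatch(regexp, line) for line in file)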
Cheers,

Evgeni

From irvin.probst at ensta-bretagne.fr Tue Jan 19 10:49:02 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Tue, 19 Jan 2016 16:49:02 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
Message-ID: <569E5AEE.20002@ensta-bretagne.fr>

Hi,
maybe my question has been asked a thousand times but why are the callable's parameters in ode and odeint reversed?

From scipy.integrate.ode:
http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode
f : callable f(t, y, *f_args)

From scipy.integrate.odeint:
http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html
func : callable(y, t0, ...)

Admittedly one will usually choose ode or odeint depending on what has to be done, but from an educational point of view this is really annoying. Say you want to show how to use ode or odeint to simulate something? You have to define the same dynamics twice:

def f_odeint(y, t):
    return y[1], -np.sin(y[0])

def f_ode(t, y):
    return [[y[1], -np.sin(y[0])]]

Then come the usual questions:
- why do I have to reverse y and t? => Err... you see... because...
- why can't I return a tuple in f_ode as in f_odeint? => see ticket #1187

So I know that reversing the callable's parameters will break half the code using ode or odeint in the world and this is out of the question, but couldn't it be possible to make it a bit clearer somewhere in the doc that the parameters are indeed reversed? Or maybe am I missing some obvious explanation?

Regards.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jjstickel at gmail.com Tue Jan 19 11:14:14 2016
From: jjstickel at gmail.com (Jonathan Stickel)
Date: Tue, 19 Jan 2016 09:14:14 -0700
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <569E5AEE.20002@ensta-bretagne.fr>
References: <569E5AEE.20002@ensta-bretagne.fr>
Message-ID: <569E60D6.5060908@gmail.com>

I also find this inconsistency to be annoying. Probably the result of two different contributors that were unaware of each other. One workaround is to do something like the following:

def dfdt(t, f, *args):  # signature for ode: f(t, y, ...)
    # your actual derivative computation goes here, e.g. the pendulum above
    return [f[1], -np.sin(f[0])]

def dfdt_odeint(f, t, *args):  # swap t and f for odeint
    return dfdt(t, f, *args)

so at least you only need to define your ode function just once. As shown, it doesn't resolve your tuple issue, but perhaps you could do that too (I tend to use lists/arrays rather than tuples when possible).

Perhaps a new keyword flag could be added to odeint to implement function calls the same as ode. This would avoid breaking backward compatibility.

Regards,
Jonathan

On 1/19/16 08:49 , Irvin Probst wrote:
> Hi,
> maybe my question has been asked a thousand times but why are the
> callable's parameters in ode and odeint reversed?
>
> From scipy.integrate.ode:
> http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode
> f : callable f(t, y, *f_args)
>
> From scipy.integrate.odeint:
> http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html
> func : callable(y, t0, ...)
>
> Admittedly one will usually choose ode or odeint depending on what has
> to be done, but from an educational point of view this is really annoying.
> Say you want to show how to use ode or odeint to simulate something?
> You have to define the same dynamics twice:
>
> def f_odeint(y, t):
>     return y[1], -np.sin(y[0])
>
> def f_ode(t, y):
>     return [[y[1], -np.sin(y[0])]]
>
> Then come the usual questions:
> - why do I have to reverse y and t? => Err... you see... because...
> - why can't I return a tuple in f_ode as in f_odeint?
> => see ticket #1187
>
> So I know that reversing the callable's parameters will break half the
> code using ode or odeint in the world and this is out of the question, but
> couldn't it be possible to make it a bit clearer somewhere in the doc
> that the parameters are indeed reversed? Or maybe am I missing some
> obvious explanation?
>
> Regards.
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

From irvin.probst at ensta-bretagne.fr Tue Jan 19 11:28:04 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Tue, 19 Jan 2016 17:28:04 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <569E60D6.5060908@gmail.com>
References: <569E5AEE.20002@ensta-bretagne.fr> <569E60D6.5060908@gmail.com>
Message-ID: <569E6414.3050600@ensta-bretagne.fr>

On 19/01/2016 17:14, Jonathan Stickel wrote:
> I also find this inconsistency to be annoying. Probably the result of
> two different contributors that were unaware of each other.

That's my guess too.

> One workaround is to do something like the following:
>
> def dfdt(t, f, *args):  # signature for ode
>     # your actual derivative computation goes here
>     return [f[1], -np.sin(f[0])]
>
> def dfdt_odeint(f, t, *args):  # swap t and f for odeint
>     return dfdt(t, f, *args)

This is indeed a more elegant solution, especially when you have some complex dynamics to integrate, but still it feels odd when you are learning scientific computing imho... The average student will see this as black magic and I'm afraid it won't help them to feel confident in what they are doing; on the other hand this is my problem and not scipy's...

> Perhaps a new keyword flag could be added to odeint to implement
> function calls the same as ode. This would avoid breaking backward
> compatibility.

Are there any statistics/survey on which one between ode and odeint is the most used? If one has to be modified it would be better to modify the least used one. Or is there some standard policy to slowly deprecate a function prototype? This would take longer but at least the fix would be much cleaner.

Regards.

From benny.malengier at gmail.com Tue Jan 19 12:38:27 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Tue, 19 Jan 2016 18:38:27 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: References: Message-ID:

Why would you ever want to use one or the other? LSODA (odeint) is fixed coefficient BDF, and VODE (ode) variable coefficient BDF, so VODE should improve on LSODA as soon as there are sharp and frequent time variations. If there are none, VODE should be as good as LSODA. So, VODE is supposed to be better at handling a wider array of use cases.

So you should just always use VODE (or, even better, one of the modern bindings to cvode), and not select at all (unless you have a really huge system where fixed coefficient BDF is better, but then you should not be using python, and cvode can run in parallel on an HPC, has Krylov methods, and will outperform it.)
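(For what it's worth, with scipy's ode class the choice between the two is literally one string, so comparing them on a given problem costs nothing. An untested sketch, with a made-up right-hand side:)

from scipy.integrate import ode

def rhs(t, y):  # note the (t, y) argument order here
    return [y[1], -y[0]]

solver = ode(rhs)
solver.set_integrator('vode', method='bdf')  # variable-coefficient BDF
# solver.set_integrator('lsoda')             # the same family odeint wraps
solver.set_initial_value([1.0, 0.0], 0.0)
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + 0.1)
print(solver.t, solver.y)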
Benny

2016-01-19 16:49 GMT+01:00 Irvin Probst :
> Hi,
> maybe my question has been asked a thousand times but why are the
> callable's parameters in ode and odeint reversed?
>
> From scipy.integrate.ode:
> http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode
> f : callable f(t, y, *f_args)
>
> From scipy.integrate.odeint:
> http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html
> func : callable(y, t0, ...)
>
> Admittedly one will usually choose ode or odeint depending on what has to
> be done, but from an educational point of view this is really annoying.
> Say you want to show how to use ode or odeint to simulate something? You
> have to define the same dynamics twice:
>
> def f_odeint(y, t):
>     return y[1], -np.sin(y[0])
>
> def f_ode(t, y):
>     return [[y[1], -np.sin(y[0])]]
>
> Then come the usual questions:
> - why do I have to reverse y and t? => Err... you see... because...
> - why can't I return a tuple in f_ode as in f_odeint? => see ticket #1187
>
> So I know that reversing the callable's parameters will break half the
> code using ode or odeint in the world and this is out of the question, but
> couldn't it be possible to make it a bit clearer somewhere in the doc
> that the parameters are indeed reversed? Or maybe am I missing some
> obvious explanation?
>
> Regards.
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev

From irvin.probst at ensta-bretagne.fr Tue Jan 19 13:13:23 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Tue, 19 Jan 2016 19:13:23 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: References: <569E5AEE.20002@ensta-bretagne.fr>
Message-ID:

On Tue, 19 Jan 2016 18:38:27 +0100, Benny Malengier wrote:
> Why would you ever want to use one or the other?

Hi,

Say for example I wish to write code to illustrate when/why ode is better than odeint: I have to use both of them.
Say I want to show how Euler/RK2/Whatever behave compared to ode/odeint: I have to use both of them.
Say I'm a student in scientific computing 101: I'll think that LSODA is a brand of junk drink, I'll maybe never encounter a dynamics where ode and odeint yield results different enough to be noticeable, and I'll use one or the other.

Anyway, what I wish to say is that your arguments are perfectly sound from a mathematics point of view, but IMHO having two different APIs for the callables might sometimes be a problem from a CS point of view.

Or else let's say that odeint should never be used and deprecate it.

Regards.

From benny.malengier at gmail.com Tue Jan 19 14:37:35 2016
From: benny.malengier at gmail.com (Benny Malengier)
So, I suppose for backward compatibitility, nobody ever changed the order to the expected and common t,y . Or nobody serious is using that code ;-) LSODE solver is still popular, I wonder is this is not just because people don't change things that work. In my own tests with the implicit versions (LSODI), DDASPK (like vode) and modern IDA seriously outperform LSODI. There are however many use cases, so probably for some it would be better ... Travis is still around, but that code is from 2001, he will not remember I'm afraid ... Benny 2016-01-19 19:13 GMT+01:00 Irvin Probst : > On Tue, 19 Jan 2016 18:38:27 +0100, Benny Malengier wrote: > >> Why would you ever want to use one or the other? >> > > Hi > > Say for exemple I wish to write code to illustrate when/why ode is better > than odeint, I have to use both of them. > Say I want to show how Euler/RK2/Whatever behave compared to ode/odeint I > have to use both of them. > Say I'm a student in scientific computing 101 and I'll think that LSODA is > a brand of junk drink and I'll maybe never encounter a dynamics where ode > and odeint yield a result different enough to be noticeable and I'll use > one or another. > > > Anyway, what I wish to say is that your arguments are perfectly sound from > a mathematics point of view, but IMHO having two different API for the > callables might sometimes be a problem from a CS point of view. > > Or else let's say that odeint should never be used and deprecate it. > > > Regards. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.oliver at jacobs-university.de Wed Jan 20 05:28:59 2016 From: m.oliver at jacobs-university.de (Marcel Oliver) Date: Wed, 20 Jan 2016 11:28:59 +0100 Subject: [SciPy-Dev] ode and odeint callable parameters In-Reply-To: <569E5AEE.20002@ensta-bretagne.fr> References: <569E5AEE.20002@ensta-bretagne.fr> Message-ID: <22175.24939.214139.719216@xps13.localdomain> Irvin Probst writes: > Hi, maybe my question has been asked a thousand times but why are > the callable's parameters in ode and odeint reversed ? I would like to extend the question: why are there two different interfaces with different call patterns at all? I understand the answer will be "for historical reasons", but both interfaces are annoying enough that it may make sense to think about a uniform ODE interface and keep the current interfaces only as wrappers for legacy code. A few niggles worth fixing up from my point of view: * Allow arbitrary array-valued vector fields. (I remember there was a thread about existing code to do this on this list; it would be very useful to get this functionality into official scipy.) * integrate.ode seems to have the more "pythonic" interface and a larger choice of integrator. However, most of the integrators are not re-entrant per warning in the documentation, so the object oriented call signature is actually sending a false message. In addition, in many (if not most) applications, I need the solution at an array of times, and often need to post-process the entire time series as an array. Thus, I find the call signature of integrate.ode extremely annoying and avoid it in favor of odeint (when the latter works) even though integrate.ode is the more powerful package. 
* Finally, it would make sense to fix up as many of the integrators as
  possible (or deprecate them if they don't have a clear advantage in
  certain circumstances) to make them re-entrant; that is useful
  independent of the question of the best interface.

Best regards,
Marcel

From irvin.probst at ensta-bretagne.fr  Wed Jan 20 05:39:31 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Wed, 20 Jan 2016 11:39:31 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
Message-ID: <569F63E3.5060504@ensta-bretagne.fr>

On 19/01/2016 20:37, Benny Malengier wrote:
> To reply to your original question: all the original Fortran from the
> 80s uses t,y, so there everything is as you would expect.
>
> Travis himself added the y,t in odeint in the original creation of
> integrate, it seems:
[...]

Thanks Benny for this very thorough answer, I have to admit I was too
lazy to dig into 15 years of commits.

From benny.malengier at gmail.com  Wed Jan 20 06:21:08 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Wed, 20 Jan 2016 12:21:08 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22175.24939.214139.719216@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
Message-ID: 

2016-01-20 11:28 GMT+01:00 Marcel Oliver :

> I would like to extend the question: why are there two different
> interfaces with different call patterns at all?
[...]
> * Finally, it would make sense to fix up as many of the integrators as
>   possible (or deprecate them if they don't have a clear advantage in
>   certain circumstances) to make them re-entrant; that is useful
>   independent of the question of the best interface.

You should consider using the odes scikit for stiff problems, which
interfaces the modern cvode implementation and is reentrant; see this
example (Google Chrome needed for the LaTeX):
https://github.com/bmcage/odes/blob/master/docs/ipython/Simple%20Oscillator.ipynb

The cvode solver, and the sundials solvers in general, compare to what
is present in Mathematica (a derivative of sundials, according to rumor)
and Matlab (same principle as VODE) for stiff problems.

There have never been many maintainers for the ode functionality.
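For concreteness, the basic interface for the pendulum example from
earlier in this thread looks roughly like this (a minimal sketch
following the linked notebook; option details may differ between odes
versions):

  import numpy as np
  from scikits.odes import ode

  def rhs(t, y, ydot):
      # CVODE-style right-hand side: the result is written in place.
      ydot[0] = y[1]
      ydot[1] = -np.sin(y[0])

  solver = ode('cvode', rhs)
  # Solution values at the requested output times.
  result = solver.solve(np.linspace(0.0, 10.0, 101), [np.pi / 2, 0.0])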
I looked at what would be needed to add odes into scipy, but the recent
work on ode in scipy is not trivial to move (like the complex_ode
method). The work is too large for me to consider, and the API of ode is
too different from ode in the odes scikit. So you would end up with
odeint (for those wanting that interface with LSODA), ode (for those
wanting a VODE/ZVODE/DOPRI/RK method ... approach with the added
features), and odes (for those wanting the modern features in sundials
(roots, sensitivity, Krylov)). That seems worse. In the end I have a
feeling most advanced users with stiff problems just use one of the
python interfaces to sundials, be it via octave, assimulo, or the odes
scikit.

Benny

From irvin.probst at ensta-bretagne.fr  Wed Jan 20 06:26:41 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Wed, 20 Jan 2016 12:26:41 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22175.24939.214139.719216@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
Message-ID: <569F6EF1.2050001@ensta-bretagne.fr>

On 20/01/2016 11:28, Marcel Oliver wrote:
> * integrate.ode seems to have the more "pythonic" interface and a
>   larger choice of integrators. However, most of the integrators are
>   not re-entrant per the warning in the documentation, so the object
>   oriented call signature is actually sending a false message.
[...]

And THIS is why the vast majority of the students here use odeint and
not ode if we don't tell them which one to use. Actually, even the
official documentation seems to imply that odeint should be used instead
of ode; that's written in a yellow box almost at the top of
http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.integrate.ode.html :

See also: odeint: an integrator with a simpler interface based on lsoda
from ODEPACK

I think that almost all newcomers who see "simpler" will run to odeint
and never come back to ode, and that is IMHO especially true for people
coming from Matlab, as odeint can be used almost exactly like ode45 &
co. Fortunately we never have to deal with systems that would require an
integrator on steroids, so odeint is good enough (actually even a custom
RK2 with a fixed step size is good enough in most cases...).

I trust Benny Malengier when he says that having a unified interface for
all the available ODE solvers is not a trivial task and I wouldn't dare
touch this, but at least it would be great to have a unified prototype
for the callbacks.

Regards.
From irvin.probst at ensta-bretagne.fr  Wed Jan 20 06:28:51 2016
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Wed, 20 Jan 2016 12:28:51 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
Message-ID: <569F6F73.3090406@ensta-bretagne.fr>

On 20/01/2016 12:21, Benny Malengier wrote:
> There have never been many maintainers for the ode functionality. I
> looked at what would be needed to add odes into scipy, but the recent
> work on ode in scipy is not trivial to move (like the complex_ode
> method).
[...]

Is that the kind of task for a Google SoC student?

From benny.malengier at gmail.com  Wed Jan 20 06:39:32 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Wed, 20 Jan 2016 12:39:32 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <569F6F73.3090406@ensta-bretagne.fr>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
Message-ID: 

2016-01-20 12:28 GMT+01:00 Irvin Probst :

> Is that the kind of task for a Google SoC student?

Working on the ode part was on the SoC list two years ago, not last
year. If there are mentors among the current contributors to scipy, it
can be on it again, I assume.

Benny

From archibald at astron.nl  Wed Jan 20 08:43:26 2016
From: archibald at astron.nl (Anne Archibald)
Date: Wed, 20 Jan 2016 13:43:26 +0000
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
Message-ID: 

There is periodically discussion of the mess of interfaces and solvers
for ODEs in scipy (see the archives about six months ago, for example).
One of the concerns is that people want to do very different things, so
settling on a good interface is not at all easy. I don't just mean the
underlying algorithms, but also the calling interface. It's easy to
support someone who drops in an RHS and asks for the solution at a
hundred predefined points. It's also clear that a general toolkit
shouldn't support my use case: a 22-dimensional solver with a compiled
RHS generated by a symbolic algebra package, with internal precision
switchable between 80 and 128 bits, with a root-finder attached to the
output to define stopping places, that needs to all run at compiled
speed. So where along that scale do you aim scipy's interface? I have my
own ideas (see the aforementioned archive thread) but don't have the
energy to implement them myself. And anyway, some people need very
different things from their ODE solvers (for example, solution objects
evaluable anywhere).

Anne

On Wed, Jan 20, 2016 at 12:39 PM Benny Malengier wrote:

> Working on the ode part was on the SoC list two years ago, not last
> year. If there are mentors among the current contributors to scipy, it
> can be on it again, I assume.
[...]

From evgeny.burovskiy at gmail.com  Sat Jan 23 07:51:58 2016
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Sat, 23 Jan 2016 12:51:58 +0000
Subject: [SciPy-Dev] ANN: scipy 0.17.0 release
Message-ID: 

Hi,

On behalf of the Scipy development team I am pleased to announce the
availability of Scipy 0.17.0. This release contains several new
features, detailed in the release notes below. 101 people contributed to
this release over the course of six months.

This release requires Python 2.6, 2.7 or 3.2-3.5 and NumPy 1.6.2 or
greater. Source tarballs and release notes can be found at
https://github.com/scipy/scipy/releases/tag/v0.17.0.

Thanks to everyone who contributed to this release.

Cheers,

Evgeni


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

==========================
SciPy 0.17.0 Release Notes
==========================

.. contents::

SciPy 0.17.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation.
There have been a number of deprecations and API changes in this
release, which are documented below. All users are encouraged to upgrade
to this release, as there are a large number of bug-fixes and
optimizations. Moreover, our development attention will now shift to
bug-fix releases on the 0.17.x branch, and to adding new features on the
master branch.

This release requires Python 2.6, 2.7 or 3.2-3.5 and NumPy 1.6.2 or
greater.

Release highlights:

- New functions for linear and nonlinear least squares optimization with
  constraints: `scipy.optimize.lsq_linear` and
  `scipy.optimize.least_squares`
- Support for fitting with bounds in `scipy.optimize.curve_fit`.
- Significant improvements to `scipy.stats`, providing many functions
  with better handling of inputs which have NaNs or are empty, improved
  documentation, and consistent behavior between `scipy.stats` and
  `scipy.stats.mstats`.
- Significant performance improvements and new functionality in
  `scipy.spatial.cKDTree`.


New features
============

`scipy.cluster` improvements
----------------------------

A new function `scipy.cluster.hierarchy.cut_tree`, which determines a
cut tree from a linkage matrix, was added.

`scipy.io` improvements
-----------------------

`scipy.io.mmwrite` gained support for symmetric sparse matrices.

`scipy.io.netcdf` gained support for masking and scaling data based on
data attributes.

`scipy.optimize` improvements
-----------------------------

Linear assignment problem solver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`scipy.optimize.linear_sum_assignment` is a new function for solving the
linear sum assignment problem. It uses the Hungarian algorithm
(Kuhn-Munkres).

Least squares optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~

A new function for *nonlinear* least squares optimization with
constraints was added: `scipy.optimize.least_squares`. It provides
several methods: Levenberg-Marquardt for unconstrained problems, and two
trust-region methods for constrained ones. Furthermore it provides
different loss functions. The new trust-region methods also handle
sparse Jacobians.

A new function for *linear* least squares optimization with constraints
was added: `scipy.optimize.lsq_linear`. It provides a trust-region
method as well as an implementation of the Bounded-Variable
Least-Squares (BVLS) algorithm.

`scipy.optimize.curve_fit` now supports fitting with bounds. (A short
usage sketch of `least_squares` follows after the `scipy.stats` notes
below.)

`scipy.signal` improvements
---------------------------

A ``mode`` keyword was added to `scipy.signal.spectrogram`, to let it
return other spectrograms than power spectral density.

`scipy.stats` improvements
--------------------------

Many functions in `scipy.stats` have gained a ``nan_policy`` keyword,
which allows specifying how to treat input with NaNs in them: propagate
the NaNs, raise an error, or omit the NaNs.

Many functions in `scipy.stats` have been improved to correctly handle
input arrays that are empty or contain infs/nans.

A number of functions with the same name in `scipy.stats` and
`scipy.stats.mstats` were changed to have matching signature and
behavior. See `gh-5474 `__ for details.

`scipy.stats.binom_test` and `scipy.stats.mannwhitneyu` gained a keyword
``alternative``, which allows specifying the hypothesis to test for.
Eventually all hypothesis testing functions will get this keyword.

For methods of many continuous distributions, complex input is now
accepted.

Matrix normal distribution has been implemented as
`scipy.stats.matrix_normal`.
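As promised above, a minimal usage sketch for the new bounded nonlinear
least-squares interface (not part of the official notes; the
Rosenbrock-style residuals are just an example)::

  import numpy as np
  from scipy.optimize import least_squares

  def residuals(x):
      # Rosenbrock problem written as a pair of residuals.
      return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

  # Trust-region reflective method with a box constraint on x[1].
  res = least_squares(residuals, [2.0, 2.0],
                      bounds=([-np.inf, 1.5], np.inf))
  print(res.x, res.cost)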
`scipy.sparse` improvements
---------------------------

The `axis` keyword was added to sparse norms,
`scipy.sparse.linalg.norm`.

`scipy.spatial` improvements
----------------------------

`scipy.spatial.cKDTree` was partly rewritten for improved performance
and several new features were added to it:

- the ``query_ball_point`` method became significantly faster
- ``query`` and ``query_ball_point`` gained an ``n_jobs`` keyword for
  parallel execution
- build and query methods now release the GIL
- full pickling support
- support for periodic spaces
- the ``sparse_distance_matrix`` method can now return a sparse matrix
  type

`scipy.interpolate` improvements
--------------------------------

Out-of-bounds behavior of `scipy.interpolate.interp1d` has been
improved. Use a two-element tuple for the ``fill_value`` argument to
specify separate fill values for input below and above the interpolation
range. Linear and nearest interpolation kinds of
`scipy.interpolate.interp1d` support extrapolation via the
``fill_value="extrapolate"`` keyword.

``fill_value`` can also be set to an array-like (or a two-element tuple
of array-likes for separate below and above values) so long as it
broadcasts properly to the non-interpolated dimensions of an array. This
was implicitly supported by previous versions of scipy, but support has
now been formalized and gets compatibility-checked before use. For
example, a set of ``y`` values to interpolate with shape ``(2, 3, 5)``
interpolated along the last axis (2) could accept a ``fill_value`` array
with shape ``()`` (singleton), ``(1,)``, ``(2, 1)``, ``(1, 3)``,
``(3,)``, or ``(2, 3)``; or it can be a 2-element tuple to specify
separate below and above bounds, where each of the two tuple elements
obeys proper broadcasting rules. (A short sketch follows after the
deprecation notes below.)

`scipy.linalg` improvements
---------------------------

The default algorithm for `scipy.linalg.lstsq` has been changed to use
LAPACK's function ``*gelsd``. Users wanting to get the previous behavior
can use a new keyword ``lapack_driver="gelss"`` (allowed values are
"gelss", "gelsd" and "gelsy").

``scipy.sparse`` matrices and linear operators now support the matmul
(``@``) operator when available (Python 3.5+). See
`PEP 465 <http://legacy.python.org/dev/peps/pep-0465/>`__.

A new function `scipy.linalg.ordqz`, for QZ decomposition with
reordering, has been added.


Deprecated features
===================

``scipy.stats.histogram`` is deprecated in favor of ``np.histogram``,
which is faster and provides the same functionality.

``scipy.stats.threshold`` and ``scipy.mstats.threshold`` are deprecated
in favor of ``np.clip``. See issue #617 for details.

``scipy.stats.ss`` is deprecated. This is a support function, not meant
to be exposed to the user. Also, the name is unclear. See issue #663 for
details.

``scipy.stats.square_of_sums`` is deprecated. This too is a support
function not meant to be exposed to the user. See issues #665 and #663
for details.

``scipy.stats.f_value``, ``scipy.stats.f_value_multivariate``,
``scipy.stats.f_value_wilks_lambda``, and
``scipy.mstats.f_value_wilks_lambda`` are deprecated. These are related
to ANOVA, for which ``scipy.stats`` provides quite limited functionality
and these functions are not very useful standalone. See issues #660 and
#650 for details.

``scipy.stats.chisqprob`` is deprecated. This is an alias;
``stats.chi2.sf`` should be used instead.

``scipy.stats.betai`` is deprecated. This is an alias for
``special.betainc``, which should be used instead.
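Returning to the `scipy.interpolate.interp1d` changes described above, a
minimal sketch (not taken from the notes)::

  import numpy as np
  from scipy.interpolate import interp1d

  x = np.arange(5.0)
  y = x ** 2

  # Linear interpolation that extrapolates outside [0, 4].
  f_extrap = interp1d(x, y, kind='linear', fill_value='extrapolate')

  # Distinct fill values below and above the data range.
  f_fill = interp1d(x, y, bounds_error=False, fill_value=(0.0, 16.0))

  print(f_extrap(5.5), f_fill(-1.0), f_fill(10.0))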
Backwards incompatible changes
==============================

The functions ``stats.trim1`` and ``stats.trimboth`` now make sure the
elements trimmed are the lowest and/or highest, depending on the case.
Slicing without at least partial sorting was previously done, but didn't
make sense for unsorted input.

When ``variable_names`` is set to an empty list, ``scipy.io.loadmat``
now correctly returns no values instead of all the contents of the MAT
file.

Element-wise multiplication of sparse matrices now returns a sparse
result in all cases. Previously, multiplying a sparse matrix with a
dense matrix or array would return a dense matrix.

The function ``misc.lena`` has been removed due to license
incompatibility.

The constructor for ``sparse.coo_matrix`` no longer accepts ``(None,
(m,n))`` to construct an all-zero matrix of shape ``(m,n)``. This
functionality was deprecated since at least 2007 and was already broken
in the previous SciPy release. Use ``coo_matrix((m,n))`` instead.

The Cython wrappers in ``linalg.cython_lapack`` for the LAPACK routines
``*gegs``, ``*gegv``, ``*gelsx``, ``*geqpf``, ``*ggsvd``, ``*ggsvp``,
``*lahrd``, ``*latzm``, ``*tzrqf`` have been removed since these
routines are not present in the new LAPACK 3.6.0 release. With the
exception of the routines ``*ggsvd`` and ``*ggsvp``, these were all
deprecated in favor of routines that are currently present in our Cython
LAPACK wrappers.

Because the LAPACK ``*gegv`` routines were removed in LAPACK 3.6.0, the
corresponding Python wrappers in ``scipy.linalg.lapack`` are now
deprecated and will be removed in a future release. The source files for
these routines have been temporarily included as a part of
``scipy.linalg`` so that SciPy can be built against LAPACK versions that
do not provide these deprecated routines.


Other changes
=============

HTML and PDF documentation of development versions of Scipy is now
automatically rebuilt after every merged pull request.

`scipy.constants` is updated to the CODATA 2014 recommended values.

Usage of `scipy.fftpack` functions within Scipy has been changed in such
a way that `PyFFTW `__ can easily replace `scipy.fftpack` functions
(with improved performance). See `gh-5295 `__ for details.

The ``imread`` functions in `scipy.misc` and `scipy.ndimage` were
unified, for which a ``mode`` argument was added to `scipy.misc.imread`.
Also, bugs for 1-bit and indexed RGB image formats were fixed.

``runtests.py``, the development script to build and test Scipy, now
allows building in parallel with ``--parallel``.


Authors
=======

* @cel4 +
* @chemelnucfin +
* @endolith
* @mamrehn +
* @tosh1ki +
* Joshua L. Adelman +
* Anne Archibald
* Hervé Audren +
* Vincent Barrielle +
* Bruno Beltran +
* Sumit Binnani +
* Joseph Jon Booker
* Olga Botvinnik +
* Michael Boyle +
* Matthew Brett
* Zaz Brown +
* Lars Buitinck
* Pete Bunch +
* Evgeni Burovski
* CJ Carey
* Ien Cheng +
* Cody +
* Jaime Fernandez del Rio
* Ales Erjavec +
* Abraham Escalante
* Yves-Rémi Van Eycke +
* Yu Feng +
* Eric Firing
* Francis T. O'Donovan +
* André Gaul
* Christoph Gohlke
* Ralf Gommers
* Alex Griffing
* Alexander Grigorievskiy
* Charles Harris
* Jörn Hees +
* Ian Henriksen
* Derek Homeier +
* David Menéndez Hurtado
* Gert-Ludwig Ingold
* Aakash Jain +
* Rohit Jamuar +
* Jan Schlüter
* Johannes Ballé
* Luke Zoltan Kelley +
* Jason King +
* Andreas Kopecky +
* Eric Larson
* Denis Laxalde
* Antony Lee
* Gregory R.
Lee * Josh Levy-Kramer + * Sam Lewis + * Fran?ois Magimel + * Mart?n Gait?n + * Sam Mason + * Andreas Mayer * Nikolay Mayorov * Damon McDougall + * Robert McGibbon * Sturla Molden * Will Monroe + * Eric Moore * Maniteja Nandana * Vikram Natarajan + * Andrew Nelson * Marti Nito + * Behzad Nouri + * Daisuke Oyama + * Giorgio Patrini + * Fabian Paul + * Christoph Paulik + * Mad Physicist + * Irvin Probst * Sebastian Pucilowski + * Ted Pudlik + * Eric Quintero * Yoav Ram + * Joscha Reimer + * Juha Remes * Frederik Rietdijk + * R?my L?one + * Christian Sachs + * Skipper Seabold * Sebastian Skoup? + * Alex Seewald + * Andreas Sorge + * Bernardo Sulzbach + * Julian Taylor * Louis Tiao + * Utkarsh Upadhyay + * Jacob Vanderplas * Gael Varoquaux + * Pauli Virtanen * Fredrik Wallner + * Stefan van der Walt * James Webber + * Warren Weckesser * Raphael Wettinger + * Josh Wilson + * Nat Wilson + * Peter Yin + A total of 101 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. Issues closed for 0.17.0 - ------------------------ - - `#1923 `__: problem with numpy 0's in stats.poisson.rvs (Trac #1398) - - `#2138 `__: scipy.misc.imread segfaults on 1 bit png (Trac #1613) - - `#2237 `__: distributions do not accept complex arguments (Trac #1718) - - `#2282 `__: scipy.special.hyp1f1(0.5, 1.5, -1000) fails (Trac #1763) - - `#2618 `__: poisson.pmf returns NaN if mu is 0 - - `#2957 `__: hyp1f1 precision issue - - `#2997 `__: FAIL: test_qhull.TestUtilities.test_more_barycentric_transforms - - `#3129 `__: No way to set ranges for fitting parameters in Optimize functions - - `#3191 `__: interp1d should contain a fill_value_below and a fill_value_above... - - `#3453 `__: PchipInterpolator sets slopes at edges differently than Matlab's... - - `#4106 `__: ndimage._ni_support._normalize_sequence() fails with numpy.int64 - - `#4118 `__: `scipy.integrate.ode.set_solout` called after `scipy.integrate.ode.set_initial_value` fails silently - - `#4233 `__: 1D scipy.interpolate.griddata using method=nearest produces nans... - - `#4375 `__: All tests fail due to bad file permissions - - `#4580 `__: scipy.ndimage.filters.convolve documenation is incorrect - - `#4627 `__: logsumexp with sign indicator - enable calculation with negative... - - `#4702 `__: logsumexp with zero scaling factor - - `#4834 `__: gammainc should return 1.0 instead of NaN for infinite x - - `#4838 `__: enh: exprel special function - - `#4862 `__: the scipy.special.boxcox function is inaccurate for denormal... - - `#4887 `__: Spherical harmonic incongruences - - `#4895 `__: some scipy ufuncs have inconsistent output dtypes? - - `#4923 `__: logm does not aggressively convert complex outputs to float - - `#4932 `__: BUG: stats: The `fit` method of the distributions silently ignores... - - `#4956 `__: Documentation error in `scipy.special.bi_zeros` - - `#4957 `__: Docstring for `pbvv_seq` is wrong - - `#4967 `__: block_diag should look at dtypes of all arguments, not only the... - - `#5037 `__: scipy.optimize.minimize error messages are printed to stdout... - - `#5039 `__: Cubic interpolation: On entry to DGESDD parameter number 12 had... - - `#5163 `__: Base case example of Hierarchical Clustering (offer) - - `#5181 `__: BUG: stats.genextreme.entropy should use the explicit formula - - `#5184 `__: Some? 
wheels don't express a numpy dependency - - `#5197 `__: mstats: test_kurtosis fails (ULP max is 2) - - `#5260 `__: Typo causing an error in splrep - - `#5263 `__: Default epsilon in rbf.py fails for colinear points - - `#5276 `__: Reading empty (no data) arff file fails - - `#5280 `__: 1d scipy.signal.convolve much slower than numpy.convolve - - `#5326 `__: Implementation error in scipy.interpolate.PchipInterpolator - - `#5370 `__: Test issue with test_quadpack and libm.so as a linker script - - `#5426 `__: ERROR: test_stats.test_chisquare_masked_arrays - - `#5427 `__: Automate installing correct numpy versions in numpy-vendor image - - `#5430 `__: Python3 : Numpy scalar types "not iterable"; specific instance... - - `#5450 `__: BUG: spatial.ConvexHull triggers a seg. fault when given nans. - - `#5478 `__: clarify the relation between matrix normal distribution and `multivariate_normal` - - `#5539 `__: lstsq related test failures on windows binaries from numpy-vendor - - `#5560 `__: doc: scipy.stats.burr pdf issue - - `#5571 `__: lstsq test failure after lapack_driver change - - `#5577 `__: ordqz segfault on Python 3.4 in Wine - - `#5578 `__: scipy.linalg test failures on python 3 in Wine - - `#5607 `__: Overloaded ?isnan(double&)? is ambiguous when compiling with... - - `#5629 `__: Test for lstsq randomly failed - - `#5630 `__: memory leak with scipy 0.16 spatial cKDEtree - - `#5689 `__: isnan errors compiling scipy/special/Faddeeva.cc with clang++ - - `#5694 `__: fftpack test failure in test_import - - `#5719 `__: curve_fit(method!="lm") ignores initial guess Pull requests for 0.17.0 - ------------------------ - - `#3022 `__: hyp1f1: better handling of large negative arguments - - `#3107 `__: ENH: Add ordered QZ decomposition - - `#4390 `__: ENH: Allow axis and keepdims arguments to be passed to scipy.linalg.norm. - - `#4671 `__: ENH: add axis to sparse norms - - `#4796 `__: ENH: Add cut tree function to scipy.cluster.hierarchy - - `#4809 `__: MAINT: cauchy moments are undefined - - `#4821 `__: ENH: stats: make distribution instances picklable - - `#4839 `__: ENH: Add scipy.special.exprel relative error exponential ufunc - - `#4859 `__: Logsumexp fixes - allows sign flags and b==0 - - `#4865 `__: BUG: scipy.io.mmio.write: error with big indices and low precision - - `#4869 `__: add as_inexact option to _lib._util._asarray_validated - - `#4884 `__: ENH: Finite difference approximation of Jacobian matrix - - `#4890 `__: ENH: Port cKDTree query methods to C++, allow pickling on Python... - - `#4892 `__: how much doctesting is too much? - - `#4896 `__: MAINT: work around a possible numpy ufunc loop selection bug - - `#4898 `__: MAINT: A bit of pyflakes-driven cleanup. - - `#4899 `__: ENH: add 'alternative' keyword to hypothesis tests in stats - - `#4903 `__: BENCH: Benchmarks for interpolate module - - `#4905 `__: MAINT: prepend underscore to mask_to_limits; delete masked_var. - - `#4906 `__: MAINT: Benchmarks for optimize.leastsq - - `#4910 `__: WIP: Trimmed statistics functions have inconsistent API. - - `#4912 `__: MAINT: fix typo in stats tutorial. Closes gh-4911. - - `#4914 `__: DEP: deprecate `scipy.stats.ss` and `scipy.stats.square_of_sums`. - - `#4924 `__: MAINT: if the imaginary part of logm of a real matrix is small,... - - `#4930 `__: BENCH: Benchmarks for signal module - - `#4941 `__: ENH: update `find_repeats`. 
- - `#4942 `__: MAINT: use np.float64_t instead of np.float_t in cKDTree - - `#4944 `__: BUG: integer overflow in correlate_nd - - `#4951 `__: do not ignore invalid kwargs in distributions fit method - - `#4958 `__: Add some detail to docstrings for special functions - - `#4961 `__: ENH: stats.describe: add bias kw and empty array handling - - `#4963 `__: ENH: scipy.sparse.coo.coo_matrix.__init__: less memory needed - - `#4968 `__: DEP: deprecate ``stats.f_value*`` and ``mstats.f_value*`` functions. - - `#4969 `__: ENH: review `stats.relfreq` and `stats.cumfreq`; fixes to `stats.histogram` - - `#4971 `__: Extend github source links to line ranges - - `#4972 `__: MAINT: impove the error message in validate_runtests_log - - `#4976 `__: DEP: deprecate `scipy.stats.threshold` - - `#4977 `__: MAINT: more careful dtype treatment in block diagonal matrix... - - `#4979 `__: ENH: distributions, complex arguments - - `#4984 `__: clarify dirichlet distribution error handling - - `#4992 `__: ENH: `stats.fligner` and `stats.bartlett` empty input handling. - - `#4996 `__: DOC: fix stats.spearmanr docs - - `#4997 `__: Fix up boxcox for underflow / loss of precision - - `#4998 `__: DOC: improved documentation for `stats.ppcc_max` - - `#5000 `__: ENH: added empty input handling `scipy.moment`; doc enhancements - - `#5003 `__: ENH: improves rankdata algorithm - - `#5005 `__: scipy.stats: numerical stability improvement - - `#5007 `__: ENH: nan handling in functions that use `stats._chk_asarray` - - `#5009 `__: remove coveralls.io - - `#5010 `__: Hypergeometric distribution log survival function - - `#5014 `__: Patch to compute the volume and area of convex hulls - - `#5015 `__: DOC: Fix mistaken variable name in sawtooth - - `#5016 `__: DOC: resample example - - `#5017 `__: DEP: deprecate `stats.betai` and `stats.chisqprob` - - `#5018 `__: ENH: Add test on random inpu to volume computations - - `#5026 `__: BUG: Fix return dtype of lil_matrix.getnnz(axis=0) - - `#5030 `__: DOC: resample slow for prime output too - - `#5033 `__: MAINT: integrate, special: remove unused R1MACH and Makefile - - `#5034 `__: MAINT: signal: lift max_len_seq validation out of Cython - - `#5035 `__: DOC/MAINT: refguide / doctest drudgery - - `#5041 `__: BUG: fixing some small memory leaks detected by cppcheck - - `#5044 `__: [GSoC] ENH: New least-squares algorithms - - `#5050 `__: MAINT: C fixes, trimmed a lot of dead code from Cephes - - `#5057 `__: ENH: sparse: avoid densifying on sparse/dense elementwise mult - - `#5058 `__: TST: stats: add a sample distribution to the test loop - - `#5061 `__: ENH: spatial: faster 2D Voronoi and Convex Hull plotting - - `#5065 `__: TST: improve test coverage for `stats.mvsdist` and `stats.bayes_mvs` - - `#5066 `__: MAINT: fitpack: remove a noop - - `#5067 `__: ENH: empty and nan input handling for `stats.kstat` and `stats.kstatvar` - - `#5071 `__: DOC: optimize: Correct paper reference, add doi - - `#5072 `__: MAINT: scipy.sparse cleanup - - `#5073 `__: DOC: special: Add an example showing the relation of diric to... - - `#5075 `__: DOC: clarified parameterization of stats.lognorm - - `#5076 `__: use int, float, bool instead of np.int, np.float, np.bool - - `#5078 `__: DOC: Rename fftpack docs to README - - `#5081 `__: BUG: Correct handling of scalar 'b' in lsmr and lsqr - - `#5082 `__: loadmat variable_names: don't confuse [] and None. 
- - `#5083 `__: Fix integrate.fixed_quad docstring to indicate None return value - - `#5086 `__: Use solve() instead of inv() for gaussian_kde - - `#5090 `__: MAINT: stats: add explicit _sf, _isf to gengamma distribution - - `#5094 `__: ENH: scipy.interpolate.NearestNDInterpolator: cKDTree configurable - - `#5098 `__: DOC: special: fix typesetting in ``*_roots quadrature`` functions - - `#5099 `__: DOC: make the docstring of stats.moment raw - - `#5104 `__: DOC/ENH fixes and micro-optimizations for scipy.linalg - - `#5105 `__: enh: made l-bfgs-b parameter for the maximum number of line search... - - `#5106 `__: TST: add NIST test cases to `stats.f_oneway` - - `#5110 `__: [GSoC]: Bounded linear least squares - - `#5111 `__: MAINT: special: Cephes cleanup - - `#5118 `__: BUG: FIR path failed if len(x) < len(b) in lfilter. - - `#5124 `__: ENH: move the filliben approximation to a publicly visible function - - `#5126 `__: StatisticsCleanup: `stats.kruskal` review - - `#5130 `__: DOC: update PyPi trove classifiers. Beta -> Stable. Add license. - - `#5131 `__: DOC: differential_evolution, improve docstring for mutation and... - - `#5132 `__: MAINT: differential_evolution improve init_population_lhs comments... - - `#5133 `__: MRG: rebased mmio refactoring - - `#5135 `__: MAINT: `stats.mstats` consistency with `stats.stats` - - `#5139 `__: TST: linalg: add a smoke test for gh-5039 - - `#5140 `__: EHN: Update constants.codata to CODATA 2014 - - `#5145 `__: added ValueError to docstring as possible error raised - - `#5146 `__: MAINT: Improve implementation details and doc in `stats.shapiro` - - `#5147 `__: [GSoC] ENH: Upgrades to curve_fit - - `#5150 `__: Fix misleading wavelets/cwt example - - `#5152 `__: BUG: cluster.hierarchy.dendrogram: missing font size doesn't... - - `#5153 `__: add keywords to control the summation in discrete distributions... - - `#5156 `__: DOC: added comments on algorithms used in Legendre function - - `#5158 `__: ENH: optimize: add the Hungarian algorithm - - `#5162 `__: FIX: Remove lena - - `#5164 `__: MAINT: fix cluster.hierarchy.dendrogram issues and docs - - `#5166 `__: MAINT: changed `stats.pointbiserialr` to delegate to `stats.pearsonr` - - `#5167 `__: ENH: add nan_policy to `stats.kendalltau`. - - `#5168 `__: TST: added nist test case (Norris) to `stats.linregress`. - - `#5169 `__: update lpmv docstring - - `#5171 `__: Clarify metric parameter in linkage docstring - - `#5172 `__: ENH: add mode keyword to signal.spectrogram - - `#5177 `__: DOC: graphical example for KDTree.query_ball_point - - `#5179 `__: MAINT: stats: tweak the formula for ncx2.pdf - - `#5188 `__: MAINT: linalg: A bit of clean up. - - `#5189 `__: BUG: stats: Use the explicit formula in stats.genextreme.entropy - - `#5193 `__: BUG: fix uninitialized use in lartg - - `#5194 `__: BUG: properly return error to fortran from ode_jacobian_function - - `#5198 `__: TST: Fix TestCtypesQuad failure on Python 3.5 for Windows - - `#5201 `__: allow extrapolation in interp1d - - `#5209 `__: MAINT: Change complex parameter to boolean in Y_() - - `#5213 `__: BUG: sparse: fix logical comparison dtype conflicts - - `#5216 `__: BUG: sparse: fixing unbound local error - - `#5218 `__: DOC and BUG: Bessel function docstring improvements, fix array_like,... - - `#5222 `__: MAINT: sparse: fix COO ctor - - `#5224 `__: DOC: optimize: type of OptimizeResult.hess_inv varies - - `#5228 `__: ENH: Add maskandscale support to netcdf; based on pupynere and... 
- - `#5229 `__: DOC: sparse.linalg.svds doc typo fixed - - `#5234 `__: MAINT: sparse: simplify COO ctor - - `#5235 `__: MAINT: sparse: warn on todia() with many diagonals - - `#5236 `__: MAINT: ndimage: simplify thread handling/recursion + constness - - `#5239 `__: BUG: integrate: Fixed issue 4118 - - `#5241 `__: qr_insert fixes, closes #5149 - - `#5246 `__: Doctest tutorial files - - `#5247 `__: DOC: optimize: typo/import fix in linear_sum_assignment - - `#5248 `__: remove inspect.getargspec and test python 3.5 on Travis CI - - `#5250 `__: BUG: Fix sparse multiply by single-element zero - - `#5261 `__: Fix bug causing a TypeError in splrep when a runtime warning... - - `#5262 `__: Follow up to 4489 (Addition LAPACK routines in linalg.lstsq) - - `#5264 `__: ignore zero-length edges for default epsilon - - `#5269 `__: DOC: Typos and spell-checking - - `#5272 `__: MAINT: signal: Convert array syntax to memoryviews - - `#5273 `__: DOC: raw strings for docstrings with math - - `#5274 `__: MAINT: sparse: update cython code for MST - - `#5278 `__: BUG: io: Stop guessing the data delimiter in ARFF files. - - `#5289 `__: BUG: misc: Fix the Pillow work-around for 1-bit images. - - `#5291 `__: ENH: call np.correlate for 1d in scipy.signal.correlate - - `#5294 `__: DOC: special: Remove a potentially misleading example from the... - - `#5295 `__: Simplify replacement of fftpack by pyfftw - - `#5296 `__: ENH: Add matrix normal distribution to stats - - `#5297 `__: Fixed leaf_rotation and leaf_font_size in Python 3 - - `#5303 `__: MAINT: stats: rewrite find_repeats - - `#5307 `__: MAINT: stats: remove unused Fortran routine - - `#5313 `__: BUG: sparse: fix diags for nonsquare matrices - - `#5315 `__: MAINT: special: Cephes cleanup - - `#5316 `__: fix input check for sparse.linalg.svds - - `#5319 `__: MAINT: Cython code maintenance - - `#5328 `__: BUG: Fix place_poles return values - - `#5329 `__: avoid a spurious divide-by-zero in Student t stats - - `#5334 `__: MAINT: integrate: miscellaneous cleanup - - `#5340 `__: MAINT: Printing Error Msg to STDERR and Removing iterate.dat - - `#5347 `__: ENH: add Py3.5-style matmul operator (e.g. A @ B) to sparse linear... - - `#5350 `__: FIX error, when reading 32-bit float wav files - - `#5351 `__: refactor the PCHIP interpolant's algorithm - - `#5354 `__: MAINT: construct csr and csc matrices from integer lists - - `#5359 `__: add a fast path to interp1d - - `#5364 `__: Add two fill_values to interp1d. - - `#5365 `__: ABCD docstrings - - `#5366 `__: Fixed typo in the documentation for scipy.signal.cwt() per #5290. - - `#5367 `__: DOC updated scipy.spatial.Delaunay example - - `#5368 `__: ENH: Do not create a throwaway class at every function call - - `#5372 `__: DOC: spectral: fix reference formatting - - `#5375 `__: PEP8 amendments to ffpack_basic.py - - `#5377 `__: BUG: integrate: builtin name no longer shadowed - - `#5381 `__: PEP8ified fftpack_pseudo_diffs.py - - `#5385 `__: BLD: fix Bento build for changes to optimize and spatial - - `#5386 `__: STY: PEP8 amendments to interpolate.py - - `#5387 `__: DEP: deprecate stats.histogram - - `#5388 `__: REL: add "make upload" command to doc/Makefile. - - `#5389 `__: DOC: updated origin param of scipy.ndimage.filters.convolve - - `#5395 `__: BUG: special: fix a number of edge cases related to `x = np.inf`. - - `#5398 `__: MAINT: stats: avoid spurious warnings in lognorm.pdf(0, s) - - `#5407 `__: ENH: stats: Handle mu=0 in stats.poisson - - `#5409 `__: Fix the behavior of discrete distributions at the right-hand... 
- - `#5412 `__: TST: stats: skip a test to avoid a spurious log(0) warning - - `#5413 `__: BUG: linalg: work around LAPACK single-precision lwork computation... - - `#5414 `__: MAINT: stats: move creation of namedtuples outside of function... - - `#5415 `__: DOC: fix up sections in ToC in the pdf reference guide - - `#5416 `__: TST: fix issue with a ctypes test for integrate on Fedora. - - `#5418 `__: DOC: fix bugs in signal.TransferFunction docstring. Closes gh-5287. - - `#5419 `__: MAINT: sparse: fix usage of NotImplementedError - - `#5420 `__: Raise proper error if maxiter < 1 - - `#5422 `__: DOC: changed documentation of brent to be consistent with bracket - - `#5444 `__: BUG: gaussian_filter, BPoly.from_derivatives fail on numpy int... - - `#5445 `__: MAINT: stats: fix incorrect deprecation warnings and test noise - - `#5446 `__: DOC: add note about PyFFTW in fftpack tutorial. - - `#5459 `__: DOC: integrate: Some improvements to the differential equation... - - `#5465 `__: BUG: Relax mstats kurtosis test tolerance by a few ulp - - `#5471 `__: ConvexHull should raise ValueError for NaNs. - - `#5473 `__: MAINT: update decorators.py module to version 4.0.5 - - `#5476 `__: BUG: imsave searches for wrong channel axis if image has 3 or... - - `#5477 `__: BLD: add numpy to setup/install_requires for OS X wheels - - `#5479 `__: ENH: return Jacobian/Hessian from BasinHopping - - `#5484 `__: BUG: fix ttest zero division handling - - `#5486 `__: Fix crash on kmeans2 - - `#5491 `__: MAINT: Expose parallel build option to runtests.py - - `#5494 `__: Sort OptimizeResult.__repr__ by key - - `#5496 `__: DOC: update the author name mapping - - `#5497 `__: Enhancement to binned_statistic: option to unraveled returned... - - `#5498 `__: BUG: sparse: fix a bug in sparsetools input dtype resolution - - `#5500 `__: DOC: detect unprintable characters in docstrings - - `#5505 `__: BUG: misc: Ensure fromimage converts mode 'P' to 'RGB' or 'RGBA'. - - `#5514 `__: DOC: further update the release notes - - `#5515 `__: ENH: optionally disable fixed-point acceleration - - `#5517 `__: DOC: Improvements and additions to the matrix_normal doc - - `#5518 `__: Remove wrappers for LAPACK deprecated routines - - `#5521 `__: TST: skip a linalg.orth memory test on 32-bit platforms. - - `#5523 `__: DOC: change a few floats to integers in docstring examples - - `#5524 `__: DOC: more updates to 0.17.0 release notes. - - `#5525 `__: Fix to minor typo in documentation for scipy.integrate.ode - - `#5527 `__: TST: bump arccosh tolerance to allow for inaccurate numpy or... - - `#5535 `__: DOC: signal: minor clarification to docstring of TransferFunction. - - `#5538 `__: DOC: signal: fix find_peaks_cwt documentation - - `#5545 `__: MAINT: Fix typo in linalg/basic.py - - `#5547 `__: TST: mark TestEig.test_singular as knownfail in master. - - `#5550 `__: MAINT: work around lstsq driver selection issue - - `#5556 `__: BUG: Fixed broken dogbox trust-region radius update - - `#5561 `__: BUG: eliminate warnings, exception (on Win) in test_maskandscale;... - - `#5567 `__: TST: a few cleanups in the test suite; run_module_suite and clearer... 
- - `#5568 `__: MAINT: simplify poisson's _argcheck - - `#5569 `__: TST: bump GMean test tolerance to make it pass on Wine - - `#5572 `__: TST: lstsq: bump test tolerance for TravisCI - - `#5573 `__: TST: remove use of np.fromfile from cluster.vq tests - - `#5576 `__: Lapack deprecations - - `#5579 `__: TST: skip tests of linalg.norm axis keyword on numpy <= 1.7.x - - `#5582 `__: Clarify language of survival function documentation - - `#5583 `__: MAINT: stats/tests: A bit of clean up. - - `#5588 `__: DOC: stats: Add a note that stats.burr is the Type III Burr distribution. - - `#5595 `__: TST: fix test_lamch failures on Python 3 - - `#5600 `__: MAINT: Ignore spatial/ckdtree.cxx and .h - - `#5602 `__: Explicitly numbered replacement fields for maintainability - - `#5605 `__: MAINT: collection of small fixes to test suite - - `#5614 `__: Minor doc change. - - `#5624 `__: FIX: Fix interpolate - - `#5625 `__: BUG: msvc9 binaries crash when indexing std::vector of size 0 - - `#5635 `__: BUG: misspelled __dealloc__ in cKDTree. - - `#5642 `__: STY: minor fixup of formatting of 0.17.0 release notes. - - `#5643 `__: BLD: fix a build issue in special/Faddeeva.cc with isnan. - - `#5661 `__: TST: linalg tests used stdlib random instead of numpy.random. - - `#5682 `__: backports for 0.17.0 - - `#5696 `__: Minor improvements to least_squares' docstring. - - `#5697 `__: BLD: fix for isnan/isinf issues in special/Faddeeva.cc - - `#5720 `__: TST: fix for file opening error in fftpack test_import.py - - `#5722 `__: BUG: Make curve_fit respect an initial guess with bounds - - `#5726 `__: Backports for v0.17.0rc2 - - `#5727 `__: API: Changes to least_squares API Checksums ========= MD5 ~~~ 5ff2971e1ce90e762c59d2cd84837224 scipy-0.17.0.tar.gz ef0949640ee73f845dde6dd7f84d7691 scipy-0.17.0.tar.xz 28a4fe29e980804db162524f10873211 scipy-0.17.0.zip SHA256 ~~~~~~ f600b755fb69437d0f70361f9e560ab4d304b1b66987ed5a28bdd9dd7793e089 scipy-0.17.0.tar.gz 2bc03ea36cd55bfd80869d87f690334b4cad240373e05857ddfa7a4d1e2f9a7a scipy-0.17.0.tar.xz ede6820030b2e5796126aa1571d86738b14bbd670d68c83378877b1d9eb9894d scipy-0.17.0.zip -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQEcBAEBAgAGBQJWokWbAAoJEIp0pQ0zQcu+JUgH/16Qqurptl3jUU11+4pl+8ji 3AFZN+dgzGLyiMz9s+V+OyL4hcdZY4++WM2QizF5uD3hDDFNi+aJRHAYySMAZeIZ O8y//v+DXDVOLZpKwcPFq5E5ZebTDh1jO4zZzpuyi1PnKgTAEKZeSyBSMd0RMx/p kZ0URTcAa/tPgkNZz3+i8By9b/zBWOlbI+v6fCVwV8E20YWfGBSp2s0KAtQk787D Q88ylnc3Zfv4IR1hgCFZz6oJA0RgmpH4USKi7guyg+fIKjf1nFe64zIuV3C2r/Uj y9Qbs/8x/HTteDp5owkRiSwnpTZHOtx/jqeh2z/w9aQe0i8Nag5Ere6JZ2kixG4= =jZUn -----END PGP SIGNATURE----- From m.oliver at jacobs-university.de Mon Jan 25 08:29:20 2016 From: m.oliver at jacobs-university.de (Marcel Oliver) Date: Mon, 25 Jan 2016 14:29:20 +0100 Subject: [SciPy-Dev] ode and odeint callable parameters In-Reply-To: References: <569E5AEE.20002@ensta-bretagne.fr> <22175.24939.214139.719216@xps13.localdomain> <569F6F73.3090406@ensta-bretagne.fr> Message-ID: <22182.9008.344108.842230@xps13.localdomain> Anne Archibald writes: > There is periodically discussion of the mess of interfaces and > solvers for ODEs in scipy (see the archives about six months ago, > for example). One of the concerns is that people want to do very > different things, so settling on a good interface is not at all > easy. I don't just mean the underlying algorithms, but also the > calling interface. It's easy to support someone who drops in an RHS > and asks for the solution at a hundred predefined points. 
It's also
> clear that a general toolkit shouldn't support my use case: a
> 22-dimensional solver with a compiled RHS generated by a symbolic
> algebra package, with internal precision switchable between 80 and
> 128 bits, with a root-finder attached to the output to define
> stopping places, that needs to all run at compiled speed. So where
> along that scale do you aim scipy's interface? I have my own ideas
> (see the aforementioned archive thread) but don't have the energy to
> implement them myself. And anyway, some people need very different
> things from their ODE solvers (for example, solution objects
> evaluable anywhere).
>
> Anne

Thanks for all the comments. I had not been aware of the existence of
scikits odes - the additional capabilities are very nice, but API-wise
it just adds to the mess...

Being unsure what has previously been discussed, I'd just like to share
some random thoughts, as I don't claim to understand the problem domain
completely. Basically, the API should make simple things simple and
natural, and complicated things possible. So I think your requirements
stated above should not be completely out of reach - in principle - even
though high precision arithmetic would require completely new backends,
and whether one can get close to compiled speed is probably very problem
dependent.

So, here is an unordered list of thoughts:

1. I don't think there is a big difference in practice between the ode,
   odes, and odeint interfaces. Personally, I think that odeint(f, y0, t)
   is just fine. The object oriented approach of ode/odes is good, too;
   however, the various parameters should be controlled by (keyword)
   arguments, as a statement like

      r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)

   is just not pretty. So I like the odes approach best, with a basic
   object oriented structure, but the parameters set as arguments, not
   via methods. Not sure if odes allows setting an initial time
   different from zero; that is essential.

   I also dislike that specifying the solver is mandatory and comes
   first in the argument list; I think this is specialist territory, as
   many of the existing solvers perform sufficiently well over a large
   range of problems, so that changing the solver is more of an
   exception.

   If the interface is object oriented (as in ode or odes), then at
   least the default solver should be reentrant, otherwise the API
   raises false expectations.

   I would also suggest that, independent of the rest of the API, the
   final time t can be an array or a scalar; the result is returned
   either as a vector of shape y0 when t is scalar, or as an array with
   one additional axis for time when t is a 1-d array.

2. It would be very nice to control output time intervals or exit
   times. Mathematica has a fairly rich "WhenEvent" API, although it
   seems to me it may also be no more than a fancy wrapper around a
   standard solver.

   The Mathematica API feels almost a bit too general, but there are
   certainly two independent ways triggers could be implemented; both
   seem useful to me:

   (a) A simple Boolean trigger for an "output event" or "termination
       event", based on the evaluation of some function of (y,t,p)
       which is passed to the solver and which is evaluated after each
       time step.

   (b) A trigger for an "output event" and/or "termination", based on
       finding the root of a scalar function of (y,t,p). This needs
       either a separate root finder or a modification to the implicit
       solver (the latter may be hard, as it would require nontrivial
       modification of the embedded solver code).
   (odes "rootfn" may be doing just that, but there seems to be no
   detailed documentation...)

3. Returning interpolating function objects as, e.g., Mathematica does
   seems a bit of an overkill to me. It would be easier to set up an
   independent general interpolating function library (as possibly even
   exists already) and pass a discrete time series to it. It is likely
   that by deriving the interpolation function directly from the
   interpolation on which the solver is based one could gain some
   efficiency and get better control of the interpolation accuracy, but
   to me that seems to require a lot of effort for relatively little
   improvement.

4. odes includes DAE solvers. I wonder if it would not be more natural
   to pass the algebraic part of the equation as a separate keyword
   argument and let the solver wrapper decide, based on the presence or
   absence of this argument, whether to call a DAE backend.

5. As I wrote before, the API should deal transparently with
   d-dimensional vector fields.

Just a question about solver backends: it seems that the SUNDIALS
solvers are particularly strong for large stiff systems which come from
parabolic PDEs? (And of course there is the ability to propagate
sensitivities; is that ability made available through odes, too?)

On smaller systems of ODEs, I have found that ode "dop853" is extremely
accurate and seems capable of dealing with rather stiff problems, even
though I have no clue how it does it. (E.g., for the Van der Pol
oscillator with a large stiffness parameter, where other explicit
solvers either blow up or grind to a halt, dop853 and also dopri5 are
doing just fine, while Matlab's dopri5 solver fails as expected. It's a
mystery to me.)

In any case, it would be very nice to develop a clear plan for cleaning
up the ODE solver backends in Scipy. Once there is a clear target, it
might be easier to see what is easy to do and what is missing on the
backend side.

Regards,
Marcel

From benny.malengier at gmail.com  Mon Jan 25 09:22:09 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Mon, 25 Jan 2016 15:22:09 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22182.9008.344108.842230@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
Message-ID: 

2016-01-25 14:29 GMT+01:00 Marcel Oliver :

> 2. It would be very nice to control output time intervals or exit
>    times. Mathematica has a fairly rich "WhenEvent" API, although it
>    seems to me it may also be no more than a fancy wrapper around a
>    standard solver.
[...]
>    (b) A trigger for an "output event" and/or "termination", based on
>        finding the root of a scalar function of (y,t,p). This needs
>        either a separate root finder or a modification to the
>        implicit solver (the latter may be hard, as it would require
>        nontrivial modification of the embedded solver code).
>    (odes "rootfn" may be doing just that, but there seems to be no
>    detailed documentation...)

Yes, rootfn does just that. A nicer interface to it, with onroot
functionality, was added by a contributor for CVODE; I just extended it
to the IDA solver. It is only available through the solve method of the
solvers (the step methods just stop). Example:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall.py
Same with a fixed tstop added in a list of output times:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall_tstop.py

> 3. Returning interpolating function objects as, e.g., Mathematica does
>    seems a bit of an overkill to me.
[...]

Agreed. Also, just reinitializing the solver and solving again from the
closest start time is an option too, if for some reason interpolation
error must be avoided. CVODE can actually give you output backward if
you have jumped too far ahead, but that requires you to know where you
want interpolation output immediately after doing a step. It is all very
problem dependent.

> 4. odes includes DAE solvers. I wonder if it would not be more natural
>    to pass the algebraic part of the equation as a separate keyword
>    argument and let the solver wrapper decide, based on the presence
>    or absence of this argument, whether to call a DAE backend.

These things all require a lot of boilerplate code. In the end, my
experience is that the documentation of the original solvers is the best
place to learn the details, so keeping variable names the same in the
options is more important than something that is more pythonic. E.g.,
the rootfn above is a bad name, but it fits the original documentation
at
https://computation.llnl.gov/casc/sundials/documentation/documentation.html
The CVODE doc is 164 pages; in the end, for problems you want to
optimize, you will grab that documentation to learn what options you
want to tweak.

About the transparent algebraic part: the backends need a lot of info on
the algebraic part. You also need to structure your variables
specifically if you e.g. want a banded Jacobian. And the calling
sequence of all functions becomes different, with an extra parameter in
and out (ydot). Quite a lot of boilerplate that slows your wrapper down
will be needed, I'm afraid.

> 5. As I wrote before, the API should deal transparently with
>    d-dimensional vector fields.

I'm not really following. You mean unwrap and wrap again like the
complex_ode solver does now in scipy?
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.complex_ode.html
Writing your own wrapper for your problem does not seem like a big task
to me.

> Just a question about solver backends: it seems that the SUNDIALS
> solvers are particularly strong for large stiff systems which come
> from parabolic PDEs? (And of course there is the ability to propagate
> sensitivities; is that ability made available through odes, too?)

They are used for that too, but it is not the core. Large complex
kinetic systems with different time scales seem like the largest use
case on the mailing list; see also the first example in the example doc.
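To give a flavor of the event handling mentioned above, a rough sketch
in the spirit of the linked freefall example; this is from memory, so
see that file for the authoritative signatures and option names:

  import numpy as np
  from scikits.odes import ode

  def rhs(t, y, ydot):
      # Free fall: y[0] is height, y[1] is velocity.
      ydot[0] = y[1]
      ydot[1] = -9.81

  def rootfn(t, y, out):
      # Event function: a root is reported when out[0] crosses zero,
      # i.e. when the object hits the ground.
      out[0] = y[0]

  solver = ode('cvode', rhs, rootfn=rootfn, nr_rootfns=1)
  result = solver.solve(np.linspace(0.0, 10.0, 101), [100.0, 0.0])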
> Just a question about solver backends: it seems that the SUNDIALS
> solvers are particularly strong for large stiff systems such as those
> coming from parabolic PDEs? (And of course the ability to propagate
> sensitivities: is that made available through odes, too?)

They are used for that too, but it is not the core. Large complex
kinetic systems with different time scales seem to be the largest use
case on the mailing list; see also the first example in the example
doc. Sensitivities are not present in odes at the moment, but I think
some other wrappers have that covered already. Let's say it is asked
for once or twice a year.

> On smaller systems of ODEs, I have found that ode "dop853" is
> extremely accurate and seems capable of dealing with rather stiff
> problems even though I have no clue how it does it. (E.g., for the
> Van der Pol oscillator with large stiffness parameter, where other
> explicit solvers either blow up or grind to a halt, dop853 and also
> dopri5 are doing just fine while Matlab's dopri5 solver fails as
> expected. It's a mystery to me.)

Some extra tests seem in order. How does method=adams in ode fare?
Could you quickly put together an IPython notebook ...

> In any case, it would be very nice to develop a clear plan for
> cleaning up the ODE solver backends in SciPy. Once there is a clear
> target, it might be easier to see what is easy to do and what is
> missing on the backend side.

Yes. We don't have our own backends, we only wrap backends. In the
end, those backends with continued development will win out. Wrapping
solvers is a tricky business, as it is very hard to keep the design of
the solver from slipping through.

> Regards,
> Marcel
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.oliver at jacobs-university.de  Tue Jan 26 05:43:26 2016
From: m.oliver at jacobs-university.de (Marcel Oliver)
Date: Tue, 26 Jan 2016 11:43:26 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
Message-ID: <22183.19918.593905.530551@xps13.localdomain>

Benny Malengier writes:
[...]
> Yes, rootfn does that. A nicer interface with onroot functionality
> was added to CVODE by a contributor; I just extended it to the IDA
> solver. It is only available through the solve method of the solvers
> (as STEP methods just stop). Example:
> https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall.py
> Same with a fixed tstop added in a list of output times:
> https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall_tstop.py

Thanks, I'll be looking at that!

(Side remark: tried to compile odes today on a Fedora 23 machine, but
it fails with

gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
error: Command "gcc -pthread -Wno-unused-result -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ibuild/src.linux-x86_64-3.4 -I/usr/lib64/python3.4/site-packages/numpy/core/include -I/usr/include/python3.4m -c build/src.linux-x86_64-3.4/fortranobject.c -o build/temp.linux-x86_64-3.4/build/src.linux-x86_64-3.4/fortranobject.o" failed with exit status 1

Seems there are some hard-coded paths somewhere; did not have time to
track this down other than noting that I get this error, so I may have
done something stupid - don't worry if this is the case.)

[...]
> 4. odes includes DAE solvers. I wonder if it would not be more
>    natural to pass the algebraic part of the equation as a separate
>    keyword argument and let the solver wrapper decide, based on the
>    presence or absence of this argument, whether to call a DAE
>    backend.
>
> These things all require a lot of boilerplate code. In the end, my
> experience is that the documentation of the original solvers is the
> best place to learn the details, so keeping variable names the same
> as in the original options is more important than something that is
> more pythonic. [...]
> About the transparent algebraic part: the backends need a lot of
> info on the algebraic part. You also need to structure your variables
> specifically if you e.g. want a banded Jacobian. And the calling
> sequence of all functions becomes different, with an extra parameter
> in and out (ydot). Quite a lot of boilerplate that slows your wrapper
> down will be needed, I'm afraid.

I see the point. But then one could do this only for the default
solver suite (I assume you'd go for SUNDIALS), and if it's necessary
to replace the backend, then the non-default solver should get the
wrapper code to make this transparent.
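To make it concrete, I am imagining glue of roughly this shape (purely
hypothetical: the function name and dispatch are made up, and a real
wrapper would need much more care with variable structure and
Jacobians):

import numpy as np

def make_residual(f, g=None):
    # Hypothetical glue: combine an ODE right-hand side f(t, y) and an
    # optional algebraic part g(t, y) into an IDA-style residual
    # F(t, y, ydot) = 0, so a wrapper could dispatch on the presence of g.
    if g is None:
        return lambda t, y, ydot: ydot - np.asarray(f(t, y))
    def residual(t, y, ydot):
        fy = np.asarray(f(t, y))
        nd = fy.shape[0]                       # differential components
        return np.concatenate([ydot[:nd] - fy, np.asarray(g(t, y))])
    return residual

# made-up system: one differential equation plus one algebraic constraint
F = make_residual(lambda t, y: np.array([y[1]]),
                  lambda t, y: np.array([y[0]**2 + y[1]**2 - 1.0]))
print(F(0.0, np.array([1.0, 0.0]), np.array([0.0, 0.0])))   # -> [0. 0.]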
> 5. As I wrote before, the API should deal transparently with
>    d-dimensional vector fields.
>
> I'm not really following. You mean unwrap and wrap again like the
> complex_ode solver does now in scipy?
> http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.complex_ode.html
>
> Writing your own wrapper for your problem does not seem like a big
> task to me.

I have done this many times, but it feels like something the computer
should be doing for me. I am coming from the user perspective, not as
a scipy developer, so I don't want the ugliness in MY code...

The other question is that best practice is not clear to me. What do
I need to do to avoid extra copies of the data, which should be
possible in general? A harder problem is probably what to do, e.g.,
for systems of reaction-diffusion equations where one would want to
keep the sparse structure. Not sure if this is even possible to
address in a sufficiently general way.

> On smaller systems of ODEs, I have found that ode "dop853" is
> extremely accurate and seems capable of dealing with rather stiff
> problems even though I have no clue how it does it. (E.g., for the
> Van der Pol oscillator with large stiffness parameter, where other
> explicit solvers either blow up or grind to a halt, dop853 and also
> dopri5 are doing just fine while Matlab's dopri5 solver fails as
> expected. It's a mystery to me.)
>
> Some extra tests seem in order. How does method=adams in ode fare?
> Could you quickly put together an IPython notebook ...

I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.

In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
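(In outline, the snippet does something like the following; this is a
sketch rather than the attached prob4.py itself, and the stiffness
parameter and time grid here are illustrative. The names van_der_pol,
r1, r2, tt, and sol1 match the discussion that follows.)

import numpy as np
from scipy.integrate import ode

mu = 1000.0  # large stiffness parameter (illustrative value)

def van_der_pol(t, y):
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

y0 = [2.0, 0.0]
tt = np.linspace(0.0, 3000.0, 200)  # output grid (illustrative)

r1 = ode(van_der_pol).set_integrator('dop853', nsteps=100000)
r1.set_initial_value(y0, tt[0])
sol1 = [r1.integrate(T) for T in tt[1:]]

r2 = ode(van_der_pol).set_integrator('vode', method='bdf')
r2.set_initial_value(y0, tt[0])
sol2 = [r2.integrate(T) for T in tt[1:]]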
Best,
Marcel

PS.: My motivation for looking at the solver API comes from a nascent
book project where the idea is to include computer experiments on
nonlinear dynamical systems done in Scientific Python. Thus, I am
exploring best practice examples that are generic for various problem
domains.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: prob4.py
Type: application/octet-stream
Size: 1032 bytes
Desc: not available
URL: 

From benny.malengier at gmail.com  Tue Jan 26 06:03:26 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Tue, 26 Jan 2016 12:03:26 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22183.19918.593905.530551@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
Message-ID: 

2016-01-26 11:43 GMT+01:00 Marcel Oliver :

> (Side remark: tried to compile odes today on a Fedora 23 machine, but
> it fails with
>
> gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
> gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
> error: Command "gcc -pthread -Wno-unused-result [...] -c build/src.linux-x86_64-3.4/fortranobject.c -o build/temp.linux-x86_64-3.4/build/src.linux-x86_64-3.4/fortranobject.o" failed with exit status 1
>
> Seems there are some hard-coded paths somewhere; did not have time to
> track this down other than noting that I get this error, so I may
> have done something stupid - don't worry if this is the case.)

This seems to be the numpy/f2py part that is crashing; that part is
there to expose the ddaspk solver in odes. Still, the Fortran
compiling should work. I am using a Debian system myself. You should
check which package provides the specs file sought for:
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1

Benny
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benny.malengier at gmail.com  Tue Jan 26 06:48:14 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Tue, 26 Jan 2016 12:48:14 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22183.19918.593905.530551@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
Message-ID: 

2016-01-26 11:43 GMT+01:00 Marcel Oliver :

> > Some extra tests seem in order. How does method=adams in ode fare?
> > Could you quickly put together an IPython notebook ...
>
> I attach a small code snippet. In fact, my memory failed me in that
> dop853 indeed complains about insufficient nmax (nsteps in the Python
> API). What breaks is the accuracy of the solution, but it's still
> remarkably good and can be controlled by nsteps. It's obvious that
> one should not solve this kind of problem with dop853, but I am still
> amazed that it does not fall flat on its face.
> In fact, vode/Adams dies completely (as it should) and vode/BDF gives
> lots of warnings and computes a completely wrong solution (BDF
> schemes are supposed to work here). lsoda is fine, though.

You forgot with_jacobian=True when doing the BDF method via ode. So
change it into:

r2 = ode(van_der_pol).set_integrator('vode', method='bdf', with_jacobian=True)

Normally, as vode is for stiff methods, you set ml or mu to indicate
how banded the Jacobian is. Here you want a full Jacobian, so you
should indicate this; see the doc. The result then is the same as with
lsoda.

Can I use your code in an example under the scipy license? I'll see
what odes does.

Benny
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benny.malengier at gmail.com  Tue Jan 26 08:49:19 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Tue, 26 Jan 2016 14:49:19 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
Message-ID: 

As a follow-up, I adapted your code to add odes, and also cvode
versions via the odes scikit. Timings on my PC:

Time for dop853: 5.587895
Time for vode/BDF: 0.005166
Time for lsoda: 0.006365
Time for cvode/BDF: 0.006328
Time for cvode/BDF - class: 0.006171
Time for cvode/BDF - cython: 0.003059
Time for lsoda - cython: 0.00459
Time for vode/BDF - cython: 0.003977

So, fastest is odes with cvode and a cython rhs, then VODE with cython
(almost 30% slower), then LSODA with cython.

Without cython, VODE is fastest, then cvode without a wrapper around
the function (almost 20% slower), then normal cvode = lsoda.

Nice to see C beating Fortran, but that is probably due to the round
trip to Python.

Benny

2016-01-26 12:48 GMT+01:00 Benny Malengier :

> You forgot with_jacobian=True when doing the BDF method via ode. So
> change it into:
>
> r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
> with_jacobian=True)
>
> Normally, as vode is for stiff methods, you set ml or mu to indicate
> how banded the Jacobian is. Here you want a full Jacobian, so you
> should indicate this; see the doc. The result then is the same as
> with lsoda.
>
> Can I use your code in an example under the scipy license? I'll see
> what odes does.
>
> Benny

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benny.malengier at gmail.com  Tue Jan 26 08:50:17 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Tue, 26 Jan 2016 14:50:17 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
Message-ID: 

Now the code is attached; hopefully it comes through.

Benny

2016-01-26 14:49 GMT+01:00 Benny Malengier :

> As a follow-up, I adapted your code to add odes, and also cvode
> versions via the odes scikit. Timings on my PC:
>
> Time for dop853: 5.587895
> Time for vode/BDF: 0.005166
> Time for lsoda: 0.006365
> Time for cvode/BDF: 0.006328
> Time for cvode/BDF - class: 0.006171
> Time for cvode/BDF - cython: 0.003059
> Time for lsoda - cython: 0.00459
> Time for vode/BDF - cython: 0.003977
>
> So, fastest is odes with cvode and a cython rhs, then VODE with
> cython (almost 30% slower), then LSODA with cython.
>
> Without cython, VODE is fastest, then cvode without a wrapper around
> the function (almost 20% slower), then normal cvode = lsoda.
>
> Nice to see C beating Fortran, but that is probably due to the round
> trip to Python.
>
> Benny

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: van_der_pol.py
Type: text/x-python
Size: 3302 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: van_der_pol_fun.pyx
Type: application/octet-stream
Size: 1276 bytes
Desc: not available
URL: 

From m.oliver at jacobs-university.de  Tue Jan 26 09:08:14 2016
From: m.oliver at jacobs-university.de (Marcel Oliver)
Date: Tue, 26 Jan 2016 15:08:14 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
Message-ID: <22183.32206.575694.976113@xps13.localdomain>

Benny Malengier writes:
> You forgot with_jacobian=True when doing the BDF method via ode. So
> change it into:
>
> r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
> with_jacobian=True)
>
> Normally, as vode is for stiff methods, you set ml or mu to indicate
> how banded the Jacobian is. Here you want a full Jacobian, so you
> should indicate this; see the doc.

Oops, I completely overlooked that vode is defaulting to simple
fixed-point iteration in the BDF solver. Thanks for pointing this
out!

I wonder if there is any situation where one would want to use BDF
and still not want to use a full Newton or quasi-Newton iteration,
since the convergence condition for a simple fixed-point iteration is
basically as bad as using an explicit solver in the first place: for
the implicit relation y = y_n + h*beta*f(y), the fixed-point map
contracts only if h*beta*L < 1 (with L the Lipschitz constant of f),
which is an explicit-type step size restriction. But there is
certainly educational value in seeing this behavior...

> Can I use your code in an example under the scipy license? I'll see
> what odes does.

Absolutely. It's pretty trivial of course, so feel free to use and/or
adapt.

Best,
Marcel

PS.: I'll try to get odes to compile later.

From helmrp at yahoo.com  Tue Jan 26 10:16:04 2016
From: helmrp at yahoo.com (The Helmbolds)
Date: Tue, 26 Jan 2016 15:16:04 +0000 (UTC)
Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 147, Issue 20
In-Reply-To: 
References: 
Message-ID: <1497282122.414210.1453821364951.JavaMail.yahoo@mail.yahoo.com>

However, if the inputs were a dictionary

----
"You won't find the right answers if you don't ask the right questions!" (Robert Helmbold, 2013)
From larson.eric.d at gmail.com  Tue Jan 26 13:56:38 2016
From: larson.eric.d at gmail.com (Eric Larson)
Date: Tue, 26 Jan 2016 13:56:38 -0500
Subject: [SciPy-Dev] Resampling using polyphase filtering
Message-ID: 

Hey all,

I have made a PR to scipy to add signal resampling using polyphase
filtering (via upfirdn) to complement the existing FFT-based
resampling method currently available via resample(...). Feel free to
take a look and leave comments if interested:

https://github.com/scipy/scipy/pull/5749

Eric
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Tue Jan 26 15:49:30 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 26 Jan 2016 13:49:30 -0700
Subject: [SciPy-Dev] Numpy 1.11.0b1 is out
Message-ID: 

Hi All,

I'm pleased to announce that Numpy 1.11.0b1 is now available on
sourceforge. This is a source release, as the mingw32 toolchain is
broken. Please test it out and report any errors that you discover.
Hopefully we can do better with 1.11.0 than we did with 1.10.0 ;)

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at kelman.net  Wed Jan 27 00:06:15 2016
From: tony at kelman.net (Tony Kelman)
Date: Wed, 27 Jan 2016 05:06:15 +0000 (UTC)
Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release
References: 
Message-ID: 

Nathaniel Smith pobox.com> writes:

> > Are the existing mingwpy patches public somewhere?
>
> Not yet.
That's "phase 1" in the proposal that was just funded > https://mingwpy.github.io/proposal_december2015.html > and so I think Carl's going to start working on it, like, tomorrow or > something like that. Again, patience is needed, but hopefully not > *too* much patience . > > -n Isn't distributing a modified GCC without publicly posting the patches that were used a violation of the GPL? From njs at pobox.com Wed Jan 27 00:20:51 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Jan 2016 21:20:51 -0800 Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release In-Reply-To: References: Message-ID: On Tue, Jan 26, 2016 at 9:06 PM, Tony Kelman wrote: > Nathaniel Smith pobox.com> writes: > >> > Are the existing mingwpy patches public somewhere? >> >> Not yet. That's "phase 1" in the proposal that was just funded >> https://mingwpy.github.io/proposal_december2015.html >> and so I think Carl's going to start working on it, like, tomorrow or >> something like that. Again, patience is needed, but hopefully not >> *too* much patience . >> >> -n > > Isn't distributing a modified GCC without publicly posting the > patches that were used a violation of the GPL? Yes. -n -- Nathaniel J. Smith -- https://vorpus.org From cmkleffner at gmail.com Wed Jan 27 04:35:33 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Wed, 27 Jan 2016 10:35:33 +0100 Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release In-Reply-To: References: Message-ID: see https://github.com/mingwpy/mingw-builds, it's work in progress Carl 2016-01-27 6:06 GMT+01:00 Tony Kelman : > Nathaniel Smith pobox.com> writes: > > > > Are the existing mingwpy patches public somewhere? > > > > Not yet. That's "phase 1" in the proposal that was just funded > > https://mingwpy.github.io/proposal_december2015.html > > and so I think Carl's going to start working on it, like, tomorrow or > > something like that. Again, patience is needed, but hopefully not > > *too* much patience . > > > > -n > > Isn't distributing a modified GCC without publicly posting the > patches that were used a violation of the GPL? > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmail.com Wed Jan 27 13:31:16 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Wed, 27 Jan 2016 19:31:16 +0100 Subject: [SciPy-Dev] win32 binaries and scipy 0.17.0 release In-Reply-To: References: Message-ID: To be a little bit more specific, it is the mingwpy-dev branch, see https://github.com/mingwpy/mingw-builds/blob/mingwpy-dev/mingwpy_readme.md 2016-01-27 10:35 GMT+01:00 Carl Kleffner : > see https://github.com/mingwpy/mingw-builds, it's work in progress > > Carl > > 2016-01-27 6:06 GMT+01:00 Tony Kelman : > >> Nathaniel Smith pobox.com> writes: >> >> > > Are the existing mingwpy patches public somewhere? >> > >> > Not yet. That's "phase 1" in the proposal that was just funded >> > https://mingwpy.github.io/proposal_december2015.html >> > and so I think Carl's going to start working on it, like, tomorrow or >> > something like that. Again, patience is needed, but hopefully not >> > *too* much patience . >> > >> > -n >> >> Isn't distributing a modified GCC without publicly posting the >> patches that were used a violation of the GPL? 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Thu Jan 28 15:51:49 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 28 Jan 2016 13:51:49 -0700
Subject: [SciPy-Dev] Numpy 1.11.0b2 released
Message-ID: 

Hi All,

I am pleased to announce the Numpy 1.11.0b2 release. The first beta
was a damp squib due to missing files in the released source archives;
this release fixes that. The new source files may be downloaded from
sourceforge; no binaries will be released until the mingw toolchain
problems are sorted. Please test and report any problems.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthew.brett at gmail.com  Thu Jan 28 16:29:11 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 28 Jan 2016 13:29:11 -0800
Subject: [SciPy-Dev] [Numpy-discussion] Numpy 1.11.0b2 released
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Thu, Jan 28, 2016 at 12:51 PM, Charles R Harris wrote:
> Hi All,
>
> I am pleased to announce the Numpy 1.11.0b2 release. The first beta
> was a damp squib due to missing files in the released source
> archives; this release fixes that. The new source files may be
> downloaded from sourceforge; no binaries will be released until the
> mingw toolchain problems are sorted.
>
> Please test and report any problems.

OSX wheels build OK:
https://travis-ci.org/MacPython/numpy-wheels/builds/105521850

Y'all can test with:

pip install --pre --trusted-host wheels.scipy.org -f http://wheels.scipy.org numpy

Cheers,
Matthew

From benny.malengier at gmail.com  Fri Jan 29 09:29:07 2016
From: benny.malengier at gmail.com (Benny Malengier)
Date: Fri, 29 Jan 2016 15:29:07 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: <22183.32206.575694.976113@xps13.localdomain>
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
	<22183.32206.575694.976113@xps13.localdomain>
Message-ID: 

I would like to come back to this issue. I added a dopri wrapper to
the odes scikit,
https://github.com/bmcage/odes/blob/master/scikits/odes/dopri5.py
which wraps the scipy.integrate calls to dopri5 and dop853.

When doing tests with the Van der Pol example, I see this solver
fail. The seemingly good solution is actually bad. In the test script
you sent, you integrate via:

sol1 = [r1.integrate(T) for T in tt[1:]]

What this does is actually raise an error after time 166.49 sec every
time, but only one error is printed to the output, the others
seemingly suppressed, and you keep restarting the solver via this
call, even if it was not successful.

With the wrapper, after time 166.49, solutions never reach the
output; they go to sol1.errors, assuming you did

sol1 = r1b.solve(tt, y0)

and as a user you must decide what to do.

This somewhat proves that returning output even when a step fails
(not satisfying atol/rtol), as ode.integrate does, is not good
practice. In the above case, with scipy.ode, you should test for
r1.successful() after every integrate, which you failed to do. With
the odes scikit approach, wrong output just stops the solve routine
and goes to sol.errors.

The dopri warning output is to increase nsteps. If you do that, and
restart at the last computed solution, the error of dop853 becomes:

UserWarning: dop853: problem is probably stiff (interrupted)

So an indication to use another solver.
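In code, the contrast is roughly this (a sketch: the odes rhs
signature and the result attributes are written from memory and may
differ in detail; van_der_pol, y0, tt, r1 as in the earlier test
script):

# scipy.integrate.ode: you must check successful() after every call,
# otherwise you keep restarting a solver whose last step failed
sol1 = []
for T in tt[1:]:
    y = r1.integrate(T)
    if not r1.successful():
        break
    sol1.append(y)

# odes scikit: failed steps never reach the values, they end up in errors
from scikits.odes import ode as odes_ode

def van_der_pol_odes(t, y, ydot):   # odes-style rhs: fills ydot in place
    ydot[0] = y[1]
    ydot[1] = mu * (1.0 - y[0]**2) * y[1] - y[0]

r1b = odes_ode('dopri5', van_der_pol_odes)
sol1 = r1b.solve(tt, y0)
# sol1.values holds the successful part; sol1.errors and sol1.message
# say where and why the solver stopped (attribute names assumed)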
Your van der pol example:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/van_der_pol.py
which shows this behavior via the scikit API.

Benny

2016-01-26 15:08 GMT+01:00 Marcel Oliver :

> Oops, I completely overlooked that vode is defaulting to simple
> fixed-point iteration in the BDF solver. Thanks for pointing this
> out!
>
> I wonder if there is any situation where one would want to use BDF
> and still not want to use a full Newton or quasi-Newton iteration,
> since the convergence condition for a simple fixed-point iteration
> is basically as bad as using an explicit solver in the first place.
> But there is certainly educational value in seeing this behavior...
>
> Absolutely. It's pretty trivial of course, so feel free to use
> and/or adapt.
>
> Best,
> Marcel
>
> PS.: I'll try to get odes to compile later.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.oliver at jacobs-university.de  Fri Jan 29 10:56:53 2016
From: m.oliver at jacobs-university.de (Marcel Oliver)
Date: Fri, 29 Jan 2016 16:56:53 +0100
Subject: [SciPy-Dev] ode and odeint callable parameters
In-Reply-To: 
References: <569E5AEE.20002@ensta-bretagne.fr>
	<22175.24939.214139.719216@xps13.localdomain>
	<569F6F73.3090406@ensta-bretagne.fr>
	<22182.9008.344108.842230@xps13.localdomain>
	<22183.19918.593905.530551@xps13.localdomain>
	<22183.32206.575694.976113@xps13.localdomain>
Message-ID: <22187.35781.627388.377729@xps13.localdomain>

Benny Malengier writes:
> This somewhat proves that returning output even when a step fails
> (not satisfying atol/rtol), as ode.integrate does, is not good
> practice. In the above case, with scipy.ode, you should test for
> r1.successful() after every integrate, which you failed to do.

I totally agree. In fact, I used this example to teach students to be
critical about solver output. We tested with a number of solvers - not
just library solvers, mostly simple self-written ones - and all of the
failing ones except dop853 fail catastrophically on this example.
So in some sense dop853 has the worst possible behavior here, but on
the other hand I am still amazed how close it stays (qualitatively) to
the correct solution. And dop853 does very well for non-stiff
problems. In particular, it is only very mildly dissipative on
energy-conserving systems.

> With the odes scikit approach, wrong output just stops the solve
> routine and goes to sol.errors. The dopri warning output is to
> increase nsteps. If you do that, and restart at the last computed
> solution, the error of dop853 becomes:
>
> UserWarning: dop853: problem is probably stiff (interrupted)
>
> So an indication to use another solver.

Interesting, I guess I never increased nsteps that much. I absolutely
agree that an explicit fail is much better.

Best,
Marcel