From charlesr.harris at gmail.com  Mon Nov  2 11:37:28 2020
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 2 Nov 2020 09:37:28 -0700
Subject: [SciPy-User] NumPy 1.19.4 release
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce the release of NumPy 1.19.4. NumPy 1.19.4 is a quick release to revert the OpenBLAS library version. It was hoped that the 0.3.12 OpenBLAS version used in 1.19.3 would work around the Microsoft fmod bug, but problems in some docker environments turned up. Instead, 1.19.4 will use the older library and run a sanity check on import, raising an error if the problem is detected. Microsoft is aware of the problem and has promised a fix; users should upgrade when it becomes available.

This release supports Python 3.6-3.9. NumPy wheels for this release can be downloaded from PyPI; source archives, release notes, and wheel hashes are available on GitHub. Linux users will need pip >= 19.3 in order to install manylinux2010 and manylinux2014 wheels.

*Contributors*

A total of 1 person contributed to this release. People with a "+" by their names contributed a patch for the first time.

- Charles Harris

*Pull requests merged*

A total of 2 pull requests were merged for this release.

- #17679: MAINT: Add check for Windows 10 version 2004 bug.
- #17680: REV: Revert OpenBLAS to 1.19.2 version for 1.19.4

Cheers,

Charles Harris

From itamar at pythonspeed.com  Thu Nov 19 13:24:30 2020
From: itamar at pythonspeed.com (Itamar Turner-Trauring)
Date: Thu, 19 Nov 2020 13:24:30 -0500
Subject: [SciPy-User] Fil v0.11.0, a memory profiler for scientists and data scientists
Message-ID:

Your code reads some data, processes it, and uses too much memory. In order to reduce memory usage, you need to figure out:

1. Where peak memory usage is, also known as the high-water mark.
2. What code was responsible for allocating the memory that was present at that peak moment.

That's exactly what Fil will help you find. Fil is an open source memory profiler designed for data processing applications written in Python, and it includes native support for Jupyter. It is designed to be high-performance and easy to use. At the moment it only runs on Linux and macOS.

You can learn more about Fil at https://pythonspeed.com/fil or on GitHub at https://github.com/pythonspeed/filprofiler/.

v0.11 includes performance improvements and less intrusive behavior under Jupyter.
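As a quick illustration (not from the original announcement), here is a hypothetical script of the kind Fil is aimed at; assuming Fil is installed with `pip install filprofiler`, it can be profiled with the `fil-profile run` entry point described in Fil's documentation:

    # example.py -- hypothetical script; the peak (~3.2 GB for a
    # 20000x20000 float64 array) should be attributed to the np.ones() line.
    import numpy as np

    def process():
        data = np.ones((20_000, 20_000))  # large temporary allocation
        return data.sum()

    if __name__ == "__main__":
        print(process())

Running `fil-profile run example.py` produces a flame-graph report showing which lines of code were responsible for the memory present at the moment of peak usage.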
Fil vs. other Python memory tools

There are two distinct patterns of Python usage, each with its own source of memory problems.

In a long-running server, memory usage can grow indefinitely due to memory leaks. That is, some memory is not being freed.

* If the issue is in Python code, tools like `tracemalloc` and Pympler can tell you which objects are leaking and what is preventing them from being freed.
* If you're leaking memory in C code, you can use tools like Valgrind.

Fil, however, is not aimed at memory leaks, but at the other use case: data processing applications. These applications load in data, process it somehow, and then finish running. The problem with these applications is that they can, on purpose or by mistake, allocate huge amounts of memory. It might get freed soon after, but if you allocate 16GB of RAM and only have 8GB in your computer, the lack of leaks doesn't help you.

Fil will therefore tell you, in an easy to understand way:

1. Where peak memory usage is, also known as the high-water mark.
2. What code was responsible for allocating the memory that was present at that peak moment.
3. This includes C/Fortran/C++/whatever extensions that don't use Python's memory allocation API (`tracemalloc` only tracks Python memory APIs).

From gilles.bordas73 at gmail.com  Fri Nov 20 08:39:01 2020
From: gilles.bordas73 at gmail.com (Bordas Gilles)
Date: Fri, 20 Nov 2020 14:39:01 +0100
Subject: [SciPy-User] Information about interpolate.griddata updates
Message-ID:

Hello, this is my first message to the scipy-user list.

I obtain different results from scipy.interpolate.griddata with scipy 1.3.2 and a later version, scipy 1.5.2. I perform cubic 2D interpolation, and the two versions of scipy produce non-negligible differences. I can't find any explanation for this in the SciPy release notes on scipy.org, nor on Stack Overflow.

Has anyone ever encountered this problem: updating scipy changed the results given by scipy.interpolate.griddata for cubic 2D interpolation? See my related Stack Overflow post.

Thank you
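A minimal reproduction sketch along these lines (synthetic data standing in for the poster's, which is not shown) could be run once under each SciPy version to quantify the difference:

    # repro_griddata.py -- run under scipy 1.3.2 and again under 1.5.2,
    # then compare the two saved arrays.
    import numpy as np
    import scipy
    from scipy.interpolate import griddata

    rng = np.random.RandomState(0)  # fixed seed: identical inputs in both runs
    points = rng.rand(200, 2)       # scattered sample locations in the unit square
    values = np.sin(6 * points[:, 0]) * np.cos(6 * points[:, 1])

    # Regular evaluation grid, kept away from the square's edges to limit
    # NaNs outside the convex hull of the sample points.
    grid_x, grid_y = np.mgrid[0.05:0.95:50j, 0.05:0.95:50j]

    result = griddata(points, values, (grid_x, grid_y), method='cubic')
    np.save("griddata_cubic_%s.npy" % scipy.__version__, result)
    print(scipy.__version__, np.nanmax(np.abs(result)))

Comparing the two saved arrays with np.nanmax(np.abs(a - b)) shows whether the differences are at rounding level or substantive. Note that 2D cubic griddata is implemented via CloughTocher2DInterpolator on top of a Qhull Delaunay triangulation, and the triangulation (and hence values near triangle boundaries) can legitimately change between versions, so some divergence is possible even without a bug.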