From p.vanmulbregt at comcast.net Mon Jun 5 18:23:20 2017
From: p.vanmulbregt at comcast.net (Paul van Mulbregt)
Date: Mon, 5 Jun 2017 18:23:20 -0400
Subject: [SciPy-Dev] scipy.special.smirnov, scipy/special/cephes/kolmogorov.c
Message-ID: 

Hi,

Re: scipy.special.smirnov, scipy/special/cephes/kolmogorov.c

While looking at some problems I encountered with the KSone statistics distribution, I delved deep into the scipy.special functions smirnov() and smirnovi() in kolmogorov.c and came across some accuracy and convergence issues. https://github.com/scipy/scipy/issues/7456

I have taken a stab at modifications that would address them, but the issues turned out to be much more subtle/interesting than I was expecting.

1. The first part involves modifying algorithms and C code in scipy/special/cephes/kolmogorov.c. The cephes code appears to have undergone mostly maintenance changes over the last N years. Is there some documentation on policies for the cephes C code? Such as:

* What kind of changes are being accepted?
* When to report OVERFLOW or UNDERFLOW?
* How much commenting is appropriate for algorithm changes?
* Is it tabs or spaces in the C code? (Both styles appear to be in use.)
* Are there specs, either explicit or implicit, for the functions in cephes?
* What is required for an algorithmic change to be considered?
* What is required for a new function to be added?
* Is the test suite expansive enough to detect newly introduced errors?

[Don't know if it is appropriate for this list, but if anyone is willing to take a look at the mods as they stand and offer comments before I've gone way far in a wrong direction, that would be welcomed. https://github.com/pvanmulbregt/scipy/tree/smirnov_stability ]

2. The second part of the change would involve exporting a newly created C function smirnovp(), the derivative of smirnov(), to be used in the ksone python code. Is there documentation on what would be required to export a new cephes function?
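[Aside, for concreteness: the quantity smirnov(n, d) computes — the survival probability of the one-sided Kolmogorov-Smirnov statistic — has a closed-form finite sum (the Birnbaum-Tingey formula). A pure-Python sketch of that sum, useful only for generating reference values at small n; the function name and naive O(n) floating-point summation here are illustrative, not the cephes implementation:

```python
from math import comb

def smirnov_sf_reference(n, d):
    """Reference value for Prob(D_n+ >= d) via the Birnbaum-Tingey sum:
    sum_{v=0}^{floor(n(1-d))} C(n,v) d (d + v/n)^(v-1) (1 - d - v/n)^(n-v).
    Naive summation -- for sanity checks only; it loses accuracy for
    large n, which is exactly the regime where the cephes code needs care."""
    if d <= 0.0:
        return 1.0
    if d >= 1.0:
        return 0.0
    v_max = int(n * (1.0 - d))  # floor, up to rounding of n*(1-d)
    total = 0.0
    for v in range(v_max + 1):
        total += (comb(n, v) * d
                  * (d + v / n) ** (v - 1)
                  * (1.0 - d - v / n) ** (n - v))
    return total
```

For n = 1 the sum reduces to 1 - d, which makes a quick sanity check: smirnov_sf_reference(1, 0.3) should give 0.7.]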
So far I've found that I needed to modify a dozen files just to get my installation to compile and pass tests:

scipy/special/__init__.py
scipy/special/_legacy.pxd
scipy/special/_ufuncs.pyx
scipy/special/_ufuncs_defs.h
scipy/special/add_newdocs.py
scipy/special/cephes.h
scipy/special/cephes/cephes_names.h
scipy/special/cython_special.pxd
scipy/special/cython_special.pyx
scipy/special/generate_ufuncs.py
scipy/special/tests/test_basic.py
scipy/special/tests/test_cython_special.py

That suggests that the process is non-trivial and that there must be even more. Can someone point me to the place(s) in the doc where this is described?

Thanks!
Paul

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Tue Jun 6 15:23:58 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 6 Jun 2017 21:23:58 +0200
Subject: [SciPy-Dev] scipy.special.smirnov, scipy/special/cephes/kolmogorov.c
In-Reply-To: 
References: 
Message-ID: 

Paul van Mulbregt kirjoitti 06.06.2017 klo 00:23:
> 1. The first part involves modifying algorithms and C code in
> scipy/special/cephes/kolmogorov.c.
> The cephes code appears to have undergone mostly maintenance changes the
> last N years.
> Is there some documentation on policies for the cephes C code?
> Such as:
> * What kind of changes are being accepted?
> * When to report OVERFLOW or UNDERFLOW?
> * How much commenting is appropriate for algorithm changes?
> * Is it tabs or spaces in the C code? (Both styles appear to be in use.)
> * Are there specs, either explicit or implicit, for the functions in cephes?
> * What is required for an algorithmic change to be considered?
> * What is required for a new function to be added?
> * Is the test suite expansive enough to detect newly introduced errors?

Bugfixes are OK, and you can add new functions --- although it probably would be clearer to not put them under the cephes/ folder, unless it's coupled to the existing implementation.
I would refrain from large reorganizations, even though at this point the library code is already a fork.

You do not need to add comments in the source code on *changes* you made. There may be such comments there, but this is not good style nowadays. As usual, it is however good to add enough comments, and especially the literature references, to make it possible for other people (years later) to understand what the code does and why. What changes were made and why should instead be explained in the commit messages in git, which records who did what.

To get an algorithm change accepted, you need to show that (i) the old behavior is wrong, and (ii) the new behavior is right, in a way that can be verified later on. The way to do this is to add appropriate tests to the test suite, e.g. tests against known-good ground truth values, across "difficult" parameter regimes, covering the branches of the algorithm(s) used as appropriate. If you have gcc and lcov installed, you can run 'python runtests.py --gcov -s special -m full' followed by 'python runtests.py --lcov-html' to generate a coverage report showing which parts of the code were exercised.

One possible source of "true" values is other libraries. In particular, if the function exists in mpmath, you can generate values on the fly using it. For data from other sources, you can save values in a text file under tests/data/local (see the README there on how the data files are used).

I don't know the status of smirnov, but unless the function has been touched since 2007 I would not trust the existing tests to test the function for *correctness*. The old test code for the stats functions AFAIK largely just assumes the implementations are correct and only smoke-tests the bindings. If it's a stats function, it may be tested indirectly via the stats routines, but this is probably not enough. So if you want to change the implementation, getting enough test data to compare against is probably necessary.

> 2.
The second part of the change would involve exporting a newly created
> C function smirnovp(), the derivative of smirnov(), to be used in the
> ksone python code.
> Is there documentation on what would be required to export a new cephes
> function?

The process is relatively straightforward once you know what to do:

1) Add the function to the list in generate_ufuncs.py. See the discussion in that file and look at how the other functions are done as an example.

2) Add an entry to add_newdocs.py to include the documentation.

The above two are the *only* files you modify manually (apart from the *.h and *.c files for the implementation itself, of course).

3) Run generate_ufuncs.py to regenerate the other files.

4) (Re)build Scipy as usual.

Pauli

From blairuk+scipydev at gmail.com Wed Jun 7 02:14:54 2017
From: blairuk+scipydev at gmail.com (Blair Azzopardi)
Date: Wed, 7 Jun 2017 07:14:54 +0100
Subject: [SciPy-Dev] ENH: Add PDF, CDF and parameter estimation for Stable Distributions · Issue #7374 · scipy/scipy
In-Reply-To: 
References: 
Message-ID: 

Hi

I submitted PR 7374 about a month ago and notice newer PRs have been labelled, e.g. scipy.stats etc. Is there an issue with this PR preventing it from being labelled and hopefully accepted into master?

https://github.com/scipy/scipy/pull/7374

Blair

From ralf.gommers at gmail.com Wed Jun 7 05:04:54 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 7 Jun 2017 21:04:54 +1200
Subject: [SciPy-Dev] ENH: Add PDF, CDF and parameter estimation for Stable Distributions · Issue #7374 · scipy/scipy
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 7, 2017 at 6:14 PM, Blair Azzopardi wrote:
> Hi
>
> I submitted PR 7374 about a month ago and notice newer PRs have been
> labelled, e.g. scipy.stats etc.
> Is there an issue with this PR preventing it
> from being labelled and hopefully accepted into master?

Hi Blair, no issue at all - I just started labelling PRs last night and skipped a few that looked from the title like I wanted/needed to review them rather than just label them and move on.

Cheers,
Ralf

> https://github.com/scipy/scipy/pull/7374
>
> Blair
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From blairuk+scipydev at gmail.com Wed Jun 7 05:12:34 2017
From: blairuk+scipydev at gmail.com (Blair Azzopardi)
Date: Wed, 7 Jun 2017 10:12:34 +0100
Subject: [SciPy-Dev] ENH: Add PDF, CDF and parameter estimation for Stable Distributions · Issue #7374 · scipy/scipy
In-Reply-To: 
References: 
Message-ID: 

On 7 Jun 2017 10:06, "Ralf Gommers" wrote:

> Hi Blair, no issue at all - I just started labelling PRs last night and
> skipped a few that looked from the title like I wanted/needed to review
> them rather than just label them and move on.
>
> Cheers,
> Ralf

Hi Ralf

That's good to know. I can make further adjustments if needed.

Thanks
Blair

-------------- next part --------------
An HTML attachment was scrubbed...
From charlesr.harris at gmail.com Wed Jun 7 15:25:59 2017
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 7 Jun 2017 13:25:59 -0600
Subject: [SciPy-Dev] NumPy 1.13.0 release
Message-ID: 

Hi All,

On behalf of the NumPy team, I am pleased to announce the NumPy 1.13.0 release. This release supports Python 2.7 and 3.4-3.6 and contains many new features. It is one of the most ambitious releases in the last several years. Some of the highlights and new functions are:

*Highlights*

- Operations like ``a + b + c`` will reuse temporaries on some platforms, resulting in less memory use and faster execution.
- Inplace operations check if inputs overlap outputs and create temporaries to avoid problems.
- New __array_ufunc__ attribute provides improved ability for classes to override default ufunc behavior.
- New np.block function for creating blocked arrays.

*New functions*

- New ``np.positive`` ufunc.
- New ``np.divmod`` ufunc provides more efficient divmod.
- New ``np.isnat`` ufunc tests for NaT special values.
- New ``np.heaviside`` ufunc computes the Heaviside function.
- New ``np.isin`` function, improves on ``in1d``.
- New ``np.block`` function for creating blocked arrays.
- New ``PyArray_MapIterArrayCopyIfOverlap`` added to NumPy C-API.

Wheels for the pre-release are available on PyPI. Source tarballs, zipfiles, release notes, and the changelog are available on github. A total of 102 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- A.
Jesse Jiryu Davis +
- Alessandro Pietro Bardelli +
- Alex Rothberg +
- Alexander Shadchin
- Allan Haldane
- Andres Guzman-Ballen +
- Antoine Pitrou
- Antony Lee
- B R S Recht +
- Baurzhan Muftakhidinov +
- Ben Rowland
- Benda Xu +
- Blake Griffith
- Bradley Wogsland +
- Brandon Carter +
- CJ Carey
- Charles Harris
- Christoph Gohlke
- Danny Hermes +
- David Hagen +
- David Nicholson +
- Duke Vijitbenjaronk +
- Egor Klenin +
- Elliott Forney +
- Elliott M Forney +
- Endolith
- Eric Wieser
- Erik M. Bray
- Eugene +
- Evan Limanto +
- Felix Berkenkamp +
- François Bissey +
- Frederic Bastien
- Greg Young
- Gregory R. Lee
- Importance of Being Ernest +
- Jaime Fernandez
- Jakub Wilk +
- James Cowgill +
- James Sanders
- Jean Utke +
- Jesse Thoren +
- Jim Crist +
- Joerg Behrmann +
- John Kirkham
- Jonathan Helmus
- Jonathan L Long
- Jonathan Tammo Siebert +
- Joseph Fox-Rabinovitz
- Joshua Loyal +
- Juan Nunez-Iglesias +
- Julian Taylor
- Kirill Balunov +
- Likhith Chitneni +
- Loïc Estève
- Mads Ohm Larsen
- Marein Könings +
- Marten van Kerkwijk
- Martin Thoma
- Martino Sorbaro +
- Marvin Schmidt +
- Matthew Brett
- Matthias Bussonnier +
- Matthias C. M. Troffaes +
- Matti Picus
- Michael Seifert
- Mikhail Pak +
- Mortada Mehyar
- Nathaniel J. Smith
- Nick Papior
- Oscar Villellas +
- Pauli Virtanen
- Pavel Potocek
- Pete Peeradej Tanruangporn +
- Philipp A +
- Ralf Gommers
- Robert Kern
- Roland Kaufmann +
- Ronan Lamy
- Sami Salonen +
- Sanchez Gonzalez Alvaro
- Sebastian Berg
- Shota Kawabuchi
- Simon Gibbons
- Stefan Otte
- Stefan Peterson +
- Stephan Hoyer
- Søren Fuglede Jørgensen +
- Takuya Akiba
- Tom Boyd +
- Ville Skyttä +
- Warren Weckesser
- Wendell Smith
- Yu Feng
- Zixu Zhao +
- Zé Vinícius +
- aha66 +
- drabach +
- drlvk +
- jsh9 +
- solarjoe +
- zengi +

Cheers,

Chuck
From p.vanmulbregt at comcast.net Wed Jun 7 16:12:55 2017
From: p.vanmulbregt at comcast.net (Paul van Mulbregt)
Date: Wed, 7 Jun 2017 16:12:55 -0400
Subject: [SciPy-Dev] How to preview changes to a doc file?
Message-ID: 

What is the recommended way to preview any documentation changes, such as after modifying restructured text (rst) files? After adding some additional math to an rst file, I'd like to see the result of the change, but haven't found a good way to do so. [I'm using a Mac, with Python 3.6.]

* Attempts to build the scipy doc have not succeeded for me ("setup.py build_sphinx" generates lots of dvipng errors; perhaps building the doc requires Py2.7?)
* The Mac editor Atom has some rst preview packages, but I haven't found one that can handle the \begin{eqnarray*} constructs in the scipy rst files.
* Stand-alone rst viewers which then opened in a browser didn't like the math either.
* One online viewer, http://rst.ninjs.org, did render the math in a fashion, after I had removed the ":nowrap:" and converted the backtick :math:`XXX` to stand-alone math. But it was a poor rendering when compared to the actual scipy html.

How do you preview doc changes?

Thanks,
Paul

From evgeny.burovskiy at gmail.com Wed Jun 7 18:51:09 2017
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Thu, 8 Jun 2017 01:51:09 +0300
Subject: [SciPy-Dev] How to preview changes to a doc file?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 7, 2017 at 11:12 PM, Paul van Mulbregt wrote:
> What is the recommended way to preview any documentation changes, such as
> after modifying restructured text (rst) files?

$ cd doc
$ make html-scipyorg

which builds the docs *for the installed scipy*.
To build the docs for the dev tree, use

$ python runtests.py --shell
$ cd /doc
$ make html-scipyorg

> After adding some additional math to an rst file, I'd like to see the result
> of the change, but haven't found a good way to do so.
> [I'm using a Mac, with Python 3.6.]
>
> * Attempts to build the scipy doc have not succeeded for me ("setup.py
> build_sphinx" generates lots of dvipng errors; perhaps building the doc
> requires Py2.7?)

Not anymore (modulo https://github.com/scipy/scipy/issues/7369 which might be a user error).

> * The Mac editor Atom has some rst preview packages, but I haven't found one
> that can handle the \begin{eqnarray*} constructs in the scipy rst files.
> * Stand-alone rst viewers which then opened in a browser didn't like the
> math either.
> * One online viewer, http://rst.ninjs.org, did render the math in a fashion,
> after I had removed the ":nowrap:" and converted the backtick :math:`XXX`
> to stand-alone math. But it was a poor rendering when compared to the
> actual scipy html.
>
> How do you preview doc changes?
>
> Thanks,
> Paul

From p.vanmulbregt at comcast.net Wed Jun 7 21:51:36 2017
From: p.vanmulbregt at comcast.net (Paul van Mulbregt)
Date: Wed, 7 Jun 2017 21:51:36 -0400
Subject: [SciPy-Dev] How to preview changes to a doc file?
In-Reply-To: 
References: 
Message-ID: 

Thanks Evgeni, see below for how that failed and what made it work.

> On Jun 7, 2017, at 6:51 PM, Evgeni Burovski wrote:
>
> On Wed, Jun 7, 2017 at 11:12 PM, Paul van Mulbregt wrote:
>> What is the recommended way to preview any documentation changes, such as
>> after modifying restructured text (rst) files?
>
> $ cd doc
> $ make html-scipyorg
>
> which builds the docs *for the installed scipy*.
> To build the docs for the dev tree, use
>
> $ python runtests.py --shell
> $ cd /doc
> $ make html-scipyorg

This last step failed due to "You need a recent enough version of matplotlib". But the doc build was using a Python 2.7, even though the python on my path is 3.6. I tracked down the Py2.7 that was being used, installed numpy, scipy, and matplotlib in *that* Py2.7 installation, repeated the build, and this time the make succeeded.

The process does seem to be explicitly asking for a python2.7 to build the doc. The first command in scipy/doc/Makefile is "PYVER = 2.7":

[~/src/python/scipy/doc]
$ make html-scipyorg
mkdir -p build/html build/doctrees
LANG=C python2.7 /usr/local/bin/sphinx-build -t scipyorg -b html -d build/doctrees source build/html-scipyorg
Running Sphinx v1.6.2
Scipy (VERSION 1.0.0.dev0+69ce114)
...

Is there some other env variable that needs setting in order to build with Py3.x?

>> After adding some additional math to an rst file, I'd like to see the result
>> of the change, but haven't found a good way to do so.
>> [I'm using a Mac, with Python 3.6.]
>>
>> * Attempts to build the scipy doc have not succeeded for me ("setup.py
>> build_sphinx" generates lots of dvipng errors; perhaps building the doc
>> requires Py2.7?)
>
> Not anymore (modulo https://github.com/scipy/scipy/issues/7369 which
> might be a user error).
>
>> * The Mac editor Atom has some rst preview packages, but I haven't found one
>> that can handle the \begin{eqnarray*} constructs in the scipy rst files.
>> * Stand-alone rst viewers which then opened in a browser didn't like the
>> math either.
>> * One online viewer, http://rst.ninjs.org, did render the math in a fashion,
>> after I had removed the ":nowrap:" and converted the backtick :math:`XXX`
>> to stand-alone math. But it was a poor rendering when compared to the
>> actual scipy html.
>>
>> How do you preview doc changes?
>> Thanks,
>> Paul

From b.rosa at unistra.fr Fri Jun 9 10:46:07 2017
From: b.rosa at unistra.fr (Benoit Rosa)
Date: Fri, 9 Jun 2017 16:46:07 +0200
Subject: [SciPy-Dev] Stochastic branch and bound optimization algorithm
Message-ID: 

Hi all,

I recently implemented a stochastic branch and bound algorithm in python for one of my projects (namely, 3D registration without an initial guess, requiring global optimization over the SE3 group). After trying basinhopping without much success, I went for a modified stochastic branch and bound algorithm, described in these two papers (disclaimer: I'm an author of one of them):

[1] C. Papazov and D. Burschka, Stochastic global optimization for robust point set registration, Comput. Vis. Image Understand. 115 (12) (2011) 1598-1609
[2] C. Gruijthuijsen, B. Rosa, P.T. Tran, J. Vander Sloten, E. Vander Poorten, D. Reynaerts, An automatic registration method for radiation-free catheter navigation guidance, J. Med. Robot Res. 1 (03) (2016), 1640009

Since I wanted to compare with other optimization algorithms, I implemented things by modifying the basinhopping file, and as such my implementation (git commit here: https://github.com/benoitrosa/scipy/commit/49a2c23b74b69dc4250e20e21db75bd071dfd92d ) is fully compatible with Scipy already. I have added a bit of documentation in the file too.

To give you an idea: on my project (for which I can't share the code now), this algorithm finds the global optimum over the SO(3) space (i.e.
rotations) in 4.5 seconds on average, whereas basinhopping takes more than 15 seconds and doesn't necessarily converge to the correct solution.

More about the algorithm itself: it's a standard branch and bound algorithm (i.e. recursive traversal of a tree and bisecting of leaves to find an optimum), with two additions:

1 - The tree is traversed stochastically. This is governed by a temperature parameter, much like simulated annealing algorithms. At each node, there is a probability of going towards a "less promising" (understand: higher cost function) branch, this probability being governed by the temperature parameter. In the beginning, "bad" branches will be selected, ensuring an exploration of large portions of the space. As the number of iterations increases the temperature decreases, and the algorithm moves more towards the most promising branches/leaves, getting higher resolution in that part of the space.

2 - When creating a new leaf and evaluating the cost function there, the value is propagated down the tree, until reaching the root or a node with a lower cost. This ensures a smart traversal of the tree. Moreover, it also makes sure that at any time the root node has the lowest cost, making it easy to query for the best solution when the algorithm is interrupted.

Another advantage of this algorithm is that parameter tuning is pretty easy. It only needs the bounds of the search space (no initial guess), a sampling function (default is uniform) and a cost function (obviously). Additional parameters are the number of iterations, and the start and stop temperatures (start should be high to accept many "bad" solutions in the beginning, and stop should tend to zero).

If that algorithm is of interest to the SciPy community, I'd be happy to clean up the code and make a pull request!
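[For illustration only: one common way to implement the temperature-controlled branch choice described in point 1 is a Boltzmann-style weighting over the children's costs. This is a generic sketch of that idea, not the code in the commit above — the function name and the exact weighting rule are assumptions, not what the cited papers prescribe:

```python
import math
import random

def pick_child(children, temperature, rng=random):
    """Choose a child node for descent.  `children` is a list of
    (cost, node) pairs.  At high temperature the choice is nearly
    uniform (exploration); as temperature -> 0 the lowest-cost child
    is almost always taken (exploitation)."""
    costs = [c for c, _ in children]
    best = min(costs)
    t = max(temperature, 1e-12)  # guard against a degenerate schedule
    # Boltzmann weights relative to the best child (avoids overflow).
    weights = [math.exp(-(c - best) / t) for c in costs]
    total = sum(weights)
    r = rng.random() * total
    for (c, node), w in zip(children, weights):
        r -= w
        if r <= 0:
            return node
    return children[-1][1]  # fall back on rounding error
```

With a decreasing temperature schedule across iterations, early calls wander into high-cost branches while late calls almost deterministically follow the cheapest one, matching the exploration-to-exploitation behavior described above.]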
Best,
Benoit

From ralf.gommers at gmail.com Sun Jun 18 04:24:13 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 18 Jun 2017 20:24:13 +1200
Subject: [SciPy-Dev] preparing 0.19.1 release
Message-ID: 

Hi all,

The 0.19.1 release is long overdue, so I'm aiming to get it out during the coming week. In particular the integrate.quad memory leak (gh-7214) is urgent. The PRs currently labelled with `backport-candidate` [1] are planned to go in. Backports for all but the two unmerged PRs are in https://github.com/scipy/scipy/pull/7502. If there are other issues/fixes that need to go in, please point them out.

The one thing that needs a decision is the behavior for float32 input to interpolation routines: https://github.com/scipy/scipy/issues/7273. Opinions very welcome.

Ralf

[1] https://github.com/scipy/scipy/pulls?utf8=%E2%9C%93&q=is%3Apr%20label%3Abackport-candidate

From michelemartone at users.sourceforge.net Wed Jun 21 05:51:55 2017
From: michelemartone at users.sourceforge.net (Michele Martone)
Date: Wed, 21 Jun 2017 11:51:55 +0200
Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library
Message-ID: <20170621095155.GE19876@localhost>

Hi.

I'm the author of the high performance multithreaded sparse matrix library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/

I'm *not* a user of SciPy/NumPy/Python, but using Cython I have written a proof-of-concept interface to librsb, named `PyRSB': https://github.com/michelemartone/pyrsb

PyRSB is in a prototypal state; e.g. it still lacks good error handling. Its interface is trivial, as it mimics that of SciPy's 'csr_matrix'. Advantages over csr_matrix are in fast multithreaded multiplication of huge sparse matrices. The intended application area is iterative solution of linear systems; it is particularly fast with symmetric matrices and many right-hand sides.
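[Background for readers who haven't used csr_matrix: CSR stores a sparse matrix as three arrays — the nonzero values, their column indices, and per-row offsets. A minimal pure-Python matrix-vector product over that layout, purely to illustrate the interface family being mimicked; librsb's internal RSB format is different and this sketch says nothing about it:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR matrix with n_rows = len(indptr) - 1.
    data[k] is the k-th stored nonzero, indices[k] its column, and
    row i's nonzeros occupy the slice indptr[i]:indptr[i+1]."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        y[i] = acc
    return y

# The 2x2 matrix [[1, 2], [0, 3]] in CSR form:
#   data = [1.0, 2.0, 3.0], indices = [0, 1, 1], indptr = [0, 2, 3]
```

Multithreaded libraries like librsb aim to speed up exactly this kernel (and its multi-right-hand-side variant) on large matrices.]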
With this email I am looking for prospective:

- users/testers
- developers (any interest to collaborate on / adopt / include the project?)

Looking forward to your feedback,
Michele

From ralf.gommers at gmail.com Thu Jun 22 06:27:23 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 22 Jun 2017 22:27:23 +1200
Subject: [SciPy-Dev] moved 0.19.1 tag
Message-ID: 

Hi,

Unfortunately I had to move the v0.19.1 tag by one commit; a 32-bit test error had snuck in (we don't have that in our regular TravisCI runs, to be fixed). If you have pulled from the repo in the last 24 hours or so and have a local v0.19.1 tag, do:

git tag -d v0.19.1
git fetch upstream --tags

to fix possible issues.

Cheers,
Ralf

From freddyrietdijk at fridh.nl Thu Jun 22 07:02:40 2017
From: freddyrietdijk at fridh.nl (Freddy Rietdijk)
Date: Thu, 22 Jun 2017 13:02:40 +0200
Subject: [SciPy-Dev] moved 0.19.1 tag
In-Reply-To: 
References: 
Message-ID: 

Please don't ever move tags. Moving tags isn't nice for anyone relying on them. Build systems that retrieve tags and use checksums will now get an error because the checksum isn't correct. They may suspect someone tampered with the source. Surely a new v0.19.1.1 could have been made in this case.

On Thu, Jun 22, 2017 at 12:27 PM, Ralf Gommers wrote:
> Hi,
>
> Unfortunately I had to move the v0.19.1 tag by one commit, a 32-bit test
> error had snuck in (we don't have that in our regular TravisCI runs, to be
> fixed). If you have pulled from the repo in the last 24 hours or so and
> have a local v0.19.1 tag, do:
>
> git tag -d v0.19.1
> git fetch upstream --tags
>
> to fix possible issues.
> Cheers,
> Ralf

From ralf.gommers at gmail.com Thu Jun 22 07:09:19 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 22 Jun 2017 23:09:19 +1200
Subject: [SciPy-Dev] moved 0.19.1 tag
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 22, 2017 at 11:02 PM, Freddy Rietdijk wrote:
> Please don't ever move tags. Moving tags isn't nice for anyone relying on
> them. Build systems that retrieve tags and use checksums will now get an
> error because the checksum isn't correct. They may suspect someone tampered
> with the source.

It's a tradeoff. The relatively few people that have to delete a tag by hand will do less work than moving to v0.19.2 would have cost me. Added to that is the annoyance of having a non-existing 0.19.1 release if I had not moved the tag.

> Surely a new v0.19.1.1 could have been made in this case.

That's not a valid version number.

Ralf
From jomsdev at gmail.com Thu Jun 22 07:26:23 2017
From: jomsdev at gmail.com (Jordi Montes)
Date: Thu, 22 Jun 2017 13:26:23 +0200
Subject: [SciPy-Dev] moved 0.19.1 tag
In-Reply-To: 
References: 
Message-ID: 

Hi,

What do you mean when you say that it is not tested in your regular TravisCI? Maybe I could submit a pull request fixing it, to avoid this kind of problem next time.

Thanks,
Jordi.

From ralf.gommers at gmail.com Thu Jun 22 07:32:59 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 22 Jun 2017 23:32:59 +1200
Subject: [SciPy-Dev] moved 0.19.1 tag
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 22, 2017 at 11:26 PM, Jordi Montes wrote:
> What do you mean when you say that it is not tested in your regular
> TravisCI? Maybe I could submit a pull request fixing it, to avoid this
> kind of problem next time.

It was a 32-bit platform-specific issue that turned up at the last moment, and we don't have 32-bit builds on TravisCI. We build the wheels for release from a separate repo (https://github.com/MacPython/scipy-wheels), which does do 32-bit, and those builds were failing after I tagged the release.

Copying the TravisCI config (PLAT=i686) from the scipy-wheels repo and updating one of the existing entries in the build matrix of the scipy repo to use that would be a helpful contribution.
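[For orientation only, a sketch of what such a build-matrix entry might look like. This is an untested assumption modeled on the PLAT=i686 variable mentioned above, not the actual configuration of either repo:

```yaml
# .travis.yml (sketch) -- one matrix entry switched to a 32-bit build
matrix:
  include:
    - python: 3.6
      env:
        - PLAT=i686   # 32-bit build, as in the scipy-wheels repo
```

The real entry would also need whatever toolchain/container setup the scipy-wheels repo uses to actually produce 32-bit binaries.]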
Cheers,
Ralf

From matthew.brett at gmail.com Thu Jun 22 07:36:05 2017
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 22 Jun 2017 12:36:05 +0100
Subject: [SciPy-Dev] moved 0.19.1 tag
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Thu, Jun 22, 2017 at 12:32 PM, Ralf Gommers wrote:
> It was a 32-bit platform specific issue that turned up at the last moment,
> and we don't have 32-bit builds on TravisCI. We build the wheels for release
> from a separate repo (https://github.com/MacPython/scipy-wheels), which does
> do 32-bit. And those builds were failing after I tagged the release.
>
> Copying the TravisCI config (PLAT=i686) from the scipy-wheels repo and
> updating one of the existing entries in the build matrix of the scipy repo
> to use that would be a helpful contribution.

We do test master every day with cron jobs on the scipy-wheels repo.
Did those tests miss this problem? Cheers, Matthew From ralf.gommers at gmail.com Thu Jun 22 07:39:40 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 22 Jun 2017 23:39:40 +1200 Subject: [SciPy-Dev] moved 0.19.1 tag In-Reply-To: References: Message-ID: On Thu, Jun 22, 2017 at 11:36 PM, Matthew Brett wrote: > Hi, > > On Thu, Jun 22, 2017 at 12:32 PM, Ralf Gommers > wrote: > > > > > > On Thu, Jun 22, 2017 at 11:26 PM, Jordi Montes > wrote: > >> > >> Hi, > >> > >> What do you mean when you say that it is not tested in your regular > >> TravisCI? Maybe I could do a pull request fixing it and to skip this > kind of > >> problems the next time. > > > > > > It was a 32-bit platform specific issue that turned up at the last > moment, > > and we don't have 32-bit builds on TravisCI. We build the wheels for > release > > from a separate repo (https://github.com/MacPython/scipy-wheels), which > does > > do 32-bit. And those builds were failing after I tagged the release. > > > > Copying the TravisCI config (PLAT=i686) from the scipy-wheels repo and > > updating one of the existing entries in the build matrix of the scipy > repo > > to use that would be a helpful contribution. > > We do test master every day with cron jobs on the scipy-wheels repo. > Did those tests miss this problem? > That doesn't test 0.19.x (and I'm not suggesting it should); I backported a PR from master with the issue to 0.19.x. On master it was fixed later, after the daily builds found the problem. Didn't see that second PR until it was too late. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Thu Jun 22 07:43:12 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 22 Jun 2017 12:43:12 +0100 Subject: [SciPy-Dev] moved 0.19.1 tag In-Reply-To: References: Message-ID: On Thu, Jun 22, 2017 at 12:39 PM, Ralf Gommers wrote: > > > On Thu, Jun 22, 2017 at 11:36 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Thu, Jun 22, 2017 at 12:32 PM, Ralf Gommers >> wrote: >> > >> > >> > On Thu, Jun 22, 2017 at 11:26 PM, Jordi Montes >> > wrote: >> >> >> >> Hi, >> >> >> >> What do you mean when you say that it is not tested in your regular >> >> TravisCI? Maybe I could do a pull request fixing it and to skip this >> >> kind of >> >> problems the next time. >> > >> > >> > It was a 32-bit platform specific issue that turned up at the last >> > moment, >> > and we don't have 32-bit builds on TravisCI. We build the wheels for >> > release >> > from a separate repo (https://github.com/MacPython/scipy-wheels), which >> > does >> > do 32-bit. And those builds were failing after I tagged the release. >> > >> > Copying the TravisCI config (PLAT=i686) from the scipy-wheels repo and >> > updating one of the existing entries in the build matrix of the scipy >> > repo >> > to use that would be a helpful contribution. >> >> We do test master every day with cron jobs on the scipy-wheels repo. >> Did those tests miss this problem? > > > That doesn't test 0.19.x (and I'm not suggesting it should); I backported a > PR from master with the issue to 0.19.x. On master it was fixed later, after > the daily builds found the problem. Didn't see that second PR until it was > too late. Maybe worth adding instructions to trigger a build on the scipy-wheels repo with the relevant commit before tagging? 
Cheers, Matthew From ralf.gommers at gmail.com Thu Jun 22 16:24:21 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 23 Jun 2017 08:24:21 +1200 Subject: [SciPy-Dev] moved 0.19.1 tag In-Reply-To: References: Message-ID: On Thu, Jun 22, 2017 at 11:43 PM, Matthew Brett wrote: > On Thu, Jun 22, 2017 at 12:39 PM, Ralf Gommers > wrote: > > > > > > On Thu, Jun 22, 2017 at 11:36 PM, Matthew Brett > > > wrote: > >> > >> Hi, > >> > >> On Thu, Jun 22, 2017 at 12:32 PM, Ralf Gommers > >> wrote: > >> > > >> > > >> > On Thu, Jun 22, 2017 at 11:26 PM, Jordi Montes > >> > wrote: > >> >> > >> >> Hi, > >> >> > >> >> What do you mean when you say that it is not tested in your regular > >> >> TravisCI? Maybe I could do a pull request fixing it and to skip this > >> >> kind of > >> >> problems the next time. > >> > > >> > > >> > It was a 32-bit platform specific issue that turned up at the last > >> > moment, > >> > and we don't have 32-bit builds on TravisCI. We build the wheels for > >> > release > >> > from a separate repo (https://github.com/MacPython/scipy-wheels), > which > >> > does > >> > do 32-bit. And those builds were failing after I tagged the release. > >> > > >> > Copying the TravisCI config (PLAT=i686) from the scipy-wheels repo and > >> > updating one of the existing entries in the build matrix of the scipy > >> > repo > >> > to use that would be a helpful contribution. > >> > >> We do test master every day with cron jobs on the scipy-wheels repo. > >> Did those tests miss this problem? > > > > > > That doesn't test 0.19.x (and I'm not suggesting it should); I > backported a > > PR from master with the issue to 0.19.x. On master it was fixed later, > after > > the daily builds found the problem. Didn't see that second PR until it > was > > too late. > > Maybe worth adding instructions to trigger a build on the scipy-wheels > repo with the relevant commit before tagging? > Makes sense. It should just build the commit hash rather than the tag. 
Will send a PR for that. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From perimosocordiae at gmail.com Thu Jun 22 22:24:20 2017 From: perimosocordiae at gmail.com (CJ Carey) Date: Thu, 22 Jun 2017 22:24:20 -0400 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: <20170621095155.GE19876@localhost> References: <20170621095155.GE19876@localhost> Message-ID: Looks interesting, thanks for posting! I haven't downloaded anything to try it out, but I'm interested to see the benchmark results you get from `make test`. On Wed, Jun 21, 2017 at 5:51 AM, Michele Martone < michelemartone at users.sourceforge.net> wrote: > Hi. > > I'm the author of the high performance multithreaded sparse matrix > library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/ > > I'm *not* a user of SciPy/NumPy/Python, but using Cython I have > written a proof-of-concept interface to librsb, named `PyRSB': > https://github.com/michelemartone/pyrsb > > PyRSB is in a prototypal state; e.g. still lacks good error handling. > Its interface is trivial, as it mimicks that of SciPy's 'csr_matrix'. > Advantages over csr_matrix are in fast multithreaded multiplication > of huge sparse matrices. > Intended application area is iterative solution of linear systems; > particularly fast if with symmetric matrices and many rhs. > > With this email I am looking for prospective: > - users/testers > - developers (any interest to collaborate/adopt/include the project?) > > Looking forward for your feedback, > Michele > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
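For readers without librsb at hand: the csr_matrix interface that PyRSB mimics centers on sparse matrix-vector multiplication over compressed-sparse-row storage. The following is a minimal pure-Python sketch of that operation, an illustrative toy rather than SciPy's or PyRSB's actual implementation.

```python
# Minimal CSR (compressed sparse row) matrix-vector product, the
# operation librsb's RSB format accelerates for large matrices.
# Toy illustration only; not the actual scipy.sparse code.

def csr_matvec(data, indices, indptr, x):
    """Return y = A @ x for a CSR matrix given by (data, indices, indptr)."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # the nonzeroes of row i live in data[indptr[i]:indptr[i + 1]]
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[1, 2],
#      [0, 3]]  in CSR form:
data, indices, indptr = [1.0, 2.0, 3.0], [0, 1, 1], [0, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0]))  # -> [3.0, 3.0]
```

PyRSB's claimed advantage is performing this same product with librsb's multithreaded recursive-block storage instead of the serial row loop above.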
URL: From ralf.gommers at gmail.com Fri Jun 23 11:27:12 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 24 Jun 2017 03:27:12 +1200 Subject: [SciPy-Dev] ANN: SciPy 0.19.1 release Message-ID: On behalf of the Scipy development team I am pleased to announce the availability of Scipy 0.19.1. This is a bugfix-only release, no new features are included. This release requires Python 2.7 or 3.4-3.6 and NumPy 1.8.2 or greater. Source tarballs and release notes can be found at https://github.com/scipy/scipy/releases/tag/v0.19.1. OS X and Linux wheels are available from PyPI: https://pypi.python.org/pypi/scipy/0.19.1 Thanks to everyone who contributed! Cheers, Ralf -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ========================== SciPy 0.19.1 Release Notes ========================== SciPy 0.19.1 is a bug-fix release with no new features compared to 0.19.0. The most important change is a fix for a severe memory leak in ``integrate.quad``. Authors ======= * Evgeni Burovski * Patrick Callier + * Yu Feng * Ralf Gommers * Ilhan Polat * Eric Quintero * Scott Sievert * Pauli Virtanen * Warren Weckesser A total of 9 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. 
Issues closed for 0.19.1 - ------------------------ - - `#7214 `__: Memory use in integrate.quad in scipy-0.19.0 - - `#7258 `__: ``linalg.matrix_balance`` gives wrong transformation matrix - - `#7262 `__: Segfault in daily testing - - `#7273 `__: ``scipy.interpolate._bspl.evaluate_spline`` gets wrong type - - `#7335 `__: scipy.signal.dlti(A,B,C,D).freqresp() fails Pull requests for 0.19.1 - ------------------------ - - `#7211 `__: BUG: convolve may yield inconsistent dtypes with method changed - - `#7216 `__: BUG: integrate: fix refcounting bug in quad() - - `#7229 `__: MAINT: special: Rewrite a test of wrightomega - - `#7261 `__: FIX: Corrected the transformation matrix permutation - - `#7265 `__: BUG: Fix broken axis handling in spectral functions - - `#7266 `__: FIX 7262: ckdtree crashes in query_knn. - - `#7279 `__: Upcast half- and single-precision floats to doubles in BSpline... - - `#7336 `__: BUG: Fix signal.dfreqresp for StateSpace systems - - `#7419 `__: Fix several issues in ``sparse.load_npz``, ``save_npz`` - - `#7420 `__: BUG: stats: allow integers as kappa4 shape parameters Checksums ========= MD5 ~~~ 72415e8da753eea97eb9820602931cb5 scipy-0.19.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl e0022540df2735eb0475071b266d5d71 scipy-0.19.1-cp27-cp27m-manylinux1_i686.whl f513eb4ea2086de169a502df7efb91c7 scipy-0.19.1-cp27-cp27m-manylinux1_x86_64.whl 906c3c59209d6249b5d8ce14cfa01382 scipy-0.19.1-cp27-cp27mu-manylinux1_i686.whl afbf8ffb4a4fe7c18e34cb8a313c18ee scipy-0.19.1-cp27-cp27mu-manylinux1_x86_64.whl 5ba945b3404644244ab469883a1723f0 scipy-0.19.1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 9c02cdd79e4ffadddcce7b2212039816 scipy-0.19.1-cp34-cp34m-manylinux1_i686.whl 79c0ba3618466614744de9a2f5362bbc scipy-0.19.1-cp34-cp34m-manylinux1_x86_64.whl 602a741a54190e16698ff8b2fe9fd27c 
scipy-0.19.1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl d6c2ecadd4df36eb61870227fae42d3a scipy-0.19.1-cp35-cp35m-manylinux1_i686.whl e7167c0a9cf270f89437e2fd09731636 scipy-0.19.1-cp35-cp35m-manylinux1_x86_64.whl fc2e4679e83590ff19c1a5c5b1aa4786 scipy-0.19.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 458615e9a56429a72038531dd5dcb3cb scipy-0.19.1-cp36-cp36m-manylinux1_i686.whl 65b1667ac56861da4cbc609960ed735b scipy-0.19.1-cp36-cp36m-manylinux1_x86_64.whl b704ebe9a28b8fe83d9f238d40031266 scipy-0.19.1.tar.gz cad6bac0638b176f72c00fe81ed54d19 scipy-0.19.1.tar.xz eb69261e5026ef2f3b9ae827caa7e5b8 scipy-0.19.1.zip SHA256 ~~~~~~ 1e8fedf602859b541ebae78667ccfc53158edef58d9ee19ee659309004565952 scipy-0.19.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 023ee29faa76c184a607e21076f097dc32f3abba7c71ece374588f95920aa993 scipy-0.19.1-cp27-cp27m-manylinux1_i686.whl 2a26d06a642e3c9107ca06df125f5dc5507abe2b87fd7865415d03ab654b0b43 scipy-0.19.1-cp27-cp27m-manylinux1_x86_64.whl b3e97be2cd9f052d984fc5ba2d441897971b744c64d658617944c47bc366f8ff scipy-0.19.1-cp27-cp27mu-manylinux1_i686.whl 78713101b32af384c564837fd7ae665a2a72cb6d49edbd8f32148d74724a65a8 scipy-0.19.1-cp27-cp27mu-manylinux1_x86_64.whl 2c9e87d556b83a8e11de7a064856c3997bbaff7f9cf62bf63ff0623751549e4c scipy-0.19.1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl b87a923ed390ba0aafcbbcc6f0e023d495f19d2bd22ae59bdef3b0aec6485b39 scipy-0.19.1-cp34-cp34m-manylinux1_i686.whl 2dc9bcfaa585d9d941fb1add0d160556fe8587c3800264a77643695565a2d279 scipy-0.19.1-cp34-cp34m-manylinux1_x86_64.whl 93825cc80c638d901099f657dfff852ad2421beb51cb7d1d3f91157741ebe287 scipy-0.19.1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 
4118647273c907ed984da52b71fa2bbb30229f1820fb79b1ed087c8bc9a20cca scipy-0.19.1-cp35-cp35m-manylinux1_i686.whl 4c26ef44e8bb2cd2aef11c60d163caa04670d6f42996789b209526677310ded2 scipy-0.19.1-cp35-cp35m-manylinux1_x86_64.whl 2a3b6ceadbb58d8b8d4a329f8219f9e6f17757ec6c85baf03987bbd2b728c263 scipy-0.19.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 909be30bdd1d8cd57609760ea72a1499543169ed6ea5c66fad50946582682cf8 scipy-0.19.1-cp36-cp36m-manylinux1_i686.whl 4de84e3870228e8afd55a6e63e762aa7c9d1e3bd9a9ef5ab716835a69c77d257 scipy-0.19.1-cp36-cp36m-manylinux1_x86_64.whl 9147f7b8b5ed4b80fefe82057a6d0d4bf3a50375a1d573407f84686d3b82c3bb scipy-0.19.1.tar.gz 0dca04c4860afdcb066cab4fd520fcffa8c85e9a7b5aa37a445308e899d728b3 scipy-0.19.1.tar.xz 2cc1baed85f7f702d958ad19e5d11c9dad917e981d3d0c299a7f0fd15027516a scipy-0.19.1.zip -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJZTTG2AAoJEO2+o3i/Gl69V8gH/3S199hh1Fcl0a05VOCgt7aZ zRKvdZhYkqGvWoWX+YwJBaZn0aXThJfIDE2lAGt4S9PyxObBKzsujI8OLIZF4zU5 PtAARrooOqDEH1CC39k2gscF9GBhxAlxwntM2Hd8h+4xsklAhFVxn4UT0M8igBoX txpaIqghzBJnenkMq6lqrj7vhupuulz0zrMnJU4LetpdSxX1lb5sGvWmZDTKd/nr xmthQ4TmHPbf9jAYCtLI4V6OUGFGLQ+d7IqiYvU+DVKNZFEgyPMnvN9QvSdu6diR JBEIBcijxM0BqjPwRoavCQjCHk37kR/G3UsgcEyHO2tr5zuOXogjjBo3Bei8jnI= =lFXD -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.corlay at gmail.com Sat Jun 24 10:29:35 2017 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Sat, 24 Jun 2017 16:29:35 +0200 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: <20170621095155.GE19876@localhost> References: <20170621095155.GE19876@localhost> Message-ID: Hi Michele, This is really interesting. I am a co-author of the xtensor project and one thing that could be interesting is to wrap the various sparse matrix data structures in the form of xtensor expressions. 
A byproduct of doing so is that it would simplify creating bindings for multiple scientific computing languages (Python, Julia, R, and more coming). You can see the blog post http://quantstack.net/c++/2017/05/30/polyglot-scientific-computing-with- xtensor.html for reference... Also, one quick question: is the LGPL license a deliberate choice or is it not important to you? Most projects in the Python scientific stack are BSD licensed. So the LGPL choice makes it unlikely that a higher-level project adopts it as a dependency. If you are the only copyright holder, you would still have the possibility to license it under a more permissive license such as BSD or MIT... Congratulations on the release! Sylvain On Wed, Jun 21, 2017 at 11:51 AM, Michele Martone wrote: > Hi. > > I'm the author of the high performance multithreaded sparse matrix > library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/ > > I'm *not* a user of SciPy/NumPy/Python, but using Cython I have > written a proof-of-concept interface to librsb, named `PyRSB': > https://github.com/michelemartone/pyrsb > > PyRSB is in a prototypal state; e.g. still lacks good error handling. > Its interface is trivial, as it mimicks that of SciPy's 'csr_matrix'. > Advantages over csr_matrix are in fast multithreaded multiplication > of huge sparse matrices. > Intended application area is iterative solution of linear systems; > particularly fast if with symmetric matrices and many rhs. > > With this email I am looking for prospective: > - users/testers > - developers (any interest to collaborate/adopt/include the project?) > > Looking forward for your feedback, > Michele > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
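The "xtensor expression" idea above (operators that build an un-evaluated expression, which is only computed when its result is accessed) can be sketched in pure Python. This is a toy illustration of the concept; xtensor itself is C++ template machinery, and none of these class names come from it.

```python
# Toy lazy-expression system in the spirit of xtensor: '+' and '*'
# build unevaluated nodes; no arithmetic runs until .evaluate().
class Expr:
    def __add__(self, other):
        return BinOp(self, other, lambda a, b: a + b)

    def __mul__(self, other):
        return BinOp(self, other, lambda a, b: a * b)

class Leaf(Expr):
    def __init__(self, values):
        self.values = values

    def evaluate(self):
        return self.values

class BinOp(Expr):
    def __init__(self, lhs, rhs, op):
        self.lhs, self.rhs, self.op = lhs, rhs, op

    def evaluate(self):
        # elementwise combination, assuming equal-length operands
        return [self.op(a, b)
                for a, b in zip(self.lhs.evaluate(), self.rhs.evaluate())]

a, b, c = Leaf([1, 2]), Leaf([10, 20]), Leaf([3, 3])
expr = a + b * c              # builds a tree; nothing computed yet
print(type(expr).__name__)    # -> BinOp
print(expr.evaluate())        # -> [31, 62]
```

In this scheme a leaf could just as well be backed by a file, an in-memory container, or a wrapper around a sparse matrix, which is the point Sylvain is making about wrapping librsb's structures.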
URL: From njs at pobox.com Sat Jun 24 15:16:39 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 24 Jun 2017 12:16:39 -0700 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: References: <20170621095155.GE19876@localhost> Message-ID: On Jun 24, 2017 7:29 AM, "Sylvain Corlay" wrote: Also, one quick question: is the LGPL license a deliberate choice or is it not important to you? Most projects in the Python scientific stack are BSD licensed. So the LGPL choice makes it unlikely that a higher-level project adopts it as a dependency. If you are the only copyright holder, you would still have the possibility to license it under a more permissive license such as BSD or MIT... Why would LGPL be a problem in a dependency? That doesn't stop you making your code BSD, and it's less restrictive license-wise than depending on MKL or the windows C runtime... -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From michelemartone at users.sourceforge.net Sun Jun 25 06:01:03 2017 From: michelemartone at users.sourceforge.net (Michele Martone) Date: Sun, 25 Jun 2017 12:01:03 +0200 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: References: <20170621095155.GE19876@localhost> Message-ID: <20170625100103.GC2167@localhost> On 20170624 at 16:29, Sylvain Corlay wrote: > Hi Michele, Hi Sylvain, > This is really interesting. I am a co-author of the xtensor project and one > thing that could be interesting is to wrap the various sparse matrix data > structures in the form of xtensor expressions. A byproduct of doing so is > that it would simplify creating bindings for multiple scientific computing > languages (Python, Julia, R, and more coming). You can see the blog post > http://quantstack.net/c++/2017/05/30/polyglot-scientific-computing-with- > xtensor.html for reference... This article exemplifies manipulation of numerical arrays. 
Now I ask you: Given an interactive language $L of the one you cite above, can xtensor provide objects with custom type and operators for manipulation in *that* language, like in e.g. the pyrsb case: a=rsb_matrix((4,4)) print(a+a) # + operator and 'print' interfacing ? > Also, one quick question: is the LGPL license a deliberate choice or is it > not important to you? Most projects in the Python scientific stack are BSD > licensed. So the LGPL choice makes it unlikely that a higher-level project > adopts it as a dependency. If you are the only copyright holder, you would > still have the possibility to license it under a more permissive license > such as BSD or MIT... No important choice. No problems relicensing the PyRSB prototype to BSD or MIT. > Congratulations on the release! Thanks for the interest and welcome constructive feedback :-) > Sylvain > > > > > > On Wed, Jun 21, 2017 at 11:51 AM, Michele Martone sourceforge.net> wrote: > > > Hi. > > > > I'm the author of the high performance multithreaded sparse matrix > > library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/ > > > > I'm *not* a user of SciPy/NumPy/Python, but using Cython I have > > written a proof-of-concept interface to librsb, named `PyRSB': > > https://github.com/michelemartone/pyrsb > > > > PyRSB is in a prototypal state; e.g. still lacks good error handling. > > Its interface is trivial, as it mimicks that of SciPy's 'csr_matrix'. > > Advantages over csr_matrix are in fast multithreaded multiplication > > of huge sparse matrices. > > Intended application area is iterative solution of linear systems; > > particularly fast if with symmetric matrices and many rhs. > > > > With this email I am looking for prospective: > > - users/testers > > - developers (any interest to collaborate/adopt/include the project?) 
> > > > Looking forward for your feedback, > > Michele > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From jaime.frio at gmail.com Sun Jun 25 12:55:22 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Sun, 25 Jun 2017 18:55:22 +0200 Subject: [SciPy-Dev] Refactoring ndimage Message-ID: Hi all, I have started sending PRs for ndimage. It is my intention to keep at it through the summer, and would like to provide some context on what I am trying to achieve. A couple of years ago I mentored a GSoC to port ndimage to Cython that didn't do much progress. If nothing else, I think I did learn quite a bit about ndimage and what makes it hard to maintain. And I think one of the big ones is lack of encapsulation in the C code: ndimage defines four "classes" in ni_support.h that get used throughout, namely NI_Iterator, NI_LineBuffer, NI_FilterIterator and NI_CoordinateList. Unfortunately they are not very self contained, and e.g. 
to instantiate a NI_FilterIterator you typically have to:

- declare a NI_FilterIterator variable,
- declare two NI_Iterator variables, one for the input array, another for the output,
- declare a npy_intp pointer of offsets and assign NULL to it,
- call NI_InitFilterOffsets to initialize the offsets pointer,
- call NI_InitFilterIterator to initialize the filter iterator,
- call NI_InitPointIterator twice, once for the input, another for the output,
- after each iteration call NI_FILTER_NEXT2 to advance all three iterators,
- after iteration is finished, call free on the pointer of offsets.

There is no good reason why we couldn't refactor this to being more like:

- call NI_FilterIterator_New and assign its return to a NI_FilterIterator pointer,
- after each iteration call NI_FilterIterator_Next to advance all involved pointers,
- after iteration is finished, call NI_FilterIterator_Delete to release any memory.

Proper encapsulation would have many benefits:

- high level functions would not be cluttered with boilerplate, and would be easier to understand and follow,
- chances for memory leaks and the like would be minimized,
- we could wrap those "classes" in Python and test them thoroughly,
- it would make the transition to Cython for the higher level functions, much simpler.

As an example, the lowest hanging fruit in this would be #7527. So open source wise this is what I would like to spend my summer on. Any feedback is welcome, but I would especially like to hear about:

- thoughts on the overall plan,
- reviewer availability: I would like to make this as incremental as possible, but that means many smaller interdependent PRs, which require reviewer availability,
- if anyone wants to join in the fun, I'm more than happy to mentor!

Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed...
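The contrast Jaime describes can be made concrete with a Python analogue of the proposed encapsulated API. This is illustrative only: the actual refactoring is C code, and NI_FilterIterator_New/_Next/_Delete are the names proposed in the email, not existing scipy functions.

```python
# Python analogue of the proposed encapsulated iterator: construction,
# advancing, and cleanup live in one object instead of being spread
# over half a dozen NI_Init* calls plus a trailing free().
class FilterIterator:
    def __init__(self, shape):
        # NI_FilterIterator_New: allocate and initialize everything here.
        # Assumes a non-empty shape for simplicity.
        self.shape = shape
        self.index = [0] * len(shape)
        self.done = False

    def next(self):
        # NI_FilterIterator_Next: advance the row-major index once.
        for axis in reversed(range(len(self.shape))):
            self.index[axis] += 1
            if self.index[axis] < self.shape[axis]:
                return
            self.index[axis] = 0
        self.done = True

    def close(self):
        # NI_FilterIterator_Delete: release resources (nothing to do
        # in this pure-Python toy).
        pass

it = FilterIterator((2, 2))
visited = []
while not it.done:
    visited.append(tuple(it.index))
    it.next()
it.close()
print(visited)  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The caller sees three operations and no exposed offsets pointer, which is exactly the boilerplate reduction being proposed for the C side.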
URL: From davidmenhur at gmail.com Sun Jun 25 14:02:09 2017 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Sun, 25 Jun 2017 20:02:09 +0200 Subject: [SciPy-Dev] Refactoring ndimage In-Reply-To: References: Message-ID: On 25 June 2017 at 18:55, Jaime Fernández del Río wrote: > if anyone wants to join in the fun, I'm more than happy to mentor! That sounds like fun. I don't have much bandwidth, but I can definitely find a few hours a week for this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.corlay at gmail.com Sun Jun 25 15:24:37 2017 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Sun, 25 Jun 2017 21:24:37 +0200 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: <20170625100103.GC2167@localhost> References: <20170621095155.GE19876@localhost> <20170625100103.GC2167@localhost> Message-ID: Hi Michele, Regarding xtensor, the article focused on how we can wrap an existing data structure in an "xtensor expression", i.e. something that you can operate upon with the numpy style API in C++ (see the numpy to xtensor cheat sheet http://xtensor.readthedocs.io/en/latest/numpy.html). Since it is an expression system, you can do "a + b * sin(t)" with numpy-style broadcasting and it returns an un-evaluated expression, which is computed only upon access or assignment to a container. "a", "b", and "c" could all be compound expressions, backed by filesystem operations, be an in-memory container, or a wrapper on your sparse matrices... Now, we are working on automating the "C++ tensor expression-type -> python wrapper with access and math operators" generation of code, which would be what you are describing below... On the subject of the LGPL, regarding the distinction with the GPL that was brought up earlier in the thread, it is indeed not as problematic as the GPL, but still less permissive than BSD and MIT.
In general, I think that using a BSD or MIT license averts these concerns in the Python ecosystem, and is preferable *if you don't care about copyleft*... Best, Sylvain On Sun, Jun 25, 2017 at 12:01 PM, Michele Martone < michelemartone at users.sourceforge.net> wrote: > On 20170624 at 16:29, Sylvain Corlay wrote: > > Hi Michele, > > Hi Sylvain, > > > This is really interesting. I am a co-author of the xtensor project and > one > > thing that could be interesting is to wrap the various sparse matrix data > > structures in the form of xtensor expressions. A byproduct of doing so is > > that it would simplify creating bindings for multiple scientific > computing > > languages (Python, Julia, R, and more coming). You can see the blog post > > http://quantstack.net/c++/2017/05/30/polyglot-scientific-computing-with- > > xtensor.html for reference... > This article exemplifies manipulation of numerical arrays. > Now I ask you: Given an interactive language $L of the one you cite above, > can xtensor provide objects with custom type and operators for manipulation > in *that* language, like in e.g. the pyrsb case: > a=rsb_matrix((4,4)) > print(a+a) # + operator and 'print' interfacing > ? > > > Also, one quick question: is the LGPL license a deliberate choice or is > it > > not important to you? Most projects in the Python scientific stack are > BSD > > licensed. So the LGPL choice makes it unlikely that a higher-level > project > > adopts it as a dependency. If you are the only copyright holder, you > would > > still have the possibility to license it under a more permissive license > > such as BSD or MIT... > No important choice. > No problems relicensing the PyRSB prototype to BSD or MIT. > > > Congratulations on the release! > Thanks for the interest and welcome constructive feedback :-) > > > Sylvain > > > > > > > > > > > > On Wed, Jun 21, 2017 at 11:51 AM, Michele Martone > sourceforge.net> wrote: > > > > > Hi. 
> > > > > > I'm the author of the high performance multithreaded sparse matrix > > > library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/ > > > > > > I'm *not* a user of SciPy/NumPy/Python, but using Cython I have > > > written a proof-of-concept interface to librsb, named `PyRSB': > > > https://github.com/michelemartone/pyrsb > > > > > > PyRSB is in a prototypal state; e.g. still lacks good error handling. > > > Its interface is trivial, as it mimicks that of SciPy's 'csr_matrix'. > > > Advantages over csr_matrix are in fast multithreaded multiplication > > > of huge sparse matrices. > > > Intended application area is iterative solution of linear systems; > > > particularly fast if with symmetric matrices and many rhs. > > > > > > With this email I am looking for prospective: > > > - users/testers > > > - developers (any interest to collaborate/adopt/include the project?) > > > > > > Looking forward for your feedback, > > > Michele > > > > > > _______________________________________________ > > > SciPy-Dev mailing list > > > SciPy-Dev at python.org > > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Sun Jun 25 17:08:03 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Sun, 25 Jun 2017 23:08:03 +0200 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE Message-ID: While working in scipy.ndimage I tried to replace the following code, see here for the source: npy_intp max_lines; int ii; for (ii = 0; ii < array->nd; ii++) max_lines *= array->dimensions[ii]; with the in theory equivalent: max_lines = PyArray_SIZE(array); Oddly enough, the result is a segfault as soon as PyArray_SIZE is called, e.g. 
by running the ndimage tests. Both numpy and scipy are the latest development versions, built by myself on OSX. Unfortunately I don't have Python and friends built with debug symbols, so the best I have been able to figure out, through printf debugging, is that the segfault happens before PyArray_MultiplyList is entered, which made me suspicious of it being related to import_array() not being called, but that doesn't seem to be the case. Any thoughts on what may be causing this? Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Jun 26 03:45:56 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 26 Jun 2017 19:45:56 +1200 Subject: [SciPy-Dev] Refactoring ndimage In-Reply-To: References: Message-ID: On Mon, Jun 26, 2017 at 4:55 AM, Jaime Fernández del Río < jaime.frio at gmail.com> wrote: > Hi all, > > I have started sending PRs for ndimage. It is my intention to keep at it > through the summer, and would like to provide some context on what I am > trying to achieve. > > A couple of years ago I mentored a GSoC to port ndimage to Cython that > didn't do much progress. If nothing else, I think I did learn quite a bit > about ndimage and what makes it hard to maintain. And I think one of the > big ones is lack of encapsulation in the C code: ndimage defines four > "classes" in ni_support.h > that > get used throughout, namely NI_Iterator, NI_LineBuffer, NI_FilterIterator > and NI_CoordinateList. > > Unfortunately they are not very self contained, and e.g.
to instantiate a > NI_FilterIterator you typically have to: > > - declare a NI_FilterIterator variable, > - declare two NI_Iterator variables, one for the input array, another > for the output, > - declare a npy_intp pointer of offsets and assign NULL to it, > - call NI_InitFilterOffsets to initialize the offsets pointer, > - call NI_InitFilterIterator to initialize the filter iterator, > - call NI_InitPointIterator twice, once for the input, another for the > output, > - after each iteration call NI_FILTER_NEXT2 to advance all three > iterators, > - after iteration is finished, call free on the pointer of offsets. > > There is no good reason why we couldn't refactor this to being more like: > > - call NI_FilterIterator_New and assign its return to a > NI_FilterIterator pointer, > - after each iteration call NI_FilterIterator_Next to advance all > involved pointers, > - after iteration is finished, call NI_FilterIterator_Delete to > release any memory. > > Proper encapsulation would have many benefits: > > - high level functions would not be cluttered with boilerplate, and > would be easier to understand and follow, > - chances for memory leaks and the like would be minimized, > - we could wrap those "classes" in Python and test them thoroughly, > - it would make the transition to Cython for the higher level > functions, much simpler. > > As an example, the lowest hanging fruit in this would be #7527 > . > > So open source wise this is what I would like to spend my summer on. > Awesome! ndimage definitely could use it! > Any feedback is welcome, but I would especially like to hear about: > > - thoughts on the overall plan, > > Sounds like a great plan. 
I do think that the current test suite is a bit marginal for this exercise and review may not catch subtle issues, so it would be useful to: - use scikit-image as an extra testsuite regularly - ideally find a few heavy users to test the master branch once in a while - use tools like a static code analyzer and Valgrind where it makes sense. > > - reviewer availability: I would like to make this as incremental as > possible, but that means many smaller interdependent PRs, which require > reviewer availability, > > For the next 4 weeks or so my availability will be pretty good. I'm pretty sure that I don't know as much about ndimage as you do, but that's likely true for all other current devs as well:) I think it's important to keep up with your PRs; once we start getting too far behind in reviewing the effort only goes up. I suggest not being too modest in pinging for reviews of PRs that are going to be a bottleneck or result in conflicts later on. Ralf > > - if anyone wants to join in the fun, I'm more than happy to mentor! > > Jaime > > -- > (\__/) > ( O.o) > ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes > de dominación mundial. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michelemartone at users.sourceforge.net Mon Jun 26 04:46:46 2017 From: michelemartone at users.sourceforge.net (Michele Martone) Date: Mon, 26 Jun 2017 10:46:46 +0200 Subject: [SciPy-Dev] PyRSB: Python interface to librsb sparse matrices library In-Reply-To: References: <20170621095155.GE19876@localhost> Message-ID: <20170626084646.GA14459@localhost> Hi CJ, On a current Xeon server with say, >=8 cores you might expect pyrsb's rsb_matrix vs csr_matrix comparison to show speedup <1x (so, slowdown) until ~1M nonzeroes; then growing to say a 10x-20x speedup at ~100M nonzeroes.
No golden rule here; just rough estimates to give you an idea. On 20170622 at 22:24, CJ Carey wrote: > Looks interesting, thanks for posting! > > I haven't downloaded anything to try it out, but I'm interested to see the > benchmark results you get from `make test`. > > On Wed, Jun 21, 2017 at 5:51 AM, Michele Martone < > michelemartone at users.sourceforge.net> wrote: > > > Hi. > > > > I'm the author of the high performance multithreaded sparse matrix > > library `librsb' (mostly C, LGPLv3): http://librsb.sourceforge.net/ > > > > I'm *not* a user of SciPy/NumPy/Python, but using Cython I have > > written a proof-of-concept interface to librsb, named `PyRSB': > > https://github.com/michelemartone/pyrsb > > > > PyRSB is in a prototypal state; e.g. still lacks good error handling. > > Its interface is trivial, as it mimics that of SciPy's 'csr_matrix'. > > Advantages over csr_matrix are in fast multithreaded multiplication > > of huge sparse matrices. > > Intended application area is iterative solution of linear systems; > > particularly fast if with symmetric matrices and many rhs. > > > > With this email I am looking for prospective: > > - users/testers > > - developers (any interest to collaborate/adopt/include the project?) > > > > Looking forward to your feedback, > > Michele > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From jaime.frio at gmail.com Mon Jun 26 05:36:27 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Mon, 26 Jun 2017 11:36:27 +0200 Subject: [SciPy-Dev] Refactoring ndimage In-Reply-To: References: Message-ID: On Mon, Jun 26, 2017 at 9:45 AM, Ralf Gommers wrote: > > > On Mon, Jun 26, 2017 at 4:55 AM, Jaime Fernández del Río < > jaime.frio at gmail.com> wrote: > >> Hi all, >> >> I have started sending PRs for ndimage. It is my intention to keep at it >> through the summer, and would like to provide some context on what I am >> trying to achieve. >> >> A couple of years ago I mentored a GSoC to port ndimage to Cython that >> didn't do much progress. If nothing else, I think I did learn quite a bit >> about ndimage and what makes it hard to maintain. And I think one of the >> big ones is lack of encapsulation in the C code: ndimage defines four >> "classes" in ni_support.h >> that >> get used throughout, namely NI_Iterator, NI_LineBuffer, NI_FilterIterator >> and NI_CoordinateList. >> >> Unfortunately they are not very self contained, and e.g. to instantiate a >> NI_FilterIterator you typically have to: >> >> - declare a NI_FilterIterator variable, >> - declare two NI_Iterator variables, one for the input array, another >> for the output, >> - declare a npy_intp pointer of offsets and assign NULL to it, >> - call NI_InitFilterOffsets to initialize the offsets pointer, >> - call NI_InitFilterIterator to initialize the filter iterator, >> - call NI_InitPointIterator twice, once for the input, another for >> the output, >> - after each iteration call NI_FILTER_NEXT2 to advance all three >> iterators, >> - after iteration is finished, call free on the pointer of offsets.
>> >> There is no good reason why we couldn't refactor this to being more like: >> >> - call NI_FilterIterator_New and assign its return to a >> NI_FilterIterator pointer, >> - after each iteration call NI_FilterIterator_Next to advance all >> involved pointers, >> - after iteration is finished, call NI_FilterIterator_Delete to >> release any memory. >> >> Proper encapsulation would have many benefits: >> >> - high level functions would not be cluttered with boilerplate, and >> would be easier to understand and follow, >> - chances for memory leaks and the like would be minimized, >> - we could wrap those "classes" in Python and test them thoroughly, >> - it would make the transition to Cython for the higher level >> functions, much simpler. >> >> As an example, the lowest hanging fruit in this would be #7527 >> . >> >> So open source wise this is what I would like to spend my summer on. >> > > Awesome! ndimage definitely could use it! > > >> Any feedback is welcome, but I would especially like to hear about: >> >> - thoughts on the overall plan, >> >> > Sounds like a great plan. I do think that the current test suite is a bit > marginal for this exercise and review may not catch subtle issues, so it > would be useful to: > - use scikit-image as an extra testsuite regularly > - ideally find a few heavy users to test the master branch once in a while > Good points, I have e-mailed the scikit-image list to make them aware of this and to ask them for help. > > >> - use tools like a static code analyzer and Valgrind where it makes sense. >> > > I may need help with setting up Valgrind: how hard is it to make it work > under OSX? > We should also work on having a more complete test suite for the low level > iterators, probably through Python wrappers.
> >> - reviewer availability: I would like to make this as incremental as >> possible, but that means many smaller interdependent PRs, which require >> reviewer availability, >> >> For the next 4 weeks or so my availability will be pretty good. I'm > pretty sure that I don't know as much about ndimage as you do, but that's > likely true for all other current devs as well:) I think it's important to > keep up with your PRs; once we start getting too far behind in reviewing > the effort only goes up. I suggest not being too modest in pinging for > reviews of PRs that are going to be a bottleneck or result in conflicts > later on. > > Ralf > > > >> >> - if anyone wants to join in the fun, I'm more than happy to mentor! >> >> Jaime >> >> -- >> (\__/) >> ( O.o) >> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes >> de dominación mundial. >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Mon Jun 26 07:17:54 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 26 Jun 2017 23:17:54 +1200 Subject: [SciPy-Dev] Refactoring ndimage In-Reply-To: References: Message-ID: On Mon, Jun 26, 2017 at 9:36 PM, Jaime Fernández del Río < jaime.frio at gmail.com> wrote: > > On Mon, Jun 26, 2017 at 9:45 AM, Ralf Gommers > wrote: > >> >> >> On Mon, Jun 26, 2017 at 4:55 AM, Jaime Fernández del Río < >> jaime.frio at gmail.com> wrote: >> >>> Hi all, >>> >>> I have started sending PRs for ndimage. It is my intention to keep at it >>> through the summer, and would like to provide some context on what I am >>> trying to achieve. >>> >>> A couple of years ago I mentored a GSoC to port ndimage to Cython that >>> didn't do much progress. If nothing else, I think I did learn quite a bit >>> about ndimage and what makes it hard to maintain. And I think one of the >>> big ones is lack of encapsulation in the C code: ndimage defines four >>> "classes" in ni_support.h >>> that >>> get used throughout, namely NI_Iterator, NI_LineBuffer, NI_FilterIterator >>> and NI_CoordinateList. >>> >>> Unfortunately they are not very self contained, and e.g. to instantiate >>> a NI_FilterIterator you typically have to: >>> >>> - declare a NI_FilterIterator variable, >>> - declare two NI_Iterator variables, one for the input array, >>> another for the output, >>> - declare a npy_intp pointer of offsets and assign NULL to it, >>> - call NI_InitFilterOffsets to initialize the offsets pointer, >>> - call NI_InitFilterIterator to initialize the filter iterator, >>> - call NI_InitPointIterator twice, once for the input, another for >>> the output, >>> - after each iteration call NI_FILTER_NEXT2 to advance all three >>> iterators, >>> - after iteration is finished, call free on the pointer of offsets.
>>> >>> There is no good reason why we couldn't refactor this to being more like: >>> >>> - call NI_FilterIterator_New and assign its return to a >>> NI_FilterIterator pointer, >>> - after each iteration call NI_FilterIterator_Next to advance all >>> involved pointers, >>> - after iteration is finished, call NI_FilterIterator_Delete to >>> release any memory. >>> >>> Proper encapsulation would have many benefits: >>> >>> - high level functions would not be cluttered with boilerplate, and >>> would be easier to understand and follow, >>> - chances for memory leaks and the like would be minimized, >>> - we could wrap those "classes" in Python and test them thoroughly, >>> - it would make the transition to Cython for the higher level >>> functions, much simpler. >>> >>> As an example, the lowest hanging fruit in this would be #7527 >>> . >>> >>> So open source wise this is what I would like to spend my summer on. >>> >> >> Awesome! ndimage definitely could use it! >> >> >>> Any feedback is welcome, but I would especially like to hear about: >>> >>> - thoughts on the overall plan, >>> >>> >> Sounds like a great plan. I do think that the current test suite is a bit >> marginal for this exercise and review may not catch subtle issues, so it >> would be useful to: >> - use scikit-image as an extra testsuite regularly >> - ideally find a few heavy users to test the master branch once in a while >> > > Good points, I have e-mailed the scikit-image list to make them aware of > this and to ask them for help. > > >> - use tools like a static code analyzer and Valgrind where it makes sense. >> > > I may need help with setting up Valgrind: how hard is it ti make it work > under OSX? > I managed to set it up once on OS X, don't remember it being too painful. Here's a recent recipe for it: http://julianguyen.org/installing-valgrind-on-os-x-el-capitan/ > > We should also work on having a more complete test suite for the low level > iterators, probably through Python wrappers. 
> Makes sense. Ralf > > >> >>> - reviewer availability: I would like to make this as incremental as >>> possible, but that means many smaller interdependent PRs, which require >>> reviewer availability, >>> >>> For the next 4 weeks or so my availability will be pretty good. I'm >> pretty sure that I don't know as much about ndimage as you do, but that's >> likely true for all other current devs as well:) I think it's important to >> keep up with your PRs; once we start getting too far behind in reviewing >> the effort only goes up. I suggest not being too modest in pinging for >> reviews of PRs that are going to be a bottleneck or result in conflicts >> later on. >> >> Ralf >> >> >> >>> >>> - if anyone wants to join in the fun, I'm more than happy to mentor! >>> >>> Jaime >>> >>> -- >>> (\__/) >>> ( O.o) >>> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus >>> planes de dominación mundial. >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > > -- > (\__/) > ( O.o) > ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes > de dominación mundial.
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Jun 26 08:51:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 27 Jun 2017 00:51:15 +1200 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE In-Reply-To: References: Message-ID: On Mon, Jun 26, 2017 at 9:08 AM, Jaime Fernández del Río < jaime.frio at gmail.com> wrote: > While working in scipy.ndimage I tried to replace the following code, see > here > for > the source: > > npy_intp max_lines; > int ii; > > for (ii = 0; ii < array->nd; ii++) > max_lines *= array->dimensions[ii]; > > with the in theory equivalent: > > max_lines = PyArray_SIZE(array); > > Oddly enough, the result is a segfault as soon as PyArray_SIZE is called, > e.g. by running the ndimage tests. > > Both numpy and scipy are the latest development versions, built by myself > on OSX. Unfortunately I don't have Python and friends built with debug > symbols, so the best I have been able to figure out, through printf > debugging, is that the segfault happens before PyArray_MultiplyList is > entered, which made me suspicious of it being related to import_array() > not being called, but that doesn't seem to be the case. > Even if it's not the cause here, that ``#undef NO_IMPORT_ARRAY`` looks quite odd, it's the only place in scipy or numpy where that is done. May be good to get rid of it. Ralf > Any thoughts on what may be causing this? > > Jaime > > -- > (\__/) > ( O.o) > ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes > de dominación mundial. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jaime.frio at gmail.com Mon Jun 26 09:34:23 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Mon, 26 Jun 2017 15:34:23 +0200 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE In-Reply-To: References: Message-ID: On Mon, Jun 26, 2017 at 2:51 PM, Ralf Gommers wrote: > > > On Mon, Jun 26, 2017 at 9:08 AM, Jaime Fernández del Río < > jaime.frio at gmail.com> wrote: > >> While working in scipy.ndimage I tried to replace the following code, see >> here >> for >> the source: >> >> npy_intp max_lines; >> int ii; >> >> for (ii = 0; ii < array->nd; ii++) >> max_lines *= array->dimensions[ii]; >> >> with the in theory equivalent: >> >> max_lines = PyArray_SIZE(array); >> >> Oddly enough, the result is a segfault as soon as PyArray_SIZE is >> called, e.g. by running the ndimage tests. >> >> Both numpy and scipy are the latest development versions, built by myself >> on OSX. Unfortunately I don't have Python and friends built with debug >> symbols, so the best I have been able to figure out, through printf >> debugging, is that the segfault happens before PyArray_MultiplyList is >> entered, which made me suspicious of it being related to import_array() >> not being called, but that doesn't seem to be the case. >> > > Even if it's not the cause here, that ``#undef NO_IMPORT_ARRAY`` looks > quite odd, it's the only place in scipy or numpy where that is done. May be > good to get rid of it. > I played a little bit with it yesterday and it did not seem to be the cause. We should actually move to using the usual numpy types and get rid of the #include and the surrounding #define/#undef dance altogether, yes. Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cournape at gmail.com Mon Jun 26 12:13:54 2017 From: cournape at gmail.com (David Cournapeau) Date: Mon, 26 Jun 2017 17:13:54 +0100 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE In-Reply-To: References: Message-ID: On Sun, Jun 25, 2017 at 10:08 PM, Jaime Fernández del Río < jaime.frio at gmail.com> wrote: > While working in scipy.ndimage I tried to replace the following code, see > here > for > the source: > > npy_intp max_lines; > int ii; > > for (ii = 0; ii < array->nd; ii++) > max_lines *= array->dimensions[ii]; > > with the in theory equivalent: > > max_lines = PyArray_SIZE(array); > > Oddly enough, the result is a segfault as soon as PyArray_SIZE is called, > e.g. by running the ndimage tests. > > Both numpy and scipy are the latest development versions, built by myself > on OSX. Unfortunately I don't have Python and friends built with debug > symbols, so the best I have been able to figure out, through printf > debugging, is that the segfault happens before PyArray_MultiplyList is > entered, which made me suspicious of it being related to import_array() > not being called, but that doesn't seem to be the case. > Maybe you could check the array of pointers that is supposed to be populated by import_array. If the entries are NULL, you know it is caused by a failure at that level. David > > Any thoughts on what may be causing this? > > Jaime > > -- > (\__/) > ( O.o) > ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes > de dominación mundial. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cgohlke at uci.edu Mon Jun 26 13:04:39 2017 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 26 Jun 2017 10:04:39 -0700 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE In-Reply-To: References: Message-ID: On 6/25/2017 2:08 PM, Jaime Fernández del Río wrote: > While working in scipy.ndimage I tried to replace the following code, > see here > for > the source: > > npy_intp max_lines; > int ii; > > for (ii = 0; ii < array->nd; ii++) > max_lines *= array->dimensions[ii]; > > with the in theory equivalent: > > max_lines = PyArray_SIZE(array); > > Oddly enough, the result is a segfault as soon as PyArray_SIZE is > called, e.g. by running the ndimage tests. > > Both numpy and scipy are the latest development versions, built by > myself on OSX. Unfortunately I don't have Python and friends built with > debug symbols, so the best I have been able to figure out, through > printf debugging, is that the segfault happens before > PyArray_MultiplyList is entered, which made me suspicious of it > being related to import_array() not being called, but that doesn't seem > to be the case. > > Any thoughts on what may be causing this? > > Jaime > On Windows the proposed change throws a link error: ni_support.obj : error LNK2001: unresolved external symbol PyArray_API Christoph From jaime.frio at gmail.com Tue Jun 27 04:26:19 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Tue, 27 Jun 2017 10:26:19 +0200 Subject: [SciPy-Dev] Weird segfault in call to PyArray_SIZE In-Reply-To: References: Message-ID: After some more probing around, it was an import_array() kind of issue. The magical incantation that banished the segfault was to add: #define PY_ARRAY_UNIQUE_SYMBOL _scipy_ndimage_ARRAY_API at the top of every *.c file. I suppose the technical term is that it should be defined to the same value in every "compilation unit" that wants to use the NumPy API.
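Concretely, the usual numpy pattern behind that incantation looks roughly like the following sketch (the `NI_INIT_MODULE` guard and the header layout are illustrative, not the exact scipy files):

```c
/* In a shared header included by every ni_*.c compilation unit: */
#define PY_ARRAY_UNIQUE_SYMBOL _scipy_ndimage_ARRAY_API
#ifndef NI_INIT_MODULE
/* Every unit except the one that calls import_array() also defines
 * NO_IMPORT_ARRAY, so all units share the single PyArray_API pointer
 * filled in by import_array(), instead of each unit getting its own
 * static copy that is never initialized (hence the segfault). */
#define NO_IMPORT_ARRAY
#endif
#include <numpy/arrayobject.h>
```

This also explains Christoph's Windows link error: without a shared PY_ARRAY_UNIQUE_SYMBOL, the units that define NO_IMPORT_ARRAY reference an external PyArray_API symbol that nothing defines.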
I will open a documentation bug with numpy: all Google could find was buried in 5+ year old mailing list discussions, and there seems to be no coherent explanation of how the whole thing works anywhere. Thanks everyone for your suggestions! Jaime On Mon, Jun 26, 2017 at 7:04 PM, Christoph Gohlke wrote: > > > On 6/25/2017 2:08 PM, Jaime Fernández del Río wrote: > >> While working in scipy.ndimage I tried to replace the following code, see >> here > src/ni_support.c#L95> for the source: >> >> npy_intp max_lines; >> int ii; >> >> for (ii = 0; ii < array->nd; ii++) >> max_lines *= array->dimensions[ii]; >> >> with the in theory equivalent: >> >> max_lines = PyArray_SIZE(array); >> >> Oddly enough, the result is a segfault as soon as PyArray_SIZE is called, >> e.g. by running the ndimage tests. >> >> Both numpy and scipy are the latest development versions, built by myself >> on OSX. Unfortunately I don't have Python and friends built with debug >> symbols, so the best I have been able to figure out, through printf >> debugging, is that the segfault happens before PyArray_MultiplyList is >> entered, which made me suspicious of it being related to import_array() not >> being called, but that doesn't seem to be the case. >> >> Any thoughts on what may be causing this? >> >> Jaime >> >> > On Windows the proposed change throws a link error: > > ni_support.obj : error LNK2001: unresolved external symbol PyArray_API > > Christoph > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sebastian at sipsolutions.net Tue Jun 27 06:54:36 2017 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 27 Jun 2017 12:54:36 +0200 Subject: [SciPy-Dev] Possibly warnings creating errors in development test runs Message-ID: <1498560876.32188.1.camel@sipsolutions.net> Hi all, we just merged a PR [1], which (re)enables having warnings cause errors during testing (only in development mode of course). Since warnings (especially warnings such as about NaNs) can be system dependent, it is not unlikely that you may experience new errors during testing. These are easy to get rid of: For NaN warnings, you usually should add the numpy errstate context such as:

```
with np.errstate(invalid="ignore"):
    ...
```

For most other warnings, add the now new `suppress_warnings` context manager (please do not use "ignore" warning filters, they can create issues), such as:

```
from scipy._lib._numpy_compat import suppress_warnings

def test():
    with suppress_warnings() as sup:
        sup.filter(UserWarning, "\nThe coefficients of the spline")
        ...
```

The `suppress_warnings` context also supports use as a decorator in case there is a mass of similar suppressions needed in some file. You can also use `r = sup.record(...)` and check `len(r) == 1` or an `assert_warns` context if it would be good to test that the warning occurs. I hope this does not cause a lot of trouble, probably it is only a few occurrences, if any, but the new context may be good to know about in any case. Regards, Sebastian [1] https://github.com/scipy/scipy/pull/7525 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: This is a digitally signed message part
URL:

From ralf.gommers at gmail.com Tue Jun 27 07:55:29 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 27 Jun 2017 23:55:29 +1200
Subject: [SciPy-Dev] Possibly warnings creating errors in development test runs
In-Reply-To: <1498560876.32188.1.camel@sipsolutions.net>
References: <1498560876.32188.1.camel@sipsolutions.net>
Message-ID:

On Tue, Jun 27, 2017 at 10:54 PM, Sebastian Berg wrote:

> Hi all,
>
> we just merged a PR [1], which (re)enables having warnings cause errors
> during testing (only in development mode of course).

Thanks Sebastian, for putting quite some effort into solving a fairly tedious problem!

Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evgeny.burovskiy at gmail.com Tue Jun 27 16:55:10 2017
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Tue, 27 Jun 2017 23:55:10 +0300
Subject: [SciPy-Dev] ANN: SciPy 0.19.1 release
In-Reply-To: 
References: 
Message-ID:

Thank you Ralf for taking care of the release!

On 23.06.2017 at 18:27, "Ralf Gommers" wrote:

> On behalf of the SciPy development team I am pleased to announce the
> availability of SciPy 0.19.1. This is a bugfix-only release; no new
> features are included.
>
> This release requires Python 2.7 or 3.4-3.6 and NumPy 1.8.2 or greater.
> Source tarballs and release notes can be found at
> https://github.com/scipy/scipy/releases/tag/v0.19.1.
> OS X and Linux wheels are available from PyPI:
> https://pypi.python.org/pypi/scipy/0.19.1
>
> Thanks to everyone who contributed!
>
> Cheers,
> Ralf
>
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> ==========================
> SciPy 0.19.1 Release Notes
> ==========================
>
> SciPy 0.19.1 is a bug-fix release with no new features compared to 0.19.0.
> The most important change is a fix for a severe memory leak in
> ``integrate.quad``.
>
> Authors
> =======
>
> * Evgeni Burovski
> * Patrick Callier +
> * Yu Feng
> * Ralf Gommers
> * Ilhan Polat
> * Eric Quintero
> * Scott Sievert
> * Pauli Virtanen
> * Warren Weckesser
>
> A total of 9 people contributed to this release.
> People with a "+" by their names contributed a patch for the first time.
> This list of names is automatically generated, and may not be fully
> complete.
>
>
> Issues closed for 0.19.1
> ------------------------
>
> - `#7214 `__: Memory use in integrate.quad in scipy-0.19.0
> - `#7258 `__: ``linalg.matrix_balance`` gives wrong transformation matrix
> - `#7262 `__: Segfault in daily testing
> - `#7273 `__: ``scipy.interpolate._bspl.evaluate_spline`` gets wrong type
> - `#7335 `__: scipy.signal.dlti(A,B,C,D).freqresp() fails
>
>
> Pull requests for 0.19.1
> ------------------------
>
> - `#7211 `__: BUG: convolve may yield inconsistent dtypes with method changed
> - `#7216 `__: BUG: integrate: fix refcounting bug in quad()
> - `#7229 `__: MAINT: special: Rewrite a test of wrightomega
> - `#7261 `__: FIX: Corrected the transformation matrix permutation
> - `#7265 `__: BUG: Fix broken axis handling in spectral functions
> - `#7266 `__: FIX 7262: ckdtree crashes in query_knn.
> - `#7279 `__: Upcast half- and single-precision floats to doubles in BSpline...
> - `#7336 `__: BUG: Fix signal.dfreqresp for StateSpace systems
> - `#7419 `__: Fix several issues in ``sparse.load_npz``, ``save_npz``
> - `#7420 `__: BUG: stats: allow integers as kappa4 shape parameters
>
>
> Checksums
> =========
>
> MD5
> ~~~
>
> 72415e8da753eea97eb9820602931cb5  scipy-0.19.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> e0022540df2735eb0475071b266d5d71  scipy-0.19.1-cp27-cp27m-manylinux1_i686.whl
> f513eb4ea2086de169a502df7efb91c7  scipy-0.19.1-cp27-cp27m-manylinux1_x86_64.whl
> 906c3c59209d6249b5d8ce14cfa01382  scipy-0.19.1-cp27-cp27mu-manylinux1_i686.whl
> afbf8ffb4a4fe7c18e34cb8a313c18ee  scipy-0.19.1-cp27-cp27mu-manylinux1_x86_64.whl
> 5ba945b3404644244ab469883a1723f0  scipy-0.19.1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 9c02cdd79e4ffadddcce7b2212039816  scipy-0.19.1-cp34-cp34m-manylinux1_i686.whl
> 79c0ba3618466614744de9a2f5362bbc  scipy-0.19.1-cp34-cp34m-manylinux1_x86_64.whl
> 602a741a54190e16698ff8b2fe9fd27c  scipy-0.19.1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> d6c2ecadd4df36eb61870227fae42d3a  scipy-0.19.1-cp35-cp35m-manylinux1_i686.whl
> e7167c0a9cf270f89437e2fd09731636  scipy-0.19.1-cp35-cp35m-manylinux1_x86_64.whl
> fc2e4679e83590ff19c1a5c5b1aa4786  scipy-0.19.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 458615e9a56429a72038531dd5dcb3cb  scipy-0.19.1-cp36-cp36m-manylinux1_i686.whl
> 65b1667ac56861da4cbc609960ed735b  scipy-0.19.1-cp36-cp36m-manylinux1_x86_64.whl
> b704ebe9a28b8fe83d9f238d40031266  scipy-0.19.1.tar.gz
> cad6bac0638b176f72c00fe81ed54d19  scipy-0.19.1.tar.xz
> eb69261e5026ef2f3b9ae827caa7e5b8  scipy-0.19.1.zip
>
> SHA256
> ~~~~~~
>
> 1e8fedf602859b541ebae78667ccfc53158edef58d9ee19ee659309004565952  scipy-0.19.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 023ee29faa76c184a607e21076f097dc32f3abba7c71ece374588f95920aa993  scipy-0.19.1-cp27-cp27m-manylinux1_i686.whl
> 2a26d06a642e3c9107ca06df125f5dc5507abe2b87fd7865415d03ab654b0b43  scipy-0.19.1-cp27-cp27m-manylinux1_x86_64.whl
> b3e97be2cd9f052d984fc5ba2d441897971b744c64d658617944c47bc366f8ff  scipy-0.19.1-cp27-cp27mu-manylinux1_i686.whl
> 78713101b32af384c564837fd7ae665a2a72cb6d49edbd8f32148d74724a65a8  scipy-0.19.1-cp27-cp27mu-manylinux1_x86_64.whl
> 2c9e87d556b83a8e11de7a064856c3997bbaff7f9cf62bf63ff0623751549e4c  scipy-0.19.1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> b87a923ed390ba0aafcbbcc6f0e023d495f19d2bd22ae59bdef3b0aec6485b39  scipy-0.19.1-cp34-cp34m-manylinux1_i686.whl
> 2dc9bcfaa585d9d941fb1add0d160556fe8587c3800264a77643695565a2d279  scipy-0.19.1-cp34-cp34m-manylinux1_x86_64.whl
> 93825cc80c638d901099f657dfff852ad2421beb51cb7d1d3f91157741ebe287  scipy-0.19.1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 4118647273c907ed984da52b71fa2bbb30229f1820fb79b1ed087c8bc9a20cca  scipy-0.19.1-cp35-cp35m-manylinux1_i686.whl
> 4c26ef44e8bb2cd2aef11c60d163caa04670d6f42996789b209526677310ded2  scipy-0.19.1-cp35-cp35m-manylinux1_x86_64.whl
> 2a3b6ceadbb58d8b8d4a329f8219f9e6f17757ec6c85baf03987bbd2b728c263  scipy-0.19.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 909be30bdd1d8cd57609760ea72a1499543169ed6ea5c66fad50946582682cf8  scipy-0.19.1-cp36-cp36m-manylinux1_i686.whl
> 4de84e3870228e8afd55a6e63e762aa7c9d1e3bd9a9ef5ab716835a69c77d257  scipy-0.19.1-cp36-cp36m-manylinux1_x86_64.whl
> 9147f7b8b5ed4b80fefe82057a6d0d4bf3a50375a1d573407f84686d3b82c3bb  scipy-0.19.1.tar.gz
> 0dca04c4860afdcb066cab4fd520fcffa8c85e9a7b5aa37a445308e899d728b3  scipy-0.19.1.tar.xz
> 2cc1baed85f7f702d958ad19e5d11c9dad917e981d3d0c299a7f0fd15027516a  scipy-0.19.1.zip
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJZTTG2AAoJEO2+o3i/Gl69V8gH/3S199hh1Fcl0a05VOCgt7aZ
> zRKvdZhYkqGvWoWX+YwJBaZn0aXThJfIDE2lAGt4S9PyxObBKzsujI8OLIZF4zU5
> PtAARrooOqDEH1CC39k2gscF9GBhxAlxwntM2Hd8h+4xsklAhFVxn4UT0M8igBoX
> txpaIqghzBJnenkMq6lqrj7vhupuulz0zrMnJU4LetpdSxX1lb5sGvWmZDTKd/nr
> xmthQ4TmHPbf9jAYCtLI4V6OUGFGLQ+d7IqiYvU+DVKNZFEgyPMnvN9QvSdu6diR
> JBEIBcijxM0BqjPwRoavCQjCHk37kR/G3UsgcEyHO2tr5zuOXogjjBo3Bei8jnI=
> =lFXD
> -----END PGP SIGNATURE-----
>
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
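A downloaded release file can be checked against digests like the ones above with a few lines of standard-library Python (equivalent to `md5sum` or `shasum -a 256` on the command line); a minimal sketch, verified here against the well-known SHA-256 test vector for the string "abc":

```python
import hashlib
import os
import tempfile


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()


# Demonstrate on a temporary file with known contents.
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"abc")
digest = sha256_of(tmp)
os.remove(tmp)

# SHA-256("abc") is a standard test vector:
print(digest)  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

In practice one would compare, e.g., `sha256_of("scipy-0.19.1.tar.gz")` against the value listed in the announcement.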