From charlesr.harris at gmail.com Wed Jan 1 13:05:01 2020
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 1 Jan 2020 11:05:01 -0700
Subject: [SciPy-Dev] NumPy 1.17.5 Released
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce that NumPy 1.17.5 has been released. This release fixes bugs reported against the 1.17.4 release. The supported Python versions are 3.5-3.7. This is the last planned release that supports Python 3.5. Wheels for this release can be downloaded from PyPI; source archives and release notes are available from GitHub.

Downstream developers building this release should use Cython >= 0.29.14 and, if using OpenBLAS, OpenBLAS >= v0.3.7. It is recommended that developers interested in the new random bit generators upgrade to the NumPy 1.18.x series, as it has updated documentation and many small improvements.

*Highlights*

- The ``np.testing.utils`` functions have been updated from 1.19.0-dev0. This improves the function documentation and error messages as well as extending the ``assert_array_compare`` function to additional types.

*Contributors*

A total of 6 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- Charles Harris
- Eric Wieser
- Ilhan Polat
- Matti Picus
- Michael Hudson-Doyle
- Ralf Gommers

*Pull requests merged*

A total of 8 pull requests were merged for this release.

- #14593: MAINT: backport Cython API cleanup to 1.17.x, remove docs
- #14937: BUG: fix integer size confusion in handling array's ndmin argument
- #14939: BUILD: remove SSE2 flag from numpy.random builds
- #14993: MAINT: Added Python3.8 branch to dll lib discovery
- #15038: BUG: Fix refcounting in ufunc object loops
- #15067: BUG: Exceptions tracebacks are dropped
- #15175: ENH: Backport improvements to testing functions.
- #15213: REL: Prepare for the NumPy 1.17.5 release.
Cheers,

Charles Harris
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mhwende at protonmail.com Thu Jan 2 09:25:35 2020
From: mhwende at protonmail.com (mhwende)
Date: Thu, 02 Jan 2020 14:25:35 +0000
Subject: [SciPy-Dev] Proposed addition to scipy.sparse.csgraph.maximum_flow
Message-ID:

Hi,

Recently I have made use of the maximum flow algorithm for CSR matrices, i.e., the scipy.sparse.csgraph.maximum_flow function. My test case was a non-sparse, bipartite network with lots of connections between the two node sets. The value of the maximum flow was small in comparison to the total number of nodes and edges in the graph. I wanted to use the scipy.sparse.csgraph module, however with a depth-first search (DFS) instead of the breadth-first search (BFS) that you have implemented. Therefore, I have implemented a new option to use DFS instead of BFS with your implementation. When measuring the execution time for a random graph, there was a significant speedup of the DFS procedure compared to the BFS variant.

During these tests, I also refactored the two other functions that you call within the main csgraph.maximum_flow function, i.e., _add_reverse_edges and _make_edge_pointers. Depending on the input graph, I found that the time spent in these functions can be larger than the execution time of the actual Edmonds-Karp algorithm. Below, you can find a table showing the measured execution times in seconds. Within the columns of this table, the execution time is broken down into parts as follows:

a) Adding reverse edges (function _add_reverse_edges).
b) Making edge pointers (function _make_edge_pointers).
c) Actual max flow computation (function _edmond_karps).
d) Total execution time (function maximum_flow).

Rows of the table refer to the changes which were made to the implementation, where each measurement was performed with either DFS or BFS for the construction of augmenting paths:

1) Original implementation.
2) After the reimplementation of _make_edge_pointers.
3) After the reimplementation of _add_reverse_edges.

Here are the execution times:

        |   a)     |   b)     |   c)     |   d)
--------+----------+----------+----------+----------
1. BFS  |  1.29 s  |  3.81 s  |  8.93 s  | 14.08 s
1. DFS  |  1.29 s  |  3.82 s  |  0.50 s  |  5.66 s
--------+----------+----------+----------+----------
2. BFS  |  1.54 s  |  0.22 s  |  8.94 s  | 10.76 s
2. DFS  |  1.33 s  |  0.22 s  |  0.50 s  |  2.10 s
--------+----------+----------+----------+----------
3. BFS  |  0.16 s  |  0.27 s  |  9.01 s  |  9.50 s
3. DFS  |  0.16 s  |  0.23 s  |  0.52 s  |  0.96 s

By comparing the total execution times for BFS in 1d) and for DFS in 3d), my code changes have led to a speedup of about 14 for the network example which I have mentioned. Please let me know if you would like to incorporate some or all of my changes into your code base. My changes (a single commit) can be found here:

https://github.com/mhwende/scipy/commit/e7be2dc29364fe95f6fae20a4c0775716cdba5e5

Finally, here is the script which I have used for testing. The above timings have been produced with the command:

python scipymaxflowtest 2000 2000 --dfs=0

This will create a graph with 4002 nodes and 4,004,000 edges, with randomly chosen zero or unit capacities.
# File scipymaxflowtest.py
import time
import argparse

import numpy as np
import scipy.sparse as sp


def scipymaxflowtest(layer_sizes, dfs):
    layer_sizes = np.array(layer_sizes, dtype=np.int)
    num_layers = len(layer_sizes)
    print("Layer sizes:", *layer_sizes)
    print("Number of layers:", num_layers)
    num_nodes = 2 + np.sum(layer_sizes)
    print("Number of nodes:", num_nodes)
    num_edges = layer_sizes[0] + layer_sizes[-1]
    for i in range(layer_sizes.size - 1):
        num_edges += layer_sizes[i] * layer_sizes[i + 1]
    print("Number of edges:", num_edges)
    # data = np.ones(num_edges, dtype=np.int)
    data = np.random.randint(0, 2, num_edges)
    indices = np.zeros(num_edges, dtype=np.int)
    indptr = np.zeros(num_nodes + 1, dtype=np.int)
    ptr = 0
    for j in range(layer_sizes[0]):
        indices[ptr] = 2 + j
        ptr += 1
    indptr[1] = ptr
    indptr[2] = ptr
    layer_sum = 0
    for k in range(num_layers - 1):
        next_layer_sum = layer_sum + layer_sizes[k]
        for i in range(layer_sizes[k]):
            for j in range(layer_sizes[k + 1]):
                indices[ptr] = 2 + next_layer_sum + j
                ptr += 1
            indptr[2 + layer_sum + i + 1] = ptr
        layer_sum = next_layer_sum
    for i in range(layer_sizes[-1]):
        indices[ptr] = 1
        ptr += 1
        indptr[2 + layer_sum + i + 1] = ptr
    indptr[num_nodes] = ptr
    # Check that we set as many entries in the index array as there are edges.
    assert ptr == num_edges
    # Create CSR matrix from data and index arrays.
    a = sp.csr_matrix((data, indices, indptr), shape=(num_nodes, num_nodes))
    if num_nodes <= 12:
        print("A =\n{}".format(a.toarray()))
    t0 = time.time()
    flow_result = sp.csgraph.maximum_flow(a, 0, 1, dfs)
    t1 = time.time()
    flow_value = flow_result.flow_value
    print("Maximum flow: {} (took {:.6f} s)".format(flow_value, t1 - t0))
    return flow_value


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "layer_sizes", type=int, nargs="+",
        help="Number of nodes on individual graph layers.")
    parser.add_argument(
        "--dfs", type=int, default=0,
        help="Whether to use depth-first search.")
    args = parser.parse_args()
    np.random.seed(4891243)
    scipymaxflowtest(args.layer_sizes, args.dfs)

From ralf.gommers at gmail.com Fri Jan 3 05:53:31 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 3 Jan 2020 11:53:31 +0100
Subject: [SciPy-Dev] Making it easier to pin numpy in pyproject.toml
In-Reply-To:
References: <5bd0d03c-2653-4766-4571-67b6f642b218@aero.iitb.ac.in>
Message-ID:

On Thu, Dec 12, 2019 at 7:59 AM Ralf Gommers wrote:

> On Sat, Nov 30, 2019 at 7:31 AM Thomas Robitaille <thomas.robitaille at gmail.com> wrote:
>
>> Hi Ralf,
>>
>> On Sat, 30 Nov 2019 at 01:41, Ralf Gommers wrote:
>> >
>> > On Wed, Nov 27, 2019 at 1:32 AM Thomas Robitaille <thomas.robitaille at gmail.com> wrote:
>> >>
>> >> Hi Ralf and Prabhu,
>> >>
>> >> Thanks for your feedback! I should have clarified this originally, but pyproject.toml (in oldest-supported-numpy) is not the right place to define the Numpy pinnings, since pyproject build requirements are only relevant for building wheels of oldest-supported-numpy (they would not be inherited by downstream packages using oldest-supported-numpy). Numpy needs to be a runtime dependency of oldest-supported-numpy because in this case 'runtime' means inside the downstream build environment.
>> >>
>> >> In fact, I deliberately have only published a universal wheel (no sdist) on PyPI for oldest-supported-numpy, which means that pyproject.toml would never actually be used, and in addition this means that setuptools will not get invoked in the process either. The oldest-supported-numpy wheel only contains a bunch of metadata (no setup.cfg or setup.py), of which the main one that is relevant here is Requires-Dist, which I think will be directly interpreted by pip.
>> >
>> > Ah yes, that sounds correct.
>> >
>> > So back to your original question: do we want to maintain this under the SciPy org? It sounds like a useful idea to me. What do others think?
>> >
>> > Also Tom, do you want to remain one of the maintainers of this package when we move it under https://github.com/scipy/?
>>
>> I'd be happy to still be one of the maintainers - I would just be more comfortable with not being a single point of failure if this becomes a build-time dependency for a number of packages, which is why I thought it would make sense for it to live under a community organization like scipy.
>
> Sounds good to me. It looks like there are no concerns, so let's try and finalize this, I'd say. If you could send a transfer request for the repo, I can accept it, set up a new team under the SciPy GitHub org, and make you an admin of the team. And also give the whole SciPy team write access.

This is done now: https://github.com/scipy/oldest-supported-numpy.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lorentzen.ch at gmail.com Sat Jan 4 06:17:19 2020
From: lorentzen.ch at gmail.com (Christian Lorentzen)
Date: Sat, 4 Jan 2020 12:17:19 +0100
Subject: [SciPy-Dev] Wright's Generalized Bessel Function
Message-ID: <7d671672-cbe7-7517-6228-f1137e307377@googlemail.com>

Dear Scipy Developers,

I started the PR [1] in order to make Wright's generalized Bessel function a member of scipy.special (see the PR for more info). This is a necessary ingredient for the implementation of Tweedie distributions [2] in scipy.stats and an interesting function in its own right. AFAIK there is no public reference implementation. As it is an entire function in z, it is possible to use its defining series expansion:
- Every term needs a call to 1/Gamma(...).
- It is not very clear when to stop the expansion.
- There are asymptotic expansions, too. I got very unsatisfactory results from them.

My questions:
1) Do you want to include this function in scipy.special?
2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic convergence check; lambertw also has `tol`.
3) Implementation: Is it ok to start with a pure Python implementation and later on change to, e.g., Cython? Or is a Cython version mandatory for merging a PR?
4) How to test this function? There are very few special cases (one with scipy.special.iv).

References:
[1] https://github.com/scipy/scipy/pull/11313
[2] https://github.com/scipy/scipy/issues/11291

Kind regards,
Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ilhanpolat at gmail.com Sat Jan 4 08:12:15 2020
From: ilhanpolat at gmail.com (Ilhan Polat)
Date: Sat, 4 Jan 2020 14:12:15 +0100
Subject: [SciPy-Dev] Symmetric/Hermitian Eigenvalue Problem API and LAPACK wrappers change proposal
In-Reply-To:
References:
Message-ID:

Dear all,

Regarding this, the PR has seemingly stabilized. Please review it if you have time: https://github.com/scipy/scipy/pull/11304. It is kind of long due to the newly exposed functionality.
Best,
ilhan

On Mon, Dec 23, 2019 at 12:00 AM Ilhan Polat wrote:

> Dear all,
>
> I've finally managed to create some time to put together the current status of the eigh, eigvalsh functions and possible ways to improve and maintain them.
>
> The issue is here:
>
> https://github.com/scipy/scipy/issues/6502#issuecomment-241437506
>
> I would really appreciate any feedback on this. It's in WIP format and I'll add more information as I go along.
>
> Happy holidays to all!
>
> Best,
> ilhan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josh.craig.wilson at gmail.com Sat Jan 4 11:58:08 2020
From: josh.craig.wilson at gmail.com (Joshua Wilson)
Date: Sat, 4 Jan 2020 08:58:08 -0800
Subject: [SciPy-Dev] Wright's Generalized Bessel Function
In-Reply-To: <7d671672-cbe7-7517-6228-f1137e307377@googlemail.com>
References: <7d671672-cbe7-7517-6228-f1137e307377@googlemail.com>
Message-ID:

Some thoughts here:

> it is possible to use its defining series expansion

It looks like that series is going to suffer from cancellation for (at least) z along the negative real axis, so the asymptotic series will be needed in some fashion. In general, this function is reasonably complex. You will probably need to use multiple methods across various regions.

> 2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic convergence check, lambertw also has `tol`.

Including tol in lambertw was a mistake IMO, and I would not like to see it here. Generally we strive to have all functions be as correct as possible instead of fiddling with tols.

> 3) Implementation: Is it ok to start with a pure python implementation

As mentioned in the PR, this is not ok IMO, for some reasons outlined there.

> 4) How to test this function? There are very few special cases (one with scipy.special.iv)?
One approach is to do what Boost does in situations like this: they make a very naive implementation using arbitrary precision (use mpmath for this) and then compare that implementation to the optimized one.

> 1) Do you want to include this function in scipy.special?

It's in the DLMF, so I'd say it's reasonable to include. *But* I think you are underestimating how hard it is going to be to create a good implementation of this function (and the bar for special is high). Depending on your level of experience with implementing special functions, it could potentially be *a lot* of work. I don't know about the details of the Tweedie distributions, but you might want to think about whether you need a generic implementation of this function. If you only need it for some regions, e.g., we can just create a private implementation for those specific regions, potentially reducing the amount of work involved.

On Sat, Jan 4, 2020 at 3:18 AM Christian Lorentzen wrote:
>
> Dear Scipy Developers
>
> I started the PR [1] in order to make Wright's generalized Bessel function a member of scipy.special (see PR for more infos). This is a necessary ingredient for the implementation of Tweedie distributions [2] in scipy.stats and an interesting function in its own right.
>
> AFAIK there is no public reference implementation. As it is an entire function in z, it is possible to use its defining series expansion:
> - Every term needs a call to 1/Gamma(...).
> - It is not very clear when to stop the expansion.
> - There are asymptotic expansions, too. I got very unsatisfactory results from them.
>
> My questions:
> 1) Do you want to include this function in scipy.special?
> 2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic convergence check,
> lambertw also has `tol`.
> 3) Implementation: Is it ok to start with a pure python implementation and later on change to, e.g. Cython? Or is a Cython version mandatory for merging a PR?
> 4) How to test this function?
> There are very few special cases (one with scipy.special.iv)?
>
> References:
> [1] https://github.com/scipy/scipy/pull/11313
> [2] https://github.com/scipy/scipy/issues/11291
>
> Kind regards,
> Christian
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From charlesr.harris at gmail.com Mon Jan 6 18:20:30 2020
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 6 Jan 2020 16:20:30 -0700
Subject: [SciPy-Dev] NumPy 1.18.1 released
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce that NumPy 1.18.1 has been released. This release contains fixes for bugs reported against NumPy 1.18.0. Two bugs in particular that caused widespread problems downstream were:

- The cython random extension test was not using a temporary directory for building, resulting in a permission violation. Fixed.
- Numpy distutils was appending -std=c99 to all C compiler runs, leading to changed behavior and compile problems downstream. That flag is now only applied when building numpy C code.

The Python versions supported in this release are 3.5-3.8. Downstream developers should use Cython >= 0.29.14 for Python 3.8 support and OpenBLAS >= v0.3.7 to avoid errors on the Skylake architecture. Wheels for this release can be downloaded from PyPI; source archives and release notes are available from GitHub.

*Contributors*

A total of 7 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- Charles Harris
- Matti Picus
- Maxwell Aladago
- Pauli Virtanen
- Ralf Gommers
- Tyler Reddy
- Warren Weckesser

*Pull requests merged*

A total of 13 pull requests were merged for this release.

- #15158: MAINT: Update pavement.py for towncrier.
- #15159: DOC: add moved modules to 1.18 release note
- #15161: MAINT, DOC: Minor backports and updates for 1.18.x
- #15176: TST: Add assert_array_equal test for big integer arrays
- #15184: BUG: use tmp dir and check version for cython test.
- #15220: BUG: distutils: fix msvc+gfortran openblas handling corner case
- #15221: BUG: remove -std=c99 for c++ compilation
- #15222: MAINT: unskip test on win32
- #15223: TST: add BLAS ILP64 run in Travis & Azure
- #15245: MAINT: only add --std=c99 where needed
- #15246: BUG: lib: Fix handling of integer arrays by gradient.
- #15247: MAINT: Do not use private Python function in testing
- #15250: REL: Prepare for the NumPy 1.18.1 release.

Cheers,

Charles Harris
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matteoravasi at gmail.com Wed Jan 8 14:31:15 2020
From: matteoravasi at gmail.com (matteo ravasi)
Date: Wed, 8 Jan 2020 20:31:15 +0100
Subject: [SciPy-Dev] Implementation choices in scipy.sparse.linalg.LinearOperator
Message-ID: <0464AE5D-EF1D-4B14-A9FA-DCD3497156FA@gmail.com>

Dear Scipy Developers,

Some of you may be aware of the PyLops project (https://github.com/equinor/pylops), which heavily builds on the scipy.sparse.linalg.LinearOperator class of scipy and provides an extensive library of general purpose and domain specific linear operators. Recently a user asked for a functionality that would create the equivalent dense matrix representation of an operator (to be used mostly for teaching purposes). While implementing it, we came across two internal behaviours in the interface.py file that we would like to mention and discuss:

- The methods _matmat and _rmatmat, as implemented today, require operators to work on (N, 1)/(M, 1) arrays.
Whilst the first sentence in the Notes of LinearOperator states "The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case", I tend to believe that the first option should be the encouraged one, as I do not really see any advantage in carrying over trailing dimensions in this case. Moreover, some of the linear solvers in scipy that can be used together with a linear operator, such as lsqr or lsmr, require the RHS b vector to have a single dimension (b : (m,) ndarray). Again, although a quick test showed that they also work when providing a b vector of shape (m, 1), this suggests to me that the extra trailing dimension is just a nuisance and should be discouraged. Assuming that you agree with this argument, an alternative solution that works on (N,) input is actually possible and currently implemented in PyLops for _matmat (see here: https://github.com/equinor/pylops/blob/master/pylops/LinearOperator.py). This avoids having implementations that work for both (N,) and (N, 1), which sometimes requires adding additional checks and squeezes within the implementation of _matvec and _rmatvec in situations where trailing dimensions in the input vector are not really bringing any benefit.

- The above mentioned new feature has currently been added as a method to the PyLops LinearOperator class (and named todense, in a similar fashion to the scipy.sparse.xxx_matrix.todense methods). Doing so, we realized that every time two operators are summed/subtracted/multiplied (pretty much whenever one of the overloads of scipy.sparse.linalg.LinearOperator is invoked), the returned object is not of LinearOperator type anymore, but of the type of the private class in scipy.sparse.linalg.interface used to perform the requested operation (e.g., _SumLinearOperator). I wonder if returning an object of a "private" type is actually something recommended to do in scipy (or even if this happens anywhere else in scipy).
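To make the second point concrete, the behaviour can be reproduced in a few lines (a minimal sketch; the private class name is an implementation detail of scipy.sparse.linalg.interface and may differ between versions):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, aslinearoperator

# Wrap two dense matrices as LinearOperators.
A = aslinearoperator(np.eye(3))
B = aslinearoperator(np.ones((3, 3)))

# The overloaded __add__ builds a private wrapper class.
C = A + B

print(type(C).__name__)               # _SumLinearOperator
print(isinstance(C, LinearOperator))  # True: it still behaves like an operator
```

Any extra attributes or methods defined on a LinearOperator subclass are lost on C, which is exactly the issue described above.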
Again, in PyLops we need to overload the __add__ method (and similar methods) and make them into thin wrappers of the scipy equivalents, returning an object of our parent class LinearOperator. This is to ensure that the returned object maintains the additional properties and methods of our LinearOperator class, but also to avoid "leaking" any of the private classes to the end users. I wonder if the scipy API should perhaps also avoid returning objects of private classes to end users.

This email is not intended to criticise any of the current behaviours in scipy.sparse.linalg.LinearOperator, rather to gain an understanding of some of the decisions made by the core developers of the scipy.sparse.linalg.interface submodule, which may all have their own right to be the way they are today :)

Looking forward to hearing from you.

Best wishes,
MR
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From benny.malengier at gmail.com Thu Jan 9 05:38:58 2020
From: benny.malengier at gmail.com (Benny Malengier)
Date: Thu, 9 Jan 2020 11:38:58 +0100
Subject: [SciPy-Dev] ANN: ODES 2.6.0
Message-ID:

I am pleased to announce the release of the odes scikit 2.6.0.

For those interested in the Sundials ODE and DAE solvers, odes provides a Python interface to these. This version moves to the 2019 Sundials release, version 5.0, released October 2019. A start was also made to allow, in the future, easy access to the sensitivity versions cvodes and idas as well.

Code available at https://github.com/bmcage/odes, or via PyPI: https://pypi.org/project/scikits.odes/

Kind regards,
Benny Malengier
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Thu Jan 9 07:22:33 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 9 Jan 2020 13:22:33 +0100
Subject: [SciPy-Dev] Fwd: [NumFOCUS Projects] Job: Open Source Developer Advocate
In-Reply-To:
References:
Message-ID:

---------- Forwarded message ---------
From: Nicole Foster
Date: Wed, Jan 8, 2020 at 10:25 PM
Subject: [NumFOCUS Projects] Job: Open Source Developer Advocate
To: Fiscally Sponsored Project Representatives, Affiliated Projects

Hi Everyone,

We recently posted an ad for an Open Source Developer Advocate position with NumFOCUS. We would be thrilled to have a member of our community work with us here at NumFOCUS, so we ask that you please share this job posting with anyone you know that might be a good fit. Please feel free to reach out if you have any questions. Thank you for your time and consideration as we work to fill this role.

Best,
--
Nicole Foster
Executive Operations Administrator, NumFOCUS
nicole at numfocus.org
512-831-2870

--
You received this message because you are subscribed to the Google Groups "Fiscally Sponsored Project Representatives" group. To unsubscribe from this group and stop receiving emails from it, send an email to projects+unsubscribe at numfocus.org. To view this discussion on the web visit https://groups.google.com/a/numfocus.org/d/msgid/projects/CAJLwxPF5ezbsAEWk4pqWPXk6F7SAU8cESm6GrXpy1ak6%3DBzt6w%40mail.gmail.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aditya.ravuri at gmail.com Thu Jan 9 19:40:25 2020
From: aditya.ravuri at gmail.com (Aditya Ravuri)
Date: Fri, 10 Jan 2020 00:40:25 +0000
Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication
Message-ID:

Hello,

I'd like to add a function that performs efficient matrix-vector multiplication of Toeplitz matrices with vectors using the circulant matrix embedding trick. A simple example is attached.
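In outline, the trick embeds the n x n Toeplitz matrix in a 2n x 2n circulant matrix, whose action is diagonalized by the FFT (a sketch of the idea under the usual first-column/first-row parameterization; the attached script may differ in details):

```python
import numpy as np


def toeplitz_matvec(c, r, x):
    """Compute T @ x in O(n log n), where T is the Toeplitz matrix with
    first column c and first row r (r[0] is taken to equal c[0])."""
    n = len(c)
    # First column of the 2n-length circulant embedding: c, one padding
    # zero, then the first row (without its leading entry) reversed.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    # A circulant matrix is diagonalized by the DFT, so applying it to the
    # zero-padded x is an elementwise product in Fourier space.
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, len(col)))
    # The first n entries of C @ [x; 0] are exactly T @ x.
    return y[:n].real
```

For real inputs, np.fft.rfft/np.fft.irfft would roughly halve the work, and the same product can back a matrix-free LinearOperator for use with iterative solvers.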
May I proceed with coding up an implementation for SciPy based on this script? Is it something that could be added?

An example application of this is in statistics, where stationary processes have a Toeplitz covariance matrix. The efficient matrix-vector product can be used to compute log probabilities quickly, and perhaps even to make use of conjugate gradient solvers in the conditional distributions of multivariate normals.

Regards,
Aditya
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: toep_fft_mul.py
Type: text/x-python-script
Size: 649 bytes
Desc: not available
URL:

From rlucas7 at vt.edu Fri Jan 10 21:52:31 2020
From: rlucas7 at vt.edu (rlucas7 at vt.edu)
Date: Fri, 10 Jan 2020 21:52:31 -0500
Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 195, Issue 3
In-Reply-To:
References:
Message-ID:

> On Jan 4, 2020, at 12:00 PM, scipy-dev-request at python.org wrote:
>
> Send SciPy-Dev mailing list submissions to
> scipy-dev at python.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://mail.python.org/mailman/listinfo/scipy-dev
> or, via email, send a message with subject or body 'help' to
> scipy-dev-request at python.org
>
> You can reach the person managing the list at
> scipy-dev-owner at python.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of SciPy-Dev digest..."
>
> Today's Topics:
>
> 1. Wright's Generalized Bessel Function (Christian Lorentzen)
> 2. Re: Symmetric/Hermitian Eigenvalue Problem API and LAPACK
> wrappers change proposal (Ilhan Polat)
> 3.
Re: Wright's Generalized Bessel Function (Joshua Wilson)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 4 Jan 2020 12:17:19 +0100
> From: Christian Lorentzen
> To: scipy-dev at python.org
> Subject: [SciPy-Dev] Wright's Generalized Bessel Function
> Message-ID: <7d671672-cbe7-7517-6228-f1137e307377 at googlemail.com>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Dear Scipy Developers
>
> I started the PR [1] in order to make Wright's generalized Bessel
> function a member of scipy.special (see PR for more infos). This is a
> necessary ingredient for the implementation of Tweedie distributions [2]
> in scipy.stats and an interesting function in its own right.
>
> AFAIK there is no public reference implementation. As it is an entire
> function in z, it is possible to use its defining series expansion:
> - Every term needs a call to 1/Gamma(...).
> - It is not very clear when to stop the expansion.
> - There are asymptotic expansions, too. I got very unsatisfactory
> results from them.
>
> My questions:
> 1) Do you want to include this function in scipy.special?
> 2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic
> convergence check; lambertw also has `tol`.
> 3) Implementation: Is it ok to start with a pure python implementation
> and later on change to, e.g. Cython? Or is a Cython version mandatory
> for merging a PR?
> 4) How to test this function? There are very few special cases (one
> with scipy.special.iv)?
>
> References:
> [1] https://github.com/scipy/scipy/pull/11313
> [2] https://github.com/scipy/scipy/issues/11291
>
> Kind regards,
> Christian
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> Message: 2
> Date: Sat, 4 Jan 2020 14:12:15 +0100
> From: Ilhan Polat
> To: SciPy Developers List
> Subject: Re: [SciPy-Dev] Symmetric/Hermitian Eigenvalue Problem API
> and LAPACK wrappers change proposal
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> Dear all,
>
> Regarding this, PR is seemingly stabilized. Please review it if you have
> time https://github.com/scipy/scipy/pull/11304 as it is kind-of long due to
> the new exposed functionality.
>
> Best,
> ilhan
>
>> On Mon, Dec 23, 2019 at 12:00 AM Ilhan Polat wrote:
>>
>> Dear all,
>>
>> I've finally managed to create some time to put together the current
>> status of the eigh, eigvalsh functions and possible ways to improve and
>> maintain them.
>>
>> The issue is here
>>
>> https://github.com/scipy/scipy/issues/6502#issuecomment-241437506
>>
>> I would really appreciate any feedback for this. It's in WIP format and
>> I'll add more information as I go along.
>>
>> Happy holidays to all!
>>
>> Best,
>> ilhan
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> Message: 3
> Date: Sat, 4 Jan 2020 08:58:08 -0800
> From: Joshua Wilson
> To: SciPy Developers List
> Subject: Re: [SciPy-Dev] Wright's Generalized Bessel Function
> Message-ID:
> Content-Type: text/plain; charset="UTF-8"
>
> Some thoughts here:
>
>> it is possible to use its defining series expansion
>
> It looks like that series is going to suffer from cancellation for (at
> least) z along the negative real axis, so the asymptotic series will
> be needed in some fashion. In general, this function is reasonably
> complex. You will probably need to use multiple methods across various
> regions.

There are a couple of papers on numerical implementations of Wright's Bessel function. There doesn't appear to be much work in the area, nor applications beyond fractional differential equations.
So I'd second @person142 on the private method in special.

>> 2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic convergence check, lambertw also has `tol`.
>
> Including tol in lambertw was a mistake IMO, and I would not like to
> see it here. Generally we strive to have all functions be as correct
> as possible instead of fiddling with tols.
>
>> 3) Implementation: Is it ok to start with a pure python implementation
>
> As mentioned in the PR, this is not ok IMO, for some reasons outlined there.
>
>> 4) How to test this function? There are very few special cases (one with scipy.special.iv)?
>
> One approach is to do what Boost does in situations like this: they
> make a very naive implementation using arbitrary precision (use mpmath
> for this) and then compare that implementation to the optimized one.

I second the mpmath approach. Another approach is to find some papers that have some example data, use that data as test cases, and make sure the code reproduces the results. An upshot of this is that in the lit review process you might find an algorithmic approach that works better in a region.

>> 1) Do you want to include this function in scipy.special?
>
> It's in the DLMF, so I'd say it's reasonable to include. *But* I think
> you are underestimating how hard it is going to be to create a good
> implementation of this function (and the bar for special is high).
> Depending on your level of experience with implementing special
> functions, it could potentially be *a lot* of work. I don't know
> about the details of the Tweedie distributions, but you might want to
> think about whether you need a generic implementation of this
> function. If you only need it for some regions e.g. we can just create
> a private implementation for those specific regions, potentially
> reducing the amount of work involved.

For the Tweedie distributions the key parameter is p, which is supported on (1, infinity) and (-infinity, 0).
From the brief lit review I did on Google Scholar and publicly available docs, numerical coverage of the whole real support of p is probably going to be a lot of work. The most common applications of Tweedie I've used in the past were with 1<=p<=2, so it might make sense to work towards supporting those argument values first, then expanding coverage to other values of p. > >> On Sat, Jan 4, 2020 at 3:18 AM Christian Lorentzen >> wrote: >> >> Dear Scipy Developers >> >> I started the PR [1] in order to make Wright's generalized Bessel function a member of scipy.special (see PR for more infos). This is a necessary ingredient for the implementation of Tweedie distributions [2] in scipy.stats and an interesting function in its own right. >> >> AFAIK there is no public reference implementation. As it is an entire function in z, it is possible to use its defining series expansion: >> - Every term needs a call to 1/Gamma(...). >> - It is not very clear when to stop the expansion. >> - There are asymptotic expansions, too. I got very unsatisfactory results from them. >> >> My questions: >> 1) Do you want to include this function in scipy.special? >> 2) API: Optional arguments `tol` and `max_iter` seem to fit a dynamic convergence check, >> lambertw also has `tol`. >> 3) Implementation: Is it ok to start with a pure python implementation and later on change to, e.g. Cython? Or is a Cython version mandatory for merging a PR? >> 4) How to test this function? There are very little special cases (one with scipy.special.iv)?
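[Archive note: the "very naive implementation" testing strategy discussed in this thread can be sketched in a few lines of pure Python. This is only an illustration under double precision for moderate positive arguments — not the PR's code, and no substitute for the arbitrary-precision mpmath reference suggested above; `wright_bessel_series` is a hypothetical name.]

```python
import math

def wright_bessel_series(a, b, x, max_iter=200, tol=1e-16):
    # Naive defining series for Wright's generalized Bessel function:
    #   Phi(a, b, x) = sum_{k>=0} x**k / (k! * Gamma(a*k + b)),
    # truncated once a term no longer changes the running sum.
    # Only meant for moderate a, b, x > 0; no attempt is made to handle
    # cancellation, overflow, or poles of the Gamma function.
    total = 0.0
    for k in range(max_iter):
        term = x ** k / (math.factorial(k) * math.gamma(a * k + b))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total
```

[One of the few easy cross-checks is the scipy.special.iv relation mentioned in question 4: I_v(z) = (z/2)**v * Phi(1, v + 1, z**2/4), so for instance wright_bessel_series(1, 2, 1) should agree with iv(1, 2).]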
>> >> References: >> [1] https://github.com/scipy/scipy/pull/11313 >> [2] https://github.com/scipy/scipy/issues/11291 >> >> >> Kind regards, >> Christian >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 195, Issue 3 > ***************************************** From Former at physicist.net Sun Jan 12 11:38:08 2020 From: Former at physicist.net (Adam Kullberg) Date: Sun, 12 Jan 2020 11:38:08 -0500 Subject: [SciPy-Dev] PR's fixing hypergeometric function Message-ID: Hello guys, Is there anything I can do to expedite these PR's?? They fix the hypergeometric function and are a couple of years old. I am still willing to make fixes if there is any appetite for these fixes. https://github.com/scipy/scipy/pull/8548 https://github.com/scipy/scipy/pull/8151 https://github.com/scipy/scipy/pull/8110 Thank you! Adam From matthew.brett at gmail.com Sun Jan 12 12:56:36 2020 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 12 Jan 2020 17:56:36 +0000 Subject: [SciPy-Dev] PR's fixing hypergeometric function In-Reply-To: References: Message-ID: Hi, On Sun, Jan 12, 2020 at 4:39 PM Adam Kullberg wrote: > > Hello guys, > > Is there anything I can do to expedite these PR's? They fix the > hypergeometric function and are a couple of years old. I am still > willing to make fixes if there is any appetite for these fixes. > > https://github.com/scipy/scipy/pull/8548 > > https://github.com/scipy/scipy/pull/8151 > > https://github.com/scipy/scipy/pull/8110 > > Thank you! It would be great to see these get in. 
Cheers, Matthew From josh.craig.wilson at gmail.com Sun Jan 12 19:43:43 2020 From: josh.craig.wilson at gmail.com (Joshua Wilson) Date: Sun, 12 Jan 2020 16:43:43 -0800 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check Message-ID: Hey everyone, During code review, it seems that we care about missing whitespace after commas. (I have no data on this, but I suspect it is our most common linting nit.) Linting for this is currently disabled. Now on the one hand, we generally avoid turning on checks like this because of the disruption to open PRs and what they do to git history. On the other hand, I don't think enforcing this check by hand works. Reminding an author to add spaces after commas can easily add several rounds of extra review (some places almost invariably get missed on the first try, and then the problem tends to sneak back in in later commits), and I usually just give up after a while. So I don't think we are heading in the direction of "in 5 years all the bad cases will be gone and we can turn on the check without disruption". So I would kind of like to push us to either: 1. Turn the lint check on. We can do this module-by-module or even file-by-file if necessary to minimize disruption. 2. Declare that this is not something we care about in code review. Thoughts? - Josh From tyler.je.reddy at gmail.com Sun Jan 12 23:09:51 2020 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Sun, 12 Jan 2020 21:09:51 -0700 Subject: [SciPy-Dev] PR's fixing hypergeometric function In-Reply-To: References: Message-ID: I've added 1.5.0 milestones to the subset of those PRs lacking them to help prevent falling through the cracks. Fortran source adjustments are among the hardest for the team to review. On Sun, 12 Jan 2020 at 10:57, Matthew Brett wrote: > Hi, > > On Sun, Jan 12, 2020 at 4:39 PM Adam Kullberg > wrote: > > > > Hello guys, > > > > Is there anything I can do to expedite these PR's? 
They fix the > > hypergeometric function and are a couple of years old. I am still > > willing to make fixes if there is any appetite for these fixes. > > > > https://github.com/scipy/scipy/pull/8548 > > > > https://github.com/scipy/scipy/pull/8151 > > > > https://github.com/scipy/scipy/pull/8110 > > > > Thank you! > > It would be great to see these get in. > > Cheers, > > Matthew > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucas7 at vt.edu Mon Jan 13 12:40:59 2020 From: rlucas7 at vt.edu (rlucas7 at vt.edu) Date: Mon, 13 Jan 2020 12:40:59 -0500 Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 195, Issue 9 In-Reply-To: References: Message-ID: > On Jan 13, 2020, at 12:00 PM, scipy-dev-request at python.org wrote: > > Send SciPy-Dev mailing list submissions to > scipy-dev at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.python.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at python.org > > You can reach the person managing the list at > scipy-dev-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: PR's fixing hypergeometric function (Matthew Brett) > 2. Missing whitespace after commas in pycodestyle check > (Joshua Wilson) > 3.
Re: PR's fixing hypergeometric function (Tyler Reddy) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 12 Jan 2020 17:56:36 +0000 > From: Matthew Brett > To: SciPy Developers List > Subject: Re: [SciPy-Dev] PR's fixing hypergeometric function > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > Hi, > >> On Sun, Jan 12, 2020 at 4:39 PM Adam Kullberg wrote: >> >> Hello guys, >> >> Is there anything I can do to expedite these PR's? They fix the >> hypergeometric function and are a couple of years old. I am still >> willing to make fixes if there is any appetite for these fixes. >> >> https://github.com/scipy/scipy/pull/8548 >> >> https://github.com/scipy/scipy/pull/8151 >> >> https://github.com/scipy/scipy/pull/8110 >> >> Thank you! > > It would be great to see these get in. > > Cheers, > > Matthew > > > ------------------------------ > > Message: 2 > Date: Sun, 12 Jan 2020 16:43:43 -0800 > From: Joshua Wilson > To: SciPy Developers List > Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle > check > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > Hey everyone, > > During code review, it seems that we care about missing whitespace > after commas. (I have no data on this, but I suspect it is our most > common linting nit.) Linting for this is currently disabled. Now on > the one hand, we generally avoid turning on checks like this because > of the disruption to open PRs and what they do to git history. On the > other hand, I don't think enforcing this check by hand works. > Reminding an author to add spaces after commas can easily add several > rounds of extra review (some places almost invariably get missed on > the first try, and then the problem tends to sneak back in in later > commits), and I usually just give up after a while. 
So I don't think > we are heading in the direction of "in 5 years all the bad cases will > be gone and we can turn on the check without disruption". > > So I would kind of like to push us to either: > > 1. Turn the lint check on. We can do this module-by-module or even > file-by-file if necessary to minimize disruption. > 2. Declare that this is not something we care about in code review. > > Thoughts? Turning it on and the concomitant changes required might be a lot of work. Assuming that is true: a mechanism to enforce this on new PRs could be a pre-commit hook in git that has the linter configurations used, plus the one you mention. The pre-commit hook fails the commit locally if the linter fails. There are lots of examples on GitHub using flake8/black linters. If the disruption to enable it is minimal then this may not be needed. If we don't care about spaces after commas it's also not needed. > - Josh > > > ------------------------------ > > Message: 3 > Date: Sun, 12 Jan 2020 21:09:51 -0700 > From: Tyler Reddy > To: SciPy Developers List > Subject: Re: [SciPy-Dev] PR's fixing hypergeometric function > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > I've added 1.5.0 milestones to the subset of those PRs lacking them to help > prevent falling through the cracks. Fortran source adjustments are among > the hardest for the team to review. > >> On Sun, 12 Jan 2020 at 10:57, Matthew Brett wrote: >> >> Hi, >> >>> On Sun, Jan 12, 2020 at 4:39 PM Adam Kullberg >>> wrote: >>> >>> Hello guys, >>> >>> Is there anything I can do to expedite these PR's? They fix the >>> hypergeometric function and are a couple of years old. I am still >>> willing to make fixes if there is any appetite for these fixes. >>> >>> https://github.com/scipy/scipy/pull/8548 >>> >>> https://github.com/scipy/scipy/pull/8151 >>> >>> https://github.com/scipy/scipy/pull/8110 >>> >>> Thank you! >> >> It would be great to see these get in.
>> >> Cheers, >> >> Matthew >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 195, Issue 9 > ***************************************** From aditya.ravuri at gmail.com Mon Jan 13 17:01:22 2020 From: aditya.ravuri at gmail.com (Aditya Ravuri) Date: Mon, 13 Jan 2020 22:01:22 +0000 Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication In-Reply-To: References: Message-ID: Hello, I've submitted a draft PR for this on GitHub: https://github.com/scipy/scipy/pull/11346 An issue has been raised there that adding more functions increases the number of public functions, which is undesirable. If this mailing list can confirm that SciPy will never support matrix methods for special matrices over and above what's already implemented, I will move these lines to the documentation for scipy.linalg.toeplitz as suggested. Regards, Aditya On Fri, Jan 10, 2020 at 12:40 AM Aditya Ravuri wrote: > Hello, > > I'd like to add a function that performs efficient matrix-vector > multiplication of Toeplitz matrices with vectors using the circulant matrix > embedding trick. A simple example is attached. > > May I proceed with coding up an implementation for SciPy based on this > script? Is it something that could be added? > > An example application of this is in statistics, where stationary > processes have a Toeplitz covariance matrix.
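[Archive note: the circulant-embedding trick described in this thread can be sketched in a few lines of NumPy. This is an illustration of the idea only — `toeplitz_matvec` is a hypothetical helper, not the code attached to PR 11346: the n x n Toeplitz matrix with first column c and first row r is embedded in a (2n-1)-point circulant matrix, whose matrix-vector product is diagonalised by the FFT.]

```python
import numpy as np
from numpy.fft import fft, ifft

def toeplitz_matvec(c, r, x):
    # Multiply the n x n Toeplitz matrix T (first column c, first row r,
    # with r[0] == c[0]) by the vector x in O(n log n) time.
    c = np.asarray(c, dtype=float)
    r = np.asarray(r, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(c)
    # First column of the embedding (2n-1) x (2n-1) circulant: c followed
    # by r reversed without its first element, i.e. r[n-1], ..., r[1].
    v = np.concatenate([c, r[:0:-1]])
    # Zero-pad x to the circulant size, multiply in Fourier space, and
    # keep the first n entries of the result.
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    y = ifft(fft(v) * fft(x_pad))[:n]
    return y.real  # inputs are real, so the imaginary parts are roundoff
```

[For an n x n Toeplitz matrix this costs O(n log n) instead of the O(n^2) dense product, which is what makes the log-probability and conjugate-gradient use cases mentioned above feasible.]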
The efficient matrix vector > product can be used to compute log probabilities quickly and perhaps even > make use of conjugate gradient solvers in multivariate normals conditional > distributions. > > Regards, > > Aditya > -- AdityaR -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Jan 13 17:09:39 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 13 Jan 2020 23:09:39 +0100 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: Hey Josh, On Mon, Jan 13, 2020 at 1:44 AM Joshua Wilson wrote: > Hey everyone, > > During code review, it seems that we care about missing whitespace > after commas. (I have no data on this, but I suspect it is our most > common linting nit.) Linting for this is currently disabled. Now on > the one hand, we generally avoid turning on checks like this because > of the disruption to open PRs and what they do to git history. On the > other hand, I don't think enforcing this check by hand works. > Reminding an author to add spaces after commas can easily add several > rounds of extra review (some places almost invariably get missed on > the first try, and then the problem tends to sneak back in in later > commits), and I usually just give up after a while. So I don't think > we are heading in the direction of "in 5 years all the bad cases will > be gone and we can turn on the check without disruption". > I agree, the current workflow is not optimal. > So I would kind of like to push us to either: > > 1. Turn the lint check on. We can do this module-by-module or even > file-by-file if necessary to minimize disruption. > 2. Declare that this is not something we care about in code review. > Another option would be to run pycodestyle/pyflakes with all checks we want enabled, but only on the diff in the PR. This is what PyTorch does for example. 
The nice thing is you can test what you want without having to deal with code churn before enabling a check. The downside is that it's a bit convoluted to reproduce the "run on the diff to master" command locally (but we could add a script to streamline that probably). Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Mon Jan 13 18:15:00 2020 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Tue, 14 Jan 2020 00:15:00 +0100 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: I actually got pretty quick at fixing these stuff by the aid of some very mild nonaggressive scripts. Hence if we all agree I can go through all repo and fix everything once and ask the current PR owners to rebase. Sounds like a chore to everyone but I guess biting the bullet once and then turning on all the checks seems the least painful way. Otherwise Ralf's suggestion is also a good idea. On Mon, Jan 13, 2020 at 11:10 PM Ralf Gommers wrote: > Hey Josh, > > > On Mon, Jan 13, 2020 at 1:44 AM Joshua Wilson > wrote: > >> Hey everyone, >> >> During code review, it seems that we care about missing whitespace >> after commas. (I have no data on this, but I suspect it is our most >> common linting nit.) Linting for this is currently disabled. Now on >> the one hand, we generally avoid turning on checks like this because >> of the disruption to open PRs and what they do to git history. On the >> other hand, I don't think enforcing this check by hand works. >> Reminding an author to add spaces after commas can easily add several >> rounds of extra review (some places almost invariably get missed on >> the first try, and then the problem tends to sneak back in in later >> commits), and I usually just give up after a while. So I don't think >> we are heading in the direction of "in 5 years all the bad cases will >> be gone and we can turn on the check without disruption". 
>> > > I agree, the current workflow is not optimal. > > >> So I would kind of like to push us to either: >> >> 1. Turn the lint check on. We can do this module-by-module or even >> file-by-file if necessary to minimize disruption. >> 2. Declare that this is not something we care about in code review. >> > > Another option would be to run pycodestyle/pyflakes with all checks we > want enabled, but only on the diff in the PR. This is what PyTorch does for > example. The nice thing is you can test what you want without having to > deal with code churn before enabling a check. The downside is that it's a > bit convoluted to reproduce the "run on the diff to master" command locally > (but we could add a script to streamline that probably). > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Tue Jan 14 02:02:56 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 14 Jan 2020 18:02:56 +1100 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: > > >>> Another option would be to run pycodestyle/pyflakes with all checks we >> want enabled, but only on the diff in the PR. This is what PyTorch does for >> example. The nice thing is you can test what you want without having to >> deal with code churn before enabling a check. The downside is that it's a >> bit convoluted to reproduce the "run on the diff to master" command locally >> (but we could add a script to streamline that probably). >> > I'd prefer to do this. There are various PEP8 things that would be nice to have uniformly implemented throughout the codebase. I don't think we should let them slide by during a normal PR (because we'll never reach the ideal), so having a check on the diff is a good compromise. 
FWIW I did the change wholesale for a project of mine, and it is definitely more pleasing on the eye. It took me a couple of hours, and once it's done it's done. Then one can crack down. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaistriega at gmail.com Tue Jan 14 05:03:46 2020 From: kaistriega at gmail.com (Kai Striega) Date: Tue, 14 Jan 2020 18:03:46 +0800 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: +1 for Ralf's idea to run pycodestyle/pyflakes with all checks we want enabled, but only on the diff in the PR. Cleaning would also be quite important; I imagine quite a few potential contributors would be put off by having to fix *existing* PEP8 violations, so Ilhan's scripts would also be useful. On Tue, 14 Jan 2020 at 15:03, Andrew Nelson wrote: > >>>> Another option would be to run pycodestyle/pyflakes with all checks we >>> want enabled, but only on the diff in the PR. This is what PyTorch does for >>> example. The nice thing is you can test what you want without having to >>> deal with code churn before enabling a check. The downside is that it's a >>> bit convoluted to reproduce the "run on the diff to master" command locally >>> (but we could add a script to streamline that probably). >>> >> > I'd prefer to do this. There are various PEP8 things that would be nice to > have uniformly implemented throughout the codebase. I don't think we should > let them slide by during a normal PR (because we'll never reach the ideal), > so having a check on the diff is a good compromise. FWIW I did the change > wholesale for a project of mine, and it is definitely more pleasing on the > eye. It took me a couple of hours, and once it's done it's done. Then one > can crack down.
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Tue Jan 14 05:05:45 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 14 Jan 2020 21:05:45 +1100 Subject: [SciPy-Dev] Minimum Python the codebase needs to support Message-ID: What is the minimum Python version the codebase needs to support? There are plenty of cleanups possible as the minimum version increases. From PyPI it looks like we still need to cope with 3.5? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaistriega at gmail.com Tue Jan 14 05:16:28 2020 From: kaistriega at gmail.com (Kai Striega) Date: Tue, 14 Jan 2020 18:16:28 +0800 Subject: [SciPy-Dev] Minimum Python the codebase needs to support In-Reply-To: References: Message-ID: The minimum for SciPy 1.4 (Dec '19) is 3.5. Saying that, the toolchain roadmap [1] list the target for 2020 to be 3.6. [1]_https://docs.scipy.org/doc/scipy/reference/toolchain.html On Tue, 14 Jan 2020 at 18:06, Andrew Nelson wrote: > What is the minimum Python version the codebase needs to support? There > are plenty of cleanups possible as the minimum version increases. From PyPI > it looks like we still need to cope with 3.5? > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Tue Jan 14 05:35:58 2020 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 14 Jan 2020 13:35:58 +0300 Subject: [SciPy-Dev] Minimum Python the codebase needs to support In-Reply-To: References: Message-ID: How about continuing the cadence we had for 2.7 --- drop the CI in 1.3, but postpone code removals until 1.5? 
This can be year based (two minor releases, one full year) or release based (one minor release, 1/2 year). Evgeni On Tue, 14 Jan 2020 at 13:17, Kai Striega wrote: > The minimum for SciPy 1.4 (Dec '19) is 3.5. > > Saying that, the toolchain roadmap [1] list the target for 2020 to be 3.6. > > [1]_https://docs.scipy.org/doc/scipy/reference/toolchain.html > > On Tue, 14 Jan 2020 at 18:06, Andrew Nelson wrote: > >> What is the minimum Python version the codebase needs to support? There >> are plenty of cleanups possible as the minimum version increases. From PyPI >> it looks like we still need to cope with 3.5? >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Jan 14 05:42:33 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 14 Jan 2020 11:42:33 +0100 Subject: [SciPy-Dev] Minimum Python the codebase needs to support In-Reply-To: References: Message-ID: On Tue, Jan 14, 2020 at 11:37 AM Evgeni Burovski wrote: > How about continuing the cadence we had for 2.7 --- drop the CI in 1.3, > but postpone code removals until 1.5? > > This can be year based (two minor releases, one full year) or release > based (one minor release, 1/2 year). > That sounds reasonable to me. In master Python 3.5 is already dropped, so for new code we can start using 3.6+ features. But there's no reason to start cleaning up straight away and perhaps make backports harder. Ralf > Evgeni > > On Tue, 14 Jan 2020 at 13:17, Kai Striega wrote: > >> The minimum for SciPy 1.4 (Dec '19) is 3.5. >> >> Saying that, the toolchain roadmap [1] list the target for 2020 to be 3.6.
>> >> [1]_https://docs.scipy.org/doc/scipy/reference/toolchain.html >> >> On Tue, 14 Jan 2020 at 18:06, Andrew Nelson wrote: >> >>> What is the minimum Python version the codebase needs to support? There >>> are plenty of cleanups possible as the minimum version increases. From PyPI >>> it looks like we still need to cope with 3.5? >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Jan 14 05:46:34 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 14 Jan 2020 11:46:34 +0100 Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: On Tue, Jan 14, 2020 at 12:15 AM Ilhan Polat wrote: > I actually got pretty quick at fixing these stuff by the aid of some very > mild nonaggressive scripts. Hence if we all agree I can go through all repo > and fix everything once and ask the current PR owners to rebase. > You mean you want to fix up all PRs, right? Or all code in master? If we implement the diff checks I'm not sure the former is needed, but perhaps it helps. If the latter, I don't think that's a good idea. Ralf > Sounds like a chore to everyone but I guess biting the bullet once and > then turning on all the checks seems the least painful way. Otherwise > Ralf's suggestion is also a good idea. 
> > On Mon, Jan 13, 2020 at 11:10 PM Ralf Gommers > wrote: > >> Hey Josh, >> >> >> On Mon, Jan 13, 2020 at 1:44 AM Joshua Wilson < >> josh.craig.wilson at gmail.com> wrote: >> >>> Hey everyone, >>> >>> During code review, it seems that we care about missing whitespace >>> after commas. (I have no data on this, but I suspect it is our most >>> common linting nit.) Linting for this is currently disabled. Now on >>> the one hand, we generally avoid turning on checks like this because >>> of the disruption to open PRs and what they do to git history. On the >>> other hand, I don't think enforcing this check by hand works. >>> Reminding an author to add spaces after commas can easily add several >>> rounds of extra review (some places almost invariably get missed on >>> the first try, and then the problem tends to sneak back in in later >>> commits), and I usually just give up after a while. So I don't think >>> we are heading in the direction of "in 5 years all the bad cases will >>> be gone and we can turn on the check without disruption". >>> >> >> I agree, the current workflow is not optimal. >> >> >>> So I would kind of like to push us to either: >>> >>> 1. Turn the lint check on. We can do this module-by-module or even >>> file-by-file if necessary to minimize disruption. >>> 2. Declare that this is not something we care about in code review. >>> >> >> Another option would be to run pycodestyle/pyflakes with all checks we >> want enabled, but only on the diff in the PR. This is what PyTorch does for >> example. The nice thing is you can test what you want without having to >> deal with code churn before enabling a check. The downside is that it's a >> bit convoluted to reproduce the "run on the diff to master" command locally >> (but we could add a script to streamline that probably). 
>> >> Cheers, >> Ralf >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christoph.baumgarten at gmail.com Wed Jan 15 14:14:07 2020 From: christoph.baumgarten at gmail.com (Christoph Baumgarten) Date: Wed, 15 Jan 2020 20:14:07 +0100 Subject: [SciPy-Dev] Generate random variates using Cython Message-ID: Hi, I recently looked into rejection algorithms to generate random variates that rely on repeatedly generating uniform random variates until a termination condition is satisfied. A pure Python implementation is reasonably fast but much slower than a C implementation (e.g. Poisson distribution in numpy). I tried to use Cython to adapt my Python code (I just started to look into Cython, so it could be that I overlook something very basic) but I did not succeed and I would like to ask for help. The following code works; it generates rvs on [0, 1] with density 3 x**2 as an example (compiling it generates a warning though: Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION) but I struggle to get rid of the Python interaction np.random.rand() to speed it up.

cimport numpy as np
from numpy cimport ndarray, float64_t
import numpy as np
def rvs(int N):
    cdef double u, v
    cdef int i
    cdef np.ndarray[np.float64_t, ndim=1] x = np.empty((N, ), dtype=np.float64)
    for i in range(N):
        while (1):
            u = np.random.rand()
            v = np.random.rand()
            if u <= v*v:
                break
        x[i] = v
    return x

Following https://numpy.org/devdocs/reference/random/c-api.html (Cython API for random), I tried cimport numpy.random to use random_standard_uniform.
However, I get an error: 'numpy\random.pxd' not found I get similar errors when trying to compile the Cython examples here: https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html Cython version: 0.29.14 Numpy 1.17.4 python 3.7.1 Any guidance on how to access the uniform distribution in numpy using Cython would be great. Thanks Christoph -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Wed Jan 15 15:14:41 2020 From: matti.picus at gmail.com (Matti Picus) Date: Thu, 16 Jan 2020 07:14:41 +1100 Subject: [SciPy-Dev] Generate random variates using Cython In-Reply-To: References: Message-ID: <5a73b945-afd1-e685-6d32-8cd2ebd822e5@gmail.com> On 16/1/20 6:14 am, Christoph Baumgarten wrote: > > Hi, > > I recently looked into rejection algorithms to generate random > variates that rely on repeatedly generating uniform random variates > until a termination condition is satisfied. > A pure Python implementation is reasonably fast but much slower than a > C implementation (e.g. Poisson distribution in numpy). I tried to use > Cython to adapt my Python code (I just started to look into Cython, so > it could be that I overlook something very basic) but I did not > succeed and I would like to ask for help. > > The following code works; it generates rvs on [0, 1] with density 3 > x**2 as an example (compiling it generates a warning though: Warning > Msg: Using deprecated NumPy API, disable it with #define > NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION) but I struggle to get rid > of the Python interaction np.random.rand() to speed it up.
>
> cimport numpy as np
> from numpy cimport ndarray, float64_t
> import numpy as np
> def rvs(int N):
>     cdef double u, v
>     cdef int i
>     cdef np.ndarray[np.float64_t, ndim=1] x = np.empty((N, ), dtype=np.float64)
>     for i in range(N):
>         while (1):
>             u = np.random.rand()
>             v = np.random.rand()
>             if u <= v*v:
>                 break
>         x[i] = v
>     return x
>
> Following https://numpy.org/devdocs/reference/random/c-api.html > (Cython API for random), I tried > cimport numpy.random to use random_standard_uniform. > However, I get an error: 'numpy\random.pxd' not found > > I get similar errors when trying to compile the Cython examples here: > https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html > > Cython version: 0.29.14 > Numpy 1.17.4 > python 3.7.1 > > Any guidance on how to access the uniform distribution in numpy using > Cython would be great. Thanks > > Christoph > The new random c-api was broken in NumPy 1.17. Please try with NumPy 1.18. Matti From josh.craig.wilson at gmail.com Wed Jan 15 22:06:18 2020 From: josh.craig.wilson at gmail.com (Joshua Wilson) Date: Wed, 15 Jan 2020 19:06:18 -0800 Subject: Re: [SciPy-Dev] Missing whitespace after commas in pycodestyle check In-Reply-To: References: Message-ID: Seems like there are no objections to adding the checks to just the diffs; I'll set that up in upcoming weeks. Very much looking forward to eliminating that aspect of code review. - Josh On Tue, Jan 14, 2020 at 2:47 AM Ralf Gommers wrote: > > > > On Tue, Jan 14, 2020 at 12:15 AM Ilhan Polat wrote: >> >> I actually got pretty quick at fixing these stuff by the aid of some very mild nonaggressive scripts. Hence if we all agree I can go through all repo and fix everything once and ask the current PR owners to rebase. > > > You mean you want to fix up all PRs, right? Or all code in master? If we implement the diff checks I'm not sure the former is needed, but perhaps it helps. If the latter, I don't think that's a good idea. > > Ralf > > >> >> Sounds like a chore to everyone but I guess biting the bullet once and then turning on all the checks seems the least painful way. Otherwise Ralf's suggestion is also a good idea.
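[Archive note on the random-variates thread above: besides moving to the NumPy 1.18 Cython C API as Matti suggests, the rejection loop itself can be vectorised in plain NumPy, which removes most of the per-draw Python overhead. A rough sketch for the same 3*x**2 density on [0, 1] — illustrative only, not Christoph's code, and `rvs_vectorized` is a hypothetical name:]

```python
import numpy as np

def rvs_vectorized(n, rng=None):
    # Rejection sampler for the density 3*x**2 on [0, 1]:
    # draw (u, v) uniform on the unit square and keep v whenever u <= v*v.
    # Instead of a Python while-loop per sample, reject whole batches at once.
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(0)
    while out.size < n:
        # The acceptance probability is E[v**2] = 1/3, so oversample
        # each batch by roughly a factor of three.
        m = max(64, 4 * (n - out.size))
        u = rng.random(m)
        v = rng.random(m)
        out = np.concatenate([out, v[u <= v * v]])
    return out[:n]
```

[This keeps the same algorithm but amortises the Python-level overhead over whole arrays; a Cython version with the 1.18 bit-generator API would still be faster per draw, but the vectorised form is often fast enough in practice.]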
>> >> On Mon, Jan 13, 2020 at 11:10 PM Ralf Gommers wrote: >>> >>> Hey Josh, >>> >>> >>> On Mon, Jan 13, 2020 at 1:44 AM Joshua Wilson wrote: >>>> >>>> Hey everyone, >>>> >>>> During code review, it seems that we care about missing whitespace >>>> after commas. (I have no data on this, but I suspect it is our most >>>> common linting nit.) Linting for this is currently disabled. Now on >>>> the one hand, we generally avoid turning on checks like this because >>>> of the disruption to open PRs and what they do to git history. On the >>>> other hand, I don't think enforcing this check by hand works. >>>> Reminding an author to add spaces after commas can easily add several >>>> rounds of extra review (some places almost invariably get missed on >>>> the first try, and then the problem tends to sneak back in in later >>>> commits), and I usually just give up after a while. So I don't think >>>> we are heading in the direction of "in 5 years all the bad cases will >>>> be gone and we can turn on the check without disruption". >>> >>> >>> I agree, the current workflow is not optimal. >>> >>>> >>>> So I would kind of like to push us to either: >>>> >>>> 1. Turn the lint check on. We can do this module-by-module or even >>>> file-by-file if necessary to minimize disruption. >>>> 2. Declare that this is not something we care about in code review. >>> >>> >>> Another option would be to run pycodestyle/pyflakes with all checks we want enabled, but only on the diff in the PR. This is what PyTorch does for example. The nice thing is you can test what you want without having to deal with code churn before enabling a check. The downside is that it's a bit convoluted to reproduce the "run on the diff to master" command locally (but we could add a script to streamline that probably). 
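[Editor's note] Ralf's "run the checks only on the diff" suggestion can be sketched in a small amount of plain Python: parse a unified diff for the set of added or modified line numbers per file, then keep only the pycodestyle-style findings (file:line:col: message) that land on those lines. This is an illustrative sketch, not an existing SciPy or pycodestyle feature; the helper names `changed_lines` and `filter_report` are invented here, and a real setup would feed them the output of `git diff master` and of the linter.

```python
import re

def changed_lines(diff_text):
    """Map filename -> set of line numbers added or modified in a unified diff."""
    changed, current, lineno = {}, None, 0
    for line in diff_text.splitlines():
        if line.startswith('+++ '):
            # Start of a new file's hunks; strip the conventional 'b/' prefix.
            current = line[4:]
            if current.startswith('b/'):
                current = current[2:]
            changed[current] = set()
        elif line.startswith('@@'):
            # Hunk header, e.g. '@@ -1,2 +1,3 @@': read the new-file start line.
            lineno = int(re.search(r'\+(\d+)', line).group(1))
        elif line.startswith('+'):
            changed[current].add(lineno)
            lineno += 1
        elif not line.startswith('-'):
            # Context lines advance the new-file line counter; removals do not.
            lineno += 1
    return changed

def filter_report(report, changed):
    """Keep only findings of the form 'file:line:col: message' on changed lines."""
    kept = []
    for finding in report.splitlines():
        parts = finding.split(':', 3)
        if len(parts) == 4 and int(parts[1]) in changed.get(parts[0], ()):
            kept.append(finding)
    return kept
```

A CI job built this way would fail only when `filter_report` returns a non-empty list, so pre-existing style issues elsewhere in a touched file never block a PR.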
>>> >>> Cheers, >>> Ralf >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From rlucente at pipeline.com Thu Jan 16 06:45:05 2020 From: rlucente at pipeline.com (Robert Lucente - Pipeline.Com) Date: Thu, 16 Jan 2020 06:45:05 -0500 Subject: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy Signal package Message-ID: <038f01d5cc62$6572a090$3057e1b0$@pipeline.com> https://github.com/rapidsai/cusignal I was wondering what people thought of cuSignal? If the topic is not appropriate for this mailing list, please consider replying directly to me. From juanlu001 at gmail.com Thu Jan 16 09:20:07 2020 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Thu, 16 Jan 2020 15:20:07 +0100 Subject: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy Signal package In-Reply-To: <038f01d5cc62$6572a090$3057e1b0$@pipeline.com> References: <038f01d5cc62$6572a090$3057e1b0$@pipeline.com> Message-ID: The question is very broad, so I'm answering very broadly as well: I'm happy that someone is splitting the SciPy sub-packages into separate projects... I could see myself using cusignal (or cuintegrate, or cuoptimize) just because of that, regardless of the GPU stuff. On 1/16/20, Robert Lucente - Pipeline.Com wrote: > https://github.com/rapidsai/cusignal > > I was wondering what people thought of cuSignal? > > If the topic is not appropriate for this mailing list, please consider > replying directly to me. 
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- Juan Luis Cano From Sebastian.Semper at tu-ilmenau.de Thu Jan 16 09:28:24 2020 From: Sebastian.Semper at tu-ilmenau.de (Semper Sebastian TU Ilmenau) Date: Thu, 16 Jan 2020 14:28:24 +0000 Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication In-Reply-To: References: , Message-ID: You could just use https://fastmat.readthedocs.io/en/latest/ or https://github.com/equinor/pylops for efficient matrix-vector multiplications. ________________________________ Von: SciPy-Dev im Auftrag von Aditya Ravuri Gesendet: Montag, 13. Januar 2020 23:01:22 An: scipy-dev at python.org Betreff: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication Hello, I've submitted a draft PR for this on GitHub: https://github.com/scipy/scipy/pull/11346 An issue has been raised there that adding more functions increases the number of public functions, which is undesirable. If this mailing list can confirm that SciPy will never support matrix methods for special matrices over and above what's already implemented, I will move these lines to the documentation for scipy.linalg.toeplitz as suggested. Regards, Aditya On Fri, Jan 10, 2020 at 12:40 AM Aditya Ravuri > wrote: Hello, I'd like to add a function that performs efficient matrix-vector multiplication of Toeplitz matrices with vectors using the circulant matrix embedding trick. A simple example is attached. May I proceed with coding up an implementation for SciPy based on this script? Is it something that could be added? An example application of this is in statistics, where stationary processes have a Toeplitz covariance matrix. The efficient matrix-vector product can be used to compute log probabilities quickly and perhaps even make use of conjugate gradient solvers in multivariate normal conditional distributions. 
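[Editor's note] The circulant embedding trick Aditya describes fits in roughly ten lines of NumPy: extend the Toeplitz matrix (first column c, first row r) into a circulant of size 2n - 1, apply it with an FFT, and keep the first n entries. This is a hedged sketch of the idea rather than any existing SciPy function; `toeplitz_matvec` is an illustrative name.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Compute T @ x in O(n log n), where T is the Toeplitz matrix with
    first column c and first row r (r[0] is assumed to equal c[0])."""
    n = len(c)
    # First column of the (2n-1)-sized circulant embedding:
    # [c_0, ..., c_{n-1}, r_{n-1}, ..., r_1].
    circ = np.concatenate([c, r[:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    # A circulant matvec is a circular convolution, i.e. a pointwise
    # product in the Fourier domain.
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x_pad))
    return y[:n].real

# 3x3 example: first column [1, 2, 3], first row [1, 4, 5], so
# T = [[1, 4, 5], [2, 1, 4], [3, 2, 1]] and T @ [1, 1, 1] = [10, 7, 6].
y = toeplitz_matvec(np.array([1.0, 2.0, 3.0]),
                    np.array([1.0, 4.0, 5.0]),
                    np.array([1.0, 1.0, 1.0]))
```

For complex-valued matrices the `.real` cast would have to be dropped; for the statistics use case mentioned above (Toeplitz covariance matrices), c and r coincide because the matrix is symmetric.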
Regards, Aditya -- AdityaR -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucas7 at vt.edu Fri Jan 17 10:49:11 2020 From: rlucas7 at vt.edu (rlucas7 at vt.edu) Date: Fri, 17 Jan 2020 10:49:11 -0500 Subject: [SciPy-Dev] Regression in binned statistic dd Message-ID: <747D1F43-F9E8-4D8D-9AD4-CC7FAFD47DCF@vt.edu> Hi, I added a ValueError throw for binned_statistic_dd in https://github.com/scipy/scipy/pull/10664 which broke something in the lightkurve package. One of the devs opened this issue: https://github.com/scipy/scipy/issues/11365#issuecomment-574483999 I agreed with the proposal and opened this PR https://github.com/scipy/scipy/pull/10664 reverting the change, implementing a fix, and changing the test. If anyone has any time and can give it a look, that would be great. Would like to have a consensus on whether to backport and whether to implement the change as is or to modify it (if someone has a better idea). In particular, the _bin_edges function is where the defect resides, but handling the error there might be confusing for the end user to figure out what's going on. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scipy-dev at fuglede.dk Sat Jan 18 01:49:46 2020 From: scipy-dev at fuglede.dk (Søren Fuglede Jørgensen) Date: Sat, 18 Jan 2020 07:49:46 +0100 Subject: [SciPy-Dev] Proposed addition to scipy.sparse.csgraph.maximum_flow In-Reply-To: <20200117211020.GC23218@hackenturet.dk> References: <20200117211020.GC23218@hackenturet.dk> Message-ID: <20200118064946.GA27745@hackenturet.dk> Hi mhwende, There are definitely some interesting observations in here! I'll let someone more familiar with SciPy decide what the bar for acceptable complexity should be here, but certainly the DFS change seems minimal enough to not be harmful. Regarding the changes to the initialization functions called prior to the actual optimization: You mention that your inputs are non-sparse. 
When originally working on the benchmarks in https://github.com/scipy/scipy/pull/10566 I tried a few different sparsities. I seem to recall that I found the NumPy route preferable for all things initialization, but I wonder if that simply has to do with the sparsity of the inputs I used during testing. In particular, I would have expected your use of `transpose` to be much slower but your numbers certainly suggest otherwise. Would you be able to repeat your experiment with sparse inputs to see if the improvements are consistent? Not to turn this into a code review already, but I did have a few thoughts while reading through: - I didn't check for correctness, but while the new initialization code is a bit longer, it's also much more explicit which could be a maintenance pro. - Regarding the naming, having DFS as part of the signature for _edmonds_karp irked me slightly as -- at least in the literature I know of -- Edmonds--Karp really simply is Ford--Fulkerson with the further specification that augmenting paths be found using BFS. So really the new method is more like "Ford--Fulkerson with DFS". - Along the same lines, I would suggest allowing for a more general method description in `maximum_flow` itself than just a boolean to toggle between search order: When we're talking about finding performance improvements to the existing method, other more or less low-hanging fruits would be to include some of the other solvers for the same problem (push-relabel would be a natural first candidate). Indeed, doing so might lead to improvements that are even greater than the ones your tweak gave rise to (of course at the cost of complexity). But as such, it might be preferable to leave open the option to do so by having something like the "method" parameter from scipy.optimize, and using that instead of the simpler bool. - Even if it seems to be the natural choice for your inputs, I'm in favor of keeping BFS the default, as you also did. 
DFS has exponential worst case complexity [0] while Edmonds--Karp is polynomial time. Regards, Søren [0]: https://people.cs.clemson.edu/~bcdean/iterm.pdf On Thu, Jan 02, 2020 at 02:25:35PM +0000, mhwende wrote: > Hi, > > recently I have made use of the maximum flow algorithm for > CSR matrices, i.e., the scipy.sparse.csgraph.maximum_flow function. > > My test case was a non-sparse, bipartite network with lots of connections > between the two node sets. The value of the maximum flow was small in > comparison to the total number of nodes and edges in the graph. > I wanted to use the scipy.sparse.csgraph module, however with a depth first > search (DFS) instead of the breadth-first search that you have implemented. > Therefore, I have implemented a new option to use DFS instead of BFS with > your implementation. When measuring the execution time for a random graph, > there was a significant speedup of the DFS procedure compared to the > BFS variant. > > During these tests, I also refactored the two other functions that you call > within the main csgraph.maximum_flow function, i.e., _add_reverse_edges and > _make_edge_pointers. Depending on the input graph, I found that the time > spent in these functions can be larger than the execution time for the actual > Edmonds-Karp algorithm. > > Below, you can find a table showing the measured execution times in seconds. > Within the columns of this table, the execution time is broken down into parts > as follows: > > a) Adding reverse edges (function _add_reverse_edges). > b) Making edge pointers (function _make_edge_pointers). > c) Actual max flow computation (function _edmond_karps) > d) Total execution time (function maximum_flow). > > Rows of the table refer to the changes which were made to the implementation, > where each measurement was performed with either DFS or BFS for the > construction of augmenting paths: > > 1) Original implementation. > 2) After the reimplementation of _make_edge_pointers. 
> 3) After the reimplementation of _add_reverse_edges. > > Here are the execution times: > > | a) | b) | c) | d) > --------+-----------+-----------+-----------+----------- > 1. BFS | 1.29 s | 3.81 s | 8.93 s | 14.08 s > 1. DFS | 1.29 s | 3.82 s | 0.50 s | 5.66 s > --------+-----------+-----------+-----------+----------- > 2. BFS | 1.54 s | 0.22 s | 8.94 s | 10.76 s > 2. DFS | 1.33 s | 0.22 s | 0.50 s | 2.10 s > --------+-----------+-----------+-----------+----------- > 3. BFS | 0.16 s | 0.27 s | 9.01 s | 9.50 s > 3. DFS | 0.16 s | 0.23 s | 0.52 s | 0.96 s > > By comparing the total execution times for BFS in 1d) and for > DFS in 3d), my code changes have led to a speedup of about 14x, > for the network example which I have mentioned. > > Please let me know if you would like to incorporate some or all of my > changes into your code base. > My changes (a single commit) can be found here: > https://github.com/mhwende/scipy/commit/e7be2dc29364fe95f6fae20a4c0775716cdba5e5 > > Finally, here is the script which I have used for testing. > The above timings have been produced with the command: > > python scipymaxflowtest 2000 2000 --dfs=0 > > This will create a graph with 4000 nodes and 4,000,000 edges, > with randomly chosen zero or unit capacities. 
> > # File scipymaxflowtest.py > > import time > import argparse > import numpy as np > import scipy.sparse as sp > > > def scipymaxflowtest(layer_sizes, dfs): > layer_sizes = np.array(layer_sizes, dtype = np.int) > num_layers = len(layer_sizes) > print("Layer sizes:", *layer_sizes) > print("Number of layers:", num_layers) > num_nodes = 2 + np.sum(layer_sizes) > print("Number of nodes:", num_nodes) > num_edges = layer_sizes[0] + layer_sizes[-1] > for i in range(layer_sizes.size - 1): > num_edges += layer_sizes[i] * layer_sizes[i + 1] > print("Number of edges:", num_edges) > #data = np.ones(num_edges, dtype = np.int) > data = np.random.randint(0, 2, num_edges) > indices = np.zeros(num_edges, dtype = np.int) > indptr = np.zeros(num_nodes + 1, dtype = np.int) > ptr = 0 > for j in range(layer_sizes[0]): > indices[ptr] = 2 + j > ptr += 1 > indptr[1] = ptr > indptr[2] = ptr > layer_sum = 0 > for k in range(num_layers - 1): > next_layer_sum = layer_sum + layer_sizes[k] > for i in range(layer_sizes[k]): > for j in range(layer_sizes[k + 1]): > indices[ptr] = 2 + next_layer_sum + j > ptr += 1 > indptr[2 + layer_sum + i + 1] = ptr > layer_sum = next_layer_sum > for i in range(layer_sizes[-1]): > indices[ptr] = 1 > ptr += 1 > indptr[2 + layer_sum + i + 1] = ptr > indptr[num_nodes] = ptr > > # Check that we set as many entries in the index array as there are edges. > assert ptr == num_edges > > # Create CSR matrix from data and index arrays. 
> a = sp.csr_matrix((data, indices, indptr), shape = (num_nodes, num_nodes)) > if num_nodes <= 12: > print("A =\n{}".format(a.toarray())) > > t0 = time.time() > flow_result = sp.csgraph.maximum_flow(a, 0, 1, dfs) > t1 = time.time() > flow_value = flow_result.flow_value > print("Maximum flow: {} (took {:.6f} s)".format(flow_value, t1 - t0)) > return flow_value > > > if __name__ == "__main__": > parser = argparse.ArgumentParser() > parser.add_argument( > "layer_sizes", type = int, nargs = "+", > help = "Number of nodes on individual graph layers.") > parser.add_argument( > "--dfs", type = int, default = 0, > help = "Whether to use depth first search.") > args = parser.parse_args() > np.random.seed(4891243) > scipymaxflowtest(args.layer_sizes, args.dfs) > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From aditya.ravuri at gmail.com Sat Jan 18 12:20:59 2020 From: aditya.ravuri at gmail.com (Aditya Ravuri) Date: Sat, 18 Jan 2020 17:20:59 +0000 Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication In-Reply-To: References: Message-ID: Hi, Fastmat and pylops do not have all the Toeplitz methods implemented yet (to my knowledge). If it is determined that scipy is not the appropriate place for these functions, I'm happy to contribute there. Regards, Aditya On Thu, Jan 16, 2020, 5:01 PM wrote: > Send SciPy-Dev mailing list submissions to > scipy-dev at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.python.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at python.org > > You can reach the person managing the list at > scipy-dev-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. 
Re: cuSignal: GPU accelerated version of SciPy Signal package > (Juan Luis Cano) > 2. Re: Efficient Toeplitz Matrix-Vector Multiplication > (Semper Sebastian TU Ilmenau) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 16 Jan 2020 15:20:07 +0100 > From: Juan Luis Cano > To: SciPy Developers List > Subject: Re: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy > Signal package > Message-ID: > < > CAFmivUWhSKC4kFcoFsE36gxaN9Jg37W9S8HFcdybsyMMUs-JkQ at mail.gmail.com> > Content-Type: text/plain; charset="UTF-8" > > The question is very broad, so I'm answering very broadly as well: I'm > happy that someone is splitting the SciPy sub-packages into separate > projects... I could see myself using cusignal (or cuintegrate, or > cuoptimize) just because of that, regardless of the GPU stuff. > > On 1/16/20, Robert Lucente - Pipeline.Com wrote: > > https://github.com/rapidsai/cusignal > > > > I was wondering what people thought of cuSignal? > > > > If the topic is not appropriate for this mailing list, please consider > > replying directly to me. > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > -- > Juan Luis Cano > > > ------------------------------ > > Message: 2 > Date: Thu, 16 Jan 2020 14:28:24 +0000 > From: Semper Sebastian TU Ilmenau > To: SciPy Developers List > Subject: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector > Multiplication > Message-ID: > Content-Type: text/plain; charset="utf-8" > > You could just use > > > https://fastmat.readthedocs.io/en/latest/ > > > or > > > https://github.com/equinor/pylops > > > for efficient matrix-vector multiplications. ? > > ________________________________ > Von: SciPy-Dev tu-ilmenau.de at python.org> im Auftrag von Aditya Ravuri < > aditya.ravuri at gmail.com> > Gesendet: Montag, 13. 
Januar 2020 23:01:22 > An: scipy-dev at python.org > Betreff: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication > > Hello, > > I've submit a draft PR for this on GitHub: > https://github.com/scipy/scipy/pull/11346 > > An issue has been raised there that adding more functions increases the > number of public functions, which is undesirable. If this mailing list can > confirm that SciPy will never support matrix methods for special matrices > over and above what's already implemented, I will move these lines to the > documentation for scipy.linalg.toeplitz as suggested. > > Regards, > > Aditya > > > > On Fri, Jan 10, 2020 at 12:40 AM Aditya Ravuri > wrote: > Hello, > > I'd like to add a function that performs efficient matrix-vector > multiplication of Toeplitz matrices with vectors using the circulant matrix > embedding trick. A simple example is attached. > > May I proceed with coding up an implementation for SciPy based on this > script? Is it something that could be added? > > An example application of this is in statistics, where stationary > processes have a Toeplitz covariance matrix. The efficient matrix vector > product can be used to compute log probabilities quickly and perhaps even > make use of conjugate gradient solvers in multivariate normals conditional > distributions. > > Regards, > > Aditya > > > -- > AdityaR > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://mail.python.org/pipermail/scipy-dev/attachments/20200116/d12d20f8/attachment-0001.html > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 195, Issue 14 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Sat Jan 18 12:38:55 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 18 Jan 2020 18:38:55 +0100 Subject: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy Signal package In-Reply-To: References: <038f01d5cc62$6572a090$3057e1b0$@pipeline.com> Message-ID: On Thu, Jan 16, 2020 at 3:20 PM Juan Luis Cano wrote: > The question is very broad, so I'm answering very broadly as well: I'm > happy that someone is splitting the SciPy sub-packages into separate > projects... I could see myself using cusignal (or cuintegrate, or > cuoptimize) just because of that, regardless of the GPU stuff. > Keep in mind that it's basically a fork, and it may diverge at some point. The RAPIDS team does want to work with the community, so I hope we can evolve to some sort of backend system in the future, rather than just a separate package with a copy of the API. I do like RAPIDS, it has a lot of potential I think. Cheers, Ralf > On 1/16/20, Robert Lucente - Pipeline.Com wrote: > > https://github.com/rapidsai/cusignal > > > > I was wondering what people thought of cuSignal? > > > > If the topic is not appropriate for this mailing list, please consider > > replying directly to me. > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > -- > Juan Luis Cano > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tyler.je.reddy at gmail.com Sat Jan 18 18:58:35 2020 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Sat, 18 Jan 2020 16:58:35 -0700 Subject: [SciPy-Dev] Python 2.7 LTS support branch Message-ID: Hi, The tests are finally passing on the Python 2.7 LTS support branch of SciPy: https://github.com/scipy/scipy/pull/10758 I'm hoping this may be the last release to provide fixes for Python 2.7, given the difficulty of backporting fixes to such an old branch with a wide-ranging support profile. I just want to make sure I'm not missing anything important for Python 2.7 as we prepare for SciPy 1.2.3 LTS. Best wishes, Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucente at pipeline.com Mon Jan 20 09:32:35 2020 From: rlucente at pipeline.com (Robert Lucente) Date: Mon, 20 Jan 2020 09:32:35 -0500 (GMT-05:00) Subject: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 Message-ID: <515153040.1295.1579530755952@wamui-albus.atl.sa.earthlink.net> https://realpython.com/python-scipy-cluster-optimize/ I was wondering what people thought of the article From mhaberla at calpoly.edu Mon Jan 20 13:19:25 2020 From: mhaberla at calpoly.edu (Matt Haberland) Date: Mon, 20 Jan 2020 18:19:25 +0000 Subject: [SciPy-Dev] SciPy 2020 Conference Seeking Submissions and Reviewers Message-ID: Dear SciPy Developers, Besides an incredible library, "SciPy" is also the name of a conference about scientific computing with Python and the scientific Python ecosystem as a whole. SciPy 2020 will be July 6-12 in Austin, Texas. Talk, poster and tutorial submissions are due February 11th. We also seek volunteers to review submissions between February 20th and March 16th. Please indicate your interest here. Feel free to contact me if you have any questions about the conference, and I hope to see you there! 
Thanks, Matt Haberland -- Matt Haberland Assistant Professor BioResource and Agricultural Engineering 08A-3K, Cal Poly -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Mon Jan 20 15:39:42 2020 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 21 Jan 2020 07:39:42 +1100 Subject: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 In-Reply-To: <515153040.1295.1579530755952@wamui-albus.atl.sa.earthlink.net> References: <515153040.1295.1579530755952@wamui-albus.atl.sa.earthlink.net> Message-ID: <66363938-da17-631c-078e-2fbbfca71298@gmail.com> On 21/1/20 1:32 am, Robert Lucente wrote: > https://realpython.com/python-scipy-cluster-optimize/ > > I was wondering what people thought of the article Hey Robert. You are asking the mailing list, which could be hundreds of people, to review an article. This would at least require them to spend the time to read it (if they haven't already) and also spend the time to write a careful reply. However you do not reciprocate by telling us what you thought about it, why it interests you, and what level of response you are looking for: the graphics? the tone? the critique of SciPy? I would need more from you before I invest in a better reply. Matti From rlucente at pipeline.com Mon Jan 20 15:58:04 2020 From: rlucente at pipeline.com (Robert Lucente - Pipeline.Com) Date: Mon, 20 Jan 2020 15:58:04 -0500 Subject: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 In-Reply-To: <66363938-da17-631c-078e-2fbbfca71298@gmail.com> References: <515153040.1295.1579530755952@wamui-albus.atl.sa.earthlink.net> <66363938-da17-631c-078e-2fbbfca71298@gmail.com> Message-ID: <016001d5cfd4$4fc6c260$ef544720$@pipeline.com> >which could be hundreds of people >However you do not reciprocate by telling us what you thought about it, ... Ooops on my part. 
Let me organize my thoughts and write something up. Also, thanks for the guidance. This is my second or third posting to the mailing list and still trying to determine what is appropriate. -----Original Message----- From: SciPy-Dev On Behalf Of Matti Picus Sent: Monday, January 20, 2020 3:40 PM To: scipy-dev at python.org Subject: Re: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 On 21/1/20 1:32 am, Robert Lucente wrote: > https://realpython.com/python-scipy-cluster-optimize/ > > I was wondering what people thought of the article Hey Robert. You are asking the mailing list, which could be hundreds of people, to review an article. This would at least require them to spend the time to read it (if they haven't already) and also spend the time to write a careful reply. However you do not reciprocate by telling us what you thought about it, why it interests you, and what level of response you are looking for: the graphics? the tone? the critique of SciPy? I would need more from you before I invest in a better reply. Matti _______________________________________________ SciPy-Dev mailing list SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev From rlucente at pipeline.com Mon Jan 20 17:32:15 2020 From: rlucente at pipeline.com (Robert Lucente - Pipeline.Com) Date: Mon, 20 Jan 2020 17:32:15 -0500 Subject: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 References: <515153040.1295.1579530755952@wamui-albus.atl.sa.earthlink.net> <66363938-da17-631c-078e-2fbbfca71298@gmail.com> Message-ID: <000001d5cfe1$77c6dbd0$67549370$@pipeline.com> SciPy can be overwhelming for newbies. I am trying to find a single artifact which I can point them to. I believe that "Scientific Python: Using SciPy for Optimization" by Bryan Weber (https://realpython.com/python-scipy-cluster-optimize/) satisfies this criterion. 
It talks not only about SciPy but about the entire ecosystem. It then demonstrates SciPy w/ specific examples of clustering and minimizing a function. The demonstrations involve code so that people can see how easy it is to use. At the same time, the article is long enough and technical enough to work as an "are you serious?" test. No sense in wasting the reader's time or other people's time if they are not serious. Real Python has a really good reputation and so when I recommend it to people they are likely to pay attention. It won't just be another web page. Also, B. Weber has written a lot of good stuff (https://realpython.com/team/bweber/). For example "MATLAB vs Python: Why and How to Make the Switch" (https://realpython.com/matlab-vs-python/). Long-winded way of saying that the article has good credentials. I am new to SciPy and my math is rusty, so please consider reviewing the article for technical accuracy. >the graphics? the tone? the critique of SciPy? Any other perspectives on the article would be appreciated. 
-----Original Message----- From: SciPy-Dev On Behalf Of Matti Picus Sent: Monday, January 20, 2020 3:40 PM To: scipy-dev at python.org Subject: Re: [SciPy-Dev] Scientific Python: Using SciPy for Optimization by Bryan Weber at Real Python - Jan 20, 2020 On 21/1/20 1:32 am, Robert Lucente wrote: > https://realpython.com/python-scipy-cluster-optimize/ > > I was wondering what people thought of the article Hey Robert. You are asking the mailing list, which could be hundreds of people, to review an article. This would at least require them to spend the time to read it (i they haven't already) and also spend the time to write a careful reply. However you do not reciprocate by telling us what you thought about it, why it interests you, and what level of response you are looking for: the graphics? the tone? the critique of SciPy? I would need more from you before I invest in a better reply. Matti _______________________________________________ SciPy-Dev mailing list SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev From Sebastian.Semper at tu-ilmenau.de Tue Jan 21 10:40:10 2020 From: Sebastian.Semper at tu-ilmenau.de (Semper Sebastian TU Ilmenau) Date: Tue, 21 Jan 2020 15:40:10 +0000 Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication In-Reply-To: References: , Message-ID: <1d5efe924c3c49b38d4ea3a3418b2dcd@tu-ilmenau.de> Hi, as one of the developers of fastmat, I can tell you that we have Circulant, Toeplitz and respective multilevel-Circulant and Toeplitz routines implemented, which use your proposed trick to do a fast matrix-vector multiplication. ;-) Cheers Seb ________________________________ Von: SciPy-Dev im Auftrag von Aditya Ravuri Gesendet: Samstag, 18. Januar 2020 18:20:59 An: scipy-dev at python.org Betreff: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication Hi, Fastmat and pylops do not have all the toeplitz methods implemented yet (to my knowledge). 
If it is determined that scipy is not the appropriate place for these functions, I'm happy to contribute there. Regards, Aditya On Thu, Jan 16, 2020, 5:01 PM > wrote: Send SciPy-Dev mailing list submissions to scipy-dev at python.org To subscribe or unsubscribe via the World Wide Web, visit https://mail.python.org/mailman/listinfo/scipy-dev or, via email, send a message with subject or body 'help' to scipy-dev-request at python.org You can reach the person managing the list at scipy-dev-owner at python.org When replying, please edit your Subject line so it is more specific than "Re: Contents of SciPy-Dev digest..." Today's Topics: 1. Re: cuSignal: GPU accelerated version of SciPy Signal package (Juan Luis Cano) 2. Re: Efficient Toeplitz Matrix-Vector Multiplication (Semper Sebastian TU Ilmenau) ---------------------------------------------------------------------- Message: 1 Date: Thu, 16 Jan 2020 15:20:07 +0100 From: Juan Luis Cano > To: SciPy Developers List > Subject: Re: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy Signal package Message-ID: > Content-Type: text/plain; charset="UTF-8" The question is very broad, so I'm answering very broadly as well: I'm happy that someone is splitting the SciPy sub-packages into separate projects... I could see myself using cusignal (or cuintegrate, or cuoptimize) just because of that, regardless of the GPU stuff. On 1/16/20, Robert Lucente - Pipeline.Com > wrote: > https://github.com/rapidsai/cusignal > > I was wondering what people thought of cuSignal? > > If the topic is not appropriate for this mailing list, please consider > replying directly to me. 
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- Juan Luis Cano ------------------------------ Message: 2 Date: Thu, 16 Jan 2020 14:28:24 +0000 From: Semper Sebastian TU Ilmenau > To: SciPy Developers List > Subject: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication Message-ID: > Content-Type: text/plain; charset="utf-8" You could just use https://fastmat.readthedocs.io/en/latest/ or https://github.com/equinor/pylops for efficient matrix-vector multiplications. ________________________________ From: SciPy-Dev > on behalf of Aditya Ravuri > Sent: Monday, 13 January 2020 23:01:22 To: scipy-dev at python.org Subject: Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication Hello, I've submitted a draft PR for this on GitHub: https://github.com/scipy/scipy/pull/11346 An issue has been raised there that adding more functions increases the number of public functions, which is undesirable. If this mailing list can confirm that SciPy will never support matrix methods for special matrices over and above what's already implemented, I will move these lines to the documentation for scipy.linalg.toeplitz as suggested. Regards, Aditya On Fri, Jan 10, 2020 at 12:40 AM Aditya Ravuri >> wrote: Hello, I'd like to add a function that performs efficient matrix-vector multiplication of Toeplitz matrices with vectors using the circulant matrix embedding trick. A simple example is attached. May I proceed with coding up an implementation for SciPy based on this script? Is it something that could be added? An example application of this is in statistics, where stationary processes have a Toeplitz covariance matrix. The efficient matrix-vector product can be used to compute log probabilities quickly, and perhaps even to make use of conjugate gradient solvers in the conditional distributions of multivariate normals.
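The circulant embedding trick described above can be sketched as follows — a minimal NumPy illustration, not the script attached to the original mail or the code in the PR:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the n-by-n Toeplitz matrix with first column c and first
    row r (with c[0] == r[0]) by the vector x in O(n log n) time.

    The Toeplitz matrix is embedded in a circulant matrix of order
    2n - 1, whose product with the zero-padded x is computed via the
    FFT; the first n entries recover the Toeplitz product.
    """
    c, r, x = map(np.asarray, (c, r, x))
    n = len(c)
    # First column of the circulant embedding: the Toeplitz column,
    # followed by the first row reversed (without its leading entry).
    emb = np.concatenate([c, r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(xp))
    return y[:n].real
```

This computes the same product as forming scipy.linalg.toeplitz(c, r) densely and multiplying, without ever building the n-by-n matrix.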
Regards, Aditya -- AdityaR -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ SciPy-Dev mailing list SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev ------------------------------ End of SciPy-Dev Digest, Vol 195, Issue 14 ****************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteoravasi at gmail.com Tue Jan 21 11:03:22 2020 From: matteoravasi at gmail.com (Matteo Ravasi) Date: Tue, 21 Jan 2020 17:03:22 +0100 Subject: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication In-Reply-To: <1d5efe924c3c49b38d4ea3a3418b2dcd@tu-ilmenau.de> References: <1d5efe924c3c49b38d4ea3a3418b2dcd@tu-ilmenau.de> Message-ID: Hi Aditya, We do have Convolve1D (https://pylops.readthedocs.io/en/latest/api/generated/pylops.signalprocessing.Convolve1D.html#pylops.signalprocessing.Convolve1D ) which is basically what Toeplitz does in the description of the MIT page you point to in the PR. The choice of calling it Convolve1D in the first place is that I personally like things to be called for what they do (Toeplitz is a well-known matrix type, but Convolve is even more descriptive). Our method directly wraps scipy.signal.convolve when applied to 1-D signals, with the possibility of choosing the method of application (time or freq), and scipy.signal.fftconvolve when applied in batched mode (i.e., along one direction of an N-d signal). In adjoint mode it's just applying the same methods with the flipped filter. On top of that we also provide Convolve2D and ConvolveND, where the convolution is applied along two or more directions. That would effectively become a block-diagonal matrix of Toeplitz matrices, one more reason for using Convolve ;) I think by modifying the input filter appropriately you can get what circulant does as well.
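The forward/adjoint pair Matteo describes above (apply the filter, then apply the flipped filter) maps naturally onto scipy.sparse.linalg.LinearOperator. A minimal sketch with a hypothetical class name — not pylops' actual Convolve1D code:

```python
import numpy as np
from scipy.signal import convolve
from scipy.sparse.linalg import LinearOperator

class Convolve1DOperator(LinearOperator):
    """Acts like the (n + m - 1) x n Toeplitz matrix of 'full'
    convolution with a length-m filter h, without ever forming it."""

    def __init__(self, n, h):
        self.h = np.asarray(h, dtype=float)
        super().__init__(dtype=self.h.dtype,
                         shape=(n + len(self.h) - 1, n))

    def _matvec(self, x):
        # Forward: ordinary convolution with the filter.
        return convolve(x, self.h, mode="full")

    def _rmatvec(self, y):
        # Adjoint: convolution with the flipped filter, keeping only
        # the 'valid' part (length n).
        return convolve(y, self.h[::-1], mode="valid")

op = Convolve1DOperator(5, [1.0, -2.0, 1.0])
```

Anything in SciPy that accepts a LinearOperator — e.g. the conjugate gradient solvers mentioned earlier in the thread — can then consume such an operator directly.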
This is not to say that your PR should not be pushed forward. I see that Toeplitz and Circulant are very popular in many different fields of science, but again I stress that in my personal opinion they should be implemented by subclassing LinearOperator instead of as pure functions - effectively, what you do is a linear operator :) Thanks! iMR >> On 21 Jan 2020, at 16:41, Semper Sebastian TU Ilmenau wrote: > Hi, > as one of the developers of fastmat, I can tell you that we have Circulant, Toeplitz and the corresponding multilevel Circulant and Toeplitz routines implemented, which use your proposed trick to do a fast matrix-vector multiplication. ;-) > Cheers > Seb > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Tue Jan 21 12:53:40 2020 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 21 Jan 2020 10:53:40 -0700 Subject: [SciPy-Dev] ANN: SciPy 1.2.3 (LTS) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi all, On behalf of the SciPy development team I'm pleased to announce the release of SciPy 1.2.3, which is a bug fix release. This is part of the long-term support (LTS) branch that includes Python 2.7. Sources and binary wheels can be found at: https://pypi.org/project/scipy/ and at: https://github.com/scipy/scipy/releases/tag/v1.2.3 One of a few ways to install this release with pip: pip install scipy==1.2.3 ========================== SciPy 1.2.3 Release Notes ========================== SciPy 1.2.3 is a bug-fix release with no new features compared to 1.2.2. It is part of the long-term support (LTS) release series for Python 2.7.
Authors ======= * Geordie McBain * Matt Haberland * David Hagen * Tyler Reddy * Pauli Virtanen * Eric Larson * Yu Feng * ananyashreyjain * Nikolay Mayorov * Evgeni Burovski * Warren Weckesser Issues closed for 1.2.3 -------------------------------- * `#4915 `__: Bug in unique_roots in scipy.signal.signaltools.py for roots with same magnitude * `#5546 `__: ValueError raised if scipy.sparse.linalg.expm recieves array larger than 200x200 * `#7117 `__: Warn users when using float32 input data to curve_fit and friends * `#7906 `__: Wrong result from scipy.interpolate.UnivariateSpline.integral for out-of-bounds * `#9581 `__: Least-squares minimization fails silently when x and y data are different types * `#9901 `__: lsoda fails to detect stiff problem when called from solve_ivp * `#9988 `__: doc build broken with Sphinx 2.0.0 * `#10303 `__: BUG: optimize: `linprog` failing TestLinprogSimplexBland::test_unbounded_below_no_presolve_corrected * `#10376 `__: TST: Travis CI fails (with pytest 5.0 ?) * `#10384 `__: CircleCI doc build failing on new warnings * `#10535 `__: TST: master branch CI failures * `#11121 `__: Calls to `scipy.interpolate.splprep` increase RAM usage. * `#11198 `__: BUG: sparse eigs (arpack) shift-invert drops the smallest eigenvalue for some k * `#11266 `__: Sparse matrix constructor data type detection changes on Numpy 1.18.0 Pull requests for 1.2.3 -------------------------------- * `#9992 `__: MAINT: Undo Sphinx pin * `#10071 `__: DOC: reconstruct SuperLU permutation matrices avoiding SparseEfficiencyWarning * `#10076 `__: BUG: optimize: fix curve_fit for mixed float32/float64 input * `#10138 `__: BUG: special: Invalid arguments to ellip_harm can crash Python. 
* `#10306 `__: BUG: optimize: Fix for 10303 * `#10309 `__: BUG: Pass jac=None directly to lsoda * `#10377 `__: TST, MAINT: adjustments for pytest 5.0 * `#10379 `__: BUG: sparse: set writeability to be forward-compatible with numpy>=1.17 * `#10426 `__: MAINT: Fix doc build bugs * `#10540 `__: MAINT: Fix Travis and Circle * `#10633 `__: BUG: interpolate: integral(a, b) should be zero when both limits are outside of the interpolation range * `#10833 `__: BUG: Fix subspace_angles for complex values * `#10882 `__: BUG: sparse/arpack: fix incorrect code for complex hermitian M * `#10906 `__: BUG: sparse/linalg: fix expm for np.matrix inputs * `#10961 `__: BUG: Fix signal.unique_roots * `#11126 `__: BUG: interpolate/fitpack: fix memory leak in splprep * `#11199 `__: BUG: sparse.linalg: mistake in unsymm. real shift-invert ARPACK eigenvalue selection * `#11269 `__: Fix: Sparse matrix constructor data type detection changes on Numpy 1.18.0 Checksums ========= MD5 ~~~ 702e7f68e024359d5cf0627337944520 scipy-1.2.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 53035df3bf9c688fb7803df11b74eb97 scipy-1.2.3-cp27-cp27m-manylinux1_i686.whl 6a07e4864228d4d6bc8d396b9e09f71c scipy-1.2.3-cp27-cp27m-manylinux1_x86_64.whl 645b4211dc2f2b3bdc4e5cf85454e164 scipy-1.2.3-cp27-cp27m-win32.whl 49e6c189ca1a0d92f426e4efaa782f37 scipy-1.2.3-cp27-cp27m-win_amd64.whl a875abf6f3f52fac916739dd556ccefb scipy-1.2.3-cp27-cp27mu-manylinux1_i686.whl 46138092ed3b9e9f0b67afb3765ca041 scipy-1.2.3-cp27-cp27mu-manylinux1_x86_64.whl 608650168cfeda8fb9c1f44ccfb9b6a7 scipy-1.2.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 3acbfaf35ad246e4cffd0f7e35f88483 scipy-1.2.3-cp34-cp34m-manylinux1_i686.whl 807bca9d17cf3221a062c6b72192a1ed scipy-1.2.3-cp34-cp34m-manylinux1_x86_64.whl 327a19bd65112ee59425f57cf65dce67 scipy-1.2.3-cp34-cp34m-win32.whl a1b627a5b0b1adb3c5418d3c9081615b 
scipy-1.2.3-cp34-cp34m-win_amd64.whl 80d65fd4266bcb29e02b03545ae80b7a scipy-1.2.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl bd1e8f80f9e0427855ff10810e68118f scipy-1.2.3-cp35-cp35m-manylinux1_i686.whl 35f03968403ad3049db956b90a346a59 scipy-1.2.3-cp35-cp35m-manylinux1_x86_64.whl bf489f87f8bfeb6a14c5e606b539f16a scipy-1.2.3-cp35-cp35m-win32.whl 7a618445b53f4f8671352ea52df5cc9f scipy-1.2.3-cp35-cp35m-win_amd64.whl 7b7e8889babc121b4c1340f4b8081423 scipy-1.2.3-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 8339ebba078ee752b2bd15eb3fc64f2f scipy-1.2.3-cp36-cp36m-manylinux1_i686.whl 4dd7be670ce0b1bd2b4c3f5038cf6fb9 scipy-1.2.3-cp36-cp36m-manylinux1_x86_64.whl 9130c940f3b811b9281ac64a3bd610a6 scipy-1.2.3-cp36-cp36m-win32.whl adeb6e4e9c270df4a5e55d0f1a4a72f0 scipy-1.2.3-cp36-cp36m-win_amd64.whl 19f6644944227c64c50dedbb04f8d91d scipy-1.2.3-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl f75450b9399acd7eb3c0fc1724de4be3 scipy-1.2.3-cp37-cp37m-manylinux1_i686.whl fc1cb1164479e070af722e579604bea2 scipy-1.2.3-cp37-cp37m-manylinux1_x86_64.whl 6c2057a41b73f17daf100f3fc1252903 scipy-1.2.3-cp37-cp37m-win32.whl 27f26a031ac1884ac5cb3e35343aba7c scipy-1.2.3-cp37-cp37m-win_amd64.whl 43b42a507472dfa1dff4c91d58a6543f scipy-1.2.3.tar.gz 561ee26a6d0a9b31d644db5e8244bc76 scipy-1.2.3.tar.xz 5b9f47308d06b22078810fca4f97fad2 scipy-1.2.3.zip SHA256 ~~~~~~ 2ff82db1393bd5d8ddeb9134e8f77a8e971e635452d7e65f7238f40c71d385a8 scipy-1.2.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 5d950892a6a2da6dae2b5a3021b329dbf04483a7fb0bd3011685db7f51578ae7 scipy-1.2.3-cp27-cp27m-manylinux1_i686.whl 487a61a7e477923c7a9e8fa06f27e3f2dbc7ce9450a970a2e5d902b0a305d028 scipy-1.2.3-cp27-cp27m-manylinux1_x86_64.whl 
3397ce30e240e9a543d81f623a9e8e98ae39012cf72e42e95929530a03f20791 scipy-1.2.3-cp27-cp27m-win32.whl 8533e8c2e467eed913a0aa4fac09c9fb824f32d1ab1121db3a50845f9b347825 scipy-1.2.3-cp27-cp27m-win_amd64.whl b6cbb7125b0c742e0f31fb293d19e9f1a03db58f6ddefc51a2025ee15ae607cb scipy-1.2.3-cp27-cp27mu-manylinux1_i686.whl e870dfd006ab657f9cc48b099646c5ceb4e812e59a7d460dc80ac9a659f089dc scipy-1.2.3-cp27-cp27mu-manylinux1_x86_64.whl b8b0e81a2b87d68acf4a54aea800edafbb5bc9a04f38256718826f95f625fb75 scipy-1.2.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl ecc27dda36d7172bb4ed7f245499e28251fa17909b314b4e8956a942f302e1a3 scipy-1.2.3-cp34-cp34m-manylinux1_i686.whl ee213d8c8eee0540ba91efca61605dc0e2b5bd90ca35ab49c85f0ad7038df00f scipy-1.2.3-cp34-cp34m-manylinux1_x86_64.whl 57a0ee083b94944ba6329e755d112bfd53a98bd9a6a4faf10bc7722c955a7653 scipy-1.2.3-cp34-cp34m-win32.whl 1bc0720f149fbb69d19156cf91730aa21455c58949aea56bfaa2b74c06868100 scipy-1.2.3-cp34-cp34m-win_amd64.whl d39ae9cfd7bdea7753fd617e2edfc68b94a019a65c0153c3fced35cf657b79e7 scipy-1.2.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 7e6c485c3f2789c790fe4cc5c6c3b4554000d284fd05be90479b6fbd8795d064 scipy-1.2.3-cp35-cp35m-manylinux1_i686.whl 92fbdcf9acebf2007502adbed1b22e3a3e9445aceb5e7a568f9759d76461368b scipy-1.2.3-cp35-cp35m-manylinux1_x86_64.whl b2fad2430232f8c2faeb0abff7ed5191f0a534bd1bd482c71a9616c44e674c76 scipy-1.2.3-cp35-cp35m-win32.whl 8b3ac1e50188792fdf811e5a747df2b784c65d9a17f59609a73c8285424a48ad scipy-1.2.3-cp35-cp35m-win_amd64.whl b38f2e6d53f852ae1de1aacead2c26a69c91b36f833e744557ca370979b81652 scipy-1.2.3-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 39bf06b2536e69f08a257b260aaa25d088d755b73cf3751b44e5838ccb1d0a82 scipy-1.2.3-cp36-cp36m-manylinux1_i686.whl c08013e0fe1554372da9312d5bad588402f71c0636f0f86a9b9b61d507c59bac 
scipy-1.2.3-cp36-cp36m-manylinux1_x86_64.whl cac83647970115dfb6d29dc3b4ab44b3aa11254e8aeba115f88ad0ccbb273085 scipy-1.2.3-cp36-cp36m-win32.whl 0c23e5b3a314dce4049b969c81ad801cf05e1fe699760846c7567deaa9b8c548 scipy-1.2.3-cp36-cp36m-win_amd64.whl 503e25b8da22b1be6f2f81e1dcf26f42bfb13fe89bccbf8bc48e1b6f2a4789c8 scipy-1.2.3-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 19904399501ecf56ead21b307cc52c8ff03a2103343f554f3fbc189bb2cf1609 scipy-1.2.3-cp37-cp37m-manylinux1_i686.whl f658f90db1530f70128bbeb2c1e61006de2eb2382408080a211d885f501af3a7 scipy-1.2.3-cp37-cp37m-manylinux1_x86_64.whl 7aeb232273b38e74d89181d9165ca47314e1c4269afe7c3a1926aea3e63345a2 scipy-1.2.3-cp37-cp37m-win32.whl 53353a504ad2181eb27c9d61e91ce29324f8b7550876736a08a36e6b60c43407 scipy-1.2.3-cp37-cp37m-win_amd64.whl ecbe6413ca90b8e19f8475bfa303ac001e81b04ec600d17fa7f816271f7cca57 scipy-1.2.3.tar.gz 94ef2ac3c9c83cbbda0a5c5552fffa22993c1c21313c0eb7a2e7102b4629bc31 scipy-1.2.3.tar.xz a4f09f8c6f5924582fc61d3c5cd8174e8f8857a45dfa4a81c0d134b7c4af74cc scipy-1.2.3.zip -----BEGIN PGP SIGNATURE----- iQGzBAEBCAAdFiEEuinQzfUTSK0iykiguEZ+VI8xggYFAl4nNasACgkQuEZ+VI8x ggYXiQwArWCb+6PjBb/vyjDO8KOjYmVCCNm8OxC1OvJ3Nvm+CMOY9z8nurfImpZ3 3mL/pl3mqmvMj7u7i30pOVXlfdVUaAoRKbmyjOnXDGTOTogbPEkkf6PPJOKgdXYo 1ml+FmdAFPIyNJhJOMv9ME5rssZlLqpwxbwqKsRBnC3/eHLLXgJ2/kFL9dunPSDN upfSpptbBbuQJpsA6+OxUB3DKZ3VsxbOG7+b+2dIiKfsRkt1N2nQi4g/HJqbQ85P 8G7s4Twqo6DToUZdDcud/C+hsUsFkqRRh/0osLjuu99H3z4y0+F6rZSedwf5T4Wa zFU5IFiU92T4pA5Y0EnZxJS4MnB7jaVeSe+FLf23q7Y6Hi1nC3s4/rLxqUhKzO0V T02G+/hAyeFol2/6quwephzn3Vv+fiZ+hg7zEvDnnjP/TlmSwW/csLRTHXb1qu/z FXxb/Xs1f+/jPbjwqb0gA7vpzs7OgodWZtML+0FOxPwjyNBxWQQe/pbuppjtszQq 2iysS9UM =E9q4 -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at gmail.com Tue Jan 21 12:58:18 2020 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Tue, 21 Jan 2020 12:58:18 -0500 Subject: [SciPy-Dev] ANN: SciPy 1.2.3 (LTS) In-Reply-To: References: Message-ID: Thanks Tyler! Warren On 1/21/20, Tyler Reddy wrote: > Hi all, > > On behalf of the SciPy development team I'm pleased to announce > the release of SciPy 1.2.3, which is a bug fix release. This is part > of the long-term support (LTS) branch that includes Python 2.7. From evgeny.burovskiy at gmail.com Tue Jan 21 13:10:52 2020 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 21 Jan 2020 21:10:52 +0300 Subject: [SciPy-Dev] ANN: SciPy 1.2.3 (LTS) In-Reply-To: References: Message-ID: Great work Tyler, much appreciated! Tue, 21 Jan 2020, 20:54 Tyler Reddy : > Hi all, > > On behalf of the SciPy development team I'm pleased to announce > the release of SciPy 1.2.3, which is a bug fix release. This is part > of the long-term support (LTS) branch that includes Python 2.7.
scipy-1.2.3-cp27-cp27m-manylinux1_i686.whl > 487a61a7e477923c7a9e8fa06f27e3f2dbc7ce9450a970a2e5d902b0a305d028 > scipy-1.2.3-cp27-cp27m-manylinux1_x86_64.whl > 3397ce30e240e9a543d81f623a9e8e98ae39012cf72e42e95929530a03f20791 > scipy-1.2.3-cp27-cp27m-win32.whl > 8533e8c2e467eed913a0aa4fac09c9fb824f32d1ab1121db3a50845f9b347825 > scipy-1.2.3-cp27-cp27m-win_amd64.whl > b6cbb7125b0c742e0f31fb293d19e9f1a03db58f6ddefc51a2025ee15ae607cb > scipy-1.2.3-cp27-cp27mu-manylinux1_i686.whl > e870dfd006ab657f9cc48b099646c5ceb4e812e59a7d460dc80ac9a659f089dc > scipy-1.2.3-cp27-cp27mu-manylinux1_x86_64.whl > b8b0e81a2b87d68acf4a54aea800edafbb5bc9a04f38256718826f95f625fb75 > scipy-1.2.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > ecc27dda36d7172bb4ed7f245499e28251fa17909b314b4e8956a942f302e1a3 > scipy-1.2.3-cp34-cp34m-manylinux1_i686.whl > ee213d8c8eee0540ba91efca61605dc0e2b5bd90ca35ab49c85f0ad7038df00f > scipy-1.2.3-cp34-cp34m-manylinux1_x86_64.whl > 57a0ee083b94944ba6329e755d112bfd53a98bd9a6a4faf10bc7722c955a7653 > scipy-1.2.3-cp34-cp34m-win32.whl > 1bc0720f149fbb69d19156cf91730aa21455c58949aea56bfaa2b74c06868100 > scipy-1.2.3-cp34-cp34m-win_amd64.whl > d39ae9cfd7bdea7753fd617e2edfc68b94a019a65c0153c3fced35cf657b79e7 > scipy-1.2.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 7e6c485c3f2789c790fe4cc5c6c3b4554000d284fd05be90479b6fbd8795d064 > scipy-1.2.3-cp35-cp35m-manylinux1_i686.whl > 92fbdcf9acebf2007502adbed1b22e3a3e9445aceb5e7a568f9759d76461368b > scipy-1.2.3-cp35-cp35m-manylinux1_x86_64.whl > b2fad2430232f8c2faeb0abff7ed5191f0a534bd1bd482c71a9616c44e674c76 > scipy-1.2.3-cp35-cp35m-win32.whl > 8b3ac1e50188792fdf811e5a747df2b784c65d9a17f59609a73c8285424a48ad > scipy-1.2.3-cp35-cp35m-win_amd64.whl > b38f2e6d53f852ae1de1aacead2c26a69c91b36f833e744557ca370979b81652 > 
scipy-1.2.3-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 39bf06b2536e69f08a257b260aaa25d088d755b73cf3751b44e5838ccb1d0a82 > scipy-1.2.3-cp36-cp36m-manylinux1_i686.whl > c08013e0fe1554372da9312d5bad588402f71c0636f0f86a9b9b61d507c59bac > scipy-1.2.3-cp36-cp36m-manylinux1_x86_64.whl > cac83647970115dfb6d29dc3b4ab44b3aa11254e8aeba115f88ad0ccbb273085 > scipy-1.2.3-cp36-cp36m-win32.whl > 0c23e5b3a314dce4049b969c81ad801cf05e1fe699760846c7567deaa9b8c548 > scipy-1.2.3-cp36-cp36m-win_amd64.whl > 503e25b8da22b1be6f2f81e1dcf26f42bfb13fe89bccbf8bc48e1b6f2a4789c8 > scipy-1.2.3-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 19904399501ecf56ead21b307cc52c8ff03a2103343f554f3fbc189bb2cf1609 > scipy-1.2.3-cp37-cp37m-manylinux1_i686.whl > f658f90db1530f70128bbeb2c1e61006de2eb2382408080a211d885f501af3a7 > scipy-1.2.3-cp37-cp37m-manylinux1_x86_64.whl > 7aeb232273b38e74d89181d9165ca47314e1c4269afe7c3a1926aea3e63345a2 > scipy-1.2.3-cp37-cp37m-win32.whl > 53353a504ad2181eb27c9d61e91ce29324f8b7550876736a08a36e6b60c43407 > scipy-1.2.3-cp37-cp37m-win_amd64.whl > ecbe6413ca90b8e19f8475bfa303ac001e81b04ec600d17fa7f816271f7cca57 > scipy-1.2.3.tar.gz > 94ef2ac3c9c83cbbda0a5c5552fffa22993c1c21313c0eb7a2e7102b4629bc31 > scipy-1.2.3.tar.xz > a4f09f8c6f5924582fc61d3c5cd8174e8f8857a45dfa4a81c0d134b7c4af74cc > scipy-1.2.3.zip > -----BEGIN PGP SIGNATURE----- > > iQGzBAEBCAAdFiEEuinQzfUTSK0iykiguEZ+VI8xggYFAl4nNasACgkQuEZ+VI8x > ggYXiQwArWCb+6PjBb/vyjDO8KOjYmVCCNm8OxC1OvJ3Nvm+CMOY9z8nurfImpZ3 > 3mL/pl3mqmvMj7u7i30pOVXlfdVUaAoRKbmyjOnXDGTOTogbPEkkf6PPJOKgdXYo > 1ml+FmdAFPIyNJhJOMv9ME5rssZlLqpwxbwqKsRBnC3/eHLLXgJ2/kFL9dunPSDN > upfSpptbBbuQJpsA6+OxUB3DKZ3VsxbOG7+b+2dIiKfsRkt1N2nQi4g/HJqbQ85P > 8G7s4Twqo6DToUZdDcud/C+hsUsFkqRRh/0osLjuu99H3z4y0+F6rZSedwf5T4Wa > zFU5IFiU92T4pA5Y0EnZxJS4MnB7jaVeSe+FLf23q7Y6Hi1nC3s4/rLxqUhKzO0V > 
T02G+/hAyeFol2/6quwephzn3Vv+fiZ+hg7zEvDnnjP/TlmSwW/csLRTHXb1qu/z > FXxb/Xs1f+/jPbjwqb0gA7vpzs7OgodWZtML+0FOxPwjyNBxWQQe/pbuppjtszQq > 2iysS9UM > =E9q4 > -----END PGP SIGNATURE----- > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Tue Jan 21 16:56:29 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Wed, 22 Jan 2020 08:56:29 +1100 Subject: [SciPy-Dev] Github actions setup. Message-ID: I've recently setup github actions for a personal project. The setup is quite quick, the test suite runs in parallel and it seems to be fast. I'm wondering what peoples thoughts are for doing the same for scipy? The rationale is that there is a reasonably heavy load on travisCI (10 entries) and this might be a way of spreading the load somewhat. e.g. we could take off half the Linux entries. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christoph.baumgarten at gmail.com Wed Jan 22 15:01:43 2020 From: christoph.baumgarten at gmail.com (Christoph Baumgarten) Date: Wed, 22 Jan 2020 21:01:43 +0100 Subject: [SciPy-Dev] Generate random variates using Cython In-Reply-To: References: Message-ID: Hi, After upgrading to Numpy 1.18, I could indeed compile the code https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html and it gives a substantial speedup. I am still wondering how to integrate such an approach into SciPy. To generate uniform samples in the _rvs method of a continuous distribution i scipy.stats, I would write: self._random_state.random_sample If I want to use Cython, how can I ensure that it relies on the same random state? In the future, one needs to support RandomState and Generator (cross-ref to the recent PR https://github.com/scipy/scipy/pull/11402) Thanks for your help. Christoph P.S. 
Code used:

import numpy as np
cimport numpy as np
cimport cython
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from numpy.random cimport bitgen_t
from numpy.random import PCG64

@cython.boundscheck(False)
@cython.wraparound(False)
def rvs(Py_ssize_t n):
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double u, v
    cdef double[::1] random_values
    x = PCG64()
    capsule = x.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n, dtype='float64')
    with x.lock, nogil:
        for i in range(n):
            while (1):
                u = rng.next_double(rng.state)
                v = rng.next_double(rng.state)
                if u <= v*v:
                    break
            random_values[i] = v
    randoms = np.asarray(random_values)
    return randoms

On Thu, Jan 16, 2020 at 12:46 PM wrote: > Send SciPy-Dev mailing list submissions to > scipy-dev at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.python.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at python.org > > You can reach the person managing the list at > scipy-dev-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Generate random variates using Cython (Christoph Baumgarten) > 2. Re: Generate random variates using Cython (Matti Picus) > 3. Re: Missing whitespace after commas in pycodestyle check > (Joshua Wilson) > 4.
cuSignal: GPU accelerated version of SciPy Signal package > (Robert Lucente - Pipeline.Com) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 15 Jan 2020 20:14:07 +0100 > From: Christoph Baumgarten > To: scipy-dev at python.org > Subject: [SciPy-Dev] Generate random variates using Cython > Message-ID: > < > CABXY2qC6PLydMVi5on7x3tqn2qED5nBD19ACtKA81OYm_Y4GAA at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi, > > I recently looked into rejection algorithms to generate random variates > that rely on repeatedly generating uniform random variates until a > termination condition is satisfied. > A pure Python implementation is reasonably fast but much slower than a C > implementation (e.g. Poisson distribution in numpy). I tried to use Cython > to adapt my Python code (I just started to look into Cython, so it could be > that I overlook something very basic) but I did not succeed and I would > like to ask for help. > > The following code works that generates rvs on [0, 1] with density 3 x**2 > as an example (compiling it generates a warning though: Warning Msg: Using > deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API > NPY_1_7_API_VERSION) but I struggle to get rid of the Python interaction > np.random.rand() to speed it up. > > cimport numpy as np > from numpy cimport ndarray, float64_t > import numpy as np > def rvs(int N): > cdef double u, v > cdef int i > cdef np.ndarray[np.float64_t, ndim=1] x = np.empty((N, ), > dtype=np.float64) > for i in range(N): > while (1): > u = np.random.rand() > v = np.random.rand() > if u <= v*v: > break > x[i] = v > return x > > Following https://numpy.org/devdocs/reference/random/c-api.html (Cython > API > for random), I tried > cimport numpy.random to use random_standard_uniform. 
> However, I get an error: 'numpy\random.pxd' not found > > I get similar errors when trying to compile the Cython examples here: > https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html > > Cython version: 0.29.14 > Numpy 1.17.4 > python 3.7.1 > > Any guidance on how to access the uniform distribution in numpy using > Cython would be great. Thanks > > Christoph > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://mail.python.org/pipermail/scipy-dev/attachments/20200115/077cdcf8/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Thu, 16 Jan 2020 07:14:41 +1100 > From: Matti Picus > To: scipy-dev at python.org > Subject: Re: [SciPy-Dev] Generate random variates using Cython > Message-ID: <5a73b945-afd1-e685-6d32-8cd2ebd822e5 at gmail.com> > Content-Type: text/plain; charset=utf-8; format=flowed > > On 16/1/20 6:14 am, Christoph Baumgarten wrote: > > > > Hi, > > > > I recently looked into rejection algorithms to generate random > > variates that rely on repeatedly generating uniform random variates > > until a termination condition is satisfied. > > A pure Python implementation is reasonably fast but much slower than a > > C implementation (e.g. Poisson distribution in numpy). I tried to use > > Cython to adapt my Python code (I just started to look into Cython, so > > it could be that I overlook something very basic) but I did not > > succeed and I would like to ask for help. > > > > The following code works that generates rvs on [0, 1] with density 3 > > x**2 as an example (compiling it generates a warning though: Warning > > Msg: Using deprecated NumPy API, disable it with #define > > NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION) but I struggle to get rid > > of the Python interaction np.random.rand() to speed it up.
> >
> > cimport numpy as np
> > from numpy cimport ndarray, float64_t
> > import numpy as np
> > def rvs(int N):
> >     cdef double u, v
> >     cdef int i
> >     cdef np.ndarray[np.float64_t, ndim=1] x = np.empty((N, ), dtype=np.float64)
> >     for i in range(N):
> >         while (1):
> >             u = np.random.rand()
> >             v = np.random.rand()
> >             if u <= v*v:
> >                 break
> >         x[i] = v
> >     return x
> >
> > Following https://numpy.org/devdocs/reference/random/c-api.html > > (Cython API for random), I tried > > cimport numpy.random to use random_standard_uniform. > > However, I get an error: 'numpy\random.pxd' not found > > > > I get similar errors when trying to compile the Cython examples here: > > https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html > > > > Cython version: 0.29.14 > > Numpy 1.17.4 > > python 3.7.1 > > > > Any guidance on how to access the uniform distribution in numpy using > > Cython would be great. Thanks > > > > Christoph > > > > The new random c-api was broken in NumPy 1.17. Please try with NumPy 1.18. > > Matti > > > > ------------------------------ > > Message: 3 > Date: Wed, 15 Jan 2020 19:06:18 -0800 > From: Joshua Wilson > To: SciPy Developers List > Subject: Re: [SciPy-Dev] Missing whitespace after commas in > pycodestyle check > Message-ID: > 52M4xAkVA+hq9LhviywUEYnk2YB8y9Qg at mail.gmail.com> > Content-Type: text/plain; charset="UTF-8" > > Seems like there are no objections to adding the checks to just the > diffs; I'll set that up in upcoming weeks. Very much looking forward > to eliminating that aspect of code review. > > - Josh > > On Tue, Jan 14, 2020 at 2:47 AM Ralf Gommers > wrote: > > > > > > > > On Tue, Jan 14, 2020 at 12:15 AM Ilhan Polat > wrote: > >> > >> I actually got pretty quick at fixing these stuff by the aid of some > very mild nonaggressive scripts. Hence if we all agree I can go through all > repo and fix everything once and ask the current PR owners to rebase. > > > > > > You mean you want to fix up all PRs, right? Or all code in master?
If we > implement the diff checks I'm not sure the former is needed, but perhaps it > helps. If the latter, I don't think that's a good idea. > > > > Ralf > > > > > >> > >> Sounds like a chore to everyone but I guess biting the bullet once and > then turning on all the checks seems the least painful way. Otherwise > Ralf's suggestion is also a good idea. > >> > >> On Mon, Jan 13, 2020 at 11:10 PM Ralf Gommers > wrote: > >>> > >>> Hey Josh, > >>> > >>> > >>> On Mon, Jan 13, 2020 at 1:44 AM Joshua Wilson < > josh.craig.wilson at gmail.com> wrote: > >>>> > >>>> Hey everyone, > >>>> > >>>> During code review, it seems that we care about missing whitespace > >>>> after commas. (I have no data on this, but I suspect it is our most > >>>> common linting nit.) Linting for this is currently disabled. Now on > >>>> the one hand, we generally avoid turning on checks like this because > >>>> of the disruption to open PRs and what they do to git history. On the > >>>> other hand, I don't think enforcing this check by hand works. > >>>> Reminding an author to add spaces after commas can easily add several > >>>> rounds of extra review (some places almost invariably get missed on > >>>> the first try, and then the problem tends to sneak back in in later > >>>> commits), and I usually just give up after a while. So I don't think > >>>> we are heading in the direction of "in 5 years all the bad cases will > >>>> be gone and we can turn on the check without disruption". > >>> > >>> > >>> I agree, the current workflow is not optimal. > >>> > >>>> > >>>> So I would kind of like to push us to either: > >>>> > >>>> 1. Turn the lint check on. We can do this module-by-module or even > >>>> file-by-file if necessary to minimize disruption. > >>>> 2. Declare that this is not something we care about in code review. > >>> > >>> > >>> Another option would be to run pycodestyle/pyflakes with all checks we > want enabled, but only on the diff in the PR. This is what PyTorch does for > example. 
The nice thing is you can test what you want without having to > deal with code churn before enabling a check. The downside is that it's a > bit convoluted to reproduce the "run on the diff to master" command locally > (but we could add a script to streamline that probably). > >>> > >>> Cheers, > >>> Ralf > >>> > >>> _______________________________________________ > >>> SciPy-Dev mailing list > >>> SciPy-Dev at python.org > >>> https://mail.python.org/mailman/listinfo/scipy-dev > >> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at python.org > >> https://mail.python.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > Message: 4 > Date: Thu, 16 Jan 2020 06:45:05 -0500 > From: "Robert Lucente - Pipeline.Com" > To: "'SciPy Developers List'" > Cc: "Robert Gmail Backup 2 Lucente" > > Subject: [SciPy-Dev] cuSignal: GPU accelerated version of SciPy Signal > package > Message-ID: <038f01d5cc62$6572a090$3057e1b0$@pipeline.com> > Content-Type: text/plain; charset="us-ascii" > > https://github.com/rapidsai/cusignal > > I was wondering what people thought of cuSignal? > > If the topic is not appropriate for this mailing list, please consider > replying directly to me. > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 195, Issue 13 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andyfaff at gmail.com Wed Jan 22 16:09:09 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 23 Jan 2020 08:09:09 +1100 Subject: [SciPy-Dev] Generate random variates using Cython In-Reply-To: References: Message-ID: > > After upgrading to Numpy 1.18, I could indeed compile the code > https://docs.scipy.org/doc/numpy-1.17.0/reference/random/extending.html > and it gives a substantial speedup. > I am still wondering how to integrate such an approach into SciPy. To > generate uniform samples in the _rvs method of a continuous distribution i > scipy.stats, I would write: > > self._random_state.random_sample > `np.random.Generator` doesn't have a `random_sample` method. But it does have a `uniform` method. So if `_random_state` is either a `RandomState` or a `Generator` instance you should use the `uniform` method for it to work across both. If I want to use Cython, how can I ensure that it relies on the same random > state? > `Generator` and `RandomState` will produce different streams of random numbers from the same seed. In the PR linked to issue #11402 I'm trying to ensure that the `rvs` methods will work with both `Generator` and `RandomState`. The latter is current default, and one would want to use that to retain back compatibility of random numbers. If you're trying to use Cython to speedup a particular rvs method you'll need to have two branches. One branch is given a RandomState instance, the other a Generator instance. 
> import numpy as np
> cimport numpy as np
> cimport cython
> from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
> from numpy.random cimport bitgen_t
> from numpy.random import PCG64
>
> @cython.boundscheck(False)
> @cython.wraparound(False)
> def rvs(Py_ssize_t n):
>     cdef Py_ssize_t i
>     cdef bitgen_t *rng
>     cdef const char *capsule_name = "BitGenerator"
>     cdef double u, v
>     cdef double[::1] random_values
>     x = PCG64()
>     capsule = x.capsule
>     # Optional check that the capsule is from a BitGenerator
>     if not PyCapsule_IsValid(capsule, capsule_name):
>         raise ValueError("Invalid pointer to anon_func_state")
>     # Cast the pointer
>     rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
>     random_values = np.empty(n, dtype='float64')
>     with x.lock, nogil:
>         for i in range(n):
>             while (1):
>                 u = rng.next_double(rng.state)
>                 v = rng.next_double(rng.state)
>                 if u <= v*v:
>                     break
>             random_values[i] = v
>     randoms = np.asarray(random_values)
>     return randoms
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Wed Jan 22 21:58:31 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 23 Jan 2020 13:58:31 +1100 Subject: [SciPy-Dev] Generate random variates using Cython In-Reply-To: References: Message-ID: Christoph, this might give you a headstart.

%%cython
import numpy as np
cimport numpy as np
cimport cython
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from numpy.random cimport bitgen_t
from numpy.random import PCG64

@cython.boundscheck(False)
@cython.wraparound(False)
def rvs(rand_state, Py_ssize_t n):
    """
    Create an array of `n` uniformly distributed doubles.
    A 'real' distribution would want to process the values into
    some non-uniform distribution
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double[::1] random_values

    if isinstance(rand_state, np.random.RandomState):
        randoms = rand_state.uniform(size=n)
    elif isinstance(rand_state, np.random.Generator):
        x = rand_state.bit_generator
        capsule = x.capsule
        # Optional check that the capsule is from a BitGenerator
        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        # Cast the pointer
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
        random_values = np.empty(n, dtype='float64')
        with x.lock, nogil:
            for i in range(n):
                # Call the function
                random_values[i] = rng.next_double(rng.state)
        randoms = np.asarray(random_values)
    else:
        raise RuntimeError("rand_state wasn't a RandomState or Generator")
    return randoms

-------------- next part -------------- An HTML attachment was scrubbed... URL: From matteoravasi at gmail.com Fri Jan 24 12:54:23 2020 From: matteoravasi at gmail.com (matteo ravasi) Date: Fri, 24 Jan 2020 18:54:23 +0100 Subject: [SciPy-Dev] Implementation choices in scipy.sparse.linalg.LinearOperator In-Reply-To: <0464AE5D-EF1D-4B14-A9FA-DCD3497156FA@gmail.com> References: <0464AE5D-EF1D-4B14-A9FA-DCD3497156FA@gmail.com> Message-ID: <382E3555-B904-494D-9E7C-9A3339B54987@gmail.com> Dear All, just trying to revive this as I didn't get any reply. I can see that there is more interest in the community with respect to ad-hoc matrix-vector multiplications (aka Linear operators) - see also the discussion in Re: [SciPy-Dev] Efficient Toeplitz Matrix-Vector Multiplication and related PR. Hopefully some of the core developers or anyone involved in this area can comment here ;) Thank you!
MR > On 8 Jan 2020, at 20:31, matteo ravasi wrote: > > Dear Scipy Developers, > some of you may be aware of the PyLops project (https://github.com/equinor/pylops ), which heavily builds on the scipy.sparse.linalg.LinearOperator class of scipy and provides an extensive library of general purpose and domain specific linear operators. > > Recently a user asked for a functionality that would create the equivalent dense matrix representation of an operator (to be mostly used for teaching purposes). While implementing it, we came across two internal behaviours in the interface.py file that we would like to mention and discuss: > > - The methods _matmat and _rmatmat as currently implemented require operators to work on (N, 1)/(M, 1) arrays. Whilst the first sentence in the Notes of LinearOperator states "The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case", I tend to believe that the first option should be the encouraged one, as I do not really see any advantage with carrying over trailing dimensions in this case. Moreover, some of the linear solvers in scipy that can be used together with linear operators, such as lsqr or lsmr, require the RHS b vector to have a single dimension (b : (m,) ndarray). Again, although a quick test showed that they work also when providing a b vector of shape (m, 1), this shows to me that the extra trailing dimension is just a nuisance and should be discouraged. Assuming that you agree with this argument, an alternative solution that works on (N, ) input is actually possible and currently implemented in PyLops for _matmat (see here https://github.com/equinor/pylops/blob/master/pylops/LinearOperator.py ). This avoids having implementations that work for both (N, ) and (N, 1), which sometimes requires adding additional checks and squeezes within the implementation of _matvec and _rmatvec in situations where trailing dimensions in the input vector are not really bringing any benefit. > > - The above mentioned new feature has currently been added as a method to the PyLops LinearOperator class (and named todense in a similar fashion to the scipy.sparse.xxx_matrix.todense methods). Doing so, we realized that every time two operators are summed/subtracted/multiplied, etc. (pretty much whenever one of the operator overloads of scipy.sparse.linalg.LinearOperator is invoked), the returned object is not of LinearOperator type anymore but of the type of the private class in scipy.sparse.linalg.interface used to perform the requested operation (e.g., _SumLinearOperator). I wonder if returning an object of a 'private' type is actually something recommended to do in scipy (or even if this happens anywhere else in scipy). Again, in PyLops we need to overload the __add__ method (and similar methods) and make it into a thin wrapper of the scipy equivalent, returning an object of our parent class LinearOperator. This is to ensure that the returned object would maintain the additional properties and methods of our LinearOperator class, but also to avoid 'leaking' any of the private classes to the end users. I wonder if also the scipy API should perhaps avoid returning to end users objects of some private classes. > > This email is not intended to criticise any of the current behaviours in scipy.sparse.linalg.LinearOperator, rather to gain an understanding of some of the decisions made by the core developers of the scipy.sparse.linalg.interface submodule, which may all have their own right to be the way they are today :) > > Looking forward to hearing from you. > > Best wishes, > MR -------------- next part -------------- An HTML attachment was scrubbed...
URL: From melissawm at gmail.com Fri Jan 24 15:24:57 2020 From: melissawm at gmail.com (=?UTF-8?Q?Melissa_Mendon=C3=A7a?=) Date: Fri, 24 Jan 2020 17:24:57 -0300 Subject: [SciPy-Dev] SciPy 2020 Call for Submissions is open! In-Reply-To: References: Message-ID: Apologies for crossposting. ===== SciPy 2020, the 19th annual Scientific Computing with Python conference, will be held July 6-12, 2020 in Austin, Texas. The annual SciPy Conference brings together over 900 participants from industry, academia, and government to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The call for SciPy 2020 talks, posters, and tutorials is now open through February 11, 2020. Talks and Posters (July 8-10, 2020) In addition to the general track, this year will have the following specialized tracks, mini symposia, and sessions: Tracks High Performance Python Machine Learning and Data Science Mini Symposia Astronomy and Astrophysics Biology and Bioinformatics Earth, Ocean, Geo and Atmospheric Science Materials Science Special Sessions Maintainers Track SciPy Tools Plenary Session For additional details and instructions, please see the conference website. Tutorials (July 6-7, 2020) Tutorials should be focused on covering a well-defined topic in a hands-on manner. We are looking for awesome techniques or packages, helping new or advanced Python programmers develop better or faster scientific applications. We encourage submissions to be designed to allow at least 50% of the time for hands-on exercises even if this means the subject matter needs to be limited. Tutorials will be 4 hours in duration. In your tutorial application, you can indicate what prerequisite skills and knowledge will be needed for your tutorial, and the approximate expected level of knowledge of your students (i.e., beginner, intermediate, advanced). Instructors of accepted tutorials will receive a stipend. 
For examples of content and format, you can refer to tutorials from past SciPy tutorial sessions (SciPy 2018 , SciPy2019 ) and some accepted submissions . For additional details and instructions see the conference website . Submission page: https://easychair.org/conferences/?conf=scipy2020 Submission Deadline: February 11, 2020 - Melissa Weber Mendon?a Diversity Committee Co-Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From melissawm at gmail.com Wed Jan 29 07:47:47 2020 From: melissawm at gmail.com (=?UTF-8?Q?Melissa_Mendon=C3=A7a?=) Date: Wed, 29 Jan 2020 09:47:47 -0300 Subject: [SciPy-Dev] SciPy 2020 Call for Submissions is open! In-Reply-To: References: Message-ID: Apologies for crossposting. ===== SciPy 2020, the 19th annual Scientific Computing with Python conference, will be held July 6-12, 2020 in Austin, Texas. The annual SciPy Conference brings together over 900 participants from industry, academia, and government to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The call for SciPy 2020 talks, posters, and tutorials is now open through February 11, 2020. Talks and Posters (July 8-10, 2020) In addition to the general track, this year will have the following specialized tracks, mini symposia, and sessions: Tracks High Performance Python Machine Learning and Data Science Mini Symposia Astronomy and Astrophysics Biology and Bioinformatics Earth, Ocean, Geo and Atmospheric Science Materials Science Special Sessions Maintainers Track SciPy Tools Plenary Session For additional details and instructions, please see the conference website. Tutorials (July 6-7, 2020) Tutorials should be focused on covering a well-defined topic in a hands-on manner. We are looking for awesome techniques or packages, helping new or advanced Python programmers develop better or faster scientific applications. 
We encourage submissions to be designed to allow at least 50% of the time for hands-on exercises, even if this means the subject matter needs to be limited. Tutorials will be 4 hours in duration.

In your tutorial application, you can indicate what prerequisite skills and knowledge will be needed for your tutorial, and the approximate expected level of knowledge of your students (i.e., beginner, intermediate, advanced). Instructors of accepted tutorials will receive a stipend.

For examples of content and format, you can refer to tutorials from past SciPy tutorial sessions (SciPy 2018, SciPy 2019) and some accepted submissions. For additional details and instructions see the conference website.

Submission page: https://easychair.org/conferences/?conf=scipy2020
Submission Deadline: February 11, 2020

- Melissa Weber Mendonça
  Diversity Committee Co-Chair
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com  Wed Jan 29 09:33:17 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 29 Jan 2020 15:33:17 +0100
Subject: [SciPy-Dev] Github actions setup.
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jan 21, 2020 at 10:56 PM Andrew Nelson wrote:

> I've recently set up github actions for a personal project. The setup is
> quite quick, the test suite runs in parallel and it seems to be fast. I'm
> wondering what people's thoughts are for doing the same for scipy?
> The rationale is that there is a reasonably heavy load on travisCI (10
> entries) and this might be a way of spreading the load somewhat, e.g. we
> could take off half the Linux entries.

That sounds fine to me. I'm not sure it will decrease the TravisCI turnaround time, because that's probably determined by ppc64le and macos. But it probably can't hurt.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
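
[Editor's note: a minimal GitHub Actions workflow of the sort Andrew describes might look like the sketch below. The file path, job matrix, and step list are illustrative only, not SciPy's actual CI configuration; `runtests.py` is SciPy's in-tree test runner.]

```yaml
# .github/workflows/tests.yml -- illustrative sketch, not SciPy's real config
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Hypothetical version matrix for demonstration
        python-version: ["3.7", "3.8"]
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install numpy cython pytest
      - run: python runtests.py
```

Jobs in the matrix run in parallel, which is where the turnaround-time benefit over a serial Travis queue would come from.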
URL: 

From stefanv at berkeley.edu  Thu Jan 30 01:42:59 2020
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 29 Jan 2020 22:42:59 -0800
Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check
In-Reply-To: 
References: 
Message-ID: <20200130064259.rhszdrapkglva7ry@aurelius.localdomain>

On Tue, 14 Jan 2020 18:03:46 +0800, Kai Striega wrote:
> +1 for Ralf's idea to run pycodestyle/pyflakes with all checks we want
> enabled, but only on the diff in the PR. Cleaning would also be quite
> important; I imagine quite a few potential contributors would be put off by
> having to fix *existing* PEP8 violations, so Ilhan's scripts would also be
> useful.

We have been using https://pep8speaks.com/, a GitHub integration. Works pretty well for reporting PEP transgressions. Here is an example comment generated:

https://github.com/cesium-ml/baselayer/pull/71#issuecomment-541254369

Another approach is the one taken by Napari, which is to automatically run Black to format every contribution. The only disadvantage is that you need developers to install a pre-commit hook which, in their case, they do using the pre-commit package:

    pre-commit install

(see https://github.com/napari/napari/blob/master/docs/CONTRIBUTING.md)

Best regards,
Stéfan

From ralf.gommers at gmail.com  Thu Jan 30 03:37:54 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 30 Jan 2020 09:37:54 +0100
Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check
In-Reply-To: <20200130064259.rhszdrapkglva7ry@aurelius.localdomain>
References: <20200130064259.rhszdrapkglva7ry@aurelius.localdomain>
Message-ID: 

On Thu, Jan 30, 2020 at 7:42 AM Stefan van der Walt wrote:

> On Tue, 14 Jan 2020 18:03:46 +0800, Kai Striega wrote:
> > +1 for Ralf's idea to run pycodestyle/pyflakes with all checks we want
> > enabled, but only on the diff in the PR.
> > Cleaning would also be quite
> > important; I imagine quite a few potential contributors would be put off
> > by having to fix *existing* PEP8 violations, so Ilhan's scripts would
> > also be useful.
>
> We have been using https://pep8speaks.com/, a GitHub integration.
> Works pretty well for reporting PEP transgressions.

That looks interesting, they have a simple setting for checking only the diff.

> Here is an example comment generated:
>
> https://github.com/cesium-ml/baselayer/pull/71#issuecomment-541254369
>
> Another approach is the one taken by Napari, which is to automatically
> run Black to format every contribution. The only disadvantage is that
> you need developers to install a pre-commit hook which, in their case,
> they do using the pre-commit package:

I think that's a non-starter, since it obliterates history very thoroughly on introduction. The pre-commit thing is also too intrusive; it tripped me up multiple times with Dask, and I'm sure it will lead to lots of confusion for new contributors.

Cheers,
Ralf

>     pre-commit install
>
> (see https://github.com/napari/napari/blob/master/docs/CONTRIBUTING.md)
>
> Best regards,
> Stéfan
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josh.craig.wilson at gmail.com  Thu Jan 30 10:30:09 2020
From: josh.craig.wilson at gmail.com (Joshua Wilson)
Date: Thu, 30 Jan 2020 07:30:09 -0800
Subject: [SciPy-Dev] Missing whitespace after commas in pycodestyle check
In-Reply-To: 
References: <20200130064259.rhszdrapkglva7ry@aurelius.localdomain>
Message-ID: 

> they have a simple setting for checking only the diff

A nice thing is that pycodestyle also has a diff setting, so writing our own check is (or at least appears to be) pretty simple:

https://github.com/scipy/scipy/pull/11423/files#diff-1350e784331064804377929917ce5e09R66

Because of that I'd prefer to write our own script; it makes it easier to integrate into `runtests.py` and doesn't complicate our workflow any further.

From shashaank.n at columbia.edu  Thu Jan 30 17:38:59 2020
From: shashaank.n at columbia.edu (Shashaank Narayanan)
Date: Thu, 30 Jan 2020 17:38:59 -0500
Subject: [SciPy-Dev] SciPy Contribution: Gammatone Filters in scipy.signal
Message-ID: 

Hello SciPy Team,

I am new to this mailing list, and I am interested in contributing to SciPy. I would like to suggest a new feature for the scipy.signal module: gammatone filters. Gammatone filters are becoming increasingly popular in digital signal processing and music analysis because they effectively model the auditory filters of the human auditory system. Currently, there are very few implementations of gammatone filters available for Python, and none of them is generalized to the basic finite impulse response (FIR) and infinite impulse response (IIR) filter designs that SciPy provides.

I have written my own gammatone FIR filter using NumPy, based on Malcolm Slaney's 1993 paper on the topic (https://engineering.purdue.edu/~malcolm/apple/tr35/PattersonsEar.pdf), which was also used for MATLAB's implementation of gammatone filters. I am in the process of writing a gammatone IIR filter with NumPy and SciPy.

Please let me know if this feature will fit with the scipy.signal module.
Appreciate your time and guidance.

Thanks,
Shashaank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
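
[Editor's note: for readers unfamiliar with the filter Shashaank describes, the sketch below samples the standard gammatone impulse response with the Patterson/Slaney parameterization (order 4, bandwidth 1.019 * ERB). The function name and defaults are illustrative; this is not the proposed scipy.signal implementation.]

```python
import numpy as np

def gammatone_ir(fc, fs, n=4, duration=0.025):
    """Sampled impulse response of a gammatone filter centered at fc Hz.

    g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t), with
    b = 1.019 * ERB(fc) and ERB(f) = 24.7 * (4.37*f/1000 + 1).
    """
    t = np.arange(int(duration * fs)) / fs          # sample times in seconds
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)         # equivalent rectangular bandwidth
    b = 1.019 * erb                                 # envelope decay rate
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))                    # normalize peak to 1

# Example: 25 ms of a 1 kHz gammatone at a 16 kHz sample rate
h = gammatone_ir(1000.0, 16000)
```

Such a sampled impulse response can be used directly as FIR taps (e.g. with `scipy.signal.lfilter(h, 1.0, x)`), which is essentially the FIR approach the paper describes; the IIR variant instead approximates the same response with a low-order recursive filter.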