From ralf.gommers at gmail.com Mon Mar 1 04:24:48 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 1 Mar 2021 10:24:48 +0100 Subject: [SciPy-Dev] Faster maximum flow algorithm for scipy.sparse.csgraph In-Reply-To: References: Message-ID: On Thu, Feb 18, 2021 at 2:12 PM Touqir Sajed wrote: > Dear Scipy developers, > > This is a continuation of https://github.com/scipy/scipy/issues/13402 . > The current implementation of scipy.sparse.csgraph uses Edmond's Karp > algorithm which is not quite good in terms of theoretical time complexity > but despite of this, the implementation is optimized enough to outperform > several other superior (theoretical complexity) algorithms as shown here : > https://github.com/scipy/scipy/pull/10566#issuecomment-552615594 . Later > I carried out benchmarks ( > https://github.com/scipy/scipy/issues/13402#issuecomment-767909167 ) > showing that indeed scipy's Edmond-Karp implementation can be significantly > beaten with optimized implementations. My original concern was > Edmond-Karp's theoretical complexity which limits its performance in some > cases (highly dense graphs). So, having another algorithm in scipy with a > better theoretical complexity along with proven superior empirical > performance makes sense. Only the algorithms here : > https://github.com/touqir14/MaxFlow have been shown to significantly > outperform scipy's Edmond-Karp. I think it would be good to port one or > several of these implementations into scipy. Having solely cython ports > will probably be easier to maintain. One thing to ponder here is how much > of a complex implementation should we allow if we decide to add new max > flow algorithms to scipy. > > Let me know your thoughts. > Thanks for asking Touqir, and apologies for the slow reply. Your benchmarks look great, so I'm +1 on adding at least one new algorithm. I replied on the issue in more detail. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Tue Mar 2 09:13:31 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Tue, 2 Mar 2021 15:13:31 +0100 Subject: [SciPy-Dev] Move some validations as asserts? Message-ID: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Hi everyone, TL;DR: could we use assert to do validation? In a recent issue (https://github.com/scipy/scipy/issues/13629), the cost of validation was raised in context such as optimization. In situations like these, or even when you know how to use the API, there are lots of unnecessary validations happening. I would propose a general reflection around validation. Some ideas: A simple approach would be to systematically separate the logic in two: core function on one side, user interface on the other. This way the user could by-pass all the time the validation. Mock the validations. After some quick tests, it looks like the overhead is too big. I could be wrong. Using assert for all validation. The advantage of the assert is that validation would be cleanly identified in the code (starlette is full of these for instance). This would be an advanced feature as most users don?t know about asserts being removed if you launch python in optimized mode. I am interested in reading your thoughts. If one solution would be chosen, I could work on it too. Cheers, Pamphile -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From evgeny.burovskiy at gmail.com Tue Mar 2 16:56:25 2021 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Wed, 3 Mar 2021 00:56:25 +0300 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: > TL;DR: could we use assert to do validation? > > In a recent issue (https://github.com/scipy/scipy/issues/13629), the cost of validation was raised in context such as optimization. > In situations like these, or even when you know how to use the API, there are lots of unnecessary validations happening. > > I would propose a general reflection around validation. > > Some ideas: > > A simple approach would be to systematically separate the logic in two: core function on one side, user interface on the other. This way the user could by-pass all the time the validation. > Mock the validations. After some quick tests, it looks like the overhead is too big. I could be wrong. > Using assert for all validation. The first option sounds best to me. My 2c, Evgeni From ralf.gommers at gmail.com Wed Mar 3 08:14:48 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 3 Mar 2021 14:14:48 +0100 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: On Tue, Mar 2, 2021 at 10:56 PM Evgeni Burovski wrote: > > TL;DR: could we use assert to do validation? > > > > In a recent issue (https://github.com/scipy/scipy/issues/13629), the > cost of validation was raised in context such as optimization. > > In situations like these, or even when you know how to use the API, > there are lots of unnecessary validations happening. > > > > I would propose a general reflection around validation. > > > > Some ideas: > > > > A simple approach would be to systematically separate the logic in two: > core function on one side, user interface on the other. This way the user > could by-pass all the time the validation. > > Mock the validations. After some quick tests, it looks like the overhead > is too big. I could be wrong. > I'm not sure what this means exactly. > Using assert for all validation. > One obvious issue is that `python -OO` will remove all asserts, and that's not what we want (we've had lots of issues with popular web deployment tools running with -OO by default). Hence the rule has always been "no plain asserts". Also, error messages for plain asserts are bad usually. > The first option sounds best to me. > Probably - if we do something, that's likely the best direction. The question though is how to provide a sensible UX for it, that's nontrivial. Splitting the whole API is a lot of work and new API surface. There may be alternatives, such as new keywords or fancier approaches (e.g., cython has a uniform way of turning off bounds checking and negative indexing with `cython.boundscheck`, `cython.wraparound`). We probably need to look at all the types of validation (array-like input, nan/inf checks, parameter bounds, parameter type checks, etc.) and figure out how we'd like new code to look. For example, I was toying with the idea of removing `array_like` input from stats.qmc before we release it, and just accept plain ndarrays (has other benefits, but removes overhead too). 
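For concreteness, a minimal sketch of that first option (a thin validating wrapper in front of a no-checks workhorse) could look like the following; the `rosen`/`_rosen` names are only illustrative, not a proposed SciPy API:

import numpy as np

def rosen(x):
    # public entry point: validate, then delegate to the workhorse
    x = np.asarray(x, dtype=float)
    if x.ndim != 1:
        raise ValueError("x must be a 1-D array_like of real numbers")
    if not np.all(np.isfinite(x)):
        raise ValueError("x must not contain nan or inf")
    return _rosen(x)

def _rosen(x):
    # private workhorse: assumes a finite 1-D float ndarray, performs no checks
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

A caller that has already sanitized its inputs (for example inside an optimization loop) could then call the underscored routine directly and skip the validation overhead.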
Cheers, Ralf > My 2c, > > Evgeni > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Wed Mar 3 10:29:33 2021 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Wed, 3 Mar 2021 18:29:33 +0300 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: >> > A simple approach would be to systematically separate the logic in two: core function on one side, user interface on the other. This way the user could by-pass all the time the validation. > Probably - if we do something, that's likely the best direction. The question though is how to provide a sensible UX for it, that's nontrivial. Splitting the whole API is a lot of work and new API surface. There may be alternatives, such as new keywords or fancier approaches (e.g., cython has a uniform way of turning off bounds checking and negative indexing with `cython.boundscheck`, `cython.wraparound`). FWIW, I don't think we can or should do it uniformly for the whole API surface, it's best done on a case by case basis. One way (maybe not the best one) is PPoly.construct_fast, which is documented as "Takes the same parameters as the constructor. Input arguments c and x must be arrays of the correct shape and type". In general, I suspect the sweet spot is structuring computational routines into validation then an underscored workhorse routine. Roughly like def func(*inputs): ... validate, raise errors as necessary ... result = _func(*sanitazed_inputs) return result Note the leading underscore: we do not guarantee the stability of the underscored routine, but if a user knows what they're doing, fine, they can use it directly. But we do not document _func routines. Typically the alternative is that a user simply copies parts of our code into their project --- either way it's their responsibility, we're only making it slightly easier without taking an undue burden on us library maintainers. Also note that boundschecking etc is best switched off at the level of the _func routine. > We probably need to look at all the types of validation (array-like input, nan/inf checks, parameter bounds, parameter type checks, etc.) and figure out how we'd like new code to look. For example, I was toying with the idea of removing `array_like` input from stats.qmc before we release it, and just accept plain ndarrays (has other benefits, but removes overhead too). Note that once there's more than several types of validation, having separate keywords for them is probably a no-go. Cheers, Evgeni From roy.pamphile at gmail.com Wed Mar 3 10:43:06 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Wed, 3 Mar 2021 16:43:06 +0100 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: > > > > A simple approach would be to systematically separate the logic in two: core function on one side, user interface on the other. This way the user could by-pass all the time the validation. > > Mock the validations. After some quick tests, it looks like the overhead is too big. I could be wrong. > > I'm not sure what this means exactly. I just did some quick tests like the following. (I just wanted to see how the API could look like.) 
import time from unittest.mock import patch import numpy as np from scipy.spatial import distance coords = np.random.random((100, 2)) weights = [1, 2] itime = time.time() for _ in range(50): distance.cdist(coords, coords, metric='sqeuclidean', w=weights) print(f"Time: {time.time() - itime}") with patch('scipy.spatial.distance._validate_vector') as mock_requests: mock_requests.side_effect = lambda x, *args, **kwargs: np.asarray(x) itime = time.time() for _ in range(50): distance.cdist(coords, coords, 'sqeuclidean', w=weights) print(f"Time: {time.time() - itime}") > > Using assert for all validation. > > One obvious issue is that `python -OO` will remove all asserts, and that's not what we want (we've had lots of issues with popular web deployment tools running with -OO by default). Hence the rule has always been "no plain asserts". > > Also, error messages for plain asserts are bad usually. Oh ok then it?s off the table. > > The first option sounds best to me. > > Probably - if we do something, that's likely the best direction. The question though is how to provide a sensible UX for it, that's nontrivial. Splitting the whole API is a lot of work and new API surface. There may be alternatives, such as new keywords or fancier approaches (e.g., cython has a uniform way of turning off bounds checking and negative indexing with `cython.boundscheck`, `cython.wraparound`). What I had in mind was more in line with what Evgini explains. We would not change the public API, just within the functions, separate the validation from the actual work. The decorator thing in Cython would probably be what I had in mind with the mocking. If we would manage to do something like this fast, this could be a great solution IMO. Cheers, Pamphile -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Wed Mar 3 12:29:57 2021 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Wed, 3 Mar 2021 18:29:57 +0100 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: > We probably need to look at all the types of validation (array-like input, nan/inf checks, parameter bounds, parameter type checks, etc.) and figure out how we'd like new code to look. For example, I was toying with the idea of removing `array_like` input from stats.qmc before we release it, and just accept plain ndarrays (has other benefits, but removes overhead too). I am slowly coming to the same conclusion that array_like is not serving the convenience it promises other than the trivial `[1, 2, 3]` type of array inputs. And we are losing much of the performance in these checks especially when the functionality is an order of magnitude faster than these checks. Instead of checking the ndim and be done with it we are doing lots of things for the array_like edge cases. Not sure if we can get rid of it due to decade-usage but at least we can keep that in mind while allowing for array_like. On Wed, Mar 3, 2021 at 4:44 PM Pamphile Roy wrote: > > > >> > A simple approach would be to systematically separate the logic in two: >> core function on one side, user interface on the other. This way the user >> could by-pass all the time the validation. >> > Mock the validations. After some quick tests, it looks like the >> overhead is too big. I could be wrong. >> > > I'm not sure what this means exactly. > > > I just did some quick tests like the following. (I just wanted to see how > the API could look like.) 
> > import time > from unittest.mock import patch > import numpy as np > from scipy.spatial import distance > > coords = np.random.random((100, 2)) > weights = [1, 2] > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, metric='sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > > with patch('scipy.spatial.distance._validate_vector') as mock_requests: > mock_requests.side_effect = lambda x, *args, **kwargs: np.asarray(x) > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, 'sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > > > > Using assert for all validation. >> > > One obvious issue is that `python -OO` will remove all asserts, and that's > not what we want (we've had lots of issues with popular web deployment > tools running with -OO by default). Hence the rule has always been "no > plain asserts". > > Also, error messages for plain asserts are bad usually. > > > Oh ok then it?s off the table. > > >> The first option sounds best to me. >> > > Probably - if we do something, that's likely the best direction. The > question though is how to provide a sensible UX for it, that's nontrivial. > Splitting the whole API is a lot of work and new API surface. There may be > alternatives, such as new keywords or fancier approaches (e.g., cython has > a uniform way of turning off bounds checking and negative indexing with > `cython.boundscheck`, `cython.wraparound`). > > > What I had in mind was more in line with what Evgini explains. We would > not change the public API, just within the functions, separate the > validation from the actual work. > > The decorator thing in Cython would probably be what I had in mind with > the mocking. If we would manage to do something like this fast, this could > be a great solution IMO. > > > Cheers, > Pamphile > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 6 15:28:03 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 6 Mar 2021 21:28:03 +0100 Subject: [SciPy-Dev] Move some validations as asserts? In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: On Wed, Mar 3, 2021 at 4:43 PM Pamphile Roy wrote: > > > >> > A simple approach would be to systematically separate the logic in two: >> core function on one side, user interface on the other. This way the user >> could by-pass all the time the validation. >> > Mock the validations. After some quick tests, it looks like the >> overhead is too big. I could be wrong. >> > > I'm not sure what this means exactly. > > > I just did some quick tests like the following. (I just wanted to see how > the API could look like.) > > import time > from unittest.mock import patch > import numpy as np > from scipy.spatial import distance > > coords = np.random.random((100, 2)) > weights = [1, 2] > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, metric='sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > > with patch('scipy.spatial.distance._validate_vector') as mock_requests: > mock_requests.side_effect = lambda x, *args, **kwargs: np.asarray(x) > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, 'sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > > > > Using assert for all validation. 
>> > > One obvious issue is that `python -OO` will remove all asserts, and that's > not what we want (we've had lots of issues with popular web deployment > tools running with -OO by default). Hence the rule has always been "no > plain asserts". > > Also, error messages for plain asserts are bad usually. > > > Oh ok then it?s off the table. > > >> The first option sounds best to me. >> > > Probably - if we do something, that's likely the best direction. The > question though is how to provide a sensible UX for it, that's nontrivial. > Splitting the whole API is a lot of work and new API surface. There may be > alternatives, such as new keywords or fancier approaches (e.g., cython has > a uniform way of turning off bounds checking and negative indexing with > `cython.boundscheck`, `cython.wraparound`). > > > What I had in mind was more in line with what Evgini explains. We would > not change the public API, just within the functions, separate the > validation from the actual work. > That seems fine to me, and easy to do in most cases. We typically need that pattern anyway when we move code to Cython/Pythran. > The decorator thing in Cython would probably be what I had in mind with > the mocking. If we would manage to do something like this fast, this could > be a great solution IMO. > This one would not be fast I'm afraid. Cheers, Ralf > > Cheers, > Pamphile > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From witt.chuck at gmail.com Mon Mar 8 05:19:06 2021 From: witt.chuck at gmail.com (Chuck Witt) Date: Mon, 8 Mar 2021 10:19:06 +0000 Subject: [SciPy-Dev] Real spherical harmonics Message-ID: Dear SciPy Devs, I am thinking of contributing a real spherical harmonic function (scipy.special.real_sph_harm) to complement the existing spherical harmonic routine (scipy.special.sph_harm). The real spherical harmonics span the same space as traditional spherical harmonics but can be more useful in some cases, particularly when the function to be expanded is purely real. They are simple to compute from the traditional spherical harmonics: https://en.wikipedia.org/wiki/Spherical_harmonics#Real_form. Would this contribution be of interest? Thanks for all the work you do. Best wishes, Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Mon Mar 8 12:26:47 2021 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 08 Mar 2021 09:26:47 -0800 Subject: [SciPy-Dev] Real spherical harmonics In-Reply-To: References: Message-ID: Hi Chuck, On Mon, Mar 8, 2021, at 02:19, Chuck Witt wrote: > I am thinking of contributing a real spherical harmonic function (scipy.special.real_sph_harm) to complement the existing spherical harmonic routine (scipy.special.sph_harm). > > The real spherical harmonics span the same space as traditional spherical harmonics but can be more useful in some cases, particularly when the function to be expanded is purely real. They are simple to compute from the traditional spherical harmonics: > > https://en.wikipedia.org/wiki/Spherical_harmonics#Real_form. > > Would this contribution be of interest? I think those would be useful, as evidenced by their use in DiPy [0]. Perhaps it's worth checkout out what else DiPy currently implements that would be applicable to a wider audience. 
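For reference, the real form can be assembled from the existing complex routine along the lines of the Wikipedia convention linked above. This is only a rough sketch; the `real_sph_harm` name and the exact sign/normalization choices (e.g. the Condon-Shortley phase) are assumptions, not a settled API:

import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, n, theta, phi):
    # scalar order m and degree n; theta azimuthal, phi polar (sph_harm convention)
    if m > 0:
        return np.sqrt(2.0) * (-1.0) ** m * sph_harm(m, n, theta, phi).real
    elif m < 0:
        return np.sqrt(2.0) * (-1.0) ** m * sph_harm(-m, n, theta, phi).imag
    else:
        return sph_harm(0, n, theta, phi).real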
Best regards, St?fan [0] https://www.dipy.org/documentation/1.0.0./reference/dipy.reconst/#module-dipy.reconst.shm -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielschmitzsiegen at googlemail.com Thu Mar 11 04:18:49 2021 From: danielschmitzsiegen at googlemail.com (Daniel Schmitz) Date: Thu, 11 Mar 2021 10:18:49 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Hey all, after Andrea's suggestions, I did an extensive github search and found several global optimization python libraries which mimic the scipy API, so that a user only has to change the import statements. Could it be useful to add a page in the documentation of these? Non exhaustive list: DIRECT: https://github.com/andim/scipydirect DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy PSO: https://github.com/jerrytheo/psopy Powell's derivative free optimizers: https://www.pdfo.net/index.html As DIRECT was very competitive on some of Andrea's benchmarks, it could be useful to mimic the scipydirect repo for inclusion into scipy (MIT license). The code is unfortunately a f2py port of the original Fortran implementation which has hard coded bounds on the number of function evaluations (90.000) and iterations (6.000). Any opinions on this? I personally am very impressed by biteopt's performance, and although it ranked very high in other global optimization benchmarks there is no formal paper on it yet. I understand the scipy guidelines in a way that such a paper is a requisite for inclusion into scipy. Best, Daniel Daniel On Sun, 17 Jan 2021 at 14:33, Andrea Gavana wrote: > > Hi Stefan, > > You?re most welcome :-) . I?m happy the experts in the community are commenting and suggesting things, and constructive criticism is also always welcome. > > On Sun, 17 Jan 2021 at 12.11, Stefan Endres wrote: >> >> Dear Andrea, >> >> Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions: >> >> Many of the commercial algorithms offer free licenses for benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use. > > > I?m definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it?s not that complicated to add one or more solvers. I?m generally more inclined in testing open source algorithms but there?s nothing stopping the inclusion of commercial ones. > > I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to setup a Linux virtual machine if absolutely needed but that would defy part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy). > >> The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)? >> >> In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite. >> It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite few ). 
> > > > I?m still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn?t be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source. > > >> >> Comments on shgo: >> >> High RAM use in higher dimensions: >> >> In the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate the memory leak issue. As you've said SHGO is best suited to problems below 10 dimensions as any higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient based solver won't work. >> >> An exception to this "rule" is when special grey box information such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants is known. >> >> In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication, although I did not specifically check if performance was actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite). >> >> I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully, this will happen in the summer semester of 2021. > > > To be honest I wouldn?t be so concerned in general: SHGO is an excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers and I?m very happy to see that SciPy is now at the cutting edge of the open source global optimization universe. > > Andrea. > >> Thank you again for compiling this large set of benchmark results. >> >> Best regards, >> Stefan >> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana wrote: >>> >>> Dear SciPy Developers & Users, >>> >>> long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013. >>> >>> This new set of benchmarks pretty much superseeds - and greatly expands - the previous analysis that you can find at this location: http://infinity77.net/global_optimization/ . >>> >>> The approach I have taken this time is to select as many benchmark test suites as possible: most of them are characterized by test function generators, from which we can actually create almost an unlimited number of unique test problems. Biggest news are: >>> >>> This whole exercise is made up of 6,825 test problems divided across 16 different test suites: most of these problems are of low dimensionality (2 to 6 variables) with a few benchmarks extending to 9+ variables. With all the sensitivities performed during this exercise on those benchmarks, the overall grand total number of functions evaluations stands at 3,859,786,025 - close to 4 billion. Not bad. 
>>> A couple of "new" optimization algorithms I have ported to Python: >>> >>> MCS: Multilevel Coordinate Search, it?s my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ) I have added a few, minor improvements compared to the original implementation. >>> BiteOpt: BITmask Evolution OPTimization , I have converted the C++ code into Python and added a few, minor modifications. >>> >>> >>> Enough chatting for now. The 13 tested algorithms are described here: >>> >>> http://infinity77.net/go_2021/ >>> >>> High level description & results of the 16 benchmarks: >>> >>> http://infinity77.net/go_2021/thebenchmarks.html >>> >>> Each benchmark test suite has its own dedicated page, with more detailed results and sensitivities. >>> >>> List of tested algorithms: >>> >>> AMPGO: Adaptive Memory Programming for Global Optimization: this is my Python implementation of the algorithm described here: >>> >>> http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf >>> >>> I have added a few improvements here and there based on my Master Thesis work on the standard Tunnelling Algorithm of Levy, Montalvo and Gomez. After AMPGO was integrated in lmfit, I have improved it even more - in my opinion. >>> >>> BasinHopping: Basin hopping is a random algorithm which attempts to find the global minimum of a smooth scalar function of one or more variables. The algorithm was originally described by David Wales: >>> >>> http://www-wales.ch.cam.ac.uk/ >>> >>> BasinHopping is now part of the standard SciPy distribution. >>> >>> BiteOpt: BITmask Evolution OPTimization, based on the algorithm presented in this GitHub link: >>> >>> https://github.com/avaneev/biteopt >>> >>> I have converted the C++ code into Python and added a few, minor modifications. 
>>> >>> CMA-ES: Covariance Matrix Adaptation Evolution Strategy, based on the following algorithm: >>> >>> http://www.lri.fr/~hansen/cmaesintro.html >>> >>> http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for the algorithm) >>> >>> CRS2: Controlled Random Search with Local Mutation, as implemented in the NLOpt package: >>> >>> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation >>> >>> DE: Differential Evolution, described in the following page: >>> >>> http://www1.icsi.berkeley.edu/~storn/code.html >>> >>> DE is now part of the standard SciPy distribution, and I have taken the implementation as it stands in SciPy: >>> >>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution >>> >>> DIRECT: the DIviding RECTangles procedure, described in: >>> >>> https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf >>> >>> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L (Python code for the algorithm) >>> >>> DualAnnealing: the Dual Annealing algorithm, taken directly from the SciPy implementation: >>> >>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing >>> >>> LeapFrog: the Leap Frog procedure, which I have been recommended for use, taken from: >>> >>> https://github.com/flythereddflagg/lpfgopt >>> >>> MCS: Multilevel Coordinate Search, it?s my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ): >>> >>> https://www.mat.univie.ac.at/~neum/software/mcs/ >>> >>> I have added a few, minor improvements compared to the original implementation. See the MCS section for a quick and dirty comparison between the Matlab code and my Python conversion. >>> >>> PSWARM: Particle Swarm optimization algorithm, it has been described in many online papers. I have used a compiled version of the C source code from: >>> >>> http://www.norg.uminho.pt/aivaz/pswarm/ >>> >>> SCE: Shuffled Complex Evolution, described in: >>> >>> Duan, Q., S. Sorooshian, and V. Gupta, Effective and efficient global optimization for conceptual rainfall-runoff models, Water Resour. Res., 28, 1015-1031, 1992. >>> >>> The version I used was graciously made available by Matthias Cuntz via a personal e-mail. >>> >>> SHGO: Simplicial Homology Global Optimization, taken directly from the SciPy implementation: >>> >>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo >>> >>> >>> List of benchmark test suites: >>> >>> SciPy Extended: 235 multivariate problems (where the number of independent variables ranges from 2 to 17), again with multiple local/global minima. >>> >>> I have added about 40 new functions to the standard SciPy benchmarks and fixed a few bugs in the existing benchmark models in the SciPy repository. >>> >>> GKLS: 1,500 test functions, with dimensionality varying from 2 to 6, generated with the super famous GKLS Test Functions Generator. I have taken the original C code (available at http://netlib.org/toms/) and converted it to Python. 
>>> >>> GlobOpt: 288 tough problems, with dimensionality varying from 2 to 5, created with another test function generator which I arbitrarily named ?GlobOpt?: https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization . The original code is in C++ and I have bridged it to Python using Cython. >>> >>> Many thanks go to Professor Marco Locatelli for providing an updated copy of the C++ source code. >>> >>> MMTFG: sort-of an acronym for ?Multi-Modal Test Function with multiple Global minima?, this test suite implements the work of Jani Ronkkonen: https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima . It contains 981 test problems with dimensionality varying from 2 to 4. The original code is in C and I have bridge it to Python using Cython. >>> >>> GOTPY: a generator of benchmark functions using the Bocharov-Feldbaum ?Method-Min?, containing 400 test problems with dimensionality varying from 2 to 5. I have taken the Python implementation from https://github.com/redb0/gotpy and improved it in terms of runtime. >>> >>> Original paper from http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng . >>> >>> Huygens: this benchmark suite is very different from the rest, as it uses a ?fractal? approach to generate test functions. It is based on the work of Cara MacNish on Fractal Functions. The original code is in Java, and at the beginning I just converted it to Python: given it was slow as a turtle, I have re-implemented it in Fortran and wrapped it using f2py, then generating 600 2-dimensional test problems out of it. >>> >>> LGMVG: not sure about the meaning of the acronym, but the implementation follows the ?Max-Set of Gaussians Landscape Generator? described in http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is given in Matlab, but it?s fairly easy to convert it to Python. This test suite contains 304 problems with dimensionality varying from 2 to 5. >>> >>> NgLi: Stemming from the work of Chi-Kong Ng and Duan Li, this is a test problem generator for unconstrained optimization, but it?s fairly easy to assign bound constraints to it. The methodology is described in https://www.sciencedirect.com/science/article/pii/S0305054814001774 , while the Matlab source code can be found in http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab script to generate 240 problems with dimensionality varying from 2 to 5 by outputting the generator parameters in text files, then used Python to create the objective functions based on those parameters and the benchmark methodology. >>> >>> MPM2: Implementing the ?Multiple Peaks Model 2?, there is a Python implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is a test problem generator also used in the smoof library, I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5. >>> >>> RandomFields: as described in https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition , it generates benchmark functions by ?smoothing? one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself. 
>>> >>> NIST: not exactly the realm of Global Optimization solvers, but the NIST StRD dataset can be used to generate a single objective function as ?sum of squares?. I have used the NIST dataset as implemented in lmfit, thus creating 27 test problems with dimensionality ranging from 2 to 9. >>> >>> GlobalLib: Arnold Neumaier maintains a suite of test problems termed ?COCONUT Benchmark? and Sahinidis has converted the GlobalLib and PricentonLib AMPL/GAMS dataset into C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a simple C parser to convert the benchmarks from C to Python. >>> >>> The global minima are taken from Sahinidis or from Neumaier or refined using the NEOS server when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9. >>> >>> CVMG: another ?landscape generator?, I had to dig it out using the Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html , the acronym stands for ?Continuous Valued Multimodality Generator?. Source code is in C++ but it?s fairly easy to port it to Python. In addition to the original implementation (that uses the Sigmoid as a softmax/transformation function) I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5. >>> >>> NLSE: again, not really the realm of Global optimization solvers, but Nonlinear Systems of Equations can be transformed to single objective functions to optimize. I have drawn from many different sources (Publications, ALIAS/COPRIN and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8. >>> >>> Schoen: based on the early work of Fabio Schoen and his short note on a simple but interesting idea on a test function generator, I have taken the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6. >>> >>> Many thanks go to Professor Fabio Schoen for providing an updated copy of the source code and for the email communications. >>> >>> Robust: the last benchmark test suite for this exercise, it is actually composed of 5 different kind-of analytical test function generators, containing deceptive, multimodal, flat functions depending on the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html , I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6. >>> >>> >>> Enjoy, and Happy 2021 :-) . >>> >>> >>> Andrea. 
>>> >>> _______________________________________________ >>> >>> >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >> >> >> >> -- >> Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering) >> >> Wissenchaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Stra?e 3, 28359 Bremen, Germany >> Work phone (DE): +49 (0) 421 218 51238 >> Cellphone (DE): +49 (0) 160 949 86417 >> Cellphone (ZA): +27 (0) 82 972 42 89 >> E-mail (work): s.endres at iwt.uni-bremen.de >> Website: https://stefan-endres.github.io/ >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From roy.pamphile at gmail.com Thu Mar 11 10:36:13 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Thu, 11 Mar 2021 16:36:13 +0100 Subject: [SciPy-Dev] NumFOCUS Small Development Grants Message-ID: Hi everyone, For those of you who don?t know, SciPy is part of the NumFOCUS organization. This allows receiving funding in general and also for something called: Small Development Grants (SDG). This grant is to help support development cost up to $5000. So if you have a project (something from the roadmap for instance) and need support for a PhD student, or simply your own compensation, this can help! There are 3 grant cycles per year. It is too late for the first cycle, but you have time to prepare for the next ones (May). If you consider doing this, a quick note: Any NumFOCUS Fiscally Sponsored or Affiliated project is eligible to submit 1 proposal per grant cycle. If a project would like to solicit proposals from its internal community, leaders must organize their own review process to select the proposal submitted on behalf of the project. Each project may receive a maximum of 2 grants per calendar year. More information here: https://numfocus.org/programs/sustainability#sdg Cheers, Pamphile -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikolay.mayorov at zoho.com Fri Mar 12 14:08:03 2021 From: nikolay.mayorov at zoho.com (Nikolay Mayorov) Date: Sat, 13 Mar 2021 00:08:03 +0500 Subject: [SciPy-Dev] GSoC'21 participation SciPy In-Reply-To: References: <6EB6E0CE-53E3-4ECC-806A-4B728728579A@gmail.com> Message-ID: <17827d6ac1c.b0deeb8838932.5136930551930802487@zoho.com> Hi! I've added an idea about implementing object-oriented design of filtering in scipy.signal. It was discussed quite a lot in the past, I think it's a sane idea and scipy.signal definitely can be made more user friendly and convenient. This is only some preliminarily view on the project. Feel free to edit the text. So far I've put only myself as a possible mentor. Nikolay ---- On Thu, 18 Feb 2021 08:34:16 +0500 Andrew Nelson wrote ---- It's good to hear about the ask-tell interface, it's not something I'd heard about before. The class-based Optimizer that was proposed wasn't going to work in quite that way. The main concept was to create an (e.g.) LBFGSB class (inheriting a Minimizer superclass). All Minimizer objects would be iterators, having a __next__ method that would perform one step of a minimisation loop. Iterator based design syncs quite well with the loop based design of most of the existing minimisation algorithms. 
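To make the shape of that idea concrete, a toy sketch of an iterator-style API (hypothetical class names and a deliberately trivial solver, not the actual proposal) might look like:

import numpy as np

class Minimizer:
    # hypothetical base class: one solver iteration per call to __next__
    def __init__(self, func, x0):
        self.func = func
        self.x = np.asarray(x0, dtype=float)

    def __iter__(self):
        return self

    def __next__(self):
        raise NotImplementedError

class GradientDescent(Minimizer):
    # stand-in for a real subclass such as an LBFGSB class
    def __init__(self, func, grad, x0, step=1e-3):
        super().__init__(func, x0)
        self.grad = grad
        self.step = step

    def __next__(self):
        # one step of the minimisation loop; a real implementation would
        # return something like an intermediate OptimizeResult
        self.x = self.x - self.step * self.grad(self.x)
        return self.x, self.func(self.x)

# the caller (or a convenience wrapper) drives iteration and decides when to stop
solver = GradientDescent(lambda x: (x ** 2).sum(), lambda x: 2 * x, x0=[3.0, -4.0])
for _ in range(100):
    x, fval = next(solver)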
The __next__ method would be responsible for calling the user based functions. If the user based functions could be marked as vectorisable the __next__ method could despatch a whole series of `x` locations for the user function (one or all of func/jac/hess) to evaluate; the user function could do whatever parallelisation it wanted. Vectorisable function evaluations also offer benefits for numerical differentiation evaluation. The return value of __next__ would be something along the lines of an intermediate OptimizeResult. I don't know how the ask-tell approach works in finer detail. For example, each minimisation step typically requires multiple function evaluations to proceed, e.g. at least once for func evaluation, and many times more for grad/jac and hess evaluation (not to mention constraint function evaluations). Therefore there wouldn't be a 1:1 correspondence of a single ask-tell and a complete step of the minimizer. I reckon the development of this would be way more than a single GSOC could provide, at least to get a mature design into scipy. It's vital to get the architecture correct (esp. the base class), when considering all the minimizers that scipy offers, and their different vagaries. Implementing for one or two minimizers wouldn't be sufficient, otherwise one forgets that they e.g. all have different approaches to halting, and you find yourself bolting other things on to make things work. In addition, it's not just the minimizer methods that are involved, it's considering how this all ties in with how constraints/numerical differentiation/`LowLevelCallable`/etc could be improved/used in such a design. At least for the methods involved in `minimize` such an opportunity is the time to consider a total redesign of how things work. Smart/vectorisable numerical differentiation would be more than a whole GSOC in itself. As Robert says, implementation in a separate package would probably be the best way to work; once the bugs have been ironed out it could be merged into scipy-proper. Any redesign could take into account the existing APIs/functionality to make things a less jarring change. It'd be great to get the original class-based Optimization off the ground, or something similar. However, it's worth noting that the original proposal only received lukewarm support. A. _______________________________________________ SciPy-Dev mailing list mailto:SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 13 05:03:52 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 13 Mar 2021 11:03:52 +0100 Subject: [SciPy-Dev] SciPy default build settings switched to use Pythran Message-ID: Hi all, After a few months of having Pythran support in master as opt-in, and quite a bit of positive feedback from contributors trying out Pythran in PRs, we've now flipped the build setup to opt-out. If you have a problem, you can still work around it for now with `export SCIPY_USE_PYTHRAN=0` before building. If there is an unexpected issue, please do report it. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Sat Mar 13 07:47:49 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 13 Mar 2021 13:47:49 +0100 Subject: [SciPy-Dev] NumFOCUS Small Development Grants In-Reply-To: References: Message-ID: On Thu, Mar 11, 2021 at 4:36 PM Pamphile Roy wrote: > Hi everyone, > > For those of you who don?t know, SciPy is part of the NumFOCUS > organization. This allows receiving funding in general and also for > something called: Small Development Grants (SDG). > This grant is to help support development cost up to $5000. So if you have > a project (something from the roadmap for instance) and need support for a > PhD student, or simply your own compensation, this can help! > Thanks for pointing out this opportunity Pamphile! SciPy has had 6 small development grants since 2018, it has definitely been an impactful program. Typically there has been slightly more interest than slots to apply, so we've usually made a pre-selection between interested maintainers on the maintainers mailing list. It's time to start organizing fundable projects/ideas a little better though, since we now also have some structural project income ($2,500/month from Tidelift). This is why I recently wrote https://numpy.org/neps/nep-0048-spending-project-funds.html, which we can use for SciPy as well. I'll start a wiki page with a template where we can collect new ideas. Cheers, Ralf > There are 3 grant cycles per year. It is too late for the first cycle, but > you have time to prepare for the next ones (May). > > If you consider doing this, a quick note: > > Any NumFOCUS Fiscally Sponsored or Affiliated project is eligible to > submit 1 proposal per grant cycle. > If a project would like to solicit proposals from its internal community, > leaders must organize > their own review process to select the proposal submitted on behalf of the > project. > > Each project may receive a maximum of 2 grants per calendar year. > > More information here: https://numfocus.org/programs/sustainability#sdg > > Cheers, > Pamphile > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From deak.andris at gmail.com Sat Mar 13 10:57:27 2021 From: deak.andris at gmail.com (Andras Deak) Date: Sat, 13 Mar 2021 16:57:27 +0100 Subject: [SciPy-Dev] cKDTree deprecation? Message-ID: Hi All, I ran into the doc issue that there's no guidance to choose between `KDTree` and `cKDTree` in spatial (in either docstrings or at https://docs.scipy.org/doc/scipy/reference/spatial.html#nearest-neighbor-queries). I plan to fix this in the documentation, but from the brief feedback I got when I asked on a related issue [1] I wonder if we should do more? As of November 2020 [2] the two classes have identical APIs, and `KDTree` seems to be a thin wrapper around `cKDTree`. The long-term plan is to remove the `cKDTree` name. Should we take steps now to deprecate `cKDTree`? I don't know if anything else is needed for proper deprecation, and it also seems to me that `PendingDeprecationWarning`s are not a thing in scipy. At least we should add a note that says that `cKDTree` might be removed in a future version. 
Cheers, Andr?s [1]: https://github.com/scipy/scipy/issues/8923#issuecomment-796924434 [2]: https://github.com/scipy/scipy/pull/12852 From ralf.gommers at gmail.com Sat Mar 13 11:14:56 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 13 Mar 2021 17:14:56 +0100 Subject: [SciPy-Dev] cKDTree deprecation? In-Reply-To: References: Message-ID: On Sat, Mar 13, 2021 at 4:57 PM Andras Deak wrote: > Hi All, > > I ran into the doc issue that there's no guidance to choose between > `KDTree` and `cKDTree` in spatial (in either docstrings or at > > https://docs.scipy.org/doc/scipy/reference/spatial.html#nearest-neighbor-queries > ). > I plan to fix this in the documentation, but from the brief feedback I > got when I asked on a related issue [1] I wonder if we should do more? > > As of November 2020 [2] the two classes have identical APIs, and > `KDTree` seems to be a thin wrapper around `cKDTree`. The long-term > plan is to remove the `cKDTree` name. Should we take steps now to > deprecate `cKDTree`? I don't know if anything else is needed for > proper deprecation, and it also seems to me that > `PendingDeprecationWarning`s are not a thing in scipy. At least we > should add a note that says that `cKDTree` might be removed in a > future version. > We on purpose didn't deprecate cKDTree yet, because KDTree only recently became fully equivalent and we don't want users to have to write: if scipy.__version__ < 1.6.0: KDTree = spatial.cKDTree else: KDTree = spatial.KDTree PendingDeprecationWarning is not all that helpful either, because it has the same issue - quite a few libraries have (or should have) a "zero warnings" rule in their test suite. I'd suggest adding the doc note now, and then we do the deprecation in 2-3 releases from now. Cheers, Ralf > Cheers, > > Andr?s > > [1]: https://github.com/scipy/scipy/issues/8923#issuecomment-796924434 > [2]: https://github.com/scipy/scipy/pull/12852 > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From deak.andris at gmail.com Sat Mar 13 11:21:49 2021 From: deak.andris at gmail.com (Andras Deak) Date: Sat, 13 Mar 2021 17:21:49 +0100 Subject: [SciPy-Dev] cKDTree deprecation? In-Reply-To: References: Message-ID: On Sat, Mar 13, 2021 at 5:15 PM Ralf Gommers wrote: > > > > On Sat, Mar 13, 2021 at 4:57 PM Andras Deak wrote: >> >> Hi All, >> >> I ran into the doc issue that there's no guidance to choose between >> `KDTree` and `cKDTree` in spatial (in either docstrings or at >> https://docs.scipy.org/doc/scipy/reference/spatial.html#nearest-neighbor-queries). >> I plan to fix this in the documentation, but from the brief feedback I >> got when I asked on a related issue [1] I wonder if we should do more? >> >> As of November 2020 [2] the two classes have identical APIs, and >> `KDTree` seems to be a thin wrapper around `cKDTree`. The long-term >> plan is to remove the `cKDTree` name. Should we take steps now to >> deprecate `cKDTree`? I don't know if anything else is needed for >> proper deprecation, and it also seems to me that >> `PendingDeprecationWarning`s are not a thing in scipy. At least we >> should add a note that says that `cKDTree` might be removed in a >> future version. 
> > > We on purpose didn't deprecate cKDTree yet, because KDTree only recently became fully equivalent and we don't want users to have to write: > if scipy.__version__ < 1.6.0: > KDTree = spatial.cKDTree > else: > KDTree = spatial.KDTree > > PendingDeprecationWarning is not all that helpful either, because it has the same issue - quite a few libraries have (or should have) a "zero warnings" rule in their test suite. > > I'd suggest adding the doc note now, and then we do the deprecation in 2-3 releases from now. Thanks, Ralf, makes sense. Andr?s > Cheers, > Ralf > > > >> >> Cheers, >> >> Andr?s >> >> [1]: https://github.com/scipy/scipy/issues/8923#issuecomment-796924434 >> [2]: https://github.com/scipy/scipy/pull/12852 >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From arthurvolant at gmail.com Sat Mar 13 16:48:27 2021 From: arthurvolant at gmail.com (Arthur VOLANT) Date: Sat, 13 Mar 2021 22:48:27 +0100 Subject: [SciPy-Dev] Moving `fisher_exact` and `barnard_exact` to `scipy.stats.contingency` Message-ID: Hello, I have been working on the implementation of `barnard_exact` test for the analysis of contingency tables in gh-13441. It is pretty close to be merged (any reviews are welcomed btw) and with Matt Haberland, we were thinking of moving this test and `fisher_exact` test into `scipy.stats.contingency`. At the moment, `barnard_exact` is in `scipy.stats._hypotests.py` and `fisher_exact` in `scipy.stats.stats`. For backwards compatibility, we could leave `fisher_exact` as an alias, but remove `scipy.stats.fisher_exact` from documentation and add `scipy.stats.contingency.fisher_exact`. What do you think? Thanks for your feedbacks, Arthur -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Sat Mar 13 18:02:49 2021 From: andyfaff at gmail.com (Andrew Nelson) Date: Sun, 14 Mar 2021 10:02:49 +1100 Subject: [SciPy-Dev] SciPy default build settings switched to use Pythran In-Reply-To: References: Message-ID: Does this mean Pythran is a build dependency now? On Sat, 13 Mar 2021, 21:04 Ralf > > After a few months of having Pythran support in master as opt-in, and > quite a bit of positive feedback from contributions trying out Pythran in > PRs, we've now flipped the build setup to opt-out. > > If you have a problem, you can still work around it for now with `export > SCIPY_USE_PYTHRAN=0` before building. If there is an unexpected issue, > please do report it. > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Sat Mar 13 22:11:54 2021 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Sat, 13 Mar 2021 20:11:54 -0700 Subject: [SciPy-Dev] SciPy default build settings switched to use Pythran In-Reply-To: References: Message-ID: Yes, Pythran is a build dep in master branch now On Sat, 13 Mar 2021 at 16:03, Andrew Nelson wrote: > Does this mean Pythran is a build dependency now? 
> > On Sat, 13 Mar 2021, 21:04 Ralf > >> >> After a few months of having Pythran support in master as opt-in, and >> quite a bit of positive feedback from contributions trying out Pythran in >> PRs, we've now flipped the build setup to opt-out. >> >> If you have a problem, you can still work around it for now with `export >> SCIPY_USE_PYTHRAN=0` before building. If there is an unexpected issue, >> please do report it. >> >> Cheers, >> Ralf >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Mar 14 07:42:40 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 14 Mar 2021 12:42:40 +0100 Subject: [SciPy-Dev] SciPy default build settings switched to use Pythran In-Reply-To: References: Message-ID: On Sun, Mar 14, 2021 at 4:12 AM Tyler Reddy wrote: > Yes, Pythran is a build dep in master branch now > I'd say it has been an optional build dependency for almost 3 months now, and it still is optional (because you can opt out). Unless we find showstoppers, the goal is to make it a required build dependency before the next release. Cheers, Ralf > On Sat, 13 Mar 2021 at 16:03, Andrew Nelson wrote: > >> Does this mean Pythran is a build dependency now? >> >> On Sat, 13 Mar 2021, 21:04 Ralf >> >>> >>> After a few months of having Pythran support in master as opt-in, and >>> quite a bit of positive feedback from contributions trying out Pythran in >>> PRs, we've now flipped the build setup to opt-out. >>> >>> If you have a problem, you can still work around it for now with `export >>> SCIPY_USE_PYTHRAN=0` before building. If there is an unexpected issue, >>> please do report it. >>> >>> Cheers, >>> Ralf >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Mar 14 09:41:38 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 14 Mar 2021 14:41:38 +0100 Subject: [SciPy-Dev] Moving `fisher_exact` and `barnard_exact` to `scipy.stats.contingency` In-Reply-To: References: Message-ID: On Sat, Mar 13, 2021 at 10:48 PM Arthur VOLANT wrote: > Hello, > > I have been working on the implementation of `barnard_exact` test for the > analysis of contingency tables in gh-13441. It is pretty close to be merged > (any reviews are welcomed btw) and with Matt Haberland, we were thinking of > moving this test and `fisher_exact` test into `scipy.stats.contingency`. At > the moment, `barnard_exact` is in `scipy.stats._hypotests.py` and > `fisher_exact` in `scipy.stats.stats`. > > For backwards compatibility, we could leave `fisher_exact` as an alias, > but remove `scipy.stats.fisher_exact` from documentation and add > `scipy.stats.contingency.fisher_exact`. > > What do you think? 
> This makes sense to me. It would be good to think about what other functions are exclusive to contingency tables, like `stats.chi2_contingency`. Cheers, Ralf > Thanks for your feedbacks, > Arthur > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Mar 14 11:07:52 2021 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 14 Mar 2021 11:07:52 -0400 Subject: [SciPy-Dev] Moving `fisher_exact` and `barnard_exact` to `scipy.stats.contingency` In-Reply-To: References: Message-ID: On Sun, Mar 14, 2021 at 9:42 AM Ralf Gommers wrote: > > > On Sat, Mar 13, 2021 at 10:48 PM Arthur VOLANT > wrote: > >> Hello, >> >> I have been working on the implementation of `barnard_exact` test for the >> analysis of contingency tables in gh-13441. It is pretty close to be merged >> (any reviews are welcomed btw) and with Matt Haberland, we were thinking of >> moving this test and `fisher_exact` test into `scipy.stats.contingency`. At >> the moment, `barnard_exact` is in `scipy.stats._hypotests.py` and >> `fisher_exact` in `scipy.stats.stats`. >> >> For backwards compatibility, we could leave `fisher_exact` as an alias, >> but remove `scipy.stats.fisher_exact` from documentation and add >> `scipy.stats.contingency.fisher_exact`. >> >> What do you think? >> > > This makes sense to me. It would be good to think about what other > functions are exclusive to contingency tables, like > `stats.chi2_contingency`. > contingency tables is just a way to organize a sample. I'm not a fan of moving functions that are not explicitly contingency tables to it's namespace for user access. barnard test, binomial tests and similar are just tests for binomial/multinomial/.. proportion data, and fit well in the flat scipy.stats namespace. code location doesn't matter much, but IMO it should remain in the main stats namespace Josef > > > Cheers, > Ralf > > > >> Thanks for your feedbacks, >> Arthur >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bennet at umich.edu Sun Mar 14 12:30:24 2021 From: bennet at umich.edu (Bennet Fauber) Date: Sun, 14 Mar 2021 12:30:24 -0400 Subject: [SciPy-Dev] Moving `fisher_exact` and `barnard_exact` to `scipy.stats.contingency` In-Reply-To: References: Message-ID: I am left wondering why the proposal isn't to move barnard_exact into scipy.stats rather then push both into the more obscure contingency category? Why prefer scipy.stats.contingency.barnard_exact scipy.stats.contingency.fisher_exact over the more straightforward and more easily found scipy.stats.barnard_exact scipy.stats.fisher_exact ?? There is a strong penalty against newcomers to scipy by increasing the depth to which one must search to find something, and I suggest that whatever computational weight might be imposed by having many functions live directly under scipy.stats would be overcome by the lessening of cognitive weight imposed on the new programmer by over classification of functions and the lowered likelihood that they would be found and used. 
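(For concreteness: every test in this discussion consumes the same kind of input, a 2x2 contingency table, e.g.

import numpy as np
from scipy import stats

table = np.array([[7, 17],    # group A: successes / failures
                  [2, 25]])   # group B: successes / failures

oddsratio, p_fisher = stats.fisher_exact(table)              # exact, conditional on margins
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)  # asymptotic
# barnard_exact from gh-13441 takes the same 2x2 table as input.

so the question is really only about which namespace users should find them in.)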
I am one of those newcomers, so I probably missed some solid argument for pushing these deeper into the tree. In most of the things I work on, the programmer's time is more important than the computation time, and often the programmer's time is more expensive than the computation time. Making these functions easier to locate and use would therefore be an important consideration. Generally, we aren't running functions millions of times and losing thousands of hours to inefficient computation, if an inefficiency would be introduced by adding more functions to the scipy.stats namespace, so perhaps my perspective is warped. On Sun, Mar 14, 2021 at 11:08 AM wrote: > > > > On Sun, Mar 14, 2021 at 9:42 AM Ralf Gommers wrote: >> >> >> >> On Sat, Mar 13, 2021 at 10:48 PM Arthur VOLANT wrote: >>> >>> Hello, >>> >>> I have been working on the implementation of `barnard_exact` test for the analysis of contingency tables in gh-13441. It is pretty close to be merged (any reviews are welcomed btw) and with Matt Haberland, we were thinking of moving this test and `fisher_exact` test into `scipy.stats.contingency`. At the moment, `barnard_exact` is in `scipy.stats._hypotests.py` and `fisher_exact` in `scipy.stats.stats`. >>> >>> For backwards compatibility, we could leave `fisher_exact` as an alias, but remove `scipy.stats.fisher_exact` from documentation and add `scipy.stats.contingency.fisher_exact`. >>> >>> What do you think? >> >> >> This makes sense to me. It would be good to think about what other functions are exclusive to contingency tables, like `stats.chi2_contingency`. > > > contingency tables is just a way to organize a sample. > > I'm not a fan of moving functions that are not explicitly contingency tables to it's namespace for user access. > > barnard test, binomial tests and similar are just tests for binomial/multinomial/.. proportion data, and fit well in the flat scipy.stats namespace. > > code location doesn't matter much, but IMO it should remain in the main stats namespace > > Josef > >> >> >> >> Cheers, >> Ralf >> >> >>> >>> Thanks for your feedbacks, >>> Arthur >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From tirthasheshpatel at gmail.com Tue Mar 16 10:52:29 2021 From: tirthasheshpatel at gmail.com (Tirth Patel) Date: Tue, 16 Mar 2021 20:22:29 +0530 Subject: [SciPy-Dev] Adding mode parameter to scipy.stats.ansari Message-ID: Hi all, I have been working on gh-13650 (https://github.com/scipy/scipy/pull/13650) with Mr. Matt Haberland. We had a discussion on what should be the behavior when the mode parameter is passed to the Ansari-Bradley test (`scipy.stats.ansari`). In the comment on gh-13650 (https://github.com/scipy/scipy/pull/13650#discussion_r594926432), Mr. Haberland suggested: - "auto" would check for ties. if ties are present or both sample sizes are large, the asymptotic method would be used - "exact" would not check for anything; it would just respect the user's decision. (should we check and throw a warning about ties here?) The question is whether to emit a warning or not when `mode="exact"` is passed and ties are present in the data. 
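A minimal sketch of the check being discussed (hypothetical helper, not the actual `scipy.stats.ansari` code):

import warnings
import numpy as np

def _warn_on_ties(x, y, mode):
    # Hypothetical: detect ties in the pooled sample; if the user forced
    # mode="exact" anyway, warn that the exact null distribution does not
    # correct for ties.
    pooled = np.concatenate((np.asarray(x), np.asarray(y)))
    has_ties = np.unique(pooled).size != pooled.size
    if has_ties and mode == "exact":
        warnings.warn("Ties are present; the exact distribution is not "
                      "corrected for ties, so the normal approximation "
                      "may be more appropriate.")
    return has_ties

Whether `ansari` should emit such a warning, or silently respect the user's choice, is exactly the open question here.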
The normal approximation corrects for ties and so it may be more correct than the exact method. So, it may be helpful to prompt the user with a warning suggesting the use of the approximate method instead. IMO, throwing a warning seems helpful. In an earlier comment I made (https://github.com/scipy/scipy/pull/13650#discussion_r589946142), I suggested falling back to the approximate method even when `mode="exact"` is passed (like R does). But now that I think of it, Mr. Haberland is right: it would be better to use the exact method as the user requested it (and we wouldn't be technically wrong doing that, right?). Anyway, I would appreciate others' views on this, thanks! -- Kind Regards, Tirth Patel
From ha at igh.de Thu Mar 18 07:31:13 2021 From: ha at igh.de (Richard Hacker) Date: Thu, 18 Mar 2021 12:31:13 +0100 Subject: [SciPy-Dev] Problems with signal.dlti Message-ID: <75b33c79-2976-634d-1838-1b4751903711@igh.de>
Hi everyone, I have an issue with scipy.signal. I have a very simple first-order low-pass filter with difference equation: y[n+1] = 0.9y[n] + 0.1u[n] Its pole-zero representation is: Y(z)/U(z) = 0.1/(z - 0.9) and its state-space representation is: A = 0.9 B = 0.1 C = 1 D = 0 To get the frequency response I use signal.dfreqresp(). The problem is that while the pole-zero representation works, the state-space version barfs on the first call, even though the results from both are identical. The second time I call the state-space version, everything is OK:
$ python3
>>> import scipy
>>> scipy.version.full_version
'0.19.1'
>>> from scipy import signal
>>> signal.dfreqresp(([0.1], [1.0, -0.9], True), 1)
(array([ 0.]), array([ 1.+0.j]))
>>> #^^^ pole-zero representation -> OK
>>> signal.dfreqresp(([0.9],[0.1],[1],[0], True), 1)
/usr/lib/python3/dist-packages/scipy/signal/filter_design.py:1452: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless "results may be meaningless", BadCoefficients)
(array([ 0.]), array([ 1.+0.j]))
>>> #^^^ state-space representation first try -> BOO!
>>> signal.dfreqresp(([0.9],[0.1],[1],[0], True), 1)
(array([ 0.]), array([ 1.+0.j]))
>>> #^^^ state-space representation next try -> OK
BTW, the same behavior comes from the continuous-time equivalent: dy/dt = -0.1y + 0.1u Pole-zero: Y(s)/U(s) = 0.1/(s + 0.1) state-space: A = -0.1; B = 0.1, C = 1, D = 0
Mit freundlichem Gruß Richard Hacker -- ------------------------------------------------------------------------ Richard Hacker M.Sc. richard.hacker at igh.de Tel.: +49 201 / 36014-16 Ingenieurgemeinschaft IgH Gesellschaft für Ingenieurleistungen mbH Nordsternstraße 66 D-45329 Essen Amtsgericht Essen HRB 11500 USt-Id.-Nr.: DE 174 626 722 Geschäftsführung: - Dr.-Ing. Siegfried Rotthäuser - Dr. Sven Beermann, Prokurist Tel.: +49 201 / 360-14-0 http://www.igh.de ------------------------------------------------------------------------
From amulyabitspilani at gmail.com Sat Mar 20 11:09:16 2021 From: amulyabitspilani at gmail.com (Amulya Gupta) Date: Sat, 20 Mar 2021 20:39:16 +0530 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 Message-ID: Dear Sir/Ma'am, I am Amulya Gupta, a sophomore from BITS, Pilani. I have been coding for the past 3 years. I am proficient in Python (modules: nose test, poetry, numpy, matplotlib, cmake, pipenv, tkinter, turtle, and currently learning mypy, etc.), C++, MySQL, and JavaScript. I have experience in open source with an internship with an organisation called NumFocus.
I want to openly contribute to the project "*Add type annotations to a submodule*" from SciPy. Please can I talk to the mentor about his expectations from the student applying. How many more students have applied for the same project and how does the mentor has pre-planned about the project and his inputs for the proposal buildup. Regards Amulya Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 20 11:18:06 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 20 Mar 2021 16:18:06 +0100 Subject: [SciPy-Dev] GSoC'21 participation SciPy In-Reply-To: <17827d6ac1c.b0deeb8838932.5136930551930802487@zoho.com> References: <6EB6E0CE-53E3-4ECC-806A-4B728728579A@gmail.com> <17827d6ac1c.b0deeb8838932.5136930551930802487@zoho.com> Message-ID: Hi Nikolay, Thanks for adding that! I agree there's a lot to improve there. It's a nice and detailed description, I don't have much to add. Gmail unhelpfully classified your email as spam, so I thought I'd reply on the list in case that happened for other people as well. Cheers, Ralf On Fri, Mar 12, 2021 at 8:08 PM Nikolay Mayorov wrote: > Hi! > > I've added an idea about implementing object-oriented design of filtering > in scipy.signal. It was discussed quite a lot in the past, I think it's a > sane idea and scipy.signal definitely can be made more user friendly and > convenient. > > This is only some preliminarily view on the project. Feel free to edit the > text. So far I've put only myself as a possible mentor. > > Nikolay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 20 11:25:38 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 20 Mar 2021 16:25:38 +0100 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 In-Reply-To: References: Message-ID: Hi Amulya, On Sat, Mar 20, 2021 at 4:09 PM Amulya Gupta wrote: > Dear Sir/Ma'am, > I am Amulya Gupta, a sophomore from BITS,Pilani. I have been coding for > the past 3 years. I am profound in Python(Modules: nose test, poetry, > numpy, matplotlib, cmake, pipenv, tkinter, turtle and currently learning > mypy etc), C++, Mysql, Java script. I have experience in open source with > an internship with an organisation called NumFocus. I want to openly > contribute to the project "*Add type annotations to a submodule*" from > SciPy. Please can I talk to the mentor about his expectations from the > student applying. How many more students have applied for the same project and > how does the mentor has pre-planned about the project and his inputs for > the proposal buildup. > I have replied to you already. We are happy to help, but do expect that you read the information we have put together on the ideas page ( https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to understand what is needed. If you have concrete questions or an early draft of a proposal we can give you feedback. But we won't draft the proposal for you - there are lots of examples, from the PSF template to the guidance in https://google.github.io/gsocguides/student/writing-a-proposal.html Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amulyabitspilani at gmail.com Sat Mar 20 12:35:20 2021 From: amulyabitspilani at gmail.com (Amulya Gupta) Date: Sat, 20 Mar 2021 22:05:20 +0530 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 In-Reply-To: References: Message-ID: My intentions were not that. I just inquired that you need any pre-plan for the proposal. I have started my early proposal and will submit as soon as possible. Regards AMULYA GUPTA On Sat, Mar 20, 2021 at 8:56 PM Ralf Gommers wrote: > Hi Amulya, > > > On Sat, Mar 20, 2021 at 4:09 PM Amulya Gupta > wrote: > >> Dear Sir/Ma'am, >> I am Amulya Gupta, a sophomore from BITS,Pilani. I have been coding for >> the past 3 years. I am profound in Python(Modules: nose test, poetry, >> numpy, matplotlib, cmake, pipenv, tkinter, turtle and currently learning >> mypy etc), C++, Mysql, Java script. I have experience in open source with >> an internship with an organisation called NumFocus. I want to openly >> contribute to the project "*Add type annotations to a submodule*" from >> SciPy. Please can I talk to the mentor about his expectations from the >> student applying. How many more students have applied for the same project and >> how does the mentor has pre-planned about the project and his inputs for >> the proposal buildup. >> > > I have replied to you already. We are happy to help, but do expect that > you read the information we have put together on the ideas page ( > https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to > understand what is needed. If you have concrete questions or an early draft > of a proposal we can give you feedback. But we won't draft the proposal for > you - there are lots of examples, from the PSF template to the guidance in > https://google.github.io/gsocguides/student/writing-a-proposal.html > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 20 16:39:11 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 20 Mar 2021 21:39:11 +0100 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 In-Reply-To: References: Message-ID: On Sat, Mar 20, 2021 at 5:35 PM Amulya Gupta wrote: > My intentions were not that. I just inquired that you need any pre-plan > for the proposal. I have started my early proposal and will submit as soon > as possible. > That sounds good, thanks Amulya. Cheers, Ralf Regards > AMULYA GUPTA > > On Sat, Mar 20, 2021 at 8:56 PM Ralf Gommers > wrote: > >> Hi Amulya, >> >> >> On Sat, Mar 20, 2021 at 4:09 PM Amulya Gupta >> wrote: >> >>> Dear Sir/Ma'am, >>> I am Amulya Gupta, a sophomore from BITS,Pilani. I have been coding for >>> the past 3 years. I am profound in Python(Modules: nose test, poetry, >>> numpy, matplotlib, cmake, pipenv, tkinter, turtle and currently learning >>> mypy etc), C++, Mysql, Java script. I have experience in open source with >>> an internship with an organisation called NumFocus. I want to openly >>> contribute to the project "*Add type annotations to a submodule*" from >>> SciPy. Please can I talk to the mentor about his expectations from the >>> student applying. How many more students have applied for the same project and >>> how does the mentor has pre-planned about the project and his inputs for >>> the proposal buildup. >>> >> >> I have replied to you already. 
We are happy to help, but do expect that >> you read the information we have put together on the ideas page ( >> https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to >> understand what is needed. If you have concrete questions or an early draft >> of a proposal we can give you feedback. But we won't draft the proposal for >> you - there are lots of examples, from the PSF template to the guidance in >> https://google.github.io/gsocguides/student/writing-a-proposal.html >> >> Cheers, >> Ralf >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 20 17:30:13 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 20 Mar 2021 22:30:13 +0100 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team Message-ID: Hi all, On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a member of the core team. Pamphile has been contributing for about a year and a half, mostly on creating our newest submodule - scipy.stats.qmc. More recently he has also been active reviewing PRs and maintaining scipy.stats and scipy.optimize. Here is an overview of his SciPy PRs: https://github.com/scipy/scipy/pulls/tupui I'm looking forward to Pamphile's continued contributions! Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Sun Mar 21 01:24:20 2021 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Sun, 21 Mar 2021 08:24:20 +0300 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team In-Reply-To: References: Message-ID: Welcome Pamphile! ??, 21 ???. 2021 ?., 0:30 Ralf Gommers : > Hi all, > > On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a > member of the core team. Pamphile has been contributing for about a year > and a half, mostly on creating our newest submodule - scipy.stats.qmc. More > recently he has also been active reviewing PRs and maintaining scipy.stats > and scipy.optimize. Here is an overview of his SciPy PRs: > https://github.com/scipy/scipy/pulls/tupui > > I'm looking forward to Pamphile's continued contributions! > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Sun Mar 21 05:13:21 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Sun, 21 Mar 2021 10:13:21 +0100 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team In-Reply-To: References: Message-ID: <083DAA55-34DC-415B-A4A3-04011D2D5135@gmail.com> Hi everyone, Thanks for this opportunity to join the team and for the messages! Feel free to reach out to me. Cheers, Pamphile > On 20.03.2021, at 22:30, Ralf Gommers wrote: > > Hi all, > > On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a member of the core team. Pamphile has been contributing for about a year and a half, mostly on creating our newest submodule - scipy.stats.qmc. More recently he has also been active reviewing PRs and maintaining scipy.stats and scipy.optimize. 
Here is an overview of his SciPy PRs: https://github.com/scipy/scipy/pulls/tupui > > I'm looking forward to Pamphile's continued contributions! > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Sun Mar 21 11:20:15 2021 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Sun, 21 Mar 2021 16:20:15 +0100 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team In-Reply-To: <083DAA55-34DC-415B-A4A3-04011D2D5135@gmail.com> References: <083DAA55-34DC-415B-A4A3-04011D2D5135@gmail.com> Message-ID: Welcome on board! On Sun, Mar 21, 2021 at 10:14 AM Pamphile Roy wrote: > Hi everyone, > > Thanks for this opportunity to join the team and for the messages! > > Feel free to reach out to me. > > Cheers, > Pamphile > > On 20.03.2021, at 22:30, Ralf Gommers wrote: > > Hi all, > > On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a > member of the core team. Pamphile has been contributing for about a year > and a half, mostly on creating our newest submodule - scipy.stats.qmc. More > recently he has also been active reviewing PRs and maintaining scipy.stats > and scipy.optimize. Here is an overview of his SciPy PRs: > https://github.com/scipy/scipy/pulls/tupui > > I'm looking forward to Pamphile's continued contributions! > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Sun Mar 21 12:16:27 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Sun, 21 Mar 2021 17:16:27 +0100 Subject: [SciPy-Dev] pydata-sphinx-theme Message-ID: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> Hi everyone, I was wondering if there was a plan about moving the doc to using PyData?s theme? (I could not find any issues or ref in mail, sorry if I missed something). IMO it?s good if we have a UX which is close to NumPy and the other libraries of the stack. Although for the landing page, I would propose to have fewer things like Pandas. I could propose to work on this, if this is something we want. Cheers, Pamphile From ralf.gommers at gmail.com Sun Mar 21 12:22:08 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 21 Mar 2021 17:22:08 +0100 Subject: [SciPy-Dev] pydata-sphinx-theme In-Reply-To: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> References: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> Message-ID: On Sun, Mar 21, 2021 at 5:16 PM Pamphile Roy wrote: > Hi everyone, > > I was wondering if there was a plan about moving the doc to using PyData?s > theme? (I could not find any issues or ref in mail, sorry if I missed > something). > There's no concrete plan, other than "we should definitely do this". IMO it?s good if we have a UX which is close to NumPy and the other > libraries of the stack. Although for the landing page, I would propose to > have fewer things like Pandas. > Agreed, we should use the exact same tiled layout as Pandas. 
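For reference, the mechanical part of such a switch is mostly Sphinx configuration; roughly (a sketch, not the actual scipy `doc/source/conf.py`):

# doc/source/conf.py (sketch)
html_theme = "pydata_sphinx_theme"
html_theme_options = {
    "github_url": "https://github.com/scipy/scipy",
    # navbar/search options would follow what numpy and pandas use
}

The bulk of the work is then reorganising the landing page and fixing styling regressions, as the NumPy migration PR mentioned in this thread shows.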
I already proposed that for NumPy as well: https://github.com/numpy/numpy/issues/18419 > I could propose to work on this, if this is something we want. > That would be awesome! Hopefully it isn't too hard - the PR where NumPy moved (https://github.com/numpy/numpy/pull/17074) may be a useful reference. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Sun Mar 21 12:26:34 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Sun, 21 Mar 2021 17:26:34 +0100 Subject: [SciPy-Dev] pydata-sphinx-theme In-Reply-To: References: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> Message-ID: <1EEB9159-5437-4C16-96AF-2D75C959A492@gmail.com> > On 21.03.2021, at 17:22, Ralf Gommers wrote: > > There's no concrete plan, other than "we should definitely do this". > > IMO it?s good if we have a UX which is close to NumPy and the other libraries of the stack. Although for the landing page, I would propose to have fewer things like Pandas. > > Agreed, we should use the exact same tiled layout as Pandas. I already proposed that for NumPy as well: https://github.com/numpy/numpy/issues/18419 > > > I could propose to work on this, if this is something we want. > > That would be awesome! Hopefully it isn't too hard - the PR where NumPy moved (https://github.com/numpy/numpy/pull/17074 ) may be a useful reference. OK great, thanks for the issue, I was currently searching for it. Yeah after quickly going through the doc, I hope it goes smoothly. I will start working on it then! Cheers, Pamphile -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucas7 at vt.edu Sun Mar 21 15:17:52 2021 From: rlucas7 at vt.edu (Lucas Roberts) Date: Sun, 21 Mar 2021 15:17:52 -0400 Subject: [SciPy-Dev] python3.9 runtests.py -mypy failing on master bc mpmath Message-ID: Hi scipy-dev, I just merged a PR: https://github.com/scipy/scipy/pull/10170 that was all green and passing. It looks like on master there is a python 3.9 mypy execution of runtests.py, here is the offending part of the stacktrace where the build fails ``` 2021-03-21T18:26:08.7168990Z Installing collected packages: typing-extensions, typed-ast, mypy-extensions, mypy 2021-03-21T18:26:10.3177755Z Successfully installed mypy-0.812 mypy-extensions-0.4.3 typed-ast-1.4.2 typing-extensions-3.7.4.3 2021-03-21T18:26:10.4138039Z ##[group]Run python3.9 -u runtests.py --mypy 2021-03-21T18:26:10.4138752Z [36;1mpython3.9 -u runtests.py --mypy [0m 2021-03-21T18:26:10.4183883Z shell: /usr/bin/bash -e {0} 2021-03-21T18:26:10.4184310Z ##[endgroup] 2021-03-21T18:26:10.4915105Z Building, see build.log... 2021-03-21T18:27:10.5562824Z ... build in progress (0:01:00.064479 elapsed) 2021-03-21T18:28:10.6226962Z ... build in progress (0:02:00.131202 elapsed) 2021-03-21T18:29:10.6923322Z ... build in progress (0:03:00.200865 elapsed) 2021-03-21T18:30:10.7642621Z ... build in progress (0:04:00.272632 elapsed) 2021-03-21T18:31:10.8354301Z ... build in progress (0:05:00.343937 elapsed) 2021-03-21T18:32:10.9051482Z ... build in progress (0:06:00.413679 elapsed) 2021-03-21T18:33:10.9747934Z ... build in progress (0:07:00.483329 elapsed) 2021-03-21T18:34:11.0420822Z ... build in progress (0:08:00.550613 elapsed) 2021-03-21T18:35:11.1096296Z ... build in progress (0:09:00.618168 elapsed) 2021-03-21T18:36:11.1807347Z ... build in progress (0:10:00.689247 elapsed) 2021-03-21T18:37:11.2499801Z ... build in progress (0:11:00.758503 elapsed) 2021-03-21T18:38:11.3141592Z ... 
build in progress (0:12:00.822681 elapsed) 2021-03-21T18:39:11.3839094Z ... build in progress (0:13:00.892443 elapsed) 2021-03-21T18:40:11.4513782Z ... build in progress (0:14:00.959913 elapsed) 2021-03-21T18:41:11.5224005Z ... build in progress (0:15:01.030878 elapsed) 2021-03-21T18:42:11.5936181Z ... build in progress (0:16:01.102135 elapsed) 2021-03-21T18:42:32.6182673Z Build OK (0:16:22.126754 elapsed) 2021-03-21T18:43:05.3712863Z scipy/special/_precompute/cosine_cdf.py:2: error: Cannot find implementation or library stub for module named 'mpmath' [import] 2021-03-21T18:43:05.3716636Z scipy/special/_precompute/cosine_cdf.py:2: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports 2021-03-21T18:43:05.3718156Z Found 1 error in 1 file (checked 677 source files) 2021-03-21T18:43:06.5626501Z ##[error]Process completed with exit code 1. 2021-03-21T18:43:07.2727027Z Post job cleanup. 2021-03-21T18:43:07.3838441Z [command]/usr/bin/git version 2021-03-21T18:43:07.3891979Z git version 2.30.2 2021-03-21T18:43:07.3933267Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2021-03-21T18:43:07.3968953Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || : 2021-03-21T18:43:07.4490076Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2021-03-21T18:43:07.4525518Z http.https://github.com/.extraheader 2021-03-21T18:43:07.4539152Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2021-03-21T18:43:07.5212999Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || : 2021-03-21T18:43:07.5566225Z Cleaning up orphan processes ``` but looking at the runs on the PR https://github.com/scipy/scipy/runs/1953046830 we were doing: ``` python3.9 -u runtests.py -m fast ``` I'm less familiar with the -mypy flag to runtests.py, does anyone know if we need (1) to remove the mpmath parts of that PR opting instead for precompute/hard coded numbers or (2) this is something that will change/be removed in future -- -L -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Sun Mar 21 15:20:58 2021 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Sun, 21 Mar 2021 20:20:58 +0100 Subject: [SciPy-Dev] Proposal to merge linalg.pinv and linalg.pinv2 and deprecate pinv2 Message-ID: Currently, scipy has pinv, pinv2, pinvh pseudo inverse functions. The first one sends the array to the least squares solver to solve @ X = Identity with minimum-norm solution And the other two forms the SVD based construction by inverting the SV/eig diagonal and reforming the product in the reverse order i.e. the typical pseudoinverse. However, the first one already does this in the lower level driver anyways since that's what gelss driver does [1]. Hence I think it is better to merge these and get rid of one as an alias in the deprecation cycle. Thoughts? ilhan [1] https://www.netlib.org/lapack/lug/node27.html -------------- next part -------------- An HTML attachment was scrubbed... 
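A quick check of the overlap being described (both code paths end up doing an SVD-based construction, so they agree to rounding error):

import numpy as np
from scipy import linalg

rng = np.random.default_rng(1234)
A = rng.standard_normal((40, 15))

print(np.allclose(linalg.pinv(A), linalg.pinv2(A)))  # True: least-squares route vs explicit SVD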
URL: From ralf.gommers at gmail.com Sun Mar 21 17:59:19 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 21 Mar 2021 22:59:19 +0100 Subject: [SciPy-Dev] Proposal to merge linalg.pinv and linalg.pinv2 and deprecate pinv2 In-Reply-To: References: Message-ID: On Sun, Mar 21, 2021 at 8:21 PM Ilhan Polat wrote: > Currently, scipy has pinv, pinv2, pinvh pseudo inverse functions. > > The first one sends the array to the least squares solver to solve > > @ X = Identity > > with minimum-norm solution > > And the other two forms the SVD based construction by inverting the SV/eig > diagonal and reforming the product in the reverse order i.e. the typical > pseudoinverse. > > However, the first one already does this in the lower level driver anyways > since that's what gelss driver does [1]. Hence I think it is better to > merge these and get rid of one as an alias in the deprecation cycle. > Sounds fine to me, having multiple and it not being clear to the user which one to use is not great. Do you happen to know the history of how we ended up with pinv2? Cheers, Ralf > Thoughts? > > ilhan > > > [1] https://www.netlib.org/lapack/lug/node27.html > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Mar 21 18:53:51 2021 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 21 Mar 2021 18:53:51 -0400 Subject: [SciPy-Dev] Proposal to merge linalg.pinv and linalg.pinv2 and deprecate pinv2 In-Reply-To: References: Message-ID: On Sun, Mar 21, 2021 at 6:00 PM Ralf Gommers wrote: > > On Sun, Mar 21, 2021 at 8:21 PM Ilhan Polat wrote: > >> Currently, scipy has pinv, pinv2, pinvh pseudo inverse functions. >> >> The first one sends the array to the least squares solver to solve >> >> @ X = Identity >> >> with minimum-norm solution >> >> And the other two forms the SVD based construction by inverting the >> SV/eig diagonal and reforming the product in the reverse order i.e. the >> typical pseudoinverse. >> >> However, the first one already does this in the lower level driver >> anyways since that's what gelss driver does [1]. Hence I think it is better >> to merge these and get rid of one as an alias in the deprecation cycle. >> > > Sounds fine to me, having multiple and it not being clear to the user > which one to use is not great. > > Do you happen to know the history of how we ended up with pinv2? > I suspect that when `pinv2()` was added, the `lstsq()` call underlying `pinv()` was not SVD-based. The precise LAPACK driver has changed over the years. We might have started with the QR-based driver. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Mar 21 19:17:36 2021 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 21 Mar 2021 19:17:36 -0400 Subject: [SciPy-Dev] Proposal to merge linalg.pinv and linalg.pinv2 and deprecate pinv2 In-Reply-To: References: Message-ID: On Sun, Mar 21, 2021 at 6:53 PM Robert Kern wrote: > On Sun, Mar 21, 2021 at 6:00 PM Ralf Gommers > wrote: > >> >> Do you happen to know the history of how we ended up with pinv2? >> > > I suspect that when `pinv2()` was added, the `lstsq()` call underlying > `pinv()` was not SVD-based. The precise LAPACK driver has changed over the > years. We might have started with the QR-based driver. 
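And the least-squares construction that `pinv` uses can be spelled out directly (illustrative check, matching the old `lstsq`-based definition quoted later in this thread):

import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 12))

via_lstsq = linalg.lstsq(A, np.eye(A.shape[0]))[0]  # minimum-norm least-squares "inverse"
via_svd = linalg.pinv2(A)                           # explicit SVD construction
print(np.allclose(via_lstsq, via_svd))              # True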
> It's going to be very hard to tell definitively because I think the history got lost in the SVN->git conversion due to some directory renames that happened in the early days. The pinv/pinv2 split seems to have been very early, though, so it may have dated from the original Multipack library (the source tarballs of which are also linkrotted away). -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sun Mar 21 19:24:35 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 21 Mar 2021 19:24:35 -0400 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team In-Reply-To: References: Message-ID: On 3/20/21, Ralf Gommers wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a > member of the core team. Pamphile has been contributing for about a year > and a half, mostly on creating our newest submodule - scipy.stats.qmc. More > recently he has also been active reviewing PRs and maintaining scipy.stats > and scipy.optimize. Here is an overview of his SciPy PRs: > https://github.com/scipy/scipy/pulls/tupui > > I'm looking forward to Pamphile's continued contributions! > > Cheers, > Ralf > Welcome, Pamphile. Keep up the great work! Warren From andyfaff at gmail.com Sun Mar 21 19:28:50 2021 From: andyfaff at gmail.com (Andrew Nelson) Date: Mon, 22 Mar 2021 10:28:50 +1100 Subject: [SciPy-Dev] welcome Pamphile Roy to the SciPy core team In-Reply-To: References: Message-ID: Welcome Pamphile. On Sun, 21 Mar 2021 at 08:31, Ralf Gommers wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Pamphile Roy as a > member of the core team. Pamphile has been contributing for about a year > and a half, mostly on creating our newest submodule - scipy.stats.qmc. More > recently he has also been active reviewing PRs and maintaining scipy.stats > and scipy.optimize. Here is an overview of his SciPy PRs: > https://github.com/scipy/scipy/pulls/tupui > > I'm looking forward to Pamphile's continued contributions! > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sun Mar 21 20:12:39 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 21 Mar 2021 20:12:39 -0400 Subject: [SciPy-Dev] python3.9 runtests.py -mypy failing on master bc mpmath In-Reply-To: References: Message-ID: On 3/21/21, Lucas Roberts wrote: > Hi scipy-dev, > I just merged a PR: https://github.com/scipy/scipy/pull/10170 > that was all green and passing. 
> > It looks like on master there is a python 3.9 mypy execution of > runtests.py, here is the offending part of the stacktrace where the build > fails > > ``` > > 2021-03-21T18:26:08.7168990Z Installing collected packages: > typing-extensions, typed-ast, mypy-extensions, mypy > 2021-03-21T18:26:10.3177755Z Successfully installed mypy-0.812 > mypy-extensions-0.4.3 typed-ast-1.4.2 typing-extensions-3.7.4.3 > 2021-03-21T18:26:10.4138039Z ##[group]Run python3.9 -u runtests.py --mypy > 2021-03-21T18:26:10.4138752Z [36;1mpython3.9 -u runtests.py --mypy [0m > 2021-03-21T18:26:10.4183883Z shell: /usr/bin/bash -e {0} > 2021-03-21T18:26:10.4184310Z ##[endgroup] > 2021-03-21T18:26:10.4915105Z Building, see build.log... > 2021-03-21T18:27:10.5562824Z ... build in progress (0:01:00.064479 > elapsed) > 2021-03-21T18:28:10.6226962Z ... build in progress (0:02:00.131202 > elapsed) > 2021-03-21T18:29:10.6923322Z ... build in progress (0:03:00.200865 > elapsed) > 2021-03-21T18:30:10.7642621Z ... build in progress (0:04:00.272632 > elapsed) > 2021-03-21T18:31:10.8354301Z ... build in progress (0:05:00.343937 > elapsed) > 2021-03-21T18:32:10.9051482Z ... build in progress (0:06:00.413679 > elapsed) > 2021-03-21T18:33:10.9747934Z ... build in progress (0:07:00.483329 > elapsed) > 2021-03-21T18:34:11.0420822Z ... build in progress (0:08:00.550613 > elapsed) > 2021-03-21T18:35:11.1096296Z ... build in progress (0:09:00.618168 > elapsed) > 2021-03-21T18:36:11.1807347Z ... build in progress (0:10:00.689247 > elapsed) > 2021-03-21T18:37:11.2499801Z ... build in progress (0:11:00.758503 > elapsed) > 2021-03-21T18:38:11.3141592Z ... build in progress (0:12:00.822681 > elapsed) > 2021-03-21T18:39:11.3839094Z ... build in progress (0:13:00.892443 > elapsed) > 2021-03-21T18:40:11.4513782Z ... build in progress (0:14:00.959913 > elapsed) > 2021-03-21T18:41:11.5224005Z ... build in progress (0:15:01.030878 > elapsed) > 2021-03-21T18:42:11.5936181Z ... build in progress (0:16:01.102135 > elapsed) > 2021-03-21T18:42:32.6182673Z Build OK (0:16:22.126754 elapsed) > 2021-03-21T18:43:05.3712863Z > scipy/special/_precompute/cosine_cdf.py:2: error: Cannot find > implementation or library stub for module named 'mpmath' [import] > 2021-03-21T18:43:05.3716636Z > scipy/special/_precompute/cosine_cdf.py:2: note: See > https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports > 2021-03-21T18:43:05.3718156Z Found 1 error in 1 file (checked 677 source > files) > 2021-03-21T18:43:06.5626501Z ##[error]Process completed with exit code 1. > 2021-03-21T18:43:07.2727027Z Post job cleanup. 
> 2021-03-21T18:43:07.3838441Z [command]/usr/bin/git version > 2021-03-21T18:43:07.3891979Z git version 2.30.2 > 2021-03-21T18:43:07.3933267Z [command]/usr/bin/git config --local > --name-only --get-regexp core\.sshCommand > 2021-03-21T18:43:07.3968953Z [command]/usr/bin/git submodule foreach > --recursive git config --local --name-only --get-regexp > 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' > || : > 2021-03-21T18:43:07.4490076Z [command]/usr/bin/git config --local > --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader > 2021-03-21T18:43:07.4525518Z http.https://github.com/.extraheader > 2021-03-21T18:43:07.4539152Z [command]/usr/bin/git config --local > --unset-all http.https://github.com/.extraheader > 2021-03-21T18:43:07.5212999Z [command]/usr/bin/git submodule foreach > --recursive git config --local --name-only --get-regexp > 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local > --unset-all 'http.https://github.com/.extraheader' || : > 2021-03-21T18:43:07.5566225Z Cleaning up orphan processes > > ``` > > but looking at the runs on the PR > https://github.com/scipy/scipy/runs/1953046830 > we were doing: > ``` > python3.9 -u runtests.py -m fast > ``` > I'm less familiar with the -mypy flag to runtests.py, does anyone know if > we need > (1) to remove the mpmath parts of that PR opting instead for > precompute/hard coded numbers or > (2) this is something that will change/be removed in future > > -- > -L > I can reproduce the problem locally, and I can "fix" it by configuring mypy to ignore errors in that file. That is done by adding [mypy-scipy.special._precompute.cosine_cdf] ignore_errors = True to mypy.ini. I created a pull request: https://github.com/scipy/scipy/pull/13720. The tests are running as I write this, but it worked locally. This is obviously just a work-around, but we have a lot of those in mypy.ini. I'm not a mypy expert, so suggestions (and even better, PRs) with a better fix are welcome. Warren From rlucas7 at vt.edu Sun Mar 21 20:49:23 2021 From: rlucas7 at vt.edu (rlucas7 at vt.edu) Date: Sun, 21 Mar 2021 20:49:23 -0400 Subject: [SciPy-Dev] python3.9 runtests.py -mypy failing on master bc mpmath In-Reply-To: References: Message-ID: <9B412ACC-D11B-4B7D-8677-2B40AD6B86FD@vt.edu> Hey Warren, Also not a mypy expert. I also opened a pr with an inline ignore that is similar to what was used in another prior pr. I missed your PR until I opened other one. https://github.com/scipy/scipy/pull/13721 Not sure how we prefer those, if we want to take a top level one (on np math) and omit the file specific ones it might save many lines removal. If I?ve misunderstood how the mypy.ini config works please let me know. -Lucas Roberts > On Mar 21, 2021, at 8:13 PM, Warren Weckesser wrote: > > ?On 3/21/21, Lucas Roberts wrote: >> Hi scipy-dev, >> I just merged a PR: https://github.com/scipy/scipy/pull/10170 >> that was all green and passing. 
>> >> It looks like on master there is a python 3.9 mypy execution of >> runtests.py, here is the offending part of the stacktrace where the build >> fails >> >> ``` >> >> 2021-03-21T18:26:08.7168990Z Installing collected packages: >> typing-extensions, typed-ast, mypy-extensions, mypy >> 2021-03-21T18:26:10.3177755Z Successfully installed mypy-0.812 >> mypy-extensions-0.4.3 typed-ast-1.4.2 typing-extensions-3.7.4.3 >> 2021-03-21T18:26:10.4138039Z ##[group]Run python3.9 -u runtests.py --mypy >> 2021-03-21T18:26:10.4138752Z [36;1mpython3.9 -u runtests.py --mypy [0m >> 2021-03-21T18:26:10.4183883Z shell: /usr/bin/bash -e {0} >> 2021-03-21T18:26:10.4184310Z ##[endgroup] >> 2021-03-21T18:26:10.4915105Z Building, see build.log... >> 2021-03-21T18:27:10.5562824Z ... build in progress (0:01:00.064479 >> elapsed) >> 2021-03-21T18:28:10.6226962Z ... build in progress (0:02:00.131202 >> elapsed) >> 2021-03-21T18:29:10.6923322Z ... build in progress (0:03:00.200865 >> elapsed) >> 2021-03-21T18:30:10.7642621Z ... build in progress (0:04:00.272632 >> elapsed) >> 2021-03-21T18:31:10.8354301Z ... build in progress (0:05:00.343937 >> elapsed) >> 2021-03-21T18:32:10.9051482Z ... build in progress (0:06:00.413679 >> elapsed) >> 2021-03-21T18:33:10.9747934Z ... build in progress (0:07:00.483329 >> elapsed) >> 2021-03-21T18:34:11.0420822Z ... build in progress (0:08:00.550613 >> elapsed) >> 2021-03-21T18:35:11.1096296Z ... build in progress (0:09:00.618168 >> elapsed) >> 2021-03-21T18:36:11.1807347Z ... build in progress (0:10:00.689247 >> elapsed) >> 2021-03-21T18:37:11.2499801Z ... build in progress (0:11:00.758503 >> elapsed) >> 2021-03-21T18:38:11.3141592Z ... build in progress (0:12:00.822681 >> elapsed) >> 2021-03-21T18:39:11.3839094Z ... build in progress (0:13:00.892443 >> elapsed) >> 2021-03-21T18:40:11.4513782Z ... build in progress (0:14:00.959913 >> elapsed) >> 2021-03-21T18:41:11.5224005Z ... build in progress (0:15:01.030878 >> elapsed) >> 2021-03-21T18:42:11.5936181Z ... build in progress (0:16:01.102135 >> elapsed) >> 2021-03-21T18:42:32.6182673Z Build OK (0:16:22.126754 elapsed) >> 2021-03-21T18:43:05.3712863Z >> scipy/special/_precompute/cosine_cdf.py:2: error: Cannot find >> implementation or library stub for module named 'mpmath' [import] >> 2021-03-21T18:43:05.3716636Z >> scipy/special/_precompute/cosine_cdf.py:2: note: See >> https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports >> 2021-03-21T18:43:05.3718156Z Found 1 error in 1 file (checked 677 source >> files) >> 2021-03-21T18:43:06.5626501Z ##[error]Process completed with exit code 1. >> 2021-03-21T18:43:07.2727027Z Post job cleanup. 
>> 2021-03-21T18:43:07.3838441Z [command]/usr/bin/git version >> 2021-03-21T18:43:07.3891979Z git version 2.30.2 >> 2021-03-21T18:43:07.3933267Z [command]/usr/bin/git config --local >> --name-only --get-regexp core\.sshCommand >> 2021-03-21T18:43:07.3968953Z [command]/usr/bin/git submodule foreach >> --recursive git config --local --name-only --get-regexp >> 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' >> || : >> 2021-03-21T18:43:07.4490076Z [command]/usr/bin/git config --local >> --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader >> 2021-03-21T18:43:07.4525518Z http.https://github.com/.extraheader >> 2021-03-21T18:43:07.4539152Z [command]/usr/bin/git config --local >> --unset-all http.https://github.com/.extraheader >> 2021-03-21T18:43:07.5212999Z [command]/usr/bin/git submodule foreach >> --recursive git config --local --name-only --get-regexp >> 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local >> --unset-all 'http.https://github.com/.extraheader' || : >> 2021-03-21T18:43:07.5566225Z Cleaning up orphan processes >> >> ``` >> >> but looking at the runs on the PR >> https://github.com/scipy/scipy/runs/1953046830 >> we were doing: >> ``` >> python3.9 -u runtests.py -m fast >> ``` >> I'm less familiar with the -mypy flag to runtests.py, does anyone know if >> we need >> (1) to remove the mpmath parts of that PR opting instead for >> precompute/hard coded numbers or >> (2) this is something that will change/be removed in future >> >> -- >> -L >> > > I can reproduce the problem locally, and I can "fix" it by configuring > mypy to ignore errors in that file. That is done by adding > > [mypy-scipy.special._precompute.cosine_cdf] > ignore_errors = True > > to mypy.ini. I created a pull request: > https://github.com/scipy/scipy/pull/13720. The tests are running as I > write this, but it worked locally. > > This is obviously just a work-around, but we have a lot of those in > mypy.ini. I'm not a mypy expert, so suggestions (and even better, > PRs) with a better fix are welcome. > > Warren > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.craig.wilson at gmail.com Sun Mar 21 23:59:24 2021 From: josh.craig.wilson at gmail.com (Joshua Wilson) Date: Sun, 21 Mar 2021 20:59:24 -0700 Subject: [SciPy-Dev] python3.9 runtests.py -mypy failing on master bc mpmath In-Reply-To: <9B412ACC-D11B-4B7D-8677-2B40AD6B86FD@vt.edu> References: <9B412ACC-D11B-4B7D-8677-2B40AD6B86FD@vt.edu> Message-ID: The right approach here is to ignore errors from mpmath in the mypy.ini instead of ignoring errors from that particular file; something like [mypy-mpmath] ignore_errors = True [mypy-mpmath.*] ignore_errors = True Mypy is unhappy because mpmath has no typing information (it complains by default so that untyped stuff doesn't sneak in without you knowing), but mpmath is unlikely to ever gain type support, so we should ignore errors from it. On Sun, Mar 21, 2021 at 5:50 PM wrote: > > Hey Warren, > > Also not a mypy expert. > > I also opened a pr with an inline ignore that is similar to what was used in another prior pr. I missed your PR until I opened other one. > > https://github.com/scipy/scipy/pull/13721 > > Not sure how we prefer those, if we want to take a top level one (on np math) and omit the file specific ones it might save many lines removal. 
> > If I?ve misunderstood how the mypy.ini config works please let me know. > > -Lucas Roberts > > On Mar 21, 2021, at 8:13 PM, Warren Weckesser wrote: > > ?On 3/21/21, Lucas Roberts wrote: > > Hi scipy-dev, > > I just merged a PR: https://github.com/scipy/scipy/pull/10170 > > that was all green and passing. > > > It looks like on master there is a python 3.9 mypy execution of > > runtests.py, here is the offending part of the stacktrace where the build > > fails > > > ``` > > > 2021-03-21T18:26:08.7168990Z Installing collected packages: > > typing-extensions, typed-ast, mypy-extensions, mypy > > 2021-03-21T18:26:10.3177755Z Successfully installed mypy-0.812 > > mypy-extensions-0.4.3 typed-ast-1.4.2 typing-extensions-3.7.4.3 > > 2021-03-21T18:26:10.4138039Z ##[group]Run python3.9 -u runtests.py --mypy > > 2021-03-21T18:26:10.4138752Z [36;1mpython3.9 -u runtests.py --mypy [0m > > 2021-03-21T18:26:10.4183883Z shell: /usr/bin/bash -e {0} > > 2021-03-21T18:26:10.4184310Z ##[endgroup] > > 2021-03-21T18:26:10.4915105Z Building, see build.log... > > 2021-03-21T18:27:10.5562824Z ... build in progress (0:01:00.064479 > > elapsed) > > 2021-03-21T18:28:10.6226962Z ... build in progress (0:02:00.131202 > > elapsed) > > 2021-03-21T18:29:10.6923322Z ... build in progress (0:03:00.200865 > > elapsed) > > 2021-03-21T18:30:10.7642621Z ... build in progress (0:04:00.272632 > > elapsed) > > 2021-03-21T18:31:10.8354301Z ... build in progress (0:05:00.343937 > > elapsed) > > 2021-03-21T18:32:10.9051482Z ... build in progress (0:06:00.413679 > > elapsed) > > 2021-03-21T18:33:10.9747934Z ... build in progress (0:07:00.483329 > > elapsed) > > 2021-03-21T18:34:11.0420822Z ... build in progress (0:08:00.550613 > > elapsed) > > 2021-03-21T18:35:11.1096296Z ... build in progress (0:09:00.618168 > > elapsed) > > 2021-03-21T18:36:11.1807347Z ... build in progress (0:10:00.689247 > > elapsed) > > 2021-03-21T18:37:11.2499801Z ... build in progress (0:11:00.758503 > > elapsed) > > 2021-03-21T18:38:11.3141592Z ... build in progress (0:12:00.822681 > > elapsed) > > 2021-03-21T18:39:11.3839094Z ... build in progress (0:13:00.892443 > > elapsed) > > 2021-03-21T18:40:11.4513782Z ... build in progress (0:14:00.959913 > > elapsed) > > 2021-03-21T18:41:11.5224005Z ... build in progress (0:15:01.030878 > > elapsed) > > 2021-03-21T18:42:11.5936181Z ... build in progress (0:16:01.102135 > > elapsed) > > 2021-03-21T18:42:32.6182673Z Build OK (0:16:22.126754 elapsed) > > 2021-03-21T18:43:05.3712863Z > > scipy/special/_precompute/cosine_cdf.py:2: error: Cannot find > > implementation or library stub for module named 'mpmath' [import] > > 2021-03-21T18:43:05.3716636Z > > scipy/special/_precompute/cosine_cdf.py:2: note: See > > https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports > > 2021-03-21T18:43:05.3718156Z Found 1 error in 1 file (checked 677 source > > files) > > 2021-03-21T18:43:06.5626501Z ##[error]Process completed with exit code 1. > > 2021-03-21T18:43:07.2727027Z Post job cleanup. 
> > 2021-03-21T18:43:07.3838441Z [command]/usr/bin/git version > > 2021-03-21T18:43:07.3891979Z git version 2.30.2 > > 2021-03-21T18:43:07.3933267Z [command]/usr/bin/git config --local > > --name-only --get-regexp core\.sshCommand > > 2021-03-21T18:43:07.3968953Z [command]/usr/bin/git submodule foreach > > --recursive git config --local --name-only --get-regexp > > 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' > > || : > > 2021-03-21T18:43:07.4490076Z [command]/usr/bin/git config --local > > --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader > > 2021-03-21T18:43:07.4525518Z http.https://github.com/.extraheader > > 2021-03-21T18:43:07.4539152Z [command]/usr/bin/git config --local > > --unset-all http.https://github.com/.extraheader > > 2021-03-21T18:43:07.5212999Z [command]/usr/bin/git submodule foreach > > --recursive git config --local --name-only --get-regexp > > 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local > > --unset-all 'http.https://github.com/.extraheader' || : > > 2021-03-21T18:43:07.5566225Z Cleaning up orphan processes > > > ``` > > > but looking at the runs on the PR > > https://github.com/scipy/scipy/runs/1953046830 > > we were doing: > > ``` > > python3.9 -u runtests.py -m fast > > ``` > > I'm less familiar with the -mypy flag to runtests.py, does anyone know if > > we need > > (1) to remove the mpmath parts of that PR opting instead for > > precompute/hard coded numbers or > > (2) this is something that will change/be removed in future > > > -- > > -L > > > > I can reproduce the problem locally, and I can "fix" it by configuring > mypy to ignore errors in that file. That is done by adding > > [mypy-scipy.special._precompute.cosine_cdf] > ignore_errors = True > > to mypy.ini. I created a pull request: > https://github.com/scipy/scipy/pull/13720. The tests are running as I > write this, but it worked locally. > > This is obviously just a work-around, but we have a lot of those in > mypy.ini. I'm not a mypy expert, so suggestions (and even better, > PRs) with a better fix are welcome. > > Warren > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From mhaberla at calpoly.edu Mon Mar 22 02:31:50 2021 From: mhaberla at calpoly.edu (Matt Haberland) Date: Sun, 21 Mar 2021 23:31:50 -0700 Subject: [SciPy-Dev] differential_entropy? quartile_coeff_dispersion? Message-ID: Two PRs I've reviewed recently are in reasonably good shape, but they should be considered by the community before merging. TL-DR: Differential entropy of a continuous distribution from a sample in gh-13631 ; closes gh-4080 . Quartile Coefficient of Dispersion in gh-13475 ; closes gh-13385 . More information in the postscript. Thanks for your thoughts! Matt --- *Differential Entropy from a Sample* @vnmabus submitted gh-13631 , which would add the function `differential_entropy`. This would close gh-4080 , which asks for a way of approximating the differential entropy of a continuous distribution from a sample. Currently, the function implements only the Vasicek estimator, but there would be followup PR to add a `method` parameter to choose between other estimation methods. My opinion: it is essentially ready to merge and it would be a useful addition. 
`scipy.stats.entropy` is only for discrete distributions, and differential entropy is not the continuous analog of discrete entropy, so I think it deserves its own function to avoid confusion. There is one question in the PR about the file in which the new function should live. *Quartile/Quantile Coefficient of Dispersion* @YarivLevy81 submitted gh-13475 , which would add the function `quartile_coeff_dispersion`. This would close gh-13385 which asks for a "robust variation" statistic. My opinion: We might want to change the name to `quantile_coeff_dispersion`, as it uses `np.quantile` and allows quantiles other than 0.25/0.5/0.75. If there is interest in the function returning additional information (e.g. confidence interval) in the future, we should consider having the function return some sort of object other than a scalar or array. -- Matt Haberland Assistant Professor BioResource and Agricultural Engineering 08A-3K, Cal Poly -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Mon Mar 22 04:55:51 2021 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Mon, 22 Mar 2021 09:55:51 +0100 Subject: [SciPy-Dev] Proposal to merge linalg.pinv and linalg.pinv2 and deprecate pinv2 In-Reply-To: References: Message-ID: Yes probably Lapack deprecations affected this one too. There might be a possible objection to this as pinv gets the minimal norm solution and pinv2 and pinvh gets the typical pseudo-inverse. Numerically in fact pinv is favorable to the classic definition but I think this can be merged into single pinv function. On Mon, Mar 22, 2021 at 12:18 AM Robert Kern wrote: > On Sun, Mar 21, 2021 at 6:53 PM Robert Kern wrote: > >> On Sun, Mar 21, 2021 at 6:00 PM Ralf Gommers >> wrote: >> >>> >>> Do you happen to know the history of how we ended up with pinv2? >>> >> >> I suspect that when `pinv2()` was added, the `lstsq()` call underlying >> `pinv()` was not SVD-based. The precise LAPACK driver has changed over the >> years. We might have started with the QR-based driver. >> > > It's going to be very hard to tell definitively because I think the > history got lost in the SVN->git conversion due to some directory renames > that happened in the early days. The pinv/pinv2 split seems to have been > very early, though, so it may have dated from the original Multipack > library (the source tarballs of which are also linkrotted away). > > -- > Robert Kern > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Mon Mar 22 04:57:46 2021 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Mon, 22 Mar 2021 09:57:46 +0100 Subject: [SciPy-Dev] SciPy default build settings switched to use Pythran In-Reply-To: References: Message-ID: Not a showstopper but did anyone else hit the issue given in https://github.com/scipy/scipy/issues/13717 ? We might need to massage the prerequirements documents on windows side if it is so. On Sun, Mar 14, 2021 at 12:43 PM Ralf Gommers wrote: > > > On Sun, Mar 14, 2021 at 4:12 AM Tyler Reddy > wrote: > >> Yes, Pythran is a build dep in master branch now >> > > I'd say it has been an optional build dependency for almost 3 months now, > and it still is optional (because you can opt out). Unless we find > showstoppers, the goal is to make it a required build dependency before the > next release. 
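Rough sketches of the two statistics proposed above (differential entropy from a sample, and the quartile coefficient of dispersion); these are simplified illustrations, not the PR implementations:

import numpy as np

def vasicek_entropy(sample, m=None):
    # Vasicek's spacing estimator of differential entropy (no tie handling).
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    if m is None:
        m = int(np.sqrt(n) + 0.5)  # a common window choice
    # extend the sample so X_{i-m} and X_{i+m} exist at the boundaries
    padded = np.concatenate((np.repeat(x[0], m), x, np.repeat(x[-1], m)))
    spacings = padded[2 * m:] - padded[:-2 * m]
    return np.mean(np.log(n / (2 * m) * spacings))

def quartile_coeff_dispersion(sample):
    # (Q3 - Q1) / (Q3 + Q1): a robust, scale-free measure of spread.
    q1, q3 = np.quantile(sample, [0.25, 0.75])
    return (q3 - q1) / (q3 + q1)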
> > Cheers, > Ralf > > > >> On Sat, 13 Mar 2021 at 16:03, Andrew Nelson wrote: >> >>> Does this mean Pythran is a build dependency now? >>> >>> On Sat, 13 Mar 2021, 21:04 Ralf >>> >>>> >>>> After a few months of having Pythran support in master as opt-in, and >>>> quite a bit of positive feedback from contributions trying out Pythran in >>>> PRs, we've now flipped the build setup to opt-out. >>>> >>>> If you have a problem, you can still work around it for now with >>>> `export SCIPY_USE_PYTHRAN=0` before building. If there is an unexpected >>>> issue, please do report it. >>>> >>>> Cheers, >>>> Ralf >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/scipy-dev >>>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Mon Mar 22 09:03:06 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Mon, 22 Mar 2021 14:03:06 +0100 Subject: [SciPy-Dev] pydata-sphinx-theme In-Reply-To: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> References: <59A8480E-2640-471E-BE8E-DE2FADC53A36@gmail.com> Message-ID: <4707F08B-48F0-4953-87E1-5AEEC731EE4B@gmail.com> Hi everyone, I started implementing this over here: https://github.com/scipy/scipy/pull/13724 There are still a few issues, but it looks nice! It?s quite a big change, so it would be great if we have more feedback. Cheers, Pamphile > On 21.03.2021, at 17:16, Pamphile Roy wrote: > > Hi everyone, > > I was wondering if there was a plan about moving the doc to using PyData?s theme? (I could not find any issues or ref in mail, sorry if I missed something). > IMO it?s good if we have a UX which is close to NumPy and the other libraries of the stack. Although for the landing page, I would propose to have fewer things like Pandas. > > I could propose to work on this, if this is something we want. > > Cheers, > Pamphile From stefanv at berkeley.edu Mon Mar 22 16:32:03 2021 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 22 Mar 2021 13:32:03 -0700 Subject: [SciPy-Dev] =?utf-8?q?Proposal_to_merge_linalg=2Epinv_and_linalg?= =?utf-8?q?=2Epinv2_and_deprecate_pinv2?= In-Reply-To: References: Message-ID: <13cc48ae-2606-4e03-809a-b97b65738a59@www.fastmail.com> On Sun, Mar 21, 2021, at 16:17, Robert Kern wrote: > On Sun, Mar 21, 2021 at 6:53 PM Robert Kern wrote: >> On Sun, Mar 21, 2021 at 6:00 PM Ralf Gommers wrote: >>> >>> Do you happen to know the history of how we ended up with pinv2? >> >> I suspect that when `pinv2()` was added, the `lstsq()` call underlying `pinv()` was not SVD-based. The precise LAPACK driver has changed over the years. We might have started with the QR-based driver. > > It's going to be very hard to tell definitively because I think the history got lost in the SVN->git conversion due to some directory renames that happened in the early days. 
The pinv/pinv2 split seems to have been very early, though, so it may have dated from the original Multipack library (the source tarballs of which are also linkrotted away). I think you're right, this was pre-SVN. I was looking at the following commit from https://github.com/scipy/scipy-svn commit c6ef539392f31bda0b56541a1c8fdd61a0c0e6eb (HEAD) Author: pearu Date: Sun Apr 7 15:03:50 2002 +0000 Replacing linalg with linalg2: linalg->linalg/linalg1 and linalg2->linalg There, the linalg/basic.py file is added, and inside it both pinv and pinv2 already exist: def pinv(a, cond=-1): """Compute generalized inverse of A using least-squares solver. """ a = asarray(a) t = a.typecode() b = scipy.identity(a.shape[0],t) return lstsq(a, b, cond=cond)[0] def pinv2(a, cond=-1): """Compute the generalized inverse of A using svd. """ a = asarray(a) u, s, vh = decomp.svd(a) m = u.shape[1] n = vh.shape[0] t = u.typecode() if cond is -1 or cond is None: cond = {0: feps*1e3, 1: eps*1e6}[_array_precision[t]] cutoff = cond*scipy_base.maximum.reduce(s) for i in range(min(n,m)): if s[i] > cutoff: s[i] = 1.0/s[i] else: s[i] = 0.0 return dot(tran(conj(vh)),tran(conj(u))*s[:,NewAxis]) I have not been able to find a copy of multipack-0.7.tar.gz St?fan -------------- next part -------------- An HTML attachment was scrubbed... URL: From christoph.baumgarten at gmail.com Wed Mar 24 14:49:54 2021 From: christoph.baumgarten at gmail.com (Christoph Baumgarten) Date: Wed, 24 Mar 2021 19:49:54 +0100 Subject: [SciPy-Dev] GSoC project UNU.RAN Message-ID: Hi all, the developers of the library UNU.RAN ( http://statmath.wu.ac.at/software/unuran/) have given their permission to integrate it into SciPy under a BSD licence.It would be a great addition to scipy.stats since UNU.RAN contains a lot of powerful tools for generating random variates. I already discussed the idea with some of the developers of scipy.stats and we agreed to propose this as a project idea for GSoC, see https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas Please let me know if you have any questions / comments. Thanks Christoph -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Wed Mar 24 15:31:26 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Wed, 24 Mar 2021 20:31:26 +0100 Subject: [SciPy-Dev] GSoC project UNU.RAN In-Reply-To: References: Message-ID: <89C523C1-8C58-425D-AE83-215D26C271D0@gmail.com> Hi, This is good news! By integrating, do you mean that you propose to write python wrappers on top of the whole C library? Because I can see that there are things like MCMC or copulas which we might not want here (we had discussions about it and let this for statsmodels). Thanks in advance for the precision. Cheers, Pamphile > On 24.03.2021, at 19:49, Christoph Baumgarten wrote: > > > Hi all, > > the developers of the library UNU.RAN (http://statmath.wu.ac.at/software/unuran/ ) have given their permission to integrate it into SciPy under a BSD licence.It would be a great addition to scipy.stats since UNU.RAN contains a lot of powerful tools for generating random variates. I already discussed the idea with some of the developers of scipy.stats and we agreed to propose this as a project idea for GSoC, see https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas > > Please let me know if you have any questions / comments. 
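One quick numerical aside on the pinv/pinv2 merge proposal earlier in this digest:
since both routines ultimately compute the Moore-Penrose pseudo-inverse (one via a
least-squares solve, one via an explicit SVD), they should agree to rounding error on
a generic full-rank rectangular matrix. A small sanity check, assuming a SciPy
version that still exposes both names:

import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
a = rng.standard_normal((7, 4))  # generic full-rank rectangular matrix

p1 = linalg.pinv(a)   # least-squares based
p2 = linalg.pinv2(a)  # SVD based

print(np.allclose(p1, p2))         # expected: True
print(np.allclose(a @ p1 @ a, a))  # Moore-Penrose condition A A+ A = A
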
> > Thanks > > Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikolay.mayorov at zoho.com Wed Mar 24 18:58:53 2021 From: nikolay.mayorov at zoho.com (Nikolay Mayorov) Date: Thu, 25 Mar 2021 03:58:53 +0500 Subject: [SciPy-Dev] GSoC'21 participation SciPy In-Reply-To: References: <6EB6E0CE-53E3-4ECC-806A-4B728728579A@gmail.com> <17827d6ac1c.b0deeb8838932.5136930551930802487@zoho.com> Message-ID: <178667650ce.ee2557c489059.5035677909948735908@zoho.com> Ralf, thanks for the feedback! Waiting for a good student to apply for this project :) Nikolay ---- On Sat, 20 Mar 2021 20:18:06 +0500 Ralf Gommers wrote ---- Hi Nikolay, Thanks for adding that! I agree there's a lot to improve there. It's a nice and detailed description, I don't have much to add. Gmail unhelpfully classified your email as spam, so I thought I'd reply on the list in case that happened for other people as well. Cheers, Ralf On Fri, Mar 12, 2021 at 8:08 PM Nikolay Mayorov wrote: Hi! I've added an idea about implementing object-oriented design of filtering in scipy.signal. It was discussed quite a lot in the past, I think it's a sane idea and scipy.signal definitely can be made more user friendly and convenient. This is only some preliminarily view on the project. Feel free to edit the text. So far I've put only myself as a possible mentor. Nikolay _______________________________________________ SciPy-Dev mailing list mailto:SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Wed Mar 24 22:13:19 2021 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Wed, 24 Mar 2021 20:13:19 -0600 Subject: [SciPy-Dev] ANN: SciPy 1.6.2 Message-ID: Hi all, On behalf of the SciPy development team I'm pleased to announce the release of SciPy 1.6.2, which is a bug fix release. Sources and binary wheels can be found at: https://pypi.org/project/scipy/ and at: https://github.com/scipy/scipy/releases/tag/v1.6.2 One of a few ways to install this release with pip: pip install scipy==1.6.2 ===================== SciPy 1.6.2 Release Notes ===================== SciPy 1.6.2 is a bug-fix release with no new features compared to 1.6.1. This is also the first SciPy release to place upper bounds on some dependencies to improve the long-term repeatability of source builds. Authors ====== * Pradipta Ghosh + * Tyler Reddy * Ralf Gommers * Martin K. Scherer + * Robert Uhl * Warren Weckesser A total of 6 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. Issues closed for 1.6.2 ------------------------------ * `#13512 `__: \`stats.gaussian_kde.evaluate\` broken on S390X * `#13584 `__: rotation._compute_euler_from_matrix() creates an array with negative... 
* `#13585 `__: Behavior change in coo_matrix when dtype=None * `#13686 `__: delta0 argument of scipy.odr.ODR() ignored Pull requests for 1.6.2 ------------------------------ * `#12862 `__: REL: put upper bounds on versions of dependencies * `#13575 `__: BUG: fix \`gaussian_kernel_estimate\` on S390X * `#13586 `__: BUG: sparse: Create a utility function \`getdata\` * `#13598 `__: MAINT, BUG: enforce contiguous layout for output array in Rotation.as_euler * `#13687 `__: BUG: fix scipy.odr to consider given delta0 argument Checksums ========= MD5 ~~~ fc81d43879a28270d593aaea37c74ff8 scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl 9213533bfd3c2f1563d169009c39825c scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl 2ddd03b89efdb1619fa995da7b83aa6f scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl d378f725958bd6a83db7ef23e8659762 scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl 87bc2771b8a8ab1f10168b1563300415 scipy-1.6.2-cp37-cp37m-win32.whl 861dab18fe41e82c08c8f585f2710545 scipy-1.6.2-cp37-cp37m-win_amd64.whl d2e2002b526adeebf94489aa95031f54 scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl 2dc36bfbe3938c492533604aba002c17 scipy-1.6.2-cp38-cp38-manylinux1_i686.whl 0114de2118d41f9440cf86fdd67434fc scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl ede6db56b1bf0a7fed0c75acac7dcb85 scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl 191636ac3276da0ee9fd263b47927b73 scipy-1.6.2-cp38-cp38-win32.whl 8bdf7ab041b9115b379f043bb02d905f scipy-1.6.2-cp38-cp38-win_amd64.whl 608c82b227b6077d9a7871ac6278e64d scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl 4c0313b2cccc85666b858ffd692a3c87 scipy-1.6.2-cp39-cp39-manylinux1_i686.whl 92da8ffe165034dbbe5f098d0ed58aec scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl b4b225fb1deeaaf0eda909fdd3bd6ca6 scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl 662969220eadbb6efec99030e4d00268 scipy-1.6.2-cp39-cp39-win32.whl f19186d6d91c7e37000e9f6ccd9b9b60 scipy-1.6.2-cp39-cp39-win_amd64.whl cbcb9b39bd9d877ad3deeccc7c37bb7f scipy-1.6.2.tar.gz b56e705c653ad808a9725dfe840d1258 scipy-1.6.2.tar.xz 6f615549670cd3d312dc9e4359d2436a scipy-1.6.2.zip SHA256 ~~~~~~ 77f7a057724545b7e097bfdca5c6006bed8580768cd6621bb1330aedf49afba5 scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl e547f84cd52343ac2d56df0ab08d3e9cc202338e7d09fafe286d6c069ddacb31 scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl bc52d4d70863141bb7e2f8fd4d98e41d77375606cde50af65f1243ce2d7853e8 scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl adf7cee8e5c92b05f2252af498f77c7214a2296d009fc5478fc432c2f8fb953b scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl e3e9742bad925c421d39e699daa8d396c57535582cba90017d17f926b61c1552 scipy-1.6.2-cp37-cp37m-win32.whl ffdfb09315896c6e9ac739bb6e13a19255b698c24e6b28314426fd40a1180822 scipy-1.6.2-cp37-cp37m-win_amd64.whl 6ca1058cb5bd45388041a7c3c11c4b2bd58867ac9db71db912501df77be2c4a4 scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl 993c86513272bc84c451349b10ee4376652ab21f312b0554fdee831d593b6c02 scipy-1.6.2-cp38-cp38-manylinux1_i686.whl 37f4c2fb904c0ba54163e03993ce3544c9c5cde104bcf90614f17d85bdfbb431 scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl 96620240b393d155097618bcd6935d7578e85959e55e3105490bbbf2f594c7ad scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl 03f1fd3574d544456325dae502facdf5c9f81cbfe12808a5e67a737613b7ba8c scipy-1.6.2-cp38-cp38-win32.whl 0c81ea1a95b4c9e0a8424cf9484b7b8fa7ef57169d7bcc0dfcfc23e3d7c81a12 scipy-1.6.2-cp38-cp38-win_amd64.whl c1d3f771c19af00e1a36f749bd0a0690cc64632783383bc68f77587358feb5a4 scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl 50e5bcd9d45262725e652611bb104ac0919fd25ecb78c22f5282afabd0b2e189 
scipy-1.6.2-cp39-cp39-manylinux1_i686.whl 816951e73d253a41fa2fd5f956f8e8d9ac94148a9a2039e7db56994520582bf2 scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl 1fba8a214c89b995e3721670e66f7053da82e7e5d0fe6b31d8e4b19922a9315e scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl e89091e6a8e211269e23f049473b2fde0c0e5ae0dd5bd276c3fc91b97da83480 scipy-1.6.2-cp39-cp39-win32.whl d744657c27c128e357de2f0fd532c09c84cd6e4933e8232895a872e67059ac37 scipy-1.6.2-cp39-cp39-win_amd64.whl e9da33e21c9bc1b92c20b5328adb13e5f193b924c9b969cd700c8908f315aa59 scipy-1.6.2.tar.gz 8fadc443044396283c48191d48e4e07a3c3b6e2ae320b1a56e76bb42929e84d2 scipy-1.6.2.tar.xz 2af283054d91865336b4579aa91f9e59d648d436cf561f96d4692008f795c750 scipy-1.6.2.zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From akhilram2001 at gmail.com Thu Mar 25 02:33:48 2021 From: akhilram2001 at gmail.com (SIVALASETTY AKHILRAM) Date: Thu, 25 Mar 2021 12:03:48 +0530 Subject: [SciPy-Dev] Introduction my-self(introduction request) Message-ID: Good afternoon respected community, I am Akhil Ram, I am new to this community. coming to i was currently pursuing b.tech 3rd year. I had good experience with python , c and some good experience in machine learning. To showcase my abilities in python and machine learning i had done some projects. But I was very new to open source contributions. I had gone through the ideas that were displayed under the idea section page. So, basically from my past experience I got to know what I can work on ("Improve performance through use of Pythran or Cython"). please help me to get started on my journey.please ignore grammatical mistakes. If I am wrong to introduce myself to a different mailing list please let me know. thanking you, Akhil Ram Sivalasetty akhilram2001 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Thu Mar 25 03:29:39 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Thu, 25 Mar 2021 08:29:39 +0100 Subject: [SciPy-Dev] Introduction my-self(introduction request) In-Reply-To: References: Message-ID: Hi Akhil, Welcome to SciPy. We are always looking for new motivated contributors. I believe you are referring to the GSoC21. In that case I would quote what Ralf said previously: We are happy to help, but do expect that you read the information we have put together on the ideas page ( https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas ) and try to understand what is needed. If you have concrete questions or an early draft of a proposal we can give you feedback. But we won't draft the proposal for you - there are lots of examples, from the PSF template to the guidance in https://google.github.io/gsocguides/student/writing-a-proposal.html Cheers, Pamphile > On 25.03.2021, at 07:33, SIVALASETTY AKHILRAM wrote: > > Good afternoon respected community, > I am Akhil Ram, I am new to this community. coming to i was currently pursuing b.tech 3rd year. I had good experience with python , c and some good experience in machine learning. > To showcase my abilities in python and machine learning i had done some projects. > > But I was very new to open source contributions. I had gone through the ideas that were displayed under the idea section page. So, basically from my past experience I got to know what I can work on ("Improve performance through use of Pythran or Cython"). > > please help me to get started on my journey.please ignore grammatical mistakes. 
If I am wrong to introduce myself to a different mailing list please let me know. > > thanking you, > Akhil Ram Sivalasetty > akhilram2001 at gmail.com > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From akhilram2001 at gmail.com Thu Mar 25 03:49:56 2021 From: akhilram2001 at gmail.com (SIVALASETTY AKHILRAM) Date: Thu, 25 Mar 2021 13:19:56 +0530 Subject: [SciPy-Dev] Introduction my-self(introduction request) In-Reply-To: References: Message-ID: Thank you for your response, But, I was confused, (try to understand what is needed. If you have concrete questions or an early draft of a proposal we can give you feedback. But we won't draft the proposal for you - there are lots of examples, from the PSF template to the guidance in) . Sir all the above mentioned points help me writing a proposal or way to understand the inside meaning of projects. Sorry if I raised wrong question. You sincerely, Akhil ram sivalasetty akhilram2001 at gmail.com On Thu, Mar 25, 2021, 1:00 PM Pamphile Roy wrote: > Hi Akhil, > > Welcome to SciPy. We are always looking for new motivated contributors. > > I believe you are referring to the GSoC21. In that case I would quote what > Ralf said previously: > > We are happy to help, but do expect that you > read the information we have put together on the ideas page (https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to > understand what is needed. If you have concrete questions or an early draft > of a proposal we can give you feedback. But we won't draft the proposal for > you - there are lots of examples, from the PSF template to the guidance inhttps://google.github.io/gsocguides/student/writing-a-proposal.html > > Cheers, > Pamphile > > > On 25.03.2021, at 07:33, SIVALASETTY AKHILRAM > wrote: > > Good afternoon respected community, > I am Akhil Ram, I am new to this community. coming to i was currently > pursuing b.tech 3rd year. I had good experience with python , c and some > good experience in machine learning. > To showcase my abilities in python and machine learning i had done some > projects. > > But I was very new to open source contributions. I had gone through the > ideas that were displayed under the idea section page. So, basically from > my past experience I got to know what I can work on ("Improve performance > through use of Pythran or Cython"). > > please help me to get started on my journey.please ignore > grammatical mistakes. If I am wrong to introduce myself to a different > mailing list please let me know. > > thanking you, > Akhil Ram Sivalasetty > akhilram2001 at gmail.com > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Thu Mar 25 09:19:10 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 25 Mar 2021 14:19:10 +0100 Subject: [SciPy-Dev] Introduction my-self(introduction request) In-Reply-To: References: Message-ID: Hi Akhil, On Thu, Mar 25, 2021 at 8:50 AM SIVALASETTY AKHILRAM wrote: > Thank you for your response, > > But, I was confused, (try to understand what is needed. If you have > concrete questions or an early draft of a proposal we can give you > feedback. But we won't draft the proposal for you - there are lots of > examples, from the PSF template to the guidance in) . > > Sir all the above mentioned points help me writing a proposal or way to > understand the inside meaning of projects. > Thank you for introducing yourself here, that's perfectly fine. The next step would be to carefully read the instructions at https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas. You will want to submit at least one PR to SciPy fixing an issue (doesn't matter which one), and start on a draft proposal. Once you have a draft proposal, or concrete questions about the topic after you have done some research by yourself, then we can give you feedback on that. Cheers, Ralf > Sorry if I raised wrong question. > > You sincerely, > Akhil ram sivalasetty > akhilram2001 at gmail.com > > > > > > > > On Thu, Mar 25, 2021, 1:00 PM Pamphile Roy wrote: > >> Hi Akhil, >> >> Welcome to SciPy. We are always looking for new motivated contributors. >> >> I believe you are referring to the GSoC21. In that case I would quote >> what Ralf said previously: >> >> We are happy to help, but do expect that you >> read the information we have put together on the ideas page (https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to >> understand what is needed. If you have concrete questions or an early draft >> of a proposal we can give you feedback. But we won't draft the proposal for >> you - there are lots of examples, from the PSF template to the guidance inhttps://google.github.io/gsocguides/student/writing-a-proposal.html >> >> Cheers, >> Pamphile >> >> >> On 25.03.2021, at 07:33, SIVALASETTY AKHILRAM >> wrote: >> >> Good afternoon respected community, >> I am Akhil Ram, I am new to this community. coming to i was currently >> pursuing b.tech 3rd year. I had good experience with python , c and >> some good experience in machine learning. >> To showcase my abilities in python and machine learning i had done some >> projects. >> >> But I was very new to open source contributions. I had gone through the >> ideas that were displayed under the idea section page. So, basically from >> my past experience I got to know what I can work on ("Improve >> performance through use of Pythran or Cython"). >> >> please help me to get started on my journey.please ignore >> grammatical mistakes. If I am wrong to introduce myself to a different >> mailing list please let me know. 
>> >> thanking you, >> Akhil Ram Sivalasetty >> akhilram2001 at gmail.com >> >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Thu Mar 25 09:31:38 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 25 Mar 2021 09:31:38 -0400 Subject: [SciPy-Dev] ANN: SciPy 1.6.2 In-Reply-To: References: Message-ID: On 3/24/21, Tyler Reddy wrote: > Hi all, > > On behalf of the SciPy development team I'm pleased to announce > the release of SciPy 1.6.2, which is a bug fix release. > Thanks for managing the release, Tyler! Warren > Sources and binary wheels can be found at: > https://pypi.org/project/scipy/ > and at: https://github.com/scipy/scipy/releases/tag/v1.6.2 > > > One of a few ways to install this release with pip: > > pip install scipy==1.6.2 > > ===================== > SciPy 1.6.2 Release Notes > ===================== > > SciPy 1.6.2 is a bug-fix release with no new features > compared to 1.6.1. This is also the first SciPy release > to place upper bounds on some dependencies to improve > the long-term repeatability of source builds. > > Authors > ====== > > * Pradipta Ghosh + > * Tyler Reddy > * Ralf Gommers > * Martin K. Scherer + > * Robert Uhl > * Warren Weckesser > > A total of 6 people contributed to this release. > People with a "+" by their names contributed a patch for the first time. > This list of names is automatically generated, and may not be fully > complete. > > Issues closed for 1.6.2 > ------------------------------ > > * `#13512 `__: > \`stats.gaussian_kde.evaluate\` broken on S390X > * `#13584 `__: > rotation._compute_euler_from_matrix() creates an array with negative... 
> * `#13585 `__: Behavior change > in coo_matrix when dtype=None > * `#13686 `__: delta0 argument > of scipy.odr.ODR() ignored > > Pull requests for 1.6.2 > ------------------------------ > > * `#12862 `__: REL: put upper > bounds on versions of dependencies > * `#13575 `__: BUG: fix > \`gaussian_kernel_estimate\` on S390X > * `#13586 `__: BUG: sparse: > Create a utility function \`getdata\` > * `#13598 `__: MAINT, BUG: > enforce contiguous layout for output array in Rotation.as_euler > * `#13687 `__: BUG: fix > scipy.odr to consider given delta0 argument > > Checksums > ========= > > MD5 > ~~~ > > fc81d43879a28270d593aaea37c74ff8 > scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl > 9213533bfd3c2f1563d169009c39825c > scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl > 2ddd03b89efdb1619fa995da7b83aa6f > scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl > d378f725958bd6a83db7ef23e8659762 > scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl > 87bc2771b8a8ab1f10168b1563300415 scipy-1.6.2-cp37-cp37m-win32.whl > 861dab18fe41e82c08c8f585f2710545 scipy-1.6.2-cp37-cp37m-win_amd64.whl > d2e2002b526adeebf94489aa95031f54 > scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl > 2dc36bfbe3938c492533604aba002c17 scipy-1.6.2-cp38-cp38-manylinux1_i686.whl > 0114de2118d41f9440cf86fdd67434fc > scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl > ede6db56b1bf0a7fed0c75acac7dcb85 > scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl > 191636ac3276da0ee9fd263b47927b73 scipy-1.6.2-cp38-cp38-win32.whl > 8bdf7ab041b9115b379f043bb02d905f scipy-1.6.2-cp38-cp38-win_amd64.whl > 608c82b227b6077d9a7871ac6278e64d > scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl > 4c0313b2cccc85666b858ffd692a3c87 scipy-1.6.2-cp39-cp39-manylinux1_i686.whl > 92da8ffe165034dbbe5f098d0ed58aec > scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl > b4b225fb1deeaaf0eda909fdd3bd6ca6 > scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl > 662969220eadbb6efec99030e4d00268 scipy-1.6.2-cp39-cp39-win32.whl > f19186d6d91c7e37000e9f6ccd9b9b60 scipy-1.6.2-cp39-cp39-win_amd64.whl > cbcb9b39bd9d877ad3deeccc7c37bb7f scipy-1.6.2.tar.gz > b56e705c653ad808a9725dfe840d1258 scipy-1.6.2.tar.xz > 6f615549670cd3d312dc9e4359d2436a scipy-1.6.2.zip > > SHA256 > ~~~~~~ > > 77f7a057724545b7e097bfdca5c6006bed8580768cd6621bb1330aedf49afba5 > scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl > e547f84cd52343ac2d56df0ab08d3e9cc202338e7d09fafe286d6c069ddacb31 > scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl > bc52d4d70863141bb7e2f8fd4d98e41d77375606cde50af65f1243ce2d7853e8 > scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl > adf7cee8e5c92b05f2252af498f77c7214a2296d009fc5478fc432c2f8fb953b > scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl > e3e9742bad925c421d39e699daa8d396c57535582cba90017d17f926b61c1552 > scipy-1.6.2-cp37-cp37m-win32.whl > ffdfb09315896c6e9ac739bb6e13a19255b698c24e6b28314426fd40a1180822 > scipy-1.6.2-cp37-cp37m-win_amd64.whl > 6ca1058cb5bd45388041a7c3c11c4b2bd58867ac9db71db912501df77be2c4a4 > scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl > 993c86513272bc84c451349b10ee4376652ab21f312b0554fdee831d593b6c02 > scipy-1.6.2-cp38-cp38-manylinux1_i686.whl > 37f4c2fb904c0ba54163e03993ce3544c9c5cde104bcf90614f17d85bdfbb431 > scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl > 96620240b393d155097618bcd6935d7578e85959e55e3105490bbbf2f594c7ad > scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl > 03f1fd3574d544456325dae502facdf5c9f81cbfe12808a5e67a737613b7ba8c > scipy-1.6.2-cp38-cp38-win32.whl > 0c81ea1a95b4c9e0a8424cf9484b7b8fa7ef57169d7bcc0dfcfc23e3d7c81a12 > scipy-1.6.2-cp38-cp38-win_amd64.whl > 
c1d3f771c19af00e1a36f749bd0a0690cc64632783383bc68f77587358feb5a4 > scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl > 50e5bcd9d45262725e652611bb104ac0919fd25ecb78c22f5282afabd0b2e189 > scipy-1.6.2-cp39-cp39-manylinux1_i686.whl > 816951e73d253a41fa2fd5f956f8e8d9ac94148a9a2039e7db56994520582bf2 > scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl > 1fba8a214c89b995e3721670e66f7053da82e7e5d0fe6b31d8e4b19922a9315e > scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl > e89091e6a8e211269e23f049473b2fde0c0e5ae0dd5bd276c3fc91b97da83480 > scipy-1.6.2-cp39-cp39-win32.whl > d744657c27c128e357de2f0fd532c09c84cd6e4933e8232895a872e67059ac37 > scipy-1.6.2-cp39-cp39-win_amd64.whl > e9da33e21c9bc1b92c20b5328adb13e5f193b924c9b969cd700c8908f315aa59 > scipy-1.6.2.tar.gz > 8fadc443044396283c48191d48e4e07a3c3b6e2ae320b1a56e76bb42929e84d2 > scipy-1.6.2.tar.xz > 2af283054d91865336b4579aa91f9e59d648d436cf561f96d4692008f795c750 > scipy-1.6.2.zip > From warren.weckesser at gmail.com Thu Mar 25 11:24:31 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 25 Mar 2021 11:24:31 -0400 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. Message-ID: Hey folks, A new enhancement for fitting probability distributions to censored data by maximum likelihood estimation is in progress in https://github.com/scipy/scipy/pull/13699 This is a feature in the statistics roadmap, and is part of the work for the CZI EOSS cycle 1 grant. Censored data is data where the true value of the measurement is unknown, but there is a known bound. The common terminology is: * left-censored: The true value is greater than some known bound. * right-censored: The true value is less than some known bound. * interval-censored: The true value is between known bounds. (By allowing the bounds to be +inf and -inf, we can think of all these cases as interval-censored.) In the PR, a new class, `CensoredData`, is defined to represent censored data. The `fit` method of the univariate continuous distributions is updated to accept an instance of `CensoredData`. The `CensoredData` class constructor accepts two arrays, `lower` and `upper`, to hold the known bounds of the data. (If `lower[i] == upper[i]`, it means that data value is not censored.) Because it is quite common for data to be encountered where the only censored values in the data set are either all right-censored or all left-censored, two convenience methods, `CensoredData.right_censored(x, censored)` and `CensoredData.left_censored(x, censored)`, are provide to create a `CensoredData` instance from an array of values and a corresponding boolean array that indicates if the value is censored. Here's a quick example, with data from https://www.itl.nist.gov/div898/handbook/apr/section4/apr413.htm. The data table shows 10 regular values and 10 right-censored values. 
The results reported there for fitting the two-parameter Weibull distribution (location fixed at 0) to that data are shape = 1.7208 scale = 606.5280 Here's the calculation using the proposed API in SciPy: In [55]: from scipy.stats import weibull_min, CensoredData Create the `CensoredData` instance: In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, 386] + [500]*10) In [57]: data = CensoredData.right_censored(x, x == 500) In [58]: print(data) CensoredData(20 values: 10 not censored, 10 right-censored) Fit `weibull_min` (with the location fixed at 0) to the censored data: In [59]: shape, loc, scale = weibull_min.fit(data, floc=0) In [60]: shape Out[60]: 1.720797180719942 In [61]: scale Out[61]: 606.527565269458 Matt Haberland has already suggested quite a few improvements to the PR. Additional comments would be appreciated. Warren From amulyabitspilani at gmail.com Thu Mar 25 13:52:12 2021 From: amulyabitspilani at gmail.com (Amulya Gupta) Date: Thu, 25 Mar 2021 23:22:12 +0530 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 In-Reply-To: References: Message-ID: Greetings Ralf, In the project idea mentions *Add type annotation of a submodule*. Can you please tell which submodule I will be adding the annotation. Regards AMULYA GUPTA On Sun, Mar 21, 2021 at 2:10 AM Ralf Gommers wrote: > > > On Sat, Mar 20, 2021 at 5:35 PM Amulya Gupta > wrote: > >> My intentions were not that. I just inquired that you need any pre-plan >> for the proposal. I have started my early proposal and will submit as soon >> as possible. >> > > That sounds good, thanks Amulya. > > Cheers, > Ralf > > > Regards >> AMULYA GUPTA >> >> On Sat, Mar 20, 2021 at 8:56 PM Ralf Gommers >> wrote: >> >>> Hi Amulya, >>> >>> >>> On Sat, Mar 20, 2021 at 4:09 PM Amulya Gupta >>> wrote: >>> >>>> Dear Sir/Ma'am, >>>> I am Amulya Gupta, a sophomore from BITS,Pilani. I have been coding >>>> for the past 3 years. I am profound in Python(Modules: nose test, poetry, >>>> numpy, matplotlib, cmake, pipenv, tkinter, turtle and currently learning >>>> mypy etc), C++, Mysql, Java script. I have experience in open source with >>>> an internship with an organisation called NumFocus. I want to openly >>>> contribute to the project "*Add type annotations to a submodule*" from >>>> SciPy. Please can I talk to the mentor about his expectations from >>>> the student applying. How many more students have applied for the same >>>> project and how does the mentor has pre-planned about the project and >>>> his inputs for the proposal buildup. >>>> >>> >>> I have replied to you already. We are happy to help, but do expect that >>> you read the information we have put together on the ideas page ( >>> https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try to >>> understand what is needed. If you have concrete questions or an early draft >>> of a proposal we can give you feedback. 
But we won't draft the proposal for >>> you - there are lots of examples, from the PSF template to the guidance in >>> https://google.github.io/gsocguides/student/writing-a-proposal.html >>> >>> Cheers, >>> Ralf >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amulyabitspilani at gmail.com Thu Mar 25 14:18:45 2021 From: amulyabitspilani at gmail.com (Amulya Gupta) Date: Thu, 25 Mar 2021 23:48:45 +0530 Subject: [SciPy-Dev] Regarding contribution in Scipy organization for GSoC 21 In-Reply-To: References: Message-ID: Please can you ignore the above mail Regards AMULYA On Thu, Mar 25, 2021 at 11:22 PM Amulya Gupta wrote: > Greetings Ralf, > In the project idea mentions *Add type annotation of a submodule*. Can > you please tell which submodule I will be adding the annotation. > Regards > AMULYA GUPTA > > On Sun, Mar 21, 2021 at 2:10 AM Ralf Gommers > wrote: > >> >> >> On Sat, Mar 20, 2021 at 5:35 PM Amulya Gupta >> wrote: >> >>> My intentions were not that. I just inquired that you need any pre-plan >>> for the proposal. I have started my early proposal and will submit as soon >>> as possible. >>> >> >> That sounds good, thanks Amulya. >> >> Cheers, >> Ralf >> >> >> Regards >>> AMULYA GUPTA >>> >>> On Sat, Mar 20, 2021 at 8:56 PM Ralf Gommers >>> wrote: >>> >>>> Hi Amulya, >>>> >>>> >>>> On Sat, Mar 20, 2021 at 4:09 PM Amulya Gupta < >>>> amulyabitspilani at gmail.com> wrote: >>>> >>>>> Dear Sir/Ma'am, >>>>> I am Amulya Gupta, a sophomore from BITS,Pilani. I have been coding >>>>> for the past 3 years. I am profound in Python(Modules: nose test, poetry, >>>>> numpy, matplotlib, cmake, pipenv, tkinter, turtle and currently learning >>>>> mypy etc), C++, Mysql, Java script. I have experience in open source with >>>>> an internship with an organisation called NumFocus. I want to openly >>>>> contribute to the project "*Add type annotations to a submodule*" >>>>> from SciPy. Please can I talk to the mentor about his expectations >>>>> from the student applying. How many more students have applied for the same >>>>> project and how does the mentor has pre-planned about the project and >>>>> his inputs for the proposal buildup. >>>>> >>>> >>>> I have replied to you already. We are happy to help, but do expect that >>>> you read the information we have put together on the ideas page ( >>>> https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas) and try >>>> to understand what is needed. If you have concrete questions or an early >>>> draft of a proposal we can give you feedback. 
But we won't draft the >>>> proposal for you - there are lots of examples, from the PSF template to the >>>> guidance in >>>> https://google.github.io/gsocguides/student/writing-a-proposal.html >>>> >>>> Cheers, >>>> Ralf >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/scipy-dev >>>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scipy at blackward.eu Thu Mar 25 19:17:07 2021 From: scipy at blackward.eu (scipy at blackward.eu) Date: Fri, 26 Mar 2021 00:17:07 +0100 Subject: [SciPy-Dev] scipy documentation Message-ID: Howdy Folks, I would like to suggest adding "Blythooon" (in contrast to "WinPython" this supports Python 2.7) to the list of "Scientific Python Distributions" listed here: https://www.scipy.org/install.html Blythooon can be found here: https://pypi.org/project/blythooon/ and the belonging installation step-by-step-guide video here: https://www.youtube.com/channel/UCOE8xqYS_2azFsFjBVEwVMg May I ask - how can I do that best? Thanks in advance and Best Regards Dominik From ralf.gommers at gmail.com Fri Mar 26 04:49:00 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 26 Mar 2021 09:49:00 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: References: Message-ID: On Fri, Mar 26, 2021 at 12:17 AM wrote: > Howdy Folks, > > > I would like to suggest adding "Blythooon" (in contrast to "WinPython" > this supports Python 2.7) to the list of "Scientific Python > Distributions" listed here: > > > https://www.scipy.org/install.html > > > > Blythooon can be found here: > > https://pypi.org/project/blythooon/ > > and the belonging installation step-by-step-guide video here: > > https://www.youtube.com/channel/UCOE8xqYS_2azFsFjBVEwVMg > > > > May I ask - how can I do that best? Thanks in advance and > Hi Dominik, thanks for the suggestion. I had a quick look and Blythooon has only had a single release, last month, and ships numpy 1.16.6 only. That does not look like something mature enough that we'd want to recommend it to the average SciPy user yet. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Mar 26 05:10:13 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 26 Mar 2021 10:10:13 +0100 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser wrote: > Hey folks, > > A new enhancement for fitting probability distributions to censored > data by maximum likelihood estimation is in progress in > > https://github.com/scipy/scipy/pull/13699 Thanks Warren. Overall this looks quite good to me. > > This is a feature in the statistics roadmap, and is part of the work > for the CZI EOSS cycle 1 grant. > > Censored data is data where the true value of the measurement is > unknown, but there is a known bound. The common terminology is: > > * left-censored: > The true value is greater than some known bound. > > * right-censored: > The true value is less than some known bound. 
> > * interval-censored: > The true value is between known bounds. > > (By allowing the bounds to be +inf and -inf, we can think of all these > cases as interval-censored.) > > In the PR, a new class, `CensoredData`, is defined to represent > censored data. The `fit` method of the univariate continuous > distributions is updated to accept an instance of `CensoredData`. The > `CensoredData` class constructor accepts two arrays, `lower` and > `upper`, to hold the known bounds of the data. (If `lower[i] == > upper[i]`, it means that data value is not censored.) > > Because it is quite common for data to be encountered where the only > censored values in the data set are either all right-censored or all > left-censored, two convenience methods, > `CensoredData.right_censored(x, censored)` and > `CensoredData.left_censored(x, censored)`, are provide to create a > `CensoredData` instance from an array of values and a corresponding > boolean array that indicates if the value is censored. > > Here's a quick example, with data from > https://www.itl.nist.gov/div898/handbook/apr/section4/apr413.htm. The > data table shows 10 regular values and 10 right-censored values. The > results reported there for fitting the two-parameter Weibull > distribution (location fixed at 0) to that data are > > shape = 1.7208 > scale = 606.5280 > > Here's the calculation using the proposed API in SciPy: > > In [55]: from scipy.stats import weibull_min, CensoredData > > Create the `CensoredData` instance: > > In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, > 386] + [500]*10) > > In [57]: data = CensoredData.right_censored(x, x == 500) > This `x == 500` looks a little odd API-wise. Why not `right_censored(x, 500)`. Or, more importantly, why not something like: data = CensoredData(x).rightcensor(500) There are of course multiple ways of doing this, but the use of classmethods and only taking arrays seems unusual. Also in the constructor, why do `lower` and `upper` have to be boolean arrays rather than scalars? Cheers, Ralf > > In [58]: print(data) > > CensoredData(20 values: 10 not censored, 10 right-censored) > > Fit `weibull_min` (with the location fixed at 0) to the censored data: > > In [59]: shape, loc, scale = weibull_min.fit(data, floc=0) > > In [60]: shape > > Out[60]: 1.720797180719942 > > In [61]: scale > > Out[61]: 606.527565269458 > > > Matt Haberland has already suggested quite a few improvements to the > PR. Additional comments would be appreciated. > > Warren > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scipy at blackward.eu Fri Mar 26 08:24:08 2021 From: scipy at blackward.eu (scipy at blackward.eu) Date: Fri, 26 Mar 2021 13:24:08 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: References: Message-ID: Hi Ralf, thank you very much for having a quick look into Blythooon and thanks for your fast and kind response! NumPy 1.16.6 is the very last version supporting Python 2.7.* - as far as I know. That is the reason why it is used by Blythooon. Blythooon packages the latest versions known to fit together around: Python 2.7.18 PyQtGraph 0.10.0 PySide 1.2.2 Although the current dogma might be "switch to Python 3.*", the belonging ecosystem is not suitable for production quality software yet, at least if you ask me. This has several reasons. 
The primary reason is the vast number of minor and major incompatibilities between the different subversions and related packages and the risk which comes with each updating due to that. Blythooon is thought for production (industry) quality, scientific applications with GUI. The purpose of Blythooon is not to provide the newest packages out there or an ecosystem which is permanently updating. The opposite is the case, it provides a "freezed" compilation. It is all about stability and compatibility. All Blythooon installations are 100% compatible. You e.g. can check out the combination PyQtGraph 0.11.* with PySide 1.2.* - you then will detect, that this combination has severe incompatibility issues. It is the same with PyQtGraph 0.10.0 and PysSide 1.2.4 and so on. (You can easily try that out using the test program bundled with blythooon - the plots will not plot at all or not plot properly in said cases) People using Blythooon do not need to resolve these issues on their own, they just get a working compilation - comprising a fitting SciPy and NumPy. And you might have noticed, that WinPython for example does not seem to support Python 2.7.* environments anymore. So, Blythooon fills a huge gap. By the way, as long as the current Blythooon version is working without errors, there will not be (the need for) another release. That might somehow be the nature of a freezed compilation... It is somehow a snapshot of time. But naturally I accept your decision not to promote it on the SciPy website yet. I just wanted to let you know the idea behind Blythooon... Keep on folks, I dearly enjoy SciPy, you are doing a great work! Best Regards Dominik On 2021-03-26 09:49, Ralf Gommers wrote: > On Fri, Mar 26, 2021 at 12:17 AM wrote: > >> Howdy Folks, >> >> I would like to suggest adding "Blythooon" (in contrast to >> "WinPython" >> this supports Python 2.7) to the list of "Scientific Python >> Distributions" listed here: >> >> https://www.scipy.org/install.html >> >> Blythooon can be found here: >> >> https://pypi.org/project/blythooon/ >> >> and the belonging installation step-by-step-guide video here: >> >> https://www.youtube.com/channel/UCOE8xqYS_2azFsFjBVEwVMg >> >> May I ask - how can I do that best? Thanks in advance and > > Hi Dominik, thanks for the suggestion. I had a quick look and > Blythooon has only had a single release, last month, and ships numpy > 1.16.6 only. That does not look like something mature enough that we'd > want to recommend it to the average SciPy user yet. > > Cheers, > > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From roy.pamphile at gmail.com Fri Mar 26 08:59:54 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Fri, 26 Mar 2021 13:59:54 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: References: Message-ID: Hi Dominik, Thanks for explaining the rationale behind Blythooon. I am afraid that Python 2 is and must not be promoted any more, in any way. Being out of support is really concerning, it means that it will not officially get security fixes, platform-specific fixes, etc. From now on, any new version of windows, or any other OS, could potentially break Python 2. Almost everyone has already migrated to Python 3, production code included. The only notable example I know is JPMorgan which is still transitioning. But even they expect the migration to be complete this year. 
In the scientific sphere (at least all the communities I know), everyone has or is moving away from Python 2. You can have a look at all surveys showing how Python 2 is dying. For instance, https://www.jetbrains.com/lp/python-developers-survey-2020/. In my opinion, we should call it a day for Python 2 once and for all. Cheers, Pamphile > On 26.03.2021, at 13:24, scipy at blackward.eu wrote: > > Hi Ralf, > > > thank you very much for having a quick look into Blythooon and thanks for your fast and kind response! > > NumPy 1.16.6 is the very last version supporting Python 2.7.* - as far as I know. That is the reason why it is used by Blythooon. > > Blythooon packages the latest versions known to fit together around: > > Python 2.7.18 > PyQtGraph 0.10.0 > PySide 1.2.2 > > Although the current dogma might be "switch to Python 3.*", the belonging ecosystem is not suitable for production quality software yet, at least if you ask me. This has several reasons. The primary reason is the vast number of minor and major incompatibilities between the different subversions and related packages and the risk which comes with each updating due to that. > > Blythooon is thought for production (industry) quality, scientific applications with GUI. The purpose of Blythooon is not to provide the newest packages out there or an ecosystem which is permanently updating. The opposite is the case, it provides a "freezed" compilation. It is all about stability and compatibility. All Blythooon installations are 100% compatible. > > You e.g. can check out the combination PyQtGraph 0.11.* with PySide 1.2.* - you then will detect, that this combination has severe incompatibility issues. It is the same with PyQtGraph 0.10.0 and PysSide 1.2.4 and so on. (You can easily try that out using the test program bundled with blythooon - the plots will not plot at all or not plot properly in said cases) > > People using Blythooon do not need to resolve these issues on their own, they just get a working compilation - comprising a fitting SciPy and NumPy. And you might have noticed, that WinPython for example does not seem to support Python 2.7.* environments anymore. So, Blythooon fills a huge gap. > > By the way, as long as the current Blythooon version is working without errors, there will not be (the need for) another release. That might somehow be the nature of a freezed compilation... It is somehow a snapshot of time. > > But naturally I accept your decision not to promote it on the SciPy website yet. I just wanted to let you know the idea behind Blythooon... > > Keep on folks, I dearly enjoy SciPy, you are doing a great work! > > > Best Regards > Dominik > > > > > > > > > > > > On 2021-03-26 09:49, Ralf Gommers wrote: >> On Fri, Mar 26, 2021 at 12:17 AM wrote: >>> Howdy Folks, >>> I would like to suggest adding "Blythooon" (in contrast to >>> "WinPython" >>> this supports Python 2.7) to the list of "Scientific Python >>> Distributions" listed here: >>> https://www.scipy.org/install.html >>> Blythooon can be found here: >>> https://pypi.org/project/blythooon/ >>> and the belonging installation step-by-step-guide video here: >>> https://www.youtube.com/channel/UCOE8xqYS_2azFsFjBVEwVMg >>> May I ask - how can I do that best? Thanks in advance and >> Hi Dominik, thanks for the suggestion. I had a quick look and >> Blythooon has only had a single release, last month, and ships numpy >> 1.16.6 only. That does not look like something mature enough that we'd >> want to recommend it to the average SciPy user yet. 
>> Cheers, >> Ralf >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From robert.kern at gmail.com Fri Mar 26 09:22:35 2021 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 26 Mar 2021 09:22:35 -0400 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On Fri, Mar 26, 2021 at 5:11 AM Ralf Gommers wrote: > > > On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> Hey folks, >> >> A new enhancement for fitting probability distributions to censored >> data by maximum likelihood estimation is in progress in >> >> https://github.com/scipy/scipy/pull/13699 > > > Thanks Warren. Overall this looks quite good to me. > > >> >> This is a feature in the statistics roadmap, and is part of the work >> for the CZI EOSS cycle 1 grant. >> >> Censored data is data where the true value of the measurement is >> unknown, but there is a known bound. The common terminology is: >> >> * left-censored: >> The true value is greater than some known bound. >> >> * right-censored: >> The true value is less than some known bound. >> > It's the other way around, isn't it? > * interval-censored: >> The true value is between known bounds. >> >> (By allowing the bounds to be +inf and -inf, we can think of all these >> cases as interval-censored.) >> >> In the PR, a new class, `CensoredData`, is defined to represent >> censored data. The `fit` method of the univariate continuous >> distributions is updated to accept an instance of `CensoredData`. The >> `CensoredData` class constructor accepts two arrays, `lower` and >> `upper`, to hold the known bounds of the data. (If `lower[i] == >> upper[i]`, it means that data value is not censored.) >> >> Because it is quite common for data to be encountered where the only >> censored values in the data set are either all right-censored or all >> left-censored, two convenience methods, >> `CensoredData.right_censored(x, censored)` and >> `CensoredData.left_censored(x, censored)`, are provide to create a >> `CensoredData` instance from an array of values and a corresponding >> boolean array that indicates if the value is censored. >> >> Here's a quick example, with data from >> >> https://www.itl.nist.gov/div898/handboodifferentlyk/apr/section4/apr413.htm >> . The >> data table shows 10 regular values and 10 right-censored values. The >> results reported there for fitting the two-parameter Weibull >> distribution (location fixed at 0) to that data are >> >> shape = 1.7208 >> scale = 606.5280 >> >> Here's the calculation using the proposed API in SciPy: >> >> In [55]: from scipy.stats import weibull_min, CensoredData >> >> Create the `CensoredData` instance: >> >> In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, >> 386] + [500]*10) >> >> In [57]: data = CensoredData.right_censored(x, x == 500) >> > > This `x == 500` looks a little odd API-wise. Why not `right_censored(x, > 500)`. Or, more importantly, why not something like: > > data = CensoredData(x).rightcensor(500) > > There are of course multiple ways of doing this, but the use of > classmethods and only taking arrays seems unusual. Also in the constructor, > why do `lower` and `upper` have to be boolean arrays rather than scalars? 
> The value in `x` itself is the bound. Each data point can be censored with a different bound. The censoring bound is not a shared property of the whole dataset. It just happened to be the case for this example (which may indicate that a more general motivating example should be used, at least while the API is under design). For example, if you are studying the duration of some condition in a longitudinal study. Individuals entered the study at different times, and now you have to close the books and write the paper, but some stubborn individuals still have the condition. The durations for those individuals would be right-censored, but with different durations because of their different entry points. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Fri Mar 26 12:56:16 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 26 Mar 2021 12:56:16 -0400 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On 3/26/21, Ralf Gommers wrote: > On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser > > wrote: > >> Hey folks, >> >> A new enhancement for fitting probability distributions to censored >> data by maximum likelihood estimation is in progress in >> >> https://github.com/scipy/scipy/pull/13699 > > > Thanks Warren. Overall this looks quite good to me. > Thanks Ralf! (More responses below...) > >> >> This is a feature in the statistics roadmap, and is part of the work >> for the CZI EOSS cycle 1 grant. >> >> Censored data is data where the true value of the measurement is >> unknown, but there is a known bound. The common terminology is: >> >> * left-censored: >> The true value is greater than some known bound. >> >> * right-censored: >> The true value is less than some known bound. >> >> * interval-censored: >> The true value is between known bounds. >> >> (By allowing the bounds to be +inf and -inf, we can think of all these >> cases as interval-censored.) >> >> In the PR, a new class, `CensoredData`, is defined to represent >> censored data. The `fit` method of the univariate continuous >> distributions is updated to accept an instance of `CensoredData`. The >> `CensoredData` class constructor accepts two arrays, `lower` and >> `upper`, to hold the known bounds of the data. (If `lower[i] == >> upper[i]`, it means that data value is not censored.) >> >> Because it is quite common for data to be encountered where the only >> censored values in the data set are either all right-censored or all >> left-censored, two convenience methods, >> `CensoredData.right_censored(x, censored)` and >> `CensoredData.left_censored(x, censored)`, are provide to create a >> `CensoredData` instance from an array of values and a corresponding >> boolean array that indicates if the value is censored. >> >> Here's a quick example, with data from >> https://www.itl.nist.gov/div898/handbook/apr/section4/apr413.htm. The >> data table shows 10 regular values and 10 right-censored values. 
The >> results reported there for fitting the two-parameter Weibull >> distribution (location fixed at 0) to that data are >> >> shape = 1.7208 >> scale = 606.5280 >> >> Here's the calculation using the proposed API in SciPy: >> >> In [55]: from scipy.stats import weibull_min, CensoredData >> >> Create the `CensoredData` instance: >> >> In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, >> 386] + [500]*10) >> >> In [57]: data = CensoredData.right_censored(x, x == 500) >> > > This `x == 500` looks a little odd API-wise. Why not `right_censored(x, > 500)`. Or, more importantly, why not something like: > > data = CensoredData(x).rightcensor(500) > The API is a work in progress. I didn't do something like that because, in general, the lower bound for right-censored data isn't necessarily the same for each censored value. (See, for example, the data shown in the second slide of http://www.ams.sunysb.edu/~zhu/ams588/Lecture_3_likelihood.pdf, and the example at https://support.sas.com/documentation/cdl/en/qcug/63922/HTML/default/viewer.htm#qcug_reliability_sect004.htm. Both of those data sets are used for unit tests in the PR.) However, the case of a single bound for left- or right-censored data is common, so it might be nice to have a convenient way to write it. Another possible enhancement to the CensoredData API is a `count` argument that gives the number of times the value is to be repeated, but I figured I would propose that in a follow-up PR. > There are of course multiple ways of doing this, but the use of > classmethods and only taking arrays seems unusual. Also in the constructor, > why do `lower` and `upper` have to be boolean arrays rather than scalars? (The constructor args don't have to be boolean, so I assume you meant "have to be 1D arrays".) I suppose accepting a scalar for one of them and using broadcasting would work. I did a lot of searching for examples to use as unit tests, and I don't recall any where *all* the values were censored, so I don't think such behavior would actually be useful. And I would worry that someone might misinterpret what using a scalar means. Warren > > Cheers, > Ralf > > > >> >> In [58]: print(data) >> >> CensoredData(20 values: 10 not censored, 10 right-censored) >> >> Fit `weibull_min` (with the location fixed at 0) to the censored data: >> >> In [59]: shape, loc, scale = weibull_min.fit(data, floc=0) >> >> In [60]: shape >> >> Out[60]: 1.720797180719942 >> >> In [61]: scale >> >> Out[61]: 606.527565269458 >> >> >> Matt Haberland has already suggested quite a few improvements to the >> PR. Additional comments would be appreciated. >> >> Warren >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > From warren.weckesser at gmail.com Fri Mar 26 13:01:53 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 26 Mar 2021 13:01:53 -0400 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On 3/26/21, Robert Kern wrote: > On Fri, Mar 26, 2021 at 5:11 AM Ralf Gommers > wrote: > >> >> >> On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser < >> warren.weckesser at gmail.com> wrote: >> >>> Hey folks, >>> >>> A new enhancement for fitting probability distributions to censored >>> data by maximum likelihood estimation is in progress in >>> >>> https://github.com/scipy/scipy/pull/13699 >> >> >> Thanks Warren. Overall this looks quite good to me. 
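As background on what the `fit` method has to maximize for such data: in the textbook censored-data likelihood, an uncensored observation contributes the log of the pdf, a right-censored one the log of the survival function at its bound, a left-censored one the log of the cdf, and an interval-censored one log(cdf(upper) - cdf(lower)). A rough sketch for the purely right-censored Weibull case (standard theory, not necessarily the exact code path used in the PR):

    import numpy as np
    from scipy import stats

    def censored_weibull_nll(params, exact, right_bounds):
        # Negative log-likelihood: uncensored values contribute logpdf,
        # right-censored values contribute logsf (the probability of
        # exceeding their bound). loc is held fixed at 0, as in the NIST
        # example being discussed.
        c, scale = params
        ll = stats.weibull_min.logpdf(exact, c, loc=0, scale=scale).sum()
        ll += stats.weibull_min.logsf(right_bounds, c, loc=0, scale=scale).sum()
        return -ll

    # Minimizing this over (c, scale), e.g. with scipy.optimize.minimize,
    # gives the same kind of estimate the proposed fit(data, floc=0) returns.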
>> >> >>> >>> This is a feature in the statistics roadmap, and is part of the work >>> for the CZI EOSS cycle 1 grant. >>> >>> Censored data is data where the true value of the measurement is >>> unknown, but there is a known bound. The common terminology is: >>> >>> * left-censored: >>> The true value is greater than some known bound. >>> >>> * right-censored: >>> The true value is less than some known bound. >>> >> > It's the other way around, isn't it? Argh, yes, you are correct. That's what I get for last-second editing before hitting the "send" button. It should be * left-censored: The true value is *less than* some known bound. * right-censored: The true value is *greater than* some known bound. Warren > > >> * interval-censored: >>> The true value is between known bounds. >>> >>> (By allowing the bounds to be +inf and -inf, we can think of all these >>> cases as interval-censored.) >>> >>> In the PR, a new class, `CensoredData`, is defined to represent >>> censored data. The `fit` method of the univariate continuous >>> distributions is updated to accept an instance of `CensoredData`. The >>> `CensoredData` class constructor accepts two arrays, `lower` and >>> `upper`, to hold the known bounds of the data. (If `lower[i] == >>> upper[i]`, it means that data value is not censored.) >>> >>> Because it is quite common for data to be encountered where the only >>> censored values in the data set are either all right-censored or all >>> left-censored, two convenience methods, >>> `CensoredData.right_censored(x, censored)` and >>> `CensoredData.left_censored(x, censored)`, are provide to create a >>> `CensoredData` instance from an array of values and a corresponding >>> boolean array that indicates if the value is censored. >>> >>> Here's a quick example, with data from >>> >>> https://www.itl.nist.gov/div898/handboodifferentlyk/apr/section4/apr413.htm >>> . The >>> data table shows 10 regular values and 10 right-censored values. The >>> results reported there for fitting the two-parameter Weibull >>> distribution (location fixed at 0) to that data are >>> >>> shape = 1.7208 >>> scale = 606.5280 >>> >>> Here's the calculation using the proposed API in SciPy: >>> >>> In [55]: from scipy.stats import weibull_min, CensoredData >>> >>> Create the `CensoredData` instance: >>> >>> In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, >>> 386] + [500]*10) >>> >>> In [57]: data = CensoredData.right_censored(x, x == 500) >>> >> >> This `x == 500` looks a little odd API-wise. Why not `right_censored(x, >> 500)`. Or, more importantly, why not something like: >> >> data = CensoredData(x).rightcensor(500) >> >> There are of course multiple ways of doing this, but the use of >> classmethods and only taking arrays seems unusual. Also in the >> constructor, >> why do `lower` and `upper` have to be boolean arrays rather than scalars? >> > > The value in `x` itself is the bound. Each data point can be censored with > a different bound. The censoring bound is not a shared property of the > whole dataset. It just happened to be the case for this example (which may > indicate that a more general motivating example should be used, at least > while the API is under design). For example, if you are studying the > duration of some condition in a longitudinal study. Individuals entered the > study at different times, and now you have to close the books and write the > paper, but some stubborn individuals still have the condition. 
The > durations for those individuals would be right-censored, but with different > durations because of their different entry points. > > -- > Robert Kern > From scipy at blackward.eu Fri Mar 26 13:55:28 2021 From: scipy at blackward.eu (scipy at blackward.eu) Date: Fri, 26 Mar 2021 18:55:28 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: References: Message-ID: <6485c93fc4fc5279a58620aaef2dfe5f@blackward.eu> Hi Pamphile, thanks for sharing your opinion. As I am new to this mailing list I am a bit unsure, whether discussing such a general and versatile topic via the mailing list in detail would be appropriate - but I at least want to be as polite as to answer you shortly. Freedom is constituted by the freedom to choose, if you ask me. Python has been a very free - e.g. multi paradigm - language from the start. Providing separated 'stable' (2.7.*) and 'experimental' (3.*) branches was a reflection of said spirit of freedom (and also a very clever thing). If you ask me, this spirit was a major reason for Python gaining popularity fast. I do not deem it clever to now try to FORCE people to switch to Python 3.* by taking away the possibility of installing 2.7.*. People do not like being infantilised, that could backfire; there are other nice languages out there... I e.g. am fluent in C# and WinForms as well - no problem to develop FRONTends with that instead in the future :) In this case SciPy naturally won't be part of it anymore. By the way, I am not in the least sure, that "almost everyone has already migrated" and "the only notable example" is JPMorgan. We also would have to discuss, what "notable" is, first. Furthermore Windows has proven to be very downward compatible over decades. Your arguments might fit well into the scientific linux sphere or in the area of BACKends. Blythooon is just an offer. No developer is forced to use it. Listing Blythooon is not the same as recommending Blythooon; it is possible to list something with a hint that you do not recommend the usage of Python 2.7.* based systems anymore. That would let the people the freedom of choice. But as I said, I am quite fine with the decision not to list Blythooon, it is your decision. Best Regards Dominik On 2021-03-26 13:59, Pamphile Roy wrote: > Hi Dominik, > > Thanks for explaining the rationale behind Blythooon. > > I am afraid that Python 2 is and must not be promoted any more, in any > way. > Being out of support is really concerning, it means that it will not > officially get security fixes, platform-specific fixes, etc. > From now on, any new version of windows, or any other OS, could > potentially break Python 2. > > Almost everyone has already migrated to Python 3, production code > included. The only notable example I know is JPMorgan which is still > transitioning. > But even they expect the migration to be complete this year. In the > scientific sphere (at least all the communities I know), everyone has > or is moving away from Python 2. > You can have a look at all surveys showing how Python 2 is dying. For > instance, https://www.jetbrains.com/lp/python-developers-survey-2020/. > > In my opinion, we should call it a day for Python 2 once and for all. > > Cheers, > Pamphile > > > >> On 26.03.2021, at 13:24, scipy at blackward.eu wrote: >> >> Hi Ralf, >> >> >> thank you very much for having a quick look into Blythooon and thanks >> for your fast and kind response! >> >> NumPy 1.16.6 is the very last version supporting Python 2.7.* - as far >> as I know. 
That is the reason why it is used by Blythooon. >> >> Blythooon packages the latest versions known to fit together around: >> >> Python 2.7.18 >> PyQtGraph 0.10.0 >> PySide 1.2.2 >> >> Although the current dogma might be "switch to Python 3.*", the >> belonging ecosystem is not suitable for production quality software >> yet, at least if you ask me. This has several reasons. The primary >> reason is the vast number of minor and major incompatibilities between >> the different subversions and related packages and the risk which >> comes with each updating due to that. >> >> Blythooon is thought for production (industry) quality, scientific >> applications with GUI. The purpose of Blythooon is not to provide the >> newest packages out there or an ecosystem which is permanently >> updating. The opposite is the case, it provides a "freezed" >> compilation. It is all about stability and compatibility. All >> Blythooon installations are 100% compatible. >> >> You e.g. can check out the combination PyQtGraph 0.11.* with PySide >> 1.2.* - you then will detect, that this combination has severe >> incompatibility issues. It is the same with PyQtGraph 0.10.0 and >> PysSide 1.2.4 and so on. (You can easily try that out using the test >> program bundled with blythooon - the plots will not plot at all or not >> plot properly in said cases) >> >> People using Blythooon do not need to resolve these issues on their >> own, they just get a working compilation - comprising a fitting SciPy >> and NumPy. And you might have noticed, that WinPython for example does >> not seem to support Python 2.7.* environments anymore. So, Blythooon >> fills a huge gap. >> >> By the way, as long as the current Blythooon version is working >> without errors, there will not be (the need for) another release. That >> might somehow be the nature of a freezed compilation... It is somehow >> a snapshot of time. >> >> But naturally I accept your decision not to promote it on the SciPy >> website yet. I just wanted to let you know the idea behind >> Blythooon... >> >> Keep on folks, I dearly enjoy SciPy, you are doing a great work! >> >> >> Best Regards >> Dominik >> >> >> >> >> >> >> >> >> >> >> >> On 2021-03-26 09:49, Ralf Gommers wrote: >>> On Fri, Mar 26, 2021 at 12:17 AM wrote: >>>> Howdy Folks, >>>> I would like to suggest adding "Blythooon" (in contrast to >>>> "WinPython" >>>> this supports Python 2.7) to the list of "Scientific Python >>>> Distributions" listed here: >>>> https://www.scipy.org/install.html >>>> Blythooon can be found here: >>>> https://pypi.org/project/blythooon/ >>>> and the belonging installation step-by-step-guide video here: >>>> https://www.youtube.com/channel/UCOE8xqYS_2azFsFjBVEwVMg >>>> May I ask - how can I do that best? Thanks in advance and >>> Hi Dominik, thanks for the suggestion. I had a quick look and >>> Blythooon has only had a single release, last month, and ships numpy >>> 1.16.6 only. That does not look like something mature enough that >>> we'd >>> want to recommend it to the average SciPy user yet. 
>>> Cheers, >>> Ralf >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From ralf.gommers at gmail.com Sat Mar 27 05:42:27 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 27 Mar 2021 10:42:27 +0100 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On Fri, Mar 26, 2021 at 5:56 PM Warren Weckesser wrote: > On 3/26/21, Ralf Gommers wrote: > > On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser > > > > wrote: > > > >> Hey folks, > >> > >> A new enhancement for fitting probability distributions to censored > >> data by maximum likelihood estimation is in progress in > >> > >> https://github.com/scipy/scipy/pull/13699 > > >> > >> Here's the calculation using the proposed API in SciPy: > >> > >> In [55]: from scipy.stats import weibull_min, CensoredData > >> > >> Create the `CensoredData` instance: > >> > >> In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, > >> 386] + [500]*10) > >> > >> In [57]: data = CensoredData.right_censored(x, x == 500) > >> > > > > This `x == 500` looks a little odd API-wise. Why not `right_censored(x, > > 500)`. Or, more importantly, why not something like: > > > > data = CensoredData(x).rightcensor(500) > > > > The API is a work in progress. I didn't do something like that > because, in general, the lower bound for right-censored data isn't > necessarily the same for each censored value. Thanks, after the explanation of Robert and the examples below it makes sense to me. I added some more comments on the PR. (See, for example, the > data shown in the second slide of > http://www.ams.sunysb.edu/~zhu/ams588/Lecture_3_likelihood.pdf, and > the example at > https://support.sas.com/documentation/cdl/en/qcug/63922/HTML/default/viewer.htm#qcug_reliability_sect004.htm > . > Both of those data sets are used for unit tests in the PR.) However, > the case of a single bound for left- or right-censored data is common, > so it might be nice to have a convenient way to write it. > > Another possible enhancement to the CensoredData API is a `count` > argument that gives the number of times the value is to be repeated, > but I figured I would propose that in a follow-up PR. > > > > There are of course multiple ways of doing this, but the use of > > classmethods and only taking arrays seems unusual. Also in the > constructor, > > why do `lower` and `upper` have to be boolean arrays rather than scalars? > > (The constructor args don't have to be boolean, so I assume you meant > "have to be 1D arrays".) I suppose accepting a scalar for one of > them and using broadcasting would work. I did a lot of searching for > examples to use as unit tests, and I don't recall any where *all* the > values were censored, so I don't think such behavior would actually be > useful. And I would worry that someone might misinterpret what using > a scalar means. > Yes, that makes sense. The asymmetry between the constructor and the `*_censored` classmethods is a concern though, it looks like you can't get from one to the other. 
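For concreteness, the two construction paths under discussion might relate roughly as follows — a sketch that assumes the semantics described in the original announcement (`lower`/`upper` arrays, equal entries meaning uncensored) and hypothetical keyword names, since the exact constructor signature is still being settled:

    import numpy as np
    from scipy.stats import CensoredData  # as proposed in gh-13699

    x = np.array([54., 187., 216., 500., 500.])
    censored = np.array([False, False, False, True, True])

    # Convenience classmethod: censored entries of `x` carry their own bound.
    d1 = CensoredData.right_censored(x, censored)

    # A roughly equivalent construction through the lower/upper constructor:
    # uncensored points have lower == upper, while right-censored points are
    # only known to exceed their bound, so upper is +inf.
    lower = x
    upper = np.where(censored, np.inf, x)
    d2 = CensoredData(lower=lower, upper=upper)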
The scalar in your example threw me off, I agree a scalar for `lower` or `upper` in the constructor is potentially confusing. Cheers, Ralf > Warren > > > > > > Cheers, > > Ralf > > > > > > > >> > >> In [58]: print(data) > >> > >> CensoredData(20 values: 10 not censored, 10 right-censored) > >> > >> Fit `weibull_min` (with the location fixed at 0) to the censored data: > >> > >> In [59]: shape, loc, scale = weibull_min.fit(data, floc=0) > >> > >> In [60]: shape > >> > >> Out[60]: 1.720797180719942 > >> > >> In [61]: scale > >> > >> Out[61]: 606.527565269458 > >> > >> > >> Matt Haberland has already suggested quite a few improvements to the > >> PR. Additional comments would be appreciated. > >> > >> Warren > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at python.org > >> https://mail.python.org/mailman/listinfo/scipy-dev > >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Mar 27 05:49:49 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 27 Mar 2021 10:49:49 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: <6485c93fc4fc5279a58620aaef2dfe5f@blackward.eu> References: <6485c93fc4fc5279a58620aaef2dfe5f@blackward.eu> Message-ID: On Fri, Mar 26, 2021 at 6:55 PM wrote: > Hi Pamphile, > > > thanks for sharing your opinion. > > As I am new to this mailing list I am a bit unsure, whether discussing > such a general and versatile topic via the mailing list in detail would > be appropriate - but I at least want to be as polite as to answer you > shortly. > It's fine, thank you for sharing your project with us. Blythooon is just an offer. No developer is forced to use it. Listing > Blythooon is not the same as recommending Blythooon; It kind of is. There's an almost infinite amount of packages on PyPI. And we have millions of users, very many of which are relatively new to Python or scientific computing. We aim to guide them to best practices and the most popular and high-quality packages, rather than list everything under the sun. And if someone doesn't know what they need to install to get started, it's *definitely* not Python 2.x. So irrespective of the quality of Blythooon, we will not list any Python 2.*-only installer. it is possible to > list something with a hint that you do not recommend the usage of Python > 2.7.* based systems anymore. That would let the people the freedom of > choice. > People do have the freedom to choose. All packages are discoverable, everyone is free to write their own website with information. Search engines will help surface your project to users who are looking for the exact thing you offer. > But as I said, I am quite fine with the decision not to list Blythooon, > it is your decision. > Thanks. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From scipy at blackward.eu Sat Mar 27 14:44:54 2021 From: scipy at blackward.eu (scipy at blackward.eu) Date: Sat, 27 Mar 2021 19:44:54 +0100 Subject: [SciPy-Dev] scipy documentation In-Reply-To: References: <6485c93fc4fc5279a58620aaef2dfe5f@blackward.eu> Message-ID: <39a0d6fbae2e6b3784d392133f125fe6@blackward.eu> Hi Ralf, you are welcome and thanks for listening! 
Have a good time and stay healthy - Cheers Dominik On 2021-03-27 10:49, Ralf Gommers wrote: > On Fri, Mar 26, 2021 at 6:55 PM wrote: > >> Hi Pamphile, >> >> thanks for sharing your opinion. >> >> As I am new to this mailing list I am a bit unsure, whether >> discussing >> such a general and versatile topic via the mailing list in detail >> would >> be appropriate - but I at least want to be as polite as to answer >> you >> shortly. > > It's fine, thank you for sharing your project with us. > >> Blythooon is just an offer. No developer is forced to use it. >> Listing >> Blythooon is not the same as recommending Blythooon; > > It kind of is. There's an almost infinite amount of packages on PyPI. > And we have millions of users, very many of which are relatively new > to Python or scientific computing. We aim to guide them to best > practices and the most popular and high-quality packages, rather than > list everything under the sun. And if someone doesn't know what they > need to install to get started, it's *definitely* not Python 2.x. So > irrespective of the quality of Blythooon, we will not list any Python > 2.*-only installer. > >> it is possible to >> list something with a hint that you do not recommend the usage of >> Python >> 2.7.* based systems anymore. That would let the people the freedom >> of >> choice. > > People do have the freedom to choose. All packages are discoverable, > everyone is free to write their own website with information. Search > engines will help surface your project to users who are looking for > the exact thing you offer. > >> But as I said, I am quite fine with the decision not to list >> Blythooon, >> it is your decision. > > Thanks. > > Cheers, > > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From christoph.baumgarten at gmail.com Sat Mar 27 16:09:12 2021 From: christoph.baumgarten at gmail.com (Christoph Baumgarten) Date: Sat, 27 Mar 2021 21:09:12 +0100 Subject: [SciPy-Dev] GSoC project UNU.RAN In-Reply-To: References: Message-ID: Hi, > Because I can see that there are things like MCMC or copulas which we might not want here (we had discussions about it and let this for statsmodels). not all functions in UNURAN need to be included in SciPy. The methods for generating univariate RVs might be a good starting point. Christoph schrieb am Do., 25. M?rz 2021, 03:15: > Send SciPy-Dev mailing list submissions to > scipy-dev at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.python.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at python.org > > You can reach the person managing the list at > scipy-dev-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. GSoC project UNU.RAN (Christoph Baumgarten) > 2. Re: GSoC project UNU.RAN (Pamphile Roy) > 3. Re: GSoC'21 participation SciPy (Nikolay Mayorov) > 4. 
ANN: SciPy 1.6.2 (Tyler Reddy) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 24 Mar 2021 19:49:54 +0100 > From: Christoph Baumgarten > To: scipy-dev at python.org > Subject: [SciPy-Dev] GSoC project UNU.RAN > Message-ID: > zx06A at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi all, > > the developers of the library UNU.RAN ( > http://statmath.wu.ac.at/software/unuran/) have given their permission to > integrate it into SciPy under a BSD licence.It would be a great addition to > scipy.stats since UNU.RAN contains a lot of powerful tools for generating > random variates. I already discussed the idea with some of the developers > of scipy.stats and we agreed to propose this as a project idea for GSoC, > see https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas > > Please let me know if you have any questions / comments. > > Thanks > > Christoph > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > https://mail.python.org/pipermail/scipy-dev/attachments/20210324/c39121ca/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Wed, 24 Mar 2021 20:31:26 +0100 > From: Pamphile Roy > To: SciPy Developers List > Subject: Re: [SciPy-Dev] GSoC project UNU.RAN > Message-ID: <89C523C1-8C58-425D-AE83-215D26C271D0 at gmail.com> > Content-Type: text/plain; charset="us-ascii" > > Hi, > > This is good news! > > By integrating, do you mean that you propose to write python wrappers on > top of the whole C library? > Because I can see that there are things like MCMC or copulas which we > might not want here (we had discussions about it and let this for > statsmodels). > > Thanks in advance for the precision. > > Cheers, > Pamphile > > > > On 24.03.2021, at 19:49, Christoph Baumgarten < > christoph.baumgarten at gmail.com> wrote: > > > > > > Hi all, > > > > the developers of the library UNU.RAN ( > http://statmath.wu.ac.at/software/unuran/ < > http://statmath.wu.ac.at/software/unuran/>) have given their permission > to integrate it into SciPy under a BSD licence.It would be a great addition > to scipy.stats since UNU.RAN contains a lot of powerful tools for > generating random variates. I already discussed the idea with some of the > developers of scipy.stats and we agreed to propose this as a project idea > for GSoC, see https://github.com/scipy/scipy/wiki/GSoC-2021-project-ideas > > > > > Please let me know if you have any questions / comments. > > > > Thanks > > > > Christoph > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > https://mail.python.org/pipermail/scipy-dev/attachments/20210324/63d8e7ef/attachment-0001.html > > > > ------------------------------ > > Message: 3 > Date: Thu, 25 Mar 2021 03:58:53 +0500 > From: Nikolay Mayorov > To: "SciPy Developers List" > Subject: Re: [SciPy-Dev] GSoC'21 participation SciPy > Message-ID: <178667650ce.ee2557c489059.5035677909948735908 at zoho.com> > Content-Type: text/plain; charset="utf-8" > > Ralf, thanks for the feedback! > > > > Waiting for a good student to apply for this project :) > > > > > > Nikolay > > > > > > > > ---- On Sat, 20 Mar 2021 20:18:06 +0500 Ralf Gommers < > ralf.gommers at gmail.com> wrote ---- > > > Hi Nikolay, > > > > Thanks for adding that! I agree there's a lot to improve there. 
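To give a flavour of what an object-oriented filtering design could mean in practice — purely an illustration, not something taken from the idea page — one possible reading is a stateful filter object that is designed once and then applied to streaming chunks, built on the existing functional API:

    import numpy as np
    from scipy import signal

    class StreamingSOSFilter:
        # Illustrative, hypothetical wrapper; not an existing or proposed SciPy class.
        def __init__(self, sos):
            self.sos = np.atleast_2d(sos)
            # sosfilt_zi gives initial conditions (steady state of the step
            # response); carrying zi between calls makes chunk-by-chunk
            # filtering equivalent to filtering the whole signal in one call
            # with the same initial conditions.
            self.zi = signal.sosfilt_zi(self.sos)

        def process(self, chunk):
            y, self.zi = signal.sosfilt(self.sos, chunk, zi=self.zi)
            return y

    sos = signal.butter(4, 0.125, output='sos')
    filt = StreamingSOSFilter(sos)
    chunks = np.split(np.random.default_rng(0).standard_normal(1024), 8)
    filtered = np.concatenate([filt.process(c) for c in chunks])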
> > > > It's a nice and detailed description, I don't have much to add. Gmail > unhelpfully classified your email as spam, so I thought I'd reply on the > list in case that happened for other people as well. > > > > Cheers, > > Ralf > > > > > On Fri, Mar 12, 2021 at 8:08 PM Nikolay Mayorov nikolay.mayorov at zoho.com> wrote: > > Hi! > > > > I've added an idea about implementing object-oriented design of filtering > in scipy.signal. It was discussed quite a lot in the past, I think it's a > sane idea and scipy.signal definitely can be made more user friendly and > convenient. > > > > This is only some preliminarily view on the project. Feel free to edit the > text. So far I've put only myself as a possible mentor. > > > > Nikolay > > > > > > > > > > _______________________________________________ > SciPy-Dev mailing list > mailto:SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > https://mail.python.org/pipermail/scipy-dev/attachments/20210325/bfec63d3/attachment-0001.html > > > > ------------------------------ > > Message: 4 > Date: Wed, 24 Mar 2021 20:13:19 -0600 > From: Tyler Reddy > To: SciPy Developers List , Discussion of > Numerical Python , > python-announce-list at python.org > Subject: [SciPy-Dev] ANN: SciPy 1.6.2 > Message-ID: > < > CAHPuU_bbFev0NzsrVefg5WEW2kTaUdBYJuAPRBSyQyOVoV1_zQ at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi all, > > On behalf of the SciPy development team I'm pleased to announce > the release of SciPy 1.6.2, which is a bug fix release. > > Sources and binary wheels can be found at: > https://pypi.org/project/scipy/ > and at: https://github.com/scipy/scipy/releases/tag/v1.6.2 > > > One of a few ways to install this release with pip: > > pip install scipy==1.6.2 > > ===================== > SciPy 1.6.2 Release Notes > ===================== > > SciPy 1.6.2 is a bug-fix release with no new features > compared to 1.6.1. This is also the first SciPy release > to place upper bounds on some dependencies to improve > the long-term repeatability of source builds. > > Authors > ====== > > * Pradipta Ghosh + > * Tyler Reddy > * Ralf Gommers > * Martin K. Scherer + > * Robert Uhl > * Warren Weckesser > > A total of 6 people contributed to this release. > People with a "+" by their names contributed a patch for the first time. > This list of names is automatically generated, and may not be fully > complete. > > Issues closed for 1.6.2 > ------------------------------ > > * `#13512 `__: > \`stats.gaussian_kde.evaluate\` broken on S390X > * `#13584 `__: > rotation._compute_euler_from_matrix() creates an array with negative... 
> * `#13585 `__: Behavior > change > in coo_matrix when dtype=None > * `#13686 `__: delta0 > argument > of scipy.odr.ODR() ignored > > Pull requests for 1.6.2 > ------------------------------ > > * `#12862 `__: REL: put upper > bounds on versions of dependencies > * `#13575 `__: BUG: fix > \`gaussian_kernel_estimate\` on S390X > * `#13586 `__: BUG: sparse: > Create a utility function \`getdata\` > * `#13598 `__: MAINT, BUG: > enforce contiguous layout for output array in Rotation.as_euler > * `#13687 `__: BUG: fix > scipy.odr to consider given delta0 argument > > Checksums > ========= > > MD5 > ~~~ > > fc81d43879a28270d593aaea37c74ff8 > scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl > 9213533bfd3c2f1563d169009c39825c > scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl > 2ddd03b89efdb1619fa995da7b83aa6f > scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl > d378f725958bd6a83db7ef23e8659762 > scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl > 87bc2771b8a8ab1f10168b1563300415 scipy-1.6.2-cp37-cp37m-win32.whl > 861dab18fe41e82c08c8f585f2710545 scipy-1.6.2-cp37-cp37m-win_amd64.whl > d2e2002b526adeebf94489aa95031f54 > scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl > 2dc36bfbe3938c492533604aba002c17 scipy-1.6.2-cp38-cp38-manylinux1_i686.whl > 0114de2118d41f9440cf86fdd67434fc > scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl > ede6db56b1bf0a7fed0c75acac7dcb85 > scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl > 191636ac3276da0ee9fd263b47927b73 scipy-1.6.2-cp38-cp38-win32.whl > 8bdf7ab041b9115b379f043bb02d905f scipy-1.6.2-cp38-cp38-win_amd64.whl > 608c82b227b6077d9a7871ac6278e64d > scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl > 4c0313b2cccc85666b858ffd692a3c87 scipy-1.6.2-cp39-cp39-manylinux1_i686.whl > 92da8ffe165034dbbe5f098d0ed58aec > scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl > b4b225fb1deeaaf0eda909fdd3bd6ca6 > scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl > 662969220eadbb6efec99030e4d00268 scipy-1.6.2-cp39-cp39-win32.whl > f19186d6d91c7e37000e9f6ccd9b9b60 scipy-1.6.2-cp39-cp39-win_amd64.whl > cbcb9b39bd9d877ad3deeccc7c37bb7f scipy-1.6.2.tar.gz > b56e705c653ad808a9725dfe840d1258 scipy-1.6.2.tar.xz > 6f615549670cd3d312dc9e4359d2436a scipy-1.6.2.zip > > SHA256 > ~~~~~~ > > 77f7a057724545b7e097bfdca5c6006bed8580768cd6621bb1330aedf49afba5 > scipy-1.6.2-cp37-cp37m-macosx_10_9_x86_64.whl > e547f84cd52343ac2d56df0ab08d3e9cc202338e7d09fafe286d6c069ddacb31 > scipy-1.6.2-cp37-cp37m-manylinux1_i686.whl > bc52d4d70863141bb7e2f8fd4d98e41d77375606cde50af65f1243ce2d7853e8 > scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl > adf7cee8e5c92b05f2252af498f77c7214a2296d009fc5478fc432c2f8fb953b > scipy-1.6.2-cp37-cp37m-manylinux2014_aarch64.whl > e3e9742bad925c421d39e699daa8d396c57535582cba90017d17f926b61c1552 > scipy-1.6.2-cp37-cp37m-win32.whl > ffdfb09315896c6e9ac739bb6e13a19255b698c24e6b28314426fd40a1180822 > scipy-1.6.2-cp37-cp37m-win_amd64.whl > 6ca1058cb5bd45388041a7c3c11c4b2bd58867ac9db71db912501df77be2c4a4 > scipy-1.6.2-cp38-cp38-macosx_10_9_x86_64.whl > 993c86513272bc84c451349b10ee4376652ab21f312b0554fdee831d593b6c02 > scipy-1.6.2-cp38-cp38-manylinux1_i686.whl > 37f4c2fb904c0ba54163e03993ce3544c9c5cde104bcf90614f17d85bdfbb431 > scipy-1.6.2-cp38-cp38-manylinux1_x86_64.whl > 96620240b393d155097618bcd6935d7578e85959e55e3105490bbbf2f594c7ad > scipy-1.6.2-cp38-cp38-manylinux2014_aarch64.whl > 03f1fd3574d544456325dae502facdf5c9f81cbfe12808a5e67a737613b7ba8c > scipy-1.6.2-cp38-cp38-win32.whl > 0c81ea1a95b4c9e0a8424cf9484b7b8fa7ef57169d7bcc0dfcfc23e3d7c81a12 > scipy-1.6.2-cp38-cp38-win_amd64.whl > 
c1d3f771c19af00e1a36f749bd0a0690cc64632783383bc68f77587358feb5a4 > scipy-1.6.2-cp39-cp39-macosx_10_9_x86_64.whl > 50e5bcd9d45262725e652611bb104ac0919fd25ecb78c22f5282afabd0b2e189 > scipy-1.6.2-cp39-cp39-manylinux1_i686.whl > 816951e73d253a41fa2fd5f956f8e8d9ac94148a9a2039e7db56994520582bf2 > scipy-1.6.2-cp39-cp39-manylinux1_x86_64.whl > 1fba8a214c89b995e3721670e66f7053da82e7e5d0fe6b31d8e4b19922a9315e > scipy-1.6.2-cp39-cp39-manylinux2014_aarch64.whl > e89091e6a8e211269e23f049473b2fde0c0e5ae0dd5bd276c3fc91b97da83480 > scipy-1.6.2-cp39-cp39-win32.whl > d744657c27c128e357de2f0fd532c09c84cd6e4933e8232895a872e67059ac37 > scipy-1.6.2-cp39-cp39-win_amd64.whl > e9da33e21c9bc1b92c20b5328adb13e5f193b924c9b969cd700c8908f315aa59 > scipy-1.6.2.tar.gz > 8fadc443044396283c48191d48e4e07a3c3b6e2ae320b1a56e76bb42929e84d2 > scipy-1.6.2.tar.xz > 2af283054d91865336b4579aa91f9e59d648d436cf561f96d4692008f795c750 > scipy-1.6.2.zip > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > https://mail.python.org/pipermail/scipy-dev/attachments/20210324/71d8be5c/attachment.html > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 209, Issue 25 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sat Mar 27 16:55:03 2021 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 27 Mar 2021 16:55:03 -0400 Subject: [SciPy-Dev] New feature: fitting distributions to censored data. In-Reply-To: References: Message-ID: On 3/27/21, Ralf Gommers wrote: > On Fri, Mar 26, 2021 at 5:56 PM Warren Weckesser > > wrote: > >> On 3/26/21, Ralf Gommers wrote: >> > On Thu, Mar 25, 2021 at 4:24 PM Warren Weckesser >> > >> > wrote: >> > >> >> Hey folks, >> >> >> >> A new enhancement for fitting probability distributions to censored >> >> data by maximum likelihood estimation is in progress in >> >> >> >> https://github.com/scipy/scipy/pull/13699 >> >> >> >> >> Here's the calculation using the proposed API in SciPy: >> >> >> >> In [55]: from scipy.stats import weibull_min, CensoredData >> >> >> >> Create the `CensoredData` instance: >> >> >> >> In [56]: x = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, >> >> 386] + [500]*10) >> >> >> >> In [57]: data = CensoredData.right_censored(x, x == 500) >> >> >> > >> > This `x == 500` looks a little odd API-wise. Why not `right_censored(x, >> > 500)`. Or, more importantly, why not something like: >> > >> > data = CensoredData(x).rightcensor(500) >> > >> >> The API is a work in progress. I didn't do something like that >> because, in general, the lower bound for right-censored data isn't >> necessarily the same for each censored value. > > > Thanks, after the explanation of Robert and the examples below it makes > sense to me. I added some more comments on the PR. > > (See, for example, the >> data shown in the second slide of >> http://www.ams.sunysb.edu/~zhu/ams588/Lecture_3_likelihood.pdf, and >> the example at >> https://support.sas.com/documentation/cdl/en/qcug/63922/HTML/default/viewer.htm#qcug_reliability_sect004.htm >> . >> Both of those data sets are used for unit tests in the PR.) 
However, >> the case of a single bound for left- or right-censored data is common, >> so it might be nice to have a convenient way to write it. >> >> Another possible enhancement to the CensoredData API is a `count` >> argument that gives the number of times the value is to be repeated, >> but I figured I would propose that in a follow-up PR. >> >> >> > There are of course multiple ways of doing this, but the use of >> > classmethods and only taking arrays seems unusual. Also in the >> constructor, >> > why do `lower` and `upper` have to be boolean arrays rather than >> > scalars? >> >> (The constructor args don't have to be boolean, so I assume you meant >> "have to be 1D arrays".) I suppose accepting a scalar for one of >> them and using broadcasting would work. I did a lot of searching for >> examples to use as unit tests, and I don't recall any where *all* the >> values were censored, so I don't think such behavior would actually be >> useful. And I would worry that someone might misinterpret what using >> a scalar means. >> > > Yes, that makes sense. The asymmetry between the constructor and the > `*_censored` classmethods is a concern though, it looks like you can't get > from one to the other. The scalar in your example threw me off, I agree a > scalar for `lower` or `upper` in the constructor is potentially confusing. > > Cheers, > Ralf > After several comments and questions from Ralf (and having had earlier questions from Matt about the API), I think it makes sense to have an issue just for the API design for censored data. Here it is: https://github.com/scipy/scipy/issues/13757 Comments, critiques, etc. are welcome, especially from anyone who has used censored data in anger [*]. Warren [*] http://onlineslangdictionary.com/meaning-definition-of/use-in-anger > > >> Warren >> >> >> > >> > Cheers, >> > Ralf >> > >> > >> > >> >> >> >> In [58]: print(data) >> >> >> >> CensoredData(20 values: 10 not censored, 10 right-censored) >> >> >> >> Fit `weibull_min` (with the location fixed at 0) to the censored data: >> >> >> >> In [59]: shape, loc, scale = weibull_min.fit(data, floc=0) >> >> >> >> In [60]: shape >> >> >> >> Out[60]: 1.720797180719942 >> >> >> >> In [61]: scale >> >> >> >> Out[61]: 606.527565269458 >> >> >> >> >> >> Matt Haberland has already suggested quite a few improvements to the >> >> PR. Additional comments would be appreciated. >> >> >> >> Warren >> >> _______________________________________________ >> >> SciPy-Dev mailing list >> >> SciPy-Dev at python.org >> >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> >> > >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > From charlesr.harris at gmail.com Sat Mar 27 19:45:18 2021 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 27 Mar 2021 17:45:18 -0600 Subject: [SciPy-Dev] NumPy 1.20.2 released. Message-ID: Charles R Harris Sun, Feb 7, 2:23 PM to numpy-discussion, SciPy, SciPy-User, bcc: python-announce-list Hi All, On behalf of the NumPy team I am pleased to announce the release of NumPy 1.20.2. NumPy 1,20.2 is a bugfix release containing several fixes merged to the main branch after the NumPy 1.20.1 release. The Python versions supported for this release are 3.7-3.9. Wheels can be downloaded from PyPI ; source archives, release notes, and wheel hashes are available on Github . Linux users will need pip >= 0.19.3 in order to install manylinux2010 and manylinux2014 wheels. 
*Contributors* A total of 7 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Allan Haldane - Bas van Beek - Charles Harris - Christoph Gohlke - Mateusz Sok?? + - Michael Lamparski - Sebastian Berg *Pull requests merged* A total of 20 pull requests were merged for this release. - #18382: MAINT: Update f2py from master. - #18459: BUG: ``diagflat`` could overflow on windows or 32-bit platforms - #18460: BUG: Fix refcount leak in f2py ``complex_double_from_pyobj``. - #18461: BUG: Fix tiny memory leaks when ``like=`` overrides are used - #18462: BUG: Remove temporary change of descr/flags in VOID functions - #18469: BUG: Segfault in nditer buffer dealloc for Object arrays - #18485: BUG: Remove suspicious type casting - #18486: BUG: remove nonsensical comparison of pointer < 0 - #18487: BUG: verify pointer against NULL before using it - #18488: BUG: check if PyArray_malloc succeeded - #18546: BUG: incorrect error fallthrough in nditer - #18559: CI: Backport CI fixes from main. - #18599: MAINT: Add annotations for `dtype.__getitem__`, `__mul__` and... - #18611: BUG: NameError in numpy.distutils.fcompiler.compaq - #18612: BUG: Fixed ``where`` keyword for ``np.mean`` & ``np.var`` methods - #18617: CI: Update apt package list before Python install - #18636: MAINT: Ensure that re-exported sub-modules are properly annotated - #18638: BUG: Fix ma coercion list-of-ma-arrays if they do not cast to... - #18661: BUG: Fix small valgrind-found issues - #18671: BUG: Fix small issues found with pytest-leaks Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Mon Mar 29 18:30:57 2021 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 30 Mar 2021 09:30:57 +1100 Subject: [SciPy-Dev] Github actions feedback Message-ID: I'm meeting with the github actions team tomorrow for a feedback session. I was going to mention our desire to test on aarch64. Does anyone else have feedback on what we'd like to see implemented in CI, Github actions or elsewhere? Andrew. -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Mar 30 04:18:55 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 30 Mar 2021 10:18:55 +0200 Subject: [SciPy-Dev] Github actions feedback In-Reply-To: References: Message-ID: On Tue, Mar 30, 2021 at 12:31 AM Andrew Nelson wrote: > I'm meeting with the github actions team tomorrow for a feedback session. > I was going to mention our desire to test on aarch64. > Does anyone else have feedback on what we'd like to see implemented in CI, > Github actions or elsewhere? > For Actions, making the Azure integration more streamlined so it doesn't take so many clicks to get to the actual failure. For the GitHub CLI (`gh`), making it possible to filter by PR author. For GitHub PRs, making it possible for PRs to depend on each other. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From roy.pamphile at gmail.com Tue Mar 30 04:50:41 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Tue, 30 Mar 2021 10:50:41 +0200 Subject: [SciPy-Dev] Github actions feedback In-Reply-To: References: Message-ID: > On 30.03.2021, at 00:30, Andrew Nelson wrote: > > I'm meeting with the github actions team tomorrow for a feedback session. 
I was going to mention our desire to test on aarch64. > Does anyone else have feedback on what we'd like to see implemented in CI, Github actions or elsewhere? Great! On top of Ralf's comments (+1000 for Azure CI). Sorry if this is off-topic, listing my wish list: I would like to have a separation between the discussion, git-related events, and GitHub events (like adding a tag). Currently everything gets piled up and it creates super long threads. Having a review tab would be better. Add more filtering options for file changes. I am missing a file move/rename filter. Would be great to link comments to the code itself and not a commit hash. Or another way. This way we could rebase, force-push, etc. Discussions should somehow count as a contribution in GitHub stats. Some discussions are even more valuable than actual commits. Be able to merge master or rebase from the UI. Annoying to ask contributors to do so. Interactively playing with the git history of a PR would be nice too. On Git Tower or Sublime Merge, you can select some commits in the history and concatenate them. It would be easier to clean up before merging. For CI, have tools to compare what changed between two runs. Something like a diff of logs/configs/files/env. Some stop button in the UI for CI actions. When iterating over a doc PR for instance. Using skip CI in the commit message is not so nice as it stays in the history. I should stop thinking now ;) Thanks! Cheers, Pamphile -------------- next part -------------- An HTML attachment was scrubbed... URL:
From xingyuliu at g.harvard.edu Tue Mar 30 08:53:30 2021 From: xingyuliu at g.harvard.edu (Xingyu Liu) Date: Tue, 30 Mar 2021 12:53:30 +0000 Subject: [SciPy-Dev] Adding Z-test to scipy.stats Message-ID:
Hi everyone: I'm Xingyu Liu, a first-year Data Science master's student at Harvard University, and I'm thinking of adding a Z-test to scipy.stats. The Z-test is one of the most basic types of hypothesis test and is covered in almost all statistics textbooks (e.g. Statistics by David Freedman, et al.). However, currently, the Z-test is not implemented in scipy (see this feature request issue: https://github.com/scipy/scipy/issues/13662 ). Its principles are quite similar to the t-test, except that it uses the population variance rather than the sample variance for the calculation, which means that the population variance should be known in advance. In application, a Z-test is used when the sample size is large (n>50) or the population variance is known; a t-test is used when the sample size is small (n<50) and the population variance is unknown (https://en.wikipedia.org/wiki/Z-test). There are mainly three types of Z-test, and the coding part can be quite similar to the t-test:
1. One-Sample Z-Test: Does the mean of the sample differ from the expected mean?
2. Two Independent Sample Z-Test: Do the means of two independent samples differ?
3. Paired-Sample Z-Test: Do the means of the same sample differ before and after?
Do you think it would be helpful to do this enhancement? If it is, I can work on it :) Cheers, Xingyu -------------- next part -------------- An HTML attachment was scrubbed... URL:
From roy.pamphile at gmail.com Wed Mar 31 06:36:02 2021 From: roy.pamphile at gmail.com (Pamphile Roy) Date: Wed, 31 Mar 2021 12:36:02 +0200 Subject: [SciPy-Dev] Move some validations as asserts?
In-Reply-To: References: <6380A893-34C5-45BF-B257-BC630D166B79@gmail.com> Message-ID: Hi everyone, FYI, I opened an issue to keep track of this discussion: https://github.com/scipy/scipy/issues/13761 Cheers, Pamphile > On 06.03.2021, at 21:28, Ralf Gommers wrote: > > > > On Wed, Mar 3, 2021 at 4:43 PM Pamphile Roy > wrote: > >> > >> > A simple approach would be to systematically separate the logic in two: core function on one side, user interface on the other. This way the user could by-pass all the time the validation. >> > Mock the validations. After some quick tests, it looks like the overhead is too big. I could be wrong. >> >> I'm not sure what this means exactly. > > I just did some quick tests like the following. (I just wanted to see how the API could look like.) > > import time > from unittest.mock import patch > import numpy as np > from scipy.spatial import distance > > coords = np.random.random((100, 2)) > weights = [1, 2] > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, metric='sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > > with patch('scipy.spatial.distance._validate_vector') as mock_requests: > mock_requests.side_effect = lambda x, *args, **kwargs: np.asarray(x) > > itime = time.time() > for _ in range(50): > distance.cdist(coords, coords, 'sqeuclidean', w=weights) > > print(f"Time: {time.time() - itime}") > >> > Using assert for all validation. >> >> One obvious issue is that `python -OO` will remove all asserts, and that's not what we want (we've had lots of issues with popular web deployment tools running with -OO by default). Hence the rule has always been "no plain asserts". >> >> Also, error messages for plain asserts are bad usually. > > Oh ok then it?s off the table. > >> >> The first option sounds best to me. >> >> Probably - if we do something, that's likely the best direction. The question though is how to provide a sensible UX for it, that's nontrivial. Splitting the whole API is a lot of work and new API surface. There may be alternatives, such as new keywords or fancier approaches (e.g., cython has a uniform way of turning off bounds checking and negative indexing with `cython.boundscheck`, `cython.wraparound`). > > What I had in mind was more in line with what Evgini explains. We would not change the public API, just within the functions, separate the validation from the actual work. > > That seems fine to me, and easy to do in most cases. We typically need that pattern anyway when we move code to Cython/Pythran. > > > The decorator thing in Cython would probably be what I had in mind with the mocking. If we would manage to do something like this fast, this could be a great solution IMO. > > This one would not be fast I'm afraid. > > Cheers, > Ralf > > > > Cheers, > Pamphile > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabrielfranksimonetto at gmail.com Wed Mar 31 20:41:20 2021 From: gabrielfranksimonetto at gmail.com (Gabriel Simonetto) Date: Wed, 31 Mar 2021 21:41:20 -0300 Subject: [SciPy-Dev] GSoC proposal draft: Add type hints on scipy.special Message-ID: Hi everyone! 
I finished making a rough draft of what I would like to do for this year's gsoc, could someone give me some tips on how to improve it? In particular I would like to know if this is a good module for the project or if it would be better allocated elsewhere, and how could I make my timeline a bit sharper. Here is the link for adding comments: https://docs.google.com/document/d/1d3NkbQC9rBcoKkuOmsx95wYDO_OMA5m1PBSu_oiwQVw/edit?usp=sharing Thanks! Gabriel Simonetto -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.craig.wilson at gmail.com Wed Mar 31 23:15:13 2021 From: josh.craig.wilson at gmail.com (Joshua Wilson) Date: Wed, 31 Mar 2021 20:15:13 -0700 Subject: [SciPy-Dev] GSoC proposal draft: Add type hints on scipy.special In-Reply-To: References: Message-ID: Hey Gabriel, One thing to consider is that a large chunk of special is ufuncs, and better ufunc typing is going to need PRs like https://github.com/numpy/numpy/pull/18417 which aren't yet in a released version of NumPy. So you'll want to make sure the timing works out there. You mention .pyf files in the doc, they are an interesting case because ideally we'd be able to auto-generate stubs for them. I even have a languishing branch somewhere that has a start on doing that... there are a few complications because the objects exported by the pyf extension module are actually instances of one class, so you'd need to either fudge the typing a bit or use Generic a lot. I'd recommend thinking through how to handle the above complications and discussing that in the doc. - Josh (also the person142 mentioned in the doc) On Wed, Mar 31, 2021 at 5:42 PM Gabriel Simonetto wrote: > > Hi everyone! > > I finished making a rough draft of what I would like to do for this year's gsoc, could someone give me some tips on how to improve it? > > In particular I would like to know if this is a good module for the project or if it would be better allocated elsewhere, and how could I make my timeline a bit sharper. > > Here is the link for adding comments: https://docs.google.com/document/d/1d3NkbQC9rBcoKkuOmsx95wYDO_OMA5m1PBSu_oiwQVw/edit?usp=sharing > > Thanks! > Gabriel Simonetto > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev
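One way to picture the .pyf complication described above — the names exported by an f2py-generated extension module are all instances of the same `fortran` object type, so a plain `def` stub does not quite reflect reality — is a callback-protocol style stub along these lines (hypothetical module and routine names, illustrative only; a Generic-based scheme, as mentioned, is another option):

    # _hypothetical_pyf_module.pyi -- illustrative sketch only
    from typing import Protocol
    import numpy as np
    import numpy.typing as npt

    class _SomePyfRoutine(Protocol):
        # Describe the call signature of the exported object rather than
        # pretending it is an ordinary Python function.
        def __call__(self, x: npt.ArrayLike, n: int = ...) -> np.ndarray: ...

    # The module-level attribute is then declared as an instance of the protocol.
    some_routine: _SomePyfRoutine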