From charlesr.harris at gmail.com Sun Sep 1 12:54:53 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 1 Sep 2013 10:54:53 -0600
Subject: [SciPy-Dev] ANN: Numpy 1.8.0 beta 1 release
Message-ID:

Hi all,

I'm happy to announce the first beta release of Numpy 1.8.0. Please try
this beta and report any issues on the numpy-dev mailing list.

Source tarballs and release notes can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.8.0b1/. The Windows
and OS X installers will follow when the infrastructure issues are dealt
with.

Chuck

From yangofzeal at gmail.com Sun Sep 1 22:57:18 2013
From: yangofzeal at gmail.com (Michael Yang)
Date: Sun, 1 Sep 2013 22:57:18 -0400
Subject: [SciPy-Dev] contributing to scipy.spatial.distance
Message-ID:

Hi - I'm new to this development list, so please excuse me if I'm not doing
things right (like top-posting or bottom-posting, etc.).

I would like to contribute an additional vector distance function to the
absolutely wonderful scipy.spatial.distance package written by Damian Eads.
I have a patch for scipy/spatial/distance.py written for scipy version
0.7.1 a while back.

Could someone please advise me on how to incorporate the patch? I don't
know if the repo is on github or somewhere else. Also (not to be
self-aggrandizing!), would it be possible to add my name somewhere to this
file if the patch makes it in successfully?

Thanks,
Michael Yang

From jakevdp at cs.washington.edu Mon Sep 2 10:39:08 2013
From: jakevdp at cs.washington.edu (Jacob Vanderplas)
Date: Mon, 2 Sep 2013 07:39:08 -0700
Subject: [SciPy-Dev] contributing to scipy.spatial.distance
In-Reply-To: References: Message-ID:

Hi Michael,
Scipy is indeed on Github, and can be found at [1]. Contributions happen
through Pull Requests -- if you're not familiar with that system, there are
tutorials available through Github. Before opening a pull request, it would
be good to have a look through the documentation on contributing [2].
Thanks!
Jake

[1] https://github.com/scipy/scipy
[2] http://docs.scipy.org/doc/scipy/reference/hacking.html
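As a side note for anyone prototyping such a contribution: scipy.spatial.distance
already accepts an arbitrary Python callable as a metric, so a new distance can
be tried out in pure Python before being ported to C. A minimal sketch, where
the metric is a made-up placeholder rather than the one in the patch:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def my_distance(u, v):
    # Placeholder vector distance, purely for illustration.
    return np.abs(u - v).sum() / (np.abs(u) + np.abs(v)).sum()

X = np.random.rand(5, 3)                      # 5 observations, 3 features
D = squareform(pdist(X, metric=my_distance))  # 5x5 symmetric distance matrix
```

Callables run much slower than the built-in C metrics, which is exactly why
patches like this one are worth merging.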
From cournape at gmail.com Mon Sep 2 11:30:23 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 2 Sep 2013 16:30:23 +0100
Subject: [SciPy-Dev] [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To: References: Message-ID:

On Fri, Aug 30, 2013 at 7:16 PM, David Cournapeau wrote:

> It looks like it broke the build with MKL as well (in, surprise, ARPACK).
> I will investigate this further this weekend.

Ok, I think the commit 5935030f8cced33e433804a21bdb15572d1d38e8 is quite
wrong. It conflates the issue of dealing with Accelerate brokenness with
that of using the g77 ABI. I would suggest reverting it for 0.13.0 (and
re-disabling single precision), as fixing this correctly may require quite
some time/testing.

David

> On Thu, Aug 22, 2013 at 2:12 PM, Ralf Gommers wrote:
>
>> Hi all,
>>
>> I'm happy to announce the availability of the first beta release of Scipy
>> 0.13.0. Please try this beta and report any issues on the scipy-dev
>> mailing list.
>>
>> Source tarballs and release notes can be found at
>> https://sourceforge.net/projects/scipy/files/scipy/0.13.0b1/. Windows
>> and OS X installers will follow later (we have a minor infrastructure
>> issue to solve, and I'm at EuroScipy now).
>>
>> Cheers,
>> Ralf

From cournape at gmail.com Mon Sep 2 14:12:00 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 2 Sep 2013 19:12:00 +0100
Subject: [SciPy-Dev] [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To: References: Message-ID:

On Mon, Sep 2, 2013 at 5:46 PM, Pauli Virtanen wrote:

> Hi,
>
> 02.09.2013 18:30, David Cournapeau wrote:
> [clip]
>
> I'm -1 on returning to the previous situation where many routines on
> OSX are simply broken (which was the situation previously ---
> "disabling single precision" left several things still broken). I'd
> rather just postpone the 0.13.0 release until this issue is solved
> properly.

I see, I missed that it was more than just reverting to slower versions.

> Can you say what exactly is wrong -- as far as I know, Accelerate on
> OSX simply uses g77 ABI. There were some bugs previously where it in
> places did things differently, but on recent OSX releases this is no
> longer the case?
>
> I guess you are trying to link with MKL? In that case, I would rather
> propose restoring the previous MKL wrappers (with trivial extensions
> for the missing w* symbols).

The problem is specific to MKL, yes, but I've found a workaround: in the
case of MKL, you just need to work around the g77 ABI of MKL vs. the
gfortran ABI, but the LAPACK interface is the actual LAPACK, not CLAPACK.

So for MKL, you need non-dummy wrappers if and only if the function returns
a complex.
A dirty patch seems to confirm that this fixes the issue; I will prepare an
actual patch tomorrow.

David

From pav at iki.fi Mon Sep 2 14:31:54 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 02 Sep 2013 21:31:54 +0300
Subject: [SciPy-Dev] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To: References: Message-ID:

Hi,

02.09.2013 21:12, David Cournapeau wrote:
[clip]

Thanks, that sounds like an OK solution. I don't see a problem having
separate wrappers for MKL; it's sort of a maintenance burden, but this
probably can't be helped as the ABIs don't sound like they are compatible.

The new ABI wrappers were obviously not tested against MKL, so we have to
do this now...

Best,
Pauli

From cournape at gmail.com Tue Sep 3 09:38:33 2013
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 3 Sep 2013 14:38:33 +0100
Subject: [SciPy-Dev] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To: References: Message-ID:

On Mon, Sep 2, 2013 at 7:31 PM, Pauli Virtanen wrote:
[clip]

See https://github.com/scipy/scipy/pull/2824

I decided to split our wrappers into the G77 ABI-specific part (returning
complex) and the CBLAS/CLAPACK hacks specific to Accelerate. Not sure the
complexity is worth it compared to simply having three cases (MKL on OS X,
Accelerate, rest), but it may help integration with other libraries.
David

From josef.pktd at gmail.com Tue Sep 3 13:09:03 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 3 Sep 2013 13:09:03 -0400
Subject: [SciPy-Dev] fortran docstrings
Message-ID:

Playing with parsing the fortran documentation in special's cdflib:
https://gist.github.com/josef-pkt/6426618

Some fortran files don't work. Example output:

#################
cumpoi(s, xlam, cum, ccum)

CUMulative POIsson distribution

Returns the probability of S or fewer events in a Poisson distribution
with mean XLAM.

Parameters
----------
s : array_like
    Upper limit of cumulation of the Poisson.
xlam : array_like
    Mean of the Poisson distribution.

Returns
-------
cum : array_like
    Cumulative Poisson distribution.
ccum : array_like
    Complement of the cumulative Poisson distribution.

Notes
-----
Uses formula 26.4.21 of Abramowitz and Stegun, Handbook of Mathematical
Functions, to reduce the cumulative Poisson to the cumulative chi-square
distribution.

#################
cumt(t, df, cum, ccum)

CUMulative T-distribution

Computes the integral from -infinity to T of the t-density.

Parameters
----------
t : array_like
    Upper limit of integration of the t-density.
df : array_like
    Degrees of freedom of the t-distribution.

Returns
-------
cum : array_like
    Cumulative t-distribution.
ccum : array_like
    Complement of the cumulative t-distribution.

Notes
-----
Formula 26.5.27 of Abramowitz and Stegun, Handbook of Mathematical
Functions, is used to reduce the t-distribution to an incomplete beta.

#################
cumtnc(t, df, pnonc, cum, ccum)

CUMulative Non-Central T-distribution

Computes the integral from -infinity to T of the non-central t-density.

Parameters
----------
t : array_like
    Upper limit of integration of the non-central t-density.
df : array_like
    Degrees of freedom of the non-central t-distribution.
pnonc : array_like
    Non-centrality parameter of the non-central t distribution.

Returns
-------
cum : array_like
    Cumulative t-distribution.
ccum : array_like
    Complement of the cumulative t-distribution.

Notes
-----
Upper tail of the cumulative noncentral t is computed using formulae from
page 532 of Johnson, Kotz, Balakrishnan, Continuous Univariate
Distributions, Vol 2, 2nd Edition, Wiley (1995).

This implementation starts the calculation at i = lambda, which is near
the largest Di. It then sums forward and backward.

From josef.pktd at gmail.com Tue Sep 3 14:56:38 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 3 Sep 2013 14:56:38 -0400
Subject: [SciPy-Dev] matrix_rank scipy version ?
Message-ID:

A random question while catching up with some linalg (matrix decompositions
and least squares with reduced rank):

scipy has had rank-revealing (pivoted) QR for a few versions now. Would it
be worth it to add a scipy version of matrix_rank that uses QR instead of
SVD as in numpy (IIRC)?

What's the cutoff for a QR version?
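To make the question concrete, a minimal sketch of what a QR-based rank could
look like. The tolerance below simply mirrors the shape of the SVD-based
default in numpy.linalg.matrix_rank and is an assumption, not a
recommendation; that cutoff is exactly the open question:

```python
import numpy as np
from scipy.linalg import qr

def qr_matrix_rank(A, tol=None):
    # With column pivoting, |diag(R)| is non-increasing, so the rank is
    # the number of diagonal entries of R above a tolerance.
    R = qr(A, mode='r', pivoting=True)[0]
    d = np.abs(np.diag(R))
    if tol is None:
        # Assumed cutoff, by analogy with the SVD version: the largest
        # singular value is replaced by the largest |R[i, i]|.
        tol = d.max() * max(A.shape) * np.finfo(A.dtype).eps
    return int((d > tol).sum())

A = np.array([[1., 2., 3.], [2., 4., 6.], [1., 0., 1.]])
print(qr_matrix_rank(A))  # 2: the second row is twice the first
```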
Josef

From msyang at alumni.princeton.edu Wed Sep 4 01:21:49 2013
From: msyang at alumni.princeton.edu (Michael Yang)
Date: Wed, 4 Sep 2013 01:21:49 -0400
Subject: [SciPy-Dev] contributing to scipy.spatial.distance
In-Reply-To: References: Message-ID:

Thanks Jacob for the helpful info and links!

On Mon, Sep 2, 2013 at 10:39 AM, Jacob Vanderplas wrote:
[clip]

From boyfarrell at gmail.com Wed Sep 4 12:14:44 2013
From: boyfarrell at gmail.com (boyfarrell at gmail.com)
Date: Thu, 5 Sep 2013 01:14:44 +0900
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
Message-ID: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>

Dear list,

I think it would be interesting to start a discussion regarding the
direction of integrate.ode. Reading the list, there seem to be many people
interested in seeing improvements. Is there a roadmap for development? If
not, can we come to a consensus about how best to contribute to this part
of scipy? I think it is fair to say that other projects have overtaken
scipy in terms of integrating new solvers and general development in this
area, which is a shame.

Personally, speaking as a user, I would like to see the use of more modern
solvers. For example, the code could be updated to use CVODE, which has
many improvements/fixes over the Fortran VODE solver that is currently
used. This would be a nice place to start, though clearly it is still a
nontrivial task.

Also, is there a way that developers could be encouraged to contribute
additional solvers? For example, by making an elegant low-level API that
makes this easier. Moreover, say that in order for a developer to connect a
new solver to scipy, all that had to be done was implement a particular,
well-defined interface for their wrapper. Once implemented, the new solver
could be called with the existing integrate.ode interface. Has anybody got
experience with such a design? Surely something like this has been
attempted?
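To make the idea concrete, the wrapper contract might look something like the
sketch below. Every name here is invented for illustration; nothing like it
exists in scipy today:

```python
class OdeSolverBackend(object):
    """Hypothetical contract that each wrapped solver would implement."""

    name = None  # e.g. 'cvode', 'lsoda', 'dopri5'

    def set_options(self, **options):
        """Accept solver-specific options (tolerances, max steps, ...)."""
        raise NotImplementedError

    def init_step(self, rhs, t0, y0):
        """Prepare the solver for dy/dt = rhs(t, y) with y(t0) = y0."""
        raise NotImplementedError

    def step(self, t):
        """Advance the solution to time t; return (t_reached, y)."""
        raise NotImplementedError
```

A contributed solver would then only have to subclass this and register
itself to become available through the high-level interface.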
The GNU Scientific Library interface springs to mind as a good example,
http://www.gnu.org/software/gsl/manual/html_node/Ordinary-Differential-Equations.html

Finally, I have read some discussion here regarding changing the API and/or
unifying odeint and ode. I don't want to get too distracted with this, as my
main point is outlined above. Personally the current API doesn't bother me
too much; I find both easy to use. But if this is being considered for an
update, I think the crucial thing is for the interface to have no reliance
on any particular solver. Abstraction is key.

Happy to know your thoughts, and if we can make a push together for some
improvements in this area then all the better.

Best regards,

Dan

From juanlu001 at gmail.com Wed Sep 4 19:01:47 2013
From: juanlu001 at gmail.com (Juan Luis Cano)
Date: Thu, 05 Sep 2013 01:01:47 +0200
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
Message-ID: <5227BBDB.1020106@gmail.com>

On 09/04/2013 06:14 PM, boyfarrell at gmail.com wrote:
[clip]

Regarding the last part, I created a PR last week with some changes to the
odeint function, but I didn't aim to unify odeint and ode or do a big
rework of the API.
I introduced a couple of convenience changes: the odeint objective function
has a different signature than ode's (one is (y, t) and the other is
(t, y)), plus some minor (but backwards-incompatible) changes, done to
solve some annoying issues that odeint has had. The thing is that even
these little changes require a proper deprecation cycle so as not to break
everyone's code too soon, so a major uplift of ode/odeint must be
undertaken very carefully.

The good thing about odeint is that it's a simple function to get things
done when you just want to quickly integrate some equation. You just need a
lambda, an initial condition and a vector of time values. Having to
configure a solver with its options is not very straightforward, especially
for interactive work, as Gaël Varoquaux had already described in a previous
discussion some years ago:

http://osdir.com/ml/python-scientific-devel/2009-02/msg00042.html

(this is true for everything else IMHO, also for building splines and stuff
like that).

Perhaps the most important thing is that someone with interest in this
patiently iterates on the idea, seeks consensus on the mailing list, writes
a proposal somewhere if needed, thinks about the proper deprecation process
with the maintainers, writes the code and, well, takes responsibility for
it until the very end. I guess it can take some months.

Juan Luis Cano

From benny.malengier at gmail.com Thu Sep 5 05:10:26 2013
From: benny.malengier at gmail.com (Benny Malengier)
Date: Thu, 5 Sep 2013 11:10:26 +0200
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <5227BBDB.1020106@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com>
Message-ID:

I commented my ideas in the pull request here:
https://github.com/scipy/scipy/issues/2818#issuecomment-23646270

Main point: I think the ode class should be redone to drop all the
vode-specific stuff and keep to the minimum, using a MATLAB-like
set_options for everything else, with the current odeint then implemented
as a simple wrapper around ode.

Benny
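Sketched in code, that proposal might look something like the following.
This is illustrative pseudocode: the ode class shown is the imagined
reworked one, and every name and signature is invented:

```python
import numpy as np

# (Not runnable as-is: `ode` here is the hypothetical future class.)
# A solver-agnostic front end with MATLAB-style options:
solver = ode(rhs)  # rhs(t, y); the backend is chosen via options
solver.set_options(method='bdf', rtol=1e-6, atol=1e-9, max_steps=5000)
t, y = solver.solve(np.linspace(0.0, 10.0, 101), y0=[1.0])

# odeint then shrinks to a thin convenience wrapper:
def odeint(func, y0, t, **options):
    s = ode(lambda t_, y_: func(y_, t_))  # adapt the (y, t) convention
    s.set_options(**options)
    return s.solve(t, y0)[1]
```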
From arnd.baecker at web.de Thu Sep 5 05:25:37 2013
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 5 Sep 2013 11:25:37 +0200 (CEST)
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com>
Message-ID:

Could odeint be useful in this context?

http://headmyshoulder.github.io/odeint-v2/

"Odeint is a modern C++ library for numerically solving Ordinary
Differential Equations. It is developed in a generic way using Template
Metaprogramming which leads to extraordinary high flexibility at top
performance."
Best, Arnd

From goxberry at mit.edu Thu Sep 5 17:50:10 2013
From: goxberry at mit.edu (Geoff Oxberry)
Date: Thu, 5 Sep 2013 17:50:10 -0400
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com>
Message-ID:

Please, please don't implement anything in the SciPy ODE solvers with
Boost.odeint unless it's a separate interface that requires extreme
generality.

Boost.odeint uses Boost.uBLAS for its linear algebra (see
http://headmyshoulder.github.io/odeint-v2/doc/boost_numeric_odeint/concepts/implicit_system.html),
and Boost.uBLAS is slow because it tries to be general (everything is
templated), and it has debugging features added by default.

A lot of users (myself included) don't need that generality, and would
prefer the extra performance that you'd get from a code that can use a
faster dense linear algebra library (like an optimized LAPACK, Eigen,
Elemental, maybe MTL4).

Geoff

--
Geoffrey Oxberry, Ph.D., E.I.T.

From matthieu.brucher at gmail.com Thu Sep 5 17:55:13 2013
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 5 Sep 2013 23:55:13 +0200
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com>
Message-ID:

Let's say that boost.ublas is not optimized, not because it is templated,
but because of something else. Eigen is very efficient, as is nt2, and they
are both pure templates!

Cheers,
From mario.mulansky at gmx.net Thu Sep 5 19:18:12 2013
From: mario.mulansky at gmx.net (Mario Mulansky)
Date: Thu, 05 Sep 2013 18:18:12 -0500
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
Message-ID: <2011631.3fpCFRPXsg@muhhhbook.home>

> Please, please don't implement anything in the SciPy ODE solvers with
> Boost.odeint unless it's a separate interface that requires extreme
> generality.
[clip]

I agree with the concerns, but I would like to note that the current
implementation of implicit routines in Boost.odeint is preliminary. We are
currently working on a generalization of these routines to allow the usage
of arbitrary backends; that should be done by the end of the year.

However, Boost.odeint misses some more of the functionality of CVODE, e.g.
automatic detection of stiffness or numerical Jacobian approximation. But I
think that using Boost.odeint could be beneficial for scipy.odeint (e.g.
parallelization), and we would be happy to assist scipy developers, e.g. by
providing missing functionality, if you want to use Boost.odeint.

Best regards,

Mario

From boyfarrell at gmail.com Sat Sep 7 03:40:52 2013
From: boyfarrell at gmail.com (boyfarrell at gmail.com)
Date: Sat, 7 Sep 2013 16:40:52 +0900
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <5227BBDB.1020106@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com>
Message-ID: <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>

Hello all,

> Perhaps the most important thing is that someone with interest in this
> patiently iterates on the idea, seeks consensus on the mailing list,
> writes a proposal somewhere if needed, thinks about the proper
> deprecation process with the maintainers, writes the code and, well,
> takes responsibility for it until the very end. I guess it can take
> some months.

I was hoping that we have enough people that we could avoid the Rambo
developer phenomenon. Besides, this seems to be exactly what we have right
now. What I would like to do is find a group of people that are interested
in seeing improvements and working together to fix things.

Remark 1. Maybe we should set a few simple goals? For example, would people
be interested in replacing vode with the cvode solver from sundials? This
would be a nice improvement. What would you like to see?

Remark 2. I don't understand the problem well enough to design a general
new interface that solvers can be connected to. I'm not even sure if this
will help. I think someone with more experience writing numerical code
could architect something like this.
Best wishes,

Dan

From juanlu001 at gmail.com Sat Sep 7 10:05:42 2013
From: juanlu001 at gmail.com (Juan Luis Cano)
Date: Sat, 07 Sep 2013 16:05:42 +0200
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>
Message-ID: <522B32B6.5060504@gmail.com>

On 09/07/2013 09:40 AM, boyfarrell at gmail.com wrote:
[clip]

(I am sorry: long email ahead)

I have not been on this list for long, but as far as I can tell, the
problem is: there are two *unrelated* interfaces to integrate differential
equations. The simple one is `odeint`, which uses a manually C-wrapped
version of lsoda.f and does the iteration on the C side, and the complex
one (also in the mathematical sense) is the `ode` class, which uses f2py
generated interfaces to vode and other packages and does the iteration on
the Python side. These two have several incompatibilities, the silliest one
being the different signature of the objective function: `odeint` expects
`func(y, t)` and `ode` expects `func(t, y)`.

So let's say we want to solve the problem of the interfaces being
unrelated. Then you would make `odeint` a simplified wrapper to the `ode`
class, providing some nice defaults to rapidly integrate an ode in a
one-liner. Then we step on the `ode` territory: the routines it wraps are
outdated compared to other packages, and they probably don't see bug fixes
anymore. So it is desirable to replace these with newly developed codes,
like the mentioned sundials. Benny Malengier already created the odes
scikit, which if I understand correctly wraps sundials. Plus, related to
this replacement, it's been suggested also that we should make the `ode`
class more solver-agnostic, providing a more unified interface for options
and a MATLAB-style way of setting extra parameters, because as Benny
already pointed out in the previously linked comment, "the scipy ode class
is also too much linked to the old vode solver". In that comment he also
points out what he considers to be the main pieces of a full DE solution.

I think this is a decent summary of what we are discussing (others please
correct my mistakes and omissions).
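To make the signature clash concrete, here is the same trivial problem,
dy/dt = -y, solved through both interfaces as they exist today:

```python
import numpy as np
from scipy.integrate import odeint, ode

t = np.linspace(0.0, 1.0, 11)

# odeint convention: func(y, t), one-shot call
sol = odeint(lambda y, t: -y, 1.0, t)

# ode convention: func(t, y), explicit stepping on the Python side
r = ode(lambda t, y: -y).set_integrator('vode', method='adams')
r.set_initial_value(1.0, 0.0)
while r.successful() and r.t < 1.0 - 1e-12:
    r.integrate(r.t + 0.1)

print(sol[-1, 0], r.y[0])  # both ~ exp(-1) ~ 0.368
```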
Now my opinion is: from an API point of view, I think all these are good
things, both making odeint a wrapper to ode and simplifying and reworking
the parameter handling in the ode class. From an implementation point of
view, my only concern is: right now `odeint` iterates in a compiled
language (C in current SciPy, Fortran 90 in my pull request), whereas `ode`
iterates in Python. There are C/Fortran <-> Python callbacks anyway, but
while working on my `odeint` rewrite I noticed that the Fortran iteration
was ~30% faster. If after rewriting `ode` we keep iterating in Python and
make `odeint` a wrapper around it, we might see performance losses.

Other than that, I am not really familiar with state-of-the-art
differential equation solving and such low-level numerical code (just
reading lsoda.f was an enormous pain). The reason I called for a Rambo
developer here is that, at the end of the day, both `odeint` and `ode` work
decently, people use them, and probably most really don't care about these
"problems". That's why we need enough motivated people to carry the change.
I won't be that Rambo, but will definitely help.

I hope this clarified something, apart from cluttering everybody's inbox :)

Regards

Juan Luis Cano

From Brian.Newsom at Colorado.EDU Sun Sep 8 00:43:59 2013
From: Brian.Newsom at Colorado.EDU (Brian Lee Newsom)
Date: Sat, 7 Sep 2013 22:43:59 -0600
Subject: [SciPy-Dev] Quadpack conversion to Fortran
Message-ID:

Hello,

My name is Brian Newsom and I am just beginning to use/work on the scipy
library, specifically the quadpack within the integration pack. Because of
the recursion and wrapping methods used now, running the nquad function
(scipy 0.13) is slow: the fortran and python are nested in a way that
requires switching back and forth between the two often. Ideally this
inefficiency could be overcome by translating the main interface
(quadpack.py) into fortran and just having a minimal python wrapper that is
not used in the recursion.

First off, has anyone attempted (or is currently attempting) this? Are
there any major difficulties I should be wary of, or is this unrealistic
for any reason? Additionally, I was curious if there are any computational
advantages to using C or Fortran for this purpose within this library;
either is a possibility in the case that one is strongly preferred.
Otherwise, I am sure I will be full of questions in the near future with
specific regard to the building and setup of these files within the scipy
library.

Thank you,
Brian Newsom
Brian.Newsom at colorado.edu

From boyfarrell at gmail.com Sun Sep 8 03:50:53 2013
From: boyfarrell at gmail.com (boyfarrell at gmail.com)
Date: Sun, 8 Sep 2013 16:50:53 +0900
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <522B32B6.5060504@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>
<522B32B6.5060504@gmail.com>
Message-ID: <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>

Hello Juan Luis,

Yes, I saw your pull request, and it, in part, motivated me to start this
discussion. Looks like you put a lot of work into your odeint
modifications!

I completely agree with you that ode should be fixed first, then odeint
should be a nice interface wrapped around it.
integrate.ode becomes a package aimed at people who understand the
complexities of solving ODE problems, and integrate.odeint is for people
who just want an answer.

Also, thank you for doing a nice summary, that was really helpful.

Remark 1: Backend.

I don't think we need to reinvent the wheel. Benny's work looks excellent,
so I think it makes sense to base a new integrate.ode off scikits.odes. It
has a nice array of modern solvers, in both C and Fortran.

Remark 2: Frontend.

We can define a new integrate.ode interface that wraps scikits.odes. It
seems that you already have some ideas in that area, mentioning the MATLAB
style. I looked at the MATLAB docs just now and spent 20 minutes thinking
how this might look if it were more pythonic. You can take a look here,
fork your own version to make changes or add comments,

https://gist.github.com/danieljfarrell/6482713

Best wishes,

Dan

From charlesr.harris at gmail.com Sun Sep 8 15:14:28 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 8 Sep 2013 13:14:28 -0600
Subject: [SciPy-Dev] ANN: 1.8.0b2 release.
Message-ID:

Hi all,

I'm happy to announce the second beta release of Numpy 1.8.0. This release
should solve the Windows problems encountered in the first beta. Many
thanks to Christoph Gohlke and Julian Taylor for their hard work in getting
those issues settled.

It would be good if folks running OS X could try out this release and
report any issues on the numpy-dev mailing list. Unfortunately the files
still need to be installed from source as dmg files are not available at
this time.

Source tarballs and release notes can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.8.0b2/. The Windows
and OS X installers will follow when the infrastructure issues are dealt
with.

Chuck

From pav at iki.fi Sun Sep 8 15:53:24 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 08 Sep 2013 22:53:24 +0300
Subject: [SciPy-Dev] Quadpack conversion to Fortran
In-Reply-To: References: Message-ID:

08.09.2013 07:43, Brian Lee Newsom wrote:
[clip]

The first question: did you measure the overhead involved in the Python
code, and determine that it dominates the computation time? If not, that
would be the first thing to do; otherwise it's possible to end up with a
wild goose chase.

On the face of it, it does not sound likely to me that much will be gained
by writing the recursion in a lower-level language. Suppose that each level
requires N >> 1 evaluations of the function. The innermost function will be
called in total N^3 times, the next level N^2 times and the outer level N
times. The inner level involves N^3 calls back to Python, which are not
removed by writing the recursion in Fortran. Since N^3 >> N^2, the
innermost callback overhead dominates the run time.
Of course, the N^2 term has a bigger prefactor than N^3, so this argument
is not fully convincing, but it would be best to measure first.

IIRC, the `quad` integration routine can directly call ctypes function
pointers, skipping the Python overhead. (See the Python ctypes module
documentation for details on how to define them.) This requires Scipy
>= 0.11.0, however.

There have been plans to improve the interoperability further (with Cython,
f2py, and other ways) to interact directly with low-level code in Scipy's
computational routines. Putting effort into this direction and trying to
speed up the innermost part would seem to me the most sensible direction of
further work. Of course, the above is written assuming your benchmarks
don't show me wrong.

I don't believe anyone is currently working on `nquad`, so go ahead.
However, note that due to the current situation with the MinGW gfortran
compiler on Windows, there may be issues in accepting Fortran 90 code.

Best regards,
-- Pauli Virtanen

From davidmenhur at gmail.com Sun Sep 8 16:42:02 2013
From: davidmenhur at gmail.com (Daπid)
Date: Sun, 8 Sep 2013 22:42:02 +0200
Subject: [SciPy-Dev] Fwd: [Numpy-discussion] ANN: 1.8.0b2 release.
In-Reply-To: References: Message-ID:

On 8 September 2013 21:14, Charles R Harris wrote:

> It would be good if folks running OS X could try out this release and
> report any issues on the numpy-dev mailing list. Unfortunately the files
> still need to be installed from source as dmg files are not available at
> this time.

Hi,

I have tried on an intel Mac with Python 2.7 and gcc installed with
Macports, and it builds fine. The FORTRAN compiler is gfortran. There are a
bunch of compiling warnings and errors* (attachment log27.log), but it
works fine. Running the tests it gets printed:

.......Parameter 4 to routine DGETRF was incorrect.......

but all of them pass.

I did not specify which BLAS to use, and it chose Accelerate, but ATLAS is
also installed.

>>> numpy.show_config()
atlas_threads_info:
  NOT AVAILABLE
blas_opt_info:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers']
    define_macros = [('NO_ATLAS_INFO', 3)]
atlas_blas_threads_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
lapack_opt_info:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    extra_compile_args = ['-msse3']
    define_macros = [('NO_ATLAS_INFO', 3)]

Running the SciPy (from Macports) tests does raise two errors and a bunch
of Deprecation warnings (see attachment scipy.log). The first two warnings
were present before upgrading NumPy, but not the rest.

Let me know if you need me to run it again, or change the configuration in
any way.

Regards,

David.

-------
* Errors such as:
_configtest.c:1:20: error: endian.h: No such file or directory
_configtest.c:5: error: size of array 'test_array' is negative
_configtest.c:7: error: 'Py_UNICODE_WIDE' undeclared (first use in this function)
From davidmenhur at gmail.com Sun Sep 8 16:45:41 2013
From: davidmenhur at gmail.com (Daπid)
Date: Sun, 8 Sep 2013 22:45:41 +0200
Subject: [SciPy-Dev] [Numpy-discussion] ANN: 1.8.0b2 release.
In-Reply-To: References: Message-ID:

On 8 September 2013 22:42, Daπid wrote:

> I have tried on an intel Mac with Python 2.7 and gcc installed with
> Macports, and it builds fine.

Sorry, forgot to add: it is OS X 10.6.8.

uname -a
Darwin Tilda2.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7
16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

Also, the full test suite reports one failure:

======================================================================
FAIL: test_allnans (test_nanfunctions.TestNanFunctions_Sum)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/tests/test_nanfunctions.py", line 249, in test_allnans
    assert_(len(w) == 1, 'no warning raised')
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError: no warning raised

----------------------------------------------------------------------
Ran 5235 tests in 1414.570s

FAILED (KNOWNFAIL=5, SKIP=18, failures=1)

David.

From charlesr.harris at gmail.com Sun Sep 8 17:18:54 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 8 Sep 2013 15:18:54 -0600
Subject: [SciPy-Dev] Fwd: [Numpy-discussion] ANN: 1.8.0b2 release.
In-Reply-To: References: Message-ID:

On Sun, Sep 8, 2013 at 2:42 PM, Daπid wrote:
[clip]
> * Errors such as:
> _configtest.c:1:20: error: endian.h: No such file or directory
> _configtest.c:5: error: size of array 'test_array' is negative
> _configtest.c:7: error: 'Py_UNICODE_WIDE' undeclared (first use in this function)

You can ignore all the _configtest errors; they are the install program
trying to figure out what works. Thanks for the report.

Chuck

From benny.malengier at gmail.com Mon Sep 9 03:37:52 2013
From: benny.malengier at gmail.com (Benny Malengier)
Date: Mon, 9 Sep 2013 07:37:52 +0000
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>
References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>
<522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>
Message-ID:

2013/9/8 boyfarrell at gmail.com
[clip]

> Remark 1: Backend.
>
> I don't think we need to reinvent the wheel. Benny's work looks excellent,
> so I think it makes sense to base a new integrate.ode off scikits.odes. It
> has a nice array of modern solvers, in both C and Fortran.

Thanks for the thumbs up.

However, odes is only one of 3 implementations, and the one that exposes
the least of the C solvers. Odes was based originally on the previous
pysundials, which was no longer maintained and was a non-cython wrapper.
The other implementations are geared towards a specific problem domain
though.

One is: https://github.com/casadi/casadi/wiki (LGPL though)

The other: http://www.jmodelica.org/assimulo (annoying copyright-transfer
contribution license to a company though, http://www.jmodelica.org/page/14)

The non-sundials solvers in odes are also not as state-of-the-art as some
that are added in the above two interfaces.

The problem with the above packages is their license and the fact that they
are packages with a parsing language, etc.; see e.g.:
https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py

Note that you can pass an array of times to sundials at which you want
output, and then all iteration is done inside sundials, so time stepping
outside of sundials is not needed (though in many problems you'll want the
stepping controlled in a python loop).

Benny
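For readers unfamiliar with the package, that usage pattern looks roughly
like the sketch below. This is written from memory, so treat the import
path, the constructor and every signature as assumptions to be checked
against the scikits.odes documentation:

```python
import numpy as np
from scikits.odes import ode  # assumed import path

def rhs(t, y, ydot):
    # Right-hand side filled in-place, as the sundials wrappers expect
    # (an assumption; some versions may use a return value instead).
    ydot[0] = -0.5 * y[0]

solver = ode('cvode', rhs)           # assumed constructor signature
times = np.linspace(0.0, 10.0, 21)   # output times handed to sundials
result = solver.solve(times, [1.0])  # all stepping happens inside CVODE
```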
From goxberry at mit.edu Mon Sep 9 04:27:33 2013
From: goxberry at mit.edu (Geoff Oxberry)
Date: Mon, 9 Sep 2013 04:27:33 -0400
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com>
<5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com>
<522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>
Message-ID:

There's also petsc4py (https://code.google.com/p/petsc4py/), which is a set
of Python bindings for the scientific library PETSc. PETSc implements a lot
of solvers, including an interface to CVODE in SUNDIALS. However, it
generally does things using sparse linear algebra, so if you want to solve
a system of ODEs using a direct linear solver, you essentially have to use
LU decomposition as a preconditioner, and pass the inverted linear system
to CVODE (which will then solve it trivially in one iteration of GMRES,
TFQMR, or BiCGStab). There's no interface for IDA, although PETSc has a
number of DAE solvers implemented itself. The license is BSD, and the
project is well maintained. I'd say the main drawbacks of the petsc4py
library are that:

- the library itself is not well-documented, but PETSc is, and the library
generally follows the PETSc interface, plus it has a number of Python
examples
- PETSc's a huge library, and it takes a while to get a handle on the API

PETSc and scipy are trying to do different, but related, things. PETSc and
petsc4py are trying to provide a software platform for developing
computational science applications that can run on anything from desktops
with a single processor to a Blue Gene/Q with tens or hundreds of thousands
of processors. SciPy is more geared towards rapid prototyping and
computational exploration on a single processor, so it helps to have a more
intuitive API, preferably something like MATLAB, since that's so prevalent
in education and a lot of people who use SciPy come from a MATLAB
background. Even though the licenses are compatible, I'm not sure you'd
want to incorporate elements of petsc4py into SciPy (although it would be
incredibly cool to be able to use any of their extremely long list of ODE
or DAE solvers).

Assimulo does not require you to write your ODE system in a parsing
language, though in the Python implementation of JModelica, you could
certainly do that. I used it for my thesis without too much difficulty, and
sent the authors a few feature requests. Assimulo makes assumptions about
the arguments being passed to CVODE, so you can't pass arbitrary data to
CVODE via the Assimulo interface without some kludging; they assume you're
going to pass an array of doubles, essentially. (Or at least, this was the
case 15 months ago; I have no idea if they've changed the interface.)
>> >> I completely agree with you, that ode should be fixed first, then odeint >> should be a nice interface wrapped around it. integrate.ode becomes a >> package aimed at people who understand the complexities of solving ODE >> problems and integrate.odeint is for people who just want an answer. >> >> Also, thank you for doing a nice summary, that was really helpful. >> >> Remark 1: Backend. >> >> I don't think we need to reinvent the wheel, Benny's work looks excellent >> so I think it make sense to base a new integrate.ode off scikits.odes. It >> has a nice array of modern solvers, in both C and Fortran. >> > > Thanks for the thumbs up. > However, odes is only one of 3 implementations, and then the one that > exposes least of the C solvers. Odes was based originally on the previous > pysundials which was no longer maintained and was a non-cython wrapper. The > other implementations are geared towards a specific problem domain though. > > One is: https://github.com/casadi/casadi/wiki (LGPL tough) > > The other: http://www.jmodelica.org/assimulo (annoying copyright transfer > contribution license though to a company,http://www.jmodelica.org/page/14) > > The non sundials solvers in odes are also not as state of the art as some > that are added in above two interfaces. > The problem with above packages is their license and the fact that they > are packages with parsing language, ..., see eg: > https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py. > > Note that you can pass an array of times to sundials at which you want > output, and then all iteration is done inside sundials, so the point of > time step outside of sundials is not needed (though in many problems you'll > want to do stepping controlled in a python loop). > > Benny > > >> Remark 2 Frontend. >> >> We can define a new integrate.ode interface that wraps scikits.odes. It >> seems that you already have some ideas in that area, mentioning the MATLAB >> style. I looked at the MATLAB docs just now and spend 20 minutes thinking >> how this might look if it were more pythonic. You can take a look here, >> fork your own version to make changes or add comments, >> >> https://gist.github.com/danieljfarrell/6482713 >> >> Best wishes, >> >> Dan >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Geoffrey Oxberry, Ph.D., E.I.T. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benny.malengier at gmail.com Mon Sep 9 05:53:16 2013 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 9 Sep 2013 09:53:16 +0000 Subject: [SciPy-Dev] Consensus on the future of integrate.ode In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com> Message-ID: 2013/9/9 Benny Malengier > > > > 2013/9/8 boyfarrell at gmail.com > > Hello Juan Luis, >> >> Yes, I saw your pull request and this, in part, motivated me to start >> this discussion. Looks like you put a lot of work into your odeint >> modifications! >> >> I completely agree with you, that ode should be fixed first, then odeint >> should be a nice interface wrapped around it. 
integrate.ode becomes a
>> package aimed at people who understand the complexities of solving ODE
>> problems and integrate.odeint is for people who just want an answer.
>>
>> Also, thank you for doing a nice summary, that was really helpful.
>>
>> Remark 1: Backend.
>>
>> I don't think we need to reinvent the wheel, Benny's work looks excellent
>> so I think it make sense to base a new integrate.ode off scikits.odes. It
>> has a nice array of modern solvers, in both C and Fortran.
>
> Thanks for the thumbs up.
> However, odes is only one of 3 implementations, and then the one that
> exposes least of the C solvers. Odes was based originally on the previous
> pysundials which was no longer maintained and was a non-cython wrapper. The
> other implementations are geared towards a specific problem domain though.
>
> One is: https://github.com/casadi/casadi/wiki (LGPL tough)
>
> The other: http://www.jmodelica.org/assimulo (annoying copyright transfer
> contribution license though to a company,http://www.jmodelica.org/page/14)
>
> The non sundials solvers in odes are also not as state of the art as some
> that are added in above two interfaces.
> The problem with above packages is their license and the fact that they
> are packages with parsing language, ..., see eg:
> https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py.
>
> Note that you can pass an array of times to sundials at which you want
> output, and then all iteration is done inside sundials, so the point of
> time step outside of sundials is not needed (though in many problems you'll
> want to do stepping controlled in a python loop).
>

What would really be interesting is to find some way to use the parallel implementation of sundials in python. Some mapping of a numpy array to the parallel sundials vector (https://github.com/bmcage/odes/blob/master/scikits/odes/sundials/nvector/nvector_parallel.h) would then be needed. Has there ever been work to construct a parallel-aware array? Would be nice to find funding for that somewhere.

Benny

> Benny
>
>> Remark 2 Frontend.
>>
>> We can define a new integrate.ode interface that wraps scikits.odes. It
>> seems that you already have some ideas in that area, mentioning the MATLAB
>> style. I looked at the MATLAB docs just now and spend 20 minutes thinking
>> how this might look if it were more pythonic. You can take a look here,
>> fork your own version to make changes or add comments,
>>
>> https://gist.github.com/danieljfarrell/6482713
>>
>> Best wishes,
>>
>> Dan
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From boyfarrell at gmail.com Mon Sep 9 09:08:24 2013
From: boyfarrell at gmail.com (boyfarrell at gmail.com)
Date: Mon, 9 Sep 2013 22:08:24 +0900
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>
Message-ID:

Hello all,

> Even though the licenses are compatible, I'm not sure you'd want to incorporate elements of petsc4py into SciPy (although it would be incredibly cool to be able to use any of their extremely long list of ODE or DAE solvers).
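As a concrete illustration of the scikits.odes call pattern Benny mentions above (the whole array of output times is handed to CVODE in one go, so there is no Python-side stepping loop), here is a rough sketch. The `ode` class, the `'cvode'` name and the `solve()` signature are recalled from the scikits.odes docs and may not match any released version exactly, so treat this as a hedged sketch rather than the package's verified API:

    # Hedged sketch of a scikits.odes-style CVODE call; the import path
    # and the solve() signature are assumptions, not verified against a
    # release of scikits.odes.
    import numpy as np
    from scikits.odes import ode   # assumed import path

    def rhs(t, y, ydot):
        # SUNDIALS convention: fill ydot in place.
        ydot[0] = y[1]
        ydot[1] = -y[0]

    times = np.linspace(0.0, 10.0, 101)      # all output times at once
    solver = ode('cvode', rhs)               # stepping happens inside CVODE
    result = solver.solve(times, np.array([1.0, 0.0]))

Contrast this with the current `scipy.integrate.ode`, where the user typically calls `integrate(t)` repeatedly in a while-loop.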
PETSc seems powerful and I love the idea of having many solvers that can take advantage of multiple cores. That is certainly very attractive. I agree with your downsides Geoff; it looks terrifying! This is compounded by the fact that I can't even tell how to solve a simple problem because the docs are poor (well, not as friendly as scipy's at least).

Here is what I think we should do:

1) Update the interface to integrate.ode.
- see my suggestion here, https://gist.github.com/danieljfarrell/6482713
- comments welcomed

2) Make integrate.odeint a functional wrapper around integrate.ode.
- We should use the concept of 'event' from the MATLAB/sundials API here, it would be a nice addition.

3) Update the integrate.ode solvers:
- replace VODE with CVODE
- add CODES
- add a DAE solver(s).

Point 3 is the most unclear. What technology do we use as the foundation? We have mentioned (in no particular order),

* petsc4py, https://code.google.com/p/petsc4py/
* scikits.odes, http://cage.ugent.be/~bm/progs.html
* assimulo, http://www.jmodelica.org/assimulo
* casadi, https://github.com/casadi/casadi/wiki

I really don't know where we should start as I have used none of the above. Generally the higher level we start at, the more quickly we can get things done. Maybe Geoff and Benny (and others) can comment.

Best wishes,

Dan

From goxberry at mit.edu Mon Sep 9 13:08:17 2013
From: goxberry at mit.edu (Geoff Oxberry)
Date: Mon, 9 Sep 2013 13:08:17 -0400
Subject: [SciPy-Dev] Consensus on the future of integrate.ode
In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com>
Message-ID:

On Mon, Sep 9, 2013 at 5:53 AM, Benny Malengier wrote:

> 2013/9/9 Benny Malengier
>
>> 2013/9/8 boyfarrell at gmail.com
>>
>> Hello Juan Luis,
>>>
>>> Yes, I saw your pull request and this, in part, motivated me to start
>>> this discussion. Looks like you put a lot of work into your odeint modifications!
>>>
>>> I completely agree with you, that ode should be fixed first, then odeint should be a nice interface wrapped around it.
>> The problem with above packages is their license and the fact that they >> are packages with parsing language, ..., see eg: >> https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py. >> >> Note that you can pass an array of times to sundials at which you want >> output, and then all iteration is done inside sundials, so the point of >> time step outside of sundials is not needed (though in many problems you'll >> want to do stepping controlled in a python loop). >> >> > What would really be interesting is find some way to use the parallel > implementation of sundals in python. Some mapping of a numpy array to the > parallel sundial vector ( > https://github.com/bmcage/odes/blob/master/scikits/odes/sundials/nvector/nvector_parallel.h) > would then be needed. Has there ever been work to contruct a parallel > aware array? > petsc4py does this already, using data structures from PETSc. You can instantiate some of these data structures with NumPy arrays. I'm not sure if such work has already been done in SciPy, nor am I sure if SciPy is "the right place" for it. I don't know a ton about parallel computing, but my limited experience with it backs up the common warning people give about parallelizing code -- you really have to parallelize your data structures in order for most algorithms to be effective. My worry is that people will write SciPy code as they have done for the previous releases -- that is, in serial -- and then expect parallel ODE functionality to just work, when continuing to use serial data structures won't let you take advantage of parallelism. The other thing about wrapping the parallel implementation of SUNDIALS I'd be concerned about is the preconditioning of iterative linear solvers; in most cases, these solvers have to be preconditioned in order to converge quickly. For that, do you expect users to provide their own preconditioners, like SUNDIALS does? Something that is useful about solvers like PETSc/petsc4py or Trilinos/PyTrilinos is that they provide a number of methods that can also be used to precondition iterative linear solvers (variants of incomplete LU/Cholesky factorization, block Jacobi, algebraic multigrid, and others), which makes it easier for users to select from a number of "black box" methods in the event that they cannot easily construct a preconditioner themselves. These sorts of things would probably be better implemented in other parts of SciPy, since they could be used for other purposes besides solving ODEs (namely, for any application involving the solution of a linear system using iterative methods). Geoff > Would be nice to find funding for that somewhere > > Benny > > > >> Benny >> >> >>> Remark 2 Frontend. >>> >>> We can define a new integrate.ode interface that wraps scikits.odes. It >>> seems that you already have some ideas in that area, mentioning the MATLAB >>> style. I looked at the MATLAB docs just now and spend 20 minutes thinking >>> how this might look if it were more pythonic. 
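As an aside, scipy's own sparse tools can already do this kind of "black box" ILU preconditioning for plain linear systems. A minimal sketch, with a made-up matrix A and right-hand side b purely for illustration:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Placeholder problem: a 1-D Laplacian; A and b are invented here
    # only to show the wiring, they come from no real application.
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    b = np.ones(n)

    ilu = spla.spilu(A)                                 # incomplete LU factors
    M = spla.LinearOperator((n, n), matvec=ilu.solve)   # wrap as preconditioner
    x, info = spla.gmres(A, b, M=M)                     # info == 0 on success

Hooking such an operator into an ODE solver's Newton-Krylov iteration is exactly the plumbing that PETSc/petsc4py or Trilinos/PyTrilinos package up.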
You can take a look here, >>> fork your own version to make changes or add comments, >>> >>> https://gist.github.com/danieljfarrell/6482713 >>> >>> Best wishes, >>> >>> Dan >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Geoffrey Oxberry, Ph.D., E.I.T. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.guyer at nist.gov Mon Sep 9 13:08:19 2013 From: jonathan.guyer at nist.gov (Guyer, Jonathan E. Dr.) Date: Mon, 9 Sep 2013 17:08:19 +0000 Subject: [SciPy-Dev] Consensus on the future of integrate.ode In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com> Message-ID: On Sep 9, 2013, at 9:08 AM, boyfarrell at gmail.com wrote: > PETSc seems powerful and I love the idea of having many solver than can take advantage of multiple cores. That is certainly very attractive. I agree with your downsides Geoff; it looks terrifying! This is compounded by the fact that I can't even tell how to solve a simple problem because the docs are poor (well not as friendly as scipy at least). The petsc4py docs are effectively nonexistent, although the examples are not bad. When I was working on a FiPy-PETSc interface (not quite there, yet) I spent a lot of time in the petsc4py code to figure out what it was actually doing and what it was calling in PETSc-proper (not that I know that interface, either, but it's at least documented). From juanlu001 at gmail.com Mon Sep 9 18:10:14 2013 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Tue, 10 Sep 2013 00:10:14 +0200 Subject: [SciPy-Dev] Consensus on the future of integrate.ode In-Reply-To: References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com> Message-ID: <522E4746.509@gmail.com> On 09/09/2013 10:27 AM, Geoff Oxberry wrote: > PETSc and scipy are trying to do different, but related things. PETSc > and petsc4py are trying to provide a software platform for developing > computational science applications that can run on anything from > desktops with a single processor to a Blue Gene/Q with tens or > hundreds of thousands of processors. > > SciPy is more geared towards rapid prototyping and computational > exploration on a single processor, so it helps to have a more > intuitive API, preferably something like MATLAB since that's so > prevalent in education, and a lot of people who use SciPy come from a > MATLAB background. Even though the licenses are compatible, I'm not > sure you'd want to incorporate elements of petsc4py into SciPy > (although it would be incredibly cool to be able to use any of their > extremely long list of ODE or DAE solvers). I think this is a good point that has been made. Maybe we could aim for a less complex package that fits our (more modest) needs. > The non sundials solvers in odes are also not as state of the art > as some that are added in above two interfaces. 
> The problem with above packages is their license and the fact that > they are packages with parsing language, ..., see eg: > https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py > . > What if we just stay with your odes? It's still a big improvement, perhaps a good tradeoff between state of the art algorithms, simplicity and licensing compatibility. > > Note that you can pass an array of times to sundials at which you > want output, and then all iteration is done inside sundials, so > the point of time step outside of sundials is not needed (though > in many problems you'll want to do stepping controlled in a python > loop). > > Benny > > > Remark 2 Frontend. > > We can define a new integrate.ode interface that wraps > scikits.odes. It seems that you already have some ideas in > that area, mentioning the MATLAB style. I looked at the MATLAB > docs just now and spend 20 minutes thinking how this might > look if it were more pythonic. You can take a look here, fork > your own version to make changes or add comments, > > https://gist.github.com/danieljfarrell/6482713 > > Best wishes, > > Dan > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > -- > Geoffrey Oxberry, Ph.D., E.I.T. > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From benny.malengier at gmail.com Tue Sep 10 04:44:36 2013 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 10 Sep 2013 08:44:36 +0000 Subject: [SciPy-Dev] Consensus on the future of integrate.ode In-Reply-To: <522E4746.509@gmail.com> References: <1CF5E828-320D-431E-BB4A-622444909DCB@gmail.com> <5227BBDB.1020106@gmail.com> <44D1AA60-01B3-484F-9DF1-1C11185839E7@gmail.com> <522B32B6.5060504@gmail.com> <9CE23FBC-6E7A-4748-83F2-040556F1EE00@gmail.com> <522E4746.509@gmail.com> Message-ID: 2013/9/9 Juan Luis Cano > On 09/09/2013 10:27 AM, Geoff Oxberry wrote: > > PETSc and scipy are trying to do different, but related things. PETSc and > petsc4py are trying to provide a software platform for developing > computational science applications that can run on anything from desktops > with a single processor to a Blue Gene/Q with tens or hundreds of thousands > of processors. > > SciPy is more geared towards rapid prototyping and computational > exploration on a single processor, so it helps to have a more intuitive > API, preferably something like MATLAB since that's so prevalent in > education, and a lot of people who use SciPy come from a MATLAB background. > Even though the licenses are compatible, I'm not sure you'd want to > incorporate elements of petsc4py into SciPy (although it would be > incredibly cool to be able to use any of their extremely long list of ODE > or DAE solvers). > > > I think this is a good point that has been made. Maybe we could aim for a > less complex package that fits our (more modest) needs. > > The non sundials solvers in odes are also not as state of the art as >> some that are added in above two interfaces. 
>> The problem with above packages is their license and the fact that they
>> are packages with parsing language, ..., see eg:
>> https://github.com/casadi/casadi/blob/master/examples/python/dae_single_shooting_in_30_lines.py.
>>
> What if we just stay with your odes? It's still a big improvement, perhaps
> a good tradeoff between state of the art algorithms, simplicity and
> licensing compatibility.
>

Yes, for current scipy it would be a logical evolution. For my own work though I might need more complex stuff present in sundials and not exposed yet like in PETSc (preconditioning, Krylov, parallel). But then, these things can be added as needed, and as PETSc has a nice license, we are allowed to look at it without fear of contamination.

Benny

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at hilboll.de Tue Sep 10 05:54:38 2013
From: lists at hilboll.de (Andreas Hilboll)
Date: Tue, 10 Sep 2013 11:54:38 +0200
Subject: [SciPy-Dev] Akima Interpolation
In-Reply-To: References: <521F7051.7090102@hilboll.de>
Message-ID: <522EEC5E.2060804@hilboll.de>

On 30.08.2013 18:15, Pauli Virtanen wrote:
> 29.08.2013 19:01, Andreas Hilboll kirjoitti:
>> I'd like to have Akima interpolation in scipy.interpolate, as I
>> couldn't find any.
>
>> 1.) Did I miss something and there actually *is* an Akima
>> interpolation in scipy? 2.) Is there interest in having Akima
>> interpolation in Scipy? 3.) Does anyone have a good recommendation
>> which implementation I should consider for pulling into Scipy?
>
> 1) I don't think so, unless pchip is related.
>
> 2) Perhaps. Is it useful?

I would claim yes. Akima splines are more robust to outliers, and I think they would make a nice addition.

>
> 3) No idea.
>
> One thing to look out: based on a quick look, Akima interpolation is a
> form of spline interpolation, so it is probably possible to represent
> the result as a B-spline. If so, FITPACK's spline representation
> should be used, i.e., the same (t, c, k) format as in splrep.
>
> This allows reusing `splev` for evaluating the spline values, which
> will make life easier later on when we clean up scipy.interpolate.

I agree it would be nice to store the spline in the same tck format as used by FITPACK. Unfortunately, all references I could find directly compute the polynomial coefficients of the interpolants, and I couldn't find a reference about how to derive the B-spline coefficients, i.e., the c, from the polynomial coefficients. I checked the bspline work by Chuck, but there isn't any B-spline coefficient calculation from the pp representation (yet?). Any ideas where to look?

Andreas.

From pav at iki.fi Tue Sep 10 10:00:33 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 10 Sep 2013 14:00:33 +0000 (UTC)
Subject: [SciPy-Dev] Akima Interpolation
References: <521F7051.7090102@hilboll.de> <522EEC5E.2060804@hilboll.de>
Message-ID:

Andreas Hilboll <lists at hilboll.de> writes:
[clip]
> I agree it would be nice to store the spline in the same tck format as
> used by FITPACK. Unfortunately, all references I could find directly
> compute the polynomial coefficients of the interpolants, and I couldn't
> find a refernce about how to derive the B-spline coefficients, i.e., the
> c, from the polynomial coefficients. I checked the bspline work by
> Chuck, but there isn't any B-spline coeffient calculation from the pp
> representation (yet?). Any ideas where to look?

In that case, you can format the data in piecewise polynomial form, in terms of coefficients and breakpoints.
There's a class named `ppform` in `scipy.interpolate`, and you probably can return an instance of that. I think there is a generic algorithm for converting from that format to B-splines, but the pp representation will also work fine as-is. Unfortunately, the current ppform implementation is crappy (quadratic memory and time requirement in evaluation . . .), and should be rewritten e.g. in Cython. During the scipy sprint @ euroscipy there was discussion about fixing the situation (i.e. make the ppform implementation sane, and deprecate polyint.PiecewisePolynomial, which seems inefficient), but this was not yet done. I think Evgeni Burovski was interested in tackling this, so if you want to also attack the problem, you can coordinate. -- Pauli Virtanen From ndbecker2 at gmail.com Tue Sep 10 11:33:46 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 10 Sep 2013 11:33:46 -0400 Subject: [SciPy-Dev] Akima Interpolation References: <521F7051.7090102@hilboll.de> Message-ID: This could be a useful source: http://www.alglib.net/interpolation/spline3.php From argriffi at ncsu.edu Tue Sep 10 11:40:32 2013 From: argriffi at ncsu.edu (alex) Date: Tue, 10 Sep 2013 11:40:32 -0400 Subject: [SciPy-Dev] Akima Interpolation In-Reply-To: References: <521F7051.7090102@hilboll.de> Message-ID: On Tue, Sep 10, 2013 at 11:33 AM, Neal Becker wrote: > This could be a useful source: > > http://www.alglib.net/interpolation/spline3.php It looks like GPL. http://www.alglib.net/faq.php """ >Can I distribute ALGLIB Free Edition as a part of a commercial application? Almost surely - no. """ I'm pretty sure scipy will only accept things that Enthought and Continuum Analytics can distribute as part of their commercial applications, so this probably won't work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Sep 10 11:46:08 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 10 Sep 2013 16:46:08 +0100 Subject: [SciPy-Dev] Akima Interpolation In-Reply-To: References: <521F7051.7090102@hilboll.de> Message-ID: On Tue, Sep 10, 2013 at 4:40 PM, alex wrote: > > On Tue, Sep 10, 2013 at 11:33 AM, Neal Becker wrote: >> >> This could be a useful source: >> >> http://www.alglib.net/interpolation/spline3.php > > > It looks like GPL. > http://www.alglib.net/faq.php > > """ > >Can I distribute ALGLIB Free Edition as a part of a commercial application? > > Almost surely - no. > > """ > > I'm pretty sure scipy will only accept things that Enthought and Continuum Analytics can distribute as part of their commercial applications, so this probably won't work. Neither Enthought nor Continuum Analytics control this policy. But yes, scipy is a roughly-BSD-licensed project and does not accept code with significantly more restrictive licenses. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Tue Sep 10 13:13:22 2013 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 10 Sep 2013 18:13:22 +0100 Subject: [SciPy-Dev] Akima Interpolation In-Reply-To: <522EEC5E.2060804@hilboll.de> References: <521F7051.7090102@hilboll.de> <522EEC5E.2060804@hilboll.de> Message-ID: Christoph Gohlke has a python-friendly implementation: http://www.lfd.uci.edu/~gohlke/code/akima.c.html Quick googling also shows http://code.google.com/p/miyoshi/source/browse/trunk/common/common.f90?r=88 Which seems to be MIT-licensed. Haven't read the code though. 
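For what it's worth, the core of the algorithm is small enough that licensing may matter less than it seems: the Akima slopes can be written in a few lines of numpy. The sketch below follows the formula in Akima's 1970 paper and is written from the paper, not taken from any of the implementations linked above; combined with a cubic Hermite evaluation (and eventually the pp-to-B-spline conversion discussed earlier) this is essentially the whole method:

    import numpy as np

    def akima_slopes(x, y):
        # Secant slopes of the data, length n-1.
        m = np.diff(y) / np.diff(x)
        # Extend by linear extrapolation at both ends (Akima's choice);
        # needs at least 3 data points.
        mm = np.empty(m.size + 4)
        mm[2:-2] = m
        mm[1] = 2.0 * m[0] - m[1]
        mm[0] = 2.0 * mm[1] - m[0]
        mm[-2] = 2.0 * m[-1] - m[-2]
        mm[-1] = 2.0 * mm[-2] - m[-1]
        # Akima weights: w1 = |m_{i+1} - m_i|, w2 = |m_{i-1} - m_{i-2}|.
        w1 = np.abs(mm[3:] - mm[2:-1])
        w2 = np.abs(mm[1:-2] - mm[:-3])
        denom = w1 + w2
        # Average rule where both weights vanish (locally linear data).
        t = 0.5 * (mm[1:-2] + mm[2:-1])
        nz = denom > 0
        t[nz] = (w1[nz] * mm[1:-2][nz] + w2[nz] * mm[2:-1][nz]) / denom[nz]
        return t

The interpolant on each interval is then the cubic Hermite polynomial matching (y_i, t_i) and (y_{i+1}, t_{i+1}), which is already in pp form.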
On Tue, Sep 10, 2013 at 10:54 AM, Andreas Hilboll wrote: > On 30.08.2013 18:15, Pauli Virtanen wrote: > > 29.08.2013 19:01, Andreas Hilboll kirjoitti: > >> I'd like to have Akima interpolation in scipy.interpolate, as I > >> couldn't find any. > > > >> 1.) Did I miss something and there actually *is* an Akima > >> interpolation in scipy? 2.) Is there interest in having Akima > >> interpolation in Scipy? 3.) Does anyone have a good recommendation > >> which implementation I should consider for pulling into Scipy? > > > > 1) I don't think so, unless pchip is related. > > > > 2) Perhaps. Is it useful? > > I would claim yes. Akima splines are more robust to outliers, and I > think they would make a nice addition > > > > > 3) No idea. > > > > One thing to look out: based on a quick look, Akima interpolation is a > > form of spline interpolation, so it is probably possible to represent > > the result as a B-spline. If so, FITPACK's spline representation > > should be used, i.e., the same (t, c, k) format as in splrep. > > > > This allows reusing `splev` for evaluating the spline values, which > > will make life easier later on when we clean up scipy.interpolate. > > I agree it would be nice to store the spline in the same tck format as > used by FITPACK. Unfortunately, all references I could find directly > compute the polynomial coefficients of the interpolants, and I couldn't > find a refernce about how to derive the B-spline coefficients, i.e., the > c, from the polynomial coefficients. I checked the bspline work by > Chuck, but there isn't any B-spline coeffient calculation from the pp > representation (yet?). Any ideas where to look? > > Andreas. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From micky.latowicki at mobileye.com Wed Sep 11 07:28:33 2013 From: micky.latowicki at mobileye.com (Micky Latowicki) Date: Wed, 11 Sep 2013 14:28:33 +0300 Subject: [SciPy-Dev] tol parameter to minimize_scalar is inconsistent Message-ID: <523053E1.7010205@mobileye.com> An HTML attachment was scrubbed... URL: From micky.latowicki at mobileye.com Wed Sep 11 13:15:04 2013 From: micky.latowicki at mobileye.com (Micky Latowicki) Date: Wed, 11 Sep 2013 20:15:04 +0300 Subject: [SciPy-Dev] tol parameter to minimize_scalar is inconsistent [without HTML this time] Message-ID: <5230A518.3030601@mobileye.com> [sorry for the HTML in the first try] Hi, I'm a longtime user of scipy and greatly appreciate your work. I'd like to ask/suggest something about minimize_scalar. It seems to me that the tol parameter is relative tolerance when the method is 'brent' and absolute tolerance when the method is 'bounded'. Isn't this confusing? Perhaps this parameter should be deprecated, until both methods can support either relative or absolute tolerance? Meanwhile, parameters with names such as 'abs_xtol' and 'rel_xtol' can be introduced, and invalid combinations of parameters and method would be rejected with a ValueError? Cheers. This mail was sent via Mail-SeCure system. ************************************************************************************ This footnote confirms that this email message has been scanned by PineApp Mail-SeCure for the presence of malicious code, vandals & computer viruses. 
************************************************************************************

From denis at laxalde.org Thu Sep 12 02:08:34 2013
From: denis at laxalde.org (Denis Laxalde)
Date: Thu, 12 Sep 2013 08:08:34 +0200
Subject: [SciPy-Dev] tol parameter to minimize_scalar is inconsistent [without HTML this time]
In-Reply-To: <5230A518.3030601@mobileye.com>
References: <5230A518.3030601@mobileye.com>
Message-ID: <52315A62.3010207@laxalde.org>

Hi,

Micky Latowicki wrote:
> I'd like to ask/suggest something about minimize_scalar. It seems to me
> that the tol parameter is relative tolerance when the method is 'brent'
> and absolute tolerance when the method is 'bounded'. Isn't this confusing?

This appears to be correct. It is indeed confusing. You might want to submit a bug report about this. See http://www.scipy.org/scipylib/bug-report.html.

> Perhaps this parameter should be deprecated, until both methods can
> support either relative or absolute tolerance? Meanwhile, parameters
> with names such as 'abs_xtol' and 'rel_xtol' can be introduced, and
> invalid combinations of parameters and method would be rejected with a
> ValueError?

This should at least be documented. On the other hand, adding support for both relative and absolute tolerances in underlying solvers might not be too hard.

Greetings,
Denis

From josef.pktd at gmail.com Thu Sep 12 10:16:00 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 12 Sep 2013 10:16:00 -0400
Subject: [SciPy-Dev] recent scipy.stats work
Message-ID:

Evgeni (mainly) has been busy improving stats.distributions.
stats.distributions is on the way to losing its remaining bugs in higher moments and entropy. :( and is getting a reliable test suite for them.

Here is a, maybe incomplete, list to see if we want to backport any of them to 0.13

https://github.com/scipy/scipy/pull/2801 fix beta_gen.fit (Warren)

https://github.com/scipy/scipy/pull/2845 open, numerical improvement (but also vonmises_line)
https://github.com/scipy/scipy/pull/2774 open, bugfix for entropy

docs
https://github.com/scipy/scipy/pull/2814

not bugfix
https://github.com/scipy/scipy/pull/2822 speed improvement
https://github.com/scipy/scipy/pull/2841 numerical improvements

not ready
https://github.com/scipy/scipy/pull/2821 fixing moments
https://github.com/scipy/scipy/pull/2790

Josef

From pav at iki.fi Thu Sep 12 13:16:43 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 12 Sep 2013 20:16:43 +0300
Subject: [SciPy-Dev] tol parameter to minimize_scalar is inconsistent [without HTML this time]
In-Reply-To: <52315A62.3010207@laxalde.org>
References: <5230A518.3030601@mobileye.com> <52315A62.3010207@laxalde.org>
Message-ID:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

12.09.2013 09:08, Denis Laxalde kirjoitti:
[clip: tol in minimize_scalar]
> This should at least be documented. On the other hand, adding
> support for both relative and absolute tolerances in underlying
> solvers might not be too hard.

It would probably be best to harmonize tol to work in the way it does in `minimize`, and provide per-solver parameters for more special tolerance needs.
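To make the current inconsistency concrete: the same `tol` value means different things depending on `method`. The option name in the last comment below is purely hypothetical, nothing like it exists in scipy today:

    from scipy.optimize import minimize_scalar

    f = lambda x: (x - 2.0) ** 2

    # 'brent': tol is interpreted as a *relative* tolerance on x
    res1 = minimize_scalar(f, method='brent', tol=1e-8)

    # 'bounded': the same tol is an *absolute* tolerance on x
    res2 = minimize_scalar(f, bounds=(0.0, 10.0), method='bounded', tol=1e-8)

    # One possible harmonized spelling (hypothetical, not implemented):
    # minimize_scalar(f, method='brent', options={'rel_xtol': 1e-8})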
- --
Pauli Virtanen

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlIx9vMACgkQ6BQxb7O0pWBuqwCdEK1K6cEhpe+hY/7rf+8Z53y0
ISgAn1lB++ttOWeO7zABCheimLVo2Mbh
=jIag
-----END PGP SIGNATURE-----

From ralf.gommers at gmail.com Thu Sep 12 14:38:34 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 12 Sep 2013 18:38:34 +0000
Subject: [SciPy-Dev] recent scipy.stats work
In-Reply-To: References: Message-ID:

On Thu, Sep 12, 2013 at 2:16 PM, <josef.pktd at gmail.com> wrote:

> Evgeni (mainly) has been busy to improve stats.distributions
> stats.distributions is on the way to losing it's remaining bugs in
> higher moments and entropy. :( and is getting a reliable test suite
> for them.
>

:( --> :)

> Here is a, maybe incomplete, list to see if we want to backport any of
> them to 0.13
>

Thanks for the overview. I'd like to limit the number of backports to essential stuff - based on your descriptions below that's maybe only the beta_gen.fit and entropy bug fixes. I'll try to look at it in more detail the coming weekend.

Cheers,
Ralf

> https://github.com/scipy/scipy/pull/2801 fix beta_gen.fit (Warren)
>
> https://github.com/scipy/scipy/pull/2845 open, numerical improvemnt
> (but also vonmises_line)
> https://github.com/scipy/scipy/pull/2774 open, bugfix for entropy
>
> docs
> https://github.com/scipy/scipy/pull/2814
>
> not bugfix
> https://github.com/scipy/scipy/pull/2822 speed improvement
> https://github.com/scipy/scipy/pull/2841 numerical improvements
>
> not ready
> https://github.com/scipy/scipy/pull/2821 fixing moments
> https://github.com/scipy/scipy/pull/2790
>
> Josef
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From biosap at gmail.com Fri Sep 13 01:43:12 2013
From: biosap at gmail.com (Micky Latowicki)
Date: Fri, 13 Sep 2013 05:43:12 +0000 (UTC)
Subject: [SciPy-Dev] tol parameter to minimize_scalar is inconsistent [without HTML this time]
References: <5230A518.3030601@mobileye.com> <52315A62.3010207@laxalde.org>
Message-ID:

Pauli Virtanen <pav at iki.fi> writes:
>
> ...
>
> It would probably be best to harmonize tol to work in the way it does
> in `minimize`, and provide per-solver parameters for more special
> tolerance needs.
>

I'd prefer an interface where the meaning of a parameter does not depend on the value of another. I'd like to be able to change the tolerance and the method separately. Optimally, the user should be able to specify both relative and absolute tolerance, and the selected method will use the tolerance it supports.

I've created a github issue to work on this, and intend to submit a pull request along the lines of my suggestion. Any further comments would be welcome.

https://github.com/scipy/scipy/issues/2859

From charlesr.harris at gmail.com Fri Sep 13 22:47:41 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 13 Sep 2013 20:47:41 -0600
Subject: [SciPy-Dev] reported error for numpy 1.8
Message-ID:

Hi all,

I don't know if this is known or not, so posting here before opening an issue. This shows up in scipy master as well as 0.12, along with a ton of (expected) deprecation warnings for the latter.
Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/charris/.local/lib/python2.7/site-packages/scipy/io/matlab/tests/test_mio.py", line 301, in _load_check_case matdict = loadmat(file_name, struct_as_record=True) File "/home/charris/.local/lib/python2.7/site-packages/scipy/io/matlab/mio.py", line 126, in loadmat matfile_dict = MR.get_variables(variable_names) File "/home/charris/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5.py", line 288, in get_variables res = self.read_var_array(hdr, process) File "/home/charris/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5.py", line 248, in read_var_array return self._matrix_reader.array_from_header(header, process) File "mio5_utils.pyx", line 616, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:5889) File "mio5_utils.pyx", line 657, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:5446) File "mio5_utils.pyx", line 814, in scipy.io.matlab.mio5_utils.VarReader5.read_char (scipy/io/matlab/mio5_utils.c:7270) TypeError: buffer is too small for requested array Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Sat Sep 14 08:15:22 2013 From: ckkart at hoc.net (Christian K.) Date: Sat, 14 Sep 2013 09:15:22 -0300 Subject: [SciPy-Dev] ANN: 1.8.0b2 release. In-Reply-To: References: Message-ID: Am 08.09.13 16:14, schrieb Charles R Harris: > Hi all, > > I'm happy to announce the second beta release of Numpy 1.8.0. This > release should solve the Windows problems encountered in the first beta. > Many thanks to Christolph Gohlke and Julian Taylor for their hard work > in getting those issues settled. > > It would be good if folks running OS X could try out this release and > report any issues on the numpy-dev mailing list. Unfortunately the files > still need to be installed from source as dmg files are not avalable at > this time. 
Build works (built with latest Anaconda python) but the tests fail on my machine (OSX 10.8.4): FAIL: test_mode_raw (test_linalg.TestQR) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/linalg/tests/test_linalg.py", line 798, in test_mode_raw old_assert_almost_equal(h, h1, decimal=8) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 454, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 8 decimals (mismatch 83.3333333333%) x: array([[ 5.91607978, -0.61024233, -1.01707056], [ 7.43735744, 0.82807867, -3.21391305]]) y: array([[-5.91607978, 0.43377175, 0.72295291], [-7.43735744, 0.82807867, 0.89262383]]) ====================================================================== FAIL: test_linalg.test_xerbla ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func return f(*args, **kwargs) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/linalg/tests/test_linalg.py", line 925, in test_xerbla assert_(False) File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 44, in assert_ raise AssertionError(msg) AssertionError ---------------------------------------------------------------------- Ran 5235 tests in 264.167s In [7]: np.__config__.show() atlas_threads_info: NOT AVAILABLE blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] atlas_blas_threads_info: NOT AVAILABLE openblas_info: NOT AVAILABLE lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] atlas_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: NOT AVAILABLE mkl_info: NOT AVAILABLE compilers: GNU Fortran (GCC) 4.2.3 i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) Should I use another compiler? Regards, Christian From cimrman3 at ntc.zcu.cz Wed Sep 18 10:20:27 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 18 Sep 2013 16:20:27 +0200 Subject: [SciPy-Dev] ANN: SfePy 2013.3 Message-ID: <5239B6AB.3050208@ntc.zcu.cz> I am pleased to announce release 2013.3 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license. 
Home page: http://sfepy.org
Downloads, mailing list, wiki: http://code.google.com/p/sfepy/
Git (source) repository, issue tracker: http://github.com/sfepy

Highlights of this release
--------------------------

- implementation of Mesh topology data structures in C
- implementation of regions based on C Mesh (*)
- MultiProblem solver for conjugate solution of subproblems
- new advanced examples (vibro-acoustics, Stokes flow with slip conditions)

(*) Warning: region selection syntax has been changed in a principal way, see [1]. Besides the simple renaming, all regions meant for boundary conditions or boundary/surface integrals need to have their kind set explicitly to 'facet' (or 'edge' in 2D, 'face' in 3D).

[1] http://sfepy.org/doc-devel/users_guide.html#regions

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order): Vladimír Lukeš

From ralf.gommers at gmail.com Wed Sep 18 17:19:16 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 18 Sep 2013 23:19:16 +0200
Subject: [SciPy-Dev] welcome Evgeni Burovski to the scipy team
Message-ID:

Hi all,

On behalf of the scipy developers I'd like to welcome Evgeni Burovski as a member of the core dev team. Evgeni has (among other things) spent a lot of effort over the last months on making the statistical distributions more user-friendly and accurate: http://www.ohloh.net/p/scipy/contributors/21026013018665. I'm sure more good stuff will follow:)

Cheers,
Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at gmail.com Wed Sep 18 17:24:53 2013
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 18 Sep 2013 17:24:53 -0400
Subject: [SciPy-Dev] welcome Evgeni Burovski to the scipy team
In-Reply-To: References: Message-ID:

On Wed, Sep 18, 2013 at 5:19 PM, Ralf Gommers wrote:

> Hi all,
>
> On behalf of the scipy developers I'd like to welcome Evgeni Burovski as
> a member of the core dev team. Evgeni has (among other things) spent a lot
> of effort over the last months on making the statistical distributions more
> user-friendly and accurate:
> http://www.ohloh.net/p/scipy/contributors/21026013018665. I'm sure more
> good stuff will follow:)
>
> Cheers,
> Ralf
>

Welcome aboard, Evgeni!
Thanks for all the great work so far. > > Warren > > > >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Sep 18 18:13:55 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 Sep 2013 18:13:55 -0400 Subject: [SciPy-Dev] welcome Evgeni Burovski to the scipy team In-Reply-To: References: Message-ID: On Wed, Sep 18, 2013 at 5:28 PM, David Cournapeau wrote: > Welcome and keep up the good work ! > > > On Wed, Sep 18, 2013 at 10:24 PM, Warren Weckesser > wrote: >> >> >> On Wed, Sep 18, 2013 at 5:19 PM, Ralf Gommers >> wrote: >>> >>> Hi all, >>> >>> On behalf of the scipy developers I'd like to welcome Evgeni Burovski as >>> a member of the core dev team. Evgeni has (among other things) spent a lot >>> of effort over the last months on making the statistical distributions more >>> user-friendly and accurate: >>> http://www.ohloh.net/p/scipy/contributors/21026013018665. I'm sure more good >>> stuff will follow:) Welcome Evgeni, I'm very glad we have Eveni working on scipy. Besides interface, numerical and other improvements, Evgeni extended the test suite to entropy and moments, and fixed the bugs in the individual distributions. Longstanding incorrect results in entropy and higher moments, skew and kurtosis are or will be soon gone. That was the last big gap in the cleanup of the stats.distributions that was left. (There are still 153 issues and PRs open for scipy.stats right now. So, we won't run out of work anytime soon. :) Josef >>> >>> Cheers, >>> Ralf >>> >> >> >> Welcome aboard, Evgeni! Thanks for all the great work so far. >> >> Warren >> >> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From scopatz at gmail.com Wed Sep 18 20:49:25 2013 From: scopatz at gmail.com (Anthony Scopatz) Date: Wed, 18 Sep 2013 20:49:25 -0400 Subject: [SciPy-Dev] welcome Evgeni Burovski to the scipy team In-Reply-To: References: Message-ID: Welcome! On Wed, Sep 18, 2013 at 6:13 PM, wrote: > On Wed, Sep 18, 2013 at 5:28 PM, David Cournapeau > wrote: > > Welcome and keep up the good work ! > > > > > > On Wed, Sep 18, 2013 at 10:24 PM, Warren Weckesser > > wrote: > >> > >> > >> On Wed, Sep 18, 2013 at 5:19 PM, Ralf Gommers > >> wrote: > >>> > >>> Hi all, > >>> > >>> On behalf of the scipy developers I'd like to welcome Evgeni Burovski > as > >>> a member of the core dev team. Evgeni has (among other things) spent a > lot > >>> of effort over the last months on making the statistical distributions > more > >>> user-friendly and accurate: > >>> http://www.ohloh.net/p/scipy/contributors/21026013018665. I'm sure > more good > >>> stuff will follow:) > > > Welcome Evgeni, > > I'm very glad we have Eveni working on scipy. 
> > Besides interface, numerical and other improvements, Evgeni extended > the test suite to entropy and moments, and fixed the bugs in the > individual distributions. Longstanding incorrect results in entropy > and higher moments, skew and kurtosis are or will be soon gone. > That was the last big gap in the cleanup of the stats.distributions > that was left. > > (There are still 153 issues and PRs open for scipy.stats right now. > So, we won't run out of work anytime soon. :) > > Josef > > >>> > >>> Cheers, > >>> Ralf > >>> > >> > >> > >> Welcome aboard, Evgeni! Thanks for all the great work so far. > >> > >> Warren > >> > >> > >>> > >>> _______________________________________________ > >>> SciPy-Dev mailing list > >>> SciPy-Dev at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>> > >> > >> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at continuum.io Thu Sep 19 01:41:45 2013 From: travis at continuum.io (Travis Oliphant) Date: Thu, 19 Sep 2013 00:41:45 -0500 Subject: [SciPy-Dev] welcome Evgeni Burovski to the scipy team In-Reply-To: References: Message-ID: Thank you for all your hard work. It is greatly appreciated. -Travis On Wed, Sep 18, 2013 at 4:19 PM, Ralf Gommers wrote: > Hi all, > > On behalf of the scipy developers I'd like to welcome Evgeni Burovski as > a member of the core dev team. Evgeni has (among other things) spent a lot > of effort over the last months on making the statistical distributions more > user-friendly and accurate: > http://www.ohloh.net/p/scipy/contributors/21026013018665. I'm sure more > good stuff will follow:) > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Travis Oliphant Continuum Analytics, Inc. http://www.continuum.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Fri Sep 20 13:49:45 2013 From: suryak at ieee.org (Surya Kasturi) Date: Fri, 20 Sep 2013 23:19:45 +0530 Subject: [SciPy-Dev] Voting functionality on scipy central Message-ID: Hi there, At this time, we thought of giving a relook to Reputation system we built for SciPy Central. There are some of design decisions that have to be looked into 1. Should new revisions carry on reputation of previous revision? or 2. Should we attach reputation to "Submission" as a whole or each "Revision" individually? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Sep 20 14:32:17 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 20 Sep 2013 14:32:17 -0400 Subject: [SciPy-Dev] Voting functionality on scipy central In-Reply-To: References: Message-ID: On Fri, Sep 20, 2013 at 1:49 PM, Surya Kasturi wrote: > Hi there, > > At this time, we thought of giving a relook to Reputation system we built > for SciPy Central. There are some of design decisions that have to be looked > into > > 1. 
Should new revisions carry on reputation of previous revision? > > or > > 2. Should we attach reputation to "Submission" as a whole or each "Revision" > individually? What is reputation based upon? Does it update automatically over time? I would prefer carrying over reputation across revisions, as long as ancient reputation doesn't have a full influence forever. another possibility is to give the submitter a choice whether to carry over reputation. Assuming revisions don't make code much worse, but might be a large improvement. Josef > > > Thanks > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From suryak at ieee.org Fri Sep 20 21:40:14 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 21 Sep 2013 07:10:14 +0530 Subject: [SciPy-Dev] Voting functionality on scipy central In-Reply-To: References: Message-ID: On Fri, Sep 20, 2013 at 11:19 PM, Surya Kasturi wrote: > Hi there, > > At this time, we thought of giving a relook to Reputation system we built > for SciPy Central. There are some of design decisions that have to be > looked into > > 1. Should new revisions carry on reputation of previous revision? > > or > > 2. Should we attach reputation to "Submission" as a whole or each > "Revision" individually? > > > Thanks > Also, I forgot to mention about how the current system we built (didn't merge into Master) is -- Each revision has its own reputation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Sep 21 11:11:28 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 21 Sep 2013 11:11:28 -0400 Subject: [SciPy-Dev] stats test coverage Message-ID: fresh from our CI and maintenance team an excerpt 85% build/testenv/lib/python2.7/site-packages/scipy/stats/_binned_statistic.py 393122104180.9 100% build/testenv/lib/python2.7/site-packages/scipy/stats/contingency.py 272373701.0 93% build/testenv/lib/python2.7/site-packages/scipy/stats/distributions.py 7960335631282280.9 100% build/testenv/lib/python2.7/site-packages/scipy/stats/__init__.py 346131301.0 79% build/testenv/lib/python2.7/site-packages/scipy/stats/kde.py 51312599260.8 61% build/testenv/lib/python2.7/site-packages/scipy/stats/morestats.py 16155803552250.6 67% build/testenv/lib/python2.7/site-packages/scipy/stats/mstats_basic.py 20508795912880.7 66% build/testenv/lib/python2.7/site-packages/scipy/stats/mstats_extras.py 466162107550.7 100% build/testenv/lib/python2.7/site-packages/scipy/stats/mstats.py 823301.0 94% build/testenv/lib/python2.7/site-packages/scipy/stats/_multivariate.py 49312511780.9 35% build/testenv/lib/python2.7/site-packages/scipy/stats/rv.py 72176110.4 83% build/testenv/lib/python2.7/site-packages/scipy/stats/stats.py 446310328541780.8 Josef From ralf.gommers at gmail.com Sat Sep 21 11:30:53 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 21 Sep 2013 17:30:53 +0200 Subject: [SciPy-Dev] stats test coverage In-Reply-To: References: Message-ID: On Sat, Sep 21, 2013 at 5:11 PM, wrote: > fresh from our CI and maintenance team > > an excerpt > > 85% > build/testenv/lib/python2.7/site-packages/scipy/stats/_binned_statistic.py > 393122104180.9 > 100% build/testenv/lib/python2.7/site-packages/scipy/stats/contingency.py > 272373701.0 > 93% > build/testenv/lib/python2.7/site-packages/scipy/stats/distributions.py > 7960335631282280.9 > 100% build/testenv/lib/python2.7/site-packages/scipy/stats/__init__.py > 346131301.0 > 
79% build/testenv/lib/python2.7/site-packages/scipy/stats/kde.py > 51312599260.8 > 61% build/testenv/lib/python2.7/site-packages/scipy/stats/morestats.py > 16155803552250.6 > 67% build/testenv/lib/python2.7/site-packages/scipy/stats/mstats_basic.py > 20508795912880.7 > 66% > build/testenv/lib/python2.7/site-packages/scipy/stats/mstats_extras.py > 466162107550.7 > 100% build/testenv/lib/python2.7/site-packages/scipy/stats/mstats.py > 823301.0 > 94% > build/testenv/lib/python2.7/site-packages/scipy/stats/_multivariate.py > 49312511780.9 > 35% build/testenv/lib/python2.7/site-packages/scipy/stats/rv.py > 72176110.4 > 83% build/testenv/lib/python2.7/site-packages/scipy/stats/stats.py > 446310328541780.8 > Guess that's a "please go improve" plea:) Here is the overview: https://coveralls.io/r/scipy/scipy. Then click on the build number or commit message to go to per-file stats. Clicking on the file name then give the source code with lines highlighted in green or red. Quite useful to check (your own) PRs. The number for all of scipy is 87% now. That includes test files so is skewed upwards a bit, but overall I'd say it's not bad. Now to add a couple of % for each release. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Sep 21 11:44:58 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 21 Sep 2013 18:44:58 +0300 Subject: [SciPy-Dev] stats test coverage In-Reply-To: References: Message-ID: 21.09.2013 18:30, Ralf Gommers kirjoitti: [clip] > Here is the overview: https://coveralls.io/r/scipy/scipy. Then > click on the build number or commit message to go to per-file > stats. Clicking on the file name then give the source code with > lines highlighted in green or red. Quite useful to check (your own) > PRs. To do this on your own machine, install first the coverage package via one of sudo apt-get install python-coverage pip install --user coverage ... and then run in scipy source dir python runtests.py --coverage Check out the HTML files under build/coverage -- Pauli Virtanen From ralf.gommers at gmail.com Sat Sep 21 14:54:09 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 21 Sep 2013 20:54:09 +0200 Subject: [SciPy-Dev] Scipy 1.0 roadmap Message-ID: Hi all, At EuroScipy Pauli, David and I sat together and drafted a roadmap for Scipy 1.0. We then discussed this offline with some of the other currently most active core devs, to get it into a state that's ready for discussion on this list. So here it is: https://github.com/scipy/scipy/pull/2908 Our aim is for this roadmap to help guide us towards a 1.0 version, which will contain only code that we consider to be "of sufficient quality". Also, it will help to communicate to new and potential developers where their contributions are especially needed. In order to discuss/review this roadmap without generating a monster thread, I propose the following: - topics like "do we need a roadmap?" or "what does 1.0-ready really mean?" are discussed on this thread. - things in the General section (API changes, documentation/test/build guidelines, etc.), are discussed on this thread as well. - for discussion of module-specific content, start a new thread and name it "1.0 roadmap: ". - for minor things, comment on the PR. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
From ralf.gommers at gmail.com Sat Sep 21 14:57:06 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 21 Sep 2013 20:57:06 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sat, Sep 21, 2013 at 8:54 PM, Ralf Gommers wrote:
> At EuroScipy Pauli, David and I sat together and drafted a roadmap for
> Scipy 1.0. [...] So here it is: https://github.com/scipy/scipy/pull/2908

Github may not survive forever, so for the record here is the full text of the draft roadmap:

Roadmap to Scipy 1.0
====================

This roadmap provides a high-level view on what is needed per scipy submodule in terms of new functionality, bug fixes, etc. before we can release a ``1.0`` version of Scipy. Things not mentioned in this roadmap are not necessarily unimportant or out of scope, however we (the Scipy developers) want to provide to our users and contributors a clear picture of where Scipy is going and where help is needed most urgently. When a module is in a 1.0-ready state, it means that it has the functionality we consider essential, and an API, code quality, documentation and tests of a high enough standard.

General
-------

This roadmap will be evolving together with Scipy. Updates can be submitted as pull requests and, unless they're very minor, have to be discussed on the scipy-dev mailing list.

API changes
```````````

In general, we want to take advantage of the major version change to fix the known warts in the API; the change from 0.x.x to 1.x.x is the chance to fix those API issues that we all know are ugly. Example: unify the convention for specifying tolerances (including absolute, relative, argument and function value tolerances) of the optimization functions (see the sketch below). More API issues will be noted in the module sections below.

It should be made clearer what is public and what is private in scipy. Everything private should be prefixed with an underscore as much as possible. This is now done consistently when we add new code, but for 1.0 it should also be done for existing code.
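As a rough illustration of the tolerance unification mentioned above: the idea is that every optimizer accepts the same tolerance keywords with the same meaning. The sketch below is illustrative only and nothing about the final convention has been decided; it wraps the existing ``fmin``, whose ``xtol``/``ftol`` keywords happen to follow the proposed naming already:

    from scipy.optimize import fmin

    def unified_minimize(fun, x0, xtol=1e-8, ftol=1e-8):
        # Hypothetical unified front end: map the common xtol/ftol keywords
        # onto each solver's own names. fmin already uses xtol/ftol; other
        # solvers (e.g. fmin_bfgs with its gtol) would need a translation.
        return fmin(fun, x0, xtol=xtol, ftol=ftol, disp=False)

    xmin = unified_minimize(lambda x: (x - 2.0) ** 2, x0=0.0)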
Test coverage
`````````````

Test coverage of code added in the last few years is quite good, and we aim for a high coverage for all new code that is added. However, there is still a significant amount of old code for which coverage is poor. Bringing that up to the current standard is probably not realistic, but we should plug the biggest holes. Additionally, the coverage should be tracked over time and we should ensure it only goes up.

Besides coverage there is also the issue of correctness - older code may have a few tests that provide decent statement coverage, but that doesn't necessarily say much about whether the code does what it says on the box. Therefore code review of some parts of the code (``stats`` and ``signal`` in particular) is necessary.

Documentation
`````````````

The documentation is in decent shape. Expanding current docstrings and putting them in the standard numpy format should continue, so that the number of reST errors and glitches in the html docs decreases. Most modules also have a tutorial in the reference guide that is a good introduction, however there are a few missing or incomplete tutorials - this should be fixed.

Other
`````

Scipy 1.0 will likely contain more backwards-incompatible changes than a minor release. Therefore we will have a longer-lived maintenance branch of the last 0.X release.

It's not clear how much functionality can be Cythonized without making the .so files too large. This needs measuring.

Bento will be officially supported as the second build tool besides distutils. At the moment it still has an experimental, use-at-your-own-risk status, but that has to change.

A more complete continuous integration setup is needed; at the moment we often find out right before a release that there are issues on some less often used platform or Python version. At least needed are a Windows, Linux and OS X build, coverage of the lowest and highest Python and Numpy versions that are supported, a Bento build and a PEP8 checker.

Modules
-------

cluster
```````

Most of the cluster module is a candidate for a Cython rewrite; this will speed up the code and it will be more maintainable than the current C code. The code should remain (or become) simple and easy to understand. Support for the arbitrary distance metrics in ``scipy.spatial`` is probably best left to scikit-learn or other more specialized libraries.

constants
`````````

This module is basically done, low-maintenance and without open issues.

fftpack
```````

Needed:

- solve issues with single precision: large errors, disabled for difficult sizes
- fix caching bug
- Bluestein algorithm nice to have, padding is an alternative
- deprecate fftpack.convolve as a public function (it was not meant to be public), and resolve the differences between ``signal.fftconvolve`` / ``fftpack.convolve`` / ``signal.convolve`` and ``numpy.convolve``

There's a large overlap with ``numpy.fft``. This duplication has to change (both are too widely used to deprecate one); in the documentation we should make clear that ``scipy.fftpack`` is preferred over ``numpy.fft``.

integrate
`````````

Needed for the ODE solvers:

- the documentation is pretty bad, needs fixing
- figure out if/how to integrate scikits.odes (Sundials wrapper)
- figure out what to deprecate

The numerical integration functions are in good shape, not much to do here.

interpolate
```````````

Needed:

- Transparent B-splines and their usage in the interpolation routines.
- Both fitpack and fitpack2 interfaces will be kept.
- splmake should go; it is a different spline representation --> we need exactly one
- interp1d/interp2d are somewhat ugly but widely used, so we keep them.
- Regular grid interpolation routines are needed.

io
--

wavfile (a minimal round-trip sketch follows below):

- PCM float will be supported, for anything else use audiolab or other specialized libraries.
- raise errors instead of warnings if data is not understood.

Other sub-modules (matlab, netcdf, idl, harwell-boeing, arff, matrix market) are in good shape.
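For reference, here is what the current integer-PCM round trip through ``scipy.io.wavfile`` looks like (the float-PCM support mentioned above is the part still to be added); a minimal sketch:

    import numpy as np
    from scipy.io import wavfile

    # Write one second of a 440 Hz tone as 16-bit PCM, then read it back.
    rate = 44100
    t = np.linspace(0., 1., rate)
    data = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
    wavfile.write('tone.wav', rate, data)
    rate_back, data_back = wavfile.read('tone.wav')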
lib
---

``scipy.lib`` contains nothing public anymore, so rename it to ``scipy._lib``.

linalg
``````

Needed:

- remove functions that duplicate numpy.linalg
- get_lapack_funcs should always use flapack
- cblas, clapack are deprecated, will go away
- wrap more lapack functions
- one too many funcs for LU decomposition, remove one

misc
````

``scipy.misc`` will be removed as a public module. The functions in it can be moved to other modules:

- pilutil, images : ndimage
- comb, factorials, logsumexp, pade : special
- doccer : move to scipy._lib
- info, who : these are in numpy
- derivative, central_diff_weights : remove, replace with more extensive functionality for numerical differentiation - likely in a new module ``scipy.diff``, as discussed in https://github.com/scipy/scipy/issues/2035

ndimage
```````

Underlying ndimage is a powerful interpolation engine. Unfortunately, it was never decided whether to use a pixel model (``(1, 1)`` elements with centers ``(0.5, 0.5)``) or a data point model (values at points on a grid). Over time, it seems that the data point model is better defined and easier to implement. We therefore propose to move to this data representation for 1.0, and to vet all interpolation code to ensure that boundary values, transformations, etc. are correctly computed. Addressing this will close several issues, including #1323, #1903, #2045 and #2640.

odr
---

Rename the module to ``regression`` or ``fitting``, and include ``optimize.curve_fit``. This module will then provide a home for other fitting functionality - what exactly is needed has to be worked out in more detail; a discussion can be found at https://github.com/scipy/scipy/pull/448.

optimize
````````

Overall this module is in reasonably good shape, however it is missing a few more good global optimizers as well as large-scale optimizers. These should be added. Other things that are needed:

- deprecate ``anneal``, it just doesn't work well enough.
- deprecate the ``fmin_*`` functions in the documentation, ``minimize`` is preferred.
- clearly define what's out of scope for this module.

signal
``````

*Convolution and correlation*: (Relevant functions are convolve, correlate, fftconvolve, convolve2d, correlate2d, and sepfir2d.) Eliminate the overlap with `ndimage` (and elsewhere). From `numpy`, `scipy.signal` and `scipy.ndimage` (and anywhere else we find them), pick the "best of class" for 1-D, 2-D and n-d convolution and correlation, put the implementation somewhere, and use that consistently throughout scipy.

*B-splines*: (Relevant functions are bspline, cubic, quadratic, gauss_spline, cspline1d, qspline1d, cspline2d, qspline2d, cspline1d_eval, and spline_filter.) Move the good stuff to `interpolate` (with appropriate API changes to match how things are done in `interpolate`), and eliminate any duplication.

*Filter design*: merge `firwin` and `firwin2` so `firwin2` can be removed.

*Continuous-Time Linear Systems*: remove `lsim2`, `impulse2`, `step2`. Make `lsim`, `impulse` and `step` "just work" for any input system. Improve the performance of ltisys (fewer internal transformations between different representations).

*Wavelets*: add proper wavelets, including a discrete wavelet transform. What's there now doesn't make much sense.

sparse
``````

The sparse matrix formats are getting feature-complete but are slow ... reimplement parts in Cython?

- Small matrices are slower than PySparse, needs fixing

There are a lot of formats.
These should all be kept, but improvements/optimizations should go into CSR/CSC, which are the preferred formats. Don't emulate np.matrix behavior, drop 2-D?

sparse.csgraph
``````````````

This module is in good shape.

sparse.linalg
`````````````

Arpack is in good shape.

isolve:

- callback keyword is inconsistent
- tol keyword is broken, should be a relative tol
- Fortran code not re-entrant (but we don't solve that here, maybe re-use from PyKrylov)

dsolve:

- remove the umfpack wrapper due to license reasons
- add sparse Cholesky or incomplete Cholesky
- look at CHOLMOD

spatial
```````

KDTree/cKDTree and the QHull wrappers are in good shape. The distance module needs bug fixes in the distance metrics, and distance_wrap.c needs to be cleaned up (maybe rewritten in Cython).

special
```````

special has a lot of functions that need improvements in precision. All functions that are also implemented in mpmath can be tested against mpmath, and should match well. Things not in mpmath:

- cdflib
- some others

stats
`````

This is a large module with by far the most open issues. It has improved a lot over the past few releases, but more cleanup and rewriting of functions is needed. The Statistics Review milestone on Github gives a reasonable overview of which functions need checking, documentation and tests.

``stats.distributions``:

- skew/kurtosis of a number of distributions needs fixing
- fix the generic docstring examples, they should be valid Python and make sense for each distribution
- document subclassing of distributions even better, and make issues with the state of instances clear.

All hypothesis tests should get a keyword 'alternative' where applicable (see ``stats.kstest`` for an example).

``gaussian_kde`` is in good shape but limited. It should probably not be expanded; this fits better in statsmodels (which already has a lot more KDE functionality).

``stats.mstats`` is a useful module for working with data with missing values. One problem it has, though, is that in many cases the functions have diverged from their counterparts in `scipy.stats`. The ``mstats`` functions should be updated so that the two sets of functions are consistent.

weave
`````

This is the only module that was not ported to Python 3. Effectively it's deprecated (not recommended for use in new code). In the future it should be removed from scipy (it can be made into a separate module).

From njs at pobox.com Sat Sep 21 18:35:48 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 21 Sep 2013 23:35:48 +0100
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sat, Sep 21, 2013 at 7:54 PM, Ralf Gommers wrote:
> - topics like "do we need a roadmap?" or "what does 1.0-ready really mean?"
> are discussed on this thread.

I would be curious what the answers are to these questions :-).

This looks like a big list with many good improvements on it, but I'm not sure what makes them "1.0 changes" instead of just "good changes we should do". Does 1.0 mean we can break a lot of stuff at once and get away with it? Does it mean that after that we're not allowed to change things like this ever again so we have to get it right (and maybe keep slipping the schedule until we're certain)? Does it mean no more releases until all the below things happen?
Or on the other extreme, does it mean that someone will keep an eye on this list, and at some point maybe in a few years when we notice that all of these things have happened, then the next release gets called "1.0" instead of "0.18" or whatever?

I think when you start talking about "1.0" people have very strong conflicting assumptions about what this "obviously" means, so...

-n

From lomegor at gmail.com Sat Sep 21 18:39:23 2013
From: lomegor at gmail.com (Sebastián Ventura)
Date: Sat, 21 Sep 2013 23:39:23 +0100
Subject: [SciPy-Dev] Voting functionality on scipy central
In-Reply-To: References: Message-ID:

On 20 September 2013 19:32, wrote:
> On Fri, Sep 20, 2013 at 1:49 PM, Surya Kasturi wrote:
> > At this time, we thought of taking another look at the Reputation system
> > we built for SciPy Central. There are some design decisions that have to
> > be looked into:
> >
> > 1. Should new revisions carry on reputation of previous revision?
> >
> > or
> >
> > 2. Should we attach reputation to "Submission" as a whole or each
> > "Revision" individually?
>
> What is reputation based upon? Does it update automatically over time?

Reputation is based on votes by logged-in people. There will be an up arrow and a down arrow on revisions.

Personally, I believe the best solution would be to have reputation be a field on revisions that gets copied (from the last revision) when a new revision is created for the same submission. But after the new revision is created, it is no longer linked to the past revision (so both of them can be voted on separately).

Sebastian

From josef.pktd at gmail.com Sat Sep 21 19:12:59 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 21 Sep 2013 19:12:59 -0400
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sat, Sep 21, 2013 at 6:35 PM, Nathaniel Smith wrote:
> This looks like a big list with many good improvements on it, but I'm
> not sure what makes them "1.0 changes" instead of just "good changes
> we should do". Does 1.0 mean we can break a lot of stuff at once and
> get away with it? Does it mean that after that we're not allowed to
> change things like this ever again so we have to get it right (and
> maybe keep slipping the schedule until we're certain)?

I haven't been to EuroScipy, so I don't know the original discussion. The main difference for me would be that I expect more stability afterwards.
Currently we still need to clean up code that requires deprecations and API changes pretty regularly (or cleanups that are cumbersome while maintaining backwards compatibility).

Several parts of scipy have "organically grown", sometimes with several competing and incompatible implementations, and some parts of the code were never reviewed and tested. IMO, these are the main problems with the current code that need to be cleaned up before we can call it a scipy 1.0. In the meantime, we still get new code, missing algorithms and improvements in many sub-packages.

> Does it mean
> no more releases until all the below things happen? Or on the other
> extreme, does it mean that someone will keep an eye on this list, and
> at some point maybe in a few years when we notice that all of these
> things have happened, then the next release gets called "1.0" instead
> of "0.18" or whatever?

I think it will be the latter: keep working as usual, with an eye on (or priority for) the things on the list, and release on a pretty regular schedule. How fast we can check off the items on the roadmap-to-1.0 list will depend on volunteer work.

Positive case: the wrong skew and kurtosis in stats.distributions have already been fixed, thanks to Evgeni's recent work. stats.distributions are in pretty good shape now, and some of the remaining issues are bonus points that don't need to hold up a 1.0.

Some other parts of scipy.stats have test coverage in the 60%-70% range, which is also a rough estimate of how many functions still need to be reviewed, tested and cleaned up (30%?) (clean up or delete legacy code? before 1.0)

Once we have these functions in 1.0 condition, I don't expect that they will change much anymore. Once a t-test has been properly written, we don't need to change it anymore (we can still add a few more bells).

My examples are stats, but the same applies (at least) to signal and interpolate.

my view,

Josef

> I think when you start talking about "1.0" people have very strong
> conflicting assumptions about what this "obviously" means, so...
>
> -n

From pav at iki.fi Sat Sep 21 19:19:51 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 22 Sep 2013 02:19:51 +0300
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

22.09.2013 01:35, Nathaniel Smith kirjoitti:
[clip]
> This looks like a big list with many good improvements on it, but I'm
> not sure what makes them "1.0 changes" instead of just "good changes
> we should do".
[clip]
> I think when you start talking about "1.0" people have very strong
> conflicting assumptions about what this "obviously" means, so...
In my mind, "1.0" for Scipy means that we cover a basic set of features needed for numerical science, and do it well. A definition by negation: at "1.0" we don't have

(i) stuff that's obviously crap
(ii) blank spots in commonly needed functionality
(iii) awkward usage patterns

It is of course a subjective matter what this means, but writing the known issues down in the roadmap is one way to define it.

Regarding API deprecations: I think breaking a lot of stuff at "1.0" at once is not a good idea, and I don't think that will happen. Rather, things progress as we have done so far --- if something is horribly awkward, a new API is introduced and the old one is deprecated. We can perhaps be slightly more aggressive in cleaning out crap at "1.0", but I don't think there'll be a qualitative difference. This is also for practical reasons, as we will not be able to fix everything in one go in any case.

Rather, one day in the distant future, it turns out that the next release is in a good enough shape to be called "1.0". The "0.x" roll up to that point.

-- Pauli Virtanen

From josef.pktd at gmail.com Sat Sep 21 19:23:39 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 21 Sep 2013 19:23:39 -0400
Subject: [SciPy-Dev] Voting functionality on scipy central
In-Reply-To: References: Message-ID:

On Sat, Sep 21, 2013 at 6:39 PM, Sebastián Ventura wrote:
> Reputation is based on votes by logged-in people. There will be an up arrow
> and a down arrow on revisions.
>
> Personally, I believe the best solution would be to have reputation be a
> field on revisions that gets copied (from the last revision) when a new
> revision is created for the same submission. But after the new revision is
> created, it is no longer linked to the past revision (so both of them can be
> voted on separately).

Sounds good to me.

I just realized that a submitter can start fresh, with a blank reputation, by renaming and submitting a new item instead of a revision, which might be useful if a submission gets improved over time and there are negative votes early on.

Josef

> Sebastian
>
>> I would prefer carrying over reputation across revisions, as long as
>> ancient reputation doesn't keep its full influence forever.
>>
>> Another possibility is to give the submitter a choice whether to carry
>> over reputation. This assumes that revisions don't make the code much
>> worse, while they might be a large improvement.
>> >> Josef >> >> > >> > >> > Thanks >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From jabooth at gmail.com Sat Sep 21 19:28:19 2013 From: jabooth at gmail.com (James Booth) Date: Sun, 22 Sep 2013 00:28:19 +0100 Subject: [SciPy-Dev] Scipy 1.0 roadmap In-Reply-To: References: Message-ID: I think it's great to have some rough goal posts set out for 1.0. Regardless of exactly what 1.0 means, by making a giant todo list you've made contributing to SciPy much more appealing - it now feels like there is a targeted effort to tidy the whole project up, and I for one would like to be a part of it. My name is James Booth and I'm a Comp Sci PhD student. I'm part of a team at Imperial College London that is leaning heavily on Scipy for our research - I'll take a look through the list in detail and see if I can't help out in some way. Best wishes, James On 22 September 2013 00:19, Pauli Virtanen wrote: > 22.09.2013 01:35, Nathaniel Smith kirjoitti: > > On Sat, Sep 21, 2013 at 7:54 PM, Ralf Gommers > wrote: > >> - topics like "do we need a roadmap?" or "what does 1.0-ready really > mean?" > >> are discussed on this thread. > > > > I would be curious what the answers are to these questions :-). > > > > This looks like a big list with many good improvements on it, but I'm > > not sure what makes them "1.0 changes" instead of just "good changes > > we should do". Does 1.0 mean we can break a lot of stuff at once and > > get away with it? Does it mean that after that we're not allowed to > > change things like this ever again so we have to get it right (and > > maybe keep slipping the schedule until we're certain)? Does it mean > > no more releases until all the below things happen? Or on the other > > extreme, does it mean that someone will keep an eye on this list, and > > at some point maybe in a few years when we notice that all of these > > things have happened, then the next release gets called "1.0" instead > > of "0.18" or whatever? > > > > I think when you start talking about "1.0" people have very strong > > conflicting assumptions about what this "obviously" means, so... > > In my mind, "1.0" for Scipy means that we cover a basic set of features > needed for numerical science, and do it well. A definition via negative > is: at "1.0" we don't have > > (i) stuff that's obviously crap > (ii) blank spots at commonly needed functionality > (iii) awkward usage patterns > > It is of course a subjective matter what this means, but writing the > known issues down in the roadmap is one way to define it. > > > Regarding API deprecations: I think breaking a lot of stuff at "1.0" at > once is not a good idea, and I don't think that will happen. Rather, > things progress as we have done so far --- if something is horribly > awkward, a new API is introduced and the old one is deprecated. We can > perhaps be slightly more aggressive in the cleaning out crap at "1.0", > but I don't think there'll be a qualitative difference. This is also for > practical reasons, as we will not be able to fix everything in one go in > any case. 
> Rather, one day in the distant future, it turns out that the next
> release is in a good enough shape to be called "1.0". The "0.x" roll up
> to that point.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From pav at iki.fi Sat Sep 21 19:29:14 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 22 Sep 2013 02:29:14 +0300
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

22.09.2013 02:12, josef.pktd at gmail.com kirjoitti:
[clip]
> Several parts of scipy have "organically grown", sometimes with
> several competing and incompatible implementations, some parts of the
> code were never reviewed and tested.
>
> IMO, these are the main problems with the current code that need to be
> cleaned up before we can call it a scipy 1.0.

I agree, this is the main issue to be cleaned up before 1.0.

There's some code there that is "research quality" [e.g. an O(N^2) algorithm in a place where there is an O(N) alternative and N is often big in real use cases], incompatible implementations of similar or the same things, and implementations that are missing commonly needed things. We've slowly sorted some of this crap out in the last few years, but there remains a number of things to fix. Those that we remembered are listed in the roadmap proposal.

-- Pauli Virtanen

From josef.pktd at gmail.com Sat Sep 21 21:02:59 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 21 Sep 2013 21:02:59 -0400
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sat, Sep 21, 2013 at 7:29 PM, Pauli Virtanen wrote:
> I agree, this is the main issue to be cleaned up before 1.0.
>
> There's some code there that is "research quality" [...], incompatible
> implementations of similar or the same things, and implementations that
> are missing commonly needed things. We've slowly sorted some of this crap
> out in the last few years, but there remains a number of things to fix.

to emphasize this point:

We can talk *now* of a roadmap because there have been many improvements in scipy in the last few years by old and many new contributors, so that we can almost see what still needs to be done.

Josef

Once there was a dark tunnel, and now we can see a glimmer.

From blake.a.griffith at gmail.com Sat Sep 21 23:03:35 2013
From: blake.a.griffith at gmail.com (Blake Griffith)
Date: Sat, 21 Sep 2013 22:03:35 -0500
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

> sparse
> ``````
> Don't emulate np.matrix behavior, drop 2-D?

What is meant by this? Emulate np.array instead?
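The np.matrix-style behavior in question can be seen with the existing csr_matrix, where ``*`` means matrix multiplication rather than the elementwise product that np.array users expect; a hypothetical ``csr_array`` would presumably flip that. A minimal sketch:

    import numpy as np
    from scipy.sparse import csr_matrix

    A = csr_matrix(np.array([[1, 0], [0, 2]]))
    v = np.array([1, 1])

    print(A * v)                     # matrix-vector product: [1 2]
    print(A.multiply(A).toarray())   # elementwise product needs .multiply()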
From cjordan1 at uw.edu Sun Sep 22 01:52:01 2013
From: cjordan1 at uw.edu (Christopher Jordan-Squire)
Date: Sat, 21 Sep 2013 22:52:01 -0700
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

For scipy stats, is there anything on the table regarding somehow unifying the sampling in numpy.random and the distributions in scipy.stats? I'm specifically thinking of two issues:

(1) There's a lot of duplication between numpy.random and scipy.stats, but with different interfaces. This seems like something that ideally would be reduced.

(2) The interface for the distributions in scipy.stats seems to explicitly be for scalar random variables, so there are no multivariate normals, multinomials, dirichlet, wishart, etc. Instead the sampling is in numpy.random, and pdf's aren't there.

Has this been discussed elsewhere?

On Sat, Sep 21, 2013 at 8:03 PM, Blake Griffith wrote:
>> sparse
>> ``````
>>
>> Don't emulate np.matrix behavior, drop 2-D?
>
> What is meant by this? Emulate np.array instead?

From ralf.gommers at gmail.com Sun Sep 22 04:44:19 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 22 Sep 2013 10:44:19 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sun, Sep 22, 2013 at 1:28 AM, James Booth wrote:
> I think it's great to have some rough goal posts set out for 1.0.
> Regardless of exactly what 1.0 means, by making a giant todo
> list you've made contributing to SciPy much more appealing -
> it now feels like there is a targeted effort to tidy the whole
> project up, and I for one would like to be a part of it.

Great. This is exactly why we want to have a roadmap.

> My name is James Booth and I'm a Comp Sci PhD student.
> I'm part of a team at Imperial College London that is leaning
> heavily on Scipy for our research - I'll take a look through the list in
> detail and see if I can't help out in some way.

Looking forward to your pull requests!

Cheers,
Ralf

From ralf.gommers at gmail.com Sun Sep 22 04:52:53 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 22 Sep 2013 10:52:53 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sun, Sep 22, 2013 at 7:52 AM, Christopher Jordan-Squire wrote:
> For scipy stats, is there anything on the table regarding somehow
> unifying the sampling in numpy.random and the distributions in
> scipy.stats? I'm specifically thinking of two issues:
>
> (1) There's a lot of duplication between numpy.random and scipy.stats
> but with different interfaces. This seems like something that ideally
> would be reduced.

numpy.random only provides sampling and only has about half the distributions of scipy.stats. Sampling is really only a small part of what scipy.stats provides (pdf, cdf, moments, fitting a distribution, etc.). So I'm not bothered by that duplication. If we'd want to reduce it I think it would have to be removed from numpy, which doesn't sound like a good idea.

> (2) The interface for the distributions in scipy.stats seems to
> explicitly be for scalar random variables, so there are no multivariate
> normals, multinomials, dirichlet, wishart, etc. Instead the sampling
> is in numpy.random, and pdf's aren't there.
Two days ago PR-2726 was merged, which adds a multivariate normal distribution. Others can be added. IIRC there has been an enhancement ticket for wishart, and there's a Python implementation floating around somewhere.

Cheers,
Ralf

> Has this been discussed elsewhere?

From josef.pktd at gmail.com Sun Sep 22 06:46:31 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 22 Sep 2013 06:46:31 -0400
Subject: [SciPy-Dev] Scipy 1.0 roadmap - stats
Message-ID:

On Sun, Sep 22, 2013 at 4:52 AM, Ralf Gommers wrote:
> On Sun, Sep 22, 2013 at 7:52 AM, Christopher Jordan-Squire wrote:
>> (1) There's a lot of duplication between numpy.random and scipy.stats
>> but with different interfaces. This seems like something that ideally
>> would be reduced.
>
> numpy.random only provides sampling and only has about half the
> distributions of scipy.stats. Sampling is really only a small part of what
> scipy.stats provides (pdf, cdf, moments, fitting a distribution, etc.). So
> I'm not bothered by that duplication. If we'd want to reduce it I think it
> would have to be removed from numpy, which doesn't sound like a good idea.

There is no code duplication, so it also never bothered me.

The scipy.stats distributions have quite a bit more overhead than numpy.random if you call random number generation repeatedly instead of requesting one big array.

There are some naming inconsistencies between scipy.stats and numpy.random, but I never looked systematically for that, and there is no open issue.

The distributions in scipy.stats have restrictions on the choice of parameterization because of the generic use of loc and scale.

One issue that should be added to the roadmap is fixing the broadcasting of loc and scale in the scipy random numbers. I don't see a way to fix this in a backwards compatible way.

---

Some functions like nanmean, nanstd and others can be removed from scipy.stats because they will be available in numpy (once scipy requires a minimum version of numpy that contains them).

>> (2) The interface for the distributions in scipy.stats seems to
>> explicitly be for scalar random variables, so there are no multivariate
>> normals, multinomials, dirichlet, wishart, etc. Instead the sampling
>> is in numpy.random, and pdf's aren't there.
>
> Two days ago PR-2726 was merged, which adds a multivariate normal
> distribution. Others can be added. IIRC there has been an enhancement
> ticket for wishart, and there's a Python implementation floating around
> somewhere.

>> Has this been discussed elsewhere?

No, all the discussions for scipy.stats are in GitHub issues (including PRs) or on the mailing list.

Josef
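For anyone who wants to try the multivariate normal mentioned above, usage on a scipy build that contains PR-2726 is roughly as follows (a minimal sketch):

    import numpy as np
    from scipy.stats import multivariate_normal

    mean = np.zeros(2)
    cov = np.array([[1.0, 0.3],
                    [0.3, 2.0]])

    # Density at a point, and a handful of samples.
    print(multivariate_normal.pdf([0.5, -0.2], mean=mean, cov=cov))
    samples = multivariate_normal.rvs(mean=mean, cov=cov, size=5)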
From josef.pktd at gmail.com Sun Sep 22 07:32:48 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 22 Sep 2013 07:32:48 -0400
Subject: [SciPy-Dev] how to turn internal names into API and freeze them forever
Message-ID:

The scipy.stats.distributions now allow keyword access to the shape parameters.

Before, the names of shape parameters were just part of the signature but couldn't be used in code; only positional arguments were possible.

Now, users can actually use them in their code.

hypergeom uses `M,n,N`, where I usually need to read the description for several minutes to remember what they are.

Most distributions use one-letter parameters. Since I just checked: numpy.random.triangular uses `mode`, scipy's triang just uses `c`.

Josef

From evgeny.burovskiy at gmail.com Sun Sep 22 08:17:01 2013
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Sun, 22 Sep 2013 13:17:01 +0100
Subject: [SciPy-Dev] how to turn internal names into API and freeze them forever
In-Reply-To: References: Message-ID:

Josef,

Add your comment on good-bad-tot and the wikipedia entry to the docstring maybe?

Meanwhile, the named args have only been added in 0.13, which is still in beta. It might not be too late to change the names.

On Sun, Sep 22, 2013 at 12:32 PM, wrote:
> The scipy.stats.distributions now allow keyword access to the shape
> parameters.
>
> hypergeom uses `M,n,N`, where I usually need to read the description
> for several minutes to remember what they are.
>
> Josef

From pav at iki.fi Sun Sep 22 11:25:23 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 22 Sep 2013 18:25:23 +0300
Subject: [SciPy-Dev] Scipy 1.0 roadmap - sparse
In-Reply-To: References: Message-ID:

22.09.2013 06:03, Blake Griffith kirjoitti:
>> sparse
>> ``````
>> Don't emulate np.matrix behavior, drop 2-D?
>
> What is meant by this? Emulate np.array instead?

Yes, having something like `csr_array` instead of/in addition to `csr_matrix`. I haven't completely thought out how useful this would be in the long run.
-- Pauli Virtanen

From suryak at ieee.org Sun Sep 22 13:51:22 2013
From: suryak at ieee.org (Surya Kasturi)
Date: Sun, 22 Sep 2013 23:21:22 +0530
Subject: [SciPy-Dev] Voting functionality on scipy central
In-Reply-To: References: Message-ID:

On Sun, Sep 22, 2013 at 4:09 AM, Sebastián Ventura wrote:
> Personally, I believe the best solution would be to have reputation be a
> field on revisions that gets copied (from the last revision) when a new
> revision is created for the same submission. But after the new revision is
> created, it is no longer linked to the past revision (so both of them can
> be voted on separately).

This idea is quite different. Could you please tell me why only the last revision should carry on its reputation? If there are 3 revisions, should the 3rd rev carry the reputation of 1 and 2, or only of 2?

I think we also need to be a bit clear about when a new revision is usually created:

1. Are new revisions made for minor changes? or
2. Are revisions made only if there is a change in the functionality of the work?

More generally, what kind of changes are usually made in revisions?

If the changes are very minor and the latest revision is almost the same as the previous one, it's totally fine to carry on the reputation. But if there were major, significant changes in the latest revision, we never know which one is better until another user votes.

Thanks
Surya
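A sketch of the copy-on-new-revision idea being discussed, assuming Django-style models roughly like SciPy Central's (the model and field names here are made up for illustration, not the actual schema):

    from django.db import models

    class Submission(models.Model):
        title = models.CharField(max_length=100)

    class Revision(models.Model):
        submission = models.ForeignKey(Submission)
        reputation = models.IntegerField(default=0)

        def save(self, *args, **kwargs):
            # A new revision starts from the previous revision's
            # reputation; afterwards, votes apply to each revision
            # independently.
            if self.pk is None:
                prev = (Revision.objects
                        .filter(submission=self.submission)
                        .order_by('-id')[:1])
                if prev:
                    self.reputation = prev[0].reputation
            super(Revision, self).save(*args, **kwargs)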
From lomegor at gmail.com Sun Sep 22 14:56:20 2013
From: lomegor at gmail.com (Sebastián Ventura)
Date: Sun, 22 Sep 2013 19:56:20 +0100
Subject: [SciPy-Dev] Voting functionality on scipy central
In-Reply-To: References: Message-ID:

On 22 September 2013 18:51, Surya Kasturi wrote:
> This idea is quite different. Could you please tell me why only the last
> revision should carry on its reputation? If there are 3 revisions, should
> the 3rd rev carry the reputation of 1 and 2, or only of 2?

All revisions would have reputation, not just the last one... I don't know if that answers your question. Here's a simple example:

Laura makes a new submission, and it gets 3 upvotes; those 3 upvotes are stored on the first revision. Laura then creates a new revision for the submission; both revisions now have 3 points. Someone downvotes the first revision and upvotes the last one: now the first revision has 2 points, and the last one has 4 points. If someone now were to create a new revision for the same submission, it would have the same reputation as the last revision, 4 points. Someone again could downvote that revision, and it would have 3 points, while the first revision would still have 2 points, and the second revision 4 points.

Is it any clearer?

> I think we also need to be a bit clear about when a new revision is usually
> created:
> 1. Are new revisions made for minor changes? or
> 2. Are revisions made only if there is a change in the functionality of
> the work?
>
> If the changes are very minor and the latest revision is almost the same as
> the previous one, it's totally fine to carry on the reputation.
> But if there were major, significant changes in the latest revision, we
> never know which one is better until another user votes.

Maybe we can just add an option to copy reputation or not? I'm not even sure if that would work, just throwing it out here.

Sebastian

From cjordan1 at uw.edu Sun Sep 22 15:25:45 2013
From: cjordan1 at uw.edu (Christopher Jordan-Squire)
Date: Sun, 22 Sep 2013 12:25:45 -0700
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID:

On Sun, Sep 22, 2013 at 1:52 AM, Ralf Gommers wrote:
> On Sun, Sep 22, 2013 at 7:52 AM, Christopher Jordan-Squire wrote:
>> For scipy stats, is there anything on the table regarding somehow
>> unifying the sampling in numpy.random and the distributions in
>> scipy.stats? I'm specifically thinking of two issues:
>>
>> (1) There's a lot of duplication between numpy.random and scipy.stats
>> but with different interfaces. This seems like something that ideally
>> would be reduced.
>
> numpy.random only provides sampling and only has about half the
> distributions of scipy.stats.
> Sampling is really only a small part of what
> scipy.stats provides (pdf, cdf, moments, fitting a distribution, etc.). So
> I'm not bothered by that duplication. If we'd want to reduce it I think it
> would have to be removed from numpy, which doesn't sound like a good idea.

Yeah, I'm not about to suggest removing sampling from numpy.random. That'd be crazy.

There's still the API mismatch between the names. numpy.random, as a rule, uses the full expansion of the name, while scipy.stats, as a rule, tends to abbreviate. That often confuses me, but it's not as confusing as I first thought, since at least each is consistent.

>> (2) The interface for the distributions in scipy.stats seems to
>> explicitly be for scalar random variables, so there are no multivariate
>> normals, multinomials, dirichlet, wishart, etc. Instead the sampling
>> is in numpy.random, and pdf's aren't there.
>
> Two days ago PR-2726 was merged, which adds a multivariate normal
> distribution. Others can be added. IIRC there has been an enhancement
> ticket for wishart, and there's a Python implementation floating around
> somewhere.

A multivariate normal is a great addition. Currently, dirichlet and multinomial are the only random variables you can sample from in numpy.random that aren't in scipy.stats. My $0.02 for the scipy 1.0 roadmap is adding dirichlet and multinomial to scipy.stats, as well as wishart/inverse-wishart. Then the distributions in scipy.stats would be a superset of numpy.random, and scipy.stats would include one of the most widely used distributions currently not in it. (In addition to the implementations floating around, both scikit-learn and pymc include bits and pieces of wishart-related code.)

Also, right now you can use scipy.stats.rv_discrete to create your own discrete random variable, but only over an array of integers--so [1,2,3] rather than ['apple', 'orange', 'banana']. Which is fine, but that also means a lot of code duplication/wrapper classes for everyone who wants their random variable to be over a space of fruits rather than integers. Not sure how many people that affects, though.

Not sure if these belong on the roadmap or just as enhancement requests.

Thanks,
Chris

From boyfarrell at gmail.com Sun Sep 22 21:19:40 2013
From: boyfarrell at gmail.com (boyfarrell at gmail.com)
Date: Mon, 23 Sep 2013 10:19:40 +0900
Subject: [SciPy-Dev] Scipy 1.0 roadmap - integrate
In-Reply-To: References: Message-ID: <3A9418CC-521B-4BEA-906F-E4637AE20607@gmail.com>

Dear list,

There was some discussion recently on how to improve integrate, mostly focused on the ode solvers: http://comments.gmane.org/gmane.comp.python.scientific.devel/18150

I'll summarise the changes we want to make here:

1) Improve the interface to integrate.ode.
   - see the suggestion here, https://gist.github.com/danieljfarrell/6482713

2) Make integrate.odeint a functional wrapper around integrate.ode.
   - For the functional version we can use the MATLAB concept of 'events' to specify time points at which the solution can be exported.

3) Update the integrate.ode solvers (from Sundials via scikits.odes):
   - replace VODE with CVODE (a version written in C with many improvements)
   - add CVODES (CVODE + simultaneous corrector methods)
   - add IDA (similar to MATLAB's ode15s, also has the option of solving DAEs)

Points 1 and 2 are fairly standard, however point 3 generated lots of discussion, i.e. do we use sundials, scikits.odes, petsc4py... as the basis of the new module? petsc4py would allow solving ODEs in parallel and has a huge array of solvers. But I think in the end it was decided that scikits.odes would be a better starting point because of the lower complexity. Plus it already wraps the Sundials suite and is actively maintained.

Best wishes,

Dan

From padarn at gmail.com Mon Sep 23 00:02:06 2013
From: padarn at gmail.com (Padarn Wilson)
Date: Mon, 23 Sep 2013 14:02:06 +1000
Subject: [SciPy-Dev] Comment on scipy.interpolate.UnivariateSpline error
Message-ID:

Hi,

I (well, actually a friend) was recently trying to fit a UnivariateSpline to some data and then evaluate the derivative. The spline was generated fine and could be evaluated (it looked reasonable), but evaluating the derivative would give the error:

    File "/usr/lib/python2.7/dist-packages/scipy/interpolate/fitpack2.py", line 258, in derivatives
        raise ValueError("Error code returned by spalde: %s" % ier)
    ValueError: Error code returned by spalde: 10

It turned out that the x-array input was not in ascending order, as the documentation says is required. The exact data I was using led to non-ascending knots.

I recognise this error is not encountered if the documentation is followed, but the error is uninformative, and it seems like an easy mistake to make -- especially given the spline still looks fine otherwise.

In case anyone wants to reproduce the example, the data are:

    2.651,1.421
    2.581,1.36
    2.494,1.256
    2.398,1.186
    2.328,1.116
    2.232,1.046
    2.11,0.951
    2.041,0.881
    1.945,0.837
    1.84,0.794
    1.727,0.733
    1.631,0.689
    1.57,0.663
    1.483,0.602
    1.413,0.567
    1.291,0.515
    1.203,0.488
    1.108,0.445
    0.994,0.392
    0.898,0.349
    0.802,0.314
    0.724,0.279
    0.619,0.253
    0.523,0.218
    0.419,0.183
    0.314,0.14
    0.253,0.096
    0.157,0.078
    0.087,0.044

Thanks,
Padarn

From d.s.seljebotn at astro.uio.no Mon Sep 23 03:51:02 2013
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Mon, 23 Sep 2013 09:51:02 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: References: Message-ID: <523FF2E6.9090305@astro.uio.no>

On 09/21/2013 08:57 PM, Ralf Gommers wrote:
> fftpack
> ```````
> Needed:
>
> - solve issues with single precision: large errors, disabled for
> difficult sizes
> - fix caching bug
> - Bluestein algorithm nice to have, padding is an alternative

A battle-tested Bluestein is included in Martin Reinecke's C port of FFTPACK:

https://github.com/dagss/libfftpack

As you can see in the readme, there were a couple of changes to FFTPACK to improve accuracy for large primes.

Dag Sverre
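Returning to Padarn's UnivariateSpline report above: the workaround is simply to sort the data by x before fitting, since the fitpack routines require ascending x. A minimal sketch using the first few posted data points:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.array([2.651, 2.581, 2.494, 2.398, 2.328])  # descending, as posted
    y = np.array([1.421, 1.36, 1.256, 1.186, 1.116])

    order = np.argsort(x)                # put x into ascending order
    spl = UnivariateSpline(x[order], y[order])
    print(spl.derivatives(2.5))          # value and derivatives at x = 2.5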
From benny.malengier at gmail.com Mon Sep 23 09:17:55 2013
From: benny.malengier at gmail.com (Benny Malengier)
Date: Mon, 23 Sep 2013 13:17:55 +0000
Subject: [SciPy-Dev] Scipy 1.0 roadmap - integrate
In-Reply-To: <3A9418CC-521B-4BEA-906F-E4637AE20607@gmail.com> References: Message-ID:

2013/9/23 boyfarrell at gmail.com
> 2) Make integrate.odeint a functional wrapper around integrate.ode.
>    - For the functional version we can use the MATLAB concept of 'events'
>      to specify time points at which the solution can be exported.

I don't know that events concept. Is there something like it already in scipy? Some doc?

Sundials, and hence scikits.odes, also has event monitoring: in essence, checking when a condition is fulfilled, e.g. when a ball hits a wall, and stopping the solver at that point (as the physics and the equations will change). I think that is what was meant by 'events' in an earlier discussion on sundials. In sundials these are the RootFn functions.

I could set up a clone of scipy and integrate scikits.odes as best as possible. I'm leaning, though, towards renaming scikits.odes.ode into deq, and rewriting scipy's ode and odeint as wrappers around deq. Or would you break the ode API fully? That seems like a world of hurt for the users of ode.

The main programmer of scikits.odes (so not me, but Pavol) very much likes to circumvent the ode wrapper and instead use the wrapped sundials methods directly. To each his own, but that does raise the question: do we consider that part of the API (that is, the methods ode or deq use to set up sundials problems)? Personally, I don't see problems with it; on the other hand, documentation might become confusing.

Note that for all parallel solvers, the problem seems to be more the user, who has to write the correct functions to drive the solver, and not so much the solver itself. Once we have sundials, people can contribute to use the parallel version. I'm sure some would be interested and contribute that.

Note that once you integrate sundials, you obtain kinsol for free, which should be in the linalg package or accessible from the generic solve? If you ask ida to compute the initial condition, ida in essence calls kinsol, so you can solve nonlinear algebraic equations. It would be good to expose this in the correct scipy subpackage if it complements existing methods.
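For reference, the calling pattern of the current integrate.ode that point 1 of Dan's list aims to streamline looks roughly like this (a minimal sketch using the VODE/BDF integrator):

    from scipy.integrate import ode

    # Exponential decay dy/dt = -y, integrated step by step.
    def rhs(t, y):
        return -y

    r = ode(rhs).set_integrator('vode', method='bdf')
    r.set_initial_value(1.0, 0.0)
    while r.successful() and r.t < 5.0:
        r.integrate(r.t + 0.5)
        print(r.t, r.y)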
As a final note, scikits.odes is maintained, but I have no fixed position,
so as always, that can change quickly.

Benny

> Best wishes,
>
> Dan
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dave.hirschfeld at gmail.com Tue Sep 24 07:08:59 2013
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Tue, 24 Sep 2013 11:08:59 +0000 (UTC)
Subject: [SciPy-Dev] Scipy 1.0 roadmap
References:
Message-ID:

Ralf Gommers gmail.com> writes:
>
> Hi all,
> At EuroScipy Pauli, David and I sat together and drafted a roadmap for
> Scipy 1.0. We then discussed this offline with some of the other
> currently most active core devs, to get it into a state that's ready for
> discussion on this list. So here it is:
> https://github.com/scipy/scipy/pull/2908
>
> Our aim is for this roadmap to help guide us towards a 1.0 version,
> which will contain only code that we consider to be "of sufficient
> quality". Also, it will help to communicate to new and potential
> developers where their contributions are especially needed.
>
> In order to discuss/review this roadmap without generating a monster
> thread, I propose the following:
> - topics like "do we need a roadmap?" or "what does 1.0-ready really
>   mean?" are discussed on this thread.
> - things in the General section (API changes, documentation/test/build
>   guidelines, etc.), are discussed on this thread as well.
> - for discussion of module-specific content, start a new thread and
>   name it "1.0 roadmap: <module name>".
> - for minor things, comment on the PR.
> Cheers, Ralf
>

The spline clean up previously mentioned by charles/pauli
(http://article.gmane.org/gmane.comp.python.scientific.devel/17349) might
be nice to get in.

-Dave

From njs at pobox.com Tue Sep 24 07:28:07 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 24 Sep 2013 12:28:07 +0100
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: <523FF2E6.9090305@astro.uio.no>
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

On Mon, Sep 23, 2013 at 8:51 AM, Dag Sverre Seljebotn wrote:
> On 09/21/2013 08:57 PM, Ralf Gommers wrote:
>> fftpack
>> ```````
>> Needed:
>>
>> - solve issues with single precision: large errors, disabled for
>>   difficult sizes
>> - fix caching bug
>> - Bluestein algorithm nice to have, padding is alternative
>
> A battle-tested Bluestein is included in Martin Reinecke's C port of
> FFTPACK.
>
> https://github.com/dagss/libfftpack
>
> As you can see in the readme there were a couple of changes to FFTPACK
> to improve accuracy for large primes.

If this is nicely-licensed C code that provides a superset of
scipy.fftpack's functionality, ought we to merge it into *numpy*.fft
and deprecate scipy.fftpack? (I'm a little confused at what exactly
the difference between the numpy and scipy modules is in this case,
except that of course the scipy version needs a fortran compiler.)
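(For concreteness, the overlap and the scipy-only extras are easy to poke
at from Python -- a quick illustrative check; the division by 2*len(x)
below is just scipy.fftpack's unnormalized DCT convention:

    import numpy as np
    from scipy import fftpack

    x = np.random.rand(16)
    # the basic complex FFT lives in both and agrees:
    print(np.allclose(np.fft.fft(x), fftpack.fft(x)))              # True
    # the cosine/sine transform family exists only on the scipy side:
    y = fftpack.dct(x, type=2)
    print(np.allclose(fftpack.idct(y, type=2) / (2 * len(x)), x))  # True

numpy.fft has nothing like dct/dst at all.)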
-n

From deil.christoph at googlemail.com Tue Sep 24 09:29:45 2013
From: deil.christoph at googlemail.com (Christoph Deil)
Date: Tue, 24 Sep 2013 15:29:45 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

On Sep 24, 2013, at 1:28 PM, Nathaniel Smith wrote:

> On Mon, Sep 23, 2013 at 8:51 AM, Dag Sverre Seljebotn wrote:
>> On 09/21/2013 08:57 PM, Ralf Gommers wrote:
>>> fftpack
>>> ```````
>>> Needed:
>>>
>>> - solve issues with single precision: large errors, disabled for
>>>   difficult sizes
>>> - fix caching bug
>>> - Bluestein algorithm nice to have, padding is alternative
>>
>> A battle-tested Bluestein is included in Martin Reinecke's C port of
>> FFTPACK.
>>
>> https://github.com/dagss/libfftpack
>>
>> As you can see in the readme there were a couple of changes to FFTPACK
>> to improve accuracy for large primes.
>
> If this is nicely-licensed C code that provides a superset of
> scipy.fftpack's functionality, ought we to merge it into *numpy*.fft
> and deprecate scipy.fftpack? (I'm a little confused at what exactly
> the difference between the numpy and scipy modules is in this case,
> except that of course the scipy version needs a fortran compiler.)
>
> -n

On that note -- a thought for scipy 1.0:

Does scipy absolutely need Fortran in general?
Or are there equivalent C or C++ packages that might be used instead?

Getting rid of Fortran code would simplify life for Mac users (and maybe
Windows or even Linux). I don't really know how much Fortran is used in
scipy or libraries it depends on, or how much of a problem Fortran really
is. I'm just asking if getting rid of Fortran is an option even worth
considering for scipy 1.0.

Christoph

> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From robert.kern at gmail.com Tue Sep 24 09:53:23 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 24 Sep 2013 14:53:23 +0100
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

On Tue, Sep 24, 2013 at 12:28 PM, Nathaniel Smith wrote:
>
> On Mon, Sep 23, 2013 at 8:51 AM, Dag Sverre Seljebotn wrote:
> > On 09/21/2013 08:57 PM, Ralf Gommers wrote:
> >> fftpack
> >> ```````
> >> Needed:
> >>
> >> - solve issues with single precision: large errors, disabled for
> >>   difficult sizes
> >> - fix caching bug
> >> - Bluestein algorithm nice to have, padding is alternative
> >
> > A battle-tested Bluestein is included in Martin Reinecke's C port of
> > FFTPACK.
> >
> > https://github.com/dagss/libfftpack
> >
> > As you can see in the readme there were a couple of changes to FFTPACK
> > to improve accuracy for large primes.
>
> If this is nicely-licensed C code that provides a superset of
> scipy.fftpack's functionality, ought we to merge it into *numpy*.fft
> and deprecate scipy.fftpack?

It does not provide a superset. The FORTRAN code in scipy.fftpack does ND
transforms, the DCT and DST, and FFT-based convolutions. None of that code
*must* remain in FORTRAN, but you would have to rewrite it all in C using
the new libfftpack C code underneath. It's not a matter of "merging", I'm
afraid, but significant rewriting.

> (I'm a little confused at what exactly
> the difference between the numpy and scipy modules is in this case,
> except that of course the scipy version needs a fortran compiler.)
At one time, scipy.fftpack supported several different backends like FFTW,
not just the FORTRAN FFTPACK. They have since been dropped because they
added more difficulties to the build process than we gained in performance.

--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Tue Sep 24 09:56:08 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 24 Sep 2013 14:56:08 +0100
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

On Tue, Sep 24, 2013 at 2:29 PM, Christoph Deil <
deil.christoph at googlemail.com> wrote:

> On that note -- a thought for scipy 1.0:
>
> Does scipy absolutely need Fortran in general?
> Or are there equivalent C or C++ packages that might be used instead?

Not really. Not with BSD-like licenses, at any rate.

> Getting rid of Fortran code would simplify life for Mac users (and maybe
> Windows or even Linux). I don't really know how much Fortran is used in
> scipy or libraries it depends on, or how much of a problem Fortran
> really is. I'm just asking if getting rid of Fortran is an option even
> worth considering for scipy 1.0.

scipy 2.0, maybe. It will be a *lot* of work to find and write replacements
for everything, not to mention that the replacements will probably not work
the same as the current code.

--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Tue Sep 24 13:56:48 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 24 Sep 2013 20:56:48 +0300
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

24.09.2013 16:29, Christoph Deil wrote:
[clip]
> On that note -- a thought for scipy 1.0:
>
> Does scipy absolutely need Fortran in general?
> Or are there equivalent C or C++ packages that might be used instead?

Sounds unrealistic to me. Compiler issues alone definitely won't justify
the work that would be involved in this. Scipy can be compiled both on OSX
and Windows, if you have a clue what you are doing.

--
Pauli Virtanen

From cournape at gmail.com Tue Sep 24 14:05:33 2013
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 24 Sep 2013 19:05:33 +0100
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID:

On Tue, Sep 24, 2013 at 2:29 PM, Christoph Deil <
deil.christoph at googlemail.com> wrote:

>
> On Sep 24, 2013, at 1:28 PM, Nathaniel Smith wrote:
>
> > On Mon, Sep 23, 2013 at 8:51 AM, Dag Sverre Seljebotn wrote:
> >> On 09/21/2013 08:57 PM, Ralf Gommers wrote:
> >>> fftpack
> >>> ```````
> >>> Needed:
> >>>
> >>> - solve issues with single precision: large errors, disabled for
> >>>   difficult sizes
> >>> - fix caching bug
> >>> - Bluestein algorithm nice to have, padding is alternative
> >>
> >> A battle-tested Bluestein is included in Martin Reinecke's C port of
> >> FFTPACK.
> >>
> >> https://github.com/dagss/libfftpack
> >>
> >> As you can see in the readme there were a couple of changes to FFTPACK
> >> to improve accuracy for large primes.
> >
> > If this is nicely-licensed C code that provides a superset of
> > scipy.fftpack's functionality, ought we to merge it into *numpy*.fft
> > and deprecate scipy.fftpack? (I'm a little confused at what exactly
> > the difference between the numpy and scipy modules is in this case,
> > except that of course the scipy version needs a fortran compiler.)
> >
> > -n
>
> On that note -- a thought for scipy 1.0:
>
> Does scipy absolutely need Fortran in general?
> Or are there equivalent C or C++ packages that might be used instead?
>
> Getting rid of Fortran code would simplify life for Mac users (and maybe
> Windows or even Linux). I don't really know how much Fortran is used in
> scipy or libraries it depends on, or how much of a problem Fortran
> really is. I'm just asking if getting rid of Fortran is an option even
> worth considering for scipy 1.0.

scipy contains approximately 100k LOC of often non-trivial code. Much of
it most likely doesn't have an equivalent in any other language (ARPACK,
specfunc, etc...), and few people both have the domain expertise and the
will to port those things. That + the fact that fortran is often more
appropriate than C/C++ anyway for numeric work.

The issue of fortran on mac is partly Apple's fault for screwing up their
libraries BTW.

David

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From d.s.seljebotn at astro.uio.no Tue Sep 24 16:07:53 2013
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Tue, 24 Sep 2013 22:07:53 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To:
References: <523FF2E6.9090305@astro.uio.no>
Message-ID: <5241F119.5020103@astro.uio.no>

On 09/24/2013 03:53 PM, Robert Kern wrote:
> On Tue, Sep 24, 2013 at 12:28 PM, Nathaniel Smith wrote:
> >
> > On Mon, Sep 23, 2013 at 8:51 AM, Dag Sverre Seljebotn wrote:
> > > On 09/21/2013 08:57 PM, Ralf Gommers wrote:
> > >> fftpack
> > >> ```````
> > >> Needed:
> > >>
> > >> - solve issues with single precision: large errors, disabled for
> > >>   difficult sizes
> > >> - fix caching bug
> > >> - Bluestein algorithm nice to have, padding is alternative
> > >
> > > A battle-tested Bluestein is included in Martin Reinecke's C port of
> > > FFTPACK.
> > >
> > > https://github.com/dagss/libfftpack
> > >
> > > As you can see in the readme there were a couple of changes to
> > > FFTPACK to improve accuracy for large primes.
> >
> > If this is nicely-licensed C code that provides a superset of
> > scipy.fftpack's functionality, ought we to merge it into *numpy*.fft
> > and deprecate scipy.fftpack?
>
> It does not provide a superset. The FORTRAN code in scipy.fftpack does
> ND transforms, the DCT and DST, and FFT-based convolutions. None of that
> code *must* remain in FORTRAN, but you would have to rewrite it all in C
> using the new libfftpack C code underneath. It's not a matter of
> "merging", I'm afraid, but significant rewriting.

It seems to me all the FORTRAN code there is plain vanilla Netlib FFTPACK;
the extra ND and convolutions etc. are in C?

But indeed, it seems Martin's C port doesn't include the DCT and DST parts
of FFTPACK, only the complex and real FFT. That makes it not completely
trivial, but I wouldn't say "significant rewrite" is accurate either; it's
just about making the C port of FFTPACK complete.

Martin (on CC), how long did it take to port the complex/real FFTs from
Fortran to C?

Dag Sverre

From d.s.seljebotn at astro.uio.no Tue Sep 24 16:33:06 2013
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Tue, 24 Sep 2013 22:33:06 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap
In-Reply-To: <5241F119.5020103@astro.uio.no>
References: <523FF2E6.9090305@astro.uio.no> <5241F119.5020103@astro.uio.no>
Message-ID: <5241F702.1040802@astro.uio.no>

[Forwarding from Martin who's not subscribed.]

Hi all!
> It seems to me all the FORTRAN code there is plain vanilla Netlib
> FFTPACK; the extra ND and convolutions etc. are in C?
>
> But indeed, it seems Martin's C port doesn't include the DCT and DST
> parts of FFTPACK, only the complex and real FFT. That makes it not
> completely trivial, but I wouldn't say "significant rewrite" is accurate
> either; it's just about making the C port of FFTPACK complete.
>
> Martin (on CC), how long did it take to port the complex/real FFTs from
> Fortran to C?

Difficult to say, I spent much more time tweaking and polishing the code
after porting, and I also had a template C implementation to begin with ...

But please have a look at http://www.netlib.org/fftpack/fft.c

This is a C port by Christopher Montgomery (of Ogg Vorbis/Opus fame), which
he put into public domain. This file should have DCTs/DSTs (single
precision only, but changing that should be trivial).

Hope this helps,
Martin

From arnd.baecker at web.de Wed Sep 25 12:18:15 2013
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 25 Sep 2013 18:18:15 +0200 (CEST)
Subject: [SciPy-Dev] 1.0 roadmap: weave
Message-ID:

Hi,

> On Sat, Sep 21, 2013 at 8:54 PM, Ralf Gommers wrote:
[...]
> weave
> `````
> This is the only module that was not ported to Python 3. Effectively
> it's deprecated (not recommended to use for new code). In the future it
> should be removed from scipy (can be made into a separate module).

Let me just mention that I am worried about the removal of weave: a
substantial amount of code developed in our group over the years uses
weave. Porting this to cython seems non-trivial, in particular because
many of the original authors (diploma students, PhD students, ...) have
left academia by now.

Well, as porting weave to Python 3 is way out of my capabilities, this is
just a comment in the hope that someone could give this a try (if at some
point testing would be needed, I could try to help, of course).

Best, Arnd

From robert.kern at gmail.com Wed Sep 25 12:32:01 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 25 Sep 2013 17:32:01 +0100
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

On Wed, Sep 25, 2013 at 5:18 PM, Arnd Baecker wrote:
>
> Hi,
>
>> On Sat, Sep 21, 2013 at 8:54 PM, Ralf Gommers wrote:
>
> [...]
>
>> weave
>> `````
>> This is the only module that was not ported to Python 3. Effectively
>> it's deprecated (not recommended to use for new code). In the future it
>> should be removed from scipy (can be made into a separate module).
>
> Let me just mention that I am worried about the removal of weave: a
> substantial amount of code developed in our group over the years uses
> weave. Porting this to cython seems non-trivial, in particular because
> many of the original authors (diploma students, PhD students, ...) have
> left academia by now.
>
> Well, as porting weave to Python 3 is way out of my capabilities, this
> is just a comment in the hope that someone could give this a try (if at
> some point testing would be needed, I could try to help, of course).

Would it be within your capabilities to volunteer to maintain it as a
separate package outside of scipy?

--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cevans at evanslabs.org Wed Sep 25 15:01:07 2013
From: cevans at evanslabs.org (Constantine Evans)
Date: Wed, 25 Sep 2013 12:01:07 -0700
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

I can at least describe our case of porting away from weave.
In order to improve portability and stability when releasing our software,
we ported our inline C weave code to normal Python C modules. This only
required, at least in our case, separating out the C portions of our code
into their own functions and adding some short boilerplate C code for
dealing with parameters and output. There was no need to modify the
already-written C code beyond placing it into separate files. If I
understand the mechanisms behind weave correctly, this is essentially what
weave.inline does automatically.

Perhaps documentation of this porting method, maybe in the SciPy tutorial
to replace the weave section, might be useful.

This of course would not work for porting code that uses weave.blitz, but
blitz involves such small expressions that porting to pure C wouldn't
necessarily seem too tricky (so long as blitz arrays aren't being used).

Regards,
Constantine

On Wed, Sep 25, 2013 at 9:18 AM, Arnd Baecker wrote:

> Hi,
>
>> On Sat, Sep 21, 2013 at 8:54 PM, Ralf Gommers wrote:
>
> [...]
>
>> weave
>> `````
>> This is the only module that was not ported to Python 3. Effectively
>> it's deprecated (not recommended to use for new code). In the future it
>> should be removed from scipy (can be made into a separate module).
>
> Let me just mention that I am worried about the removal of weave: a
> substantial amount of code developed in our group over the years uses
> weave. Porting this to cython seems non-trivial, in particular because
> many of the original authors (diploma students, PhD students, ...) have
> left academia by now.
>
> Well, as porting weave to Python 3 is way out of my capabilities, this
> is just a comment in the hope that someone could give this a try (if at
> some point testing would be needed, I could try to help, of course).
>
> Best, Arnd
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Wed Sep 25 15:23:25 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 25 Sep 2013 22:23:25 +0300
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

Hi,

25.09.2013 19:18, Arnd Baecker wrote:
[clip]
> Let me just mention that I am worried about the removal of weave: a
> substantial amount of code developed in our group over the years uses
> weave. Porting this to cython seems non-trivial, in particular because
> many of the original authors (diploma students, PhD students, ...) have
> left academia by now.

Note that weave is not going to vanish even if it happens that nobody steps
up and it's removed from Scipy. Rather, it will simply be split out to a
separate Python package --- which however will likely not receive further
attention from our side.

In your own code, it should then be enough to change

    import scipy.weave

to something like

    import weave

and make sure the `weave` package is installed locally.

--
Pauli Virtanen

From arnd.baecker at web.de Thu Sep 26 04:05:50 2013
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 26 Sep 2013 10:05:50 +0200 (CEST)
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

Hi,

thanks for all the replies! It seems that Constantine's approach of
transforming weave code into normal Python C modules would apply also to
our situation (only weave.inline being used).
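For concreteness, our usage is of this flavour (a made-up toy, not our real
code -- row_sums and its variables are invented for illustration):

    import numpy as np
    from scipy import weave

    def row_sums(a):
        # the C string is compiled on first call; the blitz converters
        # expose the numpy arrays to C with a(i, j)-style indexing
        n, m = a.shape
        out = np.zeros(n)
        code = """
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                out(i) += a(i, j);
        """
        weave.inline(code, ['a', 'out', 'n', 'm'],
                     type_converters=weave.converters.blitz)
        return out

Under Constantine's scheme, the C between the triple quotes would move
verbatim into a function in a hand-written extension module, with only the
argument-unpacking boilerplate around it being new.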
Even if the current weave will be split out from scipy and can be easily
used as Pauli pointed out, at some point one needs to make the transition
to Python 3. If weave is deprecated (even if someone makes the effort to
port it to Python 3), in the long run it seems to make more sense to
convert the code to either modules or cython.

Of course I do understand the reasons to deprecate weave. Personally, I
only use cython for new code and Numba looks extremely promising. So surely
the question is whether porting weave is worth the needed effort (as I
said, I have no idea how much work is necessary for this).

Finally, to answer Robert's question whether I could volunteer to maintain
weave as a separate package: not sure - maybe with some helping hands
(currently I have no clue what this would require) it might be possible.
The important aspect is the long-term perspective: How many people would be
interested in this and maybe even in a python 3 port and actively use weave
also for new code, or is the general impression that the more modern tools
(cython, ...) should be used?

Best, Arnd

From davidmenhur at gmail.com Thu Sep 26 05:33:51 2013
From: davidmenhur at gmail.com (Daπid)
Date: Thu, 26 Sep 2013 11:33:51 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap - wavelets
Message-ID:

On 21 September 2013 20:57, Ralf Gommers wrote:

> *Wavelets*: add proper wavelets, including discrete wavelet transform.
> What's there now doesn't make much sense.

There is a package on Wavelets under MIT license, but it looks abandoned
for more than a year. Perhaps it could be reintegrated into Scipy, or
perhaps it would be better to create a scikit-wavelets while we get
something of enough quality.

http://www.pybytes.com/pywavelets

I can step in, if someone helps and guides me.

/David.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Thu Sep 26 08:05:06 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 26 Sep 2013 12:05:06 +0000 (UTC)
Subject: [SciPy-Dev] 1.0 roadmap: weave
References:
Message-ID:

Arnd Baecker web.de> writes:
[clip]
> Of course I do understand the reasons to deprecate weave.
> Personally, I only use cython for new code and Numba looks
> extremely promising. So surely the question is whether porting weave is
> worth the needed effort (as I said, I have no idea how much work is
> necessary for this).

If you drop support for some features that relate to stuff in which the C
API changed a lot in Python 3, such as passing in file pointers, porting
weave to Python 3 is probably doable without too much trouble.

Getting the basics working in my estimation is not hard, the trouble is in
the corners.

--
Pauli Virtanen

From pav at iki.fi Thu Sep 26 08:12:12 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 26 Sep 2013 12:12:12 +0000 (UTC)
Subject: [SciPy-Dev] Scipy 1.0 roadmap - wavelets
References:
Message-ID:

Daπid gmail.com> writes:
[clip]
> http://www.pybytes.com/pywavelets
>
> I can step in, if someone helps and guides me.

I think Ralf has already gone almost the whole way in integrating this.
I'm sure he could use some help, though.

@Ralf: was your current code somewhere available?
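For list members who haven't touched wavelets: the core operation pywt
provides is the discrete wavelet transform, which splits a signal into a
coarse approximation plus detail coefficients. One Haar level is a few
lines of numpy -- an illustrative sketch (assuming an even-length input; it
should match pywt's 'haar' output):

    import numpy as np
    import pywt

    def haar_level(x):
        # pairwise scaled sums (approximation) and differences (detail)
        x = np.asarray(x, dtype=float)
        s = 1.0 / np.sqrt(2.0)
        return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

    x = np.random.rand(64)
    ca, cd = pywt.dwt(x, 'haar')
    print(np.allclose(ca, haar_level(x)[0]), np.allclose(cd, haar_level(x)[1]))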
--
Pauli Virtanen

From pav at iki.fi Thu Sep 26 08:09:55 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 26 Sep 2013 12:09:55 +0000 (UTC)
Subject: [SciPy-Dev] 1.0 roadmap: weave
References:
Message-ID:

Arnd Baecker web.de> writes:
[clip]
> Finally, to answer Robert's question whether I could volunteer
> to maintain weave as a separate package:
> not sure - maybe with some helping hands (currently I have no clue what
> this would require) it might be possible.
> The important aspect is the long-term perspective: How many people would
> be interested in this and maybe even in a python 3 port and actively use
> weave also for new code, or is the general impression that the more
> modern tools (cython, ...) should be used?

Also, let's say that if someone with personal interest in keeping it
working addresses the issues within the next few years, then the pressure
of splitting it out decreases quite a lot. In the big picture, weave is a
relatively "mature" code base, and keeping it working is probably not too
big a job apart from the Py3 port.

--
Pauli Virtanen

From toddrjen at gmail.com Thu Sep 26 09:11:38 2013
From: toddrjen at gmail.com (Todd)
Date: Thu, 26 Sep 2013 15:11:38 +0200
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

On Thu, Sep 26, 2013 at 2:05 PM, Pauli Virtanen wrote:

> Arnd Baecker web.de> writes:
> [clip]
> > Of course I do understand the reasons to deprecate weave.
> > Personally, I only use cython for new code and Numba looks
> > extremely promising. So surely the question is whether porting weave
> > is worth the needed effort (as I said, I have no idea how much work is
> > necessary for this).
>
> If you drop support for some features that relate to stuff in which
> the C API changed a lot in Python 3, such as passing in file pointers,
> porting weave to Python 3 is probably doable without too much trouble.
>
> Getting the basics working in my estimation is not hard, the trouble
> is in the corners.

How much work is porting to python 3 when using weave? If someone is
porting to python 3 already, would porting away from weave at the same
time add very much additional work?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Thu Sep 26 12:02:55 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 26 Sep 2013 16:02:55 +0000 (UTC)
Subject: [SciPy-Dev] 1.0 roadmap: weave
References:
Message-ID:

Todd gmail.com> writes:
[clip]
> How much work is porting to python 3 when using weave?
> If someone is porting to python 3 already, would porting
> away from weave at the same time add very much additional work?

Depends. I would expect little to no changes required in most cases.

--
Pauli Virtanen

From ralf.gommers at gmail.com Thu Sep 26 13:24:30 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 26 Sep 2013 19:24:30 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap - wavelets
In-Reply-To:
References:
Message-ID:

On Thu, Sep 26, 2013 at 2:12 PM, Pauli Virtanen wrote:

> Daπid gmail.com> writes:
> [clip]
> > http://www.pybytes.com/pywavelets
> >
> > I can step in, if someone helps and guides me.
>
> I think Ralf has already gone almost the whole way in integrating
> this. I'm sure he could use some help, though.
>
> @Ralf: was your current code somewhere available?

Yes, https://github.com/rgommers/pywt. That turned into a slightly larger
refactoring (or rewrite) than I had planned. Next step is to finish
converting the doctests in doc/source/regression/ to proper unit tests.
After that, the issues found need to be fixed. I've opened and closed
issues for TODOs, https://github.com/rgommers/pywt/issues.

The idea is to finish the rewrite without breaking the API, then submit
that back to upstream. I'm not sure whether or not that's still being
actively developed, but it's the right thing to do. After that it can be
converted to a scipy.signal subpackage easily, and only at that point I
want to think about cleaning up the API where needed.

David, your help is very welcome!

Cheers,
Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From davidmenhur at gmail.com Fri Sep 27 04:29:39 2013
From: davidmenhur at gmail.com (Daπid)
Date: Fri, 27 Sep 2013 10:29:39 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap - wavelets
In-Reply-To:
References:
Message-ID:

On 26 September 2013 19:24, Ralf Gommers wrote:

> David, your help is very welcome!

Ok, I am forking the repository. Any suggestion on where to start? Looking
at the open issues, I think the most important is creating the tests. What
is your status?

BTW, installing pywt on Linux I get:

    >>> import pywt
    Traceback (most recent call last):
      File "", line 1, in
      File "/usr/lib64/python2.7/site-packages/PyWavelets-0.2.2-py2.7-linux-x86_64.egg/pywt/__init__.py",
      line 15, in
        from ._pywt import *
      File "_pywt.pyx", line 24, in init pywt._pywt
      (/home/david/gits/pywt/src/_pywt.c:16572)
    ImportError: No module named tools.six

But it is not in the requirements.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Fri Sep 27 12:33:56 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 27 Sep 2013 17:33:56 +0100
Subject: [SciPy-Dev] Baffling error: ndarray += csc_matrix -> "ValueError:
 setting an array element with a sequence"
In-Reply-To:
References:
Message-ID:

[CC'ing scipy-dev because see below]

On Tue, Sep 24, 2013 at 6:45 PM, Nathaniel Smith wrote:
> Hi all,
>
> I'm getting a very strange error, and unfortunately I can't seem to
> reproduce it *except* when running on Travis-CI, so my debugging tools
> are really limited. I'm wondering if anyone else has seen anything
> like this?
>
> The offending line of code is:
>
>   File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pyrerp/incremental_ls.py",
>   line 323, in append_bottom_half
>     self.xtx += xtx
>   ValueError: setting an array element with a sequence.
>
> And debug prints reveal that in the case that causes the error,
> 'self.xtx' is an ndarray that prints as:
>
>   [[ 0.  0.]
>    [ 0.  0.]]
>
> while 'xtx' is a scipy.sparse.csc.csc_matrix that prints as:
>
>   (1, 0)  45.0
>   (0, 0)  10.0
>   (1, 1)  285.0
>   (0, 1)  45.0

In accordance with the cosmic law governing such things, this turns out to
be a bug in numpy that I introduced in 2397c9d4, specifically this line,
which introduces an unchecked DEPRECATE:

https://github.com/numpy/numpy/commit/2397c9d4#L5R528

(Plus some complicated nonsense involving virtualenvs that sometimes fall
back on system libraries even though the virtualenv contains a perfectly
good version of the same library etc. to make reproduction more confusing.)

My test code was setting warnings to raise errors by default, and
apparently ndarray += csc_matrix goes through some circuitous path that
(AFAICT) involves creating an object ndarray containing the csc_matrix and
calling the add ufunc, which somehow trips on the cast warning above.
Then later on, at line 159 of arraytypes.c.src, the @TYPE@_setitem function
does:

    if (PyErr_Occurred()) {
        if (PySequence_Check(op) &&
                !PyString_Check(op) && !PyUnicode_Check(op)) {
            PyErr_Clear();
            PyErr_SetString(PyExc_ValueError,
                            "setting an array element with a sequence.");
        }
        return -1;
    }

and the PyErr_Occurred here catches the real error and replaces it with
some nonsense.

So three points:

1) Maybe this will be useful to someone googling later.

2) I guess I'll file some horrible patch against numpy that just throws
away exceptions caused by that deprecation, because there is no way for
PyArray_CanCastTypeTo to raise an error :-(.

3) This script raises a DeprecationWarning with numpy 1.7.1 and scipy
0.12.0:

    import warnings, numpy, scipy.sparse
    warnings.filterwarnings("always")
    a = numpy.zeros((2, 2))
    b = scipy.sparse.csc_matrix([[0.0, 0.0], [0.0, 0.0]])
    a += b

    test-script3.py:5: DeprecationWarning: Implicitly casting between
    incompatible kinds. In a future numpy release, this will raise an
    error. Use casting="unsafe" if this is intentional.

I really don't understand what arcane magic is used to make ndarray +=
csc_matrix work at all, but my question is, is it going to break when we
complete the casting transition described above? It was just supposed to
catch things like int += float.

-n

From njs at pobox.com Fri Sep 27 15:15:37 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 27 Sep 2013 20:15:37 +0100
Subject: [SciPy-Dev] [Numpy-discussion] Baffling error: ndarray +=
 csc_matrix -> "ValueError: setting an array element with a sequence"
In-Reply-To:
References:
Message-ID:

On Fri, Sep 27, 2013 at 7:34 PM, Pauli Virtanen wrote:
> 27.09.2013 19:33, Nathaniel Smith wrote:
> [clip]
>> I really don't understand what arcane magic is used to make ndarray +=
>> csc_matrix work at all, but my question is, is it going to break when
>> we complete the casting transition described above? It was just
>> supposed to catch things like int += float.
>
> This maybe clarifies it:
>
>     >>> import numpy
>     >>> import scipy.sparse
>     >>> x = numpy.ones((2,2))
>     >>> y = scipy.sparse.csr_matrix(x)
>     >>> z = x
>     >>> z += y
>     >>> x
>     array([[ 1.,  1.],
>            [ 1.,  1.]])
>     >>> z
>     matrix([[ 2.,  2.],
>             [ 2.,  2.]])
>
> The execution flows like this:
>
>     ndarray.__iadd__(arr, sparr)
>         np.add(arr, sparr, out=???)
>             return NotImplemented  # wtf
>         return NotImplemented
>     Python does arr = sparr.__radd__(arr)
>
> Since Scipy master sparse matrices now have __numpy_ufunc__, but it
> doesn't handle out= arguments, the second step currently raises a
> TypeError (for Numpy master + Scipy master).
>
> And this is actually the correct thing to do, as having np.add return
> NotImplemented is just broken. Only ndarray.__iadd__ has the authority
> to return the NotImplemented.
>
> To make the in-place ops work again, it seems Numpy needs some
> additional fixes in its binary op machinery, before __numpy_ufunc__
> business works fully as intended. Namely, the binary op routines will
> need to catch TypeErrors and convert them to NotImplemented.
>
> The code paths where Numpy ufuncs currently return NotImplemented could
> also be changed to raise TypeErrors, but I'm not sure if someone
> somewhere relies on this behavior (I hope not).

Okay, so I see three separate issues:

1) My original concern, that the upcoming casting change for in-place
operations will cause some horrible interaction.
Tentatively this seems like it might be okay, since even after the "cast"
succeeds, np.add is still just refusing to do the operation, so hopefully
we can set it up so that it will continue to fail once the casting rule
becomes more strict.

2) The issue that ufuncs return NotImplemented and it makes baby Guido
cry. This is completely broken, agreed. Not sure when someone will get
around to clearing this stuff up.

3) The issue of how to make an in-place operation like ndarray += sparse
continue to work in the brave new __numpy_ufunc__ world.

For this last issue, I think we disagree. It seems to me that the right
answer is that csc_matrix.__numpy_ufunc__ needs to step up and start
supporting out=! If I have a large dense ndarray and I try to += a sparse
array to it, this operation should take no temporary memory and nnz time.
Right now it sounds like it actually copies the large dense ndarray, which
takes time and space proportional to its size. AFAICT the only way to
avoid that is for scipy.sparse to implement out=. It shouldn't be that
hard...?

-n

From njs at pobox.com Fri Sep 27 16:33:24 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 27 Sep 2013 21:33:24 +0100
Subject: [SciPy-Dev] [Numpy-discussion] Baffling error: ndarray +=
 csc_matrix -> "ValueError: setting an array element with a sequence"
In-Reply-To:
References:
Message-ID:

On Fri, Sep 27, 2013 at 8:27 PM, Pauli Virtanen wrote:
> 27.09.2013 22:15, Nathaniel Smith wrote:
> [clip]
>> 3) The issue of how to make an in-place operation like ndarray +=
>> sparse continue to work in the brave new __numpy_ufunc__ world.
>>
>> For this last issue, I think we disagree. It seems to me that the
>> right answer is that csc_matrix.__numpy_ufunc__ needs to step up and
>> start supporting out=! If I have a large dense ndarray and I try to +=
>> a sparse array to it, this operation should take no temporary memory
>> and nnz time. Right now it sounds like it actually copies the large
>> dense ndarray, which takes time and space proportional to its size.
>> AFAICT the only way to avoid that is for scipy.sparse to implement
>> out=. It shouldn't be that hard...?
>
> Sure, scipy.sparse can easily support also the output argument.

Great! I guess solving this somehow will be release-critical, to avoid a
regression in this case when __numpy_ufunc__ gets released. If the
solution should be in scipy, I guess we should file the bug there?

> But I still maintain that the implementation of __iadd__ in Numpy is
> wrong.

Oh yeah totally.

> What it does now is:
>
>     def __iadd__(self, other):
>         return np.add(self, other, out=self)
>
> But since we know this raises a TypeError if the input is of a type that
> cannot be dealt with, it should be:
>
>     def __iadd__(self, other):
>         try:
>             return np.add(self, other, out=self)
>         except TypeError:
>             return NotImplemented
>
> Of course, it's written in C so it's a bit more painful to write this.
>
> I think this will have little performance impact, since the check would
> be only a NULL check in the inplace op methods + subsequent handling. I
> can take a look at some point at this...

I'm a little uncertain about the "swallow all TypeErrors" aspect of this
-- e.g. this could have really weird effects for object arrays, where
ufuncs may raise arbitrary user exceptions.

One possibility in the long run is to just say, if you want to override
ndarray __iadd__ or whatever, then you have to use __numpy_ufunc__. Not
really much point in having *two* sets of implementations of the
NotImplemented dance for the same operation.
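Concretely, the O(nnz) update I have in mind looks like this in pure
Python (a sketch only -- the helper name is invented, and the real thing
would live in compiled code behind __numpy_ufunc__):

    import numpy as np
    import scipy.sparse as sp

    def add_sparse_inplace(dense, sparse):
        # walk the stored entries row by row; no dense temporary is made
        m = sparse.tocsr()
        for i in range(dense.shape[0]):
            lo, hi = m.indptr[i], m.indptr[i + 1]
            dense[i, m.indices[lo:hi]] += m.data[lo:hi]
        return dense

    a = np.zeros((2, 2))
    b = sp.csc_matrix([[10.0, 45.0], [45.0, 285.0]])   # the xtx above
    add_sparse_inplace(a, b)   # what a += b ought to do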
-n

From pav at iki.fi Fri Sep 27 16:54:15 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 27 Sep 2013 23:54:15 +0300
Subject: [SciPy-Dev] [Numpy-discussion] Baffling error: ndarray +=
 csc_matrix -> "ValueError: setting an array element with a sequence"
In-Reply-To:
References:
Message-ID:

27.09.2013 23:33, Nathaniel Smith wrote:
[clip]
> Great! I guess solving this somehow will be release-critical, to avoid
> a regression in this case when __numpy_ufunc__ gets released. If the
> solution should be in scipy, I guess we should file the bug there?

It's release-critical, but the feature is added only in Numpy 1.9 and
Scipy 0.14.0, so there's several months' time to iron out the bugs here.

Scipy bug: https://github.com/scipy/scipy/issues/2938
Numpy bug: https://github.com/numpy/numpy/issues/3812

[clip]
> I'm a little uncertain about the "swallow all TypeErrors" aspect of
> this -- e.g. this could have really weird effects for object arrays,
> where ufuncs may raise arbitrary user exceptions.

A second alternative here would be to pass an additional internal-use
keyword argument to the generic ufunc that instructs it to return
NotImplemented rather than raising errors. This could also make the ufuncs
better Python citizens by stopping them from littering NotImplemented
around.

> One possibility in the long run is to just say, if you want to
> override ndarray __iadd__ or whatever, then you have to use
> __numpy_ufunc__. Not really much point in having *two* sets of
> implementations of the NotImplemented dance for the same operation.

I think __numpy_ufunc__ breaking the default Python __*__ binary op system
is a bit nasty, and would be better avoided if possible.

--
Pauli Virtanen

From ralf.gommers at gmail.com Sat Sep 28 12:28:17 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 28 Sep 2013 18:28:17 +0200
Subject: [SciPy-Dev] Scipy 1.0 roadmap - wavelets
In-Reply-To:
References:
Message-ID:

On Fri, Sep 27, 2013 at 10:29 AM, Daπid wrote:

> On 26 September 2013 19:24, Ralf Gommers wrote:
>
>> David, your help is very welcome!
>
> Ok, I am forking the repository. Any suggestion on where to start?
>
> Looking at the open issues, I think the most important is creating the
> tests. What is your status?
>
> BTW, installing pywt on Linux I get:
>
>     >>> import pywt
>     Traceback (most recent call last):
>       File "", line 1, in
>       File "/usr/lib64/python2.7/site-packages/PyWavelets-0.2.2-py2.7-linux-x86_64.egg/pywt/__init__.py",
>       line 15, in
>         from ._pywt import *
>       File "_pywt.pyx", line 24, in init pywt._pywt
>       (/home/david/gits/pywt/src/_pywt.c:16572)
>     ImportError: No module named tools.six
>
> But it is not in the requirements.

It's not an external requirement but shipped in src/pywt/tools/. Due to
setuptools weirdness the whole tests/ and tools/ dirs are missing when
installing into site-packages. I hadn't noticed because I use in-place
builds to develop. Issue at https://github.com/rgommers/pywt/issues/28.
Converting the source tree to a more standard layout for Python projects
will fix it, I'll go do that.

Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xavier.gnata at gmail.com Sun Sep 29 16:59:00 2013
From: xavier.gnata at gmail.com (xavier.gnata at gmail.com)
Date: Sun, 29 Sep 2013 22:59:00 +0200
Subject: [SciPy-Dev] ImportError: cannot import name oldnumeric
Message-ID: <52489494.5030709@gmail.com>

Hi,

I have compiled numpy1.9.0.dev-e051ff following
http://osdf.github.io/blog/numpyscipy-with-openblas-for-ubuntu-1204-second-try.html
to enable Openblas. It looks ok (run=5277 errors=0 failures=0 and Openblas
is used) but now scipy git complains like this:

"python setup.py build
Cythonizing sources
scipy/interpolate/interpnd.pyx has not changed
scipy/ndimage/src/_ni_label.pyx has not changed
scipy/stats/vonmises_cython.pyx has not changed
scipy/stats/_rank.pyx has not changed
scipy/signal/_spectral.pyx has not changed
scipy/special/_ufuncs_cxx.pyx has not changed
scipy/special/_ufuncs.pyx has not changed
scipy/cluster/_vq_rewrite.pyx has not changed
scipy/sparse/csgraph/_shortest_path.pyx has not changed
scipy/sparse/csgraph/_traversal.pyx has not changed
scipy/sparse/csgraph/_min_spanning_tree.pyx has not changed
scipy/sparse/csgraph/_tools.pyx has not changed
scipy/io/matlab/mio5_utils.pyx has not changed
scipy/io/matlab/mio_utils.pyx has not changed
scipy/io/matlab/streams.pyx has not changed
scipy/spatial/ckdtree.pyx has not changed
scipy/spatial/qhull.pyx has not changed
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown
distribution option: 'test_suite'
  warnings.warn(msg)
blas_opt_info:
blas_mkl_info:
Disabled blas_mkl_info: (MKL is None)
Disabled blas_mkl_info: (MKL is None)
  libraries mkl,vml,guide not found in []
  NOT AVAILABLE

openblas_info:
  FOUND:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    language = f77

  FOUND:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    language = f77

Traceback (most recent call last):
  File "setup.py", line 230, in
    setup_package()
  File "setup.py", line 227, in setup_package
    setup(**metadata)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/core.py",
  line 135, in setup
    config = configuration()
  File "setup.py", line 170, in configuration
    config.add_subpackage('scipy')
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 966, in add_subpackage
    caller_level = 2)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 935, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 872, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/setup.py", line 15, in configuration
    config.add_subpackage('lib')
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 966, in add_subpackage
    caller_level = 2)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 935, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 872, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/lib/setup.py", line 9, in configuration
    config.add_subpackage('blas')
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 966, in add_subpackage
    caller_level = 2)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 935, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/local/lib/python2.7/dist-packages/numpy/distutils/misc_util.py",
  line 872, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy/lib/blas/setup.py", line 22, in configuration
    from scipy._build_utils import get_g77_abi_wrappers
  File "/usr/local/src/scipy/scipy/__init__.py", line 77, in
    from numpy import oldnumeric
ImportError: cannot import name oldnumeric
"

What's going on with oldnumeric?

xavier

From ralf.gommers at gmail.com Sun Sep 29 17:01:52 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 29 Sep 2013 23:01:52 +0200
Subject: [SciPy-Dev] ImportError: cannot import name oldnumeric
In-Reply-To: <52489494.5030709@gmail.com>
References: <52489494.5030709@gmail.com>
Message-ID:

On Sun, Sep 29, 2013 at 10:59 PM, xavier.gnata at gmail.com <
xavier.gnata at gmail.com> wrote:

> Hi,
>
> I have compiled numpy1.9.0.dev-e051ff following
> http://osdf.github.io/blog/numpyscipy-with-openblas-for-ubuntu-1204-second-try.html
> to enable Openblas. It looks ok (run=5277 errors=0 failures=0 and
> Openblas is used) but now scipy git complains like this:
>
[clip]
>
> What's going on with oldnumeric?

You just picked a bad day to update numpy. See
https://github.com/numpy/numpy/pull/3638 for a 1-line workaround. Should
be fixed in numpy master soon.

Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xavier.gnata at gmail.com Sun Sep 29 18:46:07 2013
From: xavier.gnata at gmail.com (xavier.gnata at gmail.com)
Date: Mon, 30 Sep 2013 00:46:07 +0200
Subject: [SciPy-Dev] ImportError: cannot import name oldnumeric
In-Reply-To:
References: <52489494.5030709@gmail.com>
Message-ID: <5248ADAF.4090202@gmail.com>

On 29/09/2013 23:01, Ralf Gommers wrote:
> On Sun, Sep 29, 2013 at 10:59 PM, xavier.gnata at gmail.com wrote:
>
> [clip]
>
> You just picked a bad day to update numpy. See
> https://github.com/numpy/numpy/pull/3638 for a 1-line workaround.
> Should be fixed in numpy master soon.
>
> Ralf

OK sorry for the noise.

xavier

From charlesr.harris at gmail.com Mon Sep 30 11:17:14 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 30 Sep 2013 09:17:14 -0600
Subject: [SciPy-Dev] 1.8.0rc1
Message-ID:

Hi All,

NumPy 1.8.0rc1 is up now on sourceforge. The binary builds are included
except for Python 3.3 on windows, which will arrive later. Many thanks to
Ralf for the binaries, and to those who found and fixed the bugs in the
last beta. Any remaining bugs are all my fault ;) I hope this will be the
last release before final, so please test it thoroughly.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL: