From ewm at redtetrahedron.org Mon Dec 3 21:54:12 2012 From: ewm at redtetrahedron.org (Eric Moore) Date: Mon, 03 Dec 2012 21:54:12 -0500 Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions Message-ID: <50BD65D4.7050509@redtetrahedron.org> Hi, I've just submitted a pull request to add a function to compute a simple (modified) periodogram and a periodogram using Welch's method. The pull request is https://github.com/scipy/scipy/pull/373 The biggest question I have with my implementation is whether welch should support padded FFTs and detrending. But I'm happy to hear any feedback. Eric From vanderplas at astro.washington.edu Mon Dec 3 22:26:01 2012 From: vanderplas at astro.washington.edu (Jake Vanderplas) Date: Mon, 03 Dec 2012 19:26:01 -0800 Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions In-Reply-To: <50BD65D4.7050509@redtetrahedron.org> References: <50BD65D4.7050509@redtetrahedron.org> Message-ID: <50BD6D49.9000804@astro.washington.edu> Hi Eric, The contribution looks good. Have you compared your results to those of matplotlib.mlab.psd()? I believe that also implements Welch's method. Jake On 12/03/2012 06:54 PM, Eric Moore wrote: > Hi, I've just submitted a pull request to add a function to compute a > simple (modified) periodogram and a periodogram using Welch's method. > > The pull request is https://github.com/scipy/scipy/pull/373 > > The biggest question I have with my implementation is whether welch > should support padded FFTs and detrending. But I'm happy to hear any > feedback. 
> > Eric > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From josef.pktd at gmail.com Mon Dec 3 23:01:32 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 3 Dec 2012 23:01:32 -0500 Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions In-Reply-To: <50BD6D49.9000804@astro.washington.edu> References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> Message-ID: On Mon, Dec 3, 2012 at 10:26 PM, Jake Vanderplas wrote: > Hi Eric, > The contribution looks good. Have you compared your results to those of > matplotlib.mlab.psd()? I believe that also implements Welch's method. > Jake I'm also glad someone provides this in scipy. nitime from the nipy community also has more spectral analysis. Just one question: Should the periodogram have a bandwidth, window length choice? for example what's the M in your "flattop" example. partial aside: I worked on this for statsmodels some time (years) ago, but I never could quite figure out how to do the windows smoothing with the provided windows. (statsmodels only has raw, unwindowed periodogram and spectral density from estimated IIR/ARMA coefficients.) I didn't manage to get matplotlib's Welch implementation to work meaningfully for short time series that we have in economics, but I think your periodogram should "work like in the textbook". Thanks, Josef > > On 12/03/2012 06:54 PM, Eric Moore wrote: >> Hi, I've just submitted a pull request to add a function to compute a >> simple (modified) periodogram and a periodogram using Welch's method. >> >> The pull request is https://github.com/scipy/scipy/pull/373 >> >> The biggest question I have with my implementation is whether welch >> should support padded FFTs and detrending. But I'm happy to hear any >> feedback. 
>>
>> Eric
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From mcp.stras at gmail.com Mon Dec 3 23:47:29 2012
From: mcp.stras at gmail.com (Martin Campos Pinto)
Date: Tue, 04 Dec 2012 05:47:29 +0100
Subject: [SciPy-Dev] Gmres with Sparse Complex Matrices gives Segmentation fault (or NULL result)
In-Reply-To:
References: <50B0603B.3070801@gmail.com> <50B59962.1020205@gmail.com>
Message-ID: <50BD8061.6040403@gmail.com>

Hi,

On 11/28/12 7:26 PM, Pauli Virtanen wrote:
> Hi,
>
> On 28.11.2012 06:56, Martin Campos Pinto wrote:
>> I have just tried with scipy version 0.11.0, installed with MacPorts,
>> and gmres runs without crashing.
>> Since scipy 0.10.1 was the package provided with EPD, I first thought it
>> should run fine...
>
> That is sort of strange in itself, given that there are essentially no
> changes between 0.10.1 and 0.11.0 in that part of the code. The test suite
> also includes complex-valued test problems for gmres, so its not crashing
> is also not immediately understood.
>
> If the crashing version was 0.9.0 or earlier, then I'd understand what
> is going on... If you still can, double-checking what
> 'print scipy.__version__' said could be useful.

I have just reinstalled my OS (for reasons not related to scipy...) and updated the scipy MacPorts install, but not the EPD one; that one already used scipy 0.10.0, and the result is the same.
Here's the test with the scipy version number written down:

import scipy
from scipy import sparse
from numpy.linalg import norm
from numpy.random import rand
import scipy.sparse.linalg as spla

print "scipy version: ", scipy.__version__

C = sparse.lil_matrix((10, 10), dtype=complex)
C.setdiag(rand(10))
C[0,3] = 1j
C = C.tocsr()
c = rand(10)+rand(10)*1j

x = spla.spsolve(C, c)
print "spsolve: norm(C*x - c) = ", norm(C*x - c)

(x,info) = spla.gmres(C, c)
print "gmres: norm(C*x - c) = ", norm(C*x - c)

gives:

1. with the python binary installed with MacPorts:

Python 2.7.3 (default, Oct 22 2012, 06:12:28)
Martins-MacBook-Pro:~ campos$ /opt/local/bin/python2.7 test_gmres_cplx.py
scipy version: 0.11.0
spsolve: norm(C*x - c) = 2.60370378581e-16
gmres: norm(C*x - c) = 1.21827900786e-06

2. with the python binary installed with EPD:

Python 2.7.2 |EPD 7.2-2 (64-bit)| (default, Sep 7 2011, 16:31:15)
Martins-MacBook-Pro:~ campos$ python test_gmres_cplx.py
scipy version: 0.10.0
spsolve: norm(C*x - c) = 2.48641008715e-16
Segmentation fault

Best,
Martin

-- 
Martin Campos Pinto
http://www-irma.u-strasbg.fr/~campos

From ewm at redtetrahedron.org Tue Dec 4 07:49:02 2012
From: ewm at redtetrahedron.org (Eric Moore)
Date: Tue, 04 Dec 2012 07:49:02 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To: <50BD6D49.9000804@astro.washington.edu>
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu>
Message-ID: <50BDF13E.1010508@redtetrahedron.org>

Jake Vanderplas wrote:
> Hi Eric,
> The contribution looks good. Have you compared your results to those of
> matplotlib.mlab.psd()? I believe that also implements Welch's method.
> Jake
>
> On 12/03/2012 06:54 PM, Eric Moore wrote:
>> Hi, I've just submitted a pull request to add a function to compute a
>> simple (modified) periodogram and a periodogram using Welch's method.
>>
>> The pull request is https://github.com/scipy/scipy/pull/373
>>
>> The biggest question I have with my implementation is whether welch
>> should support padded FFTs and detrending. But I'm happy to hear any
>> feedback.
>>
>> Eric

It does produce the same output as mlab.psd(). The functions have slightly different defaults. Equivalent calls, for a sampling frequency of fs, a 1024-point transform, 50% overlap, and a hanning window, are

    signal.welch(x, fs, nfft=1024)

vs.

    mlab.psd(x, 1024, fs, noverlap=512)

Having only looked at this quickly, I'm not sure what mlab.psd is calculating if you set scale_by_freq=False, but it doesn't match my scaling='spectrum'.

I've also just noticed that my code dealing with the output of rfft is slightly broken. I'll push a corrected version later today.

Eric

From ewm at redtetrahedron.org Tue Dec 4 07:51:14 2012
From: ewm at redtetrahedron.org (Eric Moore)
Date: Tue, 04 Dec 2012 07:51:14 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To:
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu>
Message-ID: <50BDF1C2.5060006@redtetrahedron.org>

josef.pktd at gmail.com wrote:
> On Mon, Dec 3, 2012 at 10:26 PM, Jake Vanderplas
> wrote:
>> Hi Eric,
>> The contribution looks good. Have you compared your results to those of
>> matplotlib.mlab.psd()? I believe that also implements Welch's method.
>> Jake
>
> I'm also glad someone provides this in scipy.
>
> nitime from the nipy community also has more spectral analysis.
>
> Just one question: Should the periodogram have a bandwidth, window
> length choice?
> For example, what's the M in your "flattop" example.
>
> partial aside:
> I worked on this for statsmodels some time (years) ago, but I never
> could quite figure out how to do the window smoothing with the
> provided windows. (statsmodels only has the raw, unwindowed periodogram
> and spectral density from estimated IIR/ARMA coefficients.)
> I didn't manage to get matplotlib's Welch implementation to work > meaningfully for short time series that we have in economics, but I > think your periodogram should "work like in the textbook". > > Thanks, > Josef > >> On 12/03/2012 06:54 PM, Eric Moore wrote: >>> Hi, I've just submitted a pull request to add a function to compute a >>> simple (modified) periodogram and a periodogram using Welch's method. >>> >>> The pull request is https://github.com/scipy/scipy/pull/373 >>> >>> The biggest question I have with my implementation is whether welch >>> should support padded FFTs and detrending. But I'm happy to hear any >>> feedback. >>> >>> Eric I'm not sure what M is. The size of the window is nfft. Eric From josef.pktd at gmail.com Tue Dec 4 09:00:10 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 4 Dec 2012 09:00:10 -0500 Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions In-Reply-To: <50BDF1C2.5060006@redtetrahedron.org> References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> <50BDF1C2.5060006@redtetrahedron.org> Message-ID: On Tue, Dec 4, 2012 at 7:51 AM, Eric Moore wrote: > josef.pktd at gmail.com wrote: >> On Mon, Dec 3, 2012 at 10:26 PM, Jake Vanderplas >> wrote: >>> Hi Eric, >>> The contribution looks good. Have you compared your results to those of >>> matplotlib.mlab.psd()? I believe that also implements Welch's method. >>> Jake >> >> I'm also glad someone provides this in scipy. >> >> nitime from the nipy community also has more spectral analysis. >> >> Just one question: Should the periodogram have a bandwidth, window >> length choice? >> for example what's the M in your "flattop" example. >> >> partial aside: >> I worked on this for statsmodels some time (years) ago, but I never >> could quite figure out how to do the windows smoothing with the >> provided windows. (statsmodels only has raw, unwindowed periodogram >> and spectral density from estimated IIR/ARMA coefficients.) 
>> I didn't manage to get matplotlib's Welch implementation to work
>> meaningfully for the short time series that we have in economics, but I
>> think your periodogram should "work like in the textbook".
>>
>> Thanks,
>> Josef
>>
>>> On 12/03/2012 06:54 PM, Eric Moore wrote:
>>>> Hi, I've just submitted a pull request to add a function to compute a
>>>> simple (modified) periodogram and a periodogram using Welch's method.
>>>>
>>>> The pull request is https://github.com/scipy/scipy/pull/373
>>>>
>>>> The biggest question I have with my implementation is whether welch
>>>> should support padded FFTs and detrending. But I'm happy to hear any
>>>> feedback.
>>>>
>>>> Eric
>
> I'm not sure what M is. The size of the window is nfft.

scipy.signal.flattop(M, sym=True)
    The M-point flat top window.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.flattop.html

More general question then:
How do I make a periodogram smoother or less smooth?

Josef

> Eric
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From sturla at molden.no Tue Dec 4 11:09:19 2012
From: sturla at molden.no (Sturla Molden)
Date: Tue, 4 Dec 2012 17:09:19 +0100
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To:
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> <50BDF1C2.5060006@redtetrahedron.org>
Message-ID: <359D537D-B79B-49AC-9B58-693B2F94A691@molden.no>

On 4 Dec 2012, at 15:00, josef.pktd at gmail.com wrote:
>
> More general question then:
> How do I make a periodogram smoother or less smooth?
>

You don't. The variance of the periodogram is independent of its length. But intuitively, sampling more data should produce a better estimate of the power spectrum: the power spectrum is the Fourier transform of the signal's autocorrelation. The more data we sample, the better we can estimate the autocorrelation,
and thus the better we can estimate the spectrum. But the periodogram does not work that way. As a spectrum estimator, its variance does not decay with the amount of sampled data. That is what the Welch, Blackman-Tukey, and multitaper methods try to remedy, but they do so by introducing bias.

Personally I prefer ARMA modelling or the continuous wavelet transform for spectrum analysis.

For FFT-based power spectra, a surprisingly efficient method is to wavelet denoise a multitaper spectrum. Multitapering seems to work better than Welch, and after wavelet shrinkage the spectra look smooth. But this is still inferior to parametric ARMA modelling, or to just averaging a CWT over time.

Sturla

From ewm at redtetrahedron.org Tue Dec 4 23:07:15 2012
From: ewm at redtetrahedron.org (Eric Moore)
Date: Tue, 04 Dec 2012 23:07:15 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To: <50BD6D49.9000804@astro.washington.edu>
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu>
Message-ID: <50BEC873.10409@redtetrahedron.org>

Jake Vanderplas wrote:
> Hi Eric,
> The contribution looks good. Have you compared your results to those of
> matplotlib.mlab.psd()? I believe that also implements Welch's method.
> Jake
>
> On 12/03/2012 06:54 PM, Eric Moore wrote:
>> Hi, I've just submitted a pull request to add a function to compute a
>> simple (modified) periodogram and a periodogram using Welch's method.
>>
>> The pull request is https://github.com/scipy/scipy/pull/373
>>
>> The biggest question I have with my implementation is whether welch
>> should support padded FFTs and detrending. But I'm happy to hear any
>> feedback.
>>
>> Eric

I've added another commit that fixes a bunch of bugs in what I originally submitted. I now produce exactly the same output as psd, except for a bug in psd itself for two-sided spectra with NFFT odd.
(Submitted here: https://github.com/matplotlib/matplotlib/issues/1561)

Eric

From daniele at grinta.net Wed Dec 5 04:31:10 2012
From: daniele at grinta.net (Daniele Nicolodi)
Date: Wed, 05 Dec 2012 10:31:10 +0100
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To: <50BD65D4.7050509@redtetrahedron.org>
References: <50BD65D4.7050509@redtetrahedron.org>
Message-ID: <50BF145E.5040501@grinta.net>

On 04/12/2012 03:54, Eric Moore wrote:
> The biggest question I have with my implementation is whether welch
> should support padded FFTs and detrending. But I'm happy to hear any
> feedback.

I think detrending is a required functionality for the welch method.

Global detrending can easily be applied before feeding the data to welch, but what is required in most applications is the independent detrending of each data segment used in the estimator. Detrending up to an arbitrary polynomial order, in addition to the common linear detrending, would be a nice addition too.

PS: the lack of segment-by-segment detrending in Matlab's welch() function forces me to maintain a forked version of it.

Cheers,
Daniele

From mail.till at gmx.de Wed Dec 5 07:03:42 2012
From: mail.till at gmx.de (Till Stensitzki)
Date: Wed, 5 Dec 2012 12:03:42 +0000 (UTC)
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> <50BDF1C2.5060006@redtetrahedron.org> <359D537D-B79B-49AC-9B58-693B2F94A691@molden.no>
Message-ID:

> You don't. The variance of the periodogram is independent of its length. But intuitively, sampling more
> data should produce a better estimate of the power spectrum: the power spectrum is the Fourier transform
> of the signal's autocorrelation. The more data we sample, the better we can estimate the autocorrelation,
> and thus the better we can estimate the spectrum. But the periodogram does not work that way.
> As a spectrum estimator, its variance does not decay with the amount of sampled data. That is what the
> Welch, Blackman-Tukey, and multitaper methods try to remedy, but they do so by introducing bias.
>
> Personally I prefer ARMA modelling or the continuous wavelet transform for spectrum analysis.
>
> For FFT-based power spectra, a surprisingly efficient method is to wavelet denoise a multitaper
> spectrum. Multitapering seems to work better than Welch, and after wavelet shrinkage the spectra look
> smooth. But this is still inferior to parametric ARMA modelling, or to just averaging a CWT over time.
>
> Sturla

The book by Larry Bretthorst, "Bayesian Spectrum Analysis and Parameter Estimation", shows nicely how spectrum estimation fits into (Bayesian) statistics. It can be downloaded from his site: bayes.wustl.edu/glb/

Till

From ndbecker2 at gmail.com Wed Dec 5 10:35:17 2012
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 05 Dec 2012 10:35:17 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> <50BDF1C2.5060006@redtetrahedron.org> <359D537D-B79B-49AC-9B58-693B2F94A691@molden.no>
Message-ID:

Till Stensitzki wrote:
> bayes.wustl.edu/glb/

Which file?

From jsseabold at gmail.com Wed Dec 5 10:47:13 2012
From: jsseabold at gmail.com (Skipper Seabold)
Date: Wed, 5 Dec 2012 10:47:13 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To:
References: <50BD65D4.7050509@redtetrahedron.org> <50BD6D49.9000804@astro.washington.edu> <50BDF1C2.5060006@redtetrahedron.org> <359D537D-B79B-49AC-9B58-693B2F94A691@molden.no>
Message-ID:

On Wed, Dec 5, 2012 at 10:35 AM, Neal Becker wrote:
> Till Stensitzki wrote:
>
> > bayes.wustl.edu/glb/
>
> Which file?

book.pdf

Skipper

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mrocklin at gmail.com Wed Dec 5 11:34:54 2012 From: mrocklin at gmail.com (Matthew Rocklin) Date: Wed, 5 Dec 2012 10:34:54 -0600 Subject: [SciPy-Dev] stats.distributions.py documentation Message-ID: I was looking through the archives and stumbled across this thread. We had a tireless contributor (@raoulb) do something like this in SymPy a while back. The results were quite nice. http://docs.sympy.org/0.7.2/modules/stats.html#continuous-types https://github.com/sympy/sympy/blob/master/sympy/stats/crv_types.py#L126 You may also be able to copy a fair amount of the content from these docstrings. There are more in a pull request if you're interested. Best, -Matthew -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Wed Dec 5 12:37:46 2012 From: sturla at molden.no (Sturla Molden) Date: Wed, 05 Dec 2012 18:37:46 +0100 Subject: [SciPy-Dev] PR: Add solout support to dopri5 and dop853 In-Reply-To: References: Message-ID: <50BF866A.6090903@molden.no> I guess we could also add DPSS windows for multitaper methods? Sturla On 29.11.2012 16:35, John Travers wrote: > Hi all, > > I realise that the ode code has its problems/detractors, but I needed a long requested feature, so I tried to add it: > > https://github.com/scipy/scipy/pull/366 > > This enables the use of a callback function which is called at each internal integration step of dopri5 or dop853. This is useful to monitor the ode evolution at its natural time-scale without increasing the output density. It also enables stopping the integration mid-way. > > This feature was requested quite a while ago, just before my absence from scipy development: > > http://mail.scipy.org/pipermail/scipy-dev/2010-September/015540.html > > Comments/improvements welcome. 
> > Cheers, > John > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From sturla at molden.no Wed Dec 5 12:40:55 2012 From: sturla at molden.no (Sturla Molden) Date: Wed, 05 Dec 2012 18:40:55 +0100 Subject: [SciPy-Dev] PR: Add solout support to dopri5 and dop853 In-Reply-To: <50BF866A.6090903@molden.no> References: <50BF866A.6090903@molden.no> Message-ID: <50BF8727.1070501@molden.no> Sorry, that should have been a response to the periodogram thread :) Sturla On 05.12.2012 18:37, Sturla Molden wrote: > I guess we could also add DPSS windows for multitaper methods? > > > Sturla > > > On 29.11.2012 16:35, John Travers wrote: >> Hi all, >> >> I realise that the ode code has its problems/detractors, but I needed a long requested feature, so I tried to add it: >> >> https://github.com/scipy/scipy/pull/366 >> >> This enables the use of a callback function which is called at each internal integration step of dopri5 or dop853. This is useful to monitor the ode evolution at its natural time-scale without increasing the output density. It also enables stopping the integration mid-way. >> >> This feature was requested quite a while ago, just before my absence from scipy development: >> >> http://mail.scipy.org/pipermail/scipy-dev/2010-September/015540.html >> >> Comments/improvements welcome. 
>>
>> Cheers,
>> John
>>
>>
>>
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev

> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From ewm at redtetrahedron.org Thu Dec 6 21:50:51 2012
From: ewm at redtetrahedron.org (Eric Moore)
Date: Thu, 06 Dec 2012 21:50:51 -0500
Subject: [SciPy-Dev] Pull Req: Add periodogram and welch functions
In-Reply-To: <50BF145E.5040501@grinta.net>
References: <50BD65D4.7050509@redtetrahedron.org> <50BF145E.5040501@grinta.net>
Message-ID: <50C1598B.3030009@redtetrahedron.org>

Daniele Nicolodi wrote:
> On 04/12/2012 03:54, Eric Moore wrote:
>> The biggest question I have with my implementation is whether welch
>> should support padded FFTs and detrending. But I'm happy to hear any
>> feedback.
>
> I think detrending is a required functionality for the welch method.
>
> Global detrending can easily be applied before feeding the data to
> welch, but what is required in most applications is the independent
> detrending of each data segment used in the estimator. Detrending up to
> an arbitrary polynomial order, in addition to the common linear
> detrending, would be a nice addition too.
>
> PS: the lack of segment-by-segment detrending in Matlab's welch()
> function forces me to maintain a forked version of it.
>
> Cheers,
> Daniele

I've added detrending. The relevant part of the docstring is:

    detrend : string or function, optional
        Specifies how to detrend each segment. If `detrend` is a string,
        it is passed as the `type` argument to scipy.signal.detrend. If
        it is a function, it takes a segment and returns a detrended
        segment. Defaults to 'constant'.

Supporting polynomial detrending would be best done by extending detrend, I think. (Also, I'd love to see an actual use case for its `bp` argument.)
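[Editor's note: Daniele's per-segment polynomial detrending can be expressed through the function form of `detrend` quoted above. The sketch below assumes the keyword behaves as it does in the `scipy.signal.welch` that eventually shipped (the callable is applied to an array with segments along the last axis); `quad_detrend` and all signal parameters here are invented for illustration and are not from Eric's branch.]

```python
import numpy as np
from scipy import signal

def quad_detrend(data):
    # Hypothetical per-segment quadratic detrend. welch hands the callable
    # an array whose last axis runs along each segment, so detrend every
    # 1-D slice independently.
    def _one(seg):
        t = np.arange(seg.size)
        return seg - np.polyval(np.polyfit(t, seg, 2), t)
    return np.apply_along_axis(_one, -1, data)

# A 50 Hz tone riding on a quadratic drift, sampled at 1024 Hz.
rng = np.random.default_rng(0)
t = np.arange(4096) / 1024.0
x = np.sin(2 * np.pi * 50.0 * t) + 0.5 * t**2 + 0.1 * rng.standard_normal(t.size)

f, Pxx = signal.welch(x, fs=1024.0, nperseg=256, detrend=quad_detrend)
# With the drift removed segment by segment, the spectral peak sits near 50 Hz.
```

A plain `detrend='linear'` string covers the common case; the callable form is only needed for higher polynomial orders or other custom schemes like this one.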
Looking at matplotlib's psd again, it also supports setting the length of the FFT separately from the segment length. Would we also like to support that option?

Eric

From pav at iki.fi Fri Dec 7 13:54:24 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 07 Dec 2012 20:54:24 +0200
Subject: [SciPy-Dev] PR 377: Convex hulls and Voronoi diagrams via Qhull
Message-ID:

Hi,

PR https://github.com/scipy/scipy/pull/377 tries to finish up the Qhull interface in scipy.spatial.

If you're interested in this sort of thing, please take a look (at least at the docstrings). Is there some functionality missing that you'd like to also have?

-- 
Pauli Virtanen

From pav at iki.fi Fri Dec 7 14:36:23 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 07 Dec 2012 21:36:23 +0200
Subject: [SciPy-Dev] Jaccard distance?
Message-ID:

Hi,

Does someone know what the "correct" definition of the Jaccard distance is?

There's a bug report that claims that the current behavior is wrong:

http://projects.scipy.org/scipy/ticket/1774

However, as far as I can see, the result is exactly what the docstring says, and our result agrees with Octave.

-- 
Pauli Virtanen

From vanderplas at astro.washington.edu Fri Dec 7 15:06:07 2012
From: vanderplas at astro.washington.edu (Jake Vanderplas)
Date: Fri, 07 Dec 2012 12:06:07 -0800
Subject: [SciPy-Dev] Jaccard distance?
In-Reply-To:
References:
Message-ID: <50C24C2F.8030104@astro.washington.edu>

Hi,
The Jaccard distance is a dissimilarity metric between two sets. I think the confusion here is how the sets are specified. The definition of the Jaccard distance is:

    J(A, B) = 1 - |A intersect B| / |A union B|

so if A = {1, 2, 3, 4} and B = {1, 2, 4, 3}, then J(A, B) = 0. Recall that order does not matter for sets.

Sets can also be encoded as an ordered list of binary variables: the above sets (zero-indexed) could be represented by

    A = [0, 1, 1, 1, 1];  B = [0, 1, 1, 1, 1].
In this case, order matters, and the distance can be specified by

    J_bin(A, B) = N_unequal(A, B) / N_nonzero(A, B)

and we recover J_bin(A, B) = 0 as above.

As a more complicated example, if A = [1, 0, 0, 1] and B = [1, 0, 1, 0], then

    N_unequal(A, B) = 2
    N_nonzero(A, B) = 3  (only a single index has zero for both A and B)

and J_bin(A, B) = 2/3.

Where things get a bit messy is that scipy & octave extend this binary notion of the Jaccard distance to arbitrary numbers. So if A = [1, 2, 3, 4] and B = [1, 2, 4, 3], then

    N_unequal(A, B) = 2
    N_nonzero(A, B) = 4

and J_ext(A, B) = 1/2. I'm not sure whether this results in a true metric - that would be interesting to figure out.

This seems to be the issue in ticket 1774: the user expected the metric to operate as if A and B are unordered sets, while the function actually operates as if they're ordered lists of (extended) binary variables.

Hope that helps,
Jake

On 12/07/2012 11:36 AM, Pauli Virtanen wrote:
> Hi,
>
> Does someone know about what is the "correct" definition for the Jaccard
> distance?
>
> There's a bug report that claims that the current behavior is wrong:
>
> http://projects.scipy.org/scipy/ticket/1774
>
> However, as far as I see, the result is exactly what the docstring says
> and our result agrees with Octave.

From matt.terry at gmail.com Sat Dec 8 04:09:13 2012
From: matt.terry at gmail.com (Matt Terry)
Date: Sat, 8 Dec 2012 01:09:13 -0800
Subject: [SciPy-Dev] PR 377: Convex hulls and Voronoi diagrams via Qhull
In-Reply-To:
References:
Message-ID:

Very cool and thanks!

It would be nice to be able to add points to an existing triangulation. I looked into this briefly, and it looked like qhull has the needed functions, but it does some strange things with globals that would make retaining the needed data structures tricky.

On Fri, Dec 7, 2012 at 10:54 AM, Pauli Virtanen wrote:
> Hi,
>
> PR https://github.com/scipy/scipy/pull/377 tries to finish up the Qhull
> interface in scipy.spatial.
> > If you're interested in this sort of thing, please take a look (at least > at the docstrings). Is there some functionality missing that you'd like > to also have? > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Dec 8 06:28:19 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 8 Dec 2012 12:28:19 +0100 Subject: [SciPy-Dev] stats.distributions.py documentation In-Reply-To: References: Message-ID: On Wed, Dec 5, 2012 at 5:34 PM, Matthew Rocklin wrote: > I was looking through the archives and stumbled across this thread. We > had a tireless contributor (@raoulb) do something like this in SymPy a > while back. The results were quite nice. > http://docs.sympy.org/0.7.2/modules/stats.html#continuous-types > https://github.com/sympy/sympy/blob/master/sympy/stats/crv_types.py#L126 > > You may also be able to copy a fair amount of the content from these > docstrings. There are more in a pull request if you're interested. > Thanks, that looks useful. The "SymPy Live" feature on the examples is really cool by the way! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Dec 8 09:49:34 2012 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 08 Dec 2012 16:49:34 +0200 Subject: [SciPy-Dev] PR 377: Convex hulls and Voronoi diagrams via Qhull In-Reply-To: References: Message-ID: Hi, 08.12.2012 11:09, Matt Terry kirjoitti: [clip] > It would be nice to be able to add points to an existing triangulation. > I looked into this briefly, and it looked like qhull has the needed > functions, but it does some strange things with globals that would make > retaining the needed data structures tricky. 
I also looked at this: there's some code here:

https://github.com/pv/scipy-work/tree/incremental-qhull

This works fine as long as the triangulation has no geometrical degeneracies. However, Qhull seems to not be really designed for this, and so far I've had some difficulties with this feature. For instance:

import numpy as np
from scipy.spatial import Delaunay

points = [(0,0), (0,1), (1,0), (1,1)]
tri = Delaunay(points, incremental=True, qhull_options="Qt Q11 Qz")
tri.add_points([(1, 1 + 1e-3)])
print tri.vertices

Despite the "Qt Q11" options and calling qh_triangulate, Qhull seems to generate a non-simplicial facet here, so the triangulation has to be re-done from scratch.

Moreover, if you hit a case in which Qhull makes a longjmp() out from the middle of the code (or aborts the process, if error handling is not enabled), the internal state is apparently left inconsistent. Interestingly, this seems to also happen in qh_new_qhull, so in fact the Qhull code probably leaks memory on failures.

-- 
Pauli Virtanen

From pav at iki.fi Sat Dec 8 19:35:36 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 09 Dec 2012 02:35:36 +0200
Subject: [SciPy-Dev] PR 377: Convex hulls and Voronoi diagrams via Qhull
In-Reply-To:
References:
Message-ID: <50C3DCD8.6000603@iki.fi>

On 08.12.2012 16:49, Pauli Virtanen wrote:
> On 08.12.2012 11:09, Matt Terry wrote:
> [clip]
>> It would be nice to be able to add points to an existing triangulation.
>> I looked into this briefly, and it looked like qhull has the needed
>> functions, but it does some strange things with globals that would make
>> retaining the needed data structures tricky.
>
> I also looked at this: there's some code here:
>
> https://github.com/pv/scipy-work/tree/incremental-qhull
>
> This works fine as long as the triangulation has no geometrical
> degeneracies.
>
> However, Qhull seems to not be really designed for this, and so far
> I've had some difficulties with this feature.
[clip] Ok, the problem was that I had an error in my own code. Now it works fine enough --- I added it as a part of the pull request: https://github.com/scipy/scipy/pull/377 Testing is welcome! If you have some real-world data at hand, I'd be interested in hearing if it works or not. Cheers, Pauli From matt.terry at gmail.com Mon Dec 10 12:45:08 2012 From: matt.terry at gmail.com (Matt Terry) Date: Mon, 10 Dec 2012 09:45:08 -0800 Subject: [SciPy-Dev] PR 377: Convex hulls and Voronoi diagrams via Qhull In-Reply-To: <50C3DCD8.6000603@iki.fi> References: <50C3DCD8.6000603@iki.fi> Message-ID: cool! it will be a couple days, but I'll give it a spin. -matt On Sat, Dec 8, 2012 at 4:35 PM, Pauli Virtanen wrote: > 08.12.2012 16:49, Pauli Virtanen kirjoitti: > > 08.12.2012 11:09, Matt Terry kirjoitti: > > [clip] > >> It would be nice to be able to add points to an existing triangulation. > >> I looked into this briefly, and it looked like qhull has the needed > >> functions, but it does some strange things with globals that would make > >> retaining the needed data structures tricky. > > > > I also looked at this: there's some code here: > > > > https://github.com/pv/scipy-work/tree/incremental-qhull > > > > This works fine as long as the triangulation has no geometrical > > degeneracies. > > > > However, Qhull seems to not be really designed for this, and I so far > > I've had some difficulties with this feature. > [clip] > > Ok, the problem was that I had an error in my own code. > > Now it works fine enough --- I added it as a part of the pull request: > > https://github.com/scipy/scipy/pull/377 > > Testing is welcome! If you have some real-world data at hand, I'd be > interested in hearing if it works or not. > > Cheers, > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From vanderplas at astro.washington.edu Mon Dec 10 13:29:43 2012
From: vanderplas at astro.washington.edu (Jake Vanderplas)
Date: Mon, 10 Dec 2012 10:29:43 -0800
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
Message-ID: <50C62A17.2090403@astro.washington.edu>

Hi folks,
I just came across a sparse svd implementation based on SVDLIBC [1] with
a nice python wrapper utilizing Scipy's csc_matrix type [2]. Scipy
currently includes a basic iterative sparse svd implementation based on
ARPACK (scipy.sparse.linalg.svds), but the implementation is somewhat
hackish. The SVDLIBC version uses the same principles as ARPACK --
Lanczos factorization -- and from my quick checks, can be faster than
the ARPACK version in some cases. All the code, including python
wrappers, is released under a BSD license, so it would be fairly
seamless to include in Scipy.

On the plus side, incorporating SVDLIBC would add some well-tested
sparse functionality and give users more powerful options. Where our
current svds function performs iterations within python, the SVDLIBC
implementation performs the iterations directly within the C code. It
uses the csc_matrix format internally, so no data copying is involved.
It could fairly easily supplement or replace our current sparse svd.

On the minus side, the functionality does duplicate what we already
have, and would involve bundling another C package in Scipy. This might
cause some linking headaches (what if the user already has a different
version of SVDLIBC on their system? We experienced this with ARPACK) and
maintenance overhead (possibility of added compilation issues; the need
to keep up with updates to SVDLIBC). Furthermore, sparsesvd is a fairly
light-weight python package, and users needing the functionality could
easily install it with pip if the need arises.

I could be convinced either way, but I thought I'd ask the list: any
thoughts on whether this would be worth including in Scipy?
Jake

[1] http://tedlab.mit.edu/~dr/SVDLIBC/
[2] http://pypi.python.org/pypi/sparsesvd/

From josef.pktd at gmail.com Mon Dec 10 20:09:27 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 10 Dec 2012 20:09:27 -0500
Subject: [SciPy-Dev] scipy.linalg.svdvals and complex
Message-ID:

I'm trying out dtypes in statsmodels to find potential problems, and to
find dtypes that work without problems.

One exception I get is because svdvals doesn't work with complex type,
while svd does.
Is this a feature or a bug?

>>> scipy.__version__
'0.9.0'

x is just a random float array

>>> (x+0.1j).dtype
dtype('complex128')

>>> from scipy import linalg
>>> linalg.svdvals(x+0.1j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py",
line 117, in svdvals
    return svd(a, compute_uv=0, overwrite_a=overwrite_a)
  File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py",
line 81, in svd
    overwrite_a = overwrite_a)
flapack.error: (lwork>=(compute_uv?2*minmn*minmn+MAX(m,n)+2*minmn:2*minmn+MAX(m,n)))
failed for 2nd keyword lwork: zgesdd:lwork=26

>>> linalg.svd(x+0.1j)
(array([[-0.10514587-0.0248632j , -0.19299488-0.00077008j,
         -0.09957051+0.01354966j, ..., -0.05697563+0.02706233j,
....

Josef

From josef.pktd at gmail.com Tue Dec 11 08:53:21 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 11 Dec 2012 08:53:21 -0500
Subject: [SciPy-Dev] scipy.linalg.svdvals and complex
In-Reply-To:
References:
Message-ID:

Fixed in 0.11

Sorry for the noise, I should keep better track of my scipy versions.

Josef

On Mon, Dec 10, 2012 at 8:09 PM, wrote:
> I'm trying out dtypes in statsmodels to find potential problems, and to
> find dtypes that work without problems.
>
> One exception I get is because svdvals doesn't work with complex type,
> while svd does.
> Is this a feature or a bug?
>
>>>> scipy.__version__
> '0.9.0'
>
> x is just a random float array
>>>> (x+0.1j).dtype
> dtype('complex128')
>
>>>> from scipy import linalg
>>>> linalg.svdvals(x+0.1j)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py",
> line 117, in svdvals
>     return svd(a, compute_uv=0, overwrite_a=overwrite_a)
>   File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py",
> line 81, in svd
>     overwrite_a = overwrite_a)
> flapack.error: (lwork>=(compute_uv?2*minmn*minmn+MAX(m,n)+2*minmn:2*minmn+MAX(m,n)))
> failed for 2nd keyword lwork: zgesdd:lwork=26
>
>>>> linalg.svd(x+0.1j)
> (array([[-0.10514587-0.0248632j , -0.19299488-0.00077008j,
>          -0.09957051+0.01354966j, ..., -0.05697563+0.02706233j,
> ....
>
> Josef

From pav at iki.fi Tue Dec 11 19:34:33 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 12 Dec 2012 02:34:33 +0200
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
In-Reply-To: <50C62A17.2090403@astro.washington.edu>
References: <50C62A17.2090403@astro.washington.edu>
Message-ID:

10.12.2012 20:29, Jake Vanderplas kirjoitti:
[clip]
> I could be convinced either way, but I thought I'd ask the list: any
> thoughts on whether this would be worth including in Scipy?
[clip]
> [1] http://tedlab.mit.edu/~dr/SVDLIBC/
> [2] http://pypi.python.org/pypi/sparsesvd/

I don't think that this will bring new compilation problems, as Scipy is
probably already as bad as it gets.

I think the main question would be about numerical robustness, as the
current svds() implementation doesn't do anything special. Also, how
much additional trouble comes from maintenance of the wrappers (probably
not so much, if the package already has existed for some time).

I glanced through the svdlibc source code and the review paper:

- The algorithm is at least at a high level the same, Lanczos on A'A
- The C version doesn't seem to use LAPACK or BLAS.
It wasn't immediately clear which pieces of code in the LAS2 algorithm
were tailored for SVD. On the other hand, it's based on some venerable
and published code, so it's probably good.

Dealing with CSR internally is probably not a bad thing --- svds() won't
work with general linear operators either because it needs the transpose.

It's not clear to me either how big an overall win this would be, but I
won't object if you'd like to add it.

Pauli

From pav at iki.fi Tue Dec 11 19:39:10 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 12 Dec 2012 02:39:10 +0200
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
In-Reply-To:
References: <50C62A17.2090403@astro.washington.edu>
Message-ID:

12.12.2012 02:34, Pauli Virtanen kirjoitti:
[clip]
> It's not clear to me either how big an overall win this would be, but I
> won't object if you'd like to add it.

Ah, also: the C version apparently doesn't have complex-value support.
That makes it somewhat less useful...

Pauli

From d.s.seljebotn at astro.uio.no Wed Dec 12 03:23:58 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Wed, 12 Dec 2012 09:23:58 +0100
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
In-Reply-To:
References: <50C62A17.2090403@astro.washington.edu>
Message-ID: <50C83F1E.4050107@astro.uio.no>

On 12/12/2012 01:34 AM, Pauli Virtanen wrote:
> 10.12.2012 20:29, Jake Vanderplas kirjoitti:
> [clip]
>> I could be convinced either way, but I thought I'd ask the list: any
>> thoughts on whether this would be worth including in Scipy?
> [clip]
>> [1] http://tedlab.mit.edu/~dr/SVDLIBC/
>> [2] http://pypi.python.org/pypi/sparsesvd/
>
> I don't think that this will bring new compilation problems, as Scipy is
> probably already as bad as it gets.

Well there's no Fortran 90 :-)

> Dealing with CSR internally is probably not a bad thing --- svds() won't
> work with general linear operators either because it needs the transpose.
As long as there's still some routine in place that can work with
operators; I usually have matrices that are not sparse but can still be
multiplied quickly from either side.

Dag Sverre

From fabian.pedregosa at inria.fr Wed Dec 12 07:50:45 2012
From: fabian.pedregosa at inria.fr (Fabian Pedregosa)
Date: Wed, 12 Dec 2012 13:50:45 +0100
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
In-Reply-To: <50C62A17.2090403@astro.washington.edu>
References: <50C62A17.2090403@astro.washington.edu>
Message-ID:

On Mon, Dec 10, 2012 at 7:29 PM, Jake Vanderplas <
vanderplas at astro.washington.edu> wrote:

> Hi folks,
> I just came across a sparse svd implementation based on SVDLIBC [1] with
> a nice python wrapper utilizing Scipy's csc_matrix type [2]. Scipy
> currently includes a basic iterative sparse svd implementation based on
> ARPACK (scipy.sparse.linalg.svds), but the implementation is somewhat
> hackish. The SVDLIBC version uses the same principles as ARPACK --
> Lanczos factorization -- and from my quick checks, can be faster than
> the ARPACK version in some cases. All the code, including python
> wrappers, is released under a BSD license, so it would be fairly
> seamless to include in Scipy.
>
> On the plus side, incorporating SVDLIBC would add some well-tested
> sparse functionality and give users more powerful options. Where our
> current svds function performs iterations within python, the SVDLIBC
> implementation performs the iterations directly within the C code. It
> uses the csc_matrix format internally, so no data copying is involved.
> It could fairly easily supplement or replace our current sparse svd.
>

I used this routine for the past weeks. I frankly saw no improvements
in performance over the current ARPACK implementation, plus I found it
annoying to have to explicitly convert to CSC. Also, the current
bindings do not provide any optional parameters such as tolerance or
maxiter.
Part of my application involves large dense matrices, and in that case
converting to CSC kills performance, losing a factor of 2-3 over ARPACK.
But I'd be interested to see if it has practical advantages (stability?
accuracy?) over ARPACK.

Best,

Fabian.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vanderplas at astro.washington.edu Wed Dec 12 14:15:42 2012
From: vanderplas at astro.washington.edu (Jake Vanderplas)
Date: Wed, 12 Dec 2012 11:15:42 -0800
Subject: [SciPy-Dev] SVDLIBC for sparse SVDs
In-Reply-To:
References: <50C62A17.2090403@astro.washington.edu>
Message-ID: <50C8D7DE.70103@astro.washington.edu>

Hi All,
Here is a quick notebook benchmarking the speeds of ARPACK vs SVDLIBC:
http://nbviewer.ipython.org/url/www.astro.washington.edu/users/vanderplas/sparse_bench.ipynb

Quick summary: SVDLIBC seems to be faster for matrices smaller than N of
a few hundred, while ARPACK is faster for larger matrices. I think this
reflects the python overhead in ARPACK's iteration interface, which
becomes negligible as the cost of an iteration grows.

I haven't compared the stability or accuracy of the two algorithms.
However, both use Lanczos diagonalization under the hood, so I'd expect
them to be similar in this regard.

Unless there's another compelling reason to port SVDLIBC to scipy, it
looks like the ARPACK svd is generally a sufficient option. It's more
flexible, faster where it matters, and (perhaps most importantly)
already in the library :)

Jake

On 12/12/2012 04:50 AM, Fabian Pedregosa wrote:
>
> On Mon, Dec 10, 2012 at 7:29 PM, Jake Vanderplas
> > wrote:
>
> Hi folks,
> I just came across a sparse svd implementation based on SVDLIBC
> [1] with
> a nice python wrapper utilizing Scipy's csc_matrix type [2]. Scipy
> currently includes a basic iterative sparse svd implementation
> based on
> ARPACK (scipy.sparse.linalg.svds), but the implementation is somewhat
> hackish.
The SVDLIBC version uses the same principles as ARPACK -- > Lanczos factorization -- and from my quick checks, can be faster than > the ARPACK version in some cases. All the code, including python > wrappers, is released under a BSD license, so it would be fairly > seamless to include in Scipy. > > On the plus side, incorporating SVDLIBC would add some well-tested > sparse functionality and gives users more powerful options. Where our > current svds function performs iterations within python, the SVDLIBC > implementation performs the iterations directly within the C code. It > uses the csc_matrix format internally, so no data copying is involved. > It could fairly easily supplement or replace our current sparse svd. > > > I used this routine for the passed weeks. I frankly saw no > improvements in performance over the current ARPACK implementation, > plus I found it annoying to having to explicitly convert to CSC. Also, > the current bindings do not provide any optional parameters such as > tolerance or maxiter. > > Part of my applications is large dense matrices, and in that case > converting to CSC kills performance, loosing a factor 2-3 over ARPACK. > But I'd be interested to see if it has practical advantages > (stability? accuracy?) over ARPACK. > > Best, > > Fabian. > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... 
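For anyone reproducing the comparison, the ARPACK side of the benchmark is plain scipy.sparse.linalg.svds; a small self-contained sanity check that its leading singular values agree with a dense SVD looks like this (the matrix and sizes below are illustrative, not those of the notebook):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import svds

rng = np.random.RandomState(0)
dense = rng.rand(60, 40)
dense[dense < 0.8] = 0.0              # keep roughly 20% of the entries
A = csc_matrix(dense)

u, s, vt = svds(A, k=5)               # Lanczos via ARPACK under the hood
s_dense = np.linalg.svd(dense, compute_uv=False)[:5]

# svds returns the k singular values in ascending order
assert np.allclose(np.sort(s)[::-1], s_dense)
```

A timing comparison would wrap the `svds` call (and the sparsesvd equivalent) in a timer for a range of matrix sizes, which is what the notebook does.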
URL: From cournape at gmail.com Wed Dec 12 14:31:45 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 12 Dec 2012 20:31:45 +0100 Subject: [SciPy-Dev] SVDLIBC for sparse SVDs In-Reply-To: <50C62A17.2090403@astro.washington.edu> References: <50C62A17.2090403@astro.washington.edu> Message-ID: On Mon, Dec 10, 2012 at 7:29 PM, Jake Vanderplas wrote: > Hi folks, > I just came across a sparse svd implementation based on SVDLIBC [1] with > a nice python wrapper utilizing Scipy's csc_matrix type [2]. Scipy > currently includes a basic iterative sparse svd implementation based on > ARPACK (scipy.sparse.linalg.svds), but the implementation is somewhat > hackish. The SVDLIBC version uses the same principles as ARPACK -- > Lanczos factorization -- and from my quick checks, can be faster than > the ARPACK version in some cases. All the code, including python > wrappers, is released under a BSD license, so it would be fairly > seamless to include in Scipy. > > On the plus side, incorporating SVDLIBC would add some well-tested > sparse functionality and gives users more powerful options. Where our > current svds function performs iterations within python, the SVDLIBC > implementation performs the iterations directly within the C code. It > uses the csc_matrix format internally, so no data copying is involved. > It could fairly easily supplement or replace our current sparse svd. > > On the minus side, the functionality does duplicate what we already > have, and would involve bundling another C package in Scipy. This might > cause some linking headaches (what if the user already has a different > version of SVDLIBC on their system? We experienced this with ARPACK) and > maintenance overhead (possibility of added compilation issues; the need > to keep up with updates to SVDLIBC). Furthermore, sparsesvd is a fairly > light-weight python package, and users needing the functionality could > easily install it with pip if the need arises. 
> > I could be convinced either way, but I thought I'd ask the list: any
> thoughts on whether this would be worth including in Scipy?

I think that if we are changing the method for sparse SVD, we should
just use propack. Fabian noticed that they finally changed the license
from unspecified to BSD (this was not true last year).

Propack, while claiming higher accuracy (by avoiding computing A A'),
was one order of magnitude faster than Arpack for some matrices I have
tried.

cheers,
David

From denis-bz-gg at t-online.de Thu Dec 13 13:52:04 2012
From: denis-bz-gg at t-online.de (denis)
Date: Thu, 13 Dec 2012 18:52:04 +0000 (UTC)
Subject: [SciPy-Dev] suggest a class Stop to unify fmin* convergence and output
Message-ID:

Optimizer folks,
would a class Stop help unify the different convergence criteria in
fmin* ? Today (12.1) some have maxiter ftol and xtol, others have ...
https://github.com/denis-bz/opt/tree/master/util/stopmod.py
is a 1-page prototype, doc below.

Stop() is also an easy hook for uniform verbose= output to help people
track what an optimizer is doing, e.g.

... iter 16 cost 121 f 4.64997 df .00288 x -.507 .266 ...
... iter 17 cost 128 f 4.64855 df .00142 x -.507 .268 ...
... Stop Rosenbrock-dim5 ftol met: iter 18 cost 135 f 4.64783 df .00072 x -.506 .269 ...

https://github.com/denis-bz/opt/tree/master/scopt/bfgs
has an fmin_bfgs using Stop, and example logs on Rosenbrock 5dim and
10dim. (Sorry if the git stuff is mangled -- git noob.)

cheers
-- denis

class Stop:
    """ Stop( maxiter, ftol, verbose ... ): a class for convergence tests

    Example
    -------
    stop = Stop( maxiter= ftol= xtol= verbose= ... )
    while 1:
        ... optimization loop ...
        if stop( iter, f=f, x=x ):  # compare, print
            break
        ...
        ...

    Parameters
    ----------
    maxiter : stop if iter >= maxiter
    ftol : stop if fprev - ftol <= f <= fprev -- for minimizing, f decreasing
    xtol : stop if average |x - xprev| <= xtol
    maxcost, func : stop if func.cost >= maxcost.  E.g.
        def func():
            func.cost += 1
            ...
        func.cost = 0  # init outside
    verbose : print the stop reason when done, like
        Stop ftol met: iter 18 cost 135 f 4.64783 x ...
        default verbose=2: print iter f x on each iteration

From ralf.gommers at gmail.com Thu Dec 13 15:24:24 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 13 Dec 2012 21:24:24 +0100
Subject: [SciPy-Dev] dropping Python 2.4 and 2.5 support
Message-ID:

Hi all,

On the NumPy list a proposal was made to drop support for Python 2.4 and
2.5 for NumPy 1.8. For SciPy it would be good to do the same. Benefits:
less testing, the with keyword, Cython memoryviews, etc.

I propose to just follow the decision made for NumPy for the next SciPy
release.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Thu Dec 13 15:30:57 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 13 Dec 2012 22:30:57 +0200
Subject: [SciPy-Dev] dropping Python 2.4 and 2.5 support
In-Reply-To:
References:
Message-ID:

13.12.2012 22:24, Ralf Gommers kirjoitti:
> On the NumPy list a proposal was made to drop support for Python 2.4 and
> 2.5 for NumPy 1.8. For SciPy it would be good to do the same. Benefits:
> less testing, the with keyword, Cython memoryviews, etc.
>
> I propose to just follow the decision made for NumPy for the next SciPy
> release.

Sounds like a good idea to me, 2.4 is sort of a PITA, and 2.5 makes some
things more difficult than necessary.

Pauli

From warren.weckesser at gmail.com Thu Dec 13 15:36:16 2012
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Thu, 13 Dec 2012 14:36:16 -0600
Subject: [SciPy-Dev] dropping Python 2.4 and 2.5 support
In-Reply-To:
References:
Message-ID:

On Thu, Dec 13, 2012 at 2:24 PM, Ralf Gommers wrote:

> Hi all,
>
> On the NumPy list a proposal was made to drop support for Python 2.4 and
> 2.5 for NumPy 1.8. For SciPy it would be good to do the same. Benefits:
> less testing, the with keyword, Cython memoryviews, etc.
> > I propose to just follow the decision made for NumPy for the next SciPy > release. > > +1 Warren > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Thu Dec 13 16:52:06 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 13 Dec 2012 22:52:06 +0100 Subject: [SciPy-Dev] dropping Python 2.4 and 2.5 support In-Reply-To: References: Message-ID: As I said on the NumPy list it will also allow us to use Cython's memoryview syntax instead of the ndarray syntax. It is much better for anything but trivial algorithms. Sturla Sendt fra min iPad Den 13. des. 2012 kl. 21:24 skrev Ralf Gommers : > Hi all, > > On the NumPy list a proposal was made to drop support for Python 2.4 and 2.5 for NumPy 1.8. For SciPy it would be good to do the same. Benefits: less testing, the with keyword, Cython memoryviews, etc. > > I propose to just follow the decision made for NumPy for the next SciPy release. > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From pav at iki.fi Fri Dec 14 04:37:58 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Dec 2012 09:37:58 +0000 (UTC) Subject: [SciPy-Dev] 64-bit sparse matrix indices Message-ID: Hi, I've been looking a bit at making sparse matrices work with 64-bit indices: https://github.com/pv/scipy-work/commits/ticket/1307 The motivation is that 32-bit indices on 64-bit machines don't allow representing sparse matrices with large nnz. One option A (currently there) is to allow both int32 and int64 as indices, and use the larger one only when required by nnz. The second option B would be to just use intp for everything. 
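For concreteness, the index arrays under discussion are the `indices` and `indptr` members of the compressed sparse formats, which can be inspected directly. The sketch below shows today's behaviour (int32 is what current builds use for small matrices; under option A the dtype would grow to int64 only when the int32 range is exceeded, while option B would use intp throughout):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.eye(4))

# the column-index and row-pointer arrays whose width is being debated
print(A.indices.dtype, A.indptr.dtype)

# int32 can address at most 2**31 - 1 stored entries -- the nnz
# ceiling that motivates supporting 64-bit indices
assert A.nnz < np.iinfo(np.int32).max
```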
The problem with A is that I'm far from certain that I found all the
corner cases yet, and I'm fairly certain there are some undiscovered
bugs still somewhere. The test suite doesn't yet have the level of
coverage on this issue I'd be comfortable with.

The problem with B is that on 64-bit systems, it increases the memory
needs of sparse matrices by about 50%. However, as a solution it's more
robust and elegant.

Opinions on how it should work?

Pauli

From njs at pobox.com Fri Dec 14 10:54:52 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 14 Dec 2012 15:54:52 +0000
Subject: [SciPy-Dev] 64-bit sparse matrix indices
In-Reply-To:
References:
Message-ID:

On Fri, Dec 14, 2012 at 9:37 AM, Pauli Virtanen wrote:
> Hi,
>
> I've been looking a bit at making sparse matrices work with 64-bit
> indices:
>
> https://github.com/pv/scipy-work/commits/ticket/1307
>
> The motivation is that 32-bit indices on 64-bit machines don't allow
> representing sparse matrices with large nnz.
>
> One option A (currently there) is to allow both int32 and int64 as
> indices, and use the larger one only when required by nnz.
>
> The second option B would be to just use intp for everything.
It looks like I was paranoid enough that switching to option B would just require changing ~4 lines of code, and that if you somehow passed 64-bit indices to the current version then it will downcast and keep going (not sure if this is better than crashing or not!). But there may well be other code out there that passes scipy.sparse matrices to C/Fortran, and if indices suddenly become 64-bit, then that code may start simply returning nonsense... I'd be concerned, anyway. I guess this is a problem with option A as well, but at least existing code working on matrices that currently work, would keep working. OTOH option A also means that any future C/Fortran code has to be prepared to handle both cases. Not really a big deal when working in Cython, but I hear that some people still use other tools... Do all the sparse matrix kernels we care about even handle 64-bit indices? CHOLMOD does, but it takes special setup, and I don't know if all kernel authors are so careful. -n From scopatz at gmail.com Fri Dec 14 10:56:34 2012 From: scopatz at gmail.com (Anthony Scopatz) Date: Fri, 14 Dec 2012 09:56:34 -0600 Subject: [SciPy-Dev] 64-bit sparse matrix indices In-Reply-To: References: Message-ID: It may be less elegant to write, but I am sort of a fan of option A. On Fri, Dec 14, 2012 at 9:54 AM, Nathaniel Smith wrote: > On Fri, Dec 14, 2012 at 9:37 AM, Pauli Virtanen wrote: > > Hi, > > > > I've been looking a bit at making sparse matrices work with 64-bit > > indices: > > > > https://github.com/pv/scipy-work/commits/ticket/1307 > > > > The motivation is that 32-bit indices on 64-bit machines don't allow > > representing sparse matrices with large nnz. > > > > One option A (currently there) is to allow both int32 and int64 as > > indices, and use the larger one only when required by nnz. > > > > The second option B would be to just use intp for everything. 
> > > > The problem with A is that I'm far from certain that I found all the > > corner cases yet, and I'm fairly certain there are some undiscovered > > bugs still somewhere. The test suite doesn't yet have the level of > > coverage on this issue I'd be comfortable with. > > > > The problem with B is that on 64-bit systems, it it increases the > > memory needs of sparse matrices by about 50%. However, as a solution > > it's more robust and elegant. > > One problem with B is if there is code out there which "knows" that > sparse matrices use 32-bit indices. E.g. I can adapt > scikits.sparse.cholmod to handle 64-bit indices, but it will require > code changes, because you have to use different flags when calling the > underlying routines and so far there was no point in it. It looks like > I was paranoid enough that switching to option B would just require > changing ~4 lines of code, and that if you somehow passed 64-bit > indices to the current version then it will downcast and keep going > (not sure if this is better than crashing or not!). But there may well > be other code out there that passes scipy.sparse matrices to > C/Fortran, and if indices suddenly become 64-bit, then that code may > start simply returning nonsense... I'd be concerned, anyway. > > I guess this is a problem with option A as well, but at least existing > code working on matrices that currently work, would keep working. OTOH > option A also means that any future C/Fortran code has to be prepared > to handle both cases. Not really a big deal when working in Cython, > but I hear that some people still use other tools... > > Do all the sparse matrix kernels we care about even handle 64-bit > indices? CHOLMOD does, but it takes special setup, and I don't know if > all kernel authors are so careful. 
> > -n > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Dec 14 11:08:50 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 14 Dec 2012 16:08:50 +0000 Subject: [SciPy-Dev] 64-bit sparse matrix indices In-Reply-To: References: Message-ID: On Fri, Dec 14, 2012 at 3:54 PM, Nathaniel Smith wrote: > I guess this is a problem with option A as well, but at least existing > code working on matrices that currently work, would keep working. OTOH > option A also means that any future C/Fortran code has to be prepared > to handle both cases. Not really a big deal when working in Cython, > but I hear that some people still use other tools... Actually on re-reading your mail, I guess the options you're suggesting are: A: both 32- and 64-bit indices are possible, which is used depends on nnz B: both 32- and 64-bit indices are possible, which is used depends on python's architecture ? So I withdraw the above comment -- both options require some sort of annoying type parametrization, it isn't really a disadvantage of option A. -n (Also surely it's only a 33% memory overhead compared to now? But still I have the feeling people really do work with sparse matrices right up to the limit of their available memory.) From pav at iki.fi Fri Dec 14 11:37:18 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Dec 2012 16:37:18 +0000 (UTC) Subject: [SciPy-Dev] 64-bit sparse matrix indices References: Message-ID: Nathaniel Smith pobox.com> writes: [clip] > Actually on re-reading your mail, I guess the options you're suggesting are: > > A: both 32- and 64-bit indices are possible, which is used depends on nnz > B: both 32- and 64-bit indices are possible, which is used depends on > python's architecture > > ? Precisely. 
> So I withdraw the above comment -- both options require some sort of
> annoying type parametrization, it isn't really a disadvantage of
> option A.

Yep, except that in B the "type parameterization" just means using the
intp data type and can be done with a single typedef. But I see your
point, B completely breaks backward compatibility, whereas A does not.

> (Also surely it's only a 33% memory overhead compared to now? But
> still I have the feeling people really do work with sparse matrices
> right up to the limit of their available memory.)

Yeah, this might be the case. I guess I'll try to make A work then.
This is mostly just a matter of ensuring the test suite coverage is
good enough.

Pauli

From ralf.gommers at gmail.com Fri Dec 14 16:01:41 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 14 Dec 2012 22:01:41 +0100
Subject: [SciPy-Dev] MKL licenses for core scientific Python projects
Message-ID:

Hi all,

Intel has offered to provide free MKL licenses for main contributors to
scientific Python projects - at least those listed at
numfocus.org/projects/. Licenses for all OSes that are required can be
provided, the condition is that they're used for building/testing our
projects and not for broader purposes.

If you're interested, please let me know your full name and what OS you
need a license for.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no Tue Dec 18 13:41:59 2012
From: sturla at molden.no (Sturla Molden)
Date: Tue, 18 Dec 2012 19:41:59 +0100
Subject: [SciPy-Dev] Fwd: [Numpy-discussion] Support for python 2.4 dropped. Should we drop 2.5 also?
References:
Message-ID:

Does this apply for SciPy as well?

Sturla

Sent from my iPad

Forwarded message:

> From: Charles R Harris
> Date: 16 December 2012 18:28:34 CET
> To: numpy-discussion
> Subject: Re: [Numpy-discussion] Support for python 2.4 dropped. Should we drop 2.5 also?
> Svar til: Discussion of Numerical Python > > > > On Thu, Dec 13, 2012 at 10:38 AM, Charles R Harris wrote: >> The previous proposal to drop python 2.4 support garnered no opposition. How about dropping support for python 2.5 also? > > The proposal to drop support for python 2.5 and 2.4 in numpy 1.8 has carried. It is now a todo issue on github. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Dec 18 13:47:37 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 18 Dec 2012 19:47:37 +0100 Subject: [SciPy-Dev] Fwd: [Numpy-discussion] Support for python 2.4 dropped. Should we drop 2.5 also? In-Reply-To: References: Message-ID: On Tue, Dec 18, 2012 at 7:41 PM, Sturla Molden wrote: > Does this apply for SciPy as well? > I haven't seen any objections to this proposal ( http://thread.gmane.org/gmane.comp.python.scientific.devel/17114/focus=17117) and several +1's, so yes. SciPy 0.12.0 will support Python versions 2.6 - 3.3. Ralf > Sturla > > Sendt fra min iPad > > Videresendt melding: > > *Fra:* Charles R Harris > *Dato:* 16. desember 2012 18:28:34 CET > *Til:* numpy-discussion > *Emne:* *Re: [Numpy-discussion] Support for python 2.4 dropped. Should we > drop 2.5 also?* > *Svar til:* Discussion of Numerical Python > > > > On Thu, Dec 13, 2012 at 10:38 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> The previous proposal to drop python 2.4 support garnered no opposition. >> How about dropping support for python 2.5 also? >> >> > The proposal to drop support for python 2.5 and 2.4 in numpy 1.8 has > carried. It is now a todo issue on github > . 
> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Thu Dec 20 18:36:03 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 20 Dec 2012 18:36:03 -0500 Subject: [SciPy-Dev] scipy test errors Message-ID: I am unable to log in to my trac account anymore for some reason to comment on this ticket http://projects.scipy.org/scipy/ticket/1796 But I installed master numpy/scipy yesterday on a new system, and I'm also experiencing a large number of errors. Not all the same as listed on the ticket. FAILED (KNOWNFAIL=17, SKIP=37, errors=48) Test output attached. Skipper -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_log.zip Type: application/zip Size: 58571 bytes Desc: not available URL: From pav at iki.fi Fri Dec 21 04:39:59 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Dec 2012 09:39:59 +0000 (UTC) Subject: [SciPy-Dev] scipy test errors References: Message-ID: Skipper Seabold gmail.com> writes: [clip] > http://projects.scipy.org/scipy/ticket/1796 > > But I installed master numpy/scipy yesterday on a new system, and I'm > also experience a large number of errors. Not all the same as listed > on the ticket. > > FAILED (KNOWNFAIL=17, SKIP=37, errors=48) > > Test output attached. I think all these are due to changes made in Numpy master; with an older Numpy version (e.g. 1.6.2) all of them should go away. The failures listed in that ticket are related to some as-yet unresolved Accelerate/Veclib problems on OSX 10.8 --- I think they are not present on OSX 10.6. 
I'm planning to take a look at this eventually, now that I have access to such a machine... -- Pauli Virtanen From paul.anton.letnes at gmail.com Fri Dec 21 10:16:40 2012 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Fri, 21 Dec 2012 16:16:40 +0100 Subject: [SciPy-Dev] default scipy fft performance Message-ID: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Dear Scipy devs, 1) thanks for doing a great job in general! 2) Why is there a huge performance difference between scipy.fft and scipy.fftpack.fft? It apperars that scipy.fft == numpy.fft: In [4]: import numpy as np In [5]: from scipy import fft In [6]: from scipy import fftpack In [7]: d = np.linspace(0, 1e3, 1e7) In [8]: %timeit np.fft.fft(d) 1 loops, best of 3: 1.29 s per loop In [9]: %timeit fft(d) 1 loops, best of 3: 1.3 s per loop In [10]: %timeit fftpack.fft(d) 1 loops, best of 3: 651 ms per loop 3) On a related note - what's the best performing python fft library/wrapper out there? I take it from some google research I did that e.g. fftw cannot be used due to the GPL licence. Cheers, Paul From jsseabold at gmail.com Fri Dec 21 10:19:50 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 21 Dec 2012 10:19:50 -0500 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: On Fri, Dec 21, 2012 at 4:39 AM, Pauli Virtanen wrote: > Skipper Seabold gmail.com> writes: > [clip] >> http://projects.scipy.org/scipy/ticket/1796 >> >> But I installed master numpy/scipy yesterday on a new system, and I'm >> also experience a large number of errors. Not all the same as listed >> on the ticket. >> >> FAILED (KNOWNFAIL=17, SKIP=37, errors=48) >> >> Test output attached. > > I think all these are due to changes made in Numpy master, > with an older Numpy version (e.g. 1.6.2) all of them should go away. > That's what I figured. I submitted a PR for the indexing with a float. Not sure why the deprecation warnings raised errors though. Don't recall seeing this before. 
Maybe some configuration that I've overlooked? I was curious if the problems coming from as_strided were a numpy problem. Similarly, I get these errors with numpy 1.7.0b2 ====================================================================== ERROR: test_interpolate.TestInterp1D.test_bounds ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 257, in test_bounds self._bounds_check(kind) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 224, in _bounds_check extrap10(11.2), File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 858, in spleval res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: test_interpolate.TestInterp1D.test_complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 349, in test_complex self._check_complex(np.complex64, kind) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 337, in _check_complex assert_array_almost_equal(y[:-1], c(x)[:-1]) File 
"/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 855, in spleval res[sl].real = _fitpack._bspleval(xx,xj,cvals.real[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: Check the actual implementation of spline interpolation. ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 177, in test_cubic interp10(self.x10), File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 858, in spleval res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: test_interpolate.TestInterp1D.test_nd ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 327, in test_nd self._nd_check_interp(kind) File 
"/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", line 267, in _nd_check_interp interp10(np.array([[3., 5.], [2., 7.]])), File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 858, in spleval res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: test_ndgriddata.TestGriddata.test_1d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_ndgriddata.py", line 73, in test_1d assert_allclose(griddata(x, y, x, method=method), y, File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata return ip(xi) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 858, in spleval res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: test_ndgriddata.TestGriddata.test_1d_unsorted ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_ndgriddata.py", line 85, in test_1d_unsorted assert_allclose(griddata(x, y, x, method=method), y, File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata return ip(xi) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 858, in spleval res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) IndexError: too many indices ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", line 159, in test_vectorized_query d, i = self.kdtree.query(np.zeros((2,4,3))) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 320, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are 
not allowed ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query_all_neighbors ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", line 187, in test_vectorized_query_all_neighbors d, i = self.kdtree.query(np.zeros((2,4,3)),k=None,distance_upper_bound=1.1) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 320, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", line 175, in test_vectorized_query_multiple_neighbors d, i = self.kdtree.query(np.zeros((2,4,3)),k=kk) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = 
as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 320, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", line 332, in test_random_ball_vectorized r = T.query_ball_point(np.random.randn(2,3,m),1) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", line 544, in query_ball_point for c in np.ndindex(retshape): File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 320, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized_compiled ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", line 342, 
in test_random_ball_vectorized_compiled r = T.query_ball_point(np.random.randn(2,3,m),1) File "ckdtree.pyx", line 1399, in scipy.spatial.ckdtree.cKDTree.query_ball_point (scipy/spatial/ckdtree.c:11875) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 320, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ---------------------------------------------------------------------- Ran 5744 tests in 59.973s FAILED (KNOWNFAIL=15, SKIP=36, errors=11) [2]: From cournape at gmail.com Fri Dec 21 10:53:20 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 21 Dec 2012 15:53:20 +0000 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Message-ID: On Fri, Dec 21, 2012 at 3:16 PM, Paul Anton Letnes wrote: > Dear Scipy devs, > > 1) thanks for doing a great job in general! > 2) Why is there a huge performance difference between scipy.fft and scipy.fftpack.fft? It apperars that scipy.fft == numpy.fft: > > In [4]: import numpy as np > > In [5]: from scipy import fft > > In [6]: from scipy import fftpack > > In [7]: d = np.linspace(0, 1e3, 1e7) > > In [8]: %timeit np.fft.fft(d) > 1 loops, best of 3: 1.29 s per loop > > In [9]: %timeit fft(d) > 1 loops, best of 3: 1.3 s per loop > > In [10]: %timeit fftpack.fft(d) > 1 loops, best of 3: 651 ms per loop > > 3) On a related note - what's the best performing python fft library/wrapper out there? I take it from some google research I did that e.g. fftw cannot be used due to the GPL licence. But you could use e.g. 
pyfftw that will give you a better wrapper than scipy ever had for FFTW. The difference in speed is not unexpected: the fft in numpy is there for historical reasons and backward compatibility. Unless you have a very good reason not to use it, you should be using scipy.fftpack instead of numpy.fft when you can depend on scipy. regards, David From sturla at molden.no Fri Dec 21 11:14:16 2012 From: sturla at molden.no (Sturla Molden) Date: Fri, 21 Dec 2012 17:14:16 +0100 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Message-ID: <0972D8E1-B2AC-4474-ACF8-4B2FE78B049C@molden.no> On 21 Dec 2012, at 16:16, Paul Anton Letnes wrote: > Dear Scipy devs, > > 1) thanks for doing a great job in general! > 2) Why is there a huge performance difference between scipy.fft and scipy.fftpack.fft? It apperars that scipy.fft == numpy.fft: > > In [4]: import numpy as np > > In [5]: from scipy import fft > > In [6]: from scipy import fftpack > > In [7]: d = np.linspace(0, 1e3, 1e7) > > In [8]: %timeit np.fft.fft(d) > 1 loops, best of 3: 1.29 s per loop > > In [9]: %timeit fft(d) > 1 loops, best of 3: 1.3 s per loop > > In [10]: %timeit fftpack.fft(d) > 1 loops, best of 3: 651 ms per loop They are not the same. SciPy fftpack is Fortran FFTPACK. NumPy uses the C-coded fftpack-lite. The wrappers are also different, and SciPy can modify arrays in place. > > 3) On a related note - what's the best performing python fft library/wrapper out there? I take it from some google research I did that e.g. fftw cannot be used due to the GPL licence. > > If you use EPD, numpy.fft will use MKL. That will be about as fast as it gets. https://mail.enthought.com/pipermail/enthought-dev/2011-June/029299.html FFTW is GPL, but you can use it if GPL is ok. It usually is: If you can use Linux, you can use FFTW as well. ACML is also an option. 
It is not as fast as MKL or FFTW, but faster than FFTPACK. You can also call MKL, ACML or FFTW directly using Cython, f2py or ctypes. There is also a TVirtualFFT in ROOT, that uses FFTW. ROOT has a Python interface as well. Sturla -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Fri Dec 21 11:26:07 2012 From: sturla at molden.no (Sturla Molden) Date: Fri, 21 Dec 2012 17:26:07 +0100 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Message-ID: <7434AA51-A7F5-4A23-A386-CE01B457989A@molden.no> Den 21. des. 2012 kl. 16:53 skrev David Cournapeau : > > But you could use e.g. pyfftw that will give you a better wrapper that > scipy ever had for FFTW. > > The difference in speed is not unexpected: the fft in numpy is there > for historical reasons and backward compatibility. Unless you have a > very good reason not to use it, you should be using scipy.fftpack > instead of numpy.fft when you can depend on scipy. > Enthought is a good reason, as numpy.fft will use MKL. Another good reason is that the wrapper for numpy.fft is written in a way that makes it easy to release the GIL (no global states in C land), whereas scipy.fftpack is not reentrant. Sometimes it can be nice to do multiple FFTs in parallel threads. (I might try to re-post a PR to NumPy for releasing the GIL in numpy.fft again, unless it is taken care of now, but last time nobody cared to review it.) Sturla From njs at pobox.com Fri Dec 21 11:29:48 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Dec 2012 16:29:48 +0000 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: <0972D8E1-B2AC-4474-ACF8-4B2FE78B049C@molden.no> References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> <0972D8E1-B2AC-4474-ACF8-4B2FE78B049C@molden.no> Message-ID: On Fri, Dec 21, 2012 at 4:14 PM, Sturla Molden wrote: > FFTW is GPL, but you can use it if GPL is ok. 
It usually is: If you can use > Linux, you can use FFTW as well. This is not true at all. The GPL on Linux does not impose any requirements at all on software that runs on Linux, mostly because they have an explicit exception on top of the GPL saying this. Using FFTW means that if you distribute your software, you almost certainly have to do it under the GPL or similar. This may or may not be a problem (you might not distribute, you might be fine with the GPL, etc.), but people should at least understand how licenses work. -n From sturla at molden.no Fri Dec 21 11:54:45 2012 From: sturla at molden.no (Sturla Molden) Date: Fri, 21 Dec 2012 17:54:45 +0100 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> <0972D8E1-B2AC-4474-ACF8-4B2FE78B049C@molden.no> Message-ID: Den 21. des. 2012 kl. 17:29 skrev Nathaniel Smith : > > This is not true at all. The GPL on Linux does not impose any > requirements at all on software that runs on Linux, mostly because > they have an explicit exception on top of the GPL saying this. Using > FFTW means that if you distribute your software, you almost certainly > have to do it under the GPL or similar. This may or may not be a > problem (you might not distribute, you might be fine with the GPL, > etc.), but people should at least understand how licenses work. > If you distribute commercial software, you can get a commercial FFTW license. (MATLAB is one example. At least it used to include a non-GPL FFTW, it might use MKL now.) Sturla From paul.anton.letnes at gmail.com Fri Dec 21 12:30:54 2012 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Fri, 21 Dec 2012 18:30:54 +0100 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Message-ID: On 21. des. 
2012, at 16:53, David Cournapeau wrote: > On Fri, Dec 21, 2012 at 3:16 PM, Paul Anton Letnes > wrote: >> Dear Scipy devs, >> >> 1) thanks for doing a great job in general! >> 2) Why is there a huge performance difference between scipy.fft and scipy.fftpack.fft? It apperars that scipy.fft == numpy.fft: >> >> In [4]: import numpy as np >> >> In [5]: from scipy import fft >> >> In [6]: from scipy import fftpack >> >> In [7]: d = np.linspace(0, 1e3, 1e7) >> >> In [8]: %timeit np.fft.fft(d) >> 1 loops, best of 3: 1.29 s per loop >> >> In [9]: %timeit fft(d) >> 1 loops, best of 3: 1.3 s per loop >> >> In [10]: %timeit fftpack.fft(d) >> 1 loops, best of 3: 651 ms per loop >> >> 3) On a related note - what's the best performing python fft library/wrapper out there? I take it from some google research I did that e.g. fftw cannot be used due to the GPL licence. > > But you could use e.g. pyfftw that will give you a better wrapper that > scipy ever had for FFTW. > > The difference in speed is not unexpected: the fft in numpy is there > for historical reasons and backward compatibility. Unless you have a > very good reason not to use it, you should be using scipy.fftpack > instead of numpy.fft when you can depend on scipy. I understand that this must be something like a backwards compatibility thing. But my point was (although I was perhaps not all that clear): Why does scipy.fft link to numpy.fft, and not scipy.fftpack.fft? The interface seems similar enough: fft(a[, n, axis]) fft(x[, n, axis, overwrite_x]) The extra overwrite_x in fftpack shouldn't be much of an issue. Cheers Paul From ralf.gommers at gmail.com Fri Dec 21 14:46:31 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Dec 2012 20:46:31 +0100 Subject: [SciPy-Dev] default scipy fft performance In-Reply-To: References: <95F43353-AE9C-40E7-9D9E-1C4C512C7361@gmail.com> Message-ID: On Fri, Dec 21, 2012 at 6:30 PM, Paul Anton Letnes < paul.anton.letnes at gmail.com> wrote: > > On 21. des. 
2012, at 16:53, David Cournapeau wrote: > > > On Fri, Dec 21, 2012 at 3:16 PM, Paul Anton Letnes > > wrote: > >> Dear Scipy devs, > >> > >> 1) thanks for doing a great job in general! > >> 2) Why is there a huge performance difference between scipy.fft and > scipy.fftpack.fft? It apperars that scipy.fft == numpy.fft: > >> > >> In [4]: import numpy as np > >> > >> In [5]: from scipy import fft > >> > >> In [6]: from scipy import fftpack > >> > >> In [7]: d = np.linspace(0, 1e3, 1e7) > >> > >> In [8]: %timeit np.fft.fft(d) > >> 1 loops, best of 3: 1.29 s per loop > >> > >> In [9]: %timeit fft(d) > >> 1 loops, best of 3: 1.3 s per loop > >> > >> In [10]: %timeit fftpack.fft(d) > >> 1 loops, best of 3: 651 ms per loop > >> > >> 3) On a related note - what's the best performing python fft > library/wrapper out there? I take it from some google research I did that > e.g. fftw cannot be used due to the GPL licence. > > > > But you could use e.g. pyfftw that will give you a better wrapper that > > scipy ever had for FFTW. > > > > The difference in speed is not unexpected: the fft in numpy is there > > for historical reasons and backward compatibility. Unless you have a > > very good reason not to use it, you should be using scipy.fftpack > > instead of numpy.fft when you can depend on scipy. > > I understand that this must be something like a backwards compatibility > thing. But my point was (although I was perhaps not all that clear): Why > does scipy.fft link to numpy.fft, That is the same function; in scipy/__init__.py there's "from numpy import *". This is again for backwards compatibility only. You should not use anything from the main scipy namespace (except for running tests with scipy.test()). Ralf > and not scipy.fftpack.fft? The interface seems similar enough: > fft(a[, n, axis]) > fft(x[, n, axis, overwrite_x]) > The extra overwrite_x in fftpack shouldn't be much of an issue. 
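[Both numpy's fftpack-lite and scipy's Fortran FFTPACK are O(n log n) codes, so the roughly 2x gap in the timings quoted in this thread comes from constant factors (Fortran vs. C code generation, wrapper overhead, in-place operation) rather than from algorithmic complexity. For a sense of how much larger the gap becomes when the algorithm itself differs, here is a self-contained pure-Python comparison of a naive O(n^2) DFT with a radix-2 Cooley-Tukey FFT -- an illustrative sketch only, not scipy's or numpy's actual code:

```python
import cmath
import timeit

def dft(x):
    """Naive O(n^2) DFT: evaluate every output bin by direct summation."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def fft_rec(x):
    """Radix-2 Cooley-Tukey FFT, O(n log n); len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_rec(x[0::2])           # FFT of even-indexed samples
    odd = fft_rec(x[1::2])            # FFT of odd-indexed samples
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

x = [float(i % 7) for i in range(256)]  # arbitrary real test signal

# Both algorithms must agree to within roundoff.
assert all(abs(a - b) < 1e-6 for a, b in zip(dft(x), fft_rec(x)))

print("naive DFT:  ", timeit.timeit(lambda: dft(x), number=3))
print("radix-2 FFT:", timeit.timeit(lambda: fft_rec(x), number=3))
```

At this size the radix-2 version already wins by a wide margin, and the margin grows with n; the numpy-vs-scipy difference discussed above is the much smaller effect of two different O(n log n) implementations of the same transform.]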
> > Cheers > Paul > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Dec 21 14:51:06 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Dec 2012 20:51:06 +0100 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: On Fri, Dec 21, 2012 at 4:19 PM, Skipper Seabold wrote: > On Fri, Dec 21, 2012 at 4:39 AM, Pauli Virtanen wrote: > > Skipper Seabold gmail.com> writes: > > [clip] > >> http://projects.scipy.org/scipy/ticket/1796 > >> > >> But I installed master numpy/scipy yesterday on a new system, and I'm > >> also experience a large number of errors. Not all the same as listed > >> on the ticket. > >> > >> FAILED (KNOWNFAIL=17, SKIP=37, errors=48) > >> > >> Test output attached. > > > > I think all these are due to changes made in Numpy master, > > with an older Numpy version (e.g. 1.6.2) all of them should go away. > > > > That's what I figured. I submitted a PR for the indexing with a float. > Not sure why the deprecation warnings raised errors though. Don't > recall seeing this before. Maybe some configuration that I've > overlooked? > > I was curious if the problems coming from as_strided were a numpy > problem. Similarly, I get these errors with numpy 1.7.0b2 > These should be fixed with the latest numpy 1.7.x branch. If not, please open a new numpy issue. 
Ralf > > ====================================================================== > ERROR: test_interpolate.TestInterp1D.test_bounds > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 257, in test_bounds > self._bounds_check(kind) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 224, in _bounds_check > extrap10(11.2), > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 858, in spleval > res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: test_interpolate.TestInterp1D.test_complex > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 349, in test_complex > self._check_complex(np.complex64, kind) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 337, in _check_complex > assert_array_almost_equal(y[:-1], c(x)[:-1]) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File 
> "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 855, in spleval > res[sl].real = _fitpack._bspleval(xx,xj,cvals.real[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: Check the actual implementation of spline interpolation. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 177, in test_cubic > interp10(self.x10), > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 858, in spleval > res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: test_interpolate.TestInterp1D.test_nd > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 327, in test_nd > self._nd_check_interp(kind) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_interpolate.py", > line 267, in 
_nd_check_interp > interp10(np.array([[3., 5.], [2., 7.]])), > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 858, in spleval > res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: test_ndgriddata.TestGriddata.test_1d > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_ndgriddata.py", > line 73, in test_1d > assert_allclose(griddata(x, y, x, method=method), y, > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/ndgriddata.py", > line 178, in griddata > return ip(xi) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 858, in spleval > res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: test_ndgriddata.TestGriddata.test_1d_unsorted > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > 
"/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/tests/test_ndgriddata.py", > line 85, in test_1d_unsorted > assert_allclose(griddata(x, y, x, method=method), y, > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/ndgriddata.py", > line 178, in griddata > return ip(xi) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 419, in __call__ > y_new = self._call(x_new) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 395, in _call_spline > result = spleval(self._spline,x_new.ravel()) > File > "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", > line 858, in spleval > res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv) > IndexError: too many indices > > ====================================================================== > ERROR: test_kdtree.test_vectorization.test_vectorized_query > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", > line 159, in test_vectorized_query > d, i = self.kdtree.query(np.zeros((2,4,3))) > File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", > line 434, in query > for c in np.ndindex(retshape): > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", > line 536, in __init__ > x = as_strided(_nx.zeros(1), shape=shape, > strides=_nx.zeros_like(shape)) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", > line 28, in as_strided > return np.asarray(DummyArray(interface, base=x)) > File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", > line 320, 
in asarray > return array(a, dtype, copy=False, order=order) > ValueError: negative dimensions are not allowed > > ====================================================================== > ERROR: test_kdtree.test_vectorization.test_vectorized_query_all_neighbors > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", > line 187, in test_vectorized_query_all_neighbors > d, i = > self.kdtree.query(np.zeros((2,4,3)),k=None,distance_upper_bound=1.1) > File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", > line 434, in query > for c in np.ndindex(retshape): > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", > line 536, in __init__ > x = as_strided(_nx.zeros(1), shape=shape, > strides=_nx.zeros_like(shape)) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", > line 28, in as_strided > return np.asarray(DummyArray(interface, base=x)) > File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", > line 320, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: negative dimensions are not allowed > > ====================================================================== > ERROR: > test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", > line 175, in test_vectorized_query_multiple_neighbors > d, i = self.kdtree.query(np.zeros((2,4,3)),k=kk) > File 
"/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", > line 434, in query > for c in np.ndindex(retshape): > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", > line 536, in __init__ > x = as_strided(_nx.zeros(1), shape=shape, > strides=_nx.zeros_like(shape)) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", > line 28, in as_strided > return np.asarray(DummyArray(interface, base=x)) > File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", > line 320, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: negative dimensions are not allowed > > ====================================================================== > ERROR: test_kdtree.test_random_ball_vectorized > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", > line 332, in test_random_ball_vectorized > r = T.query_ball_point(np.random.randn(2,3,m),1) > File "/usr/local/lib/python2.7/dist-packages/scipy/spatial/kdtree.py", > line 544, in query_ball_point > for c in np.ndindex(retshape): > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", > line 536, in __init__ > x = as_strided(_nx.zeros(1), shape=shape, > strides=_nx.zeros_like(shape)) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", > line 28, in as_strided > return np.asarray(DummyArray(interface, base=x)) > File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", > line 320, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: negative dimensions are not allowed > > ====================================================================== > ERROR: test_kdtree.test_random_ball_vectorized_compiled > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/nose-1.2.1-py2.7.egg/nose/case.py", > line 197, in runTest > self.test(*self.arg) > File > "/usr/local/lib/python2.7/dist-packages/scipy/spatial/tests/test_kdtree.py", > line 342, in test_random_ball_vectorized_compiled > r = T.query_ball_point(np.random.randn(2,3,m),1) > File "ckdtree.pyx", line 1399, in > scipy.spatial.ckdtree.cKDTree.query_ball_point > (scipy/spatial/ckdtree.c:11875) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/index_tricks.py", > line 536, in __init__ > x = as_strided(_nx.zeros(1), shape=shape, > strides=_nx.zeros_like(shape)) > File "/usr/local/lib/python2.7/dist-packages/numpy/lib/stride_tricks.py", > line 28, in as_strided > return np.asarray(DummyArray(interface, base=x)) > File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", > line 320, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: negative dimensions are not allowed > > ---------------------------------------------------------------------- > Ran 5744 tests in 59.973s > > FAILED (KNOWNFAIL=15, SKIP=36, errors=11) > [2]: > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Dec 21 17:24:50 2012 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 22 Dec 2012 00:24:50 +0200 Subject: [SciPy-Dev] Cython on build-time (except on source tarballs) Message-ID: Hi, Here's a suggestion on how to get Cython as a build-time dependency, so that building from source tarballs (sdist) doesn't require it: https://github.com/pv/scipy-work/compare/cython-builddep I'm not completely sure this is the way to implement it, but I think getting rid of the committed-in C sources would be useful for the long run. 
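[Editorial sketch] The sdist-vs-git distinction Pauli describes comes down to a simple check at build time: ship the generated .c files in the tarball, and only invoke Cython when the .pyx source is present and newer. A minimal illustration of that decision logic (the helper name `needs_cython` is made up here; it is not the actual code in the linked branch):

```python
import os

def needs_cython(pyx_path, c_path):
    """Return True if the .c file must be (re)generated from the .pyx.

    In an sdist the pre-generated .c file ships alongside (or instead of)
    the .pyx and is at least as new, so Cython is not required; in a git
    checkout the .c file is absent or stale and Cython must run.
    """
    if not os.path.exists(c_path):
        return True   # git checkout: no generated C source yet
    if not os.path.exists(pyx_path):
        return False  # sdist case: only the generated C source is shipped
    # Regenerate when the Cython source has been edited more recently.
    return os.path.getmtime(pyx_path) > os.path.getmtime(c_path)
```

A build script would call this per extension module and either run `cython` or fall back to compiling the committed/shipped .c file.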
Requiring people who want to compile from Git to have Cython installed is perhaps not such a big deal, since Cython is fairly easy to install everywhere. We'll still need the committed-in generated .pyx files, though... -- Pauli Virtanen From jsseabold at gmail.com Fri Dec 21 20:09:06 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 21 Dec 2012 20:09:06 -0500 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: On Fri, Dec 21, 2012 at 2:51 PM, Ralf Gommers wrote: > > > > On Fri, Dec 21, 2012 at 4:19 PM, Skipper Seabold > wrote: >> >> On Fri, Dec 21, 2012 at 4:39 AM, Pauli Virtanen wrote: >> > Skipper Seabold gmail.com> writes: >> > [clip] >> >> http://projects.scipy.org/scipy/ticket/1796 >> >> >> >> But I installed master numpy/scipy yesterday on a new system, and I'm >> >> also experience a large number of errors. Not all the same as listed >> >> on the ticket. >> >> >> >> FAILED (KNOWNFAIL=17, SKIP=37, errors=48) >> >> >> >> Test output attached. >> > >> > I think all these are due to changes made in Numpy master, >> > with an older Numpy version (e.g. 1.6.2) all of them should go away. >> > >> >> That's what I figured. I submitted a PR for the indexing with a float. >> Not sure why the deprecation warnings raised errors though. Don't >> recall seeing this before. Maybe some configuration that I've >> overlooked? >> >> I was curious if the problems coming from as_strided were a numpy >> problem. Similarly, I get these errors with numpy 1.7.0b2 > > > These should be fixed with the latest numpy 1.7.x branch. If not, please > open a new numpy issue. Can confirm that they go away with this branch and see the discussion now. Is the idea that these will end up back in master? 
Skipper From ralf.gommers at gmail.com Sat Dec 22 04:22:44 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 22 Dec 2012 10:22:44 +0100 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: On Sat, Dec 22, 2012 at 2:09 AM, Skipper Seabold wrote: > On Fri, Dec 21, 2012 at 2:51 PM, Ralf Gommers > wrote: > > > > > > > > On Fri, Dec 21, 2012 at 4:19 PM, Skipper Seabold > > wrote: > >> > >> On Fri, Dec 21, 2012 at 4:39 AM, Pauli Virtanen wrote: > >> > Skipper Seabold gmail.com> writes: > >> > [clip] > >> >> http://projects.scipy.org/scipy/ticket/1796 > >> >> > >> >> But I installed master numpy/scipy yesterday on a new system, and I'm > >> >> also experience a large number of errors. Not all the same as listed > >> >> on the ticket. > >> >> > >> >> FAILED (KNOWNFAIL=17, SKIP=37, errors=48) > >> >> > >> >> Test output attached. > >> > > >> > I think all these are due to changes made in Numpy master, > >> > with an older Numpy version (e.g. 1.6.2) all of them should go away. > >> > > >> > >> That's what I figured. I submitted a PR for the indexing with a float. > >> Not sure why the deprecation warnings raised errors though. Don't > >> recall seeing this before. Maybe some configuration that I've > >> overlooked? > >> > >> I was curious if the problems coming from as_strided were a numpy > >> problem. Similarly, I get these errors with numpy 1.7.0b2 > > > > > > These should be fixed with the latest numpy 1.7.x branch. If not, please > > open a new numpy issue. > > Can confirm that they go away with this branch and see the discussion > now. Is the idea that these will end up back in master? It should be fixed in master, but I'm not exactly sure how or when. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nils106 at googlemail.com Sat Dec 22 09:10:51 2012 From: nils106 at googlemail.com (Nils Wagner) Date: Sat, 22 Dec 2012 14:10:51 +0000 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: >>> scipy.test() Running unit tests for scipy NumPy version 1.8.0.dev-3abd869 NumPy is installed in /home/nwagner/local/lib64/python2.7/site-packages/numpy SciPy version 0.12.0.dev-785491c SciPy is installed in /home/nwagner/local/lib64/python2.7/site-packages/scipy Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] nose version 1.1.2 ====================================================================== ERROR: test_interpolate.TestInterp1D.test_bounds ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 257, in test_bounds self._bounds_check(kind) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 224, in _bounds_check extrap10(11.2), File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File 
"/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: test_interpolate.TestInterp1D.test_complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 349, in test_complex self._check_complex(np.complex64, kind) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 337, in _check_complex assert_array_almost_equal(y[:-1], c(x)[:-1]) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: Check the actual implementation of spline interpolation. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 177, in test_cubic interp10(self.x10), File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: test_interpolate.TestInterp1D.test_nd ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 327, in test_nd self._nd_check_interp(kind) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 267, in _nd_check_interp interp10(np.array([[3., 5.], [2., 7.]])), File 
"/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: test_ndgriddata.TestGriddata.test_1d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_ndgriddata.py", line 73, in test_1d assert_allclose(griddata(x, y, x, method=method), y, File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata return ip(xi) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 
852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: test_ndgriddata.TestGriddata.test_1d_unsorted ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_ndgriddata.py", line 85, in test_1d_unsorted assert_allclose(griddata(x, y, x, method=method), y, File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata return ip(xi) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 419, in __call__ y_new = self._call(x_new) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 395, in _call_spline result = spleval(self._spline,x_new.ravel()) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 852, in spleval for index in np.ndindex(*sh): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, 
base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: __array_interface__ shape must be at least size 1 ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 159, in test_vectorized_query d, i = self.kdtree.query(np.zeros((2,4,3))) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) TypeError: an integer is required ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query_all_neighbors ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 187, in test_vectorized_query_all_neighbors d, i = self.kdtree.query(np.zeros((2,4,3)),k=None,distance_upper_bound=1.1) File 
"/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) TypeError: an integer is required ====================================================================== ERROR: test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 175, in test_vectorized_query_multiple_neighbors d, i = self.kdtree.query(np.zeros((2,4,3)),k=kk) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) TypeError: an integer is required ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 332, in test_random_ball_vectorized r = T.query_ball_point(np.random.randn(2,3,m),1) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 544, in query_ball_point for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) TypeError: an integer is required ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized_compiled ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 342, in test_random_ball_vectorized_compiled r = T.query_ball_point(np.random.randn(2,3,m),1) File "ckdtree.pyx", line 1399, in scipy.spatial.ckdtree.cKDTree.query_ball_point (scipy/spatial/ckdtree.c:11875) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return 
np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) TypeError: an integer is required ---------------------------------------------------------------------- Ran 5744 tests in 166.233s FAILED (KNOWNFAIL=15, SKIP=28, errors=11) On Sat, Dec 22, 2012 at 9:22 AM, Ralf Gommers wrote: > > > > On Sat, Dec 22, 2012 at 2:09 AM, Skipper Seabold wrote: > >> On Fri, Dec 21, 2012 at 2:51 PM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Fri, Dec 21, 2012 at 4:19 PM, Skipper Seabold >> > wrote: >> >> >> >> On Fri, Dec 21, 2012 at 4:39 AM, Pauli Virtanen wrote: >> >> > Skipper Seabold gmail.com> writes: >> >> > [clip] >> >> >> http://projects.scipy.org/scipy/ticket/1796 >> >> >> >> >> >> But I installed master numpy/scipy yesterday on a new system, and >> I'm >> >> >> also experience a large number of errors. Not all the same as listed >> >> >> on the ticket. >> >> >> >> >> >> FAILED (KNOWNFAIL=17, SKIP=37, errors=48) >> >> >> >> >> >> Test output attached. >> >> > >> >> > I think all these are due to changes made in Numpy master, >> >> > with an older Numpy version (e.g. 1.6.2) all of them should go away. >> >> > >> >> >> >> That's what I figured. I submitted a PR for the indexing with a float. >> >> Not sure why the deprecation warnings raised errors though. Don't >> >> recall seeing this before. Maybe some configuration that I've >> >> overlooked? >> >> >> >> I was curious if the problems coming from as_strided were a numpy >> >> problem. Similarly, I get these errors with numpy 1.7.0b2 >> > >> > >> > These should be fixed with the latest numpy 1.7.x branch. If not, please >> > open a new numpy issue. >> >> Can confirm that they go away with this branch and see the discussion >> now. Is the idea that these will end up back in master? > > > It should be fixed in master, but I'm not exactly sure how or when. 
> > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Sat Dec 22 09:33:49 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Sat, 22 Dec 2012 09:33:49 -0500 Subject: [SciPy-Dev] scipy test errors In-Reply-To: References: Message-ID: On Sat, Dec 22, 2012 at 9:10 AM, Nils Wagner wrote: >>>> scipy.test() > FAILED (KNOWNFAIL=15, SKIP=28, errors=11) > As Ralf mentioned, these errors shouldn't be present in the maintenance/1.7.x branch. Skipper From ralf.gommers at gmail.com Wed Dec 26 16:52:45 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 26 Dec 2012 22:52:45 +0100 Subject: [SciPy-Dev] Cython on build-time (except on source tarballs) In-Reply-To: References: Message-ID: On Fri, Dec 21, 2012 at 11:24 PM, Pauli Virtanen wrote: > Hi, > > Here's a suggestion on how to get Cython as a build-time dependency, so > that building from source tarballs (sdist) doesn't require it: > > https://github.com/pv/scipy-work/compare/cython-builddep > > I'm not completely sure this is the way to implement it, but I think > getting rid of the committed-in C sources would be useful for the long run. > > Requiring people who want to compile from Git to have Cython installed > is perhaps not such a big deal, since Cython is fairly easy to install > everywhere. > > We'll still need the committed-in generated .pyx files, though... > That looks pretty good to me. Here's a minor change, so that you don't need to install Cython for each Python version: https://github.com/rgommers/scipy/tree/cython-builddep Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From vretiel at gmail.com Fri Dec 28 15:22:01 2012
From: vretiel at gmail.com (Shuaib Osman)
Date: Fri, 28 Dec 2012 22:22:01 +0200
Subject: [SciPy-Dev] Would like to post a question to the scipy mailing lists
Message-ID:

Hi,

I basically need a way to make the scipy.spatial.Delaunay find_simplex
method warn me when it falls back on a brute-force search, as that takes
several seconds longer on large triangulations. I can brute-force search
for the points in question via CUDA, which is several times faster.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From a.loew at gmx.net Fri Dec 28 17:09:02 2012
From: a.loew at gmx.net (Alexander Löw)
Date: Fri, 28 Dec 2012 23:09:02 +0100
Subject: [SciPy-Dev] Installing multiple scipy versions for development
Message-ID: <50DE187E.1050101@gmx.net>

Hi,

I am a newbie to scipy development, so sorry if the following question
sounds stupid to some of you:

What is best practice for installing multiple scipy versions on my machine
for local development?

I run Python under Ubuntu and have the system Python 2.7 installed together
with scipy 0.10.1. I recently checked out the current scipy source code.
Now I want to be able to switch easily between the development version of
scipy and the stable code. How can I do this? I did not find anything in
the wiki. I guess I would need to change some PATH variables during my
session, but how?

Thanks for your help!
Alex

From travis at continuum.io Fri Dec 28 18:01:27 2012
From: travis at continuum.io (Travis Oliphant)
Date: Fri, 28 Dec 2012 17:01:27 -0600
Subject: [SciPy-Dev] Installing multiple scipy versions for development
In-Reply-To: <50DE187E.1050101@gmx.net>
References: <50DE187E.1050101@gmx.net>
Message-ID: <45AA71EA-A8E1-4EBB-8279-D0DB2E0E018D@continuum.io>

This is exactly what Anaconda CE lets you do by creating environments:
http://docs.continuum.io/conda/examples/create.html

Here is more information about Anaconda CE:

- Cross-platform distribution for Linux, Windows, and Mac, both 32- and
  64-bit.
- Large list of included libraries, selected based on our knowledge of the
  Python/SciPy data analytics space.
- All libraries and the Python interpreter itself are built to be installed
  relocatably, so you can put Anaconda into ~/anaconda or /opt/anaconda and
  not worry about it polluting the rest of /usr/lib or /usr/local/lib.
- Robust support for having *multiple versions* of libraries, and even of
  the Python interpreter, installed at the same time. Switching between
  these is as easy as setting PATH or explicitly running a particular
  'python' via its full path. (See
  http://docs.continuum.io/anaconda/launcher.html#environments for more
  information about how this works.)
- A Python package management tool that handles C extensions and shared
  libraries in a robust, reliable manner.
  (http://docs.continuum.io/conda/intro.html)

The functionality for much of this is actually in the conda package, which
is available here: https://github.com/ContinuumIO/conda or here:
http://pypi.python.org/pypi/conda/1.2.1-52-g44d5674

You can also use Python virtual environments, but Anaconda's environments
are intended to be more general and work with more than just different
versions of Python packages.
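The PATH-based switching described above isn't specific to Anaconda. A minimal sketch of the mechanism, using two mock "environments" (plain directories with stub `python` scripts standing in for real interpreter installations — purely illustrative, no conda required):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Two mock "environments", each providing its own `python` front-end.
# (Real environments contain full interpreter installations; these
# stand-ins exist only to show the PATH mechanics.)
mkdir -p "$tmp/env-stable/bin" "$tmp/env-dev/bin"
printf '#!/bin/sh\necho stable\n' > "$tmp/env-stable/bin/python"
printf '#!/bin/sh\necho dev\n' > "$tmp/env-dev/bin/python"
chmod +x "$tmp/env-stable/bin/python" "$tmp/env-dev/bin/python"

# Whichever environment sits first on PATH wins:
env PATH="$tmp/env-stable/bin:$PATH" python   # prints "stable"
env PATH="$tmp/env-dev/bin:$PATH" python      # prints "dev"

# ...or bypass PATH entirely by invoking a full path:
"$tmp/env-dev/bin/python"                     # prints "dev"
```

This is all that "switching environments" amounts to at the shell level; tools like conda or virtualenv add the machinery for actually populating those directories with consistent interpreter and library builds.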
Best regards,

-Travis

On Dec 28, 2012, at 4:09 PM, Alexander Löw wrote:

> Hi,
>
> I am a newbie to scipy development, so sorry if the following question
> sounds stupid to some of you:
>
> What is best practice for installing multiple scipy versions on my machine
> for local development?
>
> I run Python under Ubuntu and have the system Python 2.7 installed together
> with scipy 0.10.1. I recently checked out the current scipy source code.
> Now I want to be able to switch easily between the development version of
> scipy and the stable code. How can I do this? I did not find anything in
> the wiki. I guess I would need to change some PATH variables during my
> session, but how?
>
> Thanks for your help!
>
> Alex
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Fri Dec 28 18:44:31 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 28 Dec 2012 23:44:31 +0000
Subject: [SciPy-Dev] Installing multiple scipy versions for development
In-Reply-To: <50DE187E.1050101@gmx.net>
References: <50DE187E.1050101@gmx.net>
Message-ID:

On 28 Dec 2012 22:09, "Alexander Löw" wrote:
>
> Hi,
>
> I am a newbie to scipy development, so sorry if the following question
> sounds stupid to some of you:
>
> What is best practice for installing multiple scipy versions on my machine
> for local development?
>
> I run Python under Ubuntu and have the system Python 2.7 installed together
> with scipy 0.10.1. I recently checked out the current scipy source code.
> Now I want to be able to switch easily between the development version of
> scipy and the stable code. How can I do this? I did not find anything in
> the wiki. I guess I would need to change some PATH variables during my
> session, but how?
The standard solution for this in Python is a tool called "virtualenv":
http://www.virtualenv.org

Just apt-get install virtualenv, and then you can do

  virtualenv myenv

to create a new isolated Python environment in a directory called 'myenv',
using the default Python version. Now you can run myenv/bin/python to get
a shell, and to install things into that environment you do

  myenv/bin/pip install

or

  myenv/bin/python setup.py install

Or just do

  source myenv/bin/activate

to automatically switch your PATH around to use that environment by default
for the current shell session, and then run 'python', 'pip', etc. as normal;
just type 'deactivate' to turn it off again. Pretty simple! If you want to
throw that environment away and start over, just delete the directory.

By default your new environment will contain *only* the standard library.
If you want it to inherit all the packages you have installed systemwide,
use the --system-site-packages option. (You can still override specific
packages in the virtualenv by installing the version you want there. This
is handy if you don't want to, e.g., recompile numpy.)

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.forbes+python at gmail.com Fri Dec 28 19:40:26 2012
From: michael.forbes+python at gmail.com (Michael McNeil Forbes)
Date: Fri, 28 Dec 2012 17:40:26 -0700
Subject: [SciPy-Dev] Installing multiple scipy versions for development
In-Reply-To: <45AA71EA-A8E1-4EBB-8279-D0DB2E0E018D@continuum.io>
References: <50DE187E.1050101@gmx.net> <45AA71EA-A8E1-4EBB-8279-D0DB2E0E018D@continuum.io>
Message-ID:

On 28 Dec 2012, at 4:01 PM, Travis Oliphant wrote:

> - Cross-platform distribution for Linux, Windows, Mac, both 32 and
> 64 bit.

Where can one find the 32-bit Mac version of Anaconda?

Thanks,
Michael.
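The virtualenv workflow Nathaniel describes in this thread can be exercised end-to-end. The sketch below uses the stdlib `venv` module instead (available on Python 3.3+, which provides the same isolated-environment layout); on Python 2.x you would substitute `virtualenv myenv`. The `--without-pip` flag is only there to keep the sketch self-contained on systems without ensurepip — a real setup would omit it so that `myenv/bin/pip` exists:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Create an isolated environment (stdlib analogue of `virtualenv myenv`).
python3 -m venv --without-pip "$tmp/myenv"

# The environment carries its own interpreter...
"$tmp/myenv/bin/python" -c 'import sys; print(sys.prefix)'

# ...which reports itself as isolated from the base installation:
"$tmp/myenv/bin/python" -c 'import sys; print(sys.prefix != sys.base_prefix)'  # prints "True"

# Throwing the environment away and starting over is a plain delete:
rm -rf "$tmp"
```

As in the virtualenv case, nothing here is registered anywhere globally: the environment is just a directory, so deleting it (or keeping several side by side, one per scipy version) is all the "management" required.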