From pav at iki.fi Sat Oct 1 11:37:53 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 01 Oct 2011 17:37:53 +0200
Subject: [SciPy-Dev] scipy.sparse vs. pysparse
In-Reply-To:
References:
Message-ID: <4E8733D1.4080502@iki.fi>

Hi,

Here is some optimization reducing the runtime overhead of scipy.sparse
matrix-vector multiplication by a factor of 5.

https://github.com/pv/scipy-work/compare/master...enh/sparse-speedup

And a patch against Scipy 0.9.0 (@Tony: maybe you want to try it out?):

https://github.com/pv/scipy-work/compare/v0.9.0...enh/sparse-speedup-0.9.patch

***

Quick benchmark: http://dl.dropbox.com/u/5453551/bench_sparse.py
(Multiply vector with 1-D CSR Laplacian operator.)

After:

% N         scipy.sparse    pysparse
  32        7.169e-06       1.048e-06
  64        7.367e-06       1.787e-06
  128       7.814e-06       3.284e-06
  256       8.633e-06       6.336e-06
  512       1.025e-05       1.241e-05
  1024      1.435e-05       2.455e-05
  2048      1.989e-05       4.882e-05
  4096      3.384e-05       9.798e-05
  8192      6.098e-05       0.0001959

Before:

% N         scipy.sparse    pysparse
  32        3.708e-05       1.032e-06
  64        3.736e-05       1.803e-06
  128       3.777e-05       3.368e-06
  256       3.95e-05        6.324e-06
  512       4.116e-05       1.267e-05
  1024      4.661e-05       2.455e-05
  2048      5.38e-05        4.873e-05
  4096      6.946e-05       9.763e-05
  8192      9.563e-05       0.0001959

The cross-over occurs around N ~ 300 instead of around N ~ 3000. The main
reason for the overhead is that multiplication with a sparse Laplacian is a
pretty lightweight operation, so the fact that some of scipy.sparse is
written in pure Python starts to matter.

-- 
Pauli Virtanen

From lkb.teichmann at gmail.com Sat Oct 1 17:31:52 2011
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sat, 1 Oct 2011 23:31:52 +0200
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To:
References:
Message-ID:

Hello everybody,

I just saw that there is some discussion about leastsq, and guessed that
this is a good time to add some ideas. I have been working on an
all-python fitter, which is just the minpack fitter translated to python.
This is why I was bugging y'all for the qr_multiply stuff, for those who
remember. Having the fitter in Python is quite useful, because calling
Python from FORTRAN was sometimes ugly: most importantly, exceptions often
didn't find their way back through the FORTRAN layer.

I added some new stuff as well: if the fitted function realizes that its
parameters are running away, it may raise an InvalidParameter exception,
and the fitting algorithm will choose a smaller step size for this
parameter.

You can find the code at

https://github.com/scipy/scipy/pull/88

Greetings

Martin

From tony at maths.lth.se Mon Oct 3 05:11:07 2011
From: tony at maths.lth.se (Tony Stillfjord)
Date: Mon, 3 Oct 2011 11:11:07 +0200
Subject: [SciPy-Dev] scipy.sparse vs. pysparse
In-Reply-To: <4E8733D1.4080502@iki.fi>
References: <4E8733D1.4080502@iki.fi>
Message-ID:

Hello,

I apologize for not responding earlier. Thanks for taking the time to work
on this, Pauli. I took the liberty of sending this to the mailing list,
since I believe that's where you meant to send your previous email instead
of directly to me.

I managed to apply the patch (first time doing that :) ) and ran the same
test script you supplied. I get similar speed-ups, but on my system
pysparse seems to be faster than on yours, while scipy.sparse is slower.
I'm inclined to believe that has something to do with how I built scipy,
though I can't think of a reason straight away. I was even a little less
kind towards pysparse in my test, in that I used the high-level matrix
class and A*b rather than the low-level A.matvec(b, x) - see below.
Results of Pauli's test script on my Ubuntu system:

After:

% N         scipy.sparse    pysparse
  32        9.585e-06       4.476e-07
  64        1.021e-05       6.214e-07
  128       1.054e-05       1.009e-06
  256       1.158e-05       1.651e-06
  512       1.328e-05       3.177e-06
  1024      1.619e-05       6.085e-06
  2048      2.251e-05       1.18e-05
  4096      3.483e-05       2.314e-05
  8192      5.997e-05       4.585e-05
  16384     0.0001373       9.185e-05
  32768     0.0002084       0.0001839

Before:

% N         scipy.sparse    pysparse
  32        5.375e-05       4.51e-07
  64        5.47e-05        6.221e-07
  128       5.563e-05       9.973e-07
  256       5.579e-05       1.681e-06
  512       5.723e-05       3.166e-06
  1024      6.057e-05       6.074e-06
  2048      6.736e-05       1.177e-05
  4096      7.966e-05       3.063e-05
  8192      0.0001047       4.587e-05
  16384     0.0001538       9.193e-05
  32768     0.0002546       0.0001832

I also tried my own benchmark that you can get here:
http://dl.dropbox.com/u/2349184/pysparse_vs_scipy_dev.py

The results (in microseconds):

1D:

N           SciPy       pysparse
50          1.60e+01    5.99e+00
100         1.01e+01    6.31e+00
200         1.10e+01    7.39e+00
300         1.16e+01    8.25e+00
500         1.29e+01    1.02e+01
1000        1.71e+01    1.36e+01
2500        2.51e+01    2.47e+01
5000        4.03e+01    4.26e+01
10000       7.07e+01    8.15e+01
25000       1.62e+02    1.90e+02
50000       3.67e+02    4.90e+02
100000      1.14e+03    1.37e+03

2D:

N=M^2       SciPy       pysparse
100         1.05e+01    6.86e+00
625         1.58e+01    1.42e+01
2500        3.21e+01    3.82e+01
10000       9.81e+01    1.36e+02
40000       5.15e+02    9.93e+02
90000       1.40e+03    2.56e+03
250000      3.99e+03    7.79e+03
1000000     1.55e+04    3.36e+04

Comparing to my original email one can see that the 2D results are even
more satisfying. The same factor-5 speedup at the smallest size and a
significant decrease also at N=10000 (almost 50%).

When I get around to it I will try this out with some more "realistic"
matrices.

Regards,
Tony Stillfjord

On Sat, Oct 1, 2011 at 5:37 PM, Pauli Virtanen wrote:

> Hi,
>
> Here is some optimization reducing the runtime overhead of scipy.sparse
> matrix-vector multiplication by a factor of 5.
>
> https://github.com/pv/scipy-work/compare/master...enh/sparse-speedup
>
> And a patch against Scipy 0.9.0 (@Tony: maybe you want to try it out?):
>
> https://github.com/pv/scipy-work/compare/v0.9.0...enh/sparse-speedup-0.9.patch
>
> ***
>
> Quick benchmark: http://dl.dropbox.com/u/5453551/bench_sparse.py
> (Multiply vector with 1-D CSR Laplacian operator.)
>
> After:
>
> % N         scipy.sparse    pysparse
>   32        7.169e-06       1.048e-06
>   64        7.367e-06       1.787e-06
>   128       7.814e-06       3.284e-06
>   256       8.633e-06       6.336e-06
>   512       1.025e-05       1.241e-05
>   1024      1.435e-05       2.455e-05
>   2048      1.989e-05       4.882e-05
>   4096      3.384e-05       9.798e-05
>   8192      6.098e-05       0.0001959
>
> Before:
>
> % N         scipy.sparse    pysparse
>   32        3.708e-05       1.032e-06
>   64        3.736e-05       1.803e-06
>   128       3.777e-05       3.368e-06
>   256       3.95e-05        6.324e-06
>   512       4.116e-05       1.267e-05
>   1024      4.661e-05       2.455e-05
>   2048      5.38e-05        4.873e-05
>   4096      6.946e-05       9.763e-05
>   8192      9.563e-05       0.0001959
>
> The cross-over occurs around N ~ 300 instead of around N ~ 3000. The main
> reason for the overhead is that multiplication with a sparse Laplacian is
> a pretty lightweight operation, so the fact that some of scipy.sparse is
> written in pure Python starts to matter.
>
> --
> Pauli Virtanen
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From g.durin at inrim.it Mon Oct 3 11:06:11 2011
From: g.durin at inrim.it (Gianfranco Durin)
Date: Mon, 03 Oct 2011 17:06:11 +0200 (CEST)
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To:
Message-ID:

> The minimization of $\chi^2$ above is sometimes called weighted
> least squares, in which case the inverse quantities 1/e^2 are called
> the weights. Clearly this is simply a different word for the same
> thing, but in practice the use of these words sometimes means that
> the interpretation of e^2 as variances or squared errors is not
> straightforward.
> The word weight often implies that only the
> relative weights are known ("point two is twice as important as
> point one"), in which case there is apparently an unknown overall
> normalization factor. Unfortunately the parameter errors coming out
> of such a fit will be proportional to this factor, and the user must
> be aware of this in the formulation of his problem.
> '''
> (I don't quite understand the last sentence.)
>
> M = \sigma^2 W^(-1), where \sigma^2 is estimated by the residual sum
> of squares from weighted regression.
>
> W only specifies relative errors; the assumption is that the
> covariance matrix of the errors is *proportional* to W. The scaling
> is arbitrary. If the scale of W changes, then the estimated residual
> sum of squares from weighted regression will compensate for it. So,
> rescaling W has no effect on the covariance of the parameter
> estimates.
>
> I checked in Greene: Econometric Analysis, and briefly looked at the
> SAS description. It looks like weighted least squares is always with
> automatic scaling; W is defined only as relative weights.
>
> All I seem to be able to find is weighted least squares with
> automatic scaling (except for maybe some two-step estimators).
>
> Curve_fit calculates
>
> M = W \sigma^2 W^(-1) = \sigma^2
>
> If I remember correctly (calculated from the transformed model) it
> should be:
>
> the cov of the parameter estimates is s^2 (X'WX)^(-1)
> error estimates should be s^2 * W
>
> where W = diag(1/curvefit_sigma**2). Unfortunate terminology for
> curve_fit's sigma, or intentional? (as I mentioned in the other
> thread)
>
> Josef
>
> _______________________________________________
>
> Gianfranco,
> Can you please provide some Python and R (or SAS) code to show what
> you mean?
> ...
> Bruce

Bruce, Josef and the others,

after reading a few books, searching many websites, and trying many
different software packages (and with the help of many colleagues), I came
to a conclusion which should settle the question:

Weights and data errors (or uncertainties) CAN be different concepts, as
written above. It depends on the user... In particular:

1. If he/she thinks of the sigma JUST as weights and NOT as the standard
deviation of ydata, cov_x MUST be multiplied by the residual standard
deviation.

2. If the user thinks of the sigma as the standard deviation of ydata
(i.e. measurement errors), which are by the way ALSO good for weighting
the data themselves, then cov_x DOES NOT NEED to be multiplied by the
residual standard deviation.

I tried a few packages, and found they assume one of the two options by
default, without 'asking' (or making aware) the user. In particular (check
using the very same data and errors):

Option 1: SAS, R, gnuplot, octave...
Option 2: Profit, Origin, ...

And Mathematica? In the HOWTO "Fit Models with Measurement Errors" (see
below), Mathematica makes the distinction between weights and measurement
errors, so the user can decide how to use his/her sigma.

I think we should make this distinction explicit also in our curve_fit,
and report it in the leastsq doc.

Gianfranco

============================================================
from Mathematica:

"Particularly in the physical sciences, it is common to use measurement
errors as weights to incorporate measured variation into the fitting.
Weights have a relative effect on the parameter estimates, but an error
variance still needs to be estimated in weighted regression, and this
impacts error estimates for results."

1. When using Weights alone, the variance scale is estimated using the
default method [i.e. our s_sq]. Error estimates will depend on both the
weights and the estimated variance scale.
However, if the weights are from measurement errors, you would want error
estimates to depend solely on the weights. It is important to note that
weights do not change the fitting or error estimates. For example,
multiplying all weights by a constant increases the estimated variance,
but does not change the parameter estimates or standard errors. (Ps. This
is what I meant saying that the parameters' errors ....

2. For measurement errors, you want standard errors to be computed only
from the weights.... While weights have an impact on parameter estimates,
the variance estimate itself does not."

From bsouthey at gmail.com Mon Oct 3 11:51:19 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 03 Oct 2011 10:51:19 -0500
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To:
References:
Message-ID: <4E89D9F7.8050002@gmail.com>

On 10/03/2011 10:06 AM, Gianfranco Durin wrote:
>
>> The minimization of $\chi^2$ above is sometimes called weighted
>> least squares, in which case the inverse quantities 1/e^2 are called
>> the weights. Clearly this is simply a different word for the same
>> thing, but in practice the use of these words sometimes means that
>> the interpretation of e^2 as variances or squared errors is not
>> straightforward. The word weight often implies that only the
>> relative weights are known ("point two is twice as important as
>> point one"), in which case there is apparently an unknown overall
>> normalization factor. Unfortunately the parameter errors coming out
>> of such a fit will be proportional to this factor, and the user must
>> be aware of this in the formulation of his problem.
>> '''
>> (I don't quite understand the last sentence.)
>>
>> M = \sigma^2 W^(-1), where \sigma^2 is estimated by the residual sum
>> of squares from weighted regression.
>>
>> W only specifies relative errors; the assumption is that the
>> covariance matrix of the errors is *proportional* to W. The scaling
>> is arbitrary.
>> If the scale of W changes, then the estimated residual
>> sum of squares from weighted regression will compensate for it. So,
>> rescaling W has no effect on the covariance of the parameter
>> estimates.
>>
>> I checked in Greene: Econometric Analysis, and briefly looked at the
>> SAS description. It looks like weighted least squares is always with
>> automatic scaling; W is defined only as relative weights.
>>
>> All I seem to be able to find is weighted least squares with
>> automatic scaling (except for maybe some two-step estimators).
>>
>> Curve_fit calculates
>>
>> M = W \sigma^2 W^(-1) = \sigma^2
>>
>> If I remember correctly (calculated from the transformed model) it
>> should be:
>>
>> the cov of the parameter estimates is s^2 (X'WX)^(-1)
>> error estimates should be s^2 * W
>>
>> where W = diag(1/curvefit_sigma**2). Unfortunate terminology for
>> curve_fit's sigma, or intentional? (as I mentioned in the other
>> thread)
>>
>> Josef
>>
>> _______________________________________________
>>
>> Gianfranco,
>> Can you please provide some Python and R (or SAS) code to show what
>> you mean?
>> ...
>> Bruce
> Bruce, Josef and the others,
> after reading a few books, searching many websites, and trying many
> different software packages (and with the help of many colleagues), I
> came to a conclusion which should settle the question:
>
> Weights and data errors (or uncertainties) CAN be different concepts, as
> written above. It depends on the user...
> In particular:
> 1. If he/she thinks of the sigma JUST as weights and NOT as the standard
> deviation of ydata, cov_x MUST be multiplied by the residual standard
> deviation.
> 2. If the user thinks of the sigma as the standard deviation of ydata
> (i.e. measurement errors), which are by the way ALSO good for weighting
> the data themselves, then cov_x DOES NOT NEED to be multiplied by the
> residual standard deviation.

It is very simple to know if the weights include 'sigma' because the
estimated sigma must be one.
So it should not make any difference, but if it does, then there is
probably a bigger issue than that!

> I tried a few packages, and found they assume one of the two options by
> default, without 'asking' (or making aware) the user. In particular
> (check using the very same data and errors):
> Option 1: SAS, R, gnuplot, octave...
> Option 2: Profit, Origin, ...
>
> And Mathematica? In the HOWTO "Fit Models with Measurement Errors" (see
> below), Mathematica makes the distinction between weights and
> measurement errors, so the user can decide how to use his/her sigma.
>
> I think we should make this distinction explicit also in our curve_fit,
> and report it in the leastsq doc.
>
> Gianfranco

Some packages (R and SAS) allow fitting this by fixing certain parameters,
or use equivalent models to avoid estimating those parameters. I am not
knowledgeable enough to know if you can do that with the underlying
leastsq function, perhaps with some equivalent model and function.

> ============================================================
> from Mathematica:
> "Particularly in the physical sciences, it is common to use measurement
> errors as weights to incorporate measured variation into the fitting.
> Weights have a relative effect on the parameter estimates, but an error
> variance still needs to be estimated in weighted regression, and this
> impacts error estimates for results."
>
> 1. When using Weights alone, the variance scale is estimated using the
> default method [i.e. our s_sq]. Error estimates will depend on both the
> weights and the estimated variance scale. However, if the weights are
> from measurement errors, you would want error estimates to depend solely
> on the weights.
> It is important to note that weights do not change the fitting or error
> estimates. For example, multiplying all weights by a constant increases
> the estimated variance, but does not change the parameter estimates or
> standard errors. (Ps. This is what I meant saying that the parameters'
> errors
> ....

(That is, well, obvious!)
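[The two options discussed in this thread can be checked numerically with a small weighted straight-line fit. This is only a sketch with made-up numbers; `wls_cov` is a hypothetical helper written for illustration, not an existing scipy function.]

```python
import numpy as np

def wls_cov(x, y, sigma):
    # Weighted linear fit y ~ p0 + p1*x.  Returns the parameters and
    # the two covariance conventions discussed in this thread.
    X = np.column_stack([np.ones_like(x), x])   # design matrix
    W = np.diag(1.0 / sigma**2)                 # weight matrix
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    p = XtWX_inv @ X.T @ W @ y                  # parameter estimates
    resid = (y - X @ p) / sigma
    s_sq = resid @ resid / (len(x) - len(p))    # residual variance
    cov_weights = s_sq * XtWX_inv   # option 1: sigma = relative weights
    cov_errors = XtWX_inv           # option 2: sigma = measurement errors
    return p, cov_weights, cov_errors

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
e = np.array([0.1, 0.1, 0.2, 0.2, 0.3])

p, cov_w, cov_e = wls_cov(x, y, e)
# Rescale all sigmas by 10: the parameters and option 1's covariance are
# unchanged (s_sq absorbs the factor), option 2's covariance scales by 100.
p10, cov_w10, cov_e10 = wls_cov(x, y, 10 * e)
```

[This reproduces both behaviours described above: option 1 is invariant under an overall rescaling of the sigmas, while option 2 is not.]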
> 2. For measurement errors, you want standard errors to be computed only
> from the weights.... While weights have an impact on parameter
> estimates, the variance estimate itself does not."
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

That is because the weights are imposing a known variance-covariance
structure on the data, but still assuming the errors are identically and
independently distributed.

Bruce

From pav at iki.fi Mon Oct 3 19:12:22 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 04 Oct 2011 01:12:22 +0200
Subject: [SciPy-Dev] RSS feeds for pull requests
Message-ID:

Hi,

It seems that a part of the nitty-gritty development discussions has moved
to the Github pull requests. Keeping tabs on these is not completely easy,
since (i) AFAIK only the people listed as "repository owners" get email
notifications of new PRs, and (ii) they also do not get notifications of
changes unless they say something in the discussion.

Here's one work-around for the situation: RSS feeds for everything that's
going on in the pull requests. Github doesn't itself seem to offer this --
at least I didn't manage to find it -- but at least it was fairly simple
[1] to patch it together.

All chatter:

http://new.scipy.org/numpy-chatter.xml
http://new.scipy.org/scipy-chatter.xml

Only new pull requests:

http://new.scipy.org/numpy-pull.xml
http://new.scipy.org/scipy-pull.xml

They don't update in real time, and I'm not sure how useful these will
turn out to be in practice, but there it goes.
[1] https://github.com/pv/github-pull-request-rss

-- 
Pauli Virtanen

From warren.weckesser at enthought.com Mon Oct 3 19:22:06 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Mon, 3 Oct 2011 18:22:06 -0500
Subject: [SciPy-Dev] RSS feeds for pull requests
In-Reply-To:
References:
Message-ID:

On Mon, Oct 3, 2011 at 6:12 PM, Pauli Virtanen wrote:

> Hi,
>
> It seems that a part of the nitty-gritty development discussions has
> moved to the Github pull requests. Keeping tabs on these is not
> completely easy, since (i) AFAIK only the people listed as "repository
> owners" get email notifications of new PRs, and (ii) also they do not
> get notifications of changes unless they say something in the discussion.

I'm not sure this is relevant, but it sounds like it: at the bottom of a
pull request there is this:

"""
Notifications for new comments on this Pull Request are *off*. Enable
notifications for this Pull Request
"""

Does anyone know what that link does?

Warren

> Here's one work-around to the situation: RSS feeds for everything that's
> going on in the pull requests. Github doesn't itself seem to offer this
> -- at least I didn't manage to find it -- but at least it was fairly
> simple [1] to patch it together.
>
> All chatter:
>
> http://new.scipy.org/numpy-chatter.xml
> http://new.scipy.org/scipy-chatter.xml
>
> Only new pull requests:
>
> http://new.scipy.org/numpy-pull.xml
> http://new.scipy.org/scipy-pull.xml
>
> They don't update in real time, and I'm not sure how useful these turn
> out to be in reality, but there it goes.
>
> [1] https://github.com/pv/github-pull-request-rss
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Mon Oct 3 19:50:09 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 04 Oct 2011 01:50:09 +0200
Subject: [SciPy-Dev] RSS feeds for pull requests
In-Reply-To:
References:
Message-ID:

04.10.2011 01:22, Warren Weckesser wrote:
[clip]
> I'm not sure this is relevant, but it sounds like it:
>
> At the bottom of a pull request there is this:
>
> """
> Notifications for new comments on this Pull Request are *off*. Enable
> notifications for this Pull Request
> """
>
> Does anyone know what that link does?

That sounds relevant -- I guess it makes Github send you mail for each new
comment. At least it seems to be turned on for the PRs for which I get
mail. But this requires you to go and click through every new PR...

In principle I can also 'watch' the projects I want to follow, and then
grab the RSS feed of my Dashboard. However, since everything is pretty
much jumbled together there, it's not so easy to follow.

***

Another thing that was on the table at some point was automatic mails to
the ML for new pull requests. I think GH doesn't offer support for this,
but it could still be doable thanks to their API.

-- 
Pauli Virtanen

From raphael.leplae at ulb.ac.be Tue Oct 4 09:09:20 2011
From: raphael.leplae at ulb.ac.be (Raphael Leplae)
Date: Tue, 04 Oct 2011 15:09:20 +0200
Subject: [SciPy-Dev] Compilation problem on CentOS release 5.6
Message-ID: <4E8B0580.308@ulb.ac.be>

Dear all,

I am having trouble with the compilation of SciPy version 0.10.0b2 on
CentOS 5.6.
Here is the info:

Kernel 2.6.18-238.9.1.el5 x86_64
Python 2.7.2
GCC 4.6.1
ATLAS 3.8.3 (rpm install with yum)
NumPy 1.6.1 (compiled, tested and installed successfully)

Defined in the shell:

export BLAS=/usr/lib64/atlas/libcblas.so
export ATLAS=/usr/lib64/atlas/libatlas.so
export LAPACK=/usr/lib64/atlas/liblapack.so

File site.cfg:

[DEFAULT]
libraries = gfortran, gfortranbegin
library_dirs = /usr/local/lib:/usr/local/opt/gcc/4.6.1/lib

When issuing the command:

python setup.py build

I get a big list of warnings and then error messages. The whole log is in
the attachment.

After the line:

gcc: build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c

error messages like this start to show up:

/usr/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o: In function `f2py_rout__fftpack_destroy_dct1_cache':
/software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2532: undefined reference to `PyArg_ParseTupleAndKeywords'
/software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2543: undefined reference to `PyErr_Occurred'
/software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2553: undefined reference to `Py_BuildValue'
build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o: In function `f2py_rout__fftpack_destroy_dct2_cache':
/software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2471: undefined reference to `PyArg_ParseTupleAndKeywords'

Does someone know why those "undefined reference" errors are showing up?

Thanks in advance for your help.

Raphael

-- 
___________________________________________________________
Raphael Leplae, Ph.D.
Operations manager            Tel: +32 2 650 3727
Computing Center              Fax: +32 2 650 3740
ULB/VUB
Avenue A. Buyl, 91 - CP 197
1050 Brussels
Belgium
___________________________________________________________

-------------- next part --------------
A non-text attachment was scrubbed...
Name: build.log
Type: text/x-log
Size: 382198 bytes
Desc: not available
URL:

From bsouthey at gmail.com Tue Oct 4 09:22:26 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 04 Oct 2011 08:22:26 -0500
Subject: [SciPy-Dev] Compilation problem on CentOS release 5.6
In-Reply-To: <4E8B0580.308@ulb.ac.be>
References: <4E8B0580.308@ulb.ac.be>
Message-ID: <4E8B0892.60806@gmail.com>

On 10/04/2011 08:09 AM, Raphael Leplae wrote:
> Dear all,
>
> I am having troubles with the compilation of SciPy version 0.10.0b2 on
> CentOS 5.6.
> Here is the info:
> Kernel 2.6.18-238.9.1.el5 x86_64
> Python 2.7.2
> GCC 4.6.1
> ATLAS 3.8.3 (rpm install with yum)
> NumPy 1.6.1 (compiled, tested and installed successfully)
>
> Defined in the shell:
> export BLAS=/usr/lib64/atlas/libcblas.so
> export ATLAS=/usr/lib64/atlas/libatlas.so
> export LAPACK=/usr/lib64/atlas/liblapack.so
>
> File site.cfg:
> [DEFAULT]
> libraries = gfortran, gfortranbegin
> library_dirs = /usr/local/lib:/usr/local/opt/gcc/4.6.1/lib
>
> When issuing the command:
> python setup.py build
> I get a big list of warnings and then error messages. The whole log is
> in attachment.
>
> After the line:
> gcc: build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c
>
> Error messages like this start to show up:
> /usr/lib/../lib64/crt1.o: In function `_start':
> (.text+0x20): undefined reference to `main'
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o:
> In function `f2py_rout__fftpack_destroy_dct1_cache':
> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2532:
> undefined reference to `PyArg_ParseTupleAndKeywords'
> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2543:
> undefined reference to `PyErr_Occurred'
> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2553:
> undefined reference to `Py_BuildValue'
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o:
> In function `f2py_rout__fftpack_destroy_dct2_cache':
> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2471:
> undefined reference to `PyArg_ParseTupleAndKeywords'
>
> Someone knows why those "undefined reference" errors are showing up?
>
> Thanks in advance for your help.
>
> Raphael
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

You missed this part at the start:

"Found executable /usr/bin/g77"

But then it is looking for gfortran (which I know Fedora installs) later on:

"F77 : gfortran, version GNU Fortran (GCC) 4.1.2 20080704 (Red Hat
4.1.2-44)"

This is addressed at:
http://docs.scipy.org/doc/numpy/user/install.html#fortran-abi-mismatch

Regards
Bruce

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From raphael.leplae at ulb.ac.be Tue Oct 4 10:22:48 2011
From: raphael.leplae at ulb.ac.be (Raphael Leplae)
Date: Tue, 04 Oct 2011 16:22:48 +0200
Subject: [SciPy-Dev] Compilation problem on CentOS release 5.6
In-Reply-To: <4E8B0892.60806@gmail.com>
References: <4E8B0580.308@ulb.ac.be> <4E8B0892.60806@gmail.com>
Message-ID: <4E8B16B8.7010705@ulb.ac.be>

Dear Bruce,

Thanks for the quick reply. I tried again as:

python setup.py clean
python setup.py build --fcompiler=gnu

F77 is used for the compilation:

Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC
-O3 -funroll-loops
...
g77:f77: scipy/fftpack/src/dfftpack/dcosqi.f
...

but I still get the same errors. What I find strange is that symbols such
as PyArg_ParseTupleAndKeywords are defined in the Python include files. In
my case:

/usr/local/opt/python/2.7.2/include/python2.7/modsupport.h:#define PyArg_ParseTupleAndKeywords _PyArg_ParseTupleAndKeywords_SizeT

And the right path is given to the compiler:

-I/usr/local/opt/python/2.7.2/include/python2.7

I was not confronted with this problem while compiling other modules.

Cheers,

Raphael

On 10/04/2011 03:22 PM, Bruce Southey wrote:
> On 10/04/2011 08:09 AM, Raphael Leplae wrote:
>> Dear all,
>>
>> I am having troubles with the compilation of SciPy version 0.10.0b2 on
>> CentOS 5.6.
>> Here is the info:
>> Kernel 2.6.18-238.9.1.el5 x86_64
>> Python 2.7.2
>> GCC 4.6.1
>> ATLAS 3.8.3 (rpm install with yum)
>> NumPy 1.6.1 (compiled, tested and installed successfully)
>>
>> Defined in the shell:
>> export BLAS=/usr/lib64/atlas/libcblas.so
>> export ATLAS=/usr/lib64/atlas/libatlas.so
>> export LAPACK=/usr/lib64/atlas/liblapack.so
>>
>> File site.cfg:
>> [DEFAULT]
>> libraries = gfortran, gfortranbegin
>> library_dirs = /usr/local/lib:/usr/local/opt/gcc/4.6.1/lib
>>
>> When issuing the command:
>> python setup.py build
>> I get a big list of warnings and then error messages. The whole log is
>> in attachment.
>>
>> After the line:
>> gcc: build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c
>>
>> Error messages like this start to show up:
>> /usr/lib/../lib64/crt1.o: In function `_start':
>> (.text+0x20): undefined reference to `main'
>> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o:
>> In function `f2py_rout__fftpack_destroy_dct1_cache':
>> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2532:
>> undefined reference to `PyArg_ParseTupleAndKeywords'
>> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2543:
>> undefined reference to `PyErr_Occurred'
>> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2553:
>> undefined reference to `Py_BuildValue'
>> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.o:
>> In function `f2py_rout__fftpack_destroy_dct2_cache':
>> /software/CC/local/src/python/Libraries/scipy-0.10.0b2/build/src.linux-x86_64-2.7/scipy/fftpack/_fftpackmodule.c:2471:
>> undefined reference to `PyArg_ParseTupleAndKeywords'
>>
>> Someone knows why those "undefined reference" errors are showing up?
>>
>> Thanks in advance for your help.
>>
>> Raphael
>>
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev

> You missed this part at the start:
> "Found executable /usr/bin/g77"
>
> But then it is looking for gfortran (which I know Fedora installs) later
> on:
> "F77 : gfortran, version GNU Fortran (GCC) 4.1.2 20080704 (Red Hat
> 4.1.2-44)"
>
> This is addressed at:
> http://docs.scipy.org/doc/numpy/user/install.html#fortran-abi-mismatch
>
> Regards
> Bruce
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-- 
___________________________________________________________
Raphael Leplae, Ph.D.
Operations manager            Tel: +32 2 650 3727
Computing Center              Fax: +32 2 650 3740
ULB/VUB
Avenue A. Buyl, 91 - CP 197
1050 Brussels
Belgium
___________________________________________________________

From tony at maths.lth.se Thu Oct 6 04:59:07 2011
From: tony at maths.lth.se (Tony Stillfjord)
Date: Thu, 6 Oct 2011 10:59:07 +0200
Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode
Message-ID:

Dear all,

Given that Python is not MATLAB, would it be possible to add an
additional, in some cases more intuitive, way of calling the spdiags
function? This function creates a sparse matrix given its non-zero
diagonals.

Currently, the first input has to be an array 'data' where every row
corresponds to a diagonal - which one is specified in the next input,
'diags'. However, as the diagonals have different lengths, the actual
values have to be zero-padded on either the left or the right end to fit
the array. This is not a big problem if all the elements in a diagonal
are the same, for example

spdiags([ones(4), -2*ones(4), ones(4)], (-1, 0, 1), 4, 4)

works nicely, as some values are just ignored.
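[Tony's constant-diagonal call can be written out explicitly. A small sketch of the current padding convention: spdiags reads superdiagonals from the right end of their row and subdiagonals from the left, silently ignoring the extra entries.]

```python
import numpy as np
from scipy import sparse

n = 4
# One fixed-width row per diagonal, as spdiags currently requires.
data = np.vstack([np.ones(n), -2 * np.ones(n), np.ones(n)])
A = sparse.spdiags(data, (-1, 0, 1), n, n)

# The last entry of the offset -1 row and the first entry of the
# offset +1 row are ignored, yielding the usual 1-D Laplacian stencil.
expected = np.array([[-2.,  1.,  0.,  0.],
                     [ 1., -2.,  1.,  0.],
                     [ 0.,  1., -2.,  1.],
                     [ 0.,  0.,  1., -2.]])
assert np.allclose(A.toarray(), expected)
```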
But when that is not the case, and all the values are already stored in some arrays, it does not make sense that the user should have to pad them with a number of zeros (remembering on which end to put them!) and then also construct one more temporary array to hold them. I suggest that spdiags accepts a list/tuple as data input, where every list element is supposed to be an array holding a diagonal of the correct size. That is, instead of doing something like

    spdiags(row_stack((r_[zeros(2), a], b, r_[c, zeros(3)])), (-2, 0, 3), n, n)

one would write

    spdiags([a, b, c], (-2, 0, 3), n, n)

Does that make sense? What it boils down to is that you should not have to know how the matrix type is implemented to use it - right? The diagonals could just as well be stored in different ways than they are currently. When I had a look at the source code, it seems that all the work is done in the dia_matrix __init__ method, but the same argument applies there. Currently, you -can- send in a list [a,b,c] where a, b and c are arrays of different length, but the function atleast_2d() is applied to the list, which turns it into an array of objects. This obviously causes problems. Kind regards, Tony Stillfjord -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 6 05:33:52 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 06 Oct 2011 11:33:52 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: Hi, 06.10.2011 10:59, Tony Stillfjord kirjoitti: > Given that Python is not MATLAB, would it be possible to add an > additional, in some cases more intuitive, way of calling the spdiags > function? This function creates a sparse matrix given its non-zero diagonals. > > Currently, the first input has to be an array 'data' where every row > corresponds to a diagonal - which one is specified in the next input, 'diags'.
However, > as the diagonals have different length, the actual values have to be zero-padded on > either the left or right end to fit the array. [clip] I was just thinking about the same thing: the way it works now is a bit of a pain. Currently, I'm using this:

    def spdiags2(data, diags, M, N, **kw):
        n = min(M, N)
        data_mat = np.zeros((len(data), n), dtype=np.common_type(*data))
        for j, q in enumerate(data):
            m = n - abs(diags[j])
            if diags[j] >= 0:
                data_mat[j, -m:] = q
            else:
                data_mat[j, :m] = q
        return sparse.spdiags(data_mat, diags, M, N, **kw)

But I agree the `spdiags` function itself should be fixed. -- Pauli Virtanen From pav at iki.fi Thu Oct 6 08:30:04 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 06 Oct 2011 14:30:04 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: 06.10.2011 11:33, Pauli Virtanen kirjoitti: [clip] >> Currently, the first input has to be an array 'data' where every row >> corresponds to a diagonal - which one is specified in the next input, 'diags'. However, >> as the diagonals have different length, the actual values have to be zero-padded on >> either the left or right end to fit the array. > [clip] > > I was just thinking about the same thing: the way it works now is a bit > of a pain. After a moment of thought, I believe the semantics of `spdiags` cannot be changed without also affecting the meaning of:

    sparse.spdiags([[1,2,3]], [1], 4, 4)

The remaining options are either to add a keyword flag to the function, or to add a separate function (maybe called `diags` or something) to toggle the more convenient offset semantics.
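For readers following along, the padding convention this helper relies on can be checked with a quick numpy-only sketch (the diagonal values below are arbitrary illustration data, not from the thread): diagonals with a non-negative offset are padded on the left, and negative offsets on the right.

```python
import numpy as np

# Diagonals at their "natural" (unpadded) lengths for a 4x4 matrix.
a = np.array([1.0, 2.0, 3.0])       # offset -1 (sub-diagonal)
b = np.array([4.0, 5.0, 6.0, 7.0])  # offset  0 (main diagonal)
c = np.array([8.0, 9.0, 10.0])      # offset +1 (super-diagonal)

n = 4
offsets = [-1, 0, 1]

# Reproduce the padded 'data' array that spdiags expects: values on a
# non-negative offset go at the right end of the row (left-padded with
# zeros); values on a negative offset go at the left end (right-padded).
data_mat = np.zeros((3, n))
for j, q in enumerate([a, b, c]):
    m = n - abs(offsets[j])
    if offsets[j] >= 0:
        data_mat[j, -m:] = q
    else:
        data_mat[j, :m] = q

print(data_mat)
```

With the proposed list-based calling mode, none of this bookkeeping would be needed: the unpadded arrays `a`, `b`, `c` would be passed directly.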
-- Pauli Virtanen From tony at maths.lth.se Thu Oct 6 09:10:41 2011 From: tony at maths.lth.se (Tony Stillfjord) Date: Thu, 6 Oct 2011 15:10:41 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: At first I thought that example would fail since the row(s) did not have length 4, but then I realised that there are actually no checks at all on the size of the diagonals, just that the number of them corresponds to the number of offsets and that there are no duplicates... I tried it in MATLAB too just to check if this behaviour really is that strange. There, the equivalent expression spdiags([[1,2,3]'], 1, 4, 4) gives an index error (also a strange kind of error). With the offset -1 it works fine, though, and also if I add elements to the diagonal so that it has length 4. So somewhat strange, but the scipy implementation does not seem to be fully equivalent to the MATLAB one. Still, the change I suggested would probably have to raise an exception if the list has diagonals of the wrong length, and this would still break more 'normal' constructions like my original example, spdiags([ones(4), -2*ones(4), ones(4)], (-1,0,1), 4, 4), so I agree that this shouldn't be done. I'm more in favour of a keyword flag rather than a new function. spdiags feels like the accepted name for what is to be done, and then how to do it more specifically should be an option within this framework (function). Maybe 'padding = False/True' or 'exact_length = False/True'? Regards, Tony Stillfjord On Thu, Oct 6, 2011 at 2:30 PM, Pauli Virtanen wrote: > 06.10.2011 11:33, Pauli Virtanen kirjoitti: > [clip] > >> Currently, the first input has to be an array 'data' where every row > >> corresponds to a diagonal - which one is specified in the next input, > 'diags'. However, > >> as the diagonals have different length, the actual values have to be > zero-padded on > >> either the left or right end to fit the array.
> > [clip] > > > > I was just thinking about the same thing: the way it works now is a bit > > of a pain. > > After a moment of thought, I believe the semantics of `spdiags` cannot > be changed without also affecting the meaning of: > > sparse.spdiags([[1,2,3]], [1], 4, 4) > > The remaining options are either to add a keyword flag to the function, > or to add a separate function (maybe called `diags` or something) to > toggle the more convenient offset semantics. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 6 09:26:14 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 06 Oct 2011 15:26:14 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: 06.10.2011 15:10, Tony Stillfjord kirjoitti: [clip] > I'm more in favour of a keyword flag rather than a new function. spdiags > feels like the accepted name for what is to be done, and then how to do it more > specifically should be an option within this framework (function). Maybe > 'padding = False/True' or 'exact_length = False/True'? Adding a new function for new semantics, rather than using a keyword argument, feels to me like a clearer design choice here. I think adding a new function called `diags` could be justified here. Numpy already has a function called `diag`, and with the new semantics one would have diags([a, b, c], [i, j, k]) == diag(a, i) + diag(b, j) + diag(c, k) which hangs together nicely. 
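That identity is easy to sanity-check in dense form with plain numpy (the example arrays below are made up for illustration): summing the single-diagonal matrices that `diag` builds yields the full banded matrix that the proposed `diags` would construct sparsely.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])       # offset -1 (sub-diagonal)
b = np.array([4.0, 5.0, 6.0, 7.0])  # offset  0 (main diagonal)
c = np.array([8.0, 9.0, 10.0])      # offset +1 (super-diagonal)

# np.diag(v, k) places v on the k-th diagonal of an otherwise-zero
# matrix, so diag(a, -1) + diag(b, 0) + diag(c, 1) assembles the
# whole tridiagonal matrix -- the dense analogue of diags([a,b,c], [-1,0,1]).
dense = np.diag(a, -1) + np.diag(b, 0) + np.diag(c, 1)
print(dense)
```

Note that the diagonals are passed at their natural lengths here (n-1 for the off-diagonals of an n-by-n matrix), with no zero-padding, which is exactly the convenience the new function is meant to provide.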
-- Pauli Virtanen From guyer at nist.gov Thu Oct 6 09:31:28 2011 From: guyer at nist.gov (Jonathan Guyer) Date: Thu, 6 Oct 2011 09:31:28 -0400 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: <342166CF-D760-478C-84B7-C170EA35065B@nist.gov> On Oct 6, 2011, at 9:26 AM, Pauli Virtanen wrote: > I think adding a new function called `diags` could be justified here. > Numpy already has a function called `diag`, and with the new semantics > one would have > > diags([a, b, c], [i, j, k]) == diag(a, i) + diag(b, j) + diag(c, k) > > which hangs together nicely. +1 From tony at maths.lth.se Thu Oct 6 09:34:10 2011 From: tony at maths.lth.se (Tony Stillfjord) Date: Thu, 6 Oct 2011 15:34:10 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: <342166CF-D760-478C-84B7-C170EA35065B@nist.gov> References: <342166CF-D760-478C-84B7-C170EA35065B@nist.gov> Message-ID: Well, when you put it like that, I guess I agree as well. :) On Thu, Oct 6, 2011 at 3:31 PM, Jonathan Guyer wrote: > > On Oct 6, 2011, at 9:26 AM, Pauli Virtanen wrote: > > > I think adding a new function called `diags` could be justified here. > > Numpy already has a function called `diag`, and with the new semantics > > one would have > > > > diags([a, b, c], [i, j, k]) == diag(a, i) + diag(b, j) + diag(c, k) > > > > which hangs together nicely. > > +1 > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From denis.laxalde at mcgill.ca Thu Oct 6 10:41:19 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Thu, 6 Oct 2011 10:41:19 -0400 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: Message-ID: <20111006104119.6cb16fce@mcgill.ca> On Thu, 06 Oct 2011 15:26:14 +0200, Pauli Virtanen wrote: > Adding a new function for new semantics, rather than using a keyword > argument, feels to me like a clearer design choice here. > > I think adding a new function called `diags` could be justified here. > Numpy already has a function called `diag`, and with the new semantics > one would have > > diags([a, b, c], [i, j, k]) == diag(a, i) + diag(b, j) + diag(c, k) > > which hangs together nicely. However this trailing 's' is not so consistent with other sparse matrix constructors (e.g. eye, kron, identity) which follow NumPy names. -- Denis Laxalde From pav at iki.fi Thu Oct 6 10:57:37 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 06 Oct 2011 16:57:37 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: <20111006104119.6cb16fce@mcgill.ca> References: <20111006104119.6cb16fce@mcgill.ca> Message-ID: 06.10.2011 16:41, Denis Laxalde kirjoitti: [clip] > However this trailing 's' is not so consistent with other sparse matrix > constructors (e.g. eye, kron, identity) which follow NumPy names. Those, however, are just the sparse version of their Numpy counterparts. The Numpy `diag` function only takes a single diagonal, so one cannot use that as the name. 
-- Pauli Virtanen From denis.laxalde at mcgill.ca Thu Oct 6 11:05:09 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Thu, 6 Oct 2011 11:05:09 -0400 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: <20111006104119.6cb16fce@mcgill.ca> Message-ID: <20111006110509.7e9dfd20@mcgill.ca> On Thu, 06 Oct 2011 16:57:37 +0200, Pauli Virtanen wrote: > > However this trailing 's' is not so consistent with other sparse matrix > > constructors (e.g. eye, kron, identity) which follow NumPy names. > > Those, however, are just the sparse version of their Numpy counterparts. > The Numpy `diag` function only takes a single diagonal, so one cannot > use that as the name. That's a good point, indeed. From pav at iki.fi Thu Oct 6 16:38:32 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 06 Oct 2011 22:38:32 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: <342166CF-D760-478C-84B7-C170EA35065B@nist.gov> Message-ID: 06.10.2011 15:34, Tony Stillfjord kirjoitti: [clip] There goes: https://github.com/scipy/scipy/pull/91 -- Pauli Virtanen From martin.dulovits at woogieworks.at Thu Oct 6 18:09:24 2011 From: martin.dulovits at woogieworks.at (Martin Dulovits) Date: Fri, 07 Oct 2011 00:09:24 +0200 Subject: [SciPy-Dev] odepack.lib(vode.o) : error LNK2019 In-Reply-To: References: Message-ID: <4E8E2714.70609@woogieworks.at> I am trying to compile scipy for python 2.6 but with VC8 and Intel Fortran compiler for win64. I tried Scipy-0.9.0 and the latest scipy-0.10.0b2.
I start compiling with this commands : scipy-0.10.0b2>python2.6.exe setup.py config --fcompiler=intelem config_fc scipy-0.10.0b2>python2.6.exe setup.py install I compiles some time but alway ends in the following linking errors : C:\Root\VS2005\VC\BIN\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:c:\Ro ot\Houdini\V11_1_035\python26\libs /LIBPATH:c:\Root\Houdini\V11_1_035\python26\l ibs /LIBPATH:c:\Root\Houdini\V11_1_035\python26\PC\VS8.0\win32release /LIBPATH:b uild\temp.win-amd64-2.6 /LIBPATH:c:\Root\Houdini\V11_1_035\python26\libs /LIBPAT H:c:\Root\Houdini\V11_1_035\python26\PC\VS8.0\win32release /LIBPATH:build\temp.w in-amd64-2.6 odepack.lib linpack_lite.lib mach.lib blas.lib /EXPORT:init_odepack build\temp.win-amd64-2.6\Release\scipy\integrate\_odepackmodule.obj /OUT:build\ lib.win-amd64-2.6\scipy\integrate\_odepack.pyd /IMPLIB:build\temp.win-amd64-2.6\ Release\scipy\integrate\_odepack.lib /MANIFESTFILE:build\temp.win-amd64-2.6\Rele ase\scipy\integrate\_odepack.pyd.manifest _odepackmodule.obj : warning LNK4197: export 'init_odepack' specified multiple t imes; using first specification Creating library build\temp.win-amd64-2.6\Release\scipy\integrate\_odepack.li b and object build\temp.win-amd64-2.6\Release\scipy\integrate\_odepack.exp odepack.lib(vode.o) : error LNK2019: unresolved external symbol dcopy_ reference d in function dvode_ odepack.lib(vode.o) : error LNK2019: unresolved external symbol dscal_ reference d in function dvode_ linpack_lite.lib(dgbfa.o) : error LNK2001: unresolved external symbol dscal_ linpack_lite.lib(dgefa.o) : error LNK2001: unresolved external symbol dscal_ linpack_lite.lib(dgesl.o) : error LNK2001: unresolved external symbol daxpy_ odepack.lib(vode.o) : error LNK2019: unresolved external symbol daxpy_ reference d in function dvstep_ linpack_lite.lib(dgbfa.o) : error LNK2001: unresolved external symbol daxpy_ linpack_lite.lib(dgefa.o) : error LNK2001: unresolved external symbol daxpy_ linpack_lite.lib(dgbsl.o) : error 
LNK2001: unresolved external symbol daxpy_ linpack_lite.lib(dgbfa.o) : error LNK2019: unresolved external symbol idamax_ re ferenced in function dgbfa_ linpack_lite.lib(dgefa.o) : error LNK2001: unresolved external symbol idamax_ linpack_lite.lib(dgbsl.o) : error LNK2019: unresolved external symbol ddot_ refe renced in function dgbsl_ linpack_lite.lib(dgesl.o) : error LNK2001: unresolved external symbol ddot_ build\lib.win-amd64-2.6\scipy\integrate\_odepack.pyd : fatal error LNK1120: 5 un resolved externals error: Command "C:\Root\VS2005\VC\BIN\amd64\link.exe /DLL /nologo /INCREMENTAL:N O /LIBPATH:c:\Root\Houdini\V11_1_035\python26\libs /LIBPATH:c:\Root\Houdini\V11_ 1_035\python26\libs /LIBPATH:c:\Root\Houdini\V11_1_035\python26\PC\VS8.0\win32re lease /LIBPATH:build\temp.win-amd64-2.6 /LIBPATH:c:\Root\Houdini\V11_1_035\pytho n26\libs /LIBPATH:c:\Root\Houdini\V11_1_035\python26\PC\VS8.0\win32release /LIBP ATH:build\temp.win-amd64-2.6 odepack.lib linpack_lite.lib mach.lib blas.lib /EXP ORT:init_odepack build\temp.win-amd64-2.6\Release\scipy\integrate\_odepackmodule .obj /OUT:build\lib.win-amd64-2.6\scipy\integrate\_odepack.pyd /IMPLIB:build\tem p.win-amd64-2.6\Release\scipy\integrate\_odepack.lib /MANIFESTFILE:build\temp.wi n-amd64-2.6\Release\scipy\integrate\_odepack.pyd.manifest" failed with exit stat us 1120 I hope you can help -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at maths.lth.se Fri Oct 7 02:03:07 2011 From: tony at maths.lth.se (Tony Stillfjord) Date: Fri, 7 Oct 2011 08:03:07 +0200 Subject: [SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode In-Reply-To: References: <342166CF-D760-478C-84B7-C170EA35065B@nist.gov> Message-ID: Wow, that was fast. Now I'm going to come up with a list of other changes/improvements to the sparse module, and maybe they will be implemented on Monday. :) Nah, seriously, thanks for the quick work. 
I had my own work-around function for the last few days too, but having it officially (soon, I guess) in SciPy is so much more convenient. Kind regards, Tony Stillfjord On Thu, Oct 6, 2011 at 10:38 PM, Pauli Virtanen wrote: > 06.10.2011 15:34, Tony Stillfjord kirjoitti: > [clip] > > There goes: > > https://github.com/scipy/scipy/pull/91 > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Oct 10 18:22:27 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 10 Oct 2011 18:22:27 -0400 Subject: [SciPy-Dev] weird results in frozen distribution fit Message-ID: Given a report on the scipy-user mailing list http://projects.scipy.org/scipy/ticket/1536 I started for the first time to look at some examples with the frozen fit introduced in scipy 0.9. I just picked randomly some distributions that came to mind, except for the reported lognorm. some cases look ok, maybe. Some cases look "weird" Josef lognorm true 0.25 0.0 20.0 estimated, floc=0, loc=0 [ 2.1221 0. 2.4403] [ 0.2303 -1.8759 21.9986] [ 2.1454 0. 2.3763] [ 0.2446 0.0305 19.9945] [ 2.1274 0. 2.414 ] [ 0.2469 0.0296 19.9415] [ 2.1362 0. 2.3897] [ 0.2525 0.4579 19.4758] [ 2.1334 0. 2.4104] [ 0.2484 0.0298 20.0291] [ 2.1033 0. 2.475 ] [ 2.5335e-01 1.0858e-02 1.9959e+01] [ 2.1266 0. 2.414 ] [ 0.2662 1.3626 18.5458] [ 2.1316 0. 2.4176] [ 0.2491 0.0299 20.0509] [ 2.0897 0. 2.5211] [ 0.276 1.4881 18.5202] [ 2.1382 0. 2.3738] [ 0.2465 0.0301 19.8241] gamma true 2 20 10.0 estimated, floc=0, loc=0 [ 2.0945 20. 9.6182] [ 1.9864 20.0832 10.2119] [ 2.0317 20. 10. ] [ 2.0084 20. 10.101 ] [ 1.9364 20.1738 10.41 ] [ 2.0241 20. 10. ] [ 2.1359 20. 9.1555] [ 2.0353 19.8413 9.8974] [ 1.9937 20. 10. ] [ 2.2296 20. 8.6785] [ 2.152 19.8834 9.1795] [ 1.9948 20. 10. ] [ 1.8822 20. 
10.8936] [ 2.0389 19.9836 9.8365] [ 2.0104 20. 10. ] [ 1.8304 20. 11.3802] [ 2.0341 20.169 9.7599] [ 2.0214 20. 10. ] [ 1.972 20. 9.7958] [ 1.9107 20.0777 10.1223] [ 1.9409 20. 10. ] [ 1.6211 20. 13.3028] [ 1.9636 20.1903 10.1156] [ 2.0103 20. 10. ] [ 2.0413 20. 9.8165] [ 2.0238 19.9456 9.9794] [ 2.0121 20. 10. ] [ 2.0087 20. 10.1362] [ 2.0128 20.1212 9.9924] [ 2.0299 20. 10. ] normal true 0.0 2.0 estimated, floc=0, loc=0, fscale=2 [ 0. 2.0024] [-0.0417 2.0024] [-0.0417 2. ] [ 0. 2.0003] [-0.0879 2.0003] [-0.0879 2. ] [ 0. 1.9466] [-0.0133 1.9466] [-0.0133 2. ] [ 0. 1.9726] [-0.036 1.9726] [-0.0359 2. ] [ 0. 2.0007] [-0.0194 2.0007] [-0.0195 2. ] [ 0. 1.9565] [ 0.0338 1.9565] [ 0.0337 2. ] [ 0. 1.9674] [ 0.0288 1.9674] [ 0.0289 2. ] [ 0. 1.9962] [ 0.0051 1.9962] [ 0.0051 2. ] [ 0. 1.8901] [ 0.0179 1.8901] [ 0.0179 2. ] [ 0. 2.0017] [ 0.0307 2.0017] [ 0.0307 2. ] chi2 true 10 0.0 2.0 estimated, floc=0, loc=0 [ 3.8459 0. 6.1457] [ 10.9616 -0.6938 1.8646] [ 3.8314 0. 6.3129] [ 10.4894 -0.295 1.9516] [ 3.7449 0. 6.4353] [ 9.7304 -0.0364 2.0654] [ 3.8094 0. 6.2268] [ 10.4314 -0.3388 1.9283] [ 3.69 0. 6.5633] [ 8.4951 1.1035 2.2316] [ 3.6475 0. 6.6415] [ 8.3315 0.8241 2.3152] [ 3.7288 0. 6.4983] [ 9.4114 0.2128 2.1185] [ 3.8005 0. 6.2852] [ 10.0038 -0.0154 1.9917] [ 3.7384 0. 6.4327] [ 8.4504 1.1931 2.224 ] [ 3.9063 0. 6.0929] [ 10.3413 0.2105 1.902 ] >>> -------------- next part -------------- A non-text attachment was scrubbed... Name: try_scipy_floc_bug.py Type: text/x-python Size: 1974 bytes Desc: not available URL: From ognen at enthought.com Mon Oct 10 19:26:10 2011 From: ognen at enthought.com (Ognen Duzlevski) Date: Mon, 10 Oct 2011 18:26:10 -0500 Subject: [SciPy-Dev] New machine for scipy.org Message-ID: Hello, I would like to use this email to introduce myself - my name is Ognen and I am Enthought's new IT director/Unix sysadmin. I would like to get your opinion on perhaps moving the scipy.org machine to an Amazon EC2 instance. 
I feel that such a solution would be more flexible and more accessible to people (not to mention probably faster and more stable). The box is right now (from what I can tell) an OpenVZ container on an older model physical Dell server housed in our data center, the container itself is backed up daily but were there to be connectivity or hardware issues, I am not sure recovery would be instantaneous. The underlying OpenVZ host seems to be low on disk space (and so is the scipy.org container consequently). The box itself runs an old RedHat version (I am not even sure it can be upgraded at this point). None of the above is really critical but I would like to improve the service if I can. Naturally, I would be more than happy to keep administering the new box and keep whoever else has administrative privileges to the old one. Opinions, thoughts...? Thanks! Ognen From gael.varoquaux at normalesup.org Tue Oct 11 01:10:10 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 11 Oct 2011 07:10:10 +0200 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: References: Message-ID: <20111011051010.GB26373@phare.normalesup.org> On Mon, Oct 10, 2011 at 06:26:10PM -0500, Ognen Duzlevski wrote: > I would like to get your opinion on perhaps moving the scipy.org > machine to an Amazon EC2 instance. > [...] > Opinions, thoughts...? Certainly sounds good to me. Thanks for doing this! Gael From cournape at gmail.com Tue Oct 11 03:54:58 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 11 Oct 2011 08:54:58 +0100 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: References: Message-ID: Hi Ognen, Thank you for engaging with the community on this point. On Tue, Oct 11, 2011 at 12:26 AM, Ognen Duzlevski wrote: > Hello, > > I would like to use this email to introduce myself - my name is Ognen > and I am Enthought's new IT director/Unix sysadmin. 
I would like to > get your opinion on perhaps moving the scipy.org machine to an Amazon > EC2 instance. I feel that such a solution would be more flexible and > more accessible to people (not to mention probably faster and more > stable). The box is right now (from what I can tell) an OpenVZ > container on an older model physical Dell server housed in our data > center, the container itself is backed up daily but were there to be > connectivity or hardware issues, I am not sure recovery would be > instantaneous. The underlying OpenVZ host seems to be low on disk > space (and so is the scipy.org container consequently). The box itself > runs an old RedHat version (I am not even sure it can be upgraded at > this point). None of the above is really critical but I would like to > improve the service if I can. EC2 makes sense. We could finally declare scipy cloud-ready (chuckles). A couple of questions:

- IIRC, one of the main issues for scipy.org was memory. Do you know how much memory was used there?
- What would be the backup strategy? I think it is important to stress to people unfamiliar with EC2 that the instance itself shall not be considered "stable".

cheers, David From grzegorz10brzeczyszczykiewicz at gmail.com Tue Oct 11 05:27:47 2011 From: grzegorz10brzeczyszczykiewicz at gmail.com (Grzegorz Brzęczyszczykiewicz) Date: Tue, 11 Oct 2011 11:27:47 +0200 Subject: [SciPy-Dev] numpy with dynamic python Message-ID: Hi, first of all I'm not sure whether this is the right mailing list; its description looked closest to the nature of my problem, so if my question belongs on another list, please let me know and I will post it there :). I need to compile numpy so that it uses Python as a dynamic library. When you compile Python from source, it usually builds libpython2.X.a. I have built libpython2.X.so instead. My app uses it, and the ldd command shows that the app is loading libpython2.X.so.
Unfortunately, I have tried using -L and -l in the setup.py script to add the location of my lib, and the -python2.X flag to gcc. After the build, numpy does not load the libpython SO file. For instance, this is what ldd shows on multiarray.so:

    linux-gate.so.1 => (0x00f90000)
    libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0x0033f000)
    libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0x00632000)
    libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0x00a8e000)
    /lib/ld-linux.so.2 (0x00952000)

Is there an easy way to tell numpy at build time to use libpython2.X.so? Thx for your time :) Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From subha_rajk at yahoo.com Tue Oct 11 10:20:31 2011 From: subha_rajk at yahoo.com (Subha Raj) Date: Tue, 11 Oct 2011 07:20:31 -0700 (PDT) Subject: [SciPy-Dev] Scipy Hilbert function in C# In-Reply-To: <1318341911.23417.YahooMailClassic@web161204.mail.bf1.yahoo.com> Message-ID: <1318342831.68677.YahooMailClassic@web161201.mail.bf1.yahoo.com> > Hello All,
> I simply need to port the scipy Hilbert function to C# and
> I am looking to use an open-source library (such as
> Exocortex) for the FFT component of it. I have the following
> code snippet.
>
> float[] trace = new float[] { 1, 2, 3, 4 };
> Fourier.FFT(trace, trace.Length/2, FourierDirection.Forward);
> Fourier.FFT(trace, trace.Length/2, FourierDirection.Forward);
> List<float> hilbert = new List<float>(trace.Length);
> for (int i = 0; i < trace.Length; i++)
> {
>     hilbert.Add(0.0f);
> }
> for (int i = 1; i < trace.Length/2; i++)
> {
>     hilbert[i] = 2;
> }
> if (hilbert.Count/2 == 0)
> {
>     hilbert[0] = hilbert[hilbert.Count/2] = 1;
> }
> List<float> hilbertResult = new List<float>();
> for (int i = 0; i < trace.Length; i++)
> {
>     hilbertResult.Add(trace[i] * hilbert[i]);
> }
> float[] hilbertResult2 = hilbertResult.ToArray();
> Fourier.FFT(hilbertResult2, hilbertResult.Count/2, FourierDirection.Backward);
> hilbertResult2.Take(trace.Length);
>
> But something doesn't seem to be working right. Can someone help?
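For reference, the recipe that scipy.signal.hilbert follows — one forward FFT of the full-length real signal, a weight vector (1, 2, ..., 2, 1 for even lengths), then one inverse FFT — can be sketched in Python. This is a minimal sketch of the algorithm, not the scipy source, and a C# port would mirror it (note in particular that the second transform is an inverse transform, not a second forward FFT):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real sequence via the FFT method
    (the same frequency-domain recipe scipy.signal.hilbert uses)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)               # one forward FFT of the whole signal
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0      # keep DC and Nyquist bins as-is
        h[1:n // 2] = 2.0           # double the positive frequencies
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)       # one inverse FFT, not a second forward

# For a pure cosine the analytic signal is exp(i*theta): the real part
# recovers the input and the magnitude gives the constant envelope.
t = 2 * np.pi * np.arange(16) / 16
z = analytic_signal(np.cos(t))
```

The Hilbert transform itself is the imaginary part of this result; the real part should reproduce the input exactly, which gives a simple correctness check for any port.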
> Regards,
> Subha
>
From ognen at enthought.com Tue Oct 11 14:41:37 2011 From: ognen at enthought.com (Ognen Duzlevski) Date: Tue, 11 Oct 2011 13:41:37 -0500 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: References: Message-ID: On Tue, Oct 11, 2011 at 2:54 AM, David Cournapeau wrote: > A couple of questions: > - IIRC, one of the main issues for scipy.org was memory. Do you know > how much memory was used there? Yes, we will go with a 4-core large instance that (I think) offers 2 GB of RAM. I am going to make sure that there is enough in terms of CPU and memory, whatever we need really. There are a bunch of things running on that machine and I would like to start getting input on what needs to stay and what needs to go (if anything) ;) > - What would be the backup strategy? I think it is important to > stress to people unfamiliar with EC2 that the instance itself shall > not be considered "stable". Well, I am planning on using an S3 bucket for storing things that need to be permanent (configuration, data, mail archives, etc.). For the actual Linux image, we will install and bundle a new image when ready and store it somewhere so we can always go back to it. Of course, at any point, any involvement, advice, experiences etc. is welcome and encouraged :) Ognen From gael.varoquaux at normalesup.org Tue Oct 11 14:52:03 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 11 Oct 2011 20:52:03 +0200 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: References: Message-ID: <20111011185203.GN26373@phare.normalesup.org> On Tue, Oct 11, 2011 at 01:41:37PM -0500, Ognen Duzlevski wrote: > There are a bunch of things running on that machine and I would like > to start getting input on what needs to stay and what needs to go (if > anything) ;) Planet scipy is hosted on that box. It's a bunch of Python scripts in /var/www/planet, and a cron job.
It needs to stay :) Gael From ognen at enthought.com Tue Oct 11 15:43:16 2011 From: ognen at enthought.com (Ognen Duzlevski) Date: Tue, 11 Oct 2011 14:43:16 -0500 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: <20111011185203.GN26373@phare.normalesup.org> References: <20111011185203.GN26373@phare.normalesup.org> Message-ID: On Tue, Oct 11, 2011 at 1:52 PM, Gael Varoquaux wrote: > On Tue, Oct 11, 2011 at 01:41:37PM -0500, Ognen Duzlevski wrote: >> There are a bunch of things running on that machine and I would like >> to start getting input on what needs to stay and what needs to go (if >> anything) ;) > > Planet scipy is hosted on that box. It's a bunch of Python scripts in > /var/www/planet, and a cron job. It needs to stay :) Works for me :). How about I post the output of 'ps -ef' and ask questions about certain things that I am curious about? Ognen From gael.varoquaux at normalesup.org Tue Oct 11 15:49:31 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 11 Oct 2011 21:49:31 +0200 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: References: <20111011185203.GN26373@phare.normalesup.org> Message-ID: <20111011194931.GO26373@phare.normalesup.org> On Tue, Oct 11, 2011 at 02:43:16PM -0500, Ognen Duzlevski wrote: > Works for me :). How about I post the output of 'ps -ef' and ask > questions about certain things that I am curious about? You might not get answers for everything :). I just did that; I note a few important lines whose authors might not be around on the mailing list:

swc: Software Carpentry. You need to ask Greg Wilson about that one. But I suspect that he doesn't need the scipy hosting anymore.
wavelets: pywavelets. It seems that this has moved too.
Gael From ognen at enthought.com Tue Oct 11 15:52:05 2011 From: ognen at enthought.com (Ognen Duzlevski) Date: Tue, 11 Oct 2011 14:52:05 -0500 Subject: [SciPy-Dev] New machine for scipy.org In-Reply-To: <20111011194931.GO26373@phare.normalesup.org> References: <20111011185203.GN26373@phare.normalesup.org> <20111011194931.GO26373@phare.normalesup.org> Message-ID: On Tue, Oct 11, 2011 at 2:49 PM, Gael Varoquaux wrote: > On Tue, Oct 11, 2011 at 02:43:16PM -0500, Ognen Duzlevski wrote: >> Works for me :). How about I post the output of 'ps -ef' and ask >> questions about certain things that I am curious about? > > You might not get answers for everything :). I just did that, I note a > few important lines, whose authors might not be around on the mailing > list: > > swc: Software Carpentry. You need to ask Greg Wilson about that one. > But I suspect that he doesn't need the scipy hosting anymore. > wavelets: pywavelets. It seems that this has moved too. Thank you for doing that. I will ask Greg and see what is going on and if it is still needed. I am not intending on re-purposing the "old" scipy.org machine even after we move to the EC2 instance. I will preserve it (turn it off) after the transition. So, if anything turns out to be needed - we can always go back for files etc. Ognen From conradlee at gmail.com Wed Oct 12 07:12:18 2011 From: conradlee at gmail.com (Conrad Lee) Date: Wed, 12 Oct 2011 12:12:18 +0100 Subject: [SciPy-Dev] Faster implementation of cluster.hierarchy Message-ID: A mathematician at Stanford named Daniel Müllner recently came up with a package that implements the hierarchical clustering methods found in scipy.cluster.hierarchy. His implementation is in C++, but includes a python API that uses the same interface as scipy.cluster.hierarchy. Müllner has posted benchmarks as well as algorithmic explanations of why his implementation is faster in a paper on arXiv. He also has a webpage that describes the package here.
Because the results of the benchmarks look good, I am interested in getting the scikit-learn package to use this implementation for the hierarchical clustering provided by that package. Rather than integrate the code in scikit-learn, it seems more appropriate to integrate it upstream in scipy.cluster.hierarchy. Is there anyone who is interested in this integration? I am inexperienced with integrating C++ code and python code, and also with how things work in the scipy project, so I'm not sure how to proceed. Note: Although Müllner's code is currently under a GPL license, he has stated to me in e-mail that he would be willing to put it under the BSD-2 license if somebody put in the time to integrate it into scipy. Best regards, Conrad Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Wed Oct 12 10:18:00 2011 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 12 Oct 2011 09:18:00 -0500 Subject: [SciPy-Dev] Faster implementation of cluster.hierarchy In-Reply-To: References: Message-ID: This looks very cool. If we can get the BSD license then it would be perfect for integration into SciPy. It would be best if you could use Cython for the integration work. You can fork the SciPy project on github and start making your changes using another package's setup.py as an example. Thanks for your contributions. Best, Travis -- Travis Oliphant (on a mobile) 512-826-7480 On Oct 12, 2011, at 6:12 AM, Conrad Lee wrote: > A mathematician at Stanford named Daniel Müllner recently came up with a package that implements the hierarchical clustering methods found in scipy.cluster.hierarchy. His implementation is in C++, but includes a python API that uses the same interface as scipy.cluster.hierarchy. > > Müllner has posted benchmarks as well as algorithmic explanations of why his implementation is faster in a paper on arXiv. He also has a webpage that describes the package here.
> > Because the results of the benchmarks look good, I am interested in getting the scikit-learn package to use this implementation for the hierarchical clustering provided by that package. Rather than integrate the code in scikit-learn, it seems more appropriate to integrate it upstream in scipy.cluster.hierarchy. Is there anyone who is interested in this integration? I am inexperienced with integrating C++ code and Python code, and also with how things work in the scipy project, so I'm not sure how to proceed. > > Note: Although Müllner's code is currently under a GPL license, he has stated to me in e-mail that he would be willing to put it under the BSD-2 license if somebody put in the time to integrate it into scipy. > > Best regards, > > Conrad Lee > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Oct 12 10:17:37 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 12 Oct 2011 08:17:37 -0600 Subject: [SciPy-Dev] Faster implementation of cluster.hierarchy In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 5:12 AM, Conrad Lee wrote: > A mathematician at Stanford named Daniel Müllner recently came up with a > package that implements the hierarchical clustering methods found in > scipy.cluster.hierarchy. His implementation is in C++, but includes a > Python API that uses the same interface as scipy.cluster.hierarchy. > > Müllner has posted benchmarks as well as algorithmic explanations of why > his implementation is faster in a paper on arXiv. > He also has a webpage that describes the package here > .
Rather than integrate the code in > scikit-learn, it seems more appropriate to integrate it upstream in > scipy.cluster.hierarchy. Is there anyone who is interested in this > integration? I am inexperienced with integrating C++ code and Python code, > and also with how things work in the scipy project, so I'm not sure how to > proceed. > > Note: Although Müllner's code is currently under a GPL license, he has > stated to me in e-mail that he would be willing to put it under the BSD-2 > license if somebody put in the time to integrate it into scipy. > > Not my area, but I think it is a good thing to encourage such contributions. If the new code preserves the interface, comes with tests and documentation, and performs better, then I am all in favor of getting it in. I believe there is already a fair amount of C++ in scipy, so that shouldn't be a problem and there are probably folks who can give you advice on how to proceed. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Oct 12 14:30:35 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 12 Oct 2011 20:30:35 +0200 Subject: [SciPy-Dev] Faster implementation of cluster.hierarchy In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 4:17 PM, Charles R Harris wrote: > > > On Wed, Oct 12, 2011 at 5:12 AM, Conrad Lee wrote: > >> A mathematician at Stanford named Daniel Müllner recently came up with a >> package that implements the hierarchical clustering methods found in >> scipy.cluster.hierarchy. His implementation is in C++, but includes a >> Python API that uses the same interface as scipy.cluster.hierarchy. >> >> Müllner has posted benchmarks as well as algorithmic explanations of why >> his implementation is faster in a paper on arXiv. >> He also has a webpage that describes the package here >> .
>> >> Because the results of the benchmarks look good, I am interested in >> getting the scikit-learn package to use this implementation for the >> hierarchical clustering provided by that package. Rather than integrate the >> code in scikit-learn, it seems more appropriate to integrate it upstream in >> scipy.cluster.hierarchy. Is there anyone who is interested in this >> integration? I am inexperienced with integrating C++ code and Python code, >> and also with how things work in the scipy project, so I'm not sure how to >> proceed. >> >> Note: Although Müllner's code is currently under a GPL license, he has >> stated to me in e-mail that he would be willing to put it under the BSD-2 >> license if somebody put in the time to integrate it into scipy. >> >> > Not my area, but I think it is a good thing to encourage such > contributions. > Agreed, if you mean state-of-the-art algorithms - 2 to 3 orders of magnitude speedup would be very nice to have. > If the new code preserves the interface, comes with tests and > documentation, and performs better, then I am all in favor of getting it in. > I believe there is already a fair amount of C++ in scipy, so that shouldn't > be a problem and there are probably folks who can give you advice on how to > proceed. > Not sure what you consider a fair amount, but it's basically one file in interpolate and sparse.sparsetools. Plus weave of course, but that's unmaintained. The sparsetools code is a pain, it takes roughly as much time to compile as the rest of scipy combined on my machine. Combine that with the few people who know C++ well, and it leads me to think that the bar for adding C++ code should be set high. Ralf -------------- next part -------------- An HTML attachment was scrubbed...
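Since the thread hinges on Müllner's package exposing the same call surface as scipy.cluster.hierarchy, here is a minimal sketch of that interface as it stands in scipy; a drop-in replacement would have to accept the same arguments and return the same (n-1) x 4 linkage matrix. (The import name of Müllner's package is not given in the thread, so swapping it in is left as an assumption here.)

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.RandomState(0)
X = rng.rand(20, 3)               # 20 observations with 3 features each

# The call any compatible implementation must preserve: returns the
# (n - 1) x 4 linkage matrix that the rest of scipy.cluster.hierarchy consumes.
Z = linkage(X, method='average')

# Downstream functions such as fcluster keep working on that matrix.
labels = fcluster(Z, t=3, criterion='maxclust')
```

If the benchmarks hold up, adopting the faster implementation would then amount to a one-line change of the `linkage` import, with `fcluster`, `dendrogram`, and the other consumers of the linkage matrix left untouched.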
URL: From zunzun at zunzun.com Fri Oct 14 12:04:12 2011 From: zunzun at zunzun.com (James Phillips) Date: Fri, 14 Oct 2011 11:04:12 -0500 Subject: [SciPy-Dev] Update on pythonequations unit tests Message-ID: From: Alan G Isaac gmail.com> Subject: Re: Subversion scipy.stats irregular problem with source code example Newsgroups: gmane.comp.python.scientific.devel Date: 2010-09-28 18:10:43 GMT (1 year, 2 weeks, 1 day, 15 hours and 46 minutes ago) As long as you can provide unit tests, I don't see a problem. But you and Skipper shd work out the details. Cheers, Alan Isaac ------------------------------------------------------------------------------------------------------------------ I have 76 unit tests so far, see below. I'm working on more, and expect to be done sometime around the end of this year. James Phillips http://zunzun.com testCalculateCoefficientAndFitStatisticsUsingDE_2D (Test_CalculateCoefficientAndFitStatistics.TestCalculateCoefficientAndFitStatistics) ... ok testCalculateCoefficientAndFitStatisticsUsingSpline_2D (Test_CalculateCoefficientAndFitStatistics.TestCalculateCoefficientAndFitStatistics) ... ok testDataCache_2D (Test_DataCache.TestDataCache) ... ok testDataCache_3D (Test_DataCache.TestDataCache) ... ok testReducedDataSize_2D (Test_DataCache.TestDataCache) ... ok testConversionOfColumns_ASCII_2D_NoWeights (Test_DataConverterService.TestConversions) ... ok testConversionOfColumns_ASCII_2D_NoWeights_ExampleData (Test_DataConverterService.TestConversions) ... ok testConversionOfColumns_ASCII_2D_Weights (Test_DataConverterService.TestConversions) ... ok testConversionOfColumns_ASCII_3D_NoWeights (Test_DataConverterService.TestConversions) ... ok testConversionOfColumns_ASCII_3D_Weights (Test_DataConverterService.TestConversions) ... ok testPolynomialLinear_2D (Test_EquationPolynomials.TestEquationPolynomials) ... ok testPolynomialLinear_3D (Test_EquationPolynomials.TestEquationPolynomials) ... 
ok testExtendedVersion_Exponential_WithExponentialDecayAndOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithExponentialDecay_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithExponentialGrowthAndOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithExponentialGrowth_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithLinearDecayAndOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithLinearDecay_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithLinearGrowthAndOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithLinearGrowth_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Exponential_WithOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Inverse_Exponential_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Inverse_Exponential_WithOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Reciprocal_Exponential_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testExtendedVersion_Reciprocal_Exponential_WithOffset_2D (Test_ExtendedVersionHandlers.TestExtendedVersionHandlers) ... ok testArcTangent_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testCosine_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testExponential_VariableTimesNegativeOne_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testExponential_VariableUnchanged_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testHyperbolicCosine_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... 
ok testHyperbolicSine_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testHyperbolicTangent_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testLog_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testOffset_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_NegativeOne_OfLog_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_NegativeOne_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_NegativeTwo_OfLog_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_NegativeTwo_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_NegativeZeroPointFive_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_OnePointFive_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_Two_OfLog_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_Two_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testPower_ZeroPointFive_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testSine_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testTangent_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testVariableUnchanged_Term (Test_IndividualPolyFunctions.TestPolyFunctions) ... ok testSplineSolve_2D (Test_ModelSolveMethods.TestModelSolveMethods) ... ok testSplineSolve_3D (Test_ModelSolveMethods.TestModelSolveMethods) ... ok testUserDefinedFunctionSolve_2D (Test_ModelSolveMethods.TestModelSolveMethods) ... ok testUserDefinedFunctionSolve_3D (Test_ModelSolveMethods.TestModelSolveMethods) ... ok testConversionFromCppToCSHARP (Test_OutputSourceCodeService.TestConversionsFromCPP) ... ok testConversionFromCppToJAVA (Test_OutputSourceCodeService.TestConversionsFromCPP) ... ok testConversionFromCppToMATLAB (Test_OutputSourceCodeService.TestConversionsFromCPP) ... ok testConversionFromCppToPYTHON (Test_OutputSourceCodeService.TestConversionsFromCPP) ... 
ok testConversionFromCppToSCILAB (Test_OutputSourceCodeService.TestConversionsFromCPP) ... ok testConversionFromCppToVBA (Test_OutputSourceCodeService.TestConversionsFromCPP) ... ok testGenerationOfCPP (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfCSHARP (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfJAVA (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfMATLAB (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfPYTHON (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfSCILAB (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testGenerationOfVBA (Test_OutputSourceCodeService.TestGenerationOfOutputSourceCode) ... ok testSolveUsingDE_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingDE_3D (Test_SolverService.TestSolverService) ... ok testSolveUsingLevenbergMarquardt_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingLevenbergMarquardt_3D (Test_SolverService.TestSolverService) ... ok testSolveUsingLinear_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingLinear_3D (Test_SolverService.TestSolverService) ... ok testSolveUsingODR_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingODR_3D (Test_SolverService.TestSolverService) ... ok testSolveUsingSimplex_3D (Test_SolverService.TestSolverService) ... ok testSolveUsingSimplex_SSQABS_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingSimplex_SSQREL_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingSpline_2D (Test_SolverService.TestSolverService) ... ok testSolveUsingSpline_3D (Test_SolverService.TestSolverService) ... ok ---------------------------------------------------------------------- Ran 76 tests in 94.615s OK -------------- next part -------------- An HTML attachment was scrubbed... 
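For readers curious what one entry in the list above amounts to, here is a hedged sketch of a solver test in the same spirit. The class and method names mirror the listing (e.g. testSolveUsingLevenbergMarquardt_2D), but the body is hypothetical and uses scipy.optimize.curve_fit as a stand-in rather than the actual pythonequations solver service.

```python
import unittest

import numpy as np
from scipy.optimize import curve_fit

def exponential_decay(x, a, b):
    return a * np.exp(-b * x)

class TestSolverService(unittest.TestCase):
    # Hypothetical stand-in for testSolveUsingLevenbergMarquardt_2D:
    # generate noiseless synthetic data, fit it, and check that the
    # solver recovers the known parameters.
    def testSolveUsingLevenbergMarquardt_2D(self):
        x = np.linspace(0.0, 10.0, 50)
        y = exponential_decay(x, 3.0, 0.5)
        popt, _ = curve_fit(exponential_decay, x, y, p0=(1.0, 0.1))
        np.testing.assert_allclose(popt, [3.0, 0.5], rtol=1e-3)
```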
URL: From akshar.bhosale at gmail.com Tue Oct 18 10:21:51 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Tue, 18 Oct 2011 19:51:51 +0530 Subject: [SciPy-Dev] Fwd: scipy.test gives error In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 12:23 PM, akshar bhosale wrote: > > > ---------- Forwarded message ---------- > From: akshar bhosale > Date: Mon, Oct 17, 2011 at 11:26 PM > Subject: scipy.test gives error > To: scipy-user at scipy.org, scipy-dev-owner at scipy.org > > > Hi, > scipy.test gives the following error: > MKL FATAL ERROR : can not load libmkl_lapack.so > import numpy;import scipy does not throw any error. > This library is present in the LD_LIBRARY_PATH. > What can be the issue? > These modules were installed a few days back and used to work. > > -Aksharb > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Tue Oct 18 12:43:34 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Tue, 18 Oct 2011 22:13:34 +0530 Subject: [SciPy-Dev] Fwd: scipy.test gives error In-Reply-To: References: Message-ID: python -c 'import scipy;scipy.test(verbose=10)' gives me: NumPy version 1.6.0 NumPy is installed in /home/akshar/.local/lib/python2.6/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /home/akshar/.local/lib/python2.6/site-packages/scipy Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.selector: INFO: /home/akshar/.local/lib/python2.6/site-packages/scipy/fftpack/convolve.so is executable; skipped nose.selector: INFO: /home/akshar/.local/lib/python2.6/site-packages/scipy/integrate/vode.so is executable; skipped . . . . . Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage.
Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... 
ok Testing that kmeans2 init methods work. ... MKL FATAL ERROR: Cannot load libmkl_lapack.so ---------- Forwarded message ---------- From: akshar bhosale Date: Tue, Oct 18, 2011 at 7:51 PM Subject: Fwd: scipy.test gives error To: scipy-dev at scipy.org On Tue, Oct 18, 2011 at 12:23 PM, akshar bhosale wrote: > > > ---------- Forwarded message ---------- > From: akshar bhosale > Date: Mon, Oct 17, 2011 at 11:26 PM > Subject: scipy.test gives error > To: scipy-user at scipy.org, scipy-dev-owner at scipy.org > > > Hi, > scipy.test gives following error : > MKL FATAL ERROR : can not load libmkl_lapack.so > import numpy;import scipy does not throw any error. > This library is present in the LD_LIBARY_PATH. > What can be the issue? > These modules were installed few days back and used to work. > > -Aksharb > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kmichael.aye at gmail.com Thu Oct 20 07:25:44 2011 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Thu, 20 Oct 2011 13:25:44 +0200 Subject: [SciPy-Dev] weird results in frozen distribution fit References: Message-ID: So, Josef, are you saying that we should not use frozen fits for the moment? I consider you one of the authorities on these matters, me as a fitting beginner am mostly lost in the choices I have. Do we have alternatives/work-arounds? Michael On 2011-10-10 22:22:27 +0000, josef.pktd at gmail.com said: > Given a report on the scipy-user mailing list > http://projects.scipy.org/scipy/ticket/1536 I started for the first > time to look at some examples with the frozen fit introduced in scipy > 0.9. I just picked randomly some distributions that came to mind, > except for the reported lognorm. > > some cases look ok, maybe. Some cases look "weird" > > Josef > > lognorm true > 0.25 0.0 20.0 > estimated, floc=0, loc=0 > [ 2.1221 0. 2.4403] [ 0.2303 -1.8759 21.9986] > [ 2.1454 0. 2.3763] [ 0.2446 0.0305 19.9945] > [ 2.1274 0. 2.414 ] [ 0.2469 0.0296 19.9415] > [ 2.1362 0. 
2.3897] [ 0.2525 0.4579 19.4758] > [ 2.1334 0. 2.4104] [ 0.2484 0.0298 20.0291] > [ 2.1033 0. 2.475 ] [ 2.5335e-01 1.0858e-02 1.9959e+01] > [ 2.1266 0. 2.414 ] [ 0.2662 1.3626 18.5458] > [ 2.1316 0. 2.4176] [ 0.2491 0.0299 20.0509] > [ 2.0897 0. 2.5211] [ 0.276 1.4881 18.5202] > [ 2.1382 0. 2.3738] [ 0.2465 0.0301 19.8241] > gamma true > 2 20 10.0 > estimated, floc=0, loc=0 > [ 2.0945 20. 9.6182] [ 1.9864 20.0832 10.2119] [ 2.0317 > 20. 10. ] > [ 2.0084 20. 10.101 ] [ 1.9364 20.1738 10.41 ] [ 2.0241 > 20. 10. ] > [ 2.1359 20. 9.1555] [ 2.0353 19.8413 9.8974] [ 1.9937 > 20. 10. ] > [ 2.2296 20. 8.6785] [ 2.152 19.8834 9.1795] [ 1.9948 > 20. 10. ] > [ 1.8822 20. 10.8936] [ 2.0389 19.9836 9.8365] [ 2.0104 > 20. 10. ] > [ 1.8304 20. 11.3802] [ 2.0341 20.169 9.7599] [ 2.0214 > 20. 10. ] > [ 1.972 20. 9.7958] [ 1.9107 20.0777 10.1223] [ 1.9409 > 20. 10. ] > [ 1.6211 20. 13.3028] [ 1.9636 20.1903 10.1156] [ 2.0103 > 20. 10. ] > [ 2.0413 20. 9.8165] [ 2.0238 19.9456 9.9794] [ 2.0121 > 20. 10. ] > [ 2.0087 20. 10.1362] [ 2.0128 20.1212 9.9924] [ 2.0299 > 20. 10. ] > normal true > 0.0 2.0 > estimated, floc=0, loc=0, fscale=2 > [ 0. 2.0024] [-0.0417 2.0024] [-0.0417 2. ] > [ 0. 2.0003] [-0.0879 2.0003] [-0.0879 2. ] > [ 0. 1.9466] [-0.0133 1.9466] [-0.0133 2. ] > [ 0. 1.9726] [-0.036 1.9726] [-0.0359 2. ] > [ 0. 2.0007] [-0.0194 2.0007] [-0.0195 2. ] > [ 0. 1.9565] [ 0.0338 1.9565] [ 0.0337 2. ] > [ 0. 1.9674] [ 0.0288 1.9674] [ 0.0289 2. ] > [ 0. 1.9962] [ 0.0051 1.9962] [ 0.0051 2. ] > [ 0. 1.8901] [ 0.0179 1.8901] [ 0.0179 2. ] > [ 0. 2.0017] [ 0.0307 2.0017] [ 0.0307 2. ] > chi2 true > 10 0.0 2.0 > estimated, floc=0, loc=0 > [ 3.8459 0. 6.1457] [ 10.9616 -0.6938 1.8646] > [ 3.8314 0. 6.3129] [ 10.4894 -0.295 1.9516] > [ 3.7449 0. 6.4353] [ 9.7304 -0.0364 2.0654] > [ 3.8094 0. 6.2268] [ 10.4314 -0.3388 1.9283] > [ 3.69 0. 6.5633] [ 8.4951 1.1035 2.2316] > [ 3.6475 0. 6.6415] [ 8.3315 0.8241 2.3152] > [ 3.7288 0. 6.4983] [ 9.4114 0.2128 2.1185] > [ 3.8005 0. 
6.2852] [ 10.0038 -0.0154 1.9917] > [ 3.7384 0. 6.4327] [ 8.4504 1.1931 2.224 ] > [ 3.9063 0. 6.0929] [ 10.3413 0.2105 1.902 ] >>>> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From kmichael.aye at gmail.com Thu Oct 20 07:59:05 2011 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Thu, 20 Oct 2011 13:59:05 +0200 Subject: [SciPy-Dev] Fwd: scipy.test gives error References: Message-ID: Motivated by this simple way to start some tests, I tried this on my Enthought installation, but it fails very early without running any test. What is going wrong? Do I see it right that an os.getcwd() call returns 'No such file or directory'? How can that be possible? maye at paradigm: results$ python Enthought Python Distribution -- www.enthought.com Version: 7.1-2 (32-bit) Python 2.7.2 |EPD 7.1-2 (32-bit)| (default, Jul 27 2011, 13:29:32) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "packages", "demo" or "enthought" for more information.
>>> exit() maye at paradigm: results$ python -c 'import scipy;scipy.test(verbose=10)' Running unit tests for scipy Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 318, in test self._show_system_info() File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 187, in _show_system_info nose = import_nose() File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 56, in import_nose import nose File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/nose/__init__.py", line 1, in from nose.core import collector, main, run, run_exit, runmodule File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/nose/core.py", line 11, in from nose.config import Config, all_config_files File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/nose/config.py", line 9, in from nose.plugins.manager import NoPlugins File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/nose/plugins/__init__.py", line 156, in from nose.plugins.manager import * File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/nose/plugins/manager.py", line 405, in import pkg_resources File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 2699, in _declare_state('object', working_set = WorkingSet()) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 435, in __init__ self.add_entry(entry) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 450, in add_entry for dist in find_distributions(entry, True): File 
"/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 1723, in find_on_path path_item = _normalize_cached(path_item) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 1872, in _normalize_cached _cache[filename] = result = normalize_path(filename) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/pkg_resources.py", line 1866, in normalize_path return os.path.normcase(os.path.realpath(filename)) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/posixpath.py", line 375, in realpath return abspath(filename) File "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/posixpath.py", line 347, in abspath cwd = os.getcwd() OSError: [Errno 2] No such file or directory On 2011-10-18 17:13:34 +0000, akshar bhosale said: > python -c 'import scipy;scipy.test(verbose=10)' gives me : > > NumPy version 1.6.0 > NumPy is installed in /home/akshar/.local/lib/python2.6/site-packages/numpy > SciPy version 0.9.0 > SciPy is installed in /home/akshar/.local/lib/python2.6/site-packages/scipy > Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 > 20071124 (Red Hat 4.1.2-42)] > nose version 1.0.0 > nose.config: INFO: Excluding tests matching ['f2py_ext', > 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] > nose.selector: INFO: > /home/akshar/.local/lib/python2.6/site-packages/scipy/fftpack/convolve.so > is executable; skipped > nose.selector: INFO: > /home/akshar/.local/lib/python2.6/site-packages/scipy/integrate/vode.so > is executable; skipped > . > . > . > . > . > Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with > different numbers of clusters. Expecting exception. ... ok > Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok > Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok > Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... 
ok > Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok > Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok > Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok > Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok > Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with > different numbers of clusters. Expecting exception. ... ok > Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok > Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok > Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok > Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok > Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok > Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok > Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok > Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with > different numbers of clusters. Expecting exception. ... ok > Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok > Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok > Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok > Tests maxRstat(Z, R, -1). Expecting exception. ... ok > Tests maxRstat(Z, R, 4). Expecting exception. ... ok > Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok > Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok > Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 > and 15 (step size 3). ... ok > Tests num_obs_linkage(Z) with empty linkage. ... ok > Tests to_mlab_linkage on linkage array with multiple rows. ... ok > Tests to_mlab_linkage on empty linkage array. ... ok > Tests to_mlab_linkage on linkage array with single row. ... ok > test_hierarchy.load_testing_files ... ok > Ticket #505. ... ok > Testing that kmeans2 init methods work. ... 
MKL FATAL ERROR: Cannot > load libmkl_lapack.so > > > ---------- Forwarded message ---------- > From: akshar bhosale > Date: Tue, Oct 18, 2011 at 7:51 PM > Subject: Fwd: scipy.test gives error > To: scipy-dev at scipy.org > > > > > > On Tue, Oct 18, 2011 at 12:23 PM, akshar bhosale > wrote: > > > > ---------- Forwarded message ---------- > From: akshar bhosale > Date: Mon, Oct 17, 2011 at 11:26 PM > Subject: scipy.test gives error > To: scipy-user at scipy.org, scipy-dev-owner at scipy.org > > > Hi, > scipy.test gives following error : > MKL FATAL ERROR : can not load libmkl_lapack.so > import numpy;import scipy does not throw any error. > This library is present in the LD_LIBARY_PATH. > What can be the issue? > These modules were installed few days back and used to work. > > -Aksharb > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Oct 20 09:01:47 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 20 Oct 2011 09:01:47 -0400 Subject: [SciPy-Dev] weird results in frozen distribution fit In-Reply-To: References: Message-ID: On Thu, Oct 20, 2011 at 7:25 AM, K.-Michael Aye wrote: > So, Josef, are you saying that we should not use frozen fits for the > moment? I consider you one of the authorities on these matters, me as a > fitting beginner am mostly lost in the choices I have. > Do we have alternatives/work-arounds? Yes, that's what I'm saying. The results are wrong in all examples that I tried for fitting with some frozen parameters with the scipy 0.9 release. Per Brodtkorb provided the bugfix in the ticket. It's just changing the indexing in a few lines. If you want, you could just change the few lines in your installed scipy. There is no workaround outside changing the source that I can think of. 
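For anyone wanting to reproduce the problem Josef describes, the call pattern under discussion looks like the sketch below: fitting a distribution with one parameter frozen via the floc keyword. On a scipy with Per Brodtkorb's fix applied the estimates come back close to the true values; on an unpatched 0.9 they do not. (The sample size and seed below are arbitrary choices, not taken from Josef's script.)

```python
import numpy as np
from scipy import stats

# Same lognormal as in Josef's first example: shape 0.25, loc 0, scale 20.
np.random.seed(42)
data = stats.lognorm.rvs(0.25, loc=0.0, scale=20.0, size=10000)

# The "frozen" fit: loc is held fixed at 0 while shape and scale are
# estimated. This is the code path that returns wrong results in the
# 0.9 release.
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# With the fix, shape comes back near 0.25 and scale near 20,
# while loc is returned exactly as frozen.
```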
(as a reminder: testing this part was not my responsibility.) Josef > > Michael > > On 2011-10-10 22:22:27 +0000, josef.pktd at gmail.com said: > >> Given a report on the scipy-user mailing list >> http://projects.scipy.org/scipy/ticket/1536 I started for the first >> time to look at some examples with the frozen fit introduced in scipy >> 0.9. I just picked randomly some distributions that came to mind, >> except for the reported lognorm. >> >> some cases look ok, maybe. Some cases look "weird" >> >> Josef >> >> lognorm true >> 0.25 0.0 20.0 >> estimated, floc=0, loc=0 >> [ 2.1221 0. 2.4403] [ 0.2303 -1.8759 21.9986] >> [ 2.1454 0. 2.3763] [ 0.2446 0.0305 19.9945] >> [ 2.1274 0. 2.414 ] [ 0.2469 0.0296 19.9415] >> [ 2.1362 0. 2.3897] [ 0.2525 0.4579 19.4758] >> [ 2.1334 0. 2.4104] [ 0.2484 0.0298 20.0291] >> [ 2.1033 0. 2.475 ] [ 2.5335e-01 1.0858e-02 1.9959e+01] >> [ 2.1266 0. 2.414 ] [ 0.2662 1.3626 18.5458] >> [ 2.1316 0. 2.4176] [ 0.2491 0.0299 20.0509] >> [ 2.0897 0. 2.5211] [ 0.276 1.4881 18.5202] >> [ 2.1382 0. 2.3738] [ 0.2465 0.0301 19.8241] >> gamma true >> 2 20 10.0 >> estimated, floc=0, loc=0 >> [ 2.0945 20. 9.6182] [ 1.9864 20.0832 10.2119] [ 2.0317 >> 20. 10. ] >> [ 2.0084 20. 10.101 ] [ 1.9364 20.1738 10.41 ] [ 2.0241 >> 20. 10. ] >> [ 2.1359 20. 9.1555] [ 2.0353 19.8413 9.8974] [ 1.9937 >> 20. 10. ] >> [ 2.2296 20. 8.6785] [ 2.152 19.8834 9.1795] [ 1.9948 >> 20. 10. ] >> [ 1.8822 20. 10.8936] [ 2.0389 19.9836 9.8365] [ 2.0104 >> 20. 10. ] >> [ 1.8304 20. 11.3802] [ 2.0341 20.169 9.7599] [ 2.0214 >> 20. 10. ] >> [ 1.972 20. 9.7958] [ 1.9107 20.0777 10.1223] [ 1.9409 >> 20. 10. ] >> [ 1.6211 20. 13.3028] [ 1.9636 20.1903 10.1156] [ 2.0103 >> 20. 10. ] >> [ 2.0413 20. 
9.8165] [  2.0238  19.9456   9.9794] [  2.0121 >> 20.      10.    ] >> [  2.0087  20.      10.1362] [  2.0128  20.1212   9.9924] [  2.0299 >> 20.      10.    ] >> normal true >> 0.0 2.0 >> estimated, floc=0, loc=0, fscale=2 >> [ 0.      2.0024] [-0.0417  2.0024] [-0.0417  2.    ] >> [ 0.      2.0003] [-0.0879  2.0003] [-0.0879  2.    ] >> [ 0.      1.9466] [-0.0133  1.9466] [-0.0133  2.    ] >> [ 0.      1.9726] [-0.036   1.9726] [-0.0359  2.    ] >> [ 0.      2.0007] [-0.0194  2.0007] [-0.0195  2.    ] >> [ 0.      1.9565] [ 0.0338  1.9565] [ 0.0337  2.    ] >> [ 0.      1.9674] [ 0.0288  1.9674] [ 0.0289  2.    ] >> [ 0.      1.9962] [ 0.0051  1.9962] [ 0.0051  2.    ] >> [ 0.      1.8901] [ 0.0179  1.8901] [ 0.0179  2.    ] >> [ 0.      2.0017] [ 0.0307  2.0017] [ 0.0307  2.    ] >> chi2 true >> 10 0.0 2.0 >> estimated, floc=0, loc=0 >> [ 3.8459  0.      6.1457] [ 10.9616  -0.6938   1.8646] >> [ 3.8314  0.      6.3129] [ 10.4894  -0.295    1.9516] >> [ 3.7449  0.      6.4353] [ 9.7304 -0.0364  2.0654] >> [ 3.8094  0.      6.2268] [ 10.4314  -0.3388   1.9283] >> [ 3.69    0.      6.5633] [ 8.4951  1.1035  2.2316] >> [ 3.6475  0.      6.6415] [ 8.3315  0.8241  2.3152] >> [ 3.7288  0.      6.4983] [ 9.4114  0.2128  2.1185] >> [ 3.8005  0.      6.2852] [ 10.0038  -0.0154   1.9917] >> [ 3.7384  0.      6.4327] [ 8.4504  1.1931  2.224 ] >> [ 3.9063  0.      6.0929] [ 10.3413   0.2105   
1.902 ] >>>>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From akshar.bhosale at gmail.com Thu Oct 20 11:51:01 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Thu, 20 Oct 2011 21:21:01 +0530 Subject: [SciPy-Dev] error in numpy-1.6.0 install Message-ID: Hi, I have an Intel Xeon 64-bit machine running RHEL 5.2 x86_64, with the Intel cluster toolkit (11/069) and MKL 10.3 installed. What are the best options for installing numpy-1.6.0, i.e. what changes are required in site.cfg, intel compilers.py, etc.? What flags/options should be given while configuring/building/installing it? Any patches required? What are the detailed steps for the same? -------------- next part -------------- An HTML attachment was scrubbed... 
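For reference, pointing numpy at MKL is done through a `site.cfg` file in the numpy source tree. A sketch along these lines, where the paths are placeholders for wherever MKL 10.3 is installed, and `mkl_rt` assumes the single dynamic-interface library that MKL 10.3 ships:

```ini
[mkl]
library_dirs = /opt/intel/mkl/lib/intel64
include_dirs = /opt/intel/mkl/include
mkl_libs = mkl_rt
lapack_libs =
```

The exact library list depends on the MKL version and linking model; the Linux install instructions referenced elsewhere in this thread have the full recipe.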
URL: From nwagner at iam.uni-stuttgart.de Thu Oct 20 12:53:14 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Oct 2011 18:53:14 +0200 Subject: [SciPy-Dev] Strange results by eigs Message-ID: Hi all, I am confused by the results of the following test from numpy import set_printoptions from numpy.random import seed, rand from scipy.linalg import eig from numpy import dot, diag from scipy.sparse.linalg import eigs from scipy.linalg import norm set_printoptions(precision=6) seed(10) n=5 A = rand(n,n)+ 1j*rand(n,n) B = rand(n,n)+ 1j*rand(n,n) w,vl,vr = eig(A,B,left=True, right=True) print 'Eigenvalues by eig' print w k = n-2 ws,vs = eigs(A,k,B) for i in range(k): r =dot(A,vr[:,i])-w[i]*dot(B,vr[:,i]) res = norm(r) print 'Residual', res print 'Eigenvalues by eigs' k = n-2 ws,vs = eigs(A,k,B) print ws for i in range(k): r =dot(A,vs[:,i])-ws[i]*dot(B,vs[:,i]) res = norm(r) print 'Residual', res Eigenvalues by eig [ 2.093178-1.783532j -0.202628+1.195944j 1.186895+0.56595j 0.014805-0.287354j 0.772942-0.266609j] Residual 1.9173960652e-15 Residual 2.20392640899e-15 Residual 3.21653472875e-15 Eigenvalues by eigs [ 19.064654+27.897369j 15.797549-27.544265j -0.249578-30.673974j] Residual 2.39453499962e-14 Residual 7.33698764801e-14 Residual 1.76193714267e-14 The eigenvalues returned by eig and eigs are completely different. Any idea ? I am using >>> scipy.__version__ '0.11.0.dev-600e81f' Nils From akshar.bhosale at gmail.com Thu Oct 20 13:32:42 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Thu, 20 Oct 2011 23:02:42 +0530 Subject: [SciPy-Dev] numpy.test hangs Message-ID: Hi, i have intel xeon 64 bit machine running rhel 5.2 x86_64. i have intel cluster toolkit installed (11/069) and mkl 10.3. I have installed numpy-1.6.0. I am testing it by using nose. numpy.test just hangs like below and nothing comes. What could be the issue? 
###################### python Python 2.6 (r26:66714, Sep 22 2011, 12:28:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(); Running unit tests for numpy NumPy version 1.6.0 NumPy is installed in /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy Python version 2.6 (r26:66714, Sep 22 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 ..................................................................................................................... ########################### -akshar -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 20 13:34:03 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 20 Oct 2011 19:34:03 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: 20.10.2011 18:53, Nils Wagner kirjoitti: [clip] > The eigenvalues returned by eig and eigs are completely > different. The matrix B must be hermitian positive (semi-)definite for `eigs`. -- Pauli Virtanen From jsseabold at gmail.com Thu Oct 20 13:34:24 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 20 Oct 2011 13:34:24 -0400 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: On Thu, Oct 20, 2011 at 1:32 PM, akshar bhosale wrote: > Hi, > i have intel xeon 64 bit machine running rhel 5.2 x86_64. i have intel > cluster toolkit installed (11/069) and mkl 10.3. I have installed > numpy-1.6.0. I am testing it by using nose. > numpy.test just hangs like below and nothing comes. What could be the issue? > Try np.test(verbose=3) to see where it's hanging. Skipper > ###################### > > python > Python 2.6 (r26:66714, Sep 22 2011, 12:28:47) > [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
>>>> import numpy >>>> numpy.test(); > Running unit tests for numpy > NumPy version 1.6.0 > NumPy is installed in > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy > Python version 2.6 (r26:66714, Sep 22 2011, 15:10:47) [GCC 4.1.2 20071124 > (Red Hat 4.1.2-42)] > nose version 1.0.0 > ..................................................................................................................... > > ########################### > -akshar > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From nwagner at iam.uni-stuttgart.de Thu Oct 20 13:57:40 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Oct 2011 19:57:40 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: On Thu, 20 Oct 2011 19:34:03 +0200 Pauli Virtanen wrote: > 20.10.2011 18:53, Nils Wagner kirjoitti: > [clip] >> The eigenvalues returned by eig and eigs are completely >> different. > > The matrix B must be hermitian positive (semi-)definite >for `eigs`. > > -- > Pauli Virtanen It means that there is no check for B inside eigs. IMHO, a warning should be raised if B is not hermitian positive (semi-)definite. Nils From ralf.gommers at googlemail.com Thu Oct 20 15:22:17 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 20 Oct 2011 21:22:17 +0200 Subject: [SciPy-Dev] error in numpy-1.6.0 install In-Reply-To: References: Message-ID: On Thu, Oct 20, 2011 at 5:51 PM, akshar bhosale wrote: > Hi, > > i have intel xeon 64 bit machine running rhel 5.2 x86_64. i have intel > cluster toolkit installed (11/069) and mkl 10.3 > what can be the best options for installing numpy-1.6.0 like what changes > are required in site.cfg and intel compilers.py etc? > what flags / options to be given while configuring/building/installing the > same ? > Any patches required? > what are the deatail steps for the same? 
> > Have you seen these instructions: http://scipy.org/Installing_SciPy/Linux#head-7ce43956a69ec51c6f2cedd894a4715d5bfff974 ? If so, please point out where things go wrong and provide a build log. If your problem is related to the cluster toolkit instead of MKL, then I don't think we have any instructions. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 20 16:16:03 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 20 Oct 2011 22:16:03 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: (20.10.2011 19:57), Nils Wagner wrote: [clip] > It means that there is no check for B inside eigs. > IMHO, a warning should be raised if B is not hermitian > positive (semi-)definite. That could be useful. Checking PD may be more expensive since it requires trying to do a Cholesky decomposition. Would need some benchmarks to check whether it matters. It's also possible to do the check only for dense matrices.
Scipy > doesn't have a sparse Cholesky at the moment, and moreover, the linear > operator can be an arbitrary function with no way to obtain the transpose. none of the scipy numpy eigh functions do a check. Since I use them mostly for gram, covariance matrices, where I already know it's symmetric, I wouldn't like any expensive checks, in linalg, I don't use sparse so far. user responsibility to check the doc string ? Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From akshar.bhosale at gmail.com Fri Oct 21 00:01:51 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Fri, 21 Oct 2011 09:31:51 +0530 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: numpy.test(verbose=3) Running unit tests for numpy NumPy version 1.6.0 NumPy is installed in /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/scalarmath.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray_tests.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath_tests.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/fft/fftpack_lite.so is executable; skipped nose.selector: INFO: 
/home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so is executable; skipped nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/random/mtrand.so is executable; skipped test_api.test_fastCopyAndTranspose ... ok test_arrayprint.TestArrayRepr.test_nan_inf ... ok test_str (test_arrayprint.TestComplexArray) ... ok Ticket 844. ... ok test_blasdot.test_blasdot_used ... ok test_blasdot.test_dot_2args ... ok test_blasdot.test_dot_3args ... ok test_blasdot.test_dot_3args_errors ... ok test_creation (test_datetime.TestDateTime) ... ok test_creation_overflow (test_datetime.TestDateTime) ... ok test_divisor_conversion_as (test_datetime.TestDateTime) ... ok test_divisor_conversion_bday (test_datetime.TestDateTime) ... ok test_divisor_conversion_day (test_datetime.TestDateTime) ... ok test_divisor_conversion_fs (test_datetime.TestDateTime) ... ok test_divisor_conversion_hour (test_datetime.TestDateTime) ... ok test_divisor_conversion_minute (test_datetime.TestDateTime) ... ok test_divisor_conversion_month (test_datetime.TestDateTime) ... ok test_divisor_conversion_second (test_datetime.TestDateTime) ... ok test_divisor_conversion_week (test_datetime.TestDateTime) ... ok test_divisor_conversion_year (test_datetime.TestDateTime) ... ok test_hours (test_datetime.TestDateTime) ... ok test_from_object_array (test_defchararray.TestBasic) ... ok test_from_object_array_unicode (test_defchararray.TestBasic) ... ok test_from_string (test_defchararray.TestBasic) ... ok test_from_string_array (test_defchararray.TestBasic) ... ok test_from_unicode (test_defchararray.TestBasic) ... ok test_from_unicode_array (test_defchararray.TestBasic) ... ok test_unicode_upconvert (test_defchararray.TestBasic) ... ok test_it (test_defchararray.TestChar) ... ok test_equal (test_defchararray.TestComparisons) ... ok test_greater (test_defchararray.TestComparisons) ... ok test_greater_equal (test_defchararray.TestComparisons) ... 
ok test_less (test_defchararray.TestComparisons) ... ok test_less_equal (test_defchararray.TestComparisons) ... ok test_not_equal (test_defchararray.TestComparisons) ... ok test_equal (test_defchararray.TestComparisonsMixed1) ... ok test_greater (test_defchararray.TestComparisonsMixed1) ... ok test_greater_equal (test_defchararray.TestComparisonsMixed1) ... ok test_less (test_defchararray.TestComparisonsMixed1) ... ok test_less_equal (test_defchararray.TestComparisonsMixed1) ... ok test_not_equal (test_defchararray.TestComparisonsMixed1) ... ok test_equal (test_defchararray.TestComparisonsMixed2) ... ok test_greater (test_defchararray.TestComparisonsMixed2) ... ok test_greater_equal (test_defchararray.TestComparisonsMixed2) ... ok test_less (test_defchararray.TestComparisonsMixed2) ... ok test_less_equal (test_defchararray.TestComparisonsMixed2) ... ok test_not_equal (test_defchararray.TestComparisonsMixed2) ... ok test_count (test_defchararray.TestInformation) ... ok test_endswith (test_defchararray.TestInformation) ... ok test_find (test_defchararray.TestInformation) ... ok test_index (test_defchararray.TestInformation) ... ok test_isalnum (test_defchararray.TestInformation) ... ok test_isalpha (test_defchararray.TestInformation) ... ok test_isdigit (test_defchararray.TestInformation) ... ok test_islower (test_defchararray.TestInformation) ... ok test_isspace (test_defchararray.TestInformation) ... ok test_istitle (test_defchararray.TestInformation) ... ok test_isupper (test_defchararray.TestInformation) ... ok test_len (test_defchararray.TestInformation) ... ok test_rfind (test_defchararray.TestInformation) ... ok test_rindex (test_defchararray.TestInformation) ... ok test_startswith (test_defchararray.TestInformation) ... ok test_capitalize (test_defchararray.TestMethods) ... ok test_center (test_defchararray.TestMethods) ... ok test_decode (test_defchararray.TestMethods) ... ok test_encode (test_defchararray.TestMethods) ... 
ok test_expandtabs (test_defchararray.TestMethods) ... ok test_isdecimal (test_defchararray.TestMethods) ... ok test_isnumeric (test_defchararray.TestMethods) ... ok test_join (test_defchararray.TestMethods) ... ok test_ljust (test_defchararray.TestMethods) ... ok test_lower (test_defchararray.TestMethods) ... ok test_lstrip (test_defchararray.TestMethods) ... ok test_partition (test_defchararray.TestMethods) ... ok test_replace (test_defchararray.TestMethods) ... ok test_rjust (test_defchararray.TestMethods) ... ok test_rpartition (test_defchararray.TestMethods) ... ok test_rsplit (test_defchararray.TestMethods) ... ok test_rstrip (test_defchararray.TestMethods) ... ok test_split (test_defchararray.TestMethods) ... ok test_splitlines (test_defchararray.TestMethods) ... ok test_strip (test_defchararray.TestMethods) ... ok test_swapcase (test_defchararray.TestMethods) ... ok test_title (test_defchararray.TestMethods) ... ok test_upper (test_defchararray.TestMethods) ... ok test_add (test_defchararray.TestOperations) ... ok Ticket #856 ... ok test_mul (test_defchararray.TestOperations) ... ok test_radd (test_defchararray.TestOperations) ... ok test_rmod (test_defchararray.TestOperations) ... ok test_rmul (test_defchararray.TestOperations) ... ok test_broadcast_error (test_defchararray.TestVecString) ... ok test_invalid_args_tuple (test_defchararray.TestVecString) ... ok test_invalid_function_args (test_defchararray.TestVecString) ... ok test_invalid_result_type (test_defchararray.TestVecString) ... ok test_invalid_type_descr (test_defchararray.TestVecString) ... ok test_non_existent_method (test_defchararray.TestVecString) ... ok test_non_string_array (test_defchararray.TestVecString) ... ok test1 (test_defchararray.TestWhitespace) ... ok test_dtype (test_dtype.TestBuiltin) ... ok Only test hash runs at all. ... ok test_metadata_rejects_nondict (test_dtype.TestMetadata) ... ok test_metadata_takes_dict (test_dtype.TestMetadata) ... 
ok test_nested_metadata (test_dtype.TestMetadata) ... ok test_no_metadata (test_dtype.TestMetadata) ... ok test1 (test_dtype.TestMonsterType) ... ok test_different_names (test_dtype.TestRecord) ... ok test_different_titles (test_dtype.TestRecord) ... ok Test whether equivalent record dtypes hash the same. ... ok Test if an appropriate exception is raised when passing bad values to ... ok Test whether equivalent subarray dtypes hash the same. ... ok Test whether different subarray dtypes hash differently. ... ok Test some data types that are equal ... ok Test some more complicated cases that shouldn't be equal ... ok Test some simple cases that shouldn't be equal ... ok test_single_subarray (test_dtype.TestSubarray) ... ok test_einsum_errors (test_einsum.TestEinSum) ... ok test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ... It hangs here.. On Thu, Oct 20, 2011 at 11:04 PM, Skipper Seabold wrote: > On Thu, Oct 20, 2011 at 1:32 PM, akshar bhosale > wrote: > > Hi, > > i have intel xeon 64 bit machine running rhel 5.2 x86_64. i have intel > > cluster toolkit installed (11/069) and mkl 10.3. I have installed > > numpy-1.6.0. I am testing it by using nose. > > numpy.test just hangs like below and nothing comes. What could be the > issue? > > > > Try np.test(verbose=3) to see where it's hanging. > > Skipper > > > ###################### > > > > python > > Python 2.6 (r26:66714, Sep 22 2011, 12:28:47) > > [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy > >>>> numpy.test(); > > Running unit tests for numpy > > NumPy version 1.6.0 > > NumPy is installed in > > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy > > Python version 2.6 (r26:66714, Sep 22 2011, 15:10:47) [GCC 4.1.2 20071124 > > (Red Hat 4.1.2-42)] > > nose version 1.0.0 > > > ..................................................................................................................... 
> > > > ########################### > > -akshar > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Oct 21 00:15:21 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 20 Oct 2011 22:15:21 -0600 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: On Thu, Oct 20, 2011 at 10:01 PM, akshar bhosale wrote: > > numpy.test(verbose=3) > > Running unit tests for numpy > NumPy version 1.6.0 > NumPy is installed in > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy > Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 > (Red Hat 4.1.2-42)] > nose version 1.0.0 > nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', > 'gen_ext', 'pyrex_ext', 'swig_ext'] > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray.so > is executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/scalarmath.so > is executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath.so is > executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray_tests.so > is executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath_tests.so > is executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/fft/fftpack_lite.so > is executable; skipped > nose.selector: INFO: > 
/home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so > is executable; skipped > nose.selector: INFO: > /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/random/mtrand.so > is executable; skipped > test_api.test_fastCopyAndTranspose ... ok > test_arrayprint.TestArrayRepr.test_nan_inf ... ok > test_str (test_arrayprint.TestComplexArray) ... ok > Ticket 844. ... ok > test_blasdot.test_blasdot_used ... ok > test_blasdot.test_dot_2args ... ok > test_blasdot.test_dot_3args ... ok > test_blasdot.test_dot_3args_errors ... ok > test_creation (test_datetime.TestDateTime) ... ok > test_creation_overflow (test_datetime.TestDateTime) ... ok > test_divisor_conversion_as (test_datetime.TestDateTime) ... ok > test_divisor_conversion_bday (test_datetime.TestDateTime) ... ok > test_divisor_conversion_day (test_datetime.TestDateTime) ... ok > test_divisor_conversion_fs (test_datetime.TestDateTime) ... ok > test_divisor_conversion_hour (test_datetime.TestDateTime) ... ok > test_divisor_conversion_minute (test_datetime.TestDateTime) ... ok > test_divisor_conversion_month (test_datetime.TestDateTime) ... ok > test_divisor_conversion_second (test_datetime.TestDateTime) ... ok > test_divisor_conversion_week (test_datetime.TestDateTime) ... ok > test_divisor_conversion_year (test_datetime.TestDateTime) ... ok > test_hours (test_datetime.TestDateTime) ... ok > test_from_object_array (test_defchararray.TestBasic) ... ok > test_from_object_array_unicode (test_defchararray.TestBasic) ... ok > test_from_string (test_defchararray.TestBasic) ... ok > test_from_string_array (test_defchararray.TestBasic) ... ok > test_from_unicode (test_defchararray.TestBasic) ... ok > test_from_unicode_array (test_defchararray.TestBasic) ... ok > test_unicode_upconvert (test_defchararray.TestBasic) ... ok > test_it (test_defchararray.TestChar) ... ok > test_equal (test_defchararray.TestComparisons) ... ok > test_greater (test_defchararray.TestComparisons) ... 
ok > test_greater_equal (test_defchararray.TestComparisons) ... ok > test_less (test_defchararray.TestComparisons) ... ok > test_less_equal (test_defchararray.TestComparisons) ... ok > test_not_equal (test_defchararray.TestComparisons) ... ok > test_equal (test_defchararray.TestComparisonsMixed1) ... ok > test_greater (test_defchararray.TestComparisonsMixed1) ... ok > test_greater_equal (test_defchararray.TestComparisonsMixed1) ... ok > test_less (test_defchararray.TestComparisonsMixed1) ... ok > test_less_equal (test_defchararray.TestComparisonsMixed1) ... ok > test_not_equal (test_defchararray.TestComparisonsMixed1) ... ok > test_equal (test_defchararray.TestComparisonsMixed2) ... ok > test_greater (test_defchararray.TestComparisonsMixed2) ... ok > test_greater_equal (test_defchararray.TestComparisonsMixed2) ... ok > test_less (test_defchararray.TestComparisonsMixed2) ... ok > test_less_equal (test_defchararray.TestComparisonsMixed2) ... ok > test_not_equal (test_defchararray.TestComparisonsMixed2) ... ok > test_count (test_defchararray.TestInformation) ... ok > test_endswith (test_defchararray.TestInformation) ... ok > test_find (test_defchararray.TestInformation) ... ok > test_index (test_defchararray.TestInformation) ... ok > test_isalnum (test_defchararray.TestInformation) ... ok > test_isalpha (test_defchararray.TestInformation) ... ok > test_isdigit (test_defchararray.TestInformation) ... ok > test_islower (test_defchararray.TestInformation) ... ok > test_isspace (test_defchararray.TestInformation) ... ok > test_istitle (test_defchararray.TestInformation) ... ok > test_isupper (test_defchararray.TestInformation) ... ok > test_len (test_defchararray.TestInformation) ... ok > test_rfind (test_defchararray.TestInformation) ... ok > test_rindex (test_defchararray.TestInformation) ... ok > test_startswith (test_defchararray.TestInformation) ... ok > test_capitalize (test_defchararray.TestMethods) ... ok > test_center (test_defchararray.TestMethods) ... 
ok > test_decode (test_defchararray.TestMethods) ... ok > test_encode (test_defchararray.TestMethods) ... ok > test_expandtabs (test_defchararray.TestMethods) ... ok > test_isdecimal (test_defchararray.TestMethods) ... ok > test_isnumeric (test_defchararray.TestMethods) ... ok > test_join (test_defchararray.TestMethods) ... ok > test_ljust (test_defchararray.TestMethods) ... ok > test_lower (test_defchararray.TestMethods) ... ok > test_lstrip (test_defchararray.TestMethods) ... ok > test_partition (test_defchararray.TestMethods) ... ok > test_replace (test_defchararray.TestMethods) ... ok > test_rjust (test_defchararray.TestMethods) ... ok > test_rpartition (test_defchararray.TestMethods) ... ok > test_rsplit (test_defchararray.TestMethods) ... ok > test_rstrip (test_defchararray.TestMethods) ... ok > test_split (test_defchararray.TestMethods) ... ok > test_splitlines (test_defchararray.TestMethods) ... ok > test_strip (test_defchararray.TestMethods) ... ok > test_swapcase (test_defchararray.TestMethods) ... ok > test_title (test_defchararray.TestMethods) ... ok > test_upper (test_defchararray.TestMethods) ... ok > test_add (test_defchararray.TestOperations) ... ok > Ticket #856 ... ok > test_mul (test_defchararray.TestOperations) ... ok > test_radd (test_defchararray.TestOperations) ... ok > test_rmod (test_defchararray.TestOperations) ... ok > test_rmul (test_defchararray.TestOperations) ... ok > test_broadcast_error (test_defchararray.TestVecString) ... ok > test_invalid_args_tuple (test_defchararray.TestVecString) ... ok > test_invalid_function_args (test_defchararray.TestVecString) ... ok > test_invalid_result_type (test_defchararray.TestVecString) ... ok > test_invalid_type_descr (test_defchararray.TestVecString) ... ok > test_non_existent_method (test_defchararray.TestVecString) ... ok > test_non_string_array (test_defchararray.TestVecString) ... ok > test1 (test_defchararray.TestWhitespace) ... ok > test_dtype (test_dtype.TestBuiltin) ... 
ok > Only test hash runs at all. ... ok > test_metadata_rejects_nondict (test_dtype.TestMetadata) ... ok > test_metadata_takes_dict (test_dtype.TestMetadata) ... ok > test_nested_metadata (test_dtype.TestMetadata) ... ok > test_no_metadata (test_dtype.TestMetadata) ... ok > test1 (test_dtype.TestMonsterType) ... ok > test_different_names (test_dtype.TestRecord) ... ok > test_different_titles (test_dtype.TestRecord) ... ok > Test whether equivalent record dtypes hash the same. ... ok > Test if an appropriate exception is raised when passing bad values to ... > ok > Test whether equivalent subarray dtypes hash the same. ... ok > Test whether different subarray dtypes hash differently. ... ok > Test some data types that are equal ... ok > Test some more complicated cases that shouldn't be equal ... ok > Test some simple cases that shouldn't be equal ... ok > test_single_subarray (test_dtype.TestSubarray) ... ok > test_einsum_errors (test_einsum.TestEinSum) ... ok > test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ... > > It hangs here.. > > Ah, not a surprise. Einsum was new in version 1.6.0 and there were several bugs. Could you try version 1.6.1? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at maths.lth.se Fri Oct 21 02:19:02 2011 From: tony at maths.lth.se (Tony Stillfjord) Date: Fri, 21 Oct 2011 08:19:02 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: I'm not really an authority on these matters, but I agree with Josef. In the dense case you would have to do a Cholesky decomposition, and we have to assume that it does not break down until the last step. As far as I know, the cost for this is O(n^3). On the other hand, computing a few eigenvalues should be O(n^2) if you do not want them all (in which case you would be using eig anyway). 
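To make the dense case concrete, the kind of check being discussed can be sketched in plain NumPy. `is_hpd` is a hypothetical helper, not an existing scipy function: it first tests Hermitian symmetry cheaply, then attempts a Cholesky factorization, which is the O(n^3) step mentioned above.

```python
import numpy as np

def is_hpd(B, tol=1e-12):
    """Return True if B is (numerically) Hermitian positive definite.

    Hypothetical helper illustrating the check discussed in this
    thread; not part of scipy.  The symmetry test is O(n^2), but the
    Cholesky attempt costs O(n^3).  Note that Cholesky succeeds only
    for *strictly* positive definite B; a semi-definite B would need
    a different test.
    """
    B = np.asarray(B)
    if not np.allclose(B, B.conj().T, atol=tol):
        return False
    try:
        np.linalg.cholesky(B)  # raises LinAlgError if B is not PD
        return True
    except np.linalg.LinAlgError:
        return False

# B = A A^H + I is Hermitian positive definite by construction...
rng = np.random.RandomState(10)
A = rng.rand(5, 5) + 1j * rng.rand(5, 5)
B_good = A.dot(A.conj().T) + np.eye(5)
# ...while a generic complex random matrix (as in Nils' script) is not.
B_bad = rng.rand(5, 5) + 1j * rng.rand(5, 5)
```

Run on the B from Nils' script, a check like this would return False and could trigger the warning Nils asks for; whether eigs should pay the O(n^3) cost by default is exactly the trade-off debated here.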
In addition to what Pauli wrote about the sparse case, even if there was a sparse (incomplete) Cholesky decomposition implemented you could not really use it. To be sure that the matrix really was positive definite you would essentially have to set all tolerances to zero and compute the full factorization. This would (possibly) result in huge amounts of fill-in. Not good. So yes, please leave it as it is. Kind regards, Tony Stillfjord On Thu, Oct 20, 2011 at 10:39 PM, wrote: > On Thu, Oct 20, 2011 at 4:16 PM, Pauli Virtanen wrote: > > (20.10.2011 19:57), Nils Wagner wrote: > > [clip] > >> It means that there is no check for B inside eigs. > >> IMHO, a warning should be raised if B is not hermitian > >> positive (semi-)definite. > > > > That could be useful. > > > > Checking PD is may be more expensive since it requires trying to do a > > Cholesky decomposition. Would need some benchmarks to check whether it > > matters. > > > > It's also possible to do the check only for dense matrices. Scipy > > doesn't have a sparse Cholesky at the moment, and moreover, the linear > > operator can be an arbitrary function with no way to obtain the > transpose. > > none of the scipy numpy eigh functions do a check. > > Since I use them mostly for gram, covariance matrices, where I already > know it's symmetric, I wouldn't like any expensive checks, in linalg, > I don't use sparse so far. > > user responsibility to check the doc string ? > > Josef > > > > > -- > > Pauli Virtanen > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akshar.bhosale at gmail.com Fri Oct 21 07:25:24 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Fri, 21 Oct 2011 16:55:24 +0530 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: Hi, does this mean that numpy is not configured properly or i can ignore this and go ahead with scipy installation? On Fri, Oct 21, 2011 at 9:45 AM, Charles R Harris wrote: > > > On Thu, Oct 20, 2011 at 10:01 PM, akshar bhosale > wrote: > >> >> numpy.test(verbose=3) >> >> Running unit tests for numpy >> NumPy version 1.6.0 >> NumPy is installed in >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy >> Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 >> (Red Hat 4.1.2-42)] >> nose version 1.0.0 >> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', >> 'gen_ext', 'pyrex_ext', 'swig_ext'] >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/scalarmath.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath.so is >> executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray_tests.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath_tests.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/fft/fftpack_lite.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so >> is executable; skipped >> nose.selector: INFO: >> /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/random/mtrand.so >> is executable; skipped >> test_api.test_fastCopyAndTranspose ... 
ok >> test_arrayprint.TestArrayRepr.test_nan_inf ... ok >> test_str (test_arrayprint.TestComplexArray) ... ok >> Ticket 844. ... ok >> test_blasdot.test_blasdot_used ... ok >> test_blasdot.test_dot_2args ... ok >> test_blasdot.test_dot_3args ... ok >> test_blasdot.test_dot_3args_errors ... ok >> test_creation (test_datetime.TestDateTime) ... ok >> test_creation_overflow (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_as (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_bday (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_day (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_fs (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_hour (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_minute (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_month (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_second (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_week (test_datetime.TestDateTime) ... ok >> test_divisor_conversion_year (test_datetime.TestDateTime) ... ok >> test_hours (test_datetime.TestDateTime) ... ok >> test_from_object_array (test_defchararray.TestBasic) ... ok >> test_from_object_array_unicode (test_defchararray.TestBasic) ... ok >> test_from_string (test_defchararray.TestBasic) ... ok >> test_from_string_array (test_defchararray.TestBasic) ... ok >> test_from_unicode (test_defchararray.TestBasic) ... ok >> test_from_unicode_array (test_defchararray.TestBasic) ... ok >> test_unicode_upconvert (test_defchararray.TestBasic) ... ok >> test_it (test_defchararray.TestChar) ... ok >> test_equal (test_defchararray.TestComparisons) ... ok >> test_greater (test_defchararray.TestComparisons) ... ok >> test_greater_equal (test_defchararray.TestComparisons) ... ok >> test_less (test_defchararray.TestComparisons) ... ok >> test_less_equal (test_defchararray.TestComparisons) ... ok >> test_not_equal (test_defchararray.TestComparisons) ... 
ok >> test_equal (test_defchararray.TestComparisonsMixed1) ... ok >> test_greater (test_defchararray.TestComparisonsMixed1) ... ok >> test_greater_equal (test_defchararray.TestComparisonsMixed1) ... ok >> test_less (test_defchararray.TestComparisonsMixed1) ... ok >> test_less_equal (test_defchararray.TestComparisonsMixed1) ... ok >> test_not_equal (test_defchararray.TestComparisonsMixed1) ... ok >> test_equal (test_defchararray.TestComparisonsMixed2) ... ok >> test_greater (test_defchararray.TestComparisonsMixed2) ... ok >> test_greater_equal (test_defchararray.TestComparisonsMixed2) ... ok >> test_less (test_defchararray.TestComparisonsMixed2) ... ok >> test_less_equal (test_defchararray.TestComparisonsMixed2) ... ok >> test_not_equal (test_defchararray.TestComparisonsMixed2) ... ok >> test_count (test_defchararray.TestInformation) ... ok >> test_endswith (test_defchararray.TestInformation) ... ok >> test_find (test_defchararray.TestInformation) ... ok >> test_index (test_defchararray.TestInformation) ... ok >> test_isalnum (test_defchararray.TestInformation) ... ok >> test_isalpha (test_defchararray.TestInformation) ... ok >> test_isdigit (test_defchararray.TestInformation) ... ok >> test_islower (test_defchararray.TestInformation) ... ok >> test_isspace (test_defchararray.TestInformation) ... ok >> test_istitle (test_defchararray.TestInformation) ... ok >> test_isupper (test_defchararray.TestInformation) ... ok >> test_len (test_defchararray.TestInformation) ... ok >> test_rfind (test_defchararray.TestInformation) ... ok >> test_rindex (test_defchararray.TestInformation) ... ok >> test_startswith (test_defchararray.TestInformation) ... ok >> test_capitalize (test_defchararray.TestMethods) ... ok >> test_center (test_defchararray.TestMethods) ... ok >> test_decode (test_defchararray.TestMethods) ... ok >> test_encode (test_defchararray.TestMethods) ... ok >> test_expandtabs (test_defchararray.TestMethods) ... 
ok >> test_isdecimal (test_defchararray.TestMethods) ... ok >> test_isnumeric (test_defchararray.TestMethods) ... ok >> test_join (test_defchararray.TestMethods) ... ok >> test_ljust (test_defchararray.TestMethods) ... ok >> test_lower (test_defchararray.TestMethods) ... ok >> test_lstrip (test_defchararray.TestMethods) ... ok >> test_partition (test_defchararray.TestMethods) ... ok >> test_replace (test_defchararray.TestMethods) ... ok >> test_rjust (test_defchararray.TestMethods) ... ok >> test_rpartition (test_defchararray.TestMethods) ... ok >> test_rsplit (test_defchararray.TestMethods) ... ok >> test_rstrip (test_defchararray.TestMethods) ... ok >> test_split (test_defchararray.TestMethods) ... ok >> test_splitlines (test_defchararray.TestMethods) ... ok >> test_strip (test_defchararray.TestMethods) ... ok >> test_swapcase (test_defchararray.TestMethods) ... ok >> test_title (test_defchararray.TestMethods) ... ok >> test_upper (test_defchararray.TestMethods) ... ok >> test_add (test_defchararray.TestOperations) ... ok >> Ticket #856 ... ok >> test_mul (test_defchararray.TestOperations) ... ok >> test_radd (test_defchararray.TestOperations) ... ok >> test_rmod (test_defchararray.TestOperations) ... ok >> test_rmul (test_defchararray.TestOperations) ... ok >> test_broadcast_error (test_defchararray.TestVecString) ... ok >> test_invalid_args_tuple (test_defchararray.TestVecString) ... ok >> test_invalid_function_args (test_defchararray.TestVecString) ... ok >> test_invalid_result_type (test_defchararray.TestVecString) ... ok >> test_invalid_type_descr (test_defchararray.TestVecString) ... ok >> test_non_existent_method (test_defchararray.TestVecString) ... ok >> test_non_string_array (test_defchararray.TestVecString) ... ok >> test1 (test_defchararray.TestWhitespace) ... ok >> test_dtype (test_dtype.TestBuiltin) ... ok >> Only test hash runs at all. ... ok >> test_metadata_rejects_nondict (test_dtype.TestMetadata) ... 
ok >> test_metadata_takes_dict (test_dtype.TestMetadata) ... ok >> test_nested_metadata (test_dtype.TestMetadata) ... ok >> test_no_metadata (test_dtype.TestMetadata) ... ok >> test1 (test_dtype.TestMonsterType) ... ok >> test_different_names (test_dtype.TestRecord) ... ok >> test_different_titles (test_dtype.TestRecord) ... ok >> Test whether equivalent record dtypes hash the same. ... ok >> Test if an appropriate exception is raised when passing bad values to ... >> ok >> Test whether equivalent subarray dtypes hash the same. ... ok >> Test whether different subarray dtypes hash differently. ... ok >> Test some data types that are equal ... ok >> Test some more complicated cases that shouldn't be equal ... ok >> Test some simple cases that shouldn't be equal ... ok >> test_single_subarray (test_dtype.TestSubarray) ... ok >> test_einsum_errors (test_einsum.TestEinSum) ... ok >> test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ... >> >> It hangs here.. >> >> > Ah, not a surprise. Einsum was new in version 1.6.0 and there were several > bugs. Could you try version 1.6.1? > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Oct 21 10:15:57 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Oct 2011 16:15:57 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: On Thu, 20 Oct 2011 16:39:28 -0400 josef.pktd at gmail.com wrote: > On Thu, Oct 20, 2011 at 4:16 PM, Pauli Virtanen > wrote: >> (20.10.2011 19:57), Nils Wagner wrote: >> [clip] >>> It means that there is no check for B inside eigs. >>> IMHO, a warning should be raised if B is not hermitian >>> positive (semi-)definite. >> >> That could be useful. 
>> >> Checking PD may be more expensive since it requires >> trying to do a >> Cholesky decomposition. Would need some benchmarks to >> check whether it >> matters. >> >> It's also possible to do the check only for dense >> matrices. Scipy >> doesn't have a sparse Cholesky at the moment, and >> moreover, the linear >> operator can be an arbitrary function with no way to >> obtain the transpose. > > none of the scipy numpy eigh functions do a check. > > Since I use them mostly for gram, covariance matrices, > where I already > know it's symmetric, I wouldn't like any expensive > checks, in linalg, > I don't use sparse so far. > > user responsibility to check the doc string ? > > Josef > >> from scipy.sparse.linalg import eigs help (eigs) >>> scipy.__version__ '0.10.0b2' The docstring of eigs is misleading in that context. M must represent a real symmetric matrix. For best results, M should be of the same type as A. Additionally: * If sigma==None, M is positive definite * If sigma is specified, M is positive semi-definite If sigma==None, eigs requires an operator to compute the solution of the linear equation `M * x = b`. This is done internally via a (sparse) LU decomposition for an explicit matrix M, or via an iterative solver for a general linear operator. Alternatively, the user can supply the matrix or operator Minv, which gives x = Minv * b = M^-1 * b Nils From pav at iki.fi Fri Oct 21 10:25:49 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Oct 2011 16:25:49 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: Message-ID: 21.10.2011 16:15, Nils Wagner wrote: [clip] > The docstring of eigs is misleading in that context. > > M must represent a real symmetric matrix. Yep, it should say hermitian instead of real sym. here. 
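The check being debated in this thread — hermitian structure plus positive definiteness established by an attempted Cholesky factorization — can be sketched for the dense case as follows. This is a minimal numpy illustration, not scipy code; the helper name `is_hermitian_pd` is invented for this example:

```python
import numpy as np

def is_hermitian_pd(M):
    """Check whether M is (numerically) hermitian positive definite.

    The hermitian test is a cheap O(n^2) comparison; definiteness is
    established by attempting a Cholesky factorization, which is O(n^3)
    work -- the same order as the eigensolve itself, which is why the
    check is not free.
    """
    M = np.asarray(M)
    if M.ndim != 2 or M.shape[0] != M.shape[1]:
        return False
    if not np.allclose(M, M.conj().T):
        return False
    try:
        np.linalg.cholesky(M)  # succeeds iff M is positive definite
        return True
    except np.linalg.LinAlgError:
        return False
```

Note that for a merely positive semi-definite M (the case allowed when sigma is specified) the Cholesky test is too strict, which is one more reason the check is less clear-cut than it first appears.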
-- Pauli Virtanen From gkclri at yahoo.com Fri Oct 21 10:29:20 2011 From: gkclri at yahoo.com (Gopalakrishnan Ravimohan) Date: Fri, 21 Oct 2011 07:29:20 -0700 (PDT) Subject: [SciPy-Dev] Fw: problem in installing the scipy In-Reply-To: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com> References: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com> Message-ID: <1319207360.31134.YahooMailNeo@web44813.mail.sp1.yahoo.com> With this mail I have attached the error file from the installation I tried on my cluster. Please help me sort out the error, as I am unable to find what is causing the program to exit before building. With Regards GOPALAKRISHNAN.R Senior Research Fellow C/o. Dr. V. Subramanian Chemical Laboratory Central Leather Research Institute (CSIR) India -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_installation_problem.rtf Type: application/msword Size: 8177 bytes Desc: not available URL: From vanderplas at astro.washington.edu Fri Oct 21 13:12:38 2011 From: vanderplas at astro.washington.edu (Jacob VanderPlas) Date: Fri, 21 Oct 2011 10:12:38 -0700 Subject: [SciPy-Dev] Strange results by eigs Message-ID: <4EA1A806.1010507@astro.washington.edu> >> The docstring of eigs is misleading in that context. >> >> M must represent a real symmetric matrix. >> > Yep, it should say hermitian instead of real sym. here. Actually, the piece of the documentation quoted here has to do with the second matrix supplied for the general eigenvalue problem. 
This matrix (M) must be real symmetric, as described in the ARPACK function documentation http://www.caam.rice.edu/software/ARPACK/UG/node137.html Jake From nwagner at iam.uni-stuttgart.de Fri Oct 21 13:34:02 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Oct 2011 19:34:02 +0200 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: <4EA1A806.1010507@astro.washington.edu> References: <4EA1A806.1010507@astro.washington.edu> Message-ID: On Fri, 21 Oct 2011 10:12:38 -0700 Jacob VanderPlas wrote: >>> The docstring of eigs is misleading in that context. >>> >>> M must represent a real symmetric matrix. >>> >> Yep, it should say hermitian instead of real sym. here. > > Actually, the piece of the documentation quoted here has > to do with the second matrix supplied for the general > eigenvalue problem. This matrix (M) must be real > symmetric, as described in the ARPACK function > documentation > http://www.caam.rice.edu/software/ARPACK/UG/node137.html > > Jake > The Hermitian case is here http://www.caam.rice.edu/software/ARPACK/UG/node138.html Nils From vanderplas at astro.washington.edu Fri Oct 21 13:50:13 2011 From: vanderplas at astro.washington.edu (Jacob VanderPlas) Date: Fri, 21 Oct 2011 10:50:13 -0700 Subject: [SciPy-Dev] Strange results by eigs In-Reply-To: References: <4EA1A806.1010507@astro.washington.edu> Message-ID: <4EA1B0D5.9090707@astro.washington.edu> Ah, thanks Nils. I missed that detail when I wrote the wrapper. It's all a bit confusing, because each wrapper calls one of six arpack routines, depending on the type of the input. Looking at the code, I think everything should work as expected with hermitian input for M, but the docs need to be updated, as Pauli suggested. The bit on M in both eigs() and eigsh() should specify that if the matrix A is complex hermitian, then M may be complex hermitian as well. It still must be real symmetric in any other case. 
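For reference, the reason M must be hermitian positive definite in the generalized problem A x = lambda M x can be seen from the textbook Cholesky reduction to a standard hermitian problem. The sketch below is plain numpy with explicit inverses and an invented helper name (`generalized_eigh`) — an illustration of the transformation, not the ARPACK code path, and it assumes dense hermitian A and hermitian positive definite M:

```python
import numpy as np

def generalized_eigh(A, M):
    """Solve A x = lam * M x for hermitian A and hermitian PD M.

    With M = L L^H, substituting x = L^{-H} y reduces the problem to
    the standard hermitian one C y = lam * y with C = L^{-1} A L^{-H}.
    The Cholesky step is exactly where a non-definite M breaks down.
    """
    L = np.linalg.cholesky(M)        # raises LinAlgError if M is not PD
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.conj().T     # hermitian standard problem
    lam, Y = np.linalg.eigh(C)
    X = Linv.conj().T @ Y            # back-transform the eigenvectors
    return lam, X
```

For example, with A = diag(2, 3) and M = 2 I the reduced problem is C = A/2, so the generalized eigenvalues come out as 1 and 1.5. Production code would of course use triangular solves (or scipy.linalg.eigh) rather than an explicit inverse.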
Jake Nils Wagner wrote: > On Fri, 21 Oct 2011 10:12:38 -0700 > Jacob VanderPlas > wrote: > >>>> The docstring of eigs is misleading in that context. >>>> >>>> M must represent a real symmetric matrix. >>>> >>>> >>> Yep, it should say hermitian instead of real sym. here. >>> >> Actually, the piece of the documentation quoted here has >> to do with the second matrix supplied for the general >> eigenvalue problem. This matrix (M) must be real >> symmetric, as desribed in the ARPACK function >> documentation >> http://www.caam.rice.edu/software/ARPACK/UG/node137.html >> >> Jake >> >> > > The Hermitian case is here > http://www.caam.rice.edu/software/ARPACK/UG/node138.html > > Nils > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Fri Oct 21 14:49:48 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 21 Oct 2011 12:49:48 -0600 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale wrote: > Hi, > does this mean that numpy is not configured properly or i can ignore this > and go ahead with scipy installation? > Scipy will probably work, but you should really install numpy 1.6.1 instead of 1.6.0. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Fri Oct 21 21:48:20 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 07:18:20 +0530 Subject: [SciPy-Dev] error in numpy-1.6.0 install In-Reply-To: References: Message-ID: Hi, I am getting following errors after i have installed numpy python Python 2.6 (r26:66714, Oct 21 2011, 14:44:56) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy Traceback (most recent call last): File "", line 1, in File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/__init__.py", line 137, in import add_newdocs File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in from numpy.lib import add_newdoc File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in from polynomial import * File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 17, in from numpy.linalg import eigvals, lstsq File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 48, in from linalg import * File "/home/aksharb/.local/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 23, in from numpy.linalg import lapack_lite On Fri, Oct 21, 2011 at 12:52 AM, Ralf Gommers wrote: > > > On Thu, Oct 20, 2011 at 5:51 PM, akshar bhosale wrote: > >> Hi, >> >> i have intel xeon 64 bit machine running rhel 5.2 x86_64. i have intel >> cluster toolkit installed (11/069) and mkl 10.3 >> what can be the best options for installing numpy-1.6.0 like what changes >> are required in site.cfg and intel compilers.py etc? >> what flags / options to be given while configuring/building/installing the >> same ? >> Any patches required? >> what are the deatail steps for the same? >> >> Have you seen these instructions: > http://scipy.org/Installing_SciPy/Linux#head-7ce43956a69ec51c6f2cedd894a4715d5bfff974 > ? > If so, please point out where things go wrong and provide a build log. If > your problem is related to the cluster toolkit instead of MKL, then I don't > think we have any instructions. > > Ralf > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akshar.bhosale at gmail.com Sat Oct 22 01:49:33 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 11:19:33 +0530 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: Hi, unfortunately 1.6.1 also hangs at the same place. Can I move ahead with installing scipy? On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale wrote: > >> Hi, >> does this mean that numpy is not configured properly or I can ignore this >> and go ahead with scipy installation? >> > > Scipy will probably work, but you should really install numpy 1.6.1 instead > of 1.6.0. > > > > Chuck > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Oct 22 02:02:16 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 22 Oct 2011 00:02:16 -0600 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale wrote: > Hi, > > unfortunately 1.6.1 also hangs at the same place. Can I move ahead with > installing scipy? > > Hmm. Well, give scipy a try, but it would be nice to know what the problem is with einsum. I'm thinking compiler, GCC 4.1.2 might be a bit old, but it could easily be something else. Can you give us more information about your system? Chuck > > On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale > > wrote: >> >>> Hi, >>> does this mean that numpy is not configured properly or I can ignore this >>> and go ahead with scipy installation? >>> >> >> Scipy will probably work, but you should really install numpy 1.6.1 >> instead of 1.6.0. 
>> >> >> >> Chuck >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Sat Oct 22 02:24:36 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 11:54:36 +0530 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: yes sure.. i have intel cluster toolkit installed on my system. (11/069 version and mkl 10.3). i have machine having intel xeon processor and rhel 5.2 x86_64 platform. i am trying with intel compilers. if i do python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], numpy.complex128).T.I.H' python: symbol lookup error: /opt/intel/Compiler/11.0/069/mkl/lib/em64/libmkl_lapack.so: undefined symbol: mkl_lapack_zgeqrf my site.cfg is : #################### [mkl] mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc lapack_libs = mkl_lapack95_lp64 library_dirs = /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ include_dirs = /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ #################### and intelcompiler.py is : ############################ from distutils.unixccompiler import UnixCCompiler from numpy.distutils.exec_command import find_executable import sys class IntelCCompiler(UnixCCompiler): """ A modified Intel compiler compatible with an gcc built Python.""" compiler_type = 'intel' cc_exe = 'icc' cc_args = 'fPIC' def __init__ (self, verbose=0, dry_run=0, force=0): sys.exit(0) UnixCCompiler.__init__ (self, verbose,dry_run, force) self.cc_exe = 'icc -fPIC ' compiler = self.cc_exe self.set_executables(compiler=compiler, compiler_so=compiler, 
compiler_cxx=compiler, linker_exe=compiler, linker_so=compiler + ' -shared -lstdc++') class IntelItaniumCCompiler(IntelCCompiler): compiler_type = 'intele' # On Itanium, the Intel Compiler used to be called ecc, let's search for # it (now it's also icc, so ecc is last in the search). for cc_exe in map(find_executable,['icc','ecc']): if cc_exe: break class IntelEM64TCCompiler(UnixCCompiler): """ A modified Intel x86_64 compiler compatible with a 64bit gcc built Python. """ compiler_type = 'intelem' cc_exe = 'icc -m64 -fPIC' cc_args = "-fPIC -openmp" def __init__ (self, verbose=0, dry_run=0, force=0): UnixCCompiler.__init__ (self, verbose,dry_run, force) self.cc_exe = 'icc -m64 -fPIC -openmp ' compiler = self.cc_exe self.set_executables(compiler=compiler, compiler_so=compiler, compiler_cxx=compiler, linker_exe=compiler, linker_so=compiler + ' -shared -lstdc++') ########################## LD_LIBRARY_PATH is : ######################### /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib ######################### -AKSHAR On Sat, Oct 22, 2011 at 11:32 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale > wrote: > >> Hi, >> >> unfortunately 1.6.1 also hangs on the same place. Can i move ahead with >> installing scipy? >> >> > Hmm. Well, give scipy a try, but it would be nice to know what the problem > is with einsum. 
I'm thinking compiler, GCC 4.1.2 might be a bit old, but > it could easily be something else. Can you give us more information about > your system? > > Chuck > >> >> On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale < >>> akshar.bhosale at gmail.com> wrote: >>> >>>> Hi, >>>> does this mean that numpy is not configured properly or i can ignore >>>> this and go ahead with scipy installation? >>>> >>> >>> Scipy will probably work, but you should really install numpy 1.6.1 >>> instead of 1.6.0. >>> >>> >>> >>> Chuck >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Sat Oct 22 04:16:49 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 13:46:49 +0530 Subject: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: Hi, python Python 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.show_config() lapack_opt_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_opt_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] lapack_mkl_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] Akshar On Sat, Oct 22, 2011 at 11:54 AM, akshar bhosale wrote: > yes sure.. > i have intel cluster toolkit installed on my system. (11/069 version and > mkl 10.3). i have machine having intel xeon processor and rhel 5.2 x86_64 > platform. i am trying with intel compilers. 
> if i do > > python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], > numpy.complex128).T.I.H' > python: symbol lookup error: > /opt/intel/Compiler/11.0/069/mkl/lib/em64/libmkl_lapack.so: undefined > symbol: mkl_lapack_zgeqrf > > my site.cfg is : > #################### > [mkl] > > mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc > lapack_libs = mkl_lapack95_lp64 > > library_dirs = > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ > include_dirs = > /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ > #################### > and intelcompiler.py is : > ############################ > from distutils.unixccompiler import UnixCCompiler > from numpy.distutils.exec_command import find_executable > import sys > > class IntelCCompiler(UnixCCompiler): > """ A modified Intel compiler compatible with an gcc built Python.""" > compiler_type = 'intel' > cc_exe = 'icc' > cc_args = 'fPIC' > > def __init__ (self, verbose=0, dry_run=0, force=0): > sys.exit(0) > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -fPIC ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > > class IntelItaniumCCompiler(IntelCCompiler): > compiler_type = 'intele' > > # On Itanium, the Intel Compiler used to be called ecc, let's search > for > # it (now it's also icc, so ecc is last in the search). > for cc_exe in map(find_executable,['icc','ecc']): > if cc_exe: > break > > class IntelEM64TCCompiler(UnixCCompiler): > """ A modified Intel x86_64 compiler compatible with a 64bit gcc built > Python. 
> """ > compiler_type = 'intelem' > cc_exe = 'icc -m64 -fPIC' > cc_args = "-fPIC -openmp" > def __init__ (self, verbose=0, dry_run=0, force=0): > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -m64 -fPIC -openmp ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > ########################## > LD_LIBRARY_PATH is : > ######################### > > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib > ######################### > > -AKSHAR > > > > On Sat, Oct 22, 2011 at 11:32 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale < >> akshar.bhosale at gmail.com> wrote: >> >>> Hi, >>> >>> unfortunately 1.6.1 also hangs on the same place. Can i move ahead with >>> installing scipy? >>> >>> >> Hmm. Well, give scipy a try, but it would be nice to know what the problem >> is with einsum. I'm thinking compiler, GCC 4.1.2 might be a bit old, but >> it could easily be something else. Can you give us more information about >> your system? 
>> >> Chuck >> >>> >>> On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale < >>>> akshar.bhosale at gmail.com> wrote: >>>> >>>>> Hi, >>>>> does this mean that numpy is not configured properly or i can ignore >>>>> this and go ahead with scipy installation? >>>>> >>>> >>>> Scipy will probably work, but you should really install numpy 1.6.1 >>>> instead of 1.6.0. >>>> >>>> >>>> >>>> Chuck >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Sat Oct 22 05:24:39 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 14:54:39 +0530 Subject: [SciPy-Dev] Fwd: numpy.test hangs In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: akshar bhosale Date: Sat, Oct 22, 2011 at 1:46 PM Subject: Re: [SciPy-Dev] numpy.test hangs To: SciPy Developers List , Discussion of Numerical Python Hi, python Python 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.show_config() lapack_opt_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_opt_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] lapack_mkl_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] Akshar On Sat, Oct 22, 2011 at 11:54 AM, akshar bhosale wrote: > yes sure.. > i have intel cluster toolkit installed on my system. (11/069 version and > mkl 10.3). i have machine having intel xeon processor and rhel 5.2 x86_64 > platform. i am trying with intel compilers. 
> if i do > > python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], > numpy.complex128).T.I.H' > python: symbol lookup error: > /opt/intel/Compiler/11.0/069/mkl/lib/em64/libmkl_lapack.so: undefined > symbol: mkl_lapack_zgeqrf > > my site.cfg is : > #################### > [mkl] > > mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc > lapack_libs = mkl_lapack95_lp64 > > library_dirs = > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ > include_dirs = > /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ > #################### > and intelcompiler.py is : > ############################ > from distutils.unixccompiler import UnixCCompiler > from numpy.distutils.exec_command import find_executable > import sys > > class IntelCCompiler(UnixCCompiler): > """ A modified Intel compiler compatible with an gcc built Python.""" > compiler_type = 'intel' > cc_exe = 'icc' > cc_args = 'fPIC' > > def __init__ (self, verbose=0, dry_run=0, force=0): > sys.exit(0) > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -fPIC ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > > class IntelItaniumCCompiler(IntelCCompiler): > compiler_type = 'intele' > > # On Itanium, the Intel Compiler used to be called ecc, let's search > for > # it (now it's also icc, so ecc is last in the search). > for cc_exe in map(find_executable,['icc','ecc']): > if cc_exe: > break > > class IntelEM64TCCompiler(UnixCCompiler): > """ A modified Intel x86_64 compiler compatible with a 64bit gcc built > Python. 
> """ > compiler_type = 'intelem' > cc_exe = 'icc -m64 -fPIC' > cc_args = "-fPIC -openmp" > def __init__ (self, verbose=0, dry_run=0, force=0): > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -m64 -fPIC -openmp ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > ########################## > LD_LIBRARY_PATH is : > ######################### > > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib > ######################### > > -AKSHAR > > > > On Sat, Oct 22, 2011 at 11:32 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale < >> akshar.bhosale at gmail.com> wrote: >> >>> Hi, >>> >>> unfortunately 1.6.1 also hangs on the same place. Can i move ahead with >>> installing scipy? >>> >>> >> Hmm. Well, give scipy a try, but it would be nice to know what the problem >> is with einsum. I'm thinking compiler, GCC 4.1.2 might be a bit old, but >> it could easily be something else. Can you give us more information about >> your system? 
From millman at berkeley.edu Fri Oct 28 13:41:57 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 28 Oct 2011 10:41:57 -0700 Subject: [SciPy-Dev] [ANN] SciPy India 2011 Abstracts due November 2nd Message-ID: ========================== SciPy 2011 Call for Papers ========================== The third `SciPy India Conference `_ will be held from December 4th through the 7th at the `Indian Institute of Technology, Bombay (IITB) `_ in Mumbai, Maharashtra India. At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development.
The conference is followed by two days of tutorials and a code sprint, during which community experts provide training on several scientific Python packages. We invite you to take part by submitting a talk abstract on the conference website at: http://scipy.in Talk/Paper Submission ========================== We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research. We welcome contributions from academia as well as industry. Important Dates ========================== November 2, 2011, Wednesday: Abstracts Due November 7, 2011, Monday: Schedule announced November 28, 2011, Monday: Proceedings paper submission due December 4-5, 2011, Sunday-Monday: Conference December 6-7, 2011, Tuesday-Wednesday: Tutorials/Sprints Organizers ========================== * Jarrod Millman, Neuroscience Institute, UC Berkeley, USA (Conference Co-Chair) * Prabhu Ramachandran, Department of Aerospace Engineering, IIT Bombay, India (Conference Co-Chair) * FOSSEE Team From vladimir.rdguez at gmail.com Sat Oct 29 09:14:43 2011 From: vladimir.rdguez at gmail.com (Vladímir Rodríguez) Date: Sat, 29 Oct 2011 09:14:43 -0400 Subject: [SciPy-Dev] tplquad alternative Message-ID: I have a working alternative to tplquad which I built from http://dl.acm.org/citation.cfm?id=155785&dl=ACM&coll=DL&CFID=66021551&CFTOKEN=74592904 via F2PY. For testing purposes you can use Scilab's int3d function (also from this code); I found it faster and smarter. How can we include it in the standard Scipy library?
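For context, tplquad-style routines evaluate a triple integral by nesting one-dimensional quadrature rules. The sketch below only illustrates that structure with a fixed-step midpoint rule over a box; it is a toy stand-in for neither the adaptive TOMS routine nor `scipy.integrate.tplquad`, and the function names here are invented for the example.

```python
def midpoint_1d(f, a, b, n=16):
    # Composite midpoint rule for a single 1-D integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def triple_quad(f, ax, bx, ay, by, az, bz, n=16):
    # Nest three 1-D rules: integrate over z innermost, then y, then x.
    return midpoint_1d(
        lambda x: midpoint_1d(
            lambda y: midpoint_1d(lambda z: f(x, y, z), az, bz, n),
            ay, by, n),
        ax, bx, n)

# Integral of x*y*z over the unit cube; the exact value is 1/8.
approx = triple_quad(lambda x, y, z: x * y * z, 0, 1, 0, 1, 0, 1)
```

An adaptive routine such as the TOMS code additionally chooses the subdivision automatically and supports non-rectangular regions via limit functions, which is where the real implementation effort lies.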
Vladímir From pav at iki.fi Sat Oct 29 12:27:00 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 29 Oct 2011 18:27:00 +0200 Subject: [SciPy-Dev] tplquad alternative In-Reply-To: References: Message-ID: 29.10.2011 15:14, Vladímir Rodríguez wrote: > I have a working alternative to tplquad which I built from > http://dl.acm.org/citation.cfm?id=155785&dl=ACM&coll=DL&CFID=66021551&CFTOKEN=74592904 > via F2PY. For testing purposes you can use Scilab's int3d function (also > from this code); I found it faster and smarter. > How can we include it in the standard Scipy library? Not possible. ACM owns the copyright of software published in its TOMS magazine, and the ACM software license allows use "solely for academic, research and other similar noncommercial uses". http://www.acm.org/publications/policies/softwarecrnotice Such software cannot be included in Scipy. Scilab, or anyone who uses Scilab in a commercial setting, is actually violating the copyright of the ACM if it includes this code. -- Pauli Virtanen From ralf.gommers at googlemail.com Sun Oct 30 15:41:51 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 30 Oct 2011 20:41:51 +0100 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update Message-ID: Hi all, For those of you who are wondering what the status of the 0.10.0 release is, we should have RC1 ready very soon. The blocker was/is that single precision Arpack routines on OS X are completely broken. For now the plan is to simply switch to double precision. See https://github.com/scipy/scipy/pull/98 There was one other unresolved issue that came up for the beta release, which is that on Windows there's a Cython bug which prevents compilation with any released Cython version. We can't switch to the latest released Cython + bugfix, because the io.matlab Cython files can't be compiled with them: http://projects.scipy.org/scipy/ticket/1524. My assessment of this issue is that it's not a blocker for the release. Please speak up if you disagree.
Cheers, Ralf From matthew.brett at gmail.com Sun Oct 30 16:04:15 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 30 Oct 2011 13:04:15 -0700 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: Hi, On Sun, Oct 30, 2011 at 12:41 PM, Ralf Gommers wrote: > Hi all, > > For those of you who are wondering what the status of the 0.10.0 release is, > we should have RC1 ready very soon. The blocker was/is that single precision > Arpack routines on OS X are completely broken. For now the plan is to simply > switch to double precision. See https://github.com/scipy/scipy/pull/98 > > There was one other unresolved issue that came up for the beta release, > which is that on Windows there's a Cython bug which prevents compilation > with any released Cython version. We can't switch to the latest released > Cython + bugfix, because the io.matlab Cython files can't be compiled with > them: http://projects.scipy.org/scipy/ticket/1524. My assessment of this > issue is that it's not a blocker for the release. Please speak up if you > disagree. I've got some other work I'd like to do on that (matlab io) code - so if you can wait a few days, I will try and fix that. See you, Matthew From josef.pktd at gmail.com Sun Oct 30 17:17:07 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 30 Oct 2011 17:17:07 -0400 Subject: [SciPy-Dev] oops, don't trust a test even if it's in the test suite Message-ID: I made a mistake when I wrote some tests for scipy.stats.distributions, a typo and a thinko, and I didn't use pre-made tests. stats() for some distributions returns an incorrect variance; I thought only skew and kurtosis were still wrong.
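A cheap guard against this class of bug is to cross-check a distribution's reported moments against direct numeric integration of its density. Below is a stdlib-only sketch of the idea, using the exponential distribution as an example; in practice one would compare the result against the output of the distribution's stats() method, and the helper name here is made up for the illustration.

```python
import math

def numeric_mean_var(pdf, a, b, n=100_000):
    # Approximate E[X] and Var[X] = E[X^2] - E[X]^2 with the midpoint rule.
    h = (b - a) / n
    m1 = m2 = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        w = pdf(x) * h
        m1 += x * w
        m2 += x * x * w
    return m1, m2 - m1 * m1

# Exponential distribution with rate lam: mean is 1/lam, variance 1/lam**2.
lam = 2.0
mean, var = numeric_mean_var(lambda x: lam * math.exp(-lam * x), 0.0, 40.0)
```

Such a check is deliberately independent of the analytic moment formulas in the library, so a typo in either the implementation or the test shows up as a mismatch rather than cancelling out.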
http://projects.scipy.org/scipy/ticket/1546 Josef From ralf.gommers at googlemail.com Sun Oct 30 17:31:07 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 30 Oct 2011 22:31:07 +0100 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: On Sun, Oct 30, 2011 at 9:04 PM, Matthew Brett wrote: > Hi, > > On Sun, Oct 30, 2011 at 12:41 PM, Ralf Gommers > wrote: > > Hi all, > > > > For those of you who are wondering what the status of the 0.10.0 release > is, > > we should have RC1 ready very soon. The blocker was/is that single > precision > > Arpack routines on OS X are completely broken. For now the plan is to > simply > > switch to double precision. See https://github.com/scipy/scipy/pull/98 > > > > There was one other unresolved issue that came up for the beta release, > > which is that on Windows there's a Cython bug which prevents compilation > > with any released Cython version. We can't switch to the latest released > > Cython + bugfix, because the io.matlab Cython files can't be compiled > with > > them: http://projects.scipy.org/scipy/ticket/1524. My assessment of this > > issue is that it's not a blocker for the release. Please speak up if you > > disagree. > > I've got some other work I'd like to do on that (matlab io) code - so > if you can wait a few days, I will try and fix that. > > Sure, that would be great. Ralf
From matthew.brett at gmail.com Sun Oct 30 18:22:35 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 30 Oct 2011 15:22:35 -0700 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: Hi, On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers wrote: > > Ralf It's just a one-liner fix to the compilation for cython 0.15, as far as I can see: diff --git a/scipy/io/matlab/mio5_utils.pyx b/scipy/io/matlab/mio5_utils.pyx index cb15a00..1604a61 100644 --- a/scipy/io/matlab/mio5_utils.pyx +++ b/scipy/io/matlab/mio5_utils.pyx @@ -169,7 +169,7 @@ cdef class VarReader5: * mat_dtype (bool) * squeeze_me (bool) """ - def __new__(self, preader): + def __cinit__(self, preader): byte_order = preader.byte_order self.is_swapped = byte_order == swapped_code if self.is_swapped: Do you want me to build the c files with Cython 0.15 as well? I've also put in a pull request to add the nice 'cythonize' script by Dag Sverre: https://github.com/scipy/scipy/pull/99 It's useful for making sure the c files are up to date. See you, Matthew From ralf.gommers at googlemail.com Sun Oct 30 18:56:07 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 30 Oct 2011 23:56:07 +0100 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: On Sun, Oct 30, 2011 at 11:22 PM, Matthew Brett wrote: > Hi, > > On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers > wrote: > > > > Ralf > > It's just a one-liner fix to the compilation for cython 0.15, as far > as I can see: > That's good news.
> > diff --git a/scipy/io/matlab/mio5_utils.pyx > b/scipy/io/matlab/mio5_utils.pyx > index cb15a00..1604a61 100644 > --- a/scipy/io/matlab/mio5_utils.pyx > +++ b/scipy/io/matlab/mio5_utils.pyx > @@ -169,7 +169,7 @@ cdef class VarReader5: > * mat_dtype (bool) > * squeeze_me (bool) > """ > - def __new__(self, preader): > + def __cinit__(self, preader): > byte_order = preader.byte_order > self.is_swapped = byte_order == swapped_code > if self.is_swapped: > > Do you want me to build the c files with Cython 0.15 as well? > Either way is fine. All Cython files should be regenerated in any case with this fix included: https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 Thanks, Ralf > I've also put in a pull request to add the nice 'cythonize' script by > Dag Sverre: > > https://github.com/scipy/scipy/pull/99 > > It's useful for making sure the c files are up to date. > > From matthew.brett at gmail.com Mon Oct 31 02:46:37 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 30 Oct 2011 23:46:37 -0700 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: Hi, On Sun, Oct 30, 2011 at 3:56 PM, Ralf Gommers wrote: > > > On Sun, Oct 30, 2011 at 11:22 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers >> wrote: >> > >> > Ralf >> >> It's just a one-liner fix to the compilation for cython 0.15, as far >> as I can see: > > That's good news. >> >> diff --git a/scipy/io/matlab/mio5_utils.pyx >> b/scipy/io/matlab/mio5_utils.pyx >> index cb15a00..1604a61 100644 >> --- a/scipy/io/matlab/mio5_utils.pyx >> +++ b/scipy/io/matlab/mio5_utils.pyx >> @@ -169,7 +169,7 @@ cdef class VarReader5: >> * mat_dtype (bool) >> * squeeze_me (bool) >> """ >> - def __new__(self, preader): >> + def __cinit__(self, preader): >> byte_order = preader.byte_order >> self.is_swapped = byte_order == swapped_code >> if self.is_swapped: >> >> Do you want me to build the c files with Cython 0.15 as well? > > Either way is fine. All Cython files should be regenerated in any case with > this fix included: > https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 Maybe I'll leave the building to you then - so we can make sure they all get built right? I'll just submit the thing above as a tiny pull request, as I believe it has no functional consequences. See you, Matthew From pav at iki.fi Mon Oct 31 06:05:50 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 31 Oct 2011 11:05:50 +0100 Subject: [SciPy-Dev] tplquad alternative In-Reply-To: References: Message-ID: 29.10.2011 18:27, Pauli Virtanen wrote: > 29.10.2011 15:14, Vladímir Rodríguez wrote: >> I have a working alternative to tplquad which I built from >> http://dl.acm.org/citation.cfm?id=155785&dl=ACM&coll=DL&CFID=66021551&CFTOKEN=74592904 >> via F2PY. For testing purposes you can use Scilab's int3d function (also >> from this code); I found it faster and smarter. >> How can we include it in the standard Scipy library? > > Not possible. ACM owns the copyright of software published in its TOMS > magazine, and the ACM software license allows use "solely for academic, > research and other similar noncommercial uses". [clip] Or, at least, this is how it currently apparently works. Whether such restrictions apply to code published decades ago (when such a license probably didn't exist), I'm not sure. However, some clarification on the license status of the code is needed before including it. One way to obtain this would be to ask the ACM directly about their stance on this.
-- Pauli Virtanen From robert.kern at gmail.com Mon Oct 31 06:22:06 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 31 Oct 2011 10:22:06 +0000 Subject: [SciPy-Dev] tplquad alternative In-Reply-To: References: Message-ID: On Mon, Oct 31, 2011 at 10:05, Pauli Virtanen wrote: > 29.10.2011 18:27, Pauli Virtanen wrote: >> 29.10.2011 15:14, Vladímir Rodríguez wrote: >>> I have a working alternative to tplquad which I built from >>> http://dl.acm.org/citation.cfm?id=155785&dl=ACM&coll=DL&CFID=66021551&CFTOKEN=74592904 >>> via F2PY. For testing purposes you can use Scilab's int3d function (also >>> from this code); I found it faster and smarter. >>> How can we include it in the standard Scipy library? >> >> Not possible. ACM owns the copyright of software published in its TOMS >> magazine, and the ACM software license allows use "solely for academic, >> research and other similar noncommercial uses". > [clip] > > Or, at least, this is how it currently apparently works. Whether such > restrictions apply to code published decades ago (when such > a license probably didn't exist), I'm not sure. > > However, some clarification on the license status of the code is needed > before including it. One way to obtain this would be to ask the ACM > directly about their stance on this. Alan Isaac has had success getting a BSD license for ACM TOMS code for use in scipy. Here is the contact information he has used in the past. Deborah Cotton, Copyright & Permissions permissions AT acm.org ACM Publications 2 Penn Plaza, Suite 701 New York, NY 10121-0701 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From josef.pktd at gmail.com Mon Oct 31 08:41:27 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 31 Oct 2011 08:41:27 -0400 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: On Mon, Oct 31, 2011 at 2:46 AM, Matthew Brett wrote: > Hi, > > On Sun, Oct 30, 2011 at 3:56 PM, Ralf Gommers > wrote: >> >> >> On Sun, Oct 30, 2011 at 11:22 PM, Matthew Brett >> wrote: >>> >>> Hi, >>> >>> On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers >>> wrote: >>> > >>> > Ralf >>> >>> It's just a one-liner fix to the compilation for cython 0.15, as far >>> as I can see: >> >> That's good news. >>> >>> diff --git a/scipy/io/matlab/mio5_utils.pyx >>> b/scipy/io/matlab/mio5_utils.pyx >>> index cb15a00..1604a61 100644 >>> --- a/scipy/io/matlab/mio5_utils.pyx >>> +++ b/scipy/io/matlab/mio5_utils.pyx >>> @@ -169,7 +169,7 @@ cdef class VarReader5: >>> * mat_dtype (bool) >>> * squeeze_me (bool) >>> """ >>> - def __new__(self, preader): >>> + def __cinit__(self, preader): >>> byte_order = preader.byte_order >>> self.is_swapped = byte_order == swapped_code >>> if self.is_swapped: >>> >>> Do you want me to build the c files with Cython 0.15 as well? >> >> Either way is fine. All Cython files should be regenerated in any case with >> this fix included: >> https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 > > Maybe I'll leave the building to you then - so we can make sure they > all get built right? > > I'll just submit the thing above as a tiny pull request, as I believe > it has no functional consequences. Ralf, I finally prepared a pull request for fixing the distribution fit() with fixed parameters, including tests. Since the current version is broken, it would be good to get this into 0.10.
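For readers who haven't followed the thread: fitting with fixed parameters means pinning, say, the location at a known value and estimating only the remaining parameters by maximum likelihood; in scipy.stats this is spelled with keyword arguments such as floc and fscale. The snippet below is a stdlib-only illustration of the underlying idea for a normal distribution with the location held fixed, not the actual scipy implementation, and the function name is made up for the example.

```python
import math
import random

def fit_normal_scale(data, floc=0.0):
    # MLE of the scale of a normal distribution when the location is fixed:
    # sigma_hat = sqrt(mean((x - floc)**2)).
    return math.sqrt(sum((x - floc) ** 2 for x in data) / len(data))

random.seed(0)
sample = [random.gauss(0.0, 3.0) for _ in range(10_000)]
sigma_hat = fit_normal_scale(sample, floc=0.0)  # should land close to 3.0
```

The point of the feature is exactly this reduction: with some parameters fixed, the optimizer only searches over the free ones, which is both faster and more robust than fitting everything at once.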
Thanks, Josef > > See you, > > Matthew From ralf.gommers at googlemail.com Mon Oct 31 14:25:27 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 31 Oct 2011 19:25:27 +0100 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: On Mon, Oct 31, 2011 at 7:46 AM, Matthew Brett wrote: > Hi, > > On Sun, Oct 30, 2011 at 3:56 PM, Ralf Gommers > wrote: > > > > > > On Sun, Oct 30, 2011 at 11:22 PM, Matthew Brett > > > wrote: > >> > >> Hi, > >> > >> On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers > >> wrote: > >> > > >> > Ralf > >> > >> It's just a one-liner fix to the compilation for cython 0.15, as far > >> as I can see: > > > > That's good news. > >> > >> diff --git a/scipy/io/matlab/mio5_utils.pyx > >> b/scipy/io/matlab/mio5_utils.pyx > >> index cb15a00..1604a61 100644 > >> --- a/scipy/io/matlab/mio5_utils.pyx > >> +++ b/scipy/io/matlab/mio5_utils.pyx > >> @@ -169,7 +169,7 @@ cdef class VarReader5: > >> * mat_dtype (bool) > >> * squeeze_me (bool) > >> """ > >> - def __new__(self, preader): > >> + def __cinit__(self, preader): > >> byte_order = preader.byte_order > >> self.is_swapped = byte_order == swapped_code > >> if self.is_swapped: > >> > >> Do you want me to build the c files with Cython 0.15 as well? > > > > Either way is fine. All Cython files should be regenerated in any case > with > > this fix included: > > > https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 > > Maybe I'll leave the building to you then - so we can make sure they > all get built right? > > I'll just submit the thing above as a tiny pull request. as I believe > it has no functional consequences. > > Sounds good. Ralf
From josef.pktd at gmail.com Mon Oct 31 21:07:12 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 31 Oct 2011 21:07:12 -0400 Subject: [SciPy-Dev] scipy 0.10.0 release schedule update In-Reply-To: References: Message-ID: On Mon, Oct 31, 2011 at 2:25 PM, Ralf Gommers wrote: > > > On Mon, Oct 31, 2011 at 7:46 AM, Matthew Brett > wrote: >> >> Hi, >> >> On Sun, Oct 30, 2011 at 3:56 PM, Ralf Gommers >> wrote: >> > >> > >> > On Sun, Oct 30, 2011 at 11:22 PM, Matthew Brett >> > >> > wrote: >> >> >> >> Hi, >> >> >> >> On Sun, Oct 30, 2011 at 2:31 PM, Ralf Gommers >> >> wrote: >> >> > >> >> > Ralf >> >> >> >> It's just a one-liner fix to the compilation for cython 0.15, as far >> >> as I can see: >> > >> > That's good news. >> >> >> >> diff --git a/scipy/io/matlab/mio5_utils.pyx >> >> b/scipy/io/matlab/mio5_utils.pyx >> >> index cb15a00..1604a61 100644 >> >> --- a/scipy/io/matlab/mio5_utils.pyx >> >> +++ b/scipy/io/matlab/mio5_utils.pyx >> >> @@ -169,7 +169,7 @@ cdef class VarReader5: >> >> * mat_dtype (bool) >> >> * squeeze_me (bool) >> >> """ >> >> - def __new__(self, preader): >> >> + def __cinit__(self, preader): >> >> byte_order = preader.byte_order >> >> self.is_swapped = byte_order == swapped_code >> >> if self.is_swapped: >> >> >> >> Do you want me to build the c files with Cython 0.15 as well? >> > >> > Either way is fine. All Cython files should be regenerated in any case >> > with >> > this fix included: >> > >> > https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 >> >> Maybe I'll leave the building to you then - so we can make sure they >> all get built right? >> >> I'll just submit the thing above as a tiny pull request, as I believe >> it has no functional consequences. >> > Sounds good. What happened to the bfgs endless loop? I can't find a pull request or any information anymore. It would be nice to have it in the next release.
Josef > > Ralf