From hofsaess at ifb.uni-stuttgart.de Thu Sep 1 02:24:06 2011 From: hofsaess at ifb.uni-stuttgart.de (=?ISO-8859-1?Q?Martin_Hofs=E4=DF?=) Date: Thu, 01 Sep 2011 08:24:06 +0200 Subject: [SciPy-Dev] problems building scipy with scons & mkl In-Reply-To: <4E5E509D.8030901@uci.edu> References: <4E5DEEBE.2040807@ifb.uni-stuttgart.de> <4E5E509D.8030901@uci.edu> Message-ID: <4E5F2506.2050107@ifb.uni-stuttgart.de> Hi, thanks Christoph for your help, now it's working. Martin Am 31.08.2011 17:17, schrieb Christoph Gohlke: > On 8/31/2011 1:20 AM, Martin Hofs?? wrote: >> Hi all, >> >> I updated my git repository today and want to rebuild scipy with scons >> and mkl support. >> >> But I get errors (please see attachment). >> >> Can anyone help me? >> >> Regards >> >> Martin >> > Hi, > > Try change line 350 in scipy/linalg/fblas_l1.pyf.src to > > fortranname F_FUNC(iamax,IAMAX) > > > > > Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From david.huard at gmail.com Fri Sep 2 16:10:33 2011 From: david.huard at gmail.com (David Huard) Date: Fri, 2 Sep 2011 16:10:33 -0400 Subject: [SciPy-Dev] NearestNDInterpolator refuses to handle (N,1) arrays Message-ID: Hi, In version 0.9.0, the NearestNDInterpolator class raises an error if given arrays of shape (N,1). The error is raised by _check_init_shape if points.shape[1] < 2: raise ValueError("input data must be at least 2-D") but the KDTree implementation that is used for interpolation can handle (N,1) arrays. I'm wondering if there is a reason for not allowing those cases through or it's just an oversight. Thanks, David From pav at iki.fi Sat Sep 3 06:49:17 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 3 Sep 2011 10:49:17 +0000 (UTC) Subject: [SciPy-Dev] NearestNDInterpolator refuses to handle (N, 1) arrays References: Message-ID: On Fri, 02 Sep 2011 16:10:33 -0400, David Huard wrote: > In version 0.9.0, the NearestNDInterpolator class raises an error if > given arrays of shape (N,1). The error is raised by _check_init_shape > > if points.shape[1] < 2: > raise ValueError("input data must be at least 2-D") > > but the KDTree implementation that is used for interpolation can handle > (N,1) arrays. I'm wondering if there is a reason for not allowing those > cases through or it's just an oversight. Probably an oversight (file a ticket :), probably because the Qhull-based methods need >= 2-D data. -- Pauli Virtanen From nwagner at iam.uni-stuttgart.de Sun Sep 4 09:53:35 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 04 Sep 2011 15:53:35 +0200 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices Message-ID: Hi all, how can I solve the eigenproblem A x = \lambda B x where both matrices are rank deficient ? Nils From charlesr.harris at gmail.com Sun Sep 4 11:29:19 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Sep 2011 09:29:19 -0600 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices In-Reply-To: References: Message-ID: On Sun, Sep 4, 2011 at 7:53 AM, Nils Wagner wrote: > Hi all, > > how can I solve the eigenproblem > > A x = \lambda B x > > where both matrices are rank deficient ? > I'd do eigh and transform the problem to something like: U * A * U^t * x= \lambda D * x where D is diagonal. 
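For concreteness, a minimal sketch of that reduction (toy 2x2 matrices as stand-ins, assuming both A and B are symmetric):

import numpy as np
from scipy.linalg import eigh

# Toy symmetric, rank-deficient example matrices.
A = np.array([[2.0, 1.0],
              [1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Diagonalize B:  B = V diag(d) V^T.  With U = V^T and y = U x the problem
# A x = lambda B x becomes  (U A U^T) y = lambda D y.
d, V = eigh(B)
U = V.T
A2 = np.dot(U, np.dot(A, U.T))
D = np.diag(d)

# Numerically zero entries of d expose the rank deficiency of B; the
# corresponding rows/columns need separate treatment (see the caveats below).
tol = 1e-12 * max(abs(d).max(), 1.0)
print("rank of B: %d" % (abs(d) > tol).sum())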
Note that the solutions may not be unique and \lambda can be arbitrary, as you can see by studying A = B = array([[1, 0], [0, 0]]) Where there are solutions for arbitrary \lambda. Likewise, there may be no solutions under the requirement that x is non-zero: A = array([[1, 1], [1, 0]]), B = array([[1, 0], [0, 0]]) The usual case where B is positive definite corresponds to finding extrema on a compact surface x^t * B *x = 1, but the surface is no longer compact when B isn't positive definite. Note that these cases are all sensitive to roundoff error. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Sun Sep 4 12:09:03 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 04 Sep 2011 18:09:03 +0200 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices In-Reply-To: References: Message-ID: On Sun, 4 Sep 2011 09:29:19 -0600 Charles R Harris wrote: > On Sun, Sep 4, 2011 at 7:53 AM, Nils Wagner >wrote: > >> Hi all, >> >> how can I solve the eigenproblem >> >> A x = \lambda B x >> >> where both matrices are rank deficient ? >> > > I'd do eigh and transform the problem to something like: > > U * A * U^t * x= \lambda D * x > > where D is diagonal. Note that the solutions may not be >unique and \lambda > can be arbitrary, as you can see by studying > > A = B = array([[1, 0], [0, 0]]) > > Where there are solutions for arbitrary \lambda. >Likewise, there may be no > solutions under the requirement that x is non-zero: > > A = array([[1, 1], [1, 0]]), > B = array([[1, 0], [0, 0]]) > > The usual case where B is positive definite corresponds >to finding extrema > on a compact surface x^t * B *x = 1, but the surface is >no longer compact > when B isn't positive definite. Note that these cases >are all sensitive to > roundoff error. > > Chuck Hi Chuck, I am only interested in the real and complex eigensolutions. The complex eigenvalues appear in pairs a_i \pm \sqrt{-1} b_i sind A and B are real. How can I reject infinite eigenvalues ? Both matrices, A and B, are indefinite. Nils From charlesr.harris at gmail.com Sun Sep 4 12:55:43 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Sep 2011 10:55:43 -0600 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices In-Reply-To: References: Message-ID: On Sun, Sep 4, 2011 at 10:09 AM, Nils Wagner wrote: > On Sun, 4 Sep 2011 09:29:19 -0600 > Charles R Harris wrote: > > On Sun, Sep 4, 2011 at 7:53 AM, Nils Wagner > >wrote: > > > >> Hi all, > >> > >> how can I solve the eigenproblem > >> > >> A x = \lambda B x > >> > >> where both matrices are rank deficient ? > >> > > > > I'd do eigh and transform the problem to something like: > > > > U * A * U^t * x= \lambda D * x > > > > where D is diagonal. Note that the solutions may not be > >unique and \lambda > > can be arbitrary, as you can see by studying > > > > A = B = array([[1, 0], [0, 0]]) > > > > Where there are solutions for arbitrary \lambda. > >Likewise, there may be no > > solutions under the requirement that x is non-zero: > > > > A = array([[1, 1], [1, 0]]), > > B = array([[1, 0], [0, 0]]) > > > > The usual case where B is positive definite corresponds > >to finding extrema > > on a compact surface x^t * B *x = 1, but the surface is > >no longer compact > > when B isn't positive definite. Note that these cases > >are all sensitive to > > roundoff error. > > > > Chuck > > Hi Chuck, > > I am only interested in the real and complex > eigensolutions. 
> The complex eigenvalues appear in pairs a_i \pm \sqrt{-1} > b_i sind A and B are real. > How can I reject infinite eigenvalues ? > Both matrices, A and B, are indefinite. > > It depends on the particular problem. In general, the solution to the generalized eigenvalue problem starts by making a variable substitution that reduces B to the identity matrix, usually by using a Cholesky factorization, i.e., B = U^t U, y = U x. This can still be done in the (numerically) indefinite case but Cholesky won't be reliable and that is why I suggested eigh. Note that the problem we are trying to solve is finding the extrema of x^t A x subject to the constraint x^t B x = 1, \lambda is then a Lagrange multiplier. I assume A and B are both symmetric? Anyway, in terms of y, the problem then reduces to finding extrema of y^t U^t^{-1} A U^{-1} y subject to y^t D y = 1, where D is diagonal and has ones along part of the diagonal, zeros for the remainder. In the usual case, D is the identity. The trick is then to divide y into two parts, one for where D is one (u), another for the rest (v), so that y = [u v]. If you are lucky, the v can be solved in terms of u using the transformed A, and things will reduce to an eigenvalue problem for u. If the original A was symmetric, so will be the reduced problem. If you can't solve v as a function of u, then you can reduce things further, but it is possible that at some point there is no solution. I don't have practical experience with this sort of problem with indefinite B, so I can't tell you much more than that. I assume you've googled for relevant documents. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Sep 4 13:12:01 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Sep 2011 11:12:01 -0600 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices In-Reply-To: References: Message-ID: On Sun, Sep 4, 2011 at 10:55 AM, Charles R Harris wrote: > > > On Sun, Sep 4, 2011 at 10:09 AM, Nils Wagner > wrote: > >> On Sun, 4 Sep 2011 09:29:19 -0600 >> Charles R Harris wrote: >> > On Sun, Sep 4, 2011 at 7:53 AM, Nils Wagner >> >wrote: >> > >> >> Hi all, >> >> >> >> how can I solve the eigenproblem >> >> >> >> A x = \lambda B x >> >> >> >> where both matrices are rank deficient ? >> >> >> > >> > I'd do eigh and transform the problem to something like: >> > >> > U * A * U^t * x= \lambda D * x >> > >> > where D is diagonal. Note that the solutions may not be >> >unique and \lambda >> > can be arbitrary, as you can see by studying >> > >> > A = B = array([[1, 0], [0, 0]]) >> > >> > Where there are solutions for arbitrary \lambda. >> >Likewise, there may be no >> > solutions under the requirement that x is non-zero: >> > >> > A = array([[1, 1], [1, 0]]), >> > B = array([[1, 0], [0, 0]]) >> > >> > The usual case where B is positive definite corresponds >> >to finding extrema >> > on a compact surface x^t * B *x = 1, but the surface is >> >no longer compact >> > when B isn't positive definite. Note that these cases >> >are all sensitive to >> > roundoff error. >> > >> > Chuck >> >> Hi Chuck, >> >> I am only interested in the real and complex >> eigensolutions. >> The complex eigenvalues appear in pairs a_i \pm \sqrt{-1} >> b_i sind A and B are real. >> How can I reject infinite eigenvalues ? >> Both matrices, A and B, are indefinite. >> >> > It depends on the particular problem. 
In general, the solution to the > generalized eigenvalue problem starts by making a variable substitution that > reduces B to the identity matrix, usually by using a Cholesky factorization, > i.e., B = U^t U, y = U x. This can still be done in the (numerically) > indefinite case but Cholesky won't be reliable and that is why I suggested > eigh. Note that the problem we are trying to solve is > finding the extrema of x^t A x subject to the constraint x^t B x = 1, > \lambda is then a Lagrange multiplier. I assume A and B are both symmetric? > Anyway, in terms of y, the problem then reduces to finding extrema of y^t > U^t^{-1} A U^{-1} y subject to y^t D y = 1, where D is diagonal and has ones > along part of the diagonal, zeros for the remainder. In the usual case, D is > the identity. The trick is then to divide y into two parts, one for where D > is one (u), another for the rest (v), so that y = [u v]. If you are lucky, > the v can be solved in terms of u using the transformed A, and things will > reduce to an eigenvalue problem for u. If the original A was symmetric, so > will be the reduced problem. If you can't solve v as a function of u, then > you can reduce things further, but it is possible that at some point there > is no solution. > > I don't have practical experience with this sort of problem with indefinite > B, so I can't tell you much more than that. I assume you've googled for > relevant documents. > > It occurs to me that even if A and B aren't symmetric, that you can still bring B to the desired form using the svd. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Sun Sep 4 13:46:26 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 04 Sep 2011 19:46:26 +0200 Subject: [SciPy-Dev] Generalized eigenproblem with rank deficient matrices In-Reply-To: References: Message-ID: On Sun, 4 Sep 2011 11:12:01 -0600 Charles R Harris wrote: > On Sun, Sep 4, 2011 at 10:55 AM, Charles R Harris >> wrote: > >> >> >> On Sun, Sep 4, 2011 at 10:09 AM, Nils Wagner >>> > wrote: >> >>> On Sun, 4 Sep 2011 09:29:19 -0600 >>> Charles R Harris wrote: >>> > On Sun, Sep 4, 2011 at 7:53 AM, Nils Wagner >>> >wrote: >>> > >>> >> Hi all, >>> >> >>> >> how can I solve the eigenproblem >>> >> >>> >> A x = \lambda B x >>> >> >>> >> where both matrices are rank deficient ? >>> >> >>> > >>> > I'd do eigh and transform the problem to something >>>like: >>> > >>> > U * A * U^t * x= \lambda D * x >>> > >>> > where D is diagonal. Note that the solutions may not >>>be >>> >unique and \lambda >>> > can be arbitrary, as you can see by studying >>> > >>> > A = B = array([[1, 0], [0, 0]]) >>> > >>> > Where there are solutions for arbitrary \lambda. >>> >Likewise, there may be no >>> > solutions under the requirement that x is non-zero: >>> > >>> > A = array([[1, 1], [1, 0]]), >>> > B = array([[1, 0], [0, 0]]) >>> > >>> > The usual case where B is positive definite >>>corresponds >>> >to finding extrema >>> > on a compact surface x^t * B *x = 1, but the surface >>>is >>> >no longer compact >>> > when B isn't positive definite. Note that these cases >>> >are all sensitive to >>> > roundoff error. >>> > >>> > Chuck >>> >>> Hi Chuck, >>> >>> I am only interested in the real and complex >>> eigensolutions. >>> The complex eigenvalues appear in pairs a_i \pm >>>\sqrt{-1} >>> b_i sind A and B are real. >>> How can I reject infinite eigenvalues ? >>> Both matrices, A and B, are indefinite. >>> >>> >> It depends on the particular problem. 
In general, the >>solution to the >> generalized eigenvalue problem starts by making a >>variable substitution that >> reduces B to the identity matrix, usually by using a >>Cholesky factorization, >> i.e., B = U^t U, y = U x. This can still be done in the >>(numerically) >> indefinite case but Cholesky won't be reliable and that >>is why I suggested >> eigh. Note that the problem we are trying to solve is >> finding the extrema of x^t A x subject to the constraint >>x^t B x = 1, >> \lambda is then a Lagrange multiplier. I assume A and B >>are both symmetric? >> Anyway, in terms of y, the problem then reduces to >>finding extrema of y^t >> U^t^{-1} A U^{-1} y subject to y^t D y = 1, where D is >>diagonal and has ones >> along part of the diagonal, zeros for the remainder. In >>the usual case, D is >> the identity. The trick is then to divide y into two >>parts, one for where D >> is one (u), another for the rest (v), so that y = [u v]. >>If you are lucky, >> the v can be solved in terms of u using the transformed >>A, and things will >> reduce to an eigenvalue problem for u. If the original A >>was symmetric, so >> will be the reduced problem. If you can't solve v as a >>function of u, then >> you can reduce things further, but it is possible that >>at some point there >> is no solution. >> >> I don't have practical experience with this sort of >>problem with indefinite >> B, so I can't tell you much more than that. I assume >>you've googled for >> relevant documents. >> >> It occurs to me that even if A and B aren't symmetric, >>that you can still > bring B to the desired form using the svd. > > Chuck Thank you very much. A and B are symmetric. Nils From nwagner at iam.uni-stuttgart.de Tue Sep 6 14:35:21 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Sep 2011 20:35:21 +0200 Subject: [SciPy-Dev] More scipy.test() failures Message-ID: Hi all, can someone reproduce the following failures Nils >>> scipy.__version__ '0.10.0.dev7179' ====================================================================== FAIL: test_mio_utils.test_squeeze_element(False,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) AssertionError ====================================================================== FAIL: test_arpack.test_complex_nonsymmetric_modes(False, , 'F', 2, 'LM', None, None, ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 196, in eval_evec assert_allclose(LHS, RHS, rtol=_rtol[typ], atol=_atol[typ], err_msg=err) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 1213, in assert_allclose verbose=verbose, header=header) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 677, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.000238419, atol=0.000238419 error for eigs:standard, typ=F, which=LM, sigma=None, mattype=csr_matrix, OPpart=None, mode=normal (mismatch 100.0%) x: array([[-0.15612537-0.34876961j, 0.54372370+1.58556008j], [ 0.07264681-0.29588005j, 
0.79194915+1.30910718j], [ 0.18219496+0.68975669j, 0.85219419+1.5947448j ],... y: array([[-0.15577540-0.34831554j, 0.54372382+1.58555984j], [ 0.07204140-0.29578942j, 0.79194933+1.30910718j], [ 0.18207264+0.69015396j, 0.85219407+1.59474432j],... ====================================================================== FAIL: Some very simple tests for chi2_contingency. ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_contingency.py", line 67, in test_chi2_contingency_trival assert_equal(p, 1.0) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 300, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: nan DESIRED: 1.0 ---------------------------------------------------------------------- Ran 5625 tests in 202.342s FAILED (KNOWNFAIL=12, SKIP=28, failures=3) From ralf.gommers at googlemail.com Tue Sep 6 15:14:28 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 6 Sep 2011 21:14:28 +0200 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: On Tue, Sep 6, 2011 at 8:35 PM, Nils Wagner wrote: > Hi all, > > can someone reproduce the following failures > > I can reproduce the first and third failures, but I haven't had time to look at them yet. If anyone wants to take a look, that would be helpful. The second one I don't see, but it looks like the test tolerance being slightly too high. Ralf > Nils > > >>> scipy.__version__ > '0.10.0.dev7179' > > ====================================================================== > FAIL: test_mio_utils.test_squeeze_element(False,) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > > "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", > line 183, in runTest > self.test(*self.arg) > AssertionError > > ====================================================================== > FAIL: test_arpack.test_complex_nonsymmetric_modes(False, > , 'F', 2, 'LM', None, None, 'scipy.sparse.csr.csr_matrix'>) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > > "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", > line 183, in runTest > self.test(*self.arg) > File > > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", > line 196, in eval_evec > assert_allclose(LHS, RHS, rtol=_rtol[typ], > atol=_atol[typ], err_msg=err) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 1213, in assert_allclose > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 677, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=0.000238419, atol=0.000238419 > error for eigs:standard, typ=F, which=LM, sigma=None, > mattype=csr_matrix, OPpart=None, mode=normal > (mismatch 100.0%) > x: array([[-0.15612537-0.34876961j, > 0.54372370+1.58556008j], > [ 0.07264681-0.29588005j, > 0.79194915+1.30910718j], > [ 0.18219496+0.68975669j, 0.85219419+1.5947448j > ],... 
> y: array([[-0.15577540-0.34831554j, > 0.54372382+1.58555984j], > [ 0.07204140-0.29578942j, > 0.79194933+1.30910718j], > [ 0.18207264+0.69015396j, > 0.85219407+1.59474432j],... > > ====================================================================== > FAIL: Some very simple tests for chi2_contingency. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > > "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", > line 183, in runTest > self.test(*self.arg) > File > > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_contingency.py", > line 67, in test_chi2_contingency_trival > assert_equal(p, 1.0) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 300, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: nan > DESIRED: 1.0 > > ---------------------------------------------------------------------- > Ran 5625 tests in 202.342s > > FAILED (KNOWNFAIL=12, SKIP=28, failures=3) > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Sep 6 15:40:57 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 6 Sep 2011 14:40:57 -0500 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: On Tue, Sep 6, 2011 at 2:14 PM, Ralf Gommers wrote: > > > On Tue, Sep 6, 2011 at 8:35 PM, Nils Wagner wrote: > >> Hi all, >> >> can someone reproduce the following failures >> >> I can reproduce the first and third failures, but I haven't had time to > look at them yet. If anyone wants to take a look, that would be helpful. > The third test (chi2_contingency) fails because special.chdtrc(0, 0.0) returns nan. It used to return 1. I haven't checked when the behavior changed. The chi2_contingency function can be changed to handle this degenerate case explicitly. Warren > The second one I don't see, but it looks like the test tolerance being > slightly too high. 
> > Ralf > > > > >> Nils >> >> >>> scipy.__version__ >> '0.10.0.dev7179' >> >> ====================================================================== >> FAIL: test_mio_utils.test_squeeze_element(False,) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >> line 183, in runTest >> self.test(*self.arg) >> AssertionError >> >> ====================================================================== >> FAIL: test_arpack.test_complex_nonsymmetric_modes(False, >> , 'F', 2, 'LM', None, None, > 'scipy.sparse.csr.csr_matrix'>) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >> line 183, in runTest >> self.test(*self.arg) >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", >> line 196, in eval_evec >> assert_allclose(LHS, RHS, rtol=_rtol[typ], >> atol=_atol[typ], err_msg=err) >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >> line 1213, in assert_allclose >> verbose=verbose, header=header) >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >> line 677, in assert_array_compare >> raise AssertionError(msg) >> AssertionError: >> Not equal to tolerance rtol=0.000238419, atol=0.000238419 >> error for eigs:standard, typ=F, which=LM, sigma=None, >> mattype=csr_matrix, OPpart=None, mode=normal >> (mismatch 100.0%) >> x: array([[-0.15612537-0.34876961j, >> 0.54372370+1.58556008j], >> [ 0.07264681-0.29588005j, >> 0.79194915+1.30910718j], >> [ 0.18219496+0.68975669j, 0.85219419+1.5947448j >> ],... >> y: array([[-0.15577540-0.34831554j, >> 0.54372382+1.58555984j], >> [ 0.07204140-0.29578942j, >> 0.79194933+1.30910718j], >> [ 0.18207264+0.69015396j, >> 0.85219407+1.59474432j],... >> >> ====================================================================== >> FAIL: Some very simple tests for chi2_contingency. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >> line 183, in runTest >> self.test(*self.arg) >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_contingency.py", >> line 67, in test_chi2_contingency_trival >> assert_equal(p, 1.0) >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >> line 300, in assert_equal >> raise AssertionError(msg) >> AssertionError: >> Items are not equal: >> ACTUAL: nan >> DESIRED: 1.0 >> >> ---------------------------------------------------------------------- >> Ran 5625 tests in 202.342s >> >> FAILED (KNOWNFAIL=12, SKIP=28, failures=3) >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Tue Sep 6 15:57:42 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 6 Sep 2011 19:57:42 +0000 (UTC) Subject: [SciPy-Dev] More scipy.test() failures References: Message-ID: On Tue, 06 Sep 2011 20:35:21 +0200, Nils Wagner wrote: > can someone reproduce the following failures > >>>> scipy.__version__ > '0.10.0.dev7179' BTW, what version of Scipy is this? (Do you have an old scipy/__svn_version__.py lying around?) Pauli From pav at iki.fi Tue Sep 6 16:19:10 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 6 Sep 2011 20:19:10 +0000 (UTC) Subject: [SciPy-Dev] More scipy.test() failures References: Message-ID: On Tue, 06 Sep 2011 21:14:28 +0200, Ralf Gommers wrote: [clip] > The second one I don't see, but it looks like the test tolerance being > slightly too high. One would need to bump the tolerance for 'F' by a factor of 5 to make the test pass. I'm not sure I'm fully comfortable doing that without looking, as I'm not sure whether it's supposed to be more accurate or not. It's single precision ok, but the error is ~ 15000*eps; or 5*sqrt(eps). Maybe it's nevertheless best to just bump the tolerance --- it's not probable that this is due to a real bug anywhere in the codebase. *** Another BTW: what's the platform, and which BLAS/LAPACK libraries are used? Pauli From nwagner at iam.uni-stuttgart.de Tue Sep 6 17:51:56 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Sep 2011 23:51:56 +0200 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: On Tue, 6 Sep 2011 20:19:10 +0000 (UTC) Pauli Virtanen wrote: > On Tue, 06 Sep 2011 21:14:28 +0200, Ralf Gommers wrote: > [clip] >> The second one I don't see, but it looks like the test >>tolerance being >> slightly too high. > > One would need to bump the tolerance for 'F' by a factor >of 5 to make > the test pass. > > I'm not sure I'm fully comfortable doing that without >looking, as I'm > not sure whether it's supposed to be more accurate or >not. It's single > precision ok, but the error is ~ 15000*eps; or >5*sqrt(eps). > > Maybe it's nevertheless best to just bump the tolerance >--- it's not > probable that this is due to a real bug anywhere in the >codebase. > > *** > > Another BTW: what's the platform, and which BLAS/LAPACK >libraries > are used? 
> >>> show_config() atlas_threads_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = f77 include_dirs = ['/home/nwagner/src/ATLAS3.8.2/include'] blas_opt_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = c include_dirs = ['/home/nwagner/src/ATLAS3.8.2/include'] atlas_blas_threads_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = c include_dirs = ['/home/nwagner/src/ATLAS3.8.2/include'] lapack_opt_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = f77 include_dirs = ['/home/nwagner/src/ATLAS3.8.2/include'] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE Linux linux-mogv 2.6.34.10-0.2-desktop #1 SMP PREEMPT 2011-07-20 18:48:56 +0200 x86_64 x86_64 x86_64 GNU/Linux Nils From forkandwait at gmail.com Tue Sep 6 18:52:12 2011 From: forkandwait at gmail.com (fork) Date: Tue, 6 Sep 2011 22:52:12 +0000 (UTC) Subject: [SciPy-Dev] =?utf-8?q?import_scipy=2Estats_hangs_in_mod=5Fpython_?= =?utf-8?q?application?= References: Message-ID: gmail.com> writes: > I have no idea about the actual problem, but scipy.stats requires or > loads many of the other scipy subpackages. Since stats is relatively > light in compiled extension modules, maybe trying to load first the > other subpackages might help to narrow it down. > import scipy > import scipy.special > sparse, optimize, ... import scipy works ok, but hangs in import scipy.stats.distributions. I have not started putting in "raise" statements to see where the problem might be. It only happens in my web-application, so it probably isn't a problem in scipy, unless there is something in the handling of imports that cycles forever which stops when called at command line... From josef.pktd at gmail.com Tue Sep 6 19:28:00 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 6 Sep 2011 19:28:00 -0400 Subject: [SciPy-Dev] import scipy.stats hangs in mod_python application In-Reply-To: References: Message-ID: On Tue, Sep 6, 2011 at 6:52 PM, fork wrote: > ? gmail.com> writes: > > >> I have no idea about the actual problem, but scipy.stats requires or >> loads many of the other scipy subpackages. Since stats is relatively >> light in compiled extension modules, maybe trying to load first the >> other subpackages might help to narrow it down. >> import scipy >> import scipy.special >> sparse, optimize, ... > > import scipy works ok, but hangs in import scipy.stats.distributions. "import scipy" is pretty empty and doesn't load any subpackages. If the other subpackages haven't been loaded explicitly before, then "import scipy.stats" imports also the other packages for the first time. I still cannot think of anything that would be specific to scipy.stats or scipy.stats.distributions, and I think Pauli and David have moved some functions during the python 3 porting to avoid circular imports. Josef > > I have not started putting in "raise" statements to see where the problem might > be. 
?It only happens in my web-application, so it probably isn't a problem in > scipy, unless there is something in the handling of imports that cycles forever > which stops when called at command line... > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From warren.weckesser at enthought.com Wed Sep 7 01:04:10 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 7 Sep 2011 00:04:10 -0500 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: On Tue, Sep 6, 2011 at 2:40 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Tue, Sep 6, 2011 at 2:14 PM, Ralf Gommers wrote: > >> >> >> On Tue, Sep 6, 2011 at 8:35 PM, Nils Wagner > > wrote: >> >>> Hi all, >>> >>> can someone reproduce the following failures >>> >>> I can reproduce the first and third failures, but I haven't had time to >> look at them yet. If anyone wants to take a look, that would be helpful. >> > > > The third test (chi2_contingency) fails because special.chdtrc(0, 0.0) > returns nan. It used to return 1. I haven't checked when the behavior > changed. > For the record: the behavior changed here: https://github.com/scipy/scipy/commit/b2e875add019aa01b4d7ea0e10e2974b27a992fd but that change didn't get merged into master until https://github.com/scipy/scipy/commit/936e795df9de65602c65cc3b1b8d87de42d6fe06 Warren > The chi2_contingency function can be changed to handle this degenerate case > explicitly. > > Warren > > > >> The second one I don't see, but it looks like the test tolerance being >> slightly too high. >> >> Ralf >> >> >> >> >>> Nils >>> >>> >>> scipy.__version__ >>> '0.10.0.dev7179' >>> >>> ====================================================================== >>> FAIL: test_mio_utils.test_squeeze_element(False,) >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >>> line 183, in runTest >>> self.test(*self.arg) >>> AssertionError >>> >>> ====================================================================== >>> FAIL: test_arpack.test_complex_nonsymmetric_modes(False, >>> , 'F', 2, 'LM', None, None, >> 'scipy.sparse.csr.csr_matrix'>) >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >>> line 183, in runTest >>> self.test(*self.arg) >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", >>> line 196, in eval_evec >>> assert_allclose(LHS, RHS, rtol=_rtol[typ], >>> atol=_atol[typ], err_msg=err) >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >>> line 1213, in assert_allclose >>> verbose=verbose, header=header) >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >>> line 677, in assert_array_compare >>> raise AssertionError(msg) >>> AssertionError: >>> Not equal to tolerance rtol=0.000238419, atol=0.000238419 >>> error for eigs:standard, typ=F, which=LM, sigma=None, >>> mattype=csr_matrix, OPpart=None, mode=normal >>> (mismatch 100.0%) >>> x: array([[-0.15612537-0.34876961j, >>> 0.54372370+1.58556008j], >>> [ 0.07264681-0.29588005j, >>> 0.79194915+1.30910718j], 
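The degenerate call in question, as a quick interactive check (it used to give 1.0):

>>> from scipy import special
>>> special.chdtrc(0, 0.0)
nan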
>>> [ 0.18219496+0.68975669j, 0.85219419+1.5947448j >>> ],... >>> y: array([[-0.15577540-0.34831554j, >>> 0.54372382+1.58555984j], >>> [ 0.07204140-0.29578942j, >>> 0.79194933+1.30910718j], >>> [ 0.18207264+0.69015396j, >>> 0.85219407+1.59474432j],... >>> >>> ====================================================================== >>> FAIL: Some very simple tests for chi2_contingency. >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", >>> line 183, in runTest >>> self.test(*self.arg) >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_contingency.py", >>> line 67, in test_chi2_contingency_trival >>> assert_equal(p, 1.0) >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", >>> line 300, in assert_equal >>> raise AssertionError(msg) >>> AssertionError: >>> Items are not equal: >>> ACTUAL: nan >>> DESIRED: 1.0 >>> >>> ---------------------------------------------------------------------- >>> Ran 5625 tests in 202.342s >>> >>> FAILED (KNOWNFAIL=12, SKIP=28, failures=3) >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Sep 7 04:50:55 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 7 Sep 2011 08:50:55 +0000 (UTC) Subject: [SciPy-Dev] import scipy.stats hangs in mod_python application References: Message-ID: Tue, 06 Sep 2011 22:52:12 +0000, fork wrote: > gmail.com> writes: >> I have no idea about the actual problem, but scipy.stats requires or >> loads many of the other scipy subpackages. Since stats is relatively >> light in compiled extension modules, maybe trying to load first the >> other subpackages might help to narrow it down. import scipy >> import scipy.special >> sparse, optimize, ... > > import scipy works ok, but hangs in import scipy.stats.distributions. How about import scipy.constants # import scipy.cluster # import scipy.fftpack ... I.e., try to import each subpackage, one at a time, to see which ones hang. Also try import scipy.stats.vonmises_cython -- Pauli Virtanen From pav at iki.fi Wed Sep 7 05:07:03 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 7 Sep 2011 09:07:03 +0000 (UTC) Subject: [SciPy-Dev] More scipy.test() failures References: Message-ID: Tue, 06 Sep 2011 23:51:56 +0200, Nils Wagner wrote: [clip] > ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = f77 > include_dirs = [clip] > Linux linux-mogv 2.6.34.10-0.2-desktop #1 SMP PREEMPT 2011-07-20 > 18:48:56 +0200 x86_64 x86_64 x86_64 GNU/Linux This seems to be a somewhat standard setup, then. The difference to the platforms I have at hand is probably the self-compiled ATLAS. Maybe it has a worse single-precision accuracy, as compared to ATLAS libraries built by Linux distro providers? -- Pauli Virtanen From joscha.schmiedt at googlemail.com Fri Sep 9 04:21:14 2011 From: joscha.schmiedt at googlemail.com (Joscha Schmiedt) Date: Fri, 9 Sep 2011 09:21:14 +0100 Subject: [SciPy-Dev] Official alias for logical_not? 
Message-ID: Hi, Coming from Matlab I really really enjoy the object-oriented power and flexibility of python. There is only one tiny thing, which puts me off regularly: logical indexing in python. Although there are the functions logical_not, logical_and and logical_xor, they make quite lenghty expressions compared to ~, & and xor in Matlab. Therefore, I regularly find myself creating aliases as n_ for them, which is basically fine, but I could imagine other people doing so. So, are there any plans for officially creating shorthand aliases for the logical_ functions? Best, Joscha -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Sep 9 05:02:07 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 9 Sep 2011 09:02:07 +0000 (UTC) Subject: [SciPy-Dev] Official alias for logical_not? References: Message-ID: Fri, 09 Sep 2011 09:21:14 +0100, Joscha Schmiedt wrote: [clip] > expressions compared to ~, & and xor in Matlab. Therefore, I regularly > find myself creating aliases as n_ for them, which is basically fine, > but I could imagine other people doing so. > > So, are there any plans for officially creating shorthand aliases for > the logical_ functions? For boolean arrays, ~, &, ^ et al. are the logical operations. From joscha.schmiedt at googlemail.com Fri Sep 9 05:13:35 2011 From: joscha.schmiedt at googlemail.com (Joscha Schmiedt) Date: Fri, 9 Sep 2011 10:13:35 +0100 Subject: [SciPy-Dev] Official alias for logical_not? In-Reply-To: References: Message-ID: Hm, I see I should have read the documentation more carefully and noticed: "If you know you have boolean arguments, you can get away with using Numpy's bitwise operators, but be careful with parentheses, like this: z = (x > 1) & (x < 2). The absence of Numpy operator forms of logical_and and logical_or is an unfortunate consequence of Python's design." on http://www.scipy.org/NumPy_for_Matlab_Users#logicalNotes So, the solution to my problems were parenthese. Thank you for the hint and not saying RTFM :-) 2011/9/9 Pauli Virtanen > Fri, 09 Sep 2011 09:21:14 +0100, Joscha Schmiedt wrote: > [clip] > > expressions compared to ~, & and xor in Matlab. Therefore, I regularly > > find myself creating aliases as n_ for them, which is basically fine, > > but I could imagine other people doing so. > > > > So, are there any plans for officially creating shorthand aliases for > > the logical_ functions? > > For boolean arrays, ~, &, ^ et al. are the logical operations. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scipy at samueljohn.de Fri Sep 9 08:23:40 2011 From: scipy at samueljohn.de (Samuel John) Date: Fri, 9 Sep 2011 14:23:40 +0200 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: <2EF77545-09F9-44E3-96F3-5A1D8D1A4475@samueljohn.de> Hi all, On 07.09.2011, at 07:04, Warren Weckesser wrote: > I can reproduce the first and third failures, but I haven't had time to look at them yet. If anyone wants to take a look, that would be helpful. > > The third test (chi2_contingency) fails because special.chdtrc(0, 0.0) returns nan. It used to return 1. I haven't checked when the behavior changed. 
> > > For the record: the behavior changed here: > > https://github.com/scipy/scipy/commit/b2e875add019aa01b4d7ea0e10e2974b27a992fd > > but that change didn't get merged into master until > > https://github.com/scipy/scipy/commit/936e795df9de65602c65cc3b1b8d87de42d6fe06 concerning the chi2_contingency, I've created a ticket: http://projects.scipy.org/scipy/ticket/1511 bests, Samuel From scipy at samueljohn.de Fri Sep 9 09:15:28 2011 From: scipy at samueljohn.de (Samuel John) Date: Fri, 9 Sep 2011 15:15:28 +0200 Subject: [SciPy-Dev] installation of scipy on Mac OS X 10.7 In-Reply-To: References: Message-ID: Hey Ben, hey scipy community! great description. Helped a lot. I wonder why "pip install scipy" does not do this. Who feels responsible for the pip install? What is the suggested (default) way of installing scipy? There is (not yet) a homebrew script. Do we want one? Opinions? cheers, Samuel On 09.08.2011, at 18:09, Ben Willmore wrote: > I have spent a day attempting to get a version of scipy that passes unit tests on Mac OS X 10.7. I hope the results of my investigations (below) are useful to others trying to do the same thing. > > Ben > > > == Introduction > > I have attempted to obtain a version of scipy on Mac OS X 10.7 that passes > unit tests, both by downloading it, or by compiling against the system's > python, using the three C compilers available, and various compile flags. > > > == Conclusions > > 1. Both Enthought Python 7.1 and the scipy superpack crash during scipy unit > tests. > > 2. Compilation with the default C compiler (llvm-gcc-4.2) also leads to a > crash during unit tests. > > 3. Compilation using either of the other C compilers (gcc-4.2 or clang) > allows scipy to complete unit tests, only if the fortran flag --ff2c is used. > In both cases, scipy passes almost all tests -- producing errors on one, and > failing two more. With gcc-4.2, numpy passes all tests. With clang, numpy > fails one test. > > 4. The following recipe was most successful for me: > > * Remove everything in /usr/local and /Library/Python/2.7/site-packages > (BEWARE! These directories may well contain stuff you care about) > > * Install gfortran from here: > http://r.research.att.com/gfortran-lion-5666-3.pkg > > mkdir ~/tmp > cd ~/tmp > git clone git://github.com/numpy/numpy.git > cd numpy > export CC=gcc-4.2 > export CXX=g++-4.2 > export FFLAGS=-ff2c > python setupegg.py build --fcompiler=gfortran > sudo python setupegg.py install > > cd ~/tmp > git clone git://github.com/scipy/scipy.git > cd scipy > export CC=gcc-4.2 > export CXX=g++-4.2 > export FFLAGS=-ff2c > python setupegg.py build --fcompiler=gfortran > sudo python setupegg.py install > > (repeat for matplotlib, ipython ...) 
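For reference, a quick sanity check after such a build — confirm the versions, see which BLAS/LAPACK were picked up, and run both test suites (illustrative sketch only):

import numpy, scipy

print(numpy.__version__)
print(scipy.__version__)
numpy.show_config()
scipy.show_config()

numpy.test()
scipy.test()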
> > > == Details of test results > > = Enthought python 7.1 > numpy.test(): OK > scipy.test(): Crash (Segmentation fault) > > > = Scipy superpack > numpy.test(): 3 failures: > FAIL: test_umath_complex.TestCsqrt.test_special_values(, 1, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, -1, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, 0.0, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, -0.0, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, inf, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, -inf, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, nan, > inf, inf, inf) > FAIL: test_umath_complex.TestCsqrt.test_special_values(, -inf, > 1, 0.0, inf) > scipy.test(): Crash (Abort trap) > > > = Compilation from scratch (see above for recipe) > > = llvm-gcc-4.2 > = (no compiler flags) > numpy.test(): OK > scipy.test(): Crash (Segmentation fault: 11) > > > = gcc-4.2 > = CC=gcc-4.2 CXX=g++-4.2 > numpy.test(): OK > scipy.test(): Hang (during ARPACK tests) > > > = clang > = CC=clang CXX=clang++ > numpy.test(): 1 failure: > FAIL: Test basic arithmetic function errors > AssertionError: Type did not raise fpe error ''. > scipy.test(): Hang (during ARPACK tests) > > > = llvm-gcc-4.2 and -ff2c > = FFLAGS=-ff2c > numpy.test(): OK > scipy.test(): Crash (Segmentation fault: 11) > > > = gcc-4.2 and -ff2c > = CC=gcc-4.2 CXX=g++-4.2 FFLAGS=-ff2c > numpy.test(): OK > scipy.test(): 1 error, 2 failures: > ERROR: test_arpack.test_hermitian_modes(True, , 'F', 2, 'SA', > None, None, ) > FAIL: test_iterative.test_convergence(, > ) > FAIL: test_expon (test_morestats.TestAnderson) > > > = clang and -ff2c > = CC=clang CXX=clang++ FFLAGS=-ff2c > numpy.test(): 1 failure: > FAIL: Test basic arithmetic function errors > AssertionError: Type did not raise fpe error ''. > scipy.test(): 1 error, 2 failures: > ERROR: test_arpack.test_hermitian_modes(True, , 'F', 2, 'SA', > None, None, ) > FAIL: test_iterative.test_convergence(, > ) > FAIL: test_expon (test_morestats.TestAnderson) > > > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From cournape at gmail.com Fri Sep 9 10:05:15 2011 From: cournape at gmail.com (David Cournapeau) Date: Fri, 9 Sep 2011 10:05:15 -0400 Subject: [SciPy-Dev] installation of scipy on Mac OS X 10.7 In-Reply-To: References: Message-ID: On Fri, Sep 9, 2011 at 9:15 AM, Samuel John wrote: > Hey Ben, hey scipy community! > > great description. Helped a lot. > I wonder why "pip install scipy" does not do this. The problem is that gcc-llvm seems to have changed the ABI. pip cannot help at all here (pip is a simple installer: it just knows where to download a tarball, read its dependencies, and install them in the correct order for simple cases). cheers, David From warren.weckesser at enthought.com Fri Sep 9 14:58:44 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 9 Sep 2011 13:58:44 -0500 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: <2EF77545-09F9-44E3-96F3-5A1D8D1A4475@samueljohn.de> References: <2EF77545-09F9-44E3-96F3-5A1D8D1A4475@samueljohn.de> Message-ID: On Fri, Sep 9, 2011 at 7:23 AM, Samuel John wrote: > > Hi all, > > > > On 07.09.2011, at 07:04, Warren Weckesser wrote: > > I can reproduce the first and third failures, but I haven't had time to > look at them yet. 
If anyone wants to take a look, that would be helpful. > > > > The third test (chi2_contingency) fails because special.chdtrc(0, 0.0) > returns nan. It used to return 1. I haven't checked when the behavior > changed. > > > > > > For the record: the behavior changed here: > > > > > https://github.com/scipy/scipy/commit/b2e875add019aa01b4d7ea0e10e2974b27a992fd > > > > but that change didn't get merged into master until > > > > > https://github.com/scipy/scipy/commit/936e795df9de65602c65cc3b1b8d87de42d6fe06 > > > concerning the chi2_contingency, I've created a ticket: > http://projects.scipy.org/scipy/ticket/1511 > > Thanks. Fixed in https://github.com/scipy/scipy/commit/e90c7361878117171296c6e91dc63c8141b5ecc1 Warren > > bests, > Samuel > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adamhadani at gmail.com Fri Sep 9 18:23:10 2011 From: adamhadani at gmail.com (Adam Ever-Hadani) Date: Fri, 9 Sep 2011 15:23:10 -0700 Subject: [SciPy-Dev] add_yaxis compatibility with recent matplotlib Message-ID: In scikits/timeseries/lib/plotlib.py , around line 1185 ( http://projects.scipy.org/scikits/browser/trunk/timeseries/scikits/timeseries/lib/plotlib.py?rev=2213#L1185), the framework relies on "private" members of the matplotlib Axis / Figure objects (e.g _rows, _cols, _num). These apparently no longer exist and in any case we're better off using a public method specifically designed for accessing these values, namely the get_geometry function. So the change to the line seems to fix the issue for me: . . fsp_alt_args = fsp.get_geometry() . . This might break backward compatibility so perhaps in a nested try catch could be achieved backward compliance.. Hope this makes sense, Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Sep 10 15:13:51 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 10 Sep 2011 21:13:51 +0200 Subject: [SciPy-Dev] Target date for 0.10 release In-Reply-To: References: Message-ID: 2011/8/30 St?fan van der Walt > On Mon, Aug 29, 2011 at 7:56 PM, Ralf Gommers > wrote: > > Here is a proposed schedule: > > Sun 11 Sep: beta 1 > > Sun 25 Sep: rc 1 > > Sun 02 Oct: rc 2 (if needed) > > Sun 09 Oct: final release > > > > Does that work for everyone? > > I'll be out of touch for the next 10 days, but if I can help with any > patch reviews / reworking after that, let me know. > > The schedule looks good! > > Hi all, just a reminder that 0.10b1 is planned for tomorrow. For those of you that have some last minute features to add or pull requests to merge, please do so by 5pm GMT tomorrow. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Sep 10 16:15:59 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 10 Sep 2011 22:15:59 +0200 Subject: [SciPy-Dev] installation of scipy on Mac OS X 10.7 In-Reply-To: References: Message-ID: On Fri, Sep 9, 2011 at 3:15 PM, Samuel John wrote: > Hey Ben, hey scipy community! > > great description. Helped a lot. > I wonder why "pip install scipy" does not do this. > Who feels responsible for the pip install? > What is the suggested (default) way of installing scipy? > Binary installers on Windows and OS X, or a package like EPD or Python(x,y). On Linux just get it though the package manager. 
> There is (not yet) a homebrew script. Do we want one? > > That may be useful. I'm not a user of homebrew myself, but have only heard good things about it so far. So please go for it! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sun Sep 11 11:12:27 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 11 Sep 2011 17:12:27 +0200 Subject: [SciPy-Dev] [ANN] ESCO2012 and PyHPC2011: Python in science in major computational science conference Message-ID: <20110911151227.GA23609@phare.normalesup.org> ESCO 2012 - European Seminar on Coupled Problems ================================================= ESCO2012 http://esco2012.femhub.com/ is the 3rd event in a series of interdisciplineary meetings dedicated to computational science challenges in multi-physics and PDEs. I was invited as ESCO last year. It was an aboslute pleasure, because it is a small conference that is very focused on discussions. I learned a lot and could sit down with people who code top notch PDE libraries such as FEniCS and have technical discussions. Besides, it is hosted in the historical brewery where the Pilsner was invented. Plenty of great beer. Application areas ------------------ Theoretical results as well as applications are welcome. Application areas include, but are not limited to: Computational electromagnetics, Civil engineering, Nuclear engineering, Mechanical engineering, Computational fluid dynamics, Computational geophysics, Geomechanics and rock mechanics, Computational hydrology, Subsurface modeling, Biomechanics, Computational chemistry, Climate and weather modeling, Wave propagation, Acoustics, Stochastic differential equations, and Uncertainty quantification. Minisymposia * Multiphysics and Multiscale Problems in Civil Engineering * Modern Numerical Methods for ODE * Porous Media Hydrodynamics * Nuclear Fuel Recycling Simulations * Adaptive Methods for Eigenproblems * Discontinuous Galerkin Methods for Electromagnetics * Undergraduate Projects in Technical Computing Software afternoon ------------------- Important part of each ESCO conference is a software afternoon featuring software projects by participants. Presented can be any computational software that has reached certain level of maturity, i.e., it is used outside of the author's institution, and it has a web page and a user documentation. Proceedings ----------- For each ESCO we strive to reserve a special issue of an international journal with impact factor. Proceedings of ESCO 2008 appeared in Math. Comput. Simul., proceedings of ESCO 2010 in CiCP and Appl. Math. Comput. Proceedings of ESCO 2012 will appear in Computing. Important Dates * December 15, 2011: Abstract submission deadline. * December 15, 2011: Minisymposia proposals. * January 15, 2012: Notification of acceptance. PyHPC: Python for High performance computing -------------------------------------------- If you are doing super computing, SC11, ( http://sc11.supercomputing.org/) the Super Computing conference is the reference conference. This year there will a workshop on high performance computing with Python: PyHPC (http://www.dlr.de/sc/desktopdefault.aspx/tabid-1183/1638_read-31733/). At the scipy conference, I was having a discussion with some of the attendees on how people often still do process management and I/O with Fortran in the big computing environment. This is counter productive. 
However, has success stories of supercomputing folks using high-level languages are not advertized, this is bound to stay. Come and tell us how you use Python for high performance computing! Topics * Python-based scientific applications and libraries * High performance computing * Parallel Python-based programming languages * Scientific visualization * Scientific computing education * Python performance and language issues * Problem solving environments with Python * Performance analysis tools for Python application Papers We invite you to submit a paper of up to 10 pages via the submission site. Authors are encouraged to use IEEE two column format. Important Dates * Full paper submission: September 19, 2011 * Notification of acceptance: October 7, 2011 * Camera-ready papers: October 31, 2011 From ralf.gommers at googlemail.com Sun Sep 11 11:42:09 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 11 Sep 2011 17:42:09 +0200 Subject: [SciPy-Dev] [Numpy-discussion] life expectancy of scipy.stats nan statistics In-Reply-To: References: Message-ID: On Wed, Aug 24, 2011 at 1:04 PM, Ralf Gommers wrote: > > > On Sun, Aug 21, 2011 at 3:22 AM, Bruce Southey wrote: > >> On Fri, Aug 19, 2011 at 10:19 PM, wrote: >> > I'm just looking at http://projects.scipy.org/scipy/ticket/1200 >> Cringe, I wrote that poor code... >> > >> > I agree with Ralf that the bias keyword should be changed to ddof as >> > in the numpy functions. For functions in scipy.stats, and statistics >> > in general, I prefer the usual axis=0 default. >> >> I agree for the keyword but not the axis since a user should expect >> the same default behavior as in numpy for the 'same' function. >> >> In this case I agree with Bruce, because then it'll be easy to switch to a > future numpy nanstd implementation. > >> >> > >> > However, I think these functions, like scipy.stats.nanstd, should be >> > replaced by corresponding numpy functions, which might happen >> > relatively soon. But how soon? >> >> Anyhow, I like Mark's NA stuff for the little that I have done with it >> and there still is the masked stuff. But it is still going to be a >> while for it to be mainstream yet a really good numpy 2.0 feature. If >> the 'skipna' could be more general then these separate na handling >> functions would be unnecessary. >> >> > >> > Is it worth deprecating bias in scipy 0.10, and then deprecate again >> > for removal in 0.11 or 0.12? >> > >> > Josef >> >> +1 >> >> OK, I'll do this then. > > Or not. I've changed my mind after trying to do make this change. It's not possible to do this in a reasonable way without breaking people's code. Plus they'd have to explicitly test for the scipy version if they want to support 0.9 and 0.10 from their code. Let's just wait till this function appears in numpy, and then completely deprecate the scipy function. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Sun Sep 11 13:32:20 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 11 Sep 2011 19:32:20 +0200 Subject: [SciPy-Dev] More scipy.test() failures In-Reply-To: References: Message-ID: On Wed, Sep 7, 2011 at 11:07 AM, Pauli Virtanen wrote: > Tue, 06 Sep 2011 23:51:56 +0200, Nils Wagner wrote: > [clip] > > ['/home/nwagner/src/ATLAS3.8.2/mybuild/lib'] > > define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')] language = f77 > > include_dirs = > [clip] > > Linux linux-mogv 2.6.34.10-0.2-desktop #1 SMP PREEMPT 2011-07-20 > > 18:48:56 +0200 x86_64 x86_64 x86_64 GNU/Linux > > This seems to be a somewhat standard setup, then. The difference > to the platforms I have at hand is probably the self-compiled ATLAS. > Maybe it has a worse single-precision accuracy, as compared to > ATLAS libraries built by Linux distro providers? > > I opened http://projects.scipy.org/scipy/ticket/1515 to keep track of this. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Mon Sep 12 01:13:00 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sun, 11 Sep 2011 22:13:00 -0700 Subject: [SciPy-Dev] Two quick fixes for scipy 0.10 and trunk Message-ID: <4E6D94DC.5090909@uci.edu> Hello, please consider these two trivial changes for scipy 0.10 and master: 1) Initialize variable *d in _logit.c.src: diff --git a/scipy/special/_logit.c.src b/scipy/special/_logit.c.src index 7b830f7..237ca83 100644 --- a/scipy/special/_logit.c.src +++ b/scipy/special/_logit.c.src @@ -107,7 +107,7 @@ static PyModuleDef moduledef = { NULL }; -PyObject * +PyMODINIT_FUNC PyInit__logit() { PyObject *m, *f, *d; @@ -119,6 +119,8 @@ PyInit__logit() import_array(); import_umath(); + d = PyModule_GetDict(m); + f = PyUFunc_FromFuncAndData(logit_funcs,data, types, 3, 1, 1, PyUFunc_None, "logit",NULL , 0); PyDict_SetItemString(d, "logit", f); 2) Fix syntax error in fblas_l1.pyf.src: diff --git a/scipy/linalg/fblas_l1.pyf.src b/scipy/linalg/fblas_l1.pyf.src index 1543a20..664bdd8 100644 --- a/scipy/linalg/fblas_l1.pyf.src +++ b/scipy/linalg/fblas_l1.pyf.src @@ -347,7 +347,7 @@ function iamax(n,x,offx,incx) result(k) ! This is to avoid Fortran wrappers. integer iamax,k - fortranname F_FUNC(iamax,I>AMAX) + fortranname F_FUNC(iamax,IAMAX) intent(c) iamax dimension(*), intent(in) :: x integer optional, intent(in), check(incx>0||incx<0) :: incx = 1 Thanks, Christoph -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy-0.10b1.diff URL: From ralf.gommers at googlemail.com Mon Sep 12 17:36:11 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 12 Sep 2011 23:36:11 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 Message-ID: Hi, I am pleased to announce the availability of the first beta release of SciPy0.10.0. For this release over a 100 tickets and pull requests have been closed, and many new features have been added. Some of the highlights are: - support for Bento as a build system for scipy - generalized and shift-invert eigenvalue problems in sparse.linalg - addition of discrete-time linear systems in the signal module Sources and binaries can be found at https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, release notes are copied below. Binaries for Python 2.x are available, on Python 3 there are a few known problems that should be solved first. When they are, a second beta will follow. Please try this release and report problems on the mailing list. 
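For anyone who wants to give the beta a quick check, the bundled test suite is the easiest smoke test. A minimal sketch (it assumes nose is installed; the 'full' label simply includes the slower tests as well):

import scipy
print(scipy.__version__)   # should report 0.10.0b1
scipy.test()               # default fast test run
scipy.test('full')         # longer run, including slow tests

The version/platform lines printed at the top of the run are useful to include when reporting a failure.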
Cheers, Ralf ========================== SciPy 0.10.0 Release Notes ========================== .. note:: Scipy 0.10.0 is not released yet! .. contents:: SciPy 0.10.0 is the culmination of XXX months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.10.x branch, and on adding new features on the development trunk. This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater. New features ============ Bento: new optional build system -------------------------------- Scipy can now be built with `Bento `_. Bento has some nice features like parallel builds and partial rebuilds, that are not possible with the default build system (distutils). For usage instructions see BENTO_BUILD.txt in the scipy top-level directory. Currently Scipy has three build systems, distutils, numscons and bento. Numscons is deprecated and is planned and will likely be removed in the next release. Generalized and shift-invert eigenvalue problems in ``scipy.sparse.linalg`` --------------------------------------------------------------------------- The sparse eigenvalue problem solver functions ``scipy.sparse.eigs/eigh`` now support generalized eigenvalue problems, and all shift-invert modes available in ARPACK. Discrete-Time Linear Systems (``scipy.signal``) ----------------------------------------------- Support for simulating discrete-time linear systems, including ``scipy.signal.dlsim``, ``scipy.signal.dimpulse``, and ``scipy.signal.dstep``, has been added to SciPy. Conversion of linear systems from continuous-time to discrete-time representations is also present via the ``scipy.signal.cont2discrete`` function. Enhancements to ``scipy.signal`` -------------------------------- A Lomb-Scargle periodogram can now be computed with the new function ``scipy.signal.lombscargle``. The forward-backward filter function ``scipy.signal.filtfilt`` can now filter the data in a given axis of an n-dimensional numpy array. (Previously it only handled a 1-dimensional array.) Options have been added to allow more control over how the data is extended before filtering. FIR filter design with ``scipy.signal.firwin2`` now has options to create filters of type III (zero at zero and Nyquist frequencies) and IV (zero at zero frequency). Additional decomposition options (``scipy.linalg``) --------------------------------------------------- A sort keyword has been added to the Schur decomposition routine (``scipy.linalg.schur``) to allow the sorting of eigenvalues in the resultant Schur form. Additional special matrices (``scipy.linalg``) ---------------------------------------------- The functions ``hilbert`` and ``invhilbert`` were added to ``scipy.linalg``. Enhancements to ``scipy.stats`` ------------------------------- * The *one-sided form* of Fisher's exact test is now also implemented in ``stats.fisher_exact``. * The function ``stats.chi2_contingency`` for computing the chi-square test of independence of factors in a contingency table has been added, along with the related utility functions ``stats.contingency.margins`` and ``stats.contingency.expected_freq``. 
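A minimal usage sketch for the chi-square test of independence described above (the observed counts are made up for illustration; see the docstrings for the authoritative signatures)::

    import numpy as np
    from scipy import stats

    # observed counts in a 2x3 contingency table
    obs = np.array([[10, 20, 30],
                    [15, 25, 10]])

    chi2, p, dof, expected = stats.chi2_contingency(obs)
    # 'expected' holds the expected frequencies under independence,
    # the same table returned by stats.contingency.expected_freq(obs)

This complements ``stats.fisher_exact``, which for 2x2 tables now also offers the one-sided form mentioned above.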
Basic support for Harwell-Boeing file format for sparse matrices ---------------------------------------------------------------- Both read and write are support through a simple function-based API, as well as a more complete API to control number format. The functions may be found in scipy.sparse.io. The following features are supported: * Read and write sparse matrices in the CSC format * Only real, symmetric, assembled matrix are supported (RUA format) Deprecated features =================== ``scipy.maxentropy`` -------------------- The maxentropy module is unmaintained, rarely used and has not been functioning well for several releases. Therefore it has been deprecated for this release, and will be removed for scipy 0.11. Logistic regression in scikits.learn is a good alternative for this functionality. The ``scipy.maxentropy.logsumexp`` function has been moved to ``scipy.misc``. ``scipy.lib.blas`` ------------------ There are similar BLAS wrappers in ``scipy.linalg`` and ``scipy.lib``. These have now been consolidated as ``scipy.linalg.blas``, and ``scipy.lib.blas`` is deprecated. Numscons build system --------------------- The numscons build system is being replaced by Bento, and will be removed in one of the next scipy releases. Removed features ================ The deprecated name `invnorm` was removed from ``scipy.stats.distributions``, this distribution is available as `invgauss`. The following deprecated nonlinear solvers from ``scipy.optimize`` have been removed:: - ``broyden_modified`` (bad performance) - ``broyden1_modified`` (bad performance) - ``broyden_generalized`` (equivalent to ``anderson``) - ``anderson2`` (equivalent to ``anderson``) - ``broyden3`` (obsoleted by new limited-memory broyden methods) - ``vackar`` (renamed to ``diagbroyden``) Other changes ============= ``scipy.constants`` has been updated with the CODATA 2010 constants. ``__all__`` dicts have been added to all modules, which has cleaned up the namespaces (particularly useful for interactive work). An API section has been added to the documentation, giving recommended import guidelines and specifying which submodules are public and which aren't. Checksums ========= f30a85149ebc3d023fce5e012cc7a28a release/installers/scipy-0.10.0b1-py2.7-python.org-macosx10.6.dmg 5c4a74cca13e9225efd1840d99af9ee8 release/installers/scipy-0.10.0b1-win32-superpack-python2.5.exe b24dd33bfeb07058038ba85d2b1ddfd3 release/installers/scipy-0.10.0b1-win32-superpack-python2.6.exe c5222bb7b7fcc28cec4730e3caebf43f release/installers/scipy-0.10.0b1-win32-superpack-python2.7.exe c8c6f3870f9d0ef571861da63b4b374b release/installers/scipy-0.10.0b1.tar.gz 1ce4f01acfccb68dcd6c387eb08a8a88 release/installers/scipy-0.10.0b1.zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Mon Sep 12 18:21:35 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 12 Sep 2011 15:21:35 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: Message-ID: <4E6E85EF.4050005@uci.edu> On 9/12/2011 2:36 PM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first beta release of > SciPy 0.10.0. For this release over a 100 tickets and pull requests have > been closed, and many new features have been added. 
Some of the > highlights are: > > - support for Bento as a build system for scipy > - generalized and shift-invert eigenvalue problems in sparse.linalg > - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, release > notes are copied below. > Binaries for Python 2.x are available, on Python 3 there are a few known > problems that should be solved first. When they are, a second beta will > follow. > > Please try this release and report problems on the mailing list. > > Cheers, > Ralf > Hi Ralf, regarding Python 3: 1) The patch from numpy ticket #1919 might be necessary on some platforms . 2) On my system, 2to3 did not correctly convert all imports in special.__init__.py, special.add_newdocs.py, and linalg.misc.py. I needed to manually change -from _logit import logit, expit +from ._logit import logit, expit -import fblas +from . import fblas 3) The attached patch fixes some test errors and failures. Christoph -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: py3.diff URL: From scott.sinclair.za at gmail.com Tue Sep 13 10:08:07 2011 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Tue, 13 Sep 2011 16:08:07 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: Message-ID: On 12 September 2011 23:36, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first beta release of SciPy > 0.10.0. For this release over a 100 tickets and pull requests have been > closed, and many new features have been added. Some of the highlights are: > > ? - support for Bento as a build system for scipy > ? - generalized and shift-invert eigenvalue problems in sparse.linalg > ? - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, release notes > are copied below. I'm not able to build 0.10.0b1 from the source distribution at SourceForge (http://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/scipy-0.10.0b1.tar.gz/download) scott at godzilla:tmp$ tar -xzf scipy-0.10.0b1.tar.gz scott at godzilla:tmp$ cd scipy-0.10.0b1 scott at godzilla:scipy-0.10.0b1$ python setup.py build Traceback (most recent call last): File "setup.py", line 188, in setup_package() File "setup.py", line 165, in setup_package write_version_py() File "setup.py", line 105, in write_version_py from scipy.version import git_revision as GIT_REVISION File "/home/scott/tmp/scipy-0.10.0b1/scipy/init.py", line 115, in raise ImportError(msg) ImportError: Error importing scipy: you cannot import scipy while being in scipy source directory; please exit the scipy source tree first, and relaunch your python intepreter. There's a proposed fix at https://github.com/scipy/scipy/pull/75 Cheers, Scott From bsouthey at gmail.com Tue Sep 13 11:40:50 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 13 Sep 2011 10:40:50 -0500 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: Message-ID: <4E6F7982.1000301@gmail.com> On 09/12/2011 04:36 PM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first beta release of > SciPy 0.10.0. For this release over a 100 tickets and pull requests > have been closed, and many new features have been added. 
Some of the > highlights are: > > - support for Bento as a build system for scipy > - generalized and shift-invert eigenvalue problems in sparse.linalg > - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, release > notes are copied below. > Binaries for Python 2.x are available, on Python 3 there are a few > known problems that should be solved first. When they are, a second > beta will follow. > > Please try this release and report problems on the mailing list. > > Cheers, > Ralf > > > > > ========================== > SciPy 0.10.0 Release Notes > ========================== > > .. note:: Scipy 0.10.0 is not released yet! > > .. contents:: > > SciPy 0.10.0 is the culmination of XXX months of hard work. It contains > many new features, numerous bug-fixes, improved test coverage and > better documentation. There have been a number of deprecations and > API changes in this release, which are documented below. All users > are encouraged to upgrade to this release, as there are a large number > of bug-fixes and optimizations. Moreover, our development attention > will now shift to bug-fix releases on the 0.10.x branch, and on adding > new features on the development trunk. > > This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater. > > > New features > ============ > > Bento: new optional build system > -------------------------------- > > Scipy can now be built with `Bento `_. > Bento has some nice features like parallel builds and partial > rebuilds, that > are not possible with the default build system (distutils). For usage > instructions see BENTO_BUILD.txt in the scipy top-level directory. > > Currently Scipy has three build systems, distutils, numscons and bento. > Numscons is deprecated and is planned and will likely be removed in > the next > release. > > > Generalized and shift-invert eigenvalue problems in > ``scipy.sparse.linalg`` > --------------------------------------------------------------------------- > > The sparse eigenvalue problem solver functions > ``scipy.sparse.eigs/eigh`` now support generalized eigenvalue > problems, and all shift-invert modes available in ARPACK. > > > Discrete-Time Linear Systems (``scipy.signal``) > ----------------------------------------------- > > Support for simulating discrete-time linear systems, including > ``scipy.signal.dlsim``, ``scipy.signal.dimpulse``, and > ``scipy.signal.dstep``, > has been added to SciPy. Conversion of linear systems from > continuous-time to > discrete-time representations is also present via the > ``scipy.signal.cont2discrete`` function. > > > Enhancements to ``scipy.signal`` > -------------------------------- > > A Lomb-Scargle periodogram can now be computed with the new function > ``scipy.signal.lombscargle``. > > The forward-backward filter function ``scipy.signal.filtfilt`` can now > filter the data in a given axis of an n-dimensional numpy array. > (Previously it only handled a 1-dimensional array.) Options have been > added to allow more control over how the data is extended before > filtering. > > FIR filter design with ``scipy.signal.firwin2`` now has options to create > filters of type III (zero at zero and Nyquist frequencies) and IV > (zero at zero > frequency). 
> > > Additional decomposition options (``scipy.linalg``) > --------------------------------------------------- > > A sort keyword has been added to the Schur decomposition routine > (``scipy.linalg.schur``) to allow the sorting of eigenvalues in > the resultant Schur form. > > Additional special matrices (``scipy.linalg``) > ---------------------------------------------- > > The functions ``hilbert`` and ``invhilbert`` were added to > ``scipy.linalg``. > > > Enhancements to ``scipy.stats`` > ------------------------------- > > * The *one-sided form* of Fisher's exact test is now also implemented in > ``stats.fisher_exact``. > * The function ``stats.chi2_contingency`` for computing the chi-square > test of > independence of factors in a contingency table has been added, along > with > the related utility functions ``stats.contingency.margins`` and > ``stats.contingency.expected_freq``. > > > Basic support for Harwell-Boeing file format for sparse matrices > ---------------------------------------------------------------- > > Both read and write are support through a simple function-based API, > as well as > a more complete API to control number format. The functions may be > found in > scipy.sparse.io . > > The following features are supported: > > * Read and write sparse matrices in the CSC format > * Only real, symmetric, assembled matrix are supported (RUA format) > > > Deprecated features > =================== > > ``scipy.maxentropy`` > -------------------- > > The maxentropy module is unmaintained, rarely used and has not been > functioning > well for several releases. Therefore it has been deprecated for this > release, > and will be removed for scipy 0.11. Logistic regression in > scikits.learn is a > good alternative for this functionality. The > ``scipy.maxentropy.logsumexp`` > function has been moved to ``scipy.misc``. > > > ``scipy.lib.blas`` > ------------------ > > There are similar BLAS wrappers in ``scipy.linalg`` and > ``scipy.lib``. These > have now been consolidated as ``scipy.linalg.blas``, and > ``scipy.lib.blas`` is > deprecated. > > > Numscons build system > --------------------- > > The numscons build system is being replaced by Bento, and will be > removed in > one of the next scipy releases. > > > Removed features > ================ > > The deprecated name `invnorm` was removed from > ``scipy.stats.distributions``, > this distribution is available as `invgauss`. > > The following deprecated nonlinear solvers from ``scipy.optimize`` > have been > removed:: > > - ``broyden_modified`` (bad performance) > - ``broyden1_modified`` (bad performance) > - ``broyden_generalized`` (equivalent to ``anderson``) > - ``anderson2`` (equivalent to ``anderson``) > - ``broyden3`` (obsoleted by new limited-memory broyden methods) > - ``vackar`` (renamed to ``diagbroyden``) > > > Other changes > ============= > > ``scipy.constants`` has been updated with the CODATA 2010 constants. > > ``__all__`` dicts have been added to all modules, which has cleaned up the > namespaces (particularly useful for interactive work). > > An API section has been added to the documentation, giving recommended > import > guidelines and specifying which submodules are public and which aren't. 
> > > Checksums > ========= > > f30a85149ebc3d023fce5e012cc7a28a > release/installers/scipy-0.10.0b1-py2.7-python.org-macosx10.6.dmg > 5c4a74cca13e9225efd1840d99af9ee8 > release/installers/scipy-0.10.0b1-win32-superpack-python2.5.exe > b24dd33bfeb07058038ba85d2b1ddfd3 > release/installers/scipy-0.10.0b1-win32-superpack-python2.6.exe > c5222bb7b7fcc28cec4730e3caebf43f > release/installers/scipy-0.10.0b1-win32-superpack-python2.7.exe > c8c6f3870f9d0ef571861da63b4b374b release/installers/scipy-0.10.0b1.tar.gz > 1ce4f01acfccb68dcd6c387eb08a8a88 release/installers/scipy-0.10.0b1.zip > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev The INSTALL.TXT and weave documentation still refers to svn and not git. Also, I do not know if the usage of 'svn' in the tools directory of the developmental version are valid. I noticed these two usages of svn in the release version: 1) 'scipy/setup.py': 'config.make_svn_version_py() # installs __svn_version__.py'? 2) scipy/special/special_version.py The __svn_version__.py was removed so scipy/special/special_version.py fails: $ git log -p scipy/version.py commit 28a5f5f0e03b7746a43ae31925806cbf8df958a8 Author: David Cournapeau Date: Sun May 10 06:57:17 2009 +0000 [snip] With over a year and no complaints, perhaps we could just remove these rather than fix scipy/special/special_version.py? It is easy to fix but it has not changed since 2007 when it was apparently created! Can file a ticket if needed. Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Sep 13 12:20:03 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 18:20:03 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: <4E6E85EF.4050005@uci.edu> References: <4E6E85EF.4050005@uci.edu> Message-ID: Hi Christoph, On Tue, Sep 13, 2011 at 12:21 AM, Christoph Gohlke wrote: > > > On 9/12/2011 2:36 PM, Ralf Gommers wrote: > >> Hi, >> >> I am pleased to announce the availability of the first beta release of >> SciPy 0.10.0. For this release over a 100 tickets and pull requests have >> been closed, and many new features have been added. Some of the >> highlights are: >> >> - support for Bento as a build system for scipy >> - generalized and shift-invert eigenvalue problems in sparse.linalg >> - addition of discrete-time linear systems in the signal module >> >> Sources and binaries can be found at >> https://sourceforge.net/**projects/scipy/files/scipy/0.**10.0b1/, >> release >> notes are copied below. >> Binaries for Python 2.x are available, on Python 3 there are a few known >> problems that should be solved first. When they are, a second beta will >> follow. >> >> Please try this release and report problems on the mailing list. >> >> Cheers, >> Ralf >> >> > Hi Ralf, > > regarding Python 3: > > 1) The patch from numpy ticket #1919 might be necessary on some platforms < > http://projects.scipy.org/**numpy/ticket/1919 > >. > > Any important platforms? I haven't noticed any problems related to that issue yet, and I'd like to keep compatibility with numpy 1.5.x for the binaries. > 2) On my system, 2to3 did not correctly convert all imports in > special.__init__.py, special.add_newdocs.py, and linalg.misc.py. I needed > to manually change > > -from _logit import logit, expit > +from ._logit import logit, expit > > -import fblas > +from . import fblas > I noticed that too. "from . 
import foo" is not valid python 2.4 syntax, so I hope there's an easy way we can make 2to3 see the error of its ways. > > 3) The attached patch fixes some test errors and failures. > > That all looks fine. Is the changed PIL import necessary for PIL trunk? I seem to remember something about changed namespaces in PIL, but "import Image" used to be standard. Does your fix work with older PIL (1.1.6)? Thanks, Ralf > Christoph > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Sep 13 12:29:20 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 18:29:20 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: <4E6F7982.1000301@gmail.com> References: <4E6F7982.1000301@gmail.com> Message-ID: On Tue, Sep 13, 2011 at 5:40 PM, Bruce Southey wrote: > ** > On 09/12/2011 04:36 PM, Ralf Gommers wrote: > > Hi, > > I am pleased to announce the availability of the first beta release of > SciPy 0.10.0. For this release over a 100 tickets and pull requests have > been closed, and many new features have been added. Some of the highlights > are: > > - support for Bento as a build system for scipy > - generalized and shift-invert eigenvalue problems in sparse.linalg > - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, release > notes are copied below. > Binaries for Python 2.x are available, on Python 3 there are a few known > problems that should be solved first. When they are, a second beta will > follow. > > Please try this release and report problems on the mailing list. > > The INSTALL.TXT and weave documentation still refers to svn and not git. > Also, I do not know if the usage of 'svn' in the tools directory of the > developmental version are valid. > The builds scripts under tools/ are outdated, I'm not sure if there's still useful info in them. > I noticed these two usages of svn in the release version: > 1) 'scipy/setup.py': 'config.make_svn_version_py() # installs > __svn_version__.py'? > 2) scipy/special/special_version.py > Should probably both be removed. > > The __svn_version__.py was removed so scipy/special/special_version.py > fails: > $ git log -p scipy/version.py > commit 28a5f5f0e03b7746a43ae31925806cbf8df958a8 > Author: David Cournapeau > Date: Sun May 10 06:57:17 2009 +0000 > [snip] > > With over a year and no complaints, perhaps we could just remove these > rather than fix scipy/special/special_version.py? > It is easy to fix but it has not changed since 2007 when it was apparently > created! > > Can file a ticket if needed. > Please do. And if anyone feels like writing a patch that makes "grin svn" return zero results, that would be helpful. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Tue Sep 13 12:57:26 2011 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 13 Sep 2011 11:57:26 -0500 Subject: [SciPy-Dev] removing netcdf_variable from netcdf.__all__ In-Reply-To: References: Message-ID: On Tue, Aug 30, 2011 at 11:48 AM, Ralf Gommers wrote: > Hi, > > Question for users of scipy.io.netcdf: did you ever use netcdf_variable > directly, or only via netcdf_file.createVariable? 
The documentation says > that the latter is the only intended use, which means that this class > shouldn't be in __all__. https://github.com/scipy/scipy/pull/67 removes it > (and changes its __init__ in a non-backwards compatible way) - the patch > looks correct but I want to double check that no one is using this class. > > Thanks, > Ralf > > Sorry for not replying earlier (I don't regularly track this list). No, I don't use netcdf_variable directly. However, would the documentation for the class's other methods still show up in the main documentation if you remove it from __all__? Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Sep 13 13:04:58 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 19:04:58 +0200 Subject: [SciPy-Dev] removing netcdf_variable from netcdf.__all__ In-Reply-To: References: Message-ID: On Tue, Sep 13, 2011 at 6:57 PM, Benjamin Root wrote: > On Tue, Aug 30, 2011 at 11:48 AM, Ralf Gommers < > ralf.gommers at googlemail.com> wrote: > >> Hi, >> >> Question for users of scipy.io.netcdf: did you ever use netcdf_variable >> directly, or only via netcdf_file.createVariable? The documentation says >> that the latter is the only intended use, which means that this class >> shouldn't be in __all__. https://github.com/scipy/scipy/pull/67 removes >> it (and changes its __init__ in a non-backwards compatible way) - the patch >> looks correct but I want to double check that no one is using this class. >> >> Thanks, >> Ralf >> >> > Sorry for not replying earlier (I don't regularly track this list). No, I > don't use netcdf_variable directly. However, would the documentation for > the class's other methods still show up in the main documentation if you > remove it from __all__? > > It should, since the io module docstring contains: Netcdf (:mod:`scipy.io.netcdf`) =============================== .. module:: scipy.io.netcdf .. autosummary:: :toctree: generated/ netcdf_file - A file object for NetCDF data netcdf_variable - A data object for the netcdf module Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Tue Sep 13 13:22:41 2011 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 13 Sep 2011 12:22:41 -0500 Subject: [SciPy-Dev] removing netcdf_variable from netcdf.__all__ In-Reply-To: References: Message-ID: On Tue, Sep 13, 2011 at 12:04 PM, Ralf Gommers wrote: > > > On Tue, Sep 13, 2011 at 6:57 PM, Benjamin Root wrote: > >> On Tue, Aug 30, 2011 at 11:48 AM, Ralf Gommers < >> ralf.gommers at googlemail.com> wrote: >> >>> Hi, >>> >>> Question for users of scipy.io.netcdf: did you ever use netcdf_variable >>> directly, or only via netcdf_file.createVariable? The documentation says >>> that the latter is the only intended use, which means that this class >>> shouldn't be in __all__. https://github.com/scipy/scipy/pull/67 removes >>> it (and changes its __init__ in a non-backwards compatible way) - the patch >>> looks correct but I want to double check that no one is using this class. >>> >>> Thanks, >>> Ralf >>> >>> >> Sorry for not replying earlier (I don't regularly track this list). No, I >> don't use netcdf_variable directly. However, would the documentation for >> the class's other methods still show up in the main documentation if you >> remove it from __all__? >> >> It should, since the io module docstring contains: > > Netcdf (:mod:`scipy.io.netcdf`) > =============================== > > .. 
module:: scipy.io.netcdf > > .. autosummary:: > :toctree: generated/ > > netcdf_file - A file object for NetCDF data > netcdf_variable - A data object for the netcdf module > > Ralf > Ok, I am fine with that. However, if we have this change in the call signature, I would still feel better having a little note in the docstring pointing out that change in case there was someone who was using this constructor directly. I am still a little wary of how the change to the call signature of the constructor was done. In particular, I am not exactly sure why it would even be needed in the first place. For each NC_* type, there is only one element size that is valid, and each dtype character code should only correspond to a single NC_* type (and vice-versa). Why not have a dict of character code to element size, and just let the character code determine the item size? Or, maybe I am misunderstanding what is going on. I will also double-check on my 32 and 64 bit machines to make sure that everything works as expected. Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Tue Sep 13 13:46:11 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 13 Sep 2011 12:46:11 -0500 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: <4E6F7982.1000301@gmail.com> Message-ID: <4E6F96E3.5010102@gmail.com> On 09/13/2011 11:29 AM, Ralf Gommers wrote: > > > On Tue, Sep 13, 2011 at 5:40 PM, Bruce Southey > wrote: > > On 09/12/2011 04:36 PM, Ralf Gommers wrote: >> Hi, >> >> I am pleased to announce the availability of the first beta >> release of SciPy 0.10.0. For this release over a 100 tickets and >> pull requests have been closed, and many new features have been >> added. Some of the highlights are: >> >> - support for Bento as a build system for scipy >> - generalized and shift-invert eigenvalue problems in sparse.linalg >> - addition of discrete-time linear systems in the signal module >> >> Sources and binaries can be found at >> https://sourceforge.net/projects/scipy/files/scipy/0.10.0b1/, >> release notes are copied below. >> Binaries for Python 2.x are available, on Python 3 there are a >> few known problems that should be solved first. When they are, a >> second beta will follow. >> >> Please try this release and report problems on the mailing list. >> > The INSTALL.TXT and weave documentation still refers to svn and > not git. > Also, I do not know if the usage of 'svn' in the tools directory > of the developmental version are valid. > > > The builds scripts under tools/ are outdated, I'm not sure if there's > still useful info in them. > > > I noticed these two usages of svn in the release version: > 1) 'scipy/setup.py': 'config.make_svn_version_py() # installs > __svn_version__.py'? > 2) scipy/special/special_version.py > > > Should probably both be removed. > > > The __svn_version__.py was removed so > scipy/special/special_version.py fails: > $ git log -p scipy/version.py > commit 28a5f5f0e03b7746a43ae31925806cbf8df958a8 > Author: David Cournapeau > > Date: Sun May 10 06:57:17 2009 +0000 > [snip] > > With over a year and no complaints, perhaps we could just remove > these rather than fix scipy/special/special_version.py? > It is easy to fix but it has not changed since 2007 when it was > apparently created! > > Can file a ticket if needed. > > > Please do. And if anyone feels like writing a patch that makes "grin > svn" return zero results, that would be helpful. 
> > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Done in ticket 1516 http://projects.scipy.org/scipy/ticket/1516 Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Sep 13 13:56:47 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 19:56:47 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: <4E6E85EF.4050005@uci.edu> Message-ID: On Tue, Sep 13, 2011 at 6:20 PM, Ralf Gommers wrote: > Hi Christoph, > > > On Tue, Sep 13, 2011 at 12:21 AM, Christoph Gohlke wrote: > >> >> >> On 9/12/2011 2:36 PM, Ralf Gommers wrote: >> >>> Hi, >>> >>> I am pleased to announce the availability of the first beta release of >>> SciPy 0.10.0. For this release over a 100 tickets and pull requests have >>> been closed, and many new features have been added. Some of the >>> highlights are: >>> >>> - support for Bento as a build system for scipy >>> - generalized and shift-invert eigenvalue problems in sparse.linalg >>> - addition of discrete-time linear systems in the signal module >>> >>> Sources and binaries can be found at >>> https://sourceforge.net/**projects/scipy/files/scipy/0.**10.0b1/, >>> release >>> notes are copied below. >>> Binaries for Python 2.x are available, on Python 3 there are a few known >>> problems that should be solved first. When they are, a second beta will >>> follow. >>> >>> Please try this release and report problems on the mailing list. >>> >>> Cheers, >>> Ralf >>> >>> >> Hi Ralf, >> >> regarding Python 3: >> >> > >> 2) On my system, 2to3 did not correctly convert all imports in >> special.__init__.py, special.add_newdocs.py, and linalg.misc.py. I needed >> to manually change >> >> -from _logit import logit, expit >> +from ._logit import logit, expit >> >> -import fblas >> +from . import fblas >> > > I noticed that too. "from . import foo" is not valid python 2.4 syntax, so > I hope there's an easy way we can make 2to3 see the error of its ways. > > Found it, in tools/py3tool.py diff --git a/tools/py3tool.py b/tools/py3tool.py index 1360c03..fa96066 100755 --- a/tools/py3tool.py +++ b/tools/py3tool.py @@ -162,6 +162,7 @@ def custom_mangling(filename): os.path.join('linalg', 'lapack.py'), os.path.join('linalg', 'flinalg.py'), os.path.join('linalg', 'iterative.py'), + os.path.join('linalg', 'misc.py'), os.path.join('lib', 'blas', '__init__.py'), os.path.join('lib', 'lapack', '__init__.py'), os.path.join('ndimage', 'filters.py'), @@ -180,6 +181,7 @@ def custom_mangling(filename): os.path.join('signal', 'signaltools.py'), os.path.join('signal', 'fir_filter_design.py'), os.path.join('special', '__init__.py'), + os.path.join('special', 'add_newdocs.py'), os.path.join('special', 'basic.py'), os.path.join('special', 'orthogonal.py'), os.path.join('spatial', '__init__.py'), @@ -205,7 +207,7 @@ def custom_mangling(filename): for mod in ['_vq', '_hierarchy_wrap', '_fftpack', 'convolve', '_flinalg', 'fblas', 'flapack', 'cblas', 'clapack', 'calc_lwork', '_cephes', 'specfun', 'orthogonal_eval', - 'lambertw', 'ckdtree', '_distance_wrap', + 'lambertw', 'ckdtree', '_distance_wrap', '_logit', '_minpack', '_zeros', '_lbfgsb', '_cobyla', '_slsqp', '_nnls', 'sigtools', 'spline', 'spectral', Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cgohlke at uci.edu Tue Sep 13 14:03:45 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 13 Sep 2011 11:03:45 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10 beta 1 In-Reply-To: References: <4E6E85EF.4050005@uci.edu> Message-ID: <4E6F9B01.9090806@uci.edu> On 9/13/2011 9:20 AM, Ralf Gommers wrote: > Hi Christoph, > > > On Tue, Sep 13, 2011 at 12:21 AM, Christoph Gohlke > wrote: > > > > On 9/12/2011 2:36 PM, Ralf Gommers wrote: > > Hi, > > I am pleased to announce the availability of the first beta > release of > SciPy 0.10.0. For this release over a 100 tickets and pull > requests have > been closed, and many new features have been added. Some of the > highlights are: > > - support for Bento as a build system for scipy > - generalized and shift-invert eigenvalue problems in > sparse.linalg > - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/ projects/scipy/files/scipy/0. 10.0b1/ > , > release > notes are copied below. > Binaries for Python 2.x are available, on Python 3 there are a > few known > problems that should be solved first. When they are, a second > beta will > follow. > > Please try this release and report problems on the mailing list. > > Cheers, > Ralf > > > Hi Ralf, > > regarding Python 3: > > 1) The patch from numpy ticket #1919 might be necessary on some > platforms >. > > Any important platforms? I haven't noticed any problems related to that > issue yet, and I'd like to keep compatibility with numpy 1.5.x for the > binaries. It works for me too. The ticket doesn't specify the platform, but according to the Cython mailing list the import_umath() macro can fail to compile with Python 3.2, NumPy 1.6.1, 64bit Linux, GCC 4.5.2. > > 2) On my system, 2to3 did not correctly convert all imports in > special.__init__.py, special.add_newdocs.py > , and linalg.misc.py > . I needed to manually change > > -from _logit import logit, expit > +from ._logit import logit, expit > > -import fblas > +from . import fblas > > > I noticed that too. "from . import foo" is not valid python 2.4 syntax, > so I hope there's an easy way we can make 2to3 see the error of its ways. > > > 3) The attached patch fixes some test errors and failures. > > That all looks fine. Is the changed PIL import necessary for PIL trunk? > I seem to remember something about changed namespaces in PIL, but > "import Image" used to be standard. Does your fix work with older PIL > (1.1.6)? This is ticket #1374 . To be safe one could use a try-except: diff --git a/scipy/misc/pilutil.py b/scipy/misc/pilutil.py index d7e89bb..a54edf4 100644 --- a/scipy/misc/pilutil.py +++ b/scipy/misc/pilutil.py @@ -14,8 +14,11 @@ from numpy import amin, amax, ravel, asarray, cast, arange, \ ones, newaxis, transpose, mgrid, iscomplexobj, sum, zeros, uint8, \ issubdtype, array -import Image -import ImageFilter +try: + from PIL import Image, ImageFilter +except ImportError: + import Image + import ImageFilter __all__ = ['fromimage','toimage','imsave','imread','bytescale', 'imrotate','imresize','imshow','imfilter','radon'] Christoph > > Thanks, > Ralf > > > Christoph > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: PIL.diff URL: From ralf.gommers at googlemail.com Tue Sep 13 14:06:02 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 20:06:02 +0200 Subject: [SciPy-Dev] removing netcdf_variable from netcdf.__all__ In-Reply-To: References: Message-ID: On Tue, Sep 13, 2011 at 7:22 PM, Benjamin Root wrote: > On Tue, Sep 13, 2011 at 12:04 PM, Ralf Gommers < > ralf.gommers at googlemail.com> wrote: > >> >> >> On Tue, Sep 13, 2011 at 6:57 PM, Benjamin Root wrote: >> >>> On Tue, Aug 30, 2011 at 11:48 AM, Ralf Gommers < >>> ralf.gommers at googlemail.com> wrote: >>> >>>> Hi, >>>> >>>> Question for users of scipy.io.netcdf: did you ever use netcdf_variable >>>> directly, or only via netcdf_file.createVariable? The documentation says >>>> that the latter is the only intended use, which means that this class >>>> shouldn't be in __all__. https://github.com/scipy/scipy/pull/67 removes >>>> it (and changes its __init__ in a non-backwards compatible way) - the patch >>>> looks correct but I want to double check that no one is using this class. >>>> >>>> Thanks, >>>> Ralf >>>> >>>> >>> Sorry for not replying earlier (I don't regularly track this list). No, >>> I don't use netcdf_variable directly. However, would the documentation for >>> the class's other methods still show up in the main documentation if you >>> remove it from __all__? >>> >>> It should, since the io module docstring contains: >> >> Netcdf (:mod:`scipy.io.netcdf`) >> =============================== >> >> .. module:: scipy.io.netcdf >> >> .. autosummary:: >> :toctree: generated/ >> >> netcdf_file - A file object for NetCDF data >> netcdf_variable - A data object for the netcdf module >> >> Ralf >> > > Ok, I am fine with that. However, if we have this change in the call > signature, I would still feel better having a little note in the docstring > pointing out that change in case there was someone who was using this > constructor directly. > > I am still a little wary of how the change to the call signature of the > constructor was done. In particular, I am not exactly sure why it would > even be needed in the first place. For each NC_* type, there is only one > element size that is valid, and each dtype character code should only > correspond to a single NC_* type (and vice-versa). Why not have a dict of > character code to element size, and just let the character code determine > the item size? > > Or, maybe I am misunderstanding what is going on. I will also double-check > on my 32 and 64 bit machines to make sure that everything works as expected. > > Discussion about this is at https://github.com/scipy/scipy/pull/51 It clearly was broken before, size is platform dependent. If you find a cleaner way to fix the bug, it's not too late to undo this change. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Sep 13 17:25:30 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 13 Sep 2011 23:25:30 +0200 Subject: [SciPy-Dev] Two quick fixes for scipy 0.10 and trunk In-Reply-To: <4E6D94DC.5090909@uci.edu> References: <4E6D94DC.5090909@uci.edu> Message-ID: On Mon, Sep 12, 2011 at 7:13 AM, Christoph Gohlke wrote: > Hello, > > please consider these two trivial changes for scipy 0.10 and master: > Those work for me, thanks. 
I've added them to https://github.com/scipy/scipy/pull/77 Cheers, Ralf > > > 1) Initialize variable *d in _logit.c.src: > > diff --git a/scipy/special/_logit.c.src b/scipy/special/_logit.c.src > index 7b830f7..237ca83 100644 > --- a/scipy/special/_logit.c.src > +++ b/scipy/special/_logit.c.src > @@ -107,7 +107,7 @@ static PyModuleDef moduledef = { > NULL > }; > > -PyObject * > +PyMODINIT_FUNC > PyInit__logit() > { > PyObject *m, *f, *d; > @@ -119,6 +119,8 @@ PyInit__logit() > import_array(); > import_umath(); > > + d = PyModule_GetDict(m); > + > f = PyUFunc_FromFuncAndData(logit_**funcs,data, types, 3, 1, 1, > PyUFunc_None, "logit",NULL , 0); > PyDict_SetItemString(d, "logit", f); > > > > 2) Fix syntax error in fblas_l1.pyf.src: > > diff --git a/scipy/linalg/fblas_l1.pyf.**src b/scipy/linalg/fblas_l1.pyf.* > *src > index 1543a20..664bdd8 100644 > --- a/scipy/linalg/fblas_l1.pyf.**src > +++ b/scipy/linalg/fblas_l1.pyf.**src > @@ -347,7 +347,7 @@ function iamax(n,x,offx,incx) result(k) > > ! This is to avoid Fortran wrappers. > integer iamax,k > - fortranname F_FUNC(iamax,I>AMAX) > + fortranname F_FUNC(iamax,IAMAX) > intent(c) iamax > dimension(*), intent(in) :: x > integer optional, intent(in), check(incx>0||incx<0) :: incx = 1 > > > Thanks, > > Christoph > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim.ivanov at viewdle.com Wed Sep 14 13:29:36 2011 From: maxim.ivanov at viewdle.com (Maxim Ivanov) Date: Wed, 14 Sep 2011 20:29:36 +0300 Subject: [SciPy-Dev] [PATCH] proper broadcasting for epsilon in scipy.optimize.approx_fprime() Message-ID: <1316021122.1610.15.camel@turing> Hi SciPy developers! Many of the functions in scipy.optimize module (e.g. fmin_bfgs) accept the 'epsilon' parameter, which is meant to be the step size coefficient. For multidimensional optimization one may want to set different values of the coefficient for different dimensions. But, at least in 0.8, the fmin_bfgs function, contrary to what its docs say, can't handle vectors as epsilon argument, only scalars work. The patch which I'm submitting uses Numpy broadcasting for the purpose of accepting both scalars AND vectors of appropriate size as the epsilon parameter. It may not apply clearly in trunk, as was developed (or, rather, quickly hacked around) for 0.8. Please rebase if needed. Best regards, Maxim Ivanov --- optimize.py 2010-07-26 17:48:33.000000000 +0300 +++ optimize.py 2011-09-14 19:49:11.040484658 +0300 @@ -617,8 +617,9 @@ grad = numpy.zeros((len(xk),), float) ei = numpy.zeros((len(xk),), float) for k in range(len(xk)): - ei[k] = epsilon - grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon + ei[k] = 1.0 + d = epsilon * ei + grad[k] = (f(*((xk+d,)+args)) - f0)/d[k] ei[k] = 0.0 return grad From fabian.pedregosa at inria.fr Thu Sep 15 06:42:00 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Thu, 15 Sep 2011 12:42:00 +0200 Subject: [SciPy-Dev] [PATCH] proper broadcasting for epsilon in scipy.optimize.approx_fprime() In-Reply-To: <752245404.163147.1316021384376.JavaMail.root@zmbs3.inria.fr> References: <752245404.163147.1316021384376.JavaMail.root@zmbs3.inria.fr> Message-ID: On Wed, Sep 14, 2011 at 7:29 PM, Maxim Ivanov wrote: > Hi SciPy developers! > > Many of the functions in scipy.optimize module (e.g. fmin_bfgs) accept > the 'epsilon' parameter, which is meant to be the step size coefficient. 
> For multidimensional optimization one may want to set different values > of the coefficient for different dimensions. But, at least in 0.8, the > fmin_bfgs function, contrary to what its docs say, can't handle vectors > as epsilon argument, only scalars work. > > The patch which I'm submitting uses Numpy broadcasting for the purpose > of accepting both scalars AND vectors of appropriate size as the epsilon > parameter. Hi Maxim. Thanks for the patch, I could apply it to current master and made a pull request out of it: https://github.com/scipy/scipy/pull/80 Hopefully people with more experience than me can comment on it. Best, Fabian From tomasz at kotarba.net Thu Sep 15 08:39:49 2011 From: tomasz at kotarba.net (Tomasz J. Kotarba) Date: Thu, 15 Sep 2011 13:39:49 +0100 Subject: [SciPy-Dev] scipy.spatial.distance.pdist - unnecessarily limited functionality and a suggestion of a possible solution In-Reply-To: References: Message-ID: Hello, Thanks, I will do that. Regards, T On 29 August 2011 18:11, Ralf Gommers wrote: > Could you put > together a patch including a test that covers your use case? > > Ralf From denis.laxalde at mcgill.ca Thu Sep 15 15:02:21 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Thu, 15 Sep 2011 15:02:21 -0400 Subject: [SciPy-Dev] improvements of optimize package Message-ID: <20110915150221.6614ea93@mail.gmail.com> Hi, As discussed recently on the -user list [1], I've started thinking about possible improvements of the optimize package, in particular concerning the consistency of functions signature, parameters/returns names, etc. I've posted a proposal on the wiki of my gihub account [2]. In brief, the proposal is two-fold: - First, concerning the standardization of functions signature, I would propose in particular to gather solver settings in a dictionary (named `options`) and to generalized the use of the `infodict`, `ier`, `mesg` outputs which respectively correspond solver statistics, exit flag and information message. I've tried also to choose simple yet informative variables names. - Then, as discussed in the aforementioned thread, the implementation of unified interfaces (or wrappers) to several algorithms with similar purpose is proposed. There definition is basically taken from the classification of the optimize package documentation [3]. I've also try to list the impact of the proposed changes on the existing functions as well. Also, having started working on the code, I would say that most of this is feasible. Yet, many choices are somehow arbitrary and are thus subject to discussions so: comments welcome! -- Denis Laxalde 1:?http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html 2:?https://github.com/dlaxalde/scipy/wiki/Improvements-to-optimize-package 3: http://docs.scipy.org/doc/scipy/reference/optimize.html From jason-sage at creativetrax.com Fri Sep 16 04:29:49 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 16 Sep 2011 03:29:49 -0500 Subject: [SciPy-Dev] balancing matrices before computing eigenvalues Message-ID: <4E7308FD.9050904@creativetrax.com> I recently ran into some significant roundoff error when computing the eigenvalues and eigenvectors of a matrix with condition number around 3000. I didn't have this problem with matlab. Reading the matlab documentation revealed that matlab, by default, balances the matrix before computing the eigenvalues/eigenvectors [1]. Presumably, they use something like dgebal or dggbal [2] from lapack. 
Does scipy do any sort of balancing before computing eigenvalues/eigenvectors? Thanks, Jason [1] http://www.mathworks.com/help/techdoc/ref/eig.html [2] http://www.netlib.org/lapack/lapack-3.1.1/html/dgebal.f.html From cjordan1 at uw.edu Fri Sep 16 17:12:48 2011 From: cjordan1 at uw.edu (Christopher Jordan-Squire) Date: Fri, 16 Sep 2011 16:12:48 -0500 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <20110915150221.6614ea93@mail.gmail.com> References: <20110915150221.6614ea93@mail.gmail.com> Message-ID: On Thu, Sep 15, 2011 at 2:02 PM, Denis Laxalde wrote: > Hi, > > As discussed recently on the -user list [1], I've started thinking > about possible improvements of the optimize package, in particular > concerning the consistency of functions signature, parameters/returns > names, etc. I've posted a proposal on the wiki of my gihub account [2]. > In brief, the proposal is two-fold: > > ?- First, concerning the standardization of functions signature, I > ? would propose in particular to gather solver settings in a > ? dictionary (named `options`) and to generalized the use of the > ? `infodict`, `ier`, `mesg` outputs which respectively correspond > ? solver statistics, exit flag and information message. I've tried > ? also to choose simple yet informative variables names. > > ?- Then, as discussed in the aforementioned thread, the implementation > ? of unified interfaces (or wrappers) to several algorithms with > ? similar purpose is proposed. There definition is basically taken > ? from the classification of the optimize package documentation [3]. > > I've also try to list the impact of the proposed changes on the > existing functions as well. Also, having started working on the code, I > would say that most of this is feasible. Yet, many choices are somehow > arbitrary and are thus subject to discussions so: comments welcome! > > -- > Denis Laxalde > > ?1:?http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html > ?2:?https://github.com/dlaxalde/scipy/wiki/Improvements-to-optimize-package > ?3: http://docs.scipy.org/doc/scipy/reference/optimize.html > > Looks well thought-out. A few comments: Is there a reason to give two interfaces for constrained and unconstrained optimization instead of combining them somehow? Why not include leastsq with the unconstrained optimization methods for the unconstrained optimization interface? Why not put anneal in with the unconstrained optimization interface? You could include in the interface the possibility of returning a dictionary of solver specific outputs. Bound constrained optimization should be associated with constrained optimization, not unconstrained. Also, I think there's a typo in the multivariate solvers: 'newton\_krylov' should be 'newton_krylov'. -Chris JS > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Fri Sep 16 20:10:39 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 17 Sep 2011 00:10:39 +0000 (UTC) Subject: [SciPy-Dev] balancing matrices before computing eigenvalues References: <4E7308FD.9050904@creativetrax.com> Message-ID: Fri, 16 Sep 2011 03:29:49 -0500, Jason Grout wrote: [clip] > before computing the eigenvalues/eigenvectors [1]. Presumably, they use > something like dgebal or dggbal [2] from lapack. Does scipy do any sort > of balancing before computing eigenvalues/eigenvectors? Scipy and Numpy call directly LAPACK's DGEEV. 
This routine does internally call DGEBAL to do balancing, which is what I believe Matlab's documentation refers to. Balancing can in some cases in fact be a source of additional errors -- see LAPACK Errata #0057 -- so it may be useful to actually turn it off sometimes. So it seems not so sure the balancing is an issue here. Having different versions of LAPACK etc. may also matter. -- Pauli Virtanen From jason-sage at creativetrax.com Fri Sep 16 20:23:40 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 16 Sep 2011 19:23:40 -0500 Subject: [SciPy-Dev] balancing matrices before computing eigenvalues In-Reply-To: References: <4E7308FD.9050904@creativetrax.com> Message-ID: <4E73E88C.1090204@creativetrax.com> On 9/16/11 7:10 PM, Pauli Virtanen wrote: > Fri, 16 Sep 2011 03:29:49 -0500, Jason Grout wrote: > [clip] >> before computing the eigenvalues/eigenvectors [1]. Presumably, they use >> something like dgebal or dggbal [2] from lapack. Does scipy do any sort >> of balancing before computing eigenvalues/eigenvectors? > > Scipy and Numpy call directly LAPACK's DGEEV. This routine does > internally call DGEBAL to do balancing, which is what I believe > Matlab's documentation refers to. Balancing can in some cases in > fact be a source of additional errors -- see LAPACK Errata #0057 > -- so it may be useful to actually turn it off sometimes. > > So it seems not so sure the balancing is an issue here. > Having different versions of LAPACK etc. may also matter. Thanks for the clarification. Matlab has a 'nobalance' option to turn off the balancing---maybe the only reason they specifically said they balanced the matrices was to point out what the nobalance option was. The two different computations were on two different computers, so it's possible that different versions of LAPACK might have caused the problems. I'll see about possibly trying to track down things a bit further. Thanks, Jason From Deil.Christoph at googlemail.com Fri Sep 16 21:43:58 2011 From: Deil.Christoph at googlemail.com (Christoph Deil) Date: Sat, 17 Sep 2011 03:43:58 +0200 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: References: <20110915150221.6614ea93@mail.gmail.com> Message-ID: <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> On Sep 16, 2011, at 11:12 PM, Christopher Jordan-Squire wrote: > On Thu, Sep 15, 2011 at 2:02 PM, Denis Laxalde wrote: >> Hi, >> >> As discussed recently on the -user list [1], I've started thinking >> about possible improvements of the optimize package, in particular >> concerning the consistency of functions signature, parameters/returns >> names, etc. I've posted a proposal on the wiki of my gihub account [2]. >> In brief, the proposal is two-fold: >> >> - First, concerning the standardization of functions signature, I >> would propose in particular to gather solver settings in a >> dictionary (named `options`) and to generalized the use of the >> `infodict`, `ier`, `mesg` outputs which respectively correspond >> solver statistics, exit flag and information message. I've tried >> also to choose simple yet informative variables names. >> >> - Then, as discussed in the aforementioned thread, the implementation >> of unified interfaces (or wrappers) to several algorithms with >> similar purpose is proposed. There definition is basically taken >> from the classification of the optimize package documentation [3]. >> >> I've also try to list the impact of the proposed changes on the >> existing functions as well. 
Also, having started working on the code, I >> would say that most of this is feasible. Yet, many choices are somehow >> arbitrary and are thus subject to discussions so: comments welcome! >> >> -- >> Denis Laxalde >> >> 1: http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html >> 2: https://github.com/dlaxalde/scipy/wiki/Improvements-to-optimize-package >> 3: http://docs.scipy.org/doc/scipy/reference/optimize.html >> >> > > Looks well thought-out. A few comments: > > Is there a reason to give two interfaces for constrained and > unconstrained optimization instead of combining them somehow? > Why not include leastsq with the unconstrained optimization methods > for the unconstrained optimization interface? > Why not put anneal in with the unconstrained optimization interface? > You could include in the interface the possibility of returning a > dictionary of solver specific outputs. > > Bound constrained optimization should be associated with constrained > optimization, not unconstrained. Also, I think there's a typo in the > multivariate solvers: 'newton\_krylov' should be 'newton_krylov'. > > -Chris JS > Hi Denis, I also think your proposal is great. Here are my comments (mostly on new names), in the order of the document: - Why not have all optimizers return only two things, x and infodict? The difference to your proposal would be that "ier" and "mesg" as well as solver-dependent extra stuff would all be in infodict instead of having variable-length return tuples, which make it harder to quickly switch optimizers. If it's just a dict I can simply pprint it and as long as I don't look at items that are not available for all solvers, I only have to change one word in the code (the optimizer name) to switch to any other optimizer. - Can we call the unified minimizer simply "minimize" instead of "fminunc", which is kind of hard to pronounce? - Why not call the (optional) returned function, Jacobian and Hessian "fun", "jac" and "hess". This is easier to remember than the f, g, h you propose, because it is the same names as for the inputs. - constrained minimizers could be "con_minimize" instead of "fmincon". Also: you call the parameters "cons" (e.g. "eq_cons"), but the minimizer "con". Pick one or the other. - Scalar function minimizers could be "minimize1d" instead of "fmins", similar to "interp1d" and other 1d specific stuff in scipy. - I would avoid renaming fmin to something else, because it is so much in use. You could simply choose a new name for the root finding wrapper, e.g. "root". Thanks for making a nicer scipy.optimize! Christoph From charlesr.harris at gmail.com Fri Sep 16 23:06:29 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 16 Sep 2011 21:06:29 -0600 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> Message-ID: On Fri, Sep 16, 2011 at 7:43 PM, Christoph Deil < Deil.Christoph at googlemail.com> wrote: > > On Sep 16, 2011, at 11:12 PM, Christopher Jordan-Squire wrote: > > > On Thu, Sep 15, 2011 at 2:02 PM, Denis Laxalde > wrote: > >> Hi, > >> > >> As discussed recently on the -user list [1], I've started thinking > >> about possible improvements of the optimize package, in particular > >> concerning the consistency of functions signature, parameters/returns > >> names, etc. I've posted a proposal on the wiki of my gihub account [2]. 
> >> In brief, the proposal is two-fold: > >> > >> - First, concerning the standardization of functions signature, I > >> would propose in particular to gather solver settings in a > >> dictionary (named `options`) and to generalized the use of the > >> `infodict`, `ier`, `mesg` outputs which respectively correspond > >> solver statistics, exit flag and information message. I've tried > >> also to choose simple yet informative variables names. > >> > >> - Then, as discussed in the aforementioned thread, the implementation > >> of unified interfaces (or wrappers) to several algorithms with > >> similar purpose is proposed. There definition is basically taken > >> from the classification of the optimize package documentation [3]. > >> > >> I've also try to list the impact of the proposed changes on the > >> existing functions as well. Also, having started working on the code, I > >> would say that most of this is feasible. Yet, many choices are somehow > >> arbitrary and are thus subject to discussions so: comments welcome! > >> > >> -- > >> Denis Laxalde > >> > >> 1: > http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html > >> 2: > https://github.com/dlaxalde/scipy/wiki/Improvements-to-optimize-package > >> 3: http://docs.scipy.org/doc/scipy/reference/optimize.html > >> > >> > > > > Looks well thought-out. A few comments: > > > > Is there a reason to give two interfaces for constrained and > > unconstrained optimization instead of combining them somehow? > > Why not include leastsq with the unconstrained optimization methods > > for the unconstrained optimization interface? > > Why not put anneal in with the unconstrained optimization interface? > > You could include in the interface the possibility of returning a > > dictionary of solver specific outputs. > > > > Bound constrained optimization should be associated with constrained > > optimization, not unconstrained. Also, I think there's a typo in the > > multivariate solvers: 'newton\_krylov' should be 'newton_krylov'. > > > > -Chris JS > > > > Hi Denis, > > I also think your proposal is great. > Here are my comments (mostly on new names), in the order of the document: > > - Why not have all optimizers return only two things, x and infodict? The > difference to your proposal would be that "ier" and "mesg" as well as > solver-dependent extra stuff would all be in infodict instead of having > variable-length return tuples, which make it harder to quickly switch > optimizers. If it's just a dict I can simply pprint it and as long as I > don't look at items that are not available for all solvers, I only have to > change one word in the code (the optimizer name) to switch to any other > optimizer. > +1, and put x in the infodict also so you can pickle the whole thing in one go or pass it to another function to analyse the results. > - Can we call the unified minimizer simply "minimize" instead of "fminunc", > which is kind of hard to pronounce? > - Why not call the (optional) returned function, Jacobian and Hessian "fun", > "jac" and "hess". This is easier to remember than the f, g, h you propose, > because it is the same names as for the inputs. > - constrained minimizers could be "con_minimize" instead of "fmincon". > Also: you call the parameters "cons" (e.g. "eq_cons"), but the minimizer > "con". Pick one or the other. > - Scalar function minimizers could be "minimize1d" instead of "fmins", > similar to "interp1d" and other 1d specific stuff in scipy. 
> - I would avoid renaming fmin to something else, because it is so much in > use. You could simply choose a new name for the root finding wrapper, e.g. > "root". > > Good suggestions all. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Sep 16 23:18:27 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 16 Sep 2011 21:18:27 -0600 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <20110915150221.6614ea93@mail.gmail.com> References: <20110915150221.6614ea93@mail.gmail.com> Message-ID: On Thu, Sep 15, 2011 at 1:02 PM, Denis Laxalde wrote: > Hi, > > As discussed recently on the -user list [1], I've started thinking > about possible improvements of the optimize package, in particular > concerning the consistency of functions signature, parameters/returns > names, etc. I've posted a proposal on the wiki of my gihub account [2]. > In brief, the proposal is two-fold: > > - First, concerning the standardization of functions signature, I > would propose in particular to gather solver settings in a > dictionary (named `options`) and to generalized the use of the > `infodict`, `ier`, `mesg` outputs which respectively correspond > solver statistics, exit flag and information message. I've tried > also to choose simple yet informative variables names. > > - Then, as discussed in the aforementioned thread, the implementation > of unified interfaces (or wrappers) to several algorithms with > similar purpose is proposed. There definition is basically taken > from the classification of the optimize package documentation [3]. > > I've also try to list the impact of the proposed changes on the > existing functions as well. Also, having started working on the code, I > would say that most of this is feasible. Yet, many choices are somehow > arbitrary and are thus subject to discussions so: comments welcome! > > -- > Denis Laxalde > > 1: http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html > 2: > https://github.com/dlaxalde/scipy/wiki/Improvements-to-optimize-package > 3: http://docs.scipy.org/doc/scipy/reference/optimize.html > > > The 1-d root finders need stopping conditions in the parameter list. Currently they have both absolute and relative step size, which are used together. I think that both should be kept. I think we could dispense with the brenth and ridder solvers as I don't see that they solve any problem that the other solvers can't deal with. Does anybody use them? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkb.teichmann at gmail.com Sat Sep 17 08:11:30 2011 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sat, 17 Sep 2011 14:11:30 +0200 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: Hi list, I've been working on qr_multiply, as some of you might have already known, and realized that more than 10% of the time is actually spent in asarray_chkfinite, at which point I discovered this discussion and just wanted to add my two cents. I think it is a problem that should be tackled, but there are several possible solutions: * just say numpy.asarray_chkfinite=numpy.asarray. That works fine as a hack if you know what you're doing * adding the parameter chkfinite to all functions. That's uncool, I think, hard to maintain on the long run. 
* improving asarray_chkfinite. I think Fabian is working on that. actually, the only thing needed would be a function isallfinite, since I guess that most of the time is spent in the creation of the logical array in isfinite(a).all(). * adding a ALLFINITE flag to ndarrays. Ga?l correctly pointed out that this is not as simple as it sounds, but it's nevertheless possible: ALLFINITE would have to be a immutable flag at creation time of the ndarray, and everytime one does something to the ndarray (possibly via a view, to which the ALLFINITE flag has to be copied), we have to check everything is finite. This solution would create a lot of work, but also nice side-effects: even NaN-aware functions in numpy can be very slow on some (actually very common) hardware, accidental NaNs can be very annoying. * adding an "finite" intent into f2py. This is currently my preferred solution. as f2py often has to copy the data to make it available for FORTRAN, checking finiteness could be done while copying with virtually no loss of speed. So, what are yall thinking? Greetings Martin -- Max-Born-Institut Max-Born-Stra?e 2a 12489 Berlin +49 30 6392 1234 From charlesr.harris at gmail.com Sat Sep 17 13:52:36 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 17 Sep 2011 11:52:36 -0600 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: On Sun, Aug 28, 2011 at 11:07 AM, Christopher Jordan-Squire wrote: > Fabian says he's working on a more efficient numpy asarray_chkfinite > implementation, and will hopefully have something within a week. > > https://github.com/scipy/scipy/pull/48 > > Perhaps this discussion should be put on hold until we see if a faster > and more memory-efficient asarray_chkfinite makes this pull request > moot. > > Somewhat related, how are we going to handle the new masked arrays? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Sep 18 07:02:08 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 18 Sep 2011 13:02:08 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 Message-ID: Hi, The first beta release of scipy 0.10.0 was, well, beta quality, therefore I am pleased to announce the availability of the second 0.10.0 beta release. For this release over a 100 tickets and pull requests have been closed, and many new features have been added. Some of the highlights are: - support for Bento as a build system for scipy - generalized and shift-invert eigenvalue problems in sparse.linalg - addition of discrete-time linear systems in the signal module Sources and binaries can be found at https://sourceforge.net/projects/scipy/files/scipy/0.10.0b2/, release notes are copied below. SciPy 0.10 is compatible with Python 2.4 - 3.2, and requires numpy 1.5.1 or higher. Please try this release and report problems on the mailing list. Cheers, Ralf ========================== SciPy 0.10.0 Release Notes ========================== .. note:: Scipy 0.10.0 is not released yet! .. contents:: SciPy 0.10.0 is the culmination of XXX months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. 
All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.10.x branch, and on adding new features on the development trunk. This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater. New features ============ Bento: new optional build system -------------------------------- Scipy can now be built with `Bento `_. Bento has some nice features like parallel builds and partial rebuilds, that are not possible with the default build system (distutils). For usage instructions see BENTO_BUILD.txt in the scipy top-level directory. Currently Scipy has three build systems, distutils, numscons and bento. Numscons is deprecated and is planned and will likely be removed in the next release. Generalized and shift-invert eigenvalue problems in ``scipy.sparse.linalg`` --------------------------------------------------------------------------- The sparse eigenvalue problem solver functions ``scipy.sparse.eigs/eigh`` now support generalized eigenvalue problems, and all shift-invert modes available in ARPACK. Discrete-Time Linear Systems (``scipy.signal``) ----------------------------------------------- Support for simulating discrete-time linear systems, including ``scipy.signal.dlsim``, ``scipy.signal.dimpulse``, and ``scipy.signal.dstep``, has been added to SciPy. Conversion of linear systems from continuous-time to discrete-time representations is also present via the ``scipy.signal.cont2discrete`` function. Enhancements to ``scipy.signal`` -------------------------------- A Lomb-Scargle periodogram can now be computed with the new function ``scipy.signal.lombscargle``. The forward-backward filter function ``scipy.signal.filtfilt`` can now filter the data in a given axis of an n-dimensional numpy array. (Previously it only handled a 1-dimensional array.) Options have been added to allow more control over how the data is extended before filtering. FIR filter design with ``scipy.signal.firwin2`` now has options to create filters of type III (zero at zero and Nyquist frequencies) and IV (zero at zero frequency). Additional decomposition options (``scipy.linalg``) --------------------------------------------------- A sort keyword has been added to the Schur decomposition routine (``scipy.linalg.schur``) to allow the sorting of eigenvalues in the resultant Schur form. Additional special matrices (``scipy.linalg``) ---------------------------------------------- The functions ``hilbert`` and ``invhilbert`` were added to ``scipy.linalg``. Enhancements to ``scipy.stats`` ------------------------------- * The *one-sided form* of Fisher's exact test is now also implemented in ``stats.fisher_exact``. * The function ``stats.chi2_contingency`` for computing the chi-square test of independence of factors in a contingency table has been added, along with the related utility functions ``stats.contingency.margins`` and ``stats.contingency.expected_freq``. Basic support for Harwell-Boeing file format for sparse matrices ---------------------------------------------------------------- Both read and write are support through a simple function-based API, as well as a more complete API to control number format. The functions may be found in scipy.sparse.io. 
The following features are supported: * Read and write sparse matrices in the CSC format * Only real, symmetric, assembled matrix are supported (RUA format) Deprecated features =================== ``scipy.maxentropy`` -------------------- The maxentropy module is unmaintained, rarely used and has not been functioning well for several releases. Therefore it has been deprecated for this release, and will be removed for scipy 0.11. Logistic regression in scikits.learn is a good alternative for this functionality. The ``scipy.maxentropy.logsumexp`` function has been moved to ``scipy.misc``. ``scipy.lib.blas`` ------------------ There are similar BLAS wrappers in ``scipy.linalg`` and ``scipy.lib``. These have now been consolidated as ``scipy.linalg.blas``, and ``scipy.lib.blas`` is deprecated. Numscons build system --------------------- The numscons build system is being replaced by Bento, and will be removed in one of the next scipy releases. Removed features ================ The deprecated name `invnorm` was removed from ``scipy.stats.distributions``, this distribution is available as `invgauss`. The following deprecated nonlinear solvers from ``scipy.optimize`` have been removed:: - ``broyden_modified`` (bad performance) - ``broyden1_modified`` (bad performance) - ``broyden_generalized`` (equivalent to ``anderson``) - ``anderson2`` (equivalent to ``anderson``) - ``broyden3`` (obsoleted by new limited-memory broyden methods) - ``vackar`` (renamed to ``diagbroyden``) Other changes ============= ``scipy.constants`` has been updated with the CODATA 2010 constants. ``__all__`` dicts have been added to all modules, which has cleaned up the namespaces (particularly useful for interactive work). An API section has been added to the documentation, giving recommended import guidelines and specifying which submodules are public and which aren't. Checksums ========= f986e635f37eb064f647fcd7ecffbd20 release/installers/scipy-0.10.0b2-py2.7-python.org-macosx10.6.dmg 6e474846d85469271c7c5adb87c7edef release/installers/scipy-0.10.0b2-win32-superpack-python2.5.exe 9aa3cf8eb60e9f9cddc70c7110a5fd42 release/installers/scipy-0.10.0b2-win32-superpack-python2.6.exe 43f462791a4b9159df57054d7294d442 release/installers/scipy-0.10.0b2-win32-superpack-python2.7.exe f6aa76ad9209a879ff525a509574ac77 release/installers/scipy-0.10.0b2-win32-superpack-python3.1.exe 9fc5962c62e0b8b0765128c506e15def release/installers/scipy-0.10.0b2-win32-superpack-python3.2.exe ac6683d466c61b1884d0d185523d7775 release/installers/scipy-0.10.0b2.tar.gz 48371c49661bc443639b90f66713266b release/installers/scipy-0.10.0b2.zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Sep 18 07:31:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 18 Sep 2011 13:31:21 +0200 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: On Sat, Sep 17, 2011 at 2:11 PM, Martin Teichmann wrote: > Hi list, > > I've been working on qr_multiply, as some of you might have already > known, and realized that more than 10% of the time is actually spent > in asarray_chkfinite, at which point I discovered this discussion and > just wanted to add my two cents. > > I think it is a problem that should be tackled, but there are several > possible solutions: > > * just say numpy.asarray_chkfinite=numpy.asarray. 
That works fine as > a hack if you know what you're doing > > * adding the parameter chkfinite to all functions. That's uncool, I think, > hard to maintain on the long run. > Agreed that it's ugly. > > * improving asarray_chkfinite. I think Fabian is working on that. > actually, the only thing needed would be a function isallfinite, since > I guess that most of the time is spent in the creation of the logical > array in isfinite(a).all(). > Fabian noted at https://github.com/scipy/scipy/pull/48 that he was giving up for now. He did give some suggestions about possible ways to go, quoting: - A pure-cython, type-generic function as present in pandas [0] is slow. To go faster, types have to be specialized. For LAPACK, it might not be a problem to write 4 similar routines (float, double, complex, double complex), but it's ugly. - BLAS i_amax function doesn't propagate NaNs but can be used to detect {-np.inf, np.inf} extremely fast. - NumPy has the infrastructure to write functions for several dtypes in a templated fashion (in C). I haven't tried that. > > * adding a ALLFINITE flag to ndarrays. Ga?l correctly pointed out that > this is not as simple as it sounds, but it's nevertheless possible: > ALLFINITE would have to be a immutable flag at creation time of the > ndarray, and everytime one does something to the ndarray (possibly > via a view, to which the ALLFINITE flag has to be copied), we have > to check everything is finite. This solution would create a lot of work, > but also nice side-effects: even NaN-aware functions in numpy can > be very slow on some (actually very common) hardware, accidental > NaNs can be very annoying. > Not impossible, but it would create much bigger problems than it solves. > > * adding an "finite" intent into f2py. This is currently my preferred > solution. > as f2py often has to copy the data to make it available for FORTRAN, > checking finiteness could be done while copying with virtually no > loss of speed. > Would this work in all cases? And wouldn't you get a small penalty for cases where you don't need to use asarray_chkfinite (not sure how often this happens)? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Sep 18 12:07:13 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 18 Sep 2011 09:07:13 -0700 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: On Sat, Sep 17, 2011 at 5:11 AM, Martin Teichmann wrote: > * adding a ALLFINITE flag to ndarrays. Ga?l correctly pointed out that > this is not as simple as it sounds, but it's nevertheless possible: > ALLFINITE would have to be a immutable flag at creation time of the > ndarray, and everytime one does something to the ndarray (possibly > via a view, to which the ALLFINITE flag has to be copied), we have > to check everything is finite. This solution would create a lot of work, > but also nice side-effects: even NaN-aware functions in numpy can > be very slow on some (actually very common) hardware, accidental > NaNs can be very annoying. I don't think this is possible. Think, for example, of a ctypes call that simply receives the array data pointer, and then directly operates on memory. There is no way for the array to know that this is happening. 
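To illustrate the point about external writes, a minimal sketch (not from the thread): any code that holds the raw data pointer can introduce a NaN without numpy being involved, so an all-finite flag cached at creation time could silently become stale:

    import ctypes
    import numpy as np

    a = np.zeros(4)                    # all finite when created
    ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    ptr[2] = float('nan')              # written through the raw pointer; numpy is never told

    print(a)                           # [  0.   0.  nan   0.]
    print(np.isfinite(a).all())        # False: a flag set at creation time would now be wrong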
Regards St?fan From cgohlke at uci.edu Sun Sep 18 15:37:50 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sun, 18 Sep 2011 12:37:50 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: Message-ID: <4E76488E.1030106@uci.edu> On 9/18/2011 4:02 AM, Ralf Gommers wrote: > Hi, > > The first beta release of scipy 0.10.0 was, well, beta quality, > therefore I am pleased to announce the availability of the second 0.10.0 > beta release. For this release over a 100 tickets and pull requests have > been closed, and many new features have been added. Some of the > highlights are: > > - support for Bento as a build system for scipy > - generalized and shift-invert eigenvalue problems in sparse.linalg > - addition of discrete-time linear systems in the signal module > > Sources and binaries can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b2/, release > notes are copied below. SciPy 0.10 is compatible with Python 2.4 - 3.2, > and requires numpy 1.5.1 or higher. > > Please try this release and report problems on the mailing list. > > Cheers, > Ralf > > Hello Ralf, looks good. Just some minor issues: 1) I was unable to build from scipy-0.10.0b2.zip: "error: src\fblaswrap.f: No such file or directory". The git branch is OK. 2) Can the file japanese_utf8.txt in scipy\io\matlab\tests\data somehow be marked as binary on github? It is automatically converted to Windows line endings such that test_mio.test_load fails. 3) Building with Intel C compiler fails on Windows due to a bug in Cython 0.15, which has recently been fixed . The following files are affected: interpolate/interpnd.c io/matlab/mio5_utils.c io/matlab/mio_utils.c io/matlab/streams.c spatial/ckdtree.c spatial/qhull.c 4) FAIL: test_datatypes.test_uint64_max ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python27\lib\site-packages\nose\case.py", line 197, in runTest self.test(*self.arg) File "X:\Python27\lib\site-packages\scipy\ndimage\tests\test_datatypes.py", line 57, in test_uint64_max assert_true(x[1] > (2**63)) AssertionError: False is not true This is due to the 32 bit visual C compiler using signed int64 when converting between uint64 to double. Anyway, it seems unreliable to me to cast back and forth between uint64 and double types for large numbers because of potential overflow and precision loss. 5) Two known failures in test_arpack.test_complex_nonsymmetric_modes and test_arpack.test_symmetric_modes. Ticket 1515 . Christoph From stefan at sun.ac.za Sun Sep 18 16:04:44 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 18 Sep 2011 13:04:44 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: <4E76488E.1030106@uci.edu> References: <4E76488E.1030106@uci.edu> Message-ID: On Sun, Sep 18, 2011 at 12:37 PM, Christoph Gohlke wrote: > 4) FAIL: test_datatypes.test_uint64_max > ---------------------------------------------------------------------- > Traceback (most recent call last): > ? File "X:\Python27\lib\site-packages\nose\case.py", line 197, in runTest > ? ? self.test(*self.arg) > ? File > "X:\Python27\lib\site-packages\scipy\ndimage\tests\test_datatypes.py", > line 57, in test_uint64_max > ? ? assert_true(x[1] > (2**63)) > AssertionError: False is not true > > > This is due to the 32 bit visual C compiler using signed int64 when > converting between uint64 to double. 
?Anyway, it seems unreliable to me > to cast back and forth between uint64 and double types for large numbers > because of potential overflow and precision loss. What would be the correct way to write this test? It is simply meant to ensure that numbers close to the upper limit of 64-bit uints are preserved during interpolation operations. Regards St?fan From charlesr.harris at gmail.com Sun Sep 18 17:35:33 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 18 Sep 2011 15:35:33 -0600 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: <4E76488E.1030106@uci.edu> References: <4E76488E.1030106@uci.edu> Message-ID: On Sun, Sep 18, 2011 at 1:37 PM, Christoph Gohlke wrote: > > > On 9/18/2011 4:02 AM, Ralf Gommers wrote: > > Hi, > > > > The first beta release of scipy 0.10.0 was, well, beta quality, > > therefore I am pleased to announce the availability of the second 0.10.0 > > beta release. For this release over a 100 tickets and pull requests have > > been closed, and many new features have been added. Some of the > > highlights are: > > > > - support for Bento as a build system for scipy > > - generalized and shift-invert eigenvalue problems in sparse.linalg > > - addition of discrete-time linear systems in the signal module > > > > Sources and binaries can be found at > > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b2/, release > > notes are copied below. SciPy 0.10 is compatible with Python 2.4 - 3.2, > > and requires numpy 1.5.1 or higher. > > > > Please try this release and report problems on the mailing list. > > > > Cheers, > > Ralf > > > > > > Hello Ralf, > > looks good. Just some minor issues: > > 1) I was unable to build from scipy-0.10.0b2.zip: "error: > src\fblaswrap.f: No such file or directory". The git branch is OK. > > 2) Can the file japanese_utf8.txt in scipy\io\matlab\tests\data somehow > be marked as binary on github? It is automatically converted to Windows > line endings such that test_mio.test_load fails. > > We need to put "scipy/io/matlab/tests/data/japanese_utf8.txt binary" in the .gitattributes file. That assumes everyone is using git >= 1.6. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Sun Sep 18 17:44:28 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sun, 18 Sep 2011 14:44:28 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: <4E76488E.1030106@uci.edu> Message-ID: <4E76663C.1010401@uci.edu> On 9/18/2011 1:04 PM, St?fan van der Walt wrote: > On Sun, Sep 18, 2011 at 12:37 PM, Christoph Gohlke wrote: >> 4) FAIL: test_datatypes.test_uint64_max >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "X:\Python27\lib\site-packages\nose\case.py", line 197, in runTest >> self.test(*self.arg) >> File >> "X:\Python27\lib\site-packages\scipy\ndimage\tests\test_datatypes.py", >> line 57, in test_uint64_max >> assert_true(x[1]> (2**63)) >> AssertionError: False is not true >> >> >> This is due to the 32 bit visual C compiler using signed int64 when >> converting between uint64 to double. Anyway, it seems unreliable to me >> to cast back and forth between uint64 and double types for large numbers >> because of potential overflow and precision loss. > > What would be the correct way to write this test? It is simply meant > to ensure that numbers close to the upper limit of 64-bit uints are > preserved during interpolation operations. 
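As background for why such a test is delicate, a short sketch of facts about the types involved (not of the ndimage code itself): a double has a 53-bit mantissa, so uint64 values near the top of the range cannot survive a round trip through floating point exactly, independent of the MSVC conversion bug:

    import numpy as np

    print(np.float64(2**53) == np.float64(2**53 + 1))      # True: integers above 2**53 collide
    print(np.float64(np.uint64(2**64 - 1)) == 2.0**64)     # True: the largest uint64 rounds up to 2**64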
> > Regards > St?fan Hi St?fan, I think the test is fine for that purpose. It fails with 32 bit msvc9 because of a MS bug but it passes when building with the Intel or 64 bit msvc9 compilers. The second part of my comment was more a reminder of the fact that for example: >>> np.uint64(np.float64(2**61) + 100.0) == np.uint64(2**61) True To me it does not look like that ndimage was designed to reliably interpolate integer numbers > 2**53. Maybe I am wrong. Is there a test for that? Christoph From stefan at sun.ac.za Sun Sep 18 21:04:33 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 18 Sep 2011 18:04:33 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: <4E76663C.1010401@uci.edu> References: <4E76488E.1030106@uci.edu> <4E76663C.1010401@uci.edu> Message-ID: On Sun, Sep 18, 2011 at 2:44 PM, Christoph Gohlke wrote: > The second part of my comment was more a reminder of the fact that for > example: >>>> np.uint64(np.float64(2**61) + 100.0) == np.uint64(2**61) > True > > To me it does not look like that ndimage was designed to reliably > interpolate integer numbers > 2**53. Maybe I am wrong. Is there a test > for that? I think the test originated because of a bug where, even if the output dtype was uint64, 32-bit integers wrapped. We could therefore modify the test to work on numbers closer to that upper limit; however, since the current test doesn't evaluate elements but simply determines whether they are in the 64-bit range, I think we're ok. Regards St?fan From scott.sinclair.za at gmail.com Mon Sep 19 03:24:12 2011 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 19 Sep 2011 09:24:12 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: <4E76488E.1030106@uci.edu> References: <4E76488E.1030106@uci.edu> Message-ID: On 18 September 2011 21:37, Christoph Gohlke wrote: > > > On 9/18/2011 4:02 AM, Ralf Gommers wrote: >> The first beta release of scipy 0.10.0 was, well, beta quality, >> therefore I am pleased to announce the availability of the second 0.10.0 >> beta release. For this release over a 100 tickets and pull requests have > 1) I was unable to build from scipy-0.10.0b2.zip: "error: > src\fblaswrap.f: No such file or directory". The git branch is OK. I see the same thing building from scipy-0.10.0b2.tar.gz on 64-bit Ubuntu. No problems or test failures with a Git checkout of the v0.10.0b2 tag. Cheers, Scott From martin.teichmann at mbi-berlin.de Mon Sep 19 03:39:55 2011 From: martin.teichmann at mbi-berlin.de (Martin Teichmann) Date: Mon, 19 Sep 2011 09:39:55 +0200 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: Hi list, Hi Ralph, >> * adding an "finite" intent into f2py. This is currently my preferred >> solution. as f2py often has to copy the data to make it available for FORTRAN, >> checking finiteness could be done while copying with virtually no >> loss of speed. > > Would this work in all cases? And wouldn't you get a small penalty for cases > where you don't need to use asarray_chkfinite (not sure how often this > happens)? That's exactly why I wanted to create a "finite" intent. Whenever it is needed, we declare an input parameter to have "finite" intent, and it will be checked, and not otherwise. A different question is what happens when we don't copy the array. Then we have to iterate through the array to find possible infintes. 
But I consider that a minor issure: pure reading is typically really fast, and it is certainly faster than what we are doing currently. Greetings Martin -- Max-Born-Institut Max-Born-Stra?e 2a 12489 Berlin +49 30 6392 1234 From lkb.teichmann at gmail.com Mon Sep 19 05:55:31 2011 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Mon, 19 Sep 2011 11:55:31 +0200 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: Hi list, I just got the idea of a very simple improvement of asarray_chkfinite: instead of calling isfinite on the array, call it on its sum. That's much faster as no intermedite array has to be created. I got a significant speed improvement by this, so that now asarray_chkfinite does not take up a significant time of the total running time of the linear algebra routines. I commited a pull request https://github.com/numpy/numpy/pull/164 Greetings Martin -- Max-Born-Institut Max-Born-Stra?e 2a 12489 Berlin +49 30 6392 1234 From pav at iki.fi Mon Sep 19 07:31:48 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 19 Sep 2011 11:31:48 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 References: Message-ID: Hi, Sun, 18 Sep 2011 13:02:08 +0200, Ralf Gommers wrote: [clip] > The first beta release of scipy 0.10.0 was, well, beta quality, therefore I > am pleased to announce the availability of the second 0.10.0 beta release. I suggest we add two new sections to the release notes. (i) We could add a brief summary list of backward incompatible changes. This should make it easier for people to see what to change in their code. It duplicates part of the info in the release notes, but could be useful, as it allows to see everything relevant at a glance. Backward incompatible changes ============================= * one-line summary list of backward incompatible changes * list also deprecations (ii) We could steal some ideas from Sympy and add a list of contributors to the release notes. Sure, we have a list in THANKS.txt, but who actually reads that? Authors ======= This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order): * Jeff Armstrong+ * Matthew Brett * Lars Buitinck+ * David Cournapeau * FI$H 2000+ * Michael McNeil Forbes+ * Matty G+ * Christoph Gohlke * Ralf Gommers * Yaroslav Halchenko * Charles Harris * Thouis (Ray) Jones+ * Chris Jordan-Squire+ * Robert Kern * Chris Lasher+ * Wes McKinney+ * Travis Oliphant * Fabian Pedregosa * Josef Perktold * Thomas Robitaille+ * Pim Schellart+ * Anthony Scopatz+ * Skipper Seabold+ * Fazlul Shahriar+ * David Simcha+ * Scott Sinclair+ * Andrey Smirnov+ * Collin RM Stocks+ * Martin Teichmann+ * Jake Vanderplas+ * Ga?l Varoquaux+ * Pauli Virtanen * Stefan van der Walt * Warren Weckesser * Mark Wiebe+ A total of 35 people contributed to this release. People with a "+" by their names contributed a patch for the first time. NOTE: I didn't check this list manually, so there might be some names missing. It's produced by this script http://pav.iki.fi/tmp/git-authors based on commit author names and log messages. 
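Picking up the isfinite-on-the-sum idea from Martin's message above, a rough sketch of how such a check might look (have_nonfinite is a hypothetical helper, not an existing numpy function): a single NaN or inf makes the sum non-finite, so one reduction can stand in for building the full isfinite(a) boolean array, at the cost of a possible false alarm if a sum of large finite values overflows:

    import numpy as np

    def have_nonfinite(a):
        # Fast path: NaN or +/-inf anywhere in `a` makes the sum non-finite.
        # Caveat: a sum of huge finite values can overflow to inf and give a
        # false alarm, so this is a conservative check, not an exact one.
        a = np.asarray(a)
        if a.dtype.kind in 'fc':        # only float/complex arrays can hold NaN/inf
            return not np.isfinite(a.sum())
        return False

    x = np.ones(1000)
    print(have_nonfinite(x))            # False
    x[500] = np.nan
    print(have_nonfinite(x))            # True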
-- Pauli Virtanen From denis.laxalde at mcgill.ca Mon Sep 19 09:03:07 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 09:03:07 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: References: <20110915150221.6614ea93@mail.gmail.com> Message-ID: <20110919090307.42b1bc76@mcgill.ca> Christopher Jordan-Squire wrote: > Is there a reason to give two interfaces for constrained and > unconstrained optimization instead of combining them somehow? Not really. I thought that this minimal separation would lead to simpler functions from a user perspective (i.e. less parameters/options to consider for unconstrained minimization as compared to constrained minimization, less doc to read, etc.) Yet, I don't mind implementing a common function instead if you (and others) think it's better. Any other opinions on this question? > Why not include leastsq with the unconstrained optimization methods > for the unconstrained optimization interface? I think the main reason is that, in leastsq, the objective function returns an array whereas in other methods it returns a scalar. This has been discussed to some extent in the original thread [1]. The purpose of leastsq is to solve a nonlinear least square problem so one might be confused to have to look for it in the 'minimize' function. In some respect, it is also close to root finding algorithms (and is often used in place of the latter when singularities of the Jacobian occur in particular). For this reason, I think leastsq should remain separate. Yet, it should be possible to add it as a method in the unconstrained optimization interface. I'll mention this on the proposal. > Why not put anneal in with the unconstrained optimization interface? > You could include in the interface the possibility of returning a > dictionary of solver specific outputs. That should be doable. I'll update the proposal accordingly. What about brute? > Bound constrained optimization should be associated with constrained > optimization, not unconstrained. Also, I think there's a typo in the > multivariate solvers: 'newton\_krylov' should be 'newton_krylov'. Ok. -- Denis 1:?http://mail.scipy.org/pipermail/scipy-user/2011-September/030444.html From denis.laxalde at mcgill.ca Mon Sep 19 09:14:20 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 09:14:20 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> Message-ID: <20110919091420.7ac8fd77@mcgill.ca> Christoph Deil wrote: > - Why not have all optimizers return only two things, x and infodict? > The difference to your proposal would be that "ier" and "mesg" as > well as solver-dependent extra stuff would all be in infodict instead > of having variable-length return tuples, which make it harder to > quickly switch optimizers. If it's just a dict I can simply pprint it > and as long as I don't look at items that are not available for all > solvers, I only have to change one word in the code (the optimizer > name) to switch to any other optimizer. > - Can we call the unified minimizer simply "minimize" instead of > "fminunc", which is kind of hard to pronounce? > - Why not call the (optional) returned function, Jacobian and Hessian > "fun", "jac" and "hess". This is easier to remember than the f, g, h > you propose, because it is the same names as for the inputs. 
> - constrained minimizers could be "con_minimize" instead of > "fmincon". Also: you call the parameters "cons" (e.g. "eq_cons"), but > the minimizer "con". Pick one or the other. > - Scalar function minimizers could be "minimize1d" instead of > "fmins", similar to "interp1d" and other 1d specific stuff in scipy. Thanks for these comments, I'll update the proposal accordingly. > - I would avoid renaming fmin to something else, because it is so > much in use. You could simply choose a new name for the root finding > wrapper, e.g. "root". You mean 'fsolve' here I guess? "root" is a good name. -- Denis From denis.laxalde at mcgill.ca Mon Sep 19 09:17:30 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 09:17:30 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> Message-ID: <20110919091730.6b2d333e@mcgill.ca> Charles R Harris wrote: > On Fri, Sep 16, 2011 at 7:43 PM, Christoph Deil > wrote: > > - Why not have all optimizers return only two things, x and > > infodict? The difference to your proposal would be that "ier" and > > "mesg" as well as solver-dependent extra stuff would all be in > > infodict instead of having variable-length return tuples, which > > make it harder to quickly switch optimizers. If it's just a dict I > > can simply pprint it and as long as I don't look at items that are > > not available for all solvers, I only have to change one word in > > the code (the optimizer name) to switch to any other optimizer. > > > > +1, and put x in the infodict also so you can pickle the whole thing > in one go or pass it to another function to analyse the results. Ok. So everything in a dictionary. I guess the 'infodict' name will not be suitable anymore then. Maybe 'sol'? -- Denis From denis.laxalde at mcgill.ca Mon Sep 19 09:29:36 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 09:29:36 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: References: <20110915150221.6614ea93@mail.gmail.com> Message-ID: <20110919092936.78cecf90@mcgill.ca> On Fri, 16 Sep 2011 21:18:27 -0600, Charles R Harris wrote: > The 1-d root finders need stopping conditions in the parameter list. > Currently they have both absolute and relative step size, which are used > together. I think that both should be kept. This is not currently documented but looking at the code there is indeed `xtol` and `rtol` for all functions but `newton`. I'll update the proposal accordingly. Thanks. -- Denis From charlesr.harris at gmail.com Mon Sep 19 10:16:35 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 19 Sep 2011 08:16:35 -0600 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <20110919091730.6b2d333e@mcgill.ca> References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> <20110919091730.6b2d333e@mcgill.ca> Message-ID: On Mon, Sep 19, 2011 at 7:17 AM, Denis Laxalde wrote: > Charles R Harris wrote: > > On Fri, Sep 16, 2011 at 7:43 PM, Christoph Deil > > wrote: > > > - Why not have all optimizers return only two things, x and > > > infodict? The difference to your proposal would be that "ier" and > > > "mesg" as well as solver-dependent extra stuff would all be in > > > infodict instead of having variable-length return tuples, which > > > make it harder to quickly switch optimizers. 
If it's just a dict I > > > can simply pprint it and as long as I don't look at items that are > > > not available for all solvers, I only have to change one word in > > > the code (the optimizer name) to switch to any other optimizer. > > > > > > > +1, and put x in the infodict also so you can pickle the whole thing > > in one go or pass it to another function to analyse the results. > > Ok. So everything in a dictionary. I guess the 'infodict' name will not > be suitable anymore then. Maybe 'sol'? > > Just to clarify, I'm suggesting a return like "x, dict", with x in two places. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Sep 19 10:18:07 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 19 Sep 2011 08:18:07 -0600 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <20110919092936.78cecf90@mcgill.ca> References: <20110915150221.6614ea93@mail.gmail.com> <20110919092936.78cecf90@mcgill.ca> Message-ID: On Mon, Sep 19, 2011 at 7:29 AM, Denis Laxalde wrote: > On Fri, 16 Sep 2011 21:18:27 -0600, > Charles R Harris wrote: > > The 1-d root finders need stopping conditions in the parameter list. > > Currently they have both absolute and relative step size, which are used > > together. I think that both should be kept. > > This is not currently documented but looking at the code there is > indeed `xtol` and `rtol` for all functions but `newton`. I'll update > the proposal accordingly. Thanks. > They should probably get added to newton also, it's a common method of terminating on step size. I'll take a look next time I have some time... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis.laxalde at mcgill.ca Mon Sep 19 10:27:49 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 10:27:49 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> <20110919091730.6b2d333e@mcgill.ca> Message-ID: <20110919102749.1a6c51d1@mcgill.ca> On Mon, 19 Sep 2011 08:16:35 -0600, Charles R Harris wrote: > On Mon, Sep 19, 2011 at 7:17 AM, Denis Laxalde wrote: > > > Charles R Harris wrote: > > > On Fri, Sep 16, 2011 at 7:43 PM, Christoph Deil > > > wrote: > > > > - Why not have all optimizers return only two things, x and > > > > infodict? The difference to your proposal would be that "ier" and > > > > "mesg" as well as solver-dependent extra stuff would all be in > > > > infodict instead of having variable-length return tuples, which > > > > make it harder to quickly switch optimizers. If it's just a dict I > > > > can simply pprint it and as long as I don't look at items that are > > > > not available for all solvers, I only have to change one word in > > > > the code (the optimizer name) to switch to any other optimizer. > > > > > > > > > > +1, and put x in the infodict also so you can pickle the whole thing > > > in one go or pass it to another function to analyse the results. > > > > Ok. So everything in a dictionary. I guess the 'infodict' name will not > > be suitable anymore then. Maybe 'sol'? > > > > > Just to clarify, I'm suggesting a return like "x, dict", with x in two > places. Ok. Thanks for clarifying (I misunderstood). I'll keep the 'infodict' name then. 
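To make the "x, dict" convention concrete, a toy sketch that wraps the existing fmin purely for illustration; minimize_demo and the key names below just mirror the proposals in this thread and are not an actual scipy.optimize API:

    import scipy.optimize as opt

    def minimize_demo(fun, x0, **options):
        # fmin with full_output=True returns (xopt, fopt, iters, funcalls, warnflag);
        # repackage that into the kind of dict being discussed here.
        x, fval, niter, nfev, warnflag = opt.fmin(fun, x0, full_output=True,
                                                  disp=False, **options)
        info = {'x': x,                      # solution, also kept in the dict
                'fun': fval,                 # objective value at the solution
                'nfev': nfev,
                'status': warnflag,          # raw solver flag; 0 means converged for fmin
                'success': warnflag == 0,
                'message': 'converged' if warnflag == 0 else 'did not converge'}
        return x, info

    x, info = minimize_demo(lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2, [3.0, 3.0])
    print(info['success'], info['fun'], info['x'])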
-- Denis From deil.christoph at googlemail.com Mon Sep 19 11:41:47 2011 From: deil.christoph at googlemail.com (Christoph Deil) Date: Mon, 19 Sep 2011 17:41:47 +0200 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <20110919102749.1a6c51d1@mcgill.ca> References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> <20110919091730.6b2d333e@mcgill.ca> <20110919102749.1a6c51d1@mcgill.ca> Message-ID: <9A867ECE-C77B-4554-A2BA-F0A4431776A0@googlemail.com> On Sep 19, 2011, at 4:27 PM, Denis Laxalde wrote: > On Mon, 19 Sep 2011 08:16:35 -0600, > Charles R Harris wrote: >> On Mon, Sep 19, 2011 at 7:17 AM, Denis Laxalde wrote: >> >>> Charles R Harris wrote: >>>> On Fri, Sep 16, 2011 at 7:43 PM, Christoph Deil >>>> wrote: >>>>> - Why not have all optimizers return only two things, x and >>>>> infodict? The difference to your proposal would be that "ier" and >>>>> "mesg" as well as solver-dependent extra stuff would all be in >>>>> infodict instead of having variable-length return tuples, which >>>>> make it harder to quickly switch optimizers. If it's just a dict I >>>>> can simply pprint it and as long as I don't look at items that are >>>>> not available for all solvers, I only have to change one word in >>>>> the code (the optimizer name) to switch to any other optimizer. >>>>> >>>> >>>> +1, and put x in the infodict also so you can pickle the whole thing >>>> in one go or pass it to another function to analyse the results. >>> >>> Ok. So everything in a dictionary. I guess the 'infodict' name will not >>> be suitable anymore then. Maybe 'sol'? >>> >>> >> Just to clarify, I'm suggesting a return like "x, dict", with x in two >> places. > > Ok. Thanks for clarifying (I misunderstood). I'll keep the 'infodict' > name then. > I think 'infodict' is fine, although if the dict also contains x, it contains everything and I would prefer 'result'. This is what Matt called it in his very nice lmfit package, although in his case it's actually the Minimizer class itself which stores the fit results: http://newville.github.com/lmfit-py/parameters.html#simple-example I have no opinion if (x,result) or only (result) should be returned, the important point is that all solvers return a same-length tuple and are thus one can easily learn new ones and try different ones out for a given fit problem. Looking over your document again, there's only one more name that is not obvious to me and for which I had to look at the docstring: "ier". Can we rename it to "status" or if that is too general, maybe "optimizer_status" or something similar that is self-descriptive? It would be amazing if I could do this with any optimizer: result = if result.success: print 'Found minimum %s at position %s' % (result.fun, result.x) else: print 'Fit did not converge. The problem was:' print result.status, result.message Specifically I think it would be nice if "success" and "fun" (the function value on optimizer exit) were added to the results dict for all optimizers, since these are the most basic fit results besides "x" every user wants. The current problem with "fun" is that only some optimizers return it, so for those I have to add a line or two of code to do it myself after the fit. Since every optimizer has a "fun" value on exit I guess it doesn't hurt to put it in the results dict for all of them without any performance hit? The current problem with ier = status is that each optimizer has different numbers for success, e.g. 
for leastsq 1,2,3 or 4 is success, for anneal 0 and 1 is success, and looking at the fmin docu I can't figure out how to tell if the fit succeeded at all. I guess the numbers returned are the ones from the underlying libraries and thus should not be changed, so maybe adding a bool success flag to every optimizer for new users would be nice. If others like the dot notation lookup too (result.status possible in addition to result["status"]), it can be added to the results dict using the five lines of code described here: http://parand.com/say/index.php/2008/10/24/python-dot-notation-dictionary-access/ Denis, thanks a lot for your effort! From ralf.gommers at googlemail.com Mon Sep 19 12:15:49 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 19 Sep 2011 18:15:49 +0200 Subject: [SciPy-Dev] adding chkfinite flags to linalg functions In-Reply-To: References: <4E57BE0A.90506@gmail.com> <4E57E9D9.7010009@gmail.com> <4E57F44D.3030507@gmail.com> <4E5800E6.9010805@gmail.com> Message-ID: On Mon, Sep 19, 2011 at 9:39 AM, Martin Teichmann < martin.teichmann at mbi-berlin.de> wrote: > Hi list, > Hi Ralph, > > >> * adding an "finite" intent into f2py. This is currently my preferred > >> solution. as f2py often has to copy the data to make it available for > FORTRAN, > >> checking finiteness could be done while copying with virtually no > >> loss of speed. > > > > Would this work in all cases? And wouldn't you get a small penalty for > cases > > where you don't need to use asarray_chkfinite (not sure how often this > > happens)? > > That's exactly why I wanted to create a "finite" intent. Whenever > it is needed, we declare an input parameter to have "finite" intent, and > it will be checked, and not otherwise. > > Maybe I'm misunderstanding, but this happens at compile time and then finiteness is checked at every call, right? So if you call the same function twice, you do the check twice? This is not that uncommon I think, for example there are many functions that do a first call to determine a work array before the actual function call. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Mon Sep 19 13:23:12 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 19 Sep 2011 19:23:12 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: <4E76488E.1030106@uci.edu> Message-ID: On Sun, Sep 18, 2011 at 11:35 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Sep 18, 2011 at 1:37 PM, Christoph Gohlke wrote: > >> >> 2) Can the file japanese_utf8.txt in scipy\io\matlab\tests\data somehow >> be marked as binary on github? It is automatically converted to Windows >> line endings such that test_mio.test_load fails. >> >> > We need to put "scipy/io/matlab/tests/data/japanese_utf8.txt binary" in the > .gitattributes file. > > I can't find any documentation for the above. Are you sure that works? The standard way is "scipy/io/matlab/tests/data/japanese_utf8.txt -crlf -diff". Let's do that for all Cython-generated files as well. > That assumes everyone is using git >= 1.6. > > That's a safe assumption, git 1.6.0 is more than 3 years old. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Mon Sep 19 14:13:28 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 19 Sep 2011 18:13:28 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 References: <4E76488E.1030106@uci.edu> Message-ID: On Mon, 19 Sep 2011 19:23:12 +0200, Ralf Gommers wrote: > On Sun, Sep 18, 2011 at 11:35 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: [clip] >> We need to put "scipy/io/matlab/tests/data/japanese_utf8.txt binary" in >> the .gitattributes file. >> > I can't find any documentation for the above. Are you sure that works? > The standard way is "scipy/io/matlab/tests/data/japanese_utf8.txt -crlf > -diff". Let's do that for all Cython-generated files as well. My gitattributes man page (Git 1.7.4.1) says that "binary" is a builtin macro attribute that expands to "-diff -text". The man page also says that "crlf" attribute is obsolete, and "-crlf" now expands to "-text". I guess this part of Git changed somewhat not so long time ago. Pauli From denis.laxalde at mcgill.ca Mon Sep 19 14:57:25 2011 From: denis.laxalde at mcgill.ca (Denis Laxalde) Date: Mon, 19 Sep 2011 14:57:25 -0400 Subject: [SciPy-Dev] improvements of optimize package In-Reply-To: <9A867ECE-C77B-4554-A2BA-F0A4431776A0@googlemail.com> References: <20110915150221.6614ea93@mail.gmail.com> <0B66EE13-3FB3-4E89-8CDE-A4DF43BCB28C@googlemail.com> <20110919091730.6b2d333e@mcgill.ca> <20110919102749.1a6c51d1@mcgill.ca> <9A867ECE-C77B-4554-A2BA-F0A4431776A0@googlemail.com> Message-ID: <20110919145725.065706a5@mcgill.ca> On Mon, 19 Sep 2011 17:41:47 +0200, Christoph Deil wrote: > I think 'infodict' is fine, although if the dict also contains x, it contains > everything and I would prefer 'result'. > > This is what Matt called it in his very nice lmfit package, > although in his case it's actually the Minimizer class itself which stores the fit results: > http://newville.github.com/lmfit-py/parameters.html#simple-example I've updated the proposal and set it to 'info' for now. But I can change this later if needed. > I have no opinion if (x,result) or only (result) should be returned, > the important point is that all solvers return a same-length tuple and > are thus one can easily learn new ones and try different ones out for a given fit problem. > > Looking over your document again, there's only one more name that is not > obvious to me and for which I had to look at the docstring: "ier". > Can we rename it to "status" or if that is too general, maybe "optimizer_status" or something > similar that is self-descriptive? I agree. "status" looks fine. > It would be amazing if I could do this with any optimizer: > > result = > if result.success: > print 'Found minimum %s at position %s' % (result.fun, result.x) > else: > print 'Fit did not converge. The problem was:' > print result.status, result.message > > Specifically I think it would be nice if "success" and "fun" (the function value on optimizer exit) > were added to the results dict for all optimizers, > since these are the most basic fit results besides "x" every user wants. > > The current problem with "fun" is that only some optimizers return it, > so for those I have to add a line or two of code to do it myself after the fit. > Since every optimizer has a "fun" value on exit I guess it doesn't hurt > to put it in the results dict for all of them without any performance hit? This should be doable. I will look at this when I'll start the actual implementation. 
> The current problem with ier = status is that each optimizer has different numbers for success, > e.g. for leastsq 1,2,3 or 4 is success, for anneal 0 and 1 is success, and looking at the fmin docu I can't > figure out how to tell if the fit succeeded at all. Yes, I noticed that and planned to select a convention (probably 0 for converged and >0 otherwise). > I guess the numbers returned are the ones from the underlying libraries and thus should not be changed, > so maybe adding a bool success flag to every optimizer for new users would be nice. Ok, why not. Basically `success = (status==0)`. -- Denis From ralf.gommers at googlemail.com Mon Sep 19 15:08:56 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 19 Sep 2011 21:08:56 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: <4E76488E.1030106@uci.edu> References: <4E76488E.1030106@uci.edu> Message-ID: On Sun, Sep 18, 2011 at 9:37 PM, Christoph Gohlke wrote: > > > On 9/18/2011 4:02 AM, Ralf Gommers wrote: > > Hi, > > > > The first beta release of scipy 0.10.0 was, well, beta quality, > > therefore I am pleased to announce the availability of the second 0.10.0 > > beta release. For this release over a 100 tickets and pull requests have > > been closed, and many new features have been added. Some of the > > highlights are: > > > > - support for Bento as a build system for scipy > > - generalized and shift-invert eigenvalue problems in sparse.linalg > > - addition of discrete-time linear systems in the signal module > > > > Sources and binaries can be found at > > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b2/, release > > notes are copied below. SciPy 0.10 is compatible with Python 2.4 - 3.2, > > and requires numpy 1.5.1 or higher. > > > > Please try this release and report problems on the mailing list. > > > > Cheers, > > Ralf > > > > > > Hello Ralf, > > looks good. Just some minor issues: > > 1) I was unable to build from scipy-0.10.0b2.zip: "error: > src\fblaswrap.f: No such file or directory". The git branch is OK. > And I even tested the tarball for a change:( Due to https://github.com/scipy/scipy/commit/fd20b082#diff-8 While checking this I saw that inv.v is missing in the tarball as well, but that file doesn't seem to be used. Does anyone know what it's doing there? > > 2) Can the file japanese_utf8.txt in scipy\io\matlab\tests\data somehow > be marked as binary on github? It is automatically converted to Windows > line endings such that test_mio.test_load fails. > > 3) Building with Intel C compiler fails on Windows due to a bug in > Cython 0.15, which has recently been fixed > < > https://github.com/cython/cython/commit/0443ad3d55f0a4762d4009bc606cb98ee4f4a1d6 > >. > The following files are affected: > > interpolate/interpnd.c > io/matlab/mio5_utils.c > io/matlab/mio_utils.c > io/matlab/streams.c > spatial/ckdtree.c > spatial/qhull.c > So almost all Cython files are affected, no matter which Cython version was used on them. Patching my version of Cython 0.15 with just that bugfix is probably preferable to using current Cython master. Does that sound OK? > 5) Two known failures in test_arpack.test_complex_nonsymmetric_modes and > test_arpack.test_symmetric_modes. > Ticket 1515 . > > Yeah, that's turning into a never ending story. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Mon Sep 19 15:26:09 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 19 Sep 2011 21:26:09 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: Message-ID: On Mon, Sep 19, 2011 at 1:31 PM, Pauli Virtanen wrote: > Hi, > > Sun, 18 Sep 2011 13:02:08 +0200, Ralf Gommers wrote: > [clip] > > The first beta release of scipy 0.10.0 was, well, beta quality, therefore > I > > am pleased to announce the availability of the second 0.10.0 beta > release. > > I suggest we add two new sections to the release notes. > > (i) We could add a brief summary list of backward incompatible > changes. This should make it easier for people to see what to change > in their code. It duplicates part of the info in the release notes, > but could be useful, as it allows to see everything relevant at a glance. > > Backward incompatible changes > ============================= > > * one-line summary list of backward incompatible changes > Wouldn't this be an exact copy of the "Removed features" section? I'm not aware of any other changes that are backwards incompatible. > * list also deprecations > > > (ii) We could steal some ideas from Sympy and add a list of contributors > to the release notes. Sure, we have a list in THANKS.txt, but who actually > reads that? > > Authors > ======= > > This release contains work by the following people (contributed at least > one patch to this release, names in alphabetical order): > > * Jeff Armstrong+ > * Matthew Brett > * Lars Buitinck+ > * David Cournapeau > * FI$H 2000+ > * Michael McNeil Forbes+ > * Matty G+ > * Christoph Gohlke > * Ralf Gommers > * Yaroslav Halchenko > * Charles Harris > * Thouis (Ray) Jones+ > * Chris Jordan-Squire+ > * Robert Kern > * Chris Lasher+ > * Wes McKinney+ > * Travis Oliphant > * Fabian Pedregosa > * Josef Perktold > * Thomas Robitaille+ > * Pim Schellart+ > * Anthony Scopatz+ > * Skipper Seabold+ > * Fazlul Shahriar+ > * David Simcha+ > * Scott Sinclair+ > * Andrey Smirnov+ > * Collin RM Stocks+ > * Martin Teichmann+ > * Jake Vanderplas+ > * Ga?l Varoquaux+ > * Pauli Virtanen > * Stefan van der Walt > * Warren Weckesser > * Mark Wiebe+ > > A total of 35 people contributed to this release. > People with a "+" by their names contributed a patch for the first time. > > > NOTE: I didn't check this list manually, so there might be some names > missing. It's produced by this script http://pav.iki.fi/tmp/git-authors > based on commit author names and log messages. > > This is a very good idea. Thanks for doing that, with a nice name mapping even. Shall we also expand "Pierre GM", or is Pierre really attached to those initials? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Sep 19 16:27:59 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 19 Sep 2011 20:27:59 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 References: Message-ID: On Mon, 19 Sep 2011 21:26:09 +0200, Ralf Gommers wrote: [clip] >> Backward incompatible changes >> ============================= >> >> * one-line summary list of backward incompatible changes > > Wouldn't this be an exact copy of the "Removed features" section? I'm > not aware of any other changes that are backwards incompatible. Probably. Maybe we should, however, rename "Removed features" to "Backward incompatible changes"? 
Pauli From ralf.gommers at googlemail.com Mon Sep 19 17:18:40 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 19 Sep 2011 23:18:40 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: <4E76488E.1030106@uci.edu> Message-ID: On Mon, Sep 19, 2011 at 9:08 PM, Ralf Gommers wrote: > > > On Sun, Sep 18, 2011 at 9:37 PM, Christoph Gohlke wrote: > >> >> >> On 9/18/2011 4:02 AM, Ralf Gommers wrote: >> > Hi, >> > >> > The first beta release of scipy 0.10.0 was, well, beta quality, >> > therefore I am pleased to announce the availability of the second 0.10.0 >> > beta release. For this release over a 100 tickets and pull requests have >> > been closed, and many new features have been added. Some of the >> > highlights are: >> > >> > - support for Bento as a build system for scipy >> > - generalized and shift-invert eigenvalue problems in sparse.linalg >> > - addition of discrete-time linear systems in the signal module >> > >> > Sources and binaries can be found at >> > https://sourceforge.net/projects/scipy/files/scipy/0.10.0b2/, release >> > notes are copied below. SciPy 0.10 is compatible with Python 2.4 - 3.2, >> > and requires numpy 1.5.1 or higher. >> > >> > Please try this release and report problems on the mailing list. >> > >> > Cheers, >> > Ralf >> > >> > >> >> Hello Ralf, >> >> looks good. Just some minor issues: >> >> 1) I was unable to build from scipy-0.10.0b2.zip: "error: >> src\fblaswrap.f: No such file or directory". The git branch is OK. >> > > And I even tested the tarball for a change:( > Due to https://github.com/scipy/scipy/commit/fd20b082#diff-8 > > This is fixed in master now, and I uploaded new source releases to sourceforge. Ralf > While checking this I saw that inv.v is missing in the tarball as well, but > that file doesn't seem to be used. Does anyone know what it's doing there? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yversley at gmail.com Tue Sep 20 12:06:03 2011 From: yversley at gmail.com (Yannick Versley) Date: Tue, 20 Sep 2011 18:06:03 +0200 Subject: [SciPy-Dev] sparse vectors / matrices / tensors Message-ID: I have been working quite a lot with sparse vectors and sparse matrices (basically as feature vectors in the context of machine learning), and have noticed that they do crop up in a lot of places (e.g. the CVXOPT library, in scikits, ...) and that people tend to either reinvent the wheel (i.e. implement a complete sparse matrix library) or pretend that no separate data structure is needed (i.e. always passing along pairs of coordinate and data arrays). 
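(To make that ad-hoc convention concrete, a hypothetical example of what typically gets passed around instead of a proper object:)

    import numpy as np

    # hypothetical "bare arrays" convention for a sparse vector of length 10
    # with nonzeros at positions 2 and 7
    coords = np.array([2, 7])
    data = np.array([0.5, 1.5])

    # every consumer then re-implements even trivial operations itself,
    # e.g. a dot product with a dense vector v
    v = np.arange(10, dtype=float)
    result = np.dot(data, v[coords])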
The most obvious response is to point to scipy.sparse, however I ended up reimplementing a sparse matrix library myself because - scipy.sparse is limited to matrices and has no vectors or order-k tensors - LIL and DOK are not really efficient or convenient data structures to create sparse matrices (my own library basically keeps a list of unordered COO items and compacts/sorts them when the matrix is actually used as a matrix) As a result, I built yet another sparse matrix library, and I was wondering whether (i) there's some generic enough data structure that could be a sparse counterpart to numpy's ndarray (i.e., good enough for 99% of the people, 99% of the time -- my current guess would be that the mutable COO tensor implementation I currently have, or something vaguely similar, might actually fit the bill), or (ii) whether it would make sense to have some conventions for standardized access to other people's sparse matrix packages, either by defining a minimum set of Python methods that would be useful or by defining some kind of low-level interface (similar to Python's buffer interface). The answers to (i) and (ii) do depend on what people do with sparse matrices, and I'd expect people who deal with PDEs to have different needs than people who use sparse matrices for co-occurrence graph, or as feature matrix in machine learning, etc. - so I'd like to hear from people who have different use cases than I do. -Yannick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsilter at gmail.com Tue Sep 20 13:25:03 2011 From: jsilter at gmail.com (Jacob Silterra) Date: Tue, 20 Sep 2011 13:25:03 -0400 Subject: [SciPy-Dev] Function for finding relative maxima of a 1d array Message-ID: Hello all, I posted an idea for a simple function for finding relative maxima of 1D data in the numpy mailing list; the general consensus seemed to be that it was useless but more advanced algorithm would be useful in scipy. Since this is an addition of new functionality, I thought (re: was told in the case of numpy) it'd be best to discuss it over the mailing list before issuing a pull request. Full code available at https://github.com/jsilter/scipy/tree/find_peaks. The method is named find_peaks, located in optimize.py in the optimize module. Copied from the docstring: The algorithm is as follows: 1. Perform a continuous wavelet transform on `vector`, for the supplied `widths`. This is a convolution of `vector` with `wavelet(width)` for each width in `widths`. See scipy.signals.cwt (pending). 2. Identify "ridge lines" in the cwt matrix. These are relative maxima at each row, connected across adjacent rows. See identify_ridge_lines (in optimize.py) 3. Filter the ridge_lines using filter_ridge_lines (in optimize.py) by length and signal to noise ratio. This algorithm requires several parameters be specified, I chose what I thought were reasonable values to be defaults. This method was designed for use in mass-spectrogram analysis, which typically have large,sharp peaks on top of a flat and noisy baseline. With proper parameter selection, I believe it would function well for a wide class of peaks. The default wavelet for step 1 is a ricker wavelet, which is added in scipy.signal.wavelets. This is implemented in pure python, which is appropriate for steps 1 and 3. Step 2 involves a lot of looping, that may be better in C, but it's Python at the moment. I used my best judgement on placing functions in appropriate modules, which led to some circular dependencies. 
As a temporary fix, the imports are within each function rather than at the top of each module. Once it's decided where each function should actually live that can be resolved. Thoughts? -Jacob PS There's a good comparison different algorithms available here . PPS I have Eclipse configured to use PEP8 standards for spacing after commas, assignment operators, and so forth. This ended up changing a lot of code which may make this specific function hard to review (sorry). I put the new functions close to the bottom of each file. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Sep 20 13:26:31 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 20 Sep 2011 13:26:31 -0400 Subject: [SciPy-Dev] sparse vectors / matrices / tensors In-Reply-To: References: Message-ID: Yannick, On Tue, Sep 20, 2011 at 12:06 PM, Yannick Versley wrote: > I have been working quite a lot with sparse vectors and sparse matrices > (basically > as feature vectors in the context of machine learning), and have noticed > that they > do crop up in a lot of places (e.g. the CVXOPT library, in scikits, ...) and > that people > tend to either reinvent the wheel (i.e. implement a complete sparse matrix > library) or > pretend that no separate data structure is needed (i.e. always passing along > pairs of > coordinate and data arrays). > The most obvious response is to point to scipy.sparse, however I ended up > reimplementing a sparse matrix library myself because > - scipy.sparse is limited to matrices and has no vectors or order-k tensors > - LIL and DOK are not really efficient or convenient data structures to > create > ? sparse matrices (my own library basically keeps a list of unordered COO > items > ? and compacts/sorts them when the matrix is actually used as a matrix) I think those statementds are pretty uncontroversial. Implementing a sparse array (not limited to 1/2 dimensions) is a non trivial endeavour because the efficient 2d representations (csr/csc) do not generalize well. I started looking into the literature about those issues, and found a few interesting data structures, such as the UB-tree (http://www.scholarpedia.org/article/B-tree_and_UB-tree). I am actually currently looking into implementing this for a basic sparse tensor to get an idea of its efficiency (memory and speed-wise). cheers, David From ralf.gommers at googlemail.com Tue Sep 20 16:13:27 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 20 Sep 2011 22:13:27 +0200 Subject: [SciPy-Dev] Function for finding relative maxima of a 1d array In-Reply-To: References: Message-ID: Hi Jacob, On Tue, Sep 20, 2011 at 7:25 PM, Jacob Silterra wrote: > Hello all, > > I posted an idea for a simple function for finding relative maxima of 1D > data in the numpy mailing list; the general consensus seemed to be that it > was useless but more advanced algorithm would be useful in scipy. Since > this is an addition of new functionality, I thought (re: was told in the > case of numpy) it'd be best to discuss it over the mailing list before > issuing a pull request. Full code available at > https://github.com/jsilter/scipy/tree/find_peaks. The method is named > find_peaks, located in optimize.py in the optimize module. > That looks promising. > > Copied from the docstring: > The algorithm is as follows: > 1. Perform a continuous wavelet transform on `vector`, for the supplied > `widths`. This is a convolution of `vector` with `wavelet(width)` for > each width in `widths`. 
See scipy.signals.cwt (pending). > There is also a CWT implementation at http://projects.scipy.org/scipy/ticket/922 (the first code review link still works). It didn't get merged but still contains examples and references that may be useful. Also the author may still be around and willing to review your code for example. > 2. Identify "ridge lines" in the cwt matrix. These are relative maxima > at each row, connected across adjacent rows. See identify_ridge_lines > (in optimize.py) > 3. Filter the ridge_lines using filter_ridge_lines (in optimize.py) by > length and signal to noise ratio. > > This algorithm requires several parameters be specified, I chose what I > thought were reasonable values to be defaults. This method was designed for > use in mass-spectrogram analysis, which typically have large,sharp peaks on > top of a flat and noisy baseline. With proper parameter selection, I believe > it would function well for a wide class of peaks. The default wavelet for > step 1 is a ricker wavelet, which is added in scipy.signal.wavelets. > > This is implemented in pure python, which is appropriate for steps 1 and 3. > Step 2 involves a lot of looping, that may be better in C, but it's Python > at the moment. > Optimizing can always be done at a later stage, no need to worry about that now. > > I used my best judgement on placing functions in appropriate modules, which > led to some circular dependencies. As a temporary fix, the imports are > within each function rather than at the top of each module. Once it's > decided where each function should actually live that can be resolved. > I think ricker landed in the right place, cwt could probably also live in wavelets.py. optimize.py is not the best place for peak finding algorithms, it's already a very large file. Why not create a _peak_finding.py file under signal? That should get rid of those circular imports. > Thoughts? > > -Jacob > > PS There's a good comparison different algorithms available here > . > > PPS I have Eclipse configured to use PEP8 standards for spacing after > commas, assignment operators, and so forth. This ended up changing a lot of > code which may make this specific function hard to review (sorry). I put the > new functions close to the bottom of each file. > > Code cleanups are very welcome, your changes mostly look good to me. It would be helpful to have them as a separate commit though. The other additions can also be split up more to make them more easy to review. For example one commit for ricker and cwt with tests, one for argrelmin/max and one for find_peaks and the ridge_lines functions. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From yversley at gmail.com Tue Sep 20 18:12:43 2011 From: yversley at gmail.com (Yannick Versley) Date: Wed, 21 Sep 2011 00:12:43 +0200 Subject: [SciPy-Dev] sparse vectors / matrices / tensors Message-ID: David, I think the most important tradeoffs are about - writable vs. read-only; where I assume that writes are batched (so you can put the contents of writes somewhere and transition it to a compact read-only form at the next read access) - only access in lexical order vs. multidimensional access; my own implementation assumes always access in lexical order (or enumeration in that order), but other people (Dan Goodman) have written about adding auxiliary indices to facilitate column-wise access. 
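As a rough illustration of the batched-write idea above (a hypothetical sketch, not the actual library code; the class and method names are made up):

    import numpy as np

    class COOBuffer(object):
        # hypothetical sketch: unordered COO writes, compacted lazily on read
        def __init__(self, shape):
            self.shape = shape
            self._coords = []
            self._data = []
            self._dirty = True

        def append(self, index, value):
            # cheap unordered write
            self._coords.append(index)
            self._data.append(value)
            self._dirty = True

        def compact(self):
            # sort lexically and merge duplicate coordinates (here: sum them)
            if not self._dirty:
                return
            coords = np.array(self._coords)
            data = np.array(self._data, dtype=float)
            order = np.lexsort(coords.T[::-1])      # row-major (lexical) order
            coords, data = coords[order], data[order]
            keep = np.ones(len(data), dtype=bool)
            keep[1:] = (np.diff(coords, axis=0) != 0).any(axis=1)
            groups = np.cumsum(keep) - 1
            self.data = np.bincount(groups, weights=data)
            self.coords = coords[keep]
            self._dirty = False

    m = COOBuffer((4, 4))
    m.append((1, 2), 3.0)
    m.append((0, 0), 1.0)
    m.append((1, 2), 2.0)           # duplicate entry, merged on compact()
    m.compact()
    print m.coords, m.data           # [[0 0] [1 2]] [ 1.  5.]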
My guess is that there will be something that works well for a commonly encountered subset of those requirements, but nothing that fits in a one-size-fits-all fashion if one also considers multidimensional access. (Keeping both the matrix and its transpose enough is sometimes ok, but fails when you want to modify the values). > - scipy.sparse is limited to matrices and has no vectors or order-k tensors > - LIL and DOK are not really efficient or convenient data structures to > create > sparse matrices (my own library basically keeps a list of unordered COO > items > and compacts/sorts them when the matrix is actually used as a matrix) > I think those statementds are pretty uncontroversial. Implementing a > sparse array (not limited to 1/2 dimensions) is a non trivial > endeavour because the efficient 2d representations (csr/csc) do not > generalize well. Basically, you have dense access where you compute the offset of the data from the address itself, and then you have COO where you store (address,data) tuples (either separately or together). CSR uses a mixture where you use a computed offset for a prefix of the address and then store (address,data) tuples for a suffix of the address. You could generalize this by always using a computed offset for the first dimension of the address and then use (address,data) for the rest (which gives you diminishing returns in terms of space saved); or you could use some multilevel scheme where you first use a computed offset and then look up N-2 (address,offset) pairs (with the range end being specified by the offset in the following tuple) followed by one last group of (address,data) tuples. >> I started looking into the literature about those issues, and found a >> few interesting data structures, such as the UB-tree >> (http://www.scholarpedia.org/article/B-tree_and_UB-tree). I am >> actually currently looking into implementing this for a basic sparse >> tensor to get an idea of its efficiency (memory and speed-wise). Neat idea. Space-filling curves should give a decent tradeoff for multidimensional access. (Using B-trees vs batching up writes is yet another case of "it depends" - which would suggest a common interface rather than a single common data structure for all sparse matrices). -Yannick -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Sep 20 18:23:44 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 20 Sep 2011 18:23:44 -0400 Subject: [SciPy-Dev] sparse vectors / matrices / tensors In-Reply-To: References: Message-ID: On Tue, Sep 20, 2011 at 6:12 PM, Yannick Versley wrote: > David, > > I think the most important tradeoffs are about > > - writable vs. read-only; where I assume that writes are batched (so you can > put > the contents of writes somewhere and transition it to a compact read-only > form > at the next read access) > > - only access in lexical order vs. multidimensional access; my own > implementation > assumes always access in lexical order (or enumeration in that order), but > other people (Dan Goodman) have written about adding auxiliary indices to > facilitate > column-wise access. > > My guess is that there will be something that works well for a commonly > encountered > subset of those requirements, but nothing that fits in a one-size-fits-all > fashion > if one also considers multidimensional access. (Keeping both the matrix and > its > transpose enough is sometimes ok, but fails when you want to modify the > values).
>> >> > - scipy.sparse is limited to matrices and has no vectors or order-k >> > tensors >> > - LIL and DOK are not really efficient or convenient data structures to >> > create >> > ? sparse matrices (my own library basically keeps a list of unordered >> > COO >> > items >> > ? and compacts/sorts them when the matrix is actually used as a matrix) >> >> I think those statementds are pretty uncontroversial. Implementing a >> sparse array (not limited to 1/2 dimensions) is a non trivial >> endeavour because the efficient 2d representations (csr/csc) do not >> generalize well. > > Basically, you have dense access where you compute the offset > of the data from the address itself, and then you have COO where > you store (address,data) tuples (either separately or together). > CSR uses a mixture where you use a computed offset for a prefix of > the address and then store (address,data) tuples for a suffix of the > address. You could generalize this by always use a computed offset > for the first dimension of the address and then use (address,data) > for the rest (which gives you diminishing returns in terms of space > saved); or you could use some multilevel scheme where you first use > a computed offset and then look up N-2 (address,offset) pairs (with > the range end being specified by the offset in the following tuple) > followed by one last group of (address,data) tuples. >> >> I started looking into the literature about those issues, and found a >> few interesting data structures, such as the UB-tree >> (http://www.scholarpedia.org/article/B-tree_and_UB-tree). I am >> >> actually currently looking into implementing this for a basic sparse >> tensor to get an idea of its efficiency (memory and speed-wise). > > Neat idea. Space-filling curves should give a decent tradeoff for > multidimensional access. (Using B-trees vs batching up writes is > yet another case of "it depends" - which would suggest a common > interface rather than a single common data structure for all sparse > matrices). The problem of just defining a common interface is that implementations will likely only implement some disjoint subsets of each other (i.e. the current situation), unless the interface is pretty minimalistic. I don't see such an interface to be very useful while being data representation independent. The thing I really like with the UB-tree is that the storage layer is dimension independent, and that it supports all the basic operations we need (fast insertion, deletion and random access, and memory-efficient iteration). cheers, David From jsilter at gmail.com Tue Sep 20 23:06:43 2011 From: jsilter at gmail.com (Jacob Silterra) Date: Tue, 20 Sep 2011 23:06:43 -0400 Subject: [SciPy-Dev] Function for finding relative maxima of a 1d array Message-ID: >There is also a CWT implementation at http://projects.scipy.org/scipy/ticket/922 (the first code review link still works). It didn't get merged but still contains examples and references that may be useful. Also the author may still be around and willing to review your code for example. It seems like a good data structure for handling wavelets in general, which is outside the scope of what I was going for here. The math is straightforward for the cwt. This author implements convolution via fourier transform, I opted to leave the implementation details to scipy.signal.convolve and not reinvent that particular wheel. It wouldn't be too hard to change, I'm just not sure there's an advantage. 
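For readers following along, a minimal numpy-only sketch of that convolution-based approach (not the code under review; the window length and test signal here are arbitrary choices for illustration):

    import numpy as np

    def ricker(points, a):
        # Ricker ("Mexican hat") wavelet of width a, sampled at `points` points
        t = np.arange(points) - (points - 1.0) / 2
        amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
        return amp * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

    def cwt(vector, widths):
        # one row of convolution output per width; np.convolve stands in
        # for scipy.signal.convolve
        out = np.empty((len(widths), len(vector)))
        for i, w in enumerate(widths):
            wavelet = ricker(min(10 * w, len(vector)), w)
            out[i] = np.convolve(vector, wavelet, mode='same')
        return out

    x = np.linspace(0, 10, 200)
    sig = np.exp(-(x - 5) ** 2)        # a single broad peak
    mat = cwt(sig, np.arange(1, 11))
    print mat.shape                     # (10, 200)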
>I think ricker landed in the right place, cwt could probably also live in wavelets.py. optimize.py is not the best place for peak finding algorithms, it's already a very large file. Why not create a _peak_finding.py file under signal? That should get rid of those circular imports. Done and done. >Code cleanups are very welcome, your changes mostly look good to me. It would be helpful to have them as a separate commit though Indeed, I didn't notice the spacing issues until I was writing that last email. I had originally put all the functions in a single commit because I was thinking of them as a single feature, but in retrospect that doesn't make sense. I've redone the history, same branch name, but the commits should be easier to review: https://github.com/jsilter/scipy/commits/find_peaks. All of those cleanups were in optimize.py, which I didn't touch this time around, so they aren't in this branch. I can do that in a separate branch + pull request. -Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Sep 20 23:30:27 2011 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 20 Sep 2011 20:30:27 -0700 Subject: [SciPy-Dev] sparse vectors / matrices / tensors In-Reply-To: References: Message-ID: On Tue, Sep 20, 2011 at 10:26 AM, David Cournapeau wrote: > I think those statementds are pretty uncontroversial. Implementing a > sparse array (not limited to 1/2 dimensions) is a non trivial > endeavour because the efficient 2d representations (csr/csc) do not > generalize well. > > I started looking into the literature about those issues, and found a > few interesting data structures, such as the UB-tree > (http://www.scholarpedia.org/article/B-tree_and_UB-tree). I am > actually currently looking into implementing this for a basic sparse > tensor to get an idea of its efficiency (memory and speed- The Matlab Tensor Toolbox uses a COO format for sparse tensors (arrays): http://csmr.ca.sandia.gov/~tgkolda/TensorToolbox/ They explain their reasoning here: http://csmr.ca.sandia.gov/~tgkolda/pubs/bibtgkfiles/SIAM-67648.pdf I wouldn't be surprised if UB-trees or similar were better, but the COO format is extremely simple to reason about and seems reasonably efficient. The memory overhead versus CSR/CSC is not much, and if you want to switch from column-wise to row-wise access, you just do an in-place sort on one of the coordinate columns. Sorting is a fast operation, and even more so if you have an algorithm that is (a) stable (so re-sorting along one dimension leaves some structure in the previous dimension), and (b) is able to exploit such structure to make later sorts faster. Python's timsort is just such an algorithm. A similar strategy works for applying element-wise binary operations to two sparse COO arrays -- you treat the two arrays as being a single big unsorted array, sort it to bring matching non-zero elements together, and then use your binary operation to combine duplicate entries. As for general interfaces, I think it'd make more sense to talk about that once we had at least one example of such a library released publically... trying to generalize over zero examples is hard :-). -- Nathaniel From D.J.Baker at soton.ac.uk Wed Sep 21 06:50:53 2011 From: D.J.Baker at soton.ac.uk (Baker D.J.) Date: Wed, 21 Sep 2011 11:50:53 +0100 Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault Message-ID: Hello, I'm building the latest numpy and scipy for one of the users here. 
The installation of numpy 1.6.1 looks good, however scipy 0.9.0 is not so great. I find that scipy.test(verbose=10) fails with a segmentation fault. Here's an extract from the typescript: [root at blue33 ~]# python Python 2.6.5 (r265:79063, Apr 16 2010, 20:05:16) [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.insert(0,"/local/software/rh53/scipy/0.9.0/gcc/lib/python2.6/site-packages/") >>> import numpy >>> import scipy >>> scipy.test(verbose=10) Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /local/software/rh53/python/2.6.5/gcc/lib/python2.6/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /local/software/rh53/scipy/0.9.0/gcc/lib/python2.6/site-packages/scipy Python version 2.6.5 (r265:79063, Apr 16 2010, 20:05:16) [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] nose version 0.11.4 etc, etc, etc test_nonlin.TestJacobianDotSolve.test_broyden1 ... Segmentation fault This is an "out of the box" installation. I'm working on a RHELS 5.3 machine, I use the GNU compilers ((GCC) 4.1.2) and the blas/lapack routines provided by the RHELS rpms. I could potentially switch to the Intel compilers and MKL, however I have many problems compiling numpy/scipy with the Intel compilers in the past. Is this a known problem and/or can anyone please advise me how to work around this problem? I would appreciate your advice, please. Best regards - David. Dr David J Baker iSolutions University of Southampton Highfield Southampton SO17 1BJ Email: D.J.Baker at soton.ac.uk Tel: +44 23 80598352 Fax: +44 23 80593131 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Sep 21 07:17:53 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 21 Sep 2011 11:17:53 +0000 (UTC) Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault References: Message-ID: Wed, 21 Sep 2011 11:50:53 +0100, Baker D.J. wrote: [clip] > test_nonlin.TestJacobianDotSolve.test_broyden1 ... Segmentation fault [clip] That test contains only pure-Python code, with some calls to BLAS. Nobody has seen a crash at that point before, as far as I know. I would try linking against the reference blas from netlib.org/blas, to check whether the BLAS libraries provided by RHEL are at fault. -- Pauli Virtanen From D.J.Baker at soton.ac.uk Wed Sep 21 07:34:40 2011 From: D.J.Baker at soton.ac.uk (Baker D.J.) Date: Wed, 21 Sep 2011 12:34:40 +0100 Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault In-Reply-To: References: Message-ID: Hello Paul, Thanks for the reply. I agree that I should take a look the BLAS library. I've only found one reference to this type of failure on the web, and it regards the use of Enthought Python.... http://comments.gmane.org/gmane.comp.python.epd.user/309 This posting discusses a segmentation fault in the same routine, however with reference to the use of the MKL library. Something about the alignment of Fortran arrays being different in MKL. Best regards -- David. -----Original Message----- From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On Behalf Of Pauli Virtanen Sent: Wednesday, September 21, 2011 12:18 PM To: scipy-dev at scipy.org Subject: Re: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault Wed, 21 Sep 2011 11:50:53 +0100, Baker D.J. wrote: [clip] > test_nonlin.TestJacobianDotSolve.test_broyden1 ... 
Segmentation fault [clip] That test contains only pure-Python code, with some calls to BLAS. Nobody has seen a crash at that point before, as far as I know. I would try linking against the reference blas from netlib.org/blas, to check whether the BLAS libraries provided by RHEL are at fault. -- Pauli Virtanen _______________________________________________ SciPy-Dev mailing list SciPy-Dev at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-dev From scipy at samueljohn.de Wed Sep 21 08:32:47 2011 From: scipy at samueljohn.de (Samuel John) Date: Wed, 21 Sep 2011 14:32:47 +0200 Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault In-Reply-To: References: Message-ID: <65143C52-90A2-4F45-A90D-9AC0618AFFB9@samueljohn.de> Hi, I have no solution to offer :-( But perhaps a related issue on gentoo. I have pasted the result of running scipy.test() with gdb output. We found superlu or libatlas to be involved but could not figure out more. I opend a scipy ticket (see below) but I am not sure if it is scipy's fault after all. On 15.09.2011, at 03:36, Xiaoye S. Li wrote: >> It's hard to say whether it's in libsuperlu or libatlas, or in the scipy interface to these libraries. You may have to print the matrix, and test superlu and atlas in isolation. > Sherry Li > > On Mon, Sep 12, 2011 at 6:44 AM, Samuel John wrote: > Hi there, > > we have got a segmentation fault in scipy, which seems to occur in libsuperlu.4.so > http://projects.scipy.org/scipy/ticket/1513 > > I am unsure, if this is really the fault of scipy or libsuperlu or something else. > For scipy 0.11.dev the segfault happens to occur in libatlas.so.0 and not in libsuperlu. > > Perhaps you do have an idea. > (The details are right now in the scipy ticket) > > Thanks in advance! > > Samuel > - - - - - - - - - - - - > > > > > sjohn at macabeo:~ $ gdb --args python -c 'import scipy; scipy.test()' > GNU gdb (Gentoo 7.2 p1) 7.2 > Copyright (C) 2010 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-pc-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /usr/bin/python...(no debugging symbols found)...done. > (gdb) run > Starting program: /usr/bin/python -c import\ scipy\;\ scipy.test\(\) > process 5606 is executing new program: /usr/bin/python2.7 > [Thread debugging using libthread_db enabled] > Running unit tests for scipy > NumPy version 1.6.1 > NumPy is installed in /usr/lib64/python2.7/site-packages/numpy > SciPy version 0.11.0.dev-1f6595e > SciPy is installed in /homes/sjohn/.local/lib64/python2.7/site-packages/scipy > Python version 2.7.2 (default, Sep 7 2011, 17:08:51) [GCC 4.5.3] > nose version 1.0.0 > ............................................................................................................................................................................................................................K............................................................................................................/homes/sjohn/.local/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning: > The coefficients of the spline returned have been computed as the > minimal norm least-squares solution of a (numerically) rank deficient > system (deficiency=7). If deficiency is large, the results may be > inaccurate. 
Deficiency may strongly depend on the value of eps. > warnings.warn(message) > ....../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:605: UserWarning: > The required storage space exceeds the available storage space: nxest > or nyest too small, or s too small. > The weighted least-squares spline corresponds to the current set of > knots. > warnings.warn(message) > ........................K..K....../usr/lib64/python2.7/site-packages/numpy/core/numeric.py:1920: RuntimeWarning: invalid value encountered in absolute > return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) > ............................................................................................................................................................................................................................................................................................................................................................................................................................................/homes/sjohn/.local/lib64/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes > warnings.warn("Unfamiliar format bytes", WavFileWarning) > /homes/sjohn/.local/lib64/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood > warnings.warn("chunk not understood", WavFileWarning) > ...............................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS......................................................................................................................................................FF[New Thread 0x7fffe1fc9700 (LWP 5622)] > [New Thread 0x7fffe17c8700 (LWP 5623)] > [New Thread 0x7fffe0fc7700 (LWP 5625)] > [New Thread 0x7fffe07c6700 (LWP 5624)] > [Thread 0x7fffe0fc7700 (LWP 5625) exited] > [Thread 0x7fffe07c6700 (LWP 5624) exited] > [Thread 0x7fffe17c8700 (LWP 5623) exited] > [Thread 0x7fffe1fc9700 (LWP 5622) exited] > [New Thread 0x7fffe1fc9700 (LWP 5626)] > [New Thread 0x7fffe17c8700 (LWP 5627)] > [Thread 0x7fffe17c8700 (LWP 5627) exited] > [Thread 0x7fffe1fc9700 (LWP 5626) exited] > [New Thread 0x7fffe1fc9700 (LWP 5628)] > [New Thread 0x7fffe17c8700 (LWP 5629)] > [New Thread 0x7fffe07c6700 (LWP 5630)] > [New Thread 0x7fffe0fc7700 (LWP 5631)] > [Thread 0x7fffe07c6700 (LWP 5630) exited] > [Thread 0x7fffe0fc7700 (LWP 5631) exited] > [Thread 0x7fffe17c8700 (LWP 5629) exited] > [Thread 0x7fffe1fc9700 (LWP 5628) exited] > [New Thread 0x7fffe1fc9700 (LWP 5632)] > [New Thread 0x7fffe17c8700 (LWP 5633)] > [Thread 0x7fffe17c8700 (LWP 5633) exited] > [Thread 0x7fffe1fc9700 (LWP 5632) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5634)] > [New Thread 0x7fffe17c8700 (LWP 5635)] > [New Thread 0x7fffe0fc7700 (LWP 5636)] > [Thread 0x7fffe17c8700 (LWP 5635) exited] > [Thread 0x7fffe0fc7700 (LWP 5636) exited] > [Thread 0x7fffe1fc9700 (LWP 5634) exited] > [New Thread 0x7fffe1fc9700 (LWP 5637)] > [New Thread 0x7fffe17c8700 (LWP 5638)] > [New Thread 0x7fffe0fc7700 (LWP 5639)] > [Thread 0x7fffe17c8700 (LWP 5638) exited] > [Thread 0x7fffe0fc7700 (LWP 5639) exited] > [Thread 0x7fffe1fc9700 (LWP 5637) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5640)] > [New Thread 0x7fffe17c8700 (LWP 5641)] > [New Thread 0x7fffe0fc7700 (LWP 5642)] > [Thread 0x7fffe17c8700 (LWP 5641) exited] > [Thread 0x7fffe0fc7700 (LWP 5642) exited] > [Thread 
0x7fffe1fc9700 (LWP 5640) exited] > F[New Thread 0x7fffe1fc9700 (LWP 5643)] > [New Thread 0x7fffe17c8700 (LWP 5644)] > [New Thread 0x7fffe07c6700 (LWP 5646)] > [New Thread 0x7fffe0fc7700 (LWP 5645)] > [Thread 0x7fffe0fc7700 (LWP 5645) exited] > [Thread 0x7fffe07c6700 (LWP 5646) exited] > [Thread 0x7fffe17c8700 (LWP 5644) exited] > [Thread 0x7fffe1fc9700 (LWP 5643) exited] > [New Thread 0x7fffe1fc9700 (LWP 5647)] > [New Thread 0x7fffe17c8700 (LWP 5648)] > [Thread 0x7fffe17c8700 (LWP 5648) exited] > [Thread 0x7fffe1fc9700 (LWP 5647) exited] > F[New Thread 0x7fffe1fc9700 (LWP 5649)] > [New Thread 0x7fffe17c8700 (LWP 5650)] > [Thread 0x7fffe17c8700 (LWP 5650) exited] > [Thread 0x7fffe1fc9700 (LWP 5649) exited] > [New Thread 0x7fffe1fc9700 (LWP 5651)] > [New Thread 0x7fffe17c8700 (LWP 5652)] > [Thread 0x7fffe17c8700 (LWP 5652) exited] > [Thread 0x7fffe1fc9700 (LWP 5651) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5653)] > [New Thread 0x7fffe17c8700 (LWP 5654)] > [Thread 0x7fffe17c8700 (LWP 5654) exited] > [Thread 0x7fffe1fc9700 (LWP 5653) exited] > F..F..FFF..FF.F...[New Thread 0x7fffe1fc9700 (LWP 5655)] > [New Thread 0x7fffe17c8700 (LWP 5656)] > [Thread 0x7fffe17c8700 (LWP 5656) exited] > [Thread 0x7fffe1fc9700 (LWP 5655) exited] > [New Thread 0x7fffe1fc9700 (LWP 5657)] > [New Thread 0x7fffe17c8700 (LWP 5658)] > [Thread 0x7fffe17c8700 (LWP 5658) exited] > [Thread 0x7fffe1fc9700 (LWP 5657) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5659)] > [New Thread 0x7fffe17c8700 (LWP 5660)] > [New Thread 0x7fffe07c6700 (LWP 5661)] > [Thread 0x7fffe17c8700 (LWP 5660) exited] > [Thread 0x7fffe07c6700 (LWP 5661) exited] > [Thread 0x7fffe1fc9700 (LWP 5659) exited] > [New Thread 0x7fffe1fc9700 (LWP 5662)] > [New Thread 0x7fffe17c8700 (LWP 5663)] > [New Thread 0x7fffe07c6700 (LWP 5664)] > [Thread 0x7fffe17c8700 (LWP 5663) exited] > [Thread 0x7fffe07c6700 (LWP 5664) exited] > [Thread 0x7fffe1fc9700 (LWP 5662) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5665)] > [New Thread 0x7fffe17c8700 (LWP 5666)] > [New Thread 0x7fffe0fc7700 (LWP 5668)] > [New Thread 0x7fffe07c6700 (LWP 5667)] > [Thread 0x7fffe0fc7700 (LWP 5668) exited] > [Thread 0x7fffe07c6700 (LWP 5667) exited] > [Thread 0x7fffe17c8700 (LWP 5666) exited] > [Thread 0x7fffe1fc9700 (LWP 5665) exited] > [New Thread 0x7fffe1fc9700 (LWP 5669)] > [New Thread 0x7fffe17c8700 (LWP 5670)] > [Thread 0x7fffe17c8700 (LWP 5670) exited] > [Thread 0x7fffe1fc9700 (LWP 5669) exited] > [New Thread 0x7fffe1fc9700 (LWP 5671)] > [New Thread 0x7fffe17c8700 (LWP 5672)] > [New Thread 0x7fffe0fc7700 (LWP 5673)] > [New Thread 0x7fffe07c6700 (LWP 5674)] > [Thread 0x7fffe07c6700 (LWP 5674) exited] > [Thread 0x7fffe0fc7700 (LWP 5673) exited] > [Thread 0x7fffe17c8700 (LWP 5672) exited] > [Thread 0x7fffe1fc9700 (LWP 5671) exited] > [New Thread 0x7fffe1fc9700 (LWP 5675)] > [New Thread 0x7fffe17c8700 (LWP 5676)] > [Thread 0x7fffe17c8700 (LWP 5676) exited] > [Thread 0x7fffe1fc9700 (LWP 5675) exited] > .[New Thread 0x7fffe1fc9700 (LWP 5677)] > [New Thread 0x7fffe17c8700 (LWP 5678)] > [Thread 0x7fffe17c8700 (LWP 5678) exited] > [Thread 0x7fffe1fc9700 (LWP 5677) exited] > [New Thread 0x7fffe1fc9700 (LWP 5679)] > [New Thread 0x7fffe17c8700 (LWP 5680)] > [Thread 0x7fffe17c8700 (LWP 5680) exited] > [Thread 0x7fffe1fc9700 (LWP 5679) exited] > 
..........................................................................................................................................................K................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................/homes/sjohn/.local/lib64/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:259: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead > ' install scikits.umfpack instead', DeprecationWarning ) > ../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:75: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead > ' install scikits.umfpack instead', DeprecationWarning ) > ..........................................................................................................................................................F............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. > Program received signal SIGSEGV, Segmentation fault. > 0x00007ffff591dc60 in ATL_dgerk_L2_restrict () from /usr/lib64/libatlas.so.0 > (gdb) > From martin.teichmann at mbi-berlin.de Wed Sep 21 09:39:28 2011 From: martin.teichmann at mbi-berlin.de (Martin Teichmann) Date: Wed, 21 Sep 2011 15:39:28 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: <4E76488E.1030106@uci.edu> Message-ID: Hi all, There was a discussion at https://github.com/scipy/scipy/pull/76 to deprecate qr_old, and while I know that 0.10 is already in the second beta, maybe it's still a good idea to do that, I think it's very unlikely that we introduce a bug through deprecation, but it should be announced to users ASAP, so that they can adopt their code. (This also will allow for feedback if there is still anyone out there using qr_old...) 
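(For what it's worth, the shim itself would be tiny; a hypothetical sketch using a generic helper, with the replacement name shown only as an example:)

    import functools
    import warnings

    def deprecated(func, alternative):
        # generic wrapper that warns on every call to the old function
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated, use %s instead"
                          % (func.__name__, alternative),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper

    # e.g. (hypothetical): qr_old = deprecated(qr_old, "scipy.linalg.qr")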
Greetings Martin From bsouthey at gmail.com Wed Sep 21 10:15:25 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 21 Sep 2011 09:15:25 -0500 Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault In-Reply-To: <65143C52-90A2-4F45-A90D-9AC0618AFFB9@samueljohn.de> References: <65143C52-90A2-4F45-A90D-9AC0618AFFB9@samueljohn.de> Message-ID: <4E79F17D.4000803@gmail.com> On 09/21/2011 07:32 AM, Samuel John wrote: > Hi, > > I have no solution to offer :-( > > But perhaps a related issue on gentoo. > I have pasted the result of running scipy.test() with gdb output. > We found superlu or libatlas to be involved but could not figure out more. > I opend a scipy ticket (see below) but I am not sure if it is scipy's fault after all. > > > On 15.09.2011, at 03:36, Xiaoye S. Li wrote: >>> It's hard to say whether it's in libsuperlu or libatlas, or in the scipy interface to these libraries. You may have to print the matrix, and test superlu and atlas in isolation. >> Sherry Li >> > >> On Mon, Sep 12, 2011 at 6:44 AM, Samuel John wrote: >> Hi there, >> >> we have got a segmentation fault in scipy, which seems to occur in libsuperlu.4.so >> http://projects.scipy.org/scipy/ticket/1513 >> >> I am unsure, if this is really the fault of scipy or libsuperlu or something else. >> For scipy 0.11.dev the segfault happens to occur in libatlas.so.0 and not in libsuperlu. >> >> Perhaps you do have an idea. >> (The details are right now in the scipy ticket) >> >> Thanks in advance! >> >> Samuel >> - - - - - - - - - - - - >> >> >> >> >> sjohn at macabeo:~ $ gdb --args python -c 'import scipy; scipy.test()' >> GNU gdb (Gentoo 7.2 p1) 7.2 >> Copyright (C) 2010 Free Software Foundation, Inc. >> License GPLv3+: GNU GPL version 3 or later >> This is free software: you are free to change and redistribute it. >> There is NO WARRANTY, to the extent permitted by law. Type "show copying" >> and "show warranty" for details. >> This GDB was configured as "x86_64-pc-linux-gnu". >> For bug reporting instructions, please see: >> ... >> Reading symbols from /usr/bin/python...(no debugging symbols found)...done. >> (gdb) run >> Starting program: /usr/bin/python -c import\ scipy\;\ scipy.test\(\) >> process 5606 is executing new program: /usr/bin/python2.7 >> [Thread debugging using libthread_db enabled] >> Running unit tests for scipy >> NumPy version 1.6.1 >> NumPy is installed in /usr/lib64/python2.7/site-packages/numpy >> SciPy version 0.11.0.dev-1f6595e >> SciPy is installed in /homes/sjohn/.local/lib64/python2.7/site-packages/scipy >> Python version 2.7.2 (default, Sep 7 2011, 17:08:51) [GCC 4.5.3] >> nose version 1.0.0 >> ............................................................................................................................................................................................................................K............................................................................................................/homes/sjohn/.local/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning: >> The coefficients of the spline returned have been computed as the >> minimal norm least-squares solution of a (numerically) rank deficient >> system (deficiency=7). If deficiency is large, the results may be >> inaccurate. Deficiency may strongly depend on the value of eps. 
>> warnings.warn(message) >> ....../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:605: UserWarning: >> The required storage space exceeds the available storage space: nxest >> or nyest too small, or s too small. >> The weighted least-squares spline corresponds to the current set of >> knots. >> warnings.warn(message) >> ........................K..K....../usr/lib64/python2.7/site-packages/numpy/core/numeric.py:1920: RuntimeWarning: invalid value encountered in absolute >> return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) >> ............................................................................................................................................................................................................................................................................................................................................................................................................................................/homes/sjohn/.local/lib64/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes >> warnings.warn("Unfamiliar format bytes", WavFileWarning) >> /homes/sjohn/.local/lib64/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood >> warnings.warn("chunk not understood", WavFileWarning) >> ...............................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS......................................................................................................................................................FF[New Thread 0x7fffe1fc9700 (LWP 5622)] >> [New Thread 0x7fffe17c8700 (LWP 5623)] >> [New Thread 0x7fffe0fc7700 (LWP 5625)] >> [New Thread 0x7fffe07c6700 (LWP 5624)] >> [Thread 0x7fffe0fc7700 (LWP 5625) exited] >> [Thread 0x7fffe07c6700 (LWP 5624) exited] >> [Thread 0x7fffe17c8700 (LWP 5623) exited] >> [Thread 0x7fffe1fc9700 (LWP 5622) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5626)] >> [New Thread 0x7fffe17c8700 (LWP 5627)] >> [Thread 0x7fffe17c8700 (LWP 5627) exited] >> [Thread 0x7fffe1fc9700 (LWP 5626) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5628)] >> [New Thread 0x7fffe17c8700 (LWP 5629)] >> [New Thread 0x7fffe07c6700 (LWP 5630)] >> [New Thread 0x7fffe0fc7700 (LWP 5631)] >> [Thread 0x7fffe07c6700 (LWP 5630) exited] >> [Thread 0x7fffe0fc7700 (LWP 5631) exited] >> [Thread 0x7fffe17c8700 (LWP 5629) exited] >> [Thread 0x7fffe1fc9700 (LWP 5628) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5632)] >> [New Thread 0x7fffe17c8700 (LWP 5633)] >> [Thread 0x7fffe17c8700 (LWP 5633) exited] >> [Thread 0x7fffe1fc9700 (LWP 5632) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5634)] >> [New Thread 0x7fffe17c8700 (LWP 5635)] >> [New Thread 0x7fffe0fc7700 (LWP 5636)] >> [Thread 0x7fffe17c8700 (LWP 5635) exited] >> [Thread 0x7fffe0fc7700 (LWP 5636) exited] >> [Thread 0x7fffe1fc9700 (LWP 5634) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5637)] >> [New Thread 0x7fffe17c8700 (LWP 5638)] >> [New Thread 0x7fffe0fc7700 (LWP 5639)] >> [Thread 0x7fffe17c8700 (LWP 5638) exited] >> [Thread 0x7fffe0fc7700 (LWP 5639) exited] >> [Thread 0x7fffe1fc9700 (LWP 5637) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5640)] >> [New Thread 0x7fffe17c8700 (LWP 5641)] >> [New Thread 0x7fffe0fc7700 (LWP 5642)] >> [Thread 0x7fffe17c8700 (LWP 5641) exited] >> [Thread 0x7fffe0fc7700 (LWP 5642) exited] >> [Thread 
0x7fffe1fc9700 (LWP 5640) exited] >> F[New Thread 0x7fffe1fc9700 (LWP 5643)] >> [New Thread 0x7fffe17c8700 (LWP 5644)] >> [New Thread 0x7fffe07c6700 (LWP 5646)] >> [New Thread 0x7fffe0fc7700 (LWP 5645)] >> [Thread 0x7fffe0fc7700 (LWP 5645) exited] >> [Thread 0x7fffe07c6700 (LWP 5646) exited] >> [Thread 0x7fffe17c8700 (LWP 5644) exited] >> [Thread 0x7fffe1fc9700 (LWP 5643) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5647)] >> [New Thread 0x7fffe17c8700 (LWP 5648)] >> [Thread 0x7fffe17c8700 (LWP 5648) exited] >> [Thread 0x7fffe1fc9700 (LWP 5647) exited] >> F[New Thread 0x7fffe1fc9700 (LWP 5649)] >> [New Thread 0x7fffe17c8700 (LWP 5650)] >> [Thread 0x7fffe17c8700 (LWP 5650) exited] >> [Thread 0x7fffe1fc9700 (LWP 5649) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5651)] >> [New Thread 0x7fffe17c8700 (LWP 5652)] >> [Thread 0x7fffe17c8700 (LWP 5652) exited] >> [Thread 0x7fffe1fc9700 (LWP 5651) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5653)] >> [New Thread 0x7fffe17c8700 (LWP 5654)] >> [Thread 0x7fffe17c8700 (LWP 5654) exited] >> [Thread 0x7fffe1fc9700 (LWP 5653) exited] >> F..F..FFF..FF.F...[New Thread 0x7fffe1fc9700 (LWP 5655)] >> [New Thread 0x7fffe17c8700 (LWP 5656)] >> [Thread 0x7fffe17c8700 (LWP 5656) exited] >> [Thread 0x7fffe1fc9700 (LWP 5655) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5657)] >> [New Thread 0x7fffe17c8700 (LWP 5658)] >> [Thread 0x7fffe17c8700 (LWP 5658) exited] >> [Thread 0x7fffe1fc9700 (LWP 5657) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5659)] >> [New Thread 0x7fffe17c8700 (LWP 5660)] >> [New Thread 0x7fffe07c6700 (LWP 5661)] >> [Thread 0x7fffe17c8700 (LWP 5660) exited] >> [Thread 0x7fffe07c6700 (LWP 5661) exited] >> [Thread 0x7fffe1fc9700 (LWP 5659) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5662)] >> [New Thread 0x7fffe17c8700 (LWP 5663)] >> [New Thread 0x7fffe07c6700 (LWP 5664)] >> [Thread 0x7fffe17c8700 (LWP 5663) exited] >> [Thread 0x7fffe07c6700 (LWP 5664) exited] >> [Thread 0x7fffe1fc9700 (LWP 5662) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5665)] >> [New Thread 0x7fffe17c8700 (LWP 5666)] >> [New Thread 0x7fffe0fc7700 (LWP 5668)] >> [New Thread 0x7fffe07c6700 (LWP 5667)] >> [Thread 0x7fffe0fc7700 (LWP 5668) exited] >> [Thread 0x7fffe07c6700 (LWP 5667) exited] >> [Thread 0x7fffe17c8700 (LWP 5666) exited] >> [Thread 0x7fffe1fc9700 (LWP 5665) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5669)] >> [New Thread 0x7fffe17c8700 (LWP 5670)] >> [Thread 0x7fffe17c8700 (LWP 5670) exited] >> [Thread 0x7fffe1fc9700 (LWP 5669) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5671)] >> [New Thread 0x7fffe17c8700 (LWP 5672)] >> [New Thread 0x7fffe0fc7700 (LWP 5673)] >> [New Thread 0x7fffe07c6700 (LWP 5674)] >> [Thread 0x7fffe07c6700 (LWP 5674) exited] >> [Thread 0x7fffe0fc7700 (LWP 5673) exited] >> [Thread 0x7fffe17c8700 (LWP 5672) exited] >> [Thread 0x7fffe1fc9700 (LWP 5671) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5675)] >> [New Thread 0x7fffe17c8700 (LWP 5676)] >> [Thread 0x7fffe17c8700 (LWP 5676) exited] >> [Thread 0x7fffe1fc9700 (LWP 5675) exited] >> .[New Thread 0x7fffe1fc9700 (LWP 5677)] >> [New Thread 0x7fffe17c8700 (LWP 5678)] >> [Thread 0x7fffe17c8700 (LWP 5678) exited] >> [Thread 0x7fffe1fc9700 (LWP 5677) exited] >> [New Thread 0x7fffe1fc9700 (LWP 5679)] >> [New Thread 0x7fffe17c8700 (LWP 5680)] >> [Thread 0x7fffe17c8700 (LWP 5680) exited] >> [Thread 0x7fffe1fc9700 (LWP 5679) exited] >> 
..........................................................................................................................................................K................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. > .............................................................................................................................................................................../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:259: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead >> ' install scikits.umfpack instead', DeprecationWarning ) >> ../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:75: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead >> ' install scikits.umfpack instead', DeprecationWarning ) >> ..........................................................................................................................................................F............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. >> Program received signal SIGSEGV, Segmentation fault. >> 0x00007ffff591dc60 in ATL_dgerk_L2_restrict () from /usr/lib64/libatlas.so.0 >> (gdb) >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Building without using Atlas should confirm if Atlas is involved - probably have to build both numpy and scipy this way: $ ATLAS=None python setup.py build As Pauli said, there are only a couple of operations that should cause this so try to find which line of the test is involved. You should be able to run the individual test from the source directory. Then you can modify the test file such as adding an 'exit()' in the test_nonlin.py in the 'TestJacobianDotSolve' class to find the line involved etc.: $ pwd {path to scipy}/scipy/scipy/optimize/tests $ python test_nonlin.py .......EEEE.E.............................................. Bruce From D.J.Baker at soton.ac.uk Wed Sep 21 11:34:52 2011 From: D.J.Baker at soton.ac.uk (Baker D.J.) 
Date: Wed, 21 Sep 2011 16:34:52 +0100 Subject: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault In-Reply-To: <4E79F17D.4000803@gmail.com> References: <65143C52-90A2-4F45-A90D-9AC0618AFFB9@samueljohn.de> <4E79F17D.4000803@gmail.com> Message-ID: Hello, Pauli, thank you again for your advice. I now have scipy 0.9.0 working as expected. I downloaded and built both the BLAS and LAPACK libraries from netlib, remade numpy (and tested) and finally remade scipy (and tested). Also I was very particular to ensure that "gfortran" was used for all the compilations. As I say scipy.test now completes with an acceptable result and there certainly aren't any segmentation faults. Best regards -- David. -----Original Message----- From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On Behalf Of Bruce Southey Sent: Wednesday, September 21, 2011 3:15 PM To: scipy-dev at scipy.org Subject: Re: [SciPy-Dev] scipy 0.9.0 scipy.test fails with segmentation fault On 09/21/2011 07:32 AM, Samuel John wrote: > Hi, > > I have no solution to offer :-( > > But perhaps a related issue on gentoo. > I have pasted the result of running scipy.test() with gdb output. > We found superlu or libatlas to be involved but could not figure out more. > I opend a scipy ticket (see below) but I am not sure if it is scipy's fault after all. > > > On 15.09.2011, at 03:36, Xiaoye S. Li wrote: >>> It's hard to say whether it's in libsuperlu or libatlas, or in the scipy interface to these libraries. You may have to print the matrix, and test superlu and atlas in isolation. >> Sherry Li >> > >> On Mon, Sep 12, 2011 at 6:44 AM, Samuel John wrote: >> Hi there, >> >> we have got a segmentation fault in scipy, which seems to occur in >> libsuperlu.4.so >> http://projects.scipy.org/scipy/ticket/1513 >> >> I am unsure, if this is really the fault of scipy or libsuperlu or something else. >> For scipy 0.11.dev the segfault happens to occur in libatlas.so.0 and not in libsuperlu. >> >> Perhaps you do have an idea. >> (The details are right now in the scipy ticket) >> >> Thanks in advance! >> >> Samuel >> - - - - - - - - - - - - >> >> >> >> >> sjohn at macabeo:~ $ gdb --args python -c 'import scipy; scipy.test()' >> GNU gdb (Gentoo 7.2 p1) 7.2 >> Copyright (C) 2010 Free Software Foundation, Inc. >> License GPLv3+: GNU GPL version 3 or >> later >> This is free software: you are free to change and redistribute it. >> There is NO WARRANTY, to the extent permitted by law. Type "show copying" >> and "show warranty" for details. >> This GDB was configured as "x86_64-pc-linux-gnu". >> For bug reporting instructions, please see: >> ... >> Reading symbols from /usr/bin/python...(no debugging symbols found)...done. 
>> (gdb) run
>> Starting program: /usr/bin/python -c import\ scipy\;\ scipy.test\(\)
>> process 5606 is executing new program: /usr/bin/python2.7
>> [Thread debugging using libthread_db enabled]
>> Running unit tests for scipy
>> NumPy version 1.6.1
>> NumPy is installed in /usr/lib64/python2.7/site-packages/numpy
>> SciPy version 0.11.0.dev-1f6595e
>> SciPy is installed in /homes/sjohn/.local/lib64/python2.7/site-packages/scipy
>> Python version 2.7.2 (default, Sep 7 2011, 17:08:51) [GCC 4.5.3]
>> nose version 1.0.0
>> ............................K..../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning:
>> The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps.
>> warnings.warn(message)
>> [... remainder of the quoted test log trimmed: it repeats the fitpack2, numpy.core.numeric and wavfile warnings, the skipped tests and failures, and the GDB thread start/exit messages already shown above ...]
> .................................../homes/sjohn/.local/lib64/python2.7 > /site-packages/scipy/sparse/linalg/dsolve/linsolve.py:259: > DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be > removed, install scikits.umfpack instead >> ' install scikits.umfpack instead', DeprecationWarning ) >> ../homes/sjohn/.local/lib64/python2.7/site-packages/scipy/sparse/lina >> lg/dsolve/linsolve.py:75: DeprecationWarning: >> scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) ..........................................................................................................................................................F............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. >> Program received signal SIGSEGV, Segmentation fault. >> 0x00007ffff591dc60 in ATL_dgerk_L2_restrict () from >> /usr/lib64/libatlas.so.0 >> (gdb) >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Building without using Atlas should confirm if Atlas is involved - probably have to build both numpy and scipy this way: $ ATLAS=None python setup.py build As Pauli said, there are only a couple of operations that should cause this so try to find which line of the test is involved. You should be able to run the individual test from the source directory. Then you can modify the test file such as adding an 'exit()' in the test_nonlin.py in the 'TestJacobianDotSolve' class to find the line involved etc.: $ pwd {path to scipy}/scipy/scipy/optimize/tests $ python test_nonlin.py .......EEEE.E.............................................. Bruce _______________________________________________ SciPy-Dev mailing list SciPy-Dev at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-dev From ralf.gommers at googlemail.com Wed Sep 21 13:47:50 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 21 Sep 2011 19:47:50 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.10.0 beta 2 In-Reply-To: References: <4E76488E.1030106@uci.edu> Message-ID: On Wed, Sep 21, 2011 at 3:39 PM, Martin Teichmann < martin.teichmann at mbi-berlin.de> wrote: > Hi all, > > There was a discussion at https://github.com/scipy/scipy/pull/76 > to deprecate qr_old, and while I know that 0.10 is already in the > second beta, maybe it's still a good idea to do that, I think > it's very unlikely that we introduce a bug through deprecation, > but it should be announced to users ASAP, so that they can > adopt their code. (This also will allow for feedback if there is > still anyone out there using qr_old...) > > This is not something I like to do at the last moment, doing it at the beginning of a release cycle (i.e. do it soon, but for 0.11) gives people more time before the code is actually removed. I also don't see why we need to be in a hurry, it's a simple code cleanup. Finally, it should be done only when the new improvements to qr are merged. Which is very soon I guess. 
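For readers unfamiliar with how such a deprecation is usually staged in code, here is a minimal sketch of the mechanics; the decorator below is illustrative only and is not the actual scipy implementation:

import warnings
import functools

def deprecated(msg):
    # Wrap a callable so that every call emits a DeprecationWarning first.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# A routine slated for removal keeps working for a release or two but starts
# warning on every call, e.g. (hypothetical usage):
# qr_old = deprecated("qr_old is deprecated, use scipy.linalg.qr instead")(qr_old)

Emitting the warning a full release cycle before the code is actually removed is what gives users the time to adapt that the paragraph above argues for.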
Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabian.pedregosa at inria.fr Thu Sep 22 04:14:55 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Thu, 22 Sep 2011 10:14:55 +0200 Subject: [SciPy-Dev] Announce: scikit-learn 0.9 Message-ID: Dear all, I am pleased to announce the availability of scikits.learn 0.9. scikit-learn 0.9 was released on September 2011, three months after the 0.8 release and includes the new modules Manifold learning, The Dirichlet Process as well as a dozen of new algorithms, datasets, performance and documentation improvements. All this and much more can be found in the changelog: http://scikit-learn.sourceforge.net/whats_new.html As usual, sources and windows binaries can be found on pypi (http://pypi.python.org/pypi/scikit-learn/0.9) or installed though pip: pip install -U scikit-learn Best regards, Fabian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Sep 22 04:51:20 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 22 Sep 2011 10:51:20 +0200 Subject: [SciPy-Dev] [Scikit-learn-general] Announce: scikit-learn 0.9 In-Reply-To: References: Message-ID: <20110922085119.GA30520@phare.normalesup.org> A new release of awesome! > http://scikit-learn.sourceforge.net/whats_new.html Pretty crazy changelog. Cheers to the team! Thanks Fabian (and I believe Vlad) for making the release. Releases are vital for a project. Gael From martin.dulovits at woogieworks.at Thu Sep 22 05:03:37 2011 From: martin.dulovits at woogieworks.at (Martin Dulovits) Date: Thu, 22 Sep 2011 11:03:37 +0200 Subject: [SciPy-Dev] SciPy/Numpy windows x64 build vc8/vs2005 Message-ID: <4E7AF9E9.7090304@woogieworks.at> I am searching for a build of SciPy numpy for win64 which was build by VC8/VS2005. I would need this library to extend an application called houdini by sidefx and they require their packages to be build by VC8. They have an integrated python shell and within this shell i would like to use scipy. I succeeded after some work on building numpy without fortran but i am miles away from building scipy. I hope someone can help. thx dulo From g.durin at inrim.it Thu Sep 22 11:59:19 2011 From: g.durin at inrim.it (Gianfranco Durin) Date: Thu, 22 Sep 2011 17:59:19 +0200 (CEST) Subject: [SciPy-Dev] Docs.scipy.org wiki edit rights In-Reply-To: <415abf45-f865-44e7-8ee4-4d273f1944a0@sisrv6> Message-ID: Hi all, I am extensively using the least-square fitting method and I recently found a bug in the documentation together with some improvements to add. I'd like to get the rights to edit the docs. I will soon present the results of my analysis to the list Many thanks Gianfranco From g.durin at inrim.it Thu Sep 22 13:23:03 2011 From: g.durin at inrim.it (Gianfranco Durin) Date: Thu, 22 Sep 2011 19:23:03 +0200 (CEST) Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: <971ba7c8-ec01-4c7d-9e08-9d17fbb2143c@sisrv6> Message-ID: Dear all, I wanted to briefly show you the results of an analysis I did on the performances of the optimize.leastsq method for data fitting. I presented these results at the last Python in Physics workshop. You can download the pdf here: http://emma.inrim.it:8080/gdurin/talks. 1. The main concern is about the use of cov_x to estimate the error bar of the fitting parameters. 
In the docs, it is set that "this matrix must be multiplied by the residual standard deviation to get the covariance of the parameter estimates -- see curve_fits."" Unfortunately, this is not correct, or better it is only partially correct. It is correct if there are no error bars of the input data (the sigma of curve_fit is None). But if provided, they are used as "as weights in least-squares problem" (curve_fit doc), and cov_x gives directly the covariance of the parameter estimates (i.e. the diagonal terms are the errors in the parameters). See for instance here: http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html. This means that not only the doc needs fixing, but also the curve_fit code, those estimation of the parameters' error is INDEPENDENT of the values of the data errors in the case they are constant, which is clearly wrong. I have never provided a patch, but the fix should be quite simple, just please give me indication on how to do that. 2. The convergence of the fit in the most difficult cases (see page 15 of my presentation) can required up to about 3000 iterations, reduced to 800 when using analytical derivatives. For quite a long time I did not realized that the fit needed more iterations that the number set by maxfev, and thus I started to think that the leastsq was not good enough for 'hard' data. As a matter of fact, maxfev : int The maximum number of calls to the function. If zero, then 100*(N+1) is the maximum where N is the number of elements in x0. is pretty low, so I suggest to increase the prefactor 100 to 1000. A relatively huge number is not a problem, by the way, because if the system is sloppy. i.e. one parameter does not move too much, the routine stops and complains with "Both actual and predicted relative reductions in the sum of squares are at most 0.000000 and the relative error between two consecutive iterates is at most 0.000000" as in the case of boxBod at pg. 15. By the way, we should also advice that the in case of analytical derivative this number is half, even if I personally would keep the same number for both cases. This is all for the moment. Many thanks for your attention, and sorry for the long mail Gianfranco From gael.varoquaux at normalesup.org Thu Sep 22 13:40:43 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 22 Sep 2011 19:40:43 +0200 Subject: [SciPy-Dev] Docs.scipy.org wiki edit rights In-Reply-To: References: <415abf45-f865-44e7-8ee4-4d273f1944a0@sisrv6> Message-ID: <20110922174043.GA29322@phare.normalesup.org> On Thu, Sep 22, 2011 at 05:59:19PM +0200, Gianfranco Durin wrote: > I am extensively using the least-square fitting method and I recently found a bug in the documentation together with some improvements to add. > I'd like to get the rights to edit the docs. You should be good to go! Gael From josef.pktd at gmail.com Thu Sep 22 13:56:55 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 22 Sep 2011 13:56:55 -0400 Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: References: <971ba7c8-ec01-4c7d-9e08-9d17fbb2143c@sisrv6> Message-ID: On Thu, Sep 22, 2011 at 1:23 PM, Gianfranco Durin wrote: > Dear all, > I wanted to briefly show you the results of an analysis I did on the performances of the optimize.leastsq method for data fitting. I presented these results at the last Python in Physics workshop. You can download the pdf here: http://emma.inrim.it:8080/gdurin/talks. > > 1. 
The main concern is about the use of cov_x to estimate the error bar of the fitting parameters. In the docs, it is set that "this matrix must be multiplied by the residual standard deviation to get the covariance of the parameter estimates -- see curve_fits."" > > Unfortunately, this is not correct, or better it is only partially correct. It is correct if there are no error bars of the input data ?(the sigma of curve_fit is None). But if provided, they are used as "as weights in least-squares problem" (curve_fit doc), and cov_x gives directly the covariance of the parameter estimates (i.e. the diagonal terms are the errors in the parameters). See for instance here: http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html. > > This means that not only the doc needs fixing, but also the curve_fit code, those estimation of the parameters' error is INDEPENDENT of the values of the data errors in the case they are constant, which is clearly wrong. > I have never provided a patch, but the fix should be quite simple, just please give me indication on how to do that. Since this depends on what you define as weight or sigma, both are correct but use different definitions. Since we just had this discussion, I'm not arguing again. I just want to have clear definitions that the "average" user can use by default. I don't really care which it is if the majority of users are engineers who can tell what their errors variances are before doing any estimation. Josef > > 2. The convergence of the fit in the most difficult cases (see page 15 of my presentation) can required up to about 3000 iterations, reduced to 800 when using analytical derivatives. For quite a long time I did not realized that the fit needed more iterations that the number set by maxfev, and thus I started to think that the leastsq was not good enough for 'hard' data. > > As a matter of fact, > > maxfev : int > ? ?The maximum number of calls to the function. If zero, then 100*(N+1) is > ? ?the maximum where N is the number of elements in x0. > > is pretty low, so I suggest to increase the prefactor 100 to 1000. A relatively huge number is not a problem, by the way, because if the system is sloppy. i.e. one parameter does not move too much, the routine stops and complains with "Both actual and predicted relative reductions in the sum of squares are at most 0.000000 and the relative error between two consecutive iterates is at most 0.000000" as in the case of boxBod at pg. 15. > > By the way, we should also advice that the in case of analytical derivative this number is half, even if I personally would keep the same number for both cases. > > This is all for the moment. > > Many thanks for your attention, and sorry for the long mail > > Gianfranco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From johann.cohentanugi at gmail.com Sat Sep 24 18:18:21 2011 From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi) Date: Sun, 25 Sep 2011 00:18:21 +0200 Subject: [SciPy-Dev] [SciPy-User] polylogarithm? In-Reply-To: References: <4E719695.1080503@gmail.com> <4E7C87C8.9080104@gmail.com> Message-ID: <4E7E572D.5010909@gmail.com> For the record, I created a first pull request, with limited scope, in order to make the C call to cephes described below possible. Further details in https://github.com/scipy/scipy/pull/84 I will switch to the dev list from now on. 
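As background for the zeta discussion quoted below, a small illustrative check of the identity involved (zetac(s) is defined as zeta(s) - 1, so Riemann values can be recovered as 1 + zetac(s)); the sample points are arbitrary:

import numpy as np
from scipy import special

s = np.array([2.0, 3.5, 10.0])           # points where both functions are defined
riemann = special.zeta(s, 1)              # Hurwitz zeta with q=1 is the Riemann zeta
via_zetac = 1.0 + special.zetac(s)        # zetac(s) = zeta(s) - 1
print(np.allclose(riemann, via_zetac))    # expected: True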
best, Johann On 09/23/2011 06:18 PM, Pauli Virtanen wrote: > Fri, 23 Sep 2011 15:21:12 +0200, Johann Cohen-Tanugi wrote: > [clip] >> zetac(z) knows how to eat negative arguments, but not zeta(z,1).... Is >> there a reason why special.zeta does not default to 1+special.zetac for >> s=1? This would make the behavior of the 2 functions more identical. > Probably no reason, except that it wasn't implemented. mpmath is > impressive, and in several ways ahead of scipy.special --- or at least in > the parts where the problems overlap, as you can do tricks with arbitrary > precision that are not really feasible. > > Note that if you need to call the zeta function from the Cython extension, > call the C library directly: > > cdef extern from "cephes.h": > double zeta(double x, double q) > double zetac(double x) > > and link the extension with the "sc_cephes" library. > > *** > > There's a formula (look in the mpmath sources ;) for the transform > from x< 0 to x> 0 for zeta(x, a) for general a. But that needs polylog. > > The implementation for zetac(x) for x< 0 seems also a bit incomplete, > as it goes only down to -30.8148. It seems this is due to a silly reason, > it should use gammaln instead of Gamma to avoid the overflow. > > Pauli > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Sun Sep 25 15:26:40 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 25 Sep 2011 21:26:40 +0200 Subject: [SciPy-Dev] Arpack problems and 0.10.0 release Message-ID: Hi, The first release candidate for scipy 0.10.0 will have to be delayed a bit, due to problems with Arpack: http://projects.scipy.org/scipy/ticket/1515 (test failure on 64-bit Linux) http://projects.scipy.org/scipy/ticket/1523 (crash on 64-bit OS X) I'm hoping someone has the time and knowledge to fix this. For the OS X crash, it would also be helpful if someone could (a) reproduce it and (b) narrow it down to a self-contained example for easier debugging. Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.durin at inrim.it Mon Sep 26 09:57:25 2011 From: g.durin at inrim.it (Gianfranco Durin) Date: Mon, 26 Sep 2011 15:57:25 +0200 (CEST) Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: Message-ID: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6> ----- Original Message ----- > On Thu, Sep 22, 2011 at 1:23 PM, Gianfranco Durin > wrote: > > Dear all, > > I wanted to briefly show you the results of an analysis I did on > > the performances of the optimize.leastsq method for data fitting. > > I presented these results at the last Python in Physics workshop. > > You can download the pdf here: > > http://emma.inrim.it:8080/gdurin/talks. > > > > 1. The main concern is about the use of cov_x to estimate the error > > bar of the fitting parameters. In the docs, it is set that "this > > matrix must be multiplied by the residual standard deviation to > > get the covariance of the parameter estimates -- see curve_fits."" > > > > Unfortunately, this is not correct, or better it is only partially > > correct. It is correct if there are no error bars of the input > > data ?(the sigma of curve_fit is None). But if provided, they are > > used as "as weights in least-squares problem" (curve_fit doc), and > > cov_x gives directly the covariance of the parameter estimates > > (i.e. the diagonal terms are the errors in the parameters). 
See > > for instance here: > > http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html. > > > > This means that not only the doc needs fixing, but also the > > curve_fit code, those estimation of the parameters' error is > > INDEPENDENT of the values of the data errors in the case they are > > constant, which is clearly wrong. > > I have never provided a patch, but the fix should be quite simple, > > just please give me indication on how to do that. > > Since this depends on what you define as weight or sigma, both are > correct but use different definitions. Of course, but this is not the point. Let me explain. In our routines, the cov_x is calculated as (J^T * J) ^-1, where J is the jacobian. The Jacobian, unless provided directly, is calculated using the definition of the residuals, which in curve_fit method are "_general_function", and "_weighted_general_function". The latter function explicitly uses the weights, so the cov_x is more precisely (J^T W J) ^-1, where W is the matrix of the weights, and J is just the matrix of the first derivatives. Thus in this case, the diagonal elements of cov_x give the variance of the parameters. No need to multiply by the residual standard deviation. In case of W == 1, i.e. no errors in the data are provided, as reported here http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares one uses the variance of the observations, i.e. uses the s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0)) as an estimate of the variance, as done in curve_fit. BUT, we cannot multiply the cov_x obtained with the _weighted_general_function by s_sq. As I told, we already took it into account in the definition of the residuals. Thus: if (len(ydata) > len(p0)) and pcov is not None: s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0)) pcov = pcov * s_sq else: pcov = inf where func can be both "_general_function" and "_weighted_general_function", is not correct. > > Since we just had this discussion, I'm not arguing again. I just want > to have clear definitions that the "average" user can use by default. > I don't really care which it is if the majority of users are > engineers > who can tell what their errors variances are before doing any > estimation. > Oh, interesting. Where did you have this discussion? On this list? I could not find it... Here the problem is not to decide an 'average' behaviour, but to correctly calculate the parameters' error when the user does or does not provide the errors in the data. Hope this helps Gianfranco From npkuin at gmail.com Mon Sep 26 10:31:31 2011 From: npkuin at gmail.com (Paul Kuin) Date: Mon, 26 Sep 2011 15:31:31 +0100 Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6> References: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6> Message-ID: I am just a simple user, but perhaps this gives some idea of what us users experience: I got so fed up with leastsq not having good documentation, not being able to set limits to the parameters to be fit, and not handling errors in input measurements in a transparent way, that I was very happy to replace it with the pkfit routine based on Craig Markwards IDL code. I am happy now, but I waisted a lot of time because of these leastsq issues. Anyway - I am happy now. 
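To make the disagreement in this thread concrete, one can run a small check of how the covariance returned by curve_fit reacts when all input sigmas are multiplied by a common factor; the model and data below are made up purely for illustration:

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a + b * x

rng = np.random.RandomState(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=x.size)
sigma = 0.5 * np.ones_like(x)

# Fit once with the nominal errors and once with errors inflated by a factor 10.
popt1, pcov1 = curve_fit(model, x, y, sigma=sigma)
popt2, pcov2 = curve_fit(model, x, y, sigma=10.0 * sigma)

print(np.sqrt(np.diag(pcov1)))  # parameter standard errors with the nominal sigma
print(np.sqrt(np.diag(pcov2)))  # identical output would mean the overall scale is ignored

Whether the two printed error estimates agree or differ by a factor of ten is exactly the difference between the "relative weights" and "absolute errors" conventions debated in this thread.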
Paul Kuin On Mon, Sep 26, 2011 at 2:57 PM, Gianfranco Durin wrote: > ----- Original Message ----- >> On Thu, Sep 22, 2011 at 1:23 PM, Gianfranco Durin >> wrote: >> > Dear all, >> > I wanted to briefly show you the results of an analysis I did on >> > the performances of the optimize.leastsq method for data fitting. >> > I presented these results at the last Python in Physics workshop. >> > You can download the pdf here: >> > http://emma.inrim.it:8080/gdurin/talks. >> > >> > 1. The main concern is about the use of cov_x to estimate the error >> > bar of the fitting parameters. In the docs, it is set that "this >> > matrix must be multiplied by the residual standard deviation to >> > get the covariance of the parameter estimates -- see curve_fits."" >> > >> > Unfortunately, this is not correct, or better it is only partially >> > correct. It is correct if there are no error bars of the input >> > data ?(the sigma of curve_fit is None). But if provided, they are >> > used as "as weights in least-squares problem" (curve_fit doc), and >> > cov_x gives directly the covariance of the parameter estimates >> > (i.e. the diagonal terms are the errors in the parameters). See >> > for instance here: >> > http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html. >> > >> > This means that not only the doc needs fixing, but also the >> > curve_fit code, those estimation of the parameters' error is >> > INDEPENDENT of the values of the data errors in the case they are >> > constant, which is clearly wrong. >> > I have never provided a patch, but the fix should be quite simple, >> > just please give me indication on how to do that. >> >> Since this depends on what you define as weight or sigma, both are >> correct but use different definitions. > > Of course, but this is not the point. Let me explain. > > In our routines, the cov_x is calculated as (J^T * J) ^-1, where J is the jacobian. The Jacobian, unless provided directly, is calculated using the definition of the residuals, which in curve_fit method are "_general_function", and "_weighted_general_function". The latter function explicitly uses the weights, so the cov_x is more precisely (J^T W J) ^-1, where W is the matrix of the weights, and J is just the matrix of the first derivatives. Thus in this case, the diagonal elements of cov_x give the variance of the parameters. No need to multiply by the residual standard deviation. > In case of W == 1, i.e. no errors in the data are provided, as reported here http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares > > one uses the variance of the observations, i.e. uses the > > s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0)) > > as an estimate of the variance, as done in curve_fit. > > BUT, we cannot multiply the cov_x obtained with the _weighted_general_function by s_sq. As I told, we already took it into account in the definition of the residuals. Thus: > > ? ?if (len(ydata) > len(p0)) and pcov is not None: > ? ? ? ?s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0)) > ? ? ? ?pcov = pcov * s_sq > ? ?else: > ? ? ? ?pcov = inf > > > where func can be both "_general_function" and "_weighted_general_function", is not correct. > >> >> Since we just had this discussion, I'm not arguing again. I just want >> to have clear definitions that the "average" user can use by default. 
>> I don't really care which it is if the majority of users are >> engineers >> who can tell what their errors variances are before doing any >> estimation. >> > > Oh, interesting. Where did you have this discussion? On this list? I could not find it... > > Here the problem is not to decide an 'average' behaviour, but to correctly calculate the parameters' error when the user does or does not provide the errors in the data. > > Hope this helps > Gianfranco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Mon Sep 26 12:19:02 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 26 Sep 2011 12:19:02 -0400 Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6> References: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6> Message-ID: On Mon, Sep 26, 2011 at 9:57 AM, Gianfranco Durin wrote: > ----- Original Message ----- > > On Thu, Sep 22, 2011 at 1:23 PM, Gianfranco Durin > > wrote: > > > Dear all, > > > I wanted to briefly show you the results of an analysis I did on > > > the performances of the optimize.leastsq method for data fitting. > > > I presented these results at the last Python in Physics workshop. > > > You can download the pdf here: > > > http://emma.inrim.it:8080/gdurin/talks. > > > > > > 1. The main concern is about the use of cov_x to estimate the error > > > bar of the fitting parameters. In the docs, it is set that "this > > > matrix must be multiplied by the residual standard deviation to > > > get the covariance of the parameter estimates -- see curve_fits."" > > > > > > Unfortunately, this is not correct, or better it is only partially > > > correct. It is correct if there are no error bars of the input > > > data (the sigma of curve_fit is None). But if provided, they are > > > used as "as weights in least-squares problem" (curve_fit doc), and > > > cov_x gives directly the covariance of the parameter estimates > > > (i.e. the diagonal terms are the errors in the parameters). See > > > for instance here: > > > > http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html > . > > > > > > This means that not only the doc needs fixing, but also the > > > curve_fit code, those estimation of the parameters' error is > > > INDEPENDENT of the values of the data errors in the case they are > > > constant, which is clearly wrong. > > > I have never provided a patch, but the fix should be quite simple, > > > just please give me indication on how to do that. > > > > Since this depends on what you define as weight or sigma, both are > > correct but use different definitions. > > Of course, but this is not the point. Let me explain. > > In our routines, the cov_x is calculated as (J^T * J) ^-1, where J is the > jacobian. The Jacobian, unless provided directly, is calculated using the > definition of the residuals, which in curve_fit method are > "_general_function", and "_weighted_general_function". The latter function > explicitly uses the weights, so the cov_x is more precisely (J^T W J) ^-1, > where W is the matrix of the weights, and J is just the matrix of the first > derivatives. Thus in this case, the diagonal elements of cov_x give the > variance of the parameters. No need to multiply by the residual standard > deviation. > In case of W == 1, i.e. 
no errors in the data are provided, as reported here
> http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares
>
> one uses the variance of the observations, i.e. uses the
>
> s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
>
> as an estimate of the variance, as done in curve_fit.
>
> BUT, we cannot multiply the cov_x obtained with the
> _weighted_general_function by s_sq. As I told, we already took it into
> account in the definition of the residuals. Thus:
>
>     if (len(ydata) > len(p0)) and pcov is not None:
>         s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
>         pcov = pcov * s_sq
>     else:
>         pcov = inf
>
> where func can be both "_general_function" and "_weighted_general_function", is not correct.

M = sigma^2 I          ok unit weights

M = W^(-1)             your case, W has the estimates of the error covariance

M = sigma^2 W^(-1)     I think this is what curve_fit uses, and what is in (my) textbooks defined as weighted least squares

Do we use definition 2 or 3 by default? both are reasonable

3 is what I expected when I looked at curve_fit
2 might be more useful for two stage estimation, but I didn't have time to check the details

> > Since we just had this discussion, I'm not arguing again. I just want
> > to have clear definitions that the "average" user can use by default.
> > I don't really care which it is if the majority of users are engineers
> > who can tell what their errors variances are before doing any estimation.
>
> Oh, interesting. Where did you have this discussion? On this list? I could
> not find it...

Sorry, I needed to find it
http://groups.google.com/group/scipy-user/browse_thread/thread/55497657b2f11c14?hl=en

> Here the problem is not to decide an 'average' behaviour, but to correctly
> calculate the parameters' error when the user does or does not provide the
> errors in the data.

correctness depends on the definitions, and definitions should be made clearer in the documentation, but it looks like the default interpretation (for users that don't read the small print) will depend on the background.

Josef

> Hope this helps
> Gianfranco
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Sep 26 12:39:30 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 26 Sep 2011 12:39:30 -0400
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To: References: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6>
Message-ID:

On Mon, Sep 26, 2011 at 10:31 AM, Paul Kuin wrote:
> I am just a simple user, but perhaps this gives some idea of what us users
> experience:
>
> I got so fed up with leastsq not having good documentation, not being able
> to set limits to the parameters to be fit, and not handling errors in
> input measurements in a transparent way, that I was very happy to replace
> it with the pkfit routine based on Craig Markwards IDL code. I am happy
> now, but I wasted a lot of time because of these leastsq issues.
>
> Anyway - I am happy now.

I'm a pretty happy user of scipy.optimize.leastsq because it is just a low level optimizer that I can use for many different models and because it is not a curve fitting function.
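A minimal sketch of that low-level usage, with made-up data and parameter names, just to illustrate the calling convention:

import numpy as np
from scipy.optimize import leastsq

x = np.linspace(0.0, 4.0, 30)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * np.random.randn(x.size)

def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(b * x)      # leastsq only needs the residual vector

p0 = [1.0, -1.0]                      # rough starting guess
popt, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(x, y),
                                           full_output=True)
print(popt, ier)                      # ier equal to 1, 2, 3 or 4 means a solution was found

The same residual function can be reused for any model; curve_fit is essentially a convenience wrapper that builds such a residual from a model function and optional sigmas.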
Josef

> [remainder of the earlier messages in this thread quoted in full, trimmed]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From g.durin at inrim.it Tue Sep 27 05:17:40 2011
From: g.durin at inrim.it (Gianfranco Durin)
Date: Tue, 27 Sep 2011 11:17:40 +0200 (CEST)
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To: Message-ID: <5a914d64-fe25-4423-9123-1a1fdd6e58fb@sisrv6>

> where func can be both "_general_function" and
> "_weighted_general_function", is not correct.
>
> M = sigma^2 I          ok unit weights
>
> M = W^(-1)             your case, W has the estimates of the error covariance
>
> M = sigma^2 W^(-1)     I think this is what curve_fit uses, and what is in
> (my) textbooks defined as weighted least squares
>
> Do we use definition 2 or 3 by default? both are reasonable
>
> 3 is what I expected when I looked at curve_fit
> 2 might be more useful for two stage estimation, but I didn't have
> time to check the details

Ehmm, no, curve_fit does not use def 3, as the errors would scale with W,
but they don't. By the way, it does not have the correct units.

Curve_fit calculates

M = W \sigma^2 W^(-1) = \sigma^2

so it gives exactly the same results of case 1, irrespective the W's.

Gianfranco

From josef.pktd at gmail.com Tue Sep 27 10:20:07 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 27 Sep 2011 10:20:07 -0400
Subject: [SciPy-Dev] On the leastsq/curve_fit method
In-Reply-To: References: Message-ID:

On Tue, Sep 27, 2011 at 5:17 AM, Gianfranco Durin wrote:
> [Gianfranco's message of Tue, 27 Sep 2011 11:17, quoted in full above, trimmed]
The word weight often implies that only the relative weights are known (``point two is twice as important as point one'') in which case there is apparently an unknown overall normalization factor. Unfortunately the parameter errors coming out of such a fit will be proportional to this factor, and the user must be aware of this in the formulation of his problem. ''' (I don't quite understand the last sentence.) M = ?^2 W^(-1), where ?^2 is estimated by residual sum of squares from weighted regression. W only specifies relative errors, the assumption is that the covariance matrix of the errors is *proportional* to W. The scaling is arbitrary. If the scale of W changes, then the estimated residual sum of squares from weighted regression will compensate for it. So, rescaling W has no effect on the covariance of the parameter estimates. I checked in Greene: Econometric Analysis, and briefly looked at the SAS description. It looks like weighted least squares is always with automatic scaling, W is defined only as relative weights. All I seem to be able to find is weighted least squares with automatic scaling (except for maybe some two-step estimators). > > Curve_fit calculates > > M = W \sigma^2 W^(-1) = \sigma^2 > If I remember correctly (calculated from the transformed model) it should be: the cov of the parameter estimates is s^2 (X'WX)^(-1) error estimates should be s^2 * W where W = diag(1/curvefit_sigma**2) unfortunate terminology for curve_fit's sigma or intentional ? (as I mentioned in the other thread) Josef > > so it gives exactly the same results of case 1, irrespective the W's. This > is why the errors do not scale with W. > > Gianfranco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Tue Sep 27 12:53:52 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 27 Sep 2011 11:53:52 -0500 Subject: [SciPy-Dev] On the leastsq/curve_fit method In-Reply-To: References: Message-ID: On Tue, Sep 27, 2011 at 9:20 AM, wrote: > > > On Tue, Sep 27, 2011 at 5:17 AM, Gianfranco Durin wrote: > >> >> > where func can be both "_general_function" and >> > "_weighted_general_function", is not correct. >> > >> > >> > M = ? 2 I ok unit weights >> > >> > M = W^(-1) your case, W has the estimates of the error covariance >> > >> > M = ? 2 W^(-1) I think this is what curve_fit uses, and what is in >> > (my) textbooks defined as weighted least squares >> > >> > Do we use definition 2 or 3 by default? both are reasonable >> > >> > 3 is what I expected when I looked at curve_fit >> > 2 might be more useful for two stage estimation, but I didn't have >> > time to check the details >> >> Ehmm, no, curve_fit does not use def 3, as the errors would scale with W, >> but they don't. By the way, it does not have the correct units. >> > > http://wwwasdoc.web.cern.ch/wwwasdoc/minuit/node31.html > > ''' > The minimization of [image: $ \chi^{2}_{}$] above is sometimes called *weighted > least squares* in which case the inverse quantities 1/*e*2 are called the > weights. Clearly this is simply a different word for the same thing, but in > practice the use of these words sometimes means that the interpretation of > *e*2 as variances or squared errors is not straightforward. 
Gianfranco,

Can you please provide some Python and R (or SAS) code to show what you mean?

In the linear example below, I get agreement between SAS and curve_fit with
and without defining a weight. I do know that the immediate result from
leastsq does not agree with values from SAS. Thus, with my limited knowledge,
I consider that the documentation is correct.

import numpy as np
from scipy.optimize import curve_fit
from scipy.optimize import leastsq

x=np.array([200, 400, 300, 400, 200, 300, 300, 400, 200, 400, 200, 300], dtype=float)
y=np.array([28, 75, 37, 53, 22, 58, 40, 96, 34, 52, 30, 69], dtype=float)
w=np.array([0.001168822, 0.000205547, 0.000408122, 0.000205547, 0.001168822,
            0.000408122, 0.000408122, 0.000205547, 0.001168822, 0.000205547,
            0.001168822, 0.000408122])
sw=1/np.sqrt(w)

def linfunc(x, a, b):
    return a + b*x

popt, pcov = curve_fit(linfunc, x, y)
wopt, wcov = curve_fit(linfunc, x, y, sigma=sw)

Output:

No weighted form usage of curve_fit which matches SAS:
a= -11.25 with SE= 15.88585587
b=0.2025 with SE=0.05109428

Weighted form if I understand the difference between how weights need to be
specified so the results match SAS:
a= -12.91438 with SE= 11.09325
b=0.20831 with SE= 0.04342

Bruce

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pivanov314 at gmail.com Thu Sep 29 01:18:38 2011
From: pivanov314 at gmail.com (Paul Ivanov)
Date: Wed, 28 Sep 2011 22:18:38 -0700
Subject: [SciPy-Dev] lazy imports feedback request
Message-ID:

Friends and uncles,

I'm seeking feedback about some lazy import functionality I'm trying to
implement in nitime.
From pivanov314 at gmail.com  Thu Sep 29 01:18:38 2011
From: pivanov314 at gmail.com (Paul Ivanov)
Date: Wed, 28 Sep 2011 22:18:38 -0700
Subject: [SciPy-Dev] lazy imports feedback request
Message-ID:

Friends and uncles,

I'm seeking feedback about some lazy import functionality I'm trying to
implement in nitime. In short, "foo = LazyImport('foo')" returns something
that will act as foo if you did just "import foo" - but only perform the
import when foo is actually used, not when it is defined.

I'll outline the highlights in this email, but you can see the full
four-leaf clover pull request discussion here:
https://github.com/nipy/nitime/pull/88

Here's the code, with an explanatory docstring

class LazyImport(object):
    """
    This class takes the module name as a parameter, and acts as a proxy for
    that module, importing it only when the module is used, but effectively
    acting as the module in every other way (including inside IPython with
    respect to introspection and tab completion) with the *exception* of
    reload().

    >>> mlab = LazyImport('matplotlib.mlab')

    No import happens on the above line, until we do something like call an
    mlab method or try to do tab completion or introspection on mlab
    in IPython.

    >>> mlab
    <module 'matplotlib.mlab' will be lazily loaded>

    Now the LazyImport will do an actual import, and call the dist function
    of the imported module.

    >>> mlab.dist(1969,2011)
    42.0
    >>> mlab
    <module 'matplotlib.mlab' from '...'>
    """
    def __init__(self, modname):
        self.__lazyname__ = modname
    def __getattribute__(self, x):
        # This method will be called only once
        name = object.__getattribute__(self, '__lazyname__')
        module = __import__(name, fromlist=name.split('.'))
        # Now that we've done the import, cut out the middleman
        class LoadedLazyImport(object):
            __getattribute__ = module.__getattribute__
            __repr__ = module.__repr__
        object.__setattr__(self, '__class__', LoadedLazyImport)
        return module.__getattribute__(x)
    def __repr__(self):
        return "<module '%s' will be lazily loaded>" %\
            object.__getattribute__(self, '__lazyname__')

The win is that you get a faster import and a smaller memory footprint by
not loading those modules which your code path ends up not using.

The functionality of lazyimports.LazyImport is generic enough to allow the
lazily imported module to act as the module in almost every way (tab
completion, introspection for docstrings and sources), except that reloading
is not supported.

Although I have not spent a lot of time trying to make reloading work, I
think it is not a big deal, because here the intent is to lazily load
external packages (matplotlib, scipy, and numpy's nosetools) which are not
meant to be developed simultaneously with the project wishing to load them
in a lazy manner.

A difference with another strategy, such as the one Keith Goodman highlighted
in a thread announcing Bottleneck 0.4.1 on [SciPy-User] back in March, which
looks like this:

# http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Import_Statement_Overhead
email = None
def parse_email():
    global email
    if email is None:
        import email

is that LazyImport keeps you from having to check if a given module has been
lazily loaded in every place that it is used.

Fernando Perez suggested I contact this list because he recalls similar
functionality being implemented in scipy at some point, and then removed
because it was problematic.

thanks a lot in advance for any feedback. I'm cc-ing my primary address in
the hope that your mail reader will do the same.

best,
--
Paul Ivanov
_______________________________________________
SciPy-Dev mailing list
SciPy-Dev at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-dev
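(A quick way to convince yourself that the import really is deferred,
assuming the LazyImport class above has been pasted into a fresh session and
matplotlib is installed; this check is an illustration and is not part of
the nitime pull request.)

import sys

mlab = LazyImport('matplotlib.mlab')
print('matplotlib.mlab' in sys.modules)   # False: nothing has been imported yet
mlab.dist(1969, 2011)                     # first attribute access triggers __import__
print('matplotlib.mlab' in sys.modules)   # True: the real module is now loaded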
From ralf.gommers at googlemail.com  Fri Sep 30 04:21:22 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Fri, 30 Sep 2011 10:21:22 +0200
Subject: [SciPy-Dev] lazy imports feedback request
In-Reply-To:
References:
Message-ID:

On Thu, Sep 29, 2011 at 7:18 AM, Paul Ivanov wrote:

> Friends and uncles,
>
> I'm seeking feedback about some lazy import functionality I'm trying to
> implement in nitime. In short, "foo = LazyImport('foo')" returns something
> that will act as foo if you did just "import foo" - but only perform the
> import when foo is actually used, not when it is defined.
>
> [...]
>
> Fernando Perez suggested I contact this list because he recalls similar
> functionality being implemented in scipy at some point, and then removed
> because it was problematic.
>
Until recently this was indeed in scipy, using the PackageLoader in
numpy/_import_tools.py. You will likely run into strange issues like
http://projects.scipy.org/scipy/ticket/1501 at some point, so you have to
have a very good use-case to justify this kind of lazy loading. For scipy
that wasn't really the case. Plus the code you have to write to get the
functionality you want is quite ugly...

Cheers,
Ralf
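(The reload() limitation Paul mentions is easy to demonstrate, and is typical
of the corner cases that make lazy-loading proxies fragile. This snippet is a
generic illustration, not the contents of ticket 1501, and assumes the
LazyImport class from Paul's message, an installed matplotlib, and Python 2's
builtin reload.)

mlab = LazyImport('matplotlib.mlab')
print(type(mlab))             # a LazyImport instance, never a real module object
print(mlab.dist(1969, 2011))  # 42.0; first use triggers the actual import
reload(mlab)                  # TypeError: reload() argument must be a module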
From jsilter at gmail.com  Fri Sep 30 13:52:17 2011
From: jsilter at gmail.com (Jacob Silterra)
Date: Fri, 30 Sep 2011 13:52:17 -0400
Subject: [SciPy-Dev] Function for finding relative maxima of a 1d array
Message-ID:

Hello all,

I just opened a pull request to add scipy.signal._peak_finding.find_peaks,
which finds the relative maxima in a 1d ndarray. The algorithm behind this
function has been discussed on the scipy list, but here's a quick recap:
the idea is basically to convolve the function with a wavelet of varying
widths (the default is the ricker, or "mexican hat", wavelet) and identify
as peaks those locations which are relative maxima (meaning they are larger
than their nearest neighbor on each side) at each of several widths, and
which have a sufficiently high SNR. More details are provided in the
docstrings of the functions.

Pull request at https://github.com/scipy/scipy/pull/85. I believe the code
quality/docstrings/test coverage are pretty good; however, it would be of
great benefit for someone with experience with similar algorithms to review
the code.

-Jacob

PS I've sent this to both the scipy and numpy lists, apologies to those who
get it twice.
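(The description above is enough to sketch the idea in plain numpy: convolve
the signal with Ricker wavelets of several widths and keep the points that
are relative maxima of the transform at every width. The function and
variable names below are invented, the slack handling is a crude stand-in
for the ridge-line tracking and SNR test mentioned in the message, and none
of this is the code under review in the pull request.)

import numpy as np

def ricker(points, a):
    # zero-mean "mexican hat" wavelet sampled on `points` points, width `a`
    t = np.arange(points) - (points - 1) / 2.0
    A = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def relmax(c):
    # points strictly larger than both neighbours
    m = np.zeros(len(c), dtype=bool)
    m[1:-1] = (c[1:-1] > c[:-2]) & (c[1:-1] > c[2:])
    return m

def rough_peaks(y, widths=(3, 5, 8, 12), slack=2):
    keep = np.ones(len(y), dtype=bool)
    for a in widths:
        c = np.convolve(y, ricker(min(10 * a, len(y)), a), mode='same')
        m = relmax(c)
        d = m.copy()                      # allow the maximum to drift a few samples
        for s in range(1, slack + 1):
            d[s:] |= m[:-s]
            d[:-s] |= m[s:]
        keep &= d
    return np.nonzero(keep)[0]

x = np.arange(200.0)
y = np.exp(-(x - 60)**2 / 50.0) + 0.7 * np.exp(-(x - 140)**2 / 18.0)
print(rough_peaks(y))   # should include indices near 60 and 140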