From ralf.gommers at googlemail.com Sat Jan 1 06:47:54 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 1 Jan 2011 19:47:54 +0800 Subject: [SciPy-Dev] Bug in KDTree In-Reply-To: References: Message-ID: On Sat, Jan 1, 2011 at 3:30 AM, Benjamin Root wrote: > > > On Fri, Dec 10, 2010 at 4:54 PM, Benjamin Root wrote: > >> Hello, >> >> I came across an issue using KDTree. I searched the bug list, and it >> appears that there has already been a patch submitted for review. I have >> added comments to the bug report and I think the patch is good to go. I am >> not an expert in KDTree myself, but I think the logic is sound. >> >> http://projects.scipy.org/scipy/ticket/1052 >> >> If the first node of the tree is a leafnode, then query_pairs() will fail >> if the points are closer than the distance threshold. This is because the >> algorithm is assuming that the first node will always be an innernode. By >> rearranging the if-statements so that a check for a leafnode is done first, >> you protect against such a situation. >> >> At least, that is my understanding. I welcome comments on this. >> >> Thanks, >> Ben Root >> > > I am re-pinging this. Is there any chance this will make it to the > upcoming release? I am very certain the patch given 6 months ago is > correct. > > It can go in I think. I'll check and apply before RC1 if no one beats me to it. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Sat Jan 8 16:40:19 2011 From: ben.root at ou.edu (Benjamin Root) Date: Sat, 8 Jan 2011 15:40:19 -0600 Subject: [SciPy-Dev] Bug in KDTree In-Reply-To: References: Message-ID: On Sat, Jan 1, 2011 at 5:47 AM, Ralf Gommers wrote: > > > On Sat, Jan 1, 2011 at 3:30 AM, Benjamin Root wrote: > >> >> >> On Fri, Dec 10, 2010 at 4:54 PM, Benjamin Root wrote: >> >>> Hello, >>> >>> I came across an issue using KDTree. I searched the bug list, and it >>> appears that there has already been a patch submitted for review. I have >>> added comments to the bug report and I think the patch is good to go. I am >>> not an expert in KDTree myself, but I think the logic is sound. >>> >>> http://projects.scipy.org/scipy/ticket/1052 >>> >>> If the first node of the tree is a leafnode, then query_pairs() will fail >>> if the points are closer than the distance threshold. This is because the >>> algorithm is assuming that the first node will always be an innernode. By >>> rearranging the if-statements so that a check for a leafnode is done first, >>> you protect against such a situation. >>> >>> At least, that is my understanding. I welcome comments on this. >>> >>> Thanks, >>> Ben Root >>> >> >> I am re-pinging this. Is there any chance this will make it to the >> upcoming release? I am very certain the patch given 6 months ago is >> correct. >> >> It can go in I think. I'll check and apply before RC1 if no one beats me > to it. > > Cheers, > Ralf > > This still hasn't been committed yet... Sorry for being pestering, but I hate monkey-patching... Ben Root -------------- next part -------------- An HTML attachment was scrubbed... 
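A minimal reproducer of the situation described in the ticket (a sketch, assuming scipy.spatial's KDTree — not the patch itself; with only two points the root of the tree is itself a leafnode, which is exactly the case the unpatched query_pairs() mishandles):

import numpy as np
from scipy.spatial import KDTree

pts = np.array([[0.0, 0.0], [0.1, 0.1]])  # closer than the query radius
tree = KDTree(pts)                        # tiny tree: the root is a leafnode
# Unpatched, this raised because the traversal assumed the first node was
# an innernode; with the leafnode check done first it returns set([(0, 1)]).
print(tree.query_pairs(r=1.0))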
URL: From ralf.gommers at googlemail.com Mon Jan 10 08:41:24 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 10 Jan 2011 21:41:24 +0800 Subject: [SciPy-Dev] Bug in KDTree In-Reply-To: References: Message-ID: On Sun, Jan 9, 2011 at 5:40 AM, Benjamin Root wrote: > On Sat, Jan 1, 2011 at 5:47 AM, Ralf Gommers wrote: > >> >> >> On Sat, Jan 1, 2011 at 3:30 AM, Benjamin Root wrote: >> >>> >>> >>> On Fri, Dec 10, 2010 at 4:54 PM, Benjamin Root wrote: >>> >>>> Hello, >>>> >>>> I came across an issue using KDTree. I searched the bug list, and it >>>> appears that there has already been a patch submitted for review. I have >>>> added comments to the bug report and I think the patch is good to go. I am >>>> not an expert in KDTree myself, but I think the logic is sound. >>>> >>>> http://projects.scipy.org/scipy/ticket/1052 >>>> >>>> If the first node of the tree is a leafnode, then query_pairs() will >>>> fail if the points are closer than the distance threshold. This is because >>>> the algorithm is assuming that the first node will always be an innernode. >>>> By rearranging the if-statements so that a check for a leafnode is done >>>> first, you protect against such a situation. >>>> >>>> At least, that is my understanding. I welcome comments on this. >>>> >>>> Thanks, >>>> Ben Root >>>> >>> >>> I am re-pinging this. Is there any chance this will make it to the >>> upcoming release? I am very certain the patch given 6 months ago is >>> correct. >>> >>> It can go in I think. I'll check and apply before RC1 if no one beats me >> to it. >> >> Cheers, >> Ralf >> >> > This still hasn't been committed yet... > > It's still 6 days to RC1 and I came home from holiday + business trip last night..... don't worry just yet. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Mon Jan 10 22:48:06 2011 From: ben.root at ou.edu (Benjamin Root) Date: Mon, 10 Jan 2011 21:48:06 -0600 Subject: [SciPy-Dev] Bug in KDTree In-Reply-To: References: Message-ID: On Mon, Jan 10, 2011 at 7:41 AM, Ralf Gommers wrote: > > > On Sun, Jan 9, 2011 at 5:40 AM, Benjamin Root wrote: > >> On Sat, Jan 1, 2011 at 5:47 AM, Ralf Gommers > > wrote: >> >>> >>> >>> On Sat, Jan 1, 2011 at 3:30 AM, Benjamin Root wrote: >>> >>>> >>>> >>>> On Fri, Dec 10, 2010 at 4:54 PM, Benjamin Root wrote: >>>> >>>>> Hello, >>>>> >>>>> I came across an issue using KDTree. I searched the bug list, and it >>>>> appears that there has already been a patch submitted for review. I have >>>>> added comments to the bug report and I think the patch is good to go. I am >>>>> not an expert in KDTree myself, but I think the logic is sound. >>>>> >>>>> http://projects.scipy.org/scipy/ticket/1052 >>>>> >>>>> If the first node of the tree is a leafnode, then query_pairs() will >>>>> fail if the points are closer than the distance threshold. This is because >>>>> the algorithm is assuming that the first node will always be an innernode. >>>>> By rearranging the if-statements so that a check for a leafnode is done >>>>> first, you protect against such a situation. >>>>> >>>>> At least, that is my understanding. I welcome comments on this. >>>>> >>>>> Thanks, >>>>> Ben Root >>>>> >>>> >>>> I am re-pinging this. Is there any chance this will make it to the >>>> upcoming release? I am very certain the patch given 6 months ago is >>>> correct. >>>> >>>> It can go in I think. I'll check and apply before RC1 if no one beats me >>> to it. 
>>> >>> Cheers, >>> Ralf >>> >>> >> This still hasn't been committed yet... >> >> It's still 6 days to RC1 and I came home from holiday + business trip last > night..... don't worry just yet. > > Ralf > No problem. Just making sure that it gets in eventually. Thanks for your help! Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Jan 11 10:32:39 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 11 Jan 2011 23:32:39 +0800 Subject: [SciPy-Dev] review request - test noise clean up Message-ID: Hi, I cleaned most of the noise in the test output (only stats module and a few random ones left): https://github.com/rgommers/scipy/tree/testnoise Can someone please check this and confirm that this can go into trunk as well as 0.9.x? Especially for the scipy.special commit. There are a few warnings that I'll only silence in 0.9.x because they indicate actual problems or should be fixed elsewhere (like astype() in numpy: http://projects.scipy.org/numpy/ticket/1709). For the ComplexWarning filter, is it preferable to be really granular with linenos like warnings.filterwarnings('ignore', category=np.ComplexWarning, lineno=98) or just filter it out of the whole module? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Jan 11 10:44:41 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Jan 2011 08:44:41 -0700 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers wrote: > Hi, > > I cleaned most of the noise in the test output (only stats module and a few > random ones left): https://github.com/rgommers/scipy/tree/testnoise > Can someone please check this and confirm that this can go into trunk as > well as 0.9.x? Especially for the scipy.special commit. > > There are a few warnings that I'll only silence in 0.9.x because they > indicate actual problems or should be fixed elsewhere (like astype() in > numpy: http://projects.scipy.org/numpy/ticket/1709). > > For the ComplexWarning filter, is it preferable to be really granular with > linenos like > > warnings.filterwarnings('ignore', category=np.ComplexWarning, lineno=98) > or just filter it out of the whole module? > > Are the complex warnings for things that could be easily fixed? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Jan 12 05:18:12 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 12 Jan 2011 18:18:12 +0800 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers > wrote: > >> Hi, >> >> I cleaned most of the noise in the test output (only stats module and a >> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >> Can someone please check this and confirm that this can go into trunk as >> well as 0.9.x? Especially for the scipy.special commit. >> >> There are a few warnings that I'll only silence in 0.9.x because they >> indicate actual problems or should be fixed elsewhere (like astype() in >> numpy: http://projects.scipy.org/numpy/ticket/1709). 
>> >> For the ComplexWarning filter, is it preferable to be really granular with >> linenos like >> >> warnings.filterwarnings('ignore', category=np.ComplexWarning, lineno=98) >> or just filter it out of the whole module? >> >> > Are the complex warnings for things that could be easily fixed? > > After looking again, yes: https://github.com/rgommers/scipy/commit/fdf18166. They were calls to fblas funcs with the wrong dtype, obscured a bit by subclassing the test classes. So the only ones left are for ndarray.astype(), which should be fixed in numpy. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Jan 12 05:25:32 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 12 Jan 2011 18:25:32 +0800 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 6:18 PM, Ralf Gommers wrote: > > > On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers < >> ralf.gommers at googlemail.com> wrote: >> >>> Hi, >>> >>> I cleaned most of the noise in the test output (only stats module and a >>> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >>> Can someone please check this and confirm that this can go into trunk as >>> well as 0.9.x? Especially for the scipy.special commit. >>> >>> There are a few warnings that I'll only silence in 0.9.x because they >>> indicate actual problems or should be fixed elsewhere (like astype() in >>> numpy: http://projects.scipy.org/numpy/ticket/1709). >>> >>> For the ComplexWarning filter, is it preferable to be really granular >>> with linenos like >>> >>> warnings.filterwarnings('ignore', category=np.ComplexWarning, lineno=98) >>> or just filter it out of the whole module? >>> >>> >> Are the complex warnings for things that could be easily fixed? >> >> After looking again, yes: > https://github.com/rgommers/scipy/commit/fdf18166. They were calls to > fblas funcs with the wrong dtype, obscured a bit by subclassing the test > classes. So the only ones left are for ndarray.astype(), which should be > fixed in numpy. > > What's the reason for the almost identical test_fblas.py files in lib/blas/tests and linalg/tests by the way? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Jan 12 05:49:55 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 12 Jan 2011 10:49:55 +0000 (UTC) Subject: [SciPy-Dev] review request - test noise clean up References: Message-ID: Wed, 12 Jan 2011 18:18:12 +0800, Ralf Gommers wrote: > https://github.com/rgommers/scipy/commit/fdf18166. They were calls to > fblas funcs with the wrong dtype, obscured a bit by subclassing the test > classes. So the only ones left are for ndarray.astype(), which should be > fixed in numpy. np.array(1+2j).astype(np.float64) IMHO should still raise a ComplexWarning. Cast, even explicit, from complex to float should strictly speaking be undefined. One should do np.array(1+2j).real.astype(np.float64) to be explicit. 
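A quick session that makes the distinction concrete (illustrative only; promoting the warning to an error just makes it impossible to miss):

import warnings
import numpy as np

warnings.simplefilter('error', np.ComplexWarning)

z = np.array([1 + 2j])
try:
    z.astype(np.float64)           # implicit discard -> ComplexWarning
except np.ComplexWarning as exc:
    print('caught: %s' % exc)

print(z.real.astype(np.float64))   # explicit: take the real part first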
-- Pauli Virtanen From josef.pktd at gmail.com Wed Jan 12 08:05:30 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Jan 2011 08:05:30 -0500 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 5:49 AM, Pauli Virtanen wrote: > Wed, 12 Jan 2011 18:18:12 +0800, Ralf Gommers wrote: >> https://github.com/rgommers/scipy/commit/fdf18166. They were calls to >> fblas funcs with the wrong dtype, obscured a bit by subclassing the test >> classes. So the only ones left are for ndarray.astype(), which should be >> fixed in numpy. > > np.array(1+2j).astype(np.float64) > > IMHO should still raise a ComplexWarning. Cast, even explicit, from > complex to float should strictly speaking be undefined. One should do > > np.array(1+2j).real.astype(np.float64) Isn't .astype always an explicit request to (down)cast ? np.array(1.).astype(int) np.array(1+2j).astype(int) I don't think numpy has any other function that casts to int for example. (round to closest integer and cast) Just for consistency of astype, astype looks a bit dangerous to use for downcasting, as substitute for real_if_close. Josef > > to be explicit. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From ralf.gommers at googlemail.com Wed Jan 12 09:37:51 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 12 Jan 2011 22:37:51 +0800 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 6:25 PM, Ralf Gommers wrote: > > > On Wed, Jan 12, 2011 at 6:18 PM, Ralf Gommers > wrote: > >> >> >> On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers < >>> ralf.gommers at googlemail.com> wrote: >>> >>>> Hi, >>>> >>>> I cleaned most of the noise in the test output (only stats module and a >>>> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >>>> Can someone please check this and confirm that this can go into trunk as >>>> well as 0.9.x? Especially for the scipy.special commit. >>>> >>>> There are a few warnings that I'll only silence in 0.9.x because they >>>> indicate actual problems or should be fixed elsewhere (like astype() in >>>> numpy: http://projects.scipy.org/numpy/ticket/1709). >>>> >>>> For the ComplexWarning filter, is it preferable to be really granular >>>> with linenos like >>>> >>>> warnings.filterwarnings('ignore', category=np.ComplexWarning, lineno=98) >>>> or just filter it out of the whole module? >>>> >>>> >>> Are the complex warnings for things that could be easily fixed? >>> >>> After looking again, yes: >> https://github.com/rgommers/scipy/commit/fdf18166. They were calls to >> fblas funcs with the wrong dtype, obscured a bit by subclassing the test >> classes. So the only ones left are for ndarray.astype(), which should be >> fixed in numpy. >> >> > Actually, there's one left that may be a bug. It happens in linalg.qr(), the failing test is TestQR.test_random_complex() (and a similar one in decomp_schur). But I can't reproduce it standalone, only if I run all linalg tests. Could you perhaps have a look? The other ones I'd like your opinion on is the fitpack warnings below. Not sure what's going on there. 
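One way to corner warnings like these outside the full test run is to capture them with the warnings machinery (a sketch with made-up toy data; SmoothBivariateSpline and the deliberately tiny s are only assumptions chosen to provoke fitpack, not taken from the failing tests):

import warnings
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.RandomState(0)
x, y = rng.rand(50), rng.rand(50)
z = x * y

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    SmoothBivariateSpline(x, y, z, s=1e-12)  # s likely too small for this data

for w in caught:
    print('%s: %s' % (w.category.__name__, str(w.message)[:70]))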
Ralf Running unit tests for scipy NumPy version 1.5.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.10.0.dev SciPy is installed in /Users/rgommers/Code/scipy/scipy Python version 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32) [GCC 4.0.1 (Apple Inc. build 5493)] nose version 0.11.1 ....................................................................................................................................................................................................................K.............................................................................../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:673: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:604: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ......................K..K......................................................................................................................................../Users/rgommers/Code/scipy/scipy/io/matlab/mio.py:232: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions oned_as=oned_as) ............................................................................................................................................................................................................................................................................./Users/rgommers/Code/scipy/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning) /Users/rgommers/Code/scipy/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood warnings.warn("chunk not understood", WavFileWarning) ...............................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS.................................................................S.................................................../Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:91: ComplexWarning: Casting complex values to real discards the imaginary part qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:125: ComplexWarning: Casting complex values to real discards the imaginary part Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) ......................../Users/rgommers/Code/scipy/scipy/linalg/decomp_schur.py:72: ComplexWarning: Casting complex values to real discards the imaginary part result = gees(lambda x: None, a, lwork=result[-2][0], overwrite_a=overwrite_a) 
[several thousand '.' (pass), 'K' (known failure) and 'S' (skip) progress markers elided]
----------------------------------------------------------------------
Ran 4815 tests in 83.378s

OK (KNOWNFAIL=12, SKIP=36)

What's the reason for the almost identical test_fblas.py files in
> lib/blas/tests and linalg/tests by the way?
>
> Ralf
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Wed Jan 12 12:29:48 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 12 Jan 2011 12:29:48 -0500
Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi
Message-ID:

I *finally* updated numpy and scipy.

For numpy 1.5.1, I used the files on pypi. When I installed scipy
0.9b1 it crashed (segfaulted) in the testsuite.

With the superpack installer for numpy from the sourceforge download,
scipy 0.9b1 passes all tests

Ran 4806 tests in 287.843s

OK (KNOWNFAIL=12, SKIP=35)

It looks like the non-superpack numpy files on pypi are not compatible
with scipy in my setup (win32, sse2, python 2.5.2).

Another question: are the clapack tests skipped on purpose? For example:

test_clapack_dsygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_1 Clapack empty, skip flapack test
test_clapack_dsygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_2 Clapack empty, skip flapack test
test_clapack_dsygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_3 Clapack empty, skip flapack test
test_clapack_ssygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_1 Clapack empty, skip flapack test
test_clapack_ssygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_2 Clapack empty, skip flapack test
test_clapack_ssygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_3 Clapack empty, skip flapack test

Josef

From bigorneault at gmail.com Wed Jan 12 17:23:48 2011
From: bigorneault at gmail.com (Thélesphonse Bigorneault)
Date: Wed, 12 Jan 2011 17:23:48 -0500
Subject: [SciPy-Dev] request of a FGMRES krylov solver
Message-ID:

I would like to have an FGMRES Krylov solver in scipy. FGMRES is a variant
of the GMRES method with right preconditioning that enables the use of a
different preconditioner at each step of the Arnoldi process.

A possibility could be to "borrow" the fgmres function from pyamg (which
has a compatible license):
http://code.google.com/p/pyamg/source/browse/trunk/pyamg/krylov/_fgmres.py

Including this code into scipy should be straightforward.

Télesphore Bigorneault
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Wed Jan 12 17:42:04 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 12 Jan 2011 17:42:04 -0500
Subject: [SciPy-Dev] speed of nosetests scipy.stats
Message-ID:

nosetests scipy.stats includes the "slow" tests, but if I remember
correctly they ran in around 7 minutes on my computer. Now I get:

Ran 1572 tests in 2479.594s

Is it slow for anyone else, or is it just because my computer is busy
with other things also? I gave up running my fit tests after about an
hour or so.

scipy 0.9b1, numpy 1.5.1 official installers

and what kind of operation produces
"Warning: invalid value encountered in subtract" ?
I get hundreds of these Josef From kwgoodman at gmail.com Wed Jan 12 17:52:40 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 12 Jan 2011 14:52:40 -0800 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 2:42 PM, wrote: > and what kind of operation produces > "Warning: invalid value encountered in subtract" ? > > I get hundreds of these http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html From josef.pktd at gmail.com Wed Jan 12 18:47:41 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Jan 2011 18:47:41 -0500 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: > On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: > >> and what kind of operation produces >> "Warning: invalid value encountered in subtract" ? >> >> I get hundreds of these > > http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html Yes, I will switch soon again to general ignore. But I might have a look at some of the reasons for these warnings in scipy.stats. I know zero division error, but I have no idea what's an invalid value for subtract. I thought we can subtract anything from anything that might be a number or "not a number" or infinite. Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Wed Jan 12 18:58:24 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Jan 2011 17:58:24 -0600 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 17:47, wrote: > On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: >> On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: >> >>> and what kind of operation produces >>> "Warning: invalid value encountered in subtract" ? >>> >>> I get hundreds of these >> >> http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html > > Yes, I will switch soon again to general ignore. But I might have a > look at some of the reasons for these warnings in scipy.stats. > > I know zero division error, but I have no idea what's an invalid value > for subtract. > I thought we can subtract anything from anything that might be a > number or "not a number" or infinite. It just means that a NaN popped up somewhere in the calculation when there was no NaN in the inputs, thus setting a floating point exception flag in your FPU, nothing more. Note that the message says "in subtract", not "for subtract". It's not a value judgment about your inputs. [~] |1> np.seterr(all='print') {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'} [~] |2> np.subtract(np.inf, np.inf) Warning: invalid value encountered in subtract nan [~] |3> np.subtract(np.nan, np.nan) nan -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Wed Jan 12 19:21:48 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Jan 2011 19:21:48 -0500 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 6:58 PM, Robert Kern wrote: > On Wed, Jan 12, 2011 at 17:47, ? 
wrote: >> On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: >>> On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: >>> >>>> and what kind of operation produces >>>> "Warning: invalid value encountered in subtract" ? >>>> >>>> I get hundreds of these >>> >>> http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html >> >> Yes, I will switch soon again to general ignore. But I might have a >> look at some of the reasons for these warnings in scipy.stats. >> >> I know zero division error, but I have no idea what's an invalid value >> for subtract. >> I thought we can subtract anything from anything that might be a >> number or "not a number" or infinite. > > It just means that a NaN popped up somewhere in the calculation when > there was no NaN in the inputs, thus setting a floating point > exception flag in your FPU, nothing more. Note that the message says > "in subtract", not "for subtract". It's not a value judgment about > your inputs. > > [~] > |1> np.seterr(all='print') > {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'} > > [~] > |2> np.subtract(np.inf, np.inf) > Warning: invalid value encountered in subtract > nan Thanks, I only tried >>> np.seterr() {'over': 'print', 'divide': 'print', 'invalid': 'print', 'under': 'ignore'} >>> np.inf - np.inf -1.#IND which is obviously something different from np.subtract A lot of nans in the calculations is not good news. I just wonder if some of it is because of the switch to logs in some calculations (logpdf) >>> 0**0 1 >>> 0*np.log(0) Warning: invalid value encountered in double_scalars nan I'm getting several of these warnings also. But since I had warnings turned off for so long, I don't know what the historical benchmark for this is. Josef > > [~] > |3> np.subtract(np.nan, np.nan) > nan > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Wed Jan 12 19:32:07 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Jan 2011 18:32:07 -0600 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 18:21, wrote: > On Wed, Jan 12, 2011 at 6:58 PM, Robert Kern wrote: >> On Wed, Jan 12, 2011 at 17:47, ? wrote: >>> On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: >>>> On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: >>>> >>>>> and what kind of operation produces >>>>> "Warning: invalid value encountered in subtract" ? >>>>> >>>>> I get hundreds of these >>>> >>>> http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html >>> >>> Yes, I will switch soon again to general ignore. But I might have a >>> look at some of the reasons for these warnings in scipy.stats. >>> >>> I know zero division error, but I have no idea what's an invalid value >>> for subtract. >>> I thought we can subtract anything from anything that might be a >>> number or "not a number" or infinite. >> >> It just means that a NaN popped up somewhere in the calculation when >> there was no NaN in the inputs, thus setting a floating point >> exception flag in your FPU, nothing more. Note that the message says >> "in subtract", not "for subtract". It's not a value judgment about >> your inputs. 
>> >> [~] >> |1> np.seterr(all='print') >> {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'} >> >> [~] >> |2> np.subtract(np.inf, np.inf) >> Warning: invalid value encountered in subtract >> nan > > Thanks, > > I only tried > >>>> np.seterr() > {'over': 'print', 'divide': 'print', 'invalid': 'print', 'under': 'ignore'} > >>>> np.inf - np.inf > -1.#IND > > which is obviously something different from np.subtract Yes. The operators on scalar objects tend to avoid the ufunc machinery, and it is the ufunc machinery that issues those errors. > A lot of nans in the calculations is not good news. I just wonder if > some of it is because of the switch to logs in some calculations > (logpdf) > >>>> 0**0 > 1 >>>> 0*np.log(0) > Warning: invalid value encountered in double_scalars > nan Could be. Try np.seterr(invalid='raise'), then you will get tracebacks. Or you can use np.seterr(invalid='call') and np.seterrcall() to set up a function that gets called that will print out the current stack trace (or just record it somewhere so you can aggregate things for easier analysis), but then continue on. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From jsseabold at gmail.com Wed Jan 12 19:56:26 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 12 Jan 2011 19:56:26 -0500 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 7:32 PM, Robert Kern wrote: > On Wed, Jan 12, 2011 at 18:21, ? wrote: >> On Wed, Jan 12, 2011 at 6:58 PM, Robert Kern wrote: >>> On Wed, Jan 12, 2011 at 17:47, ? wrote: >>>> On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: >>>>> On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: >>>>> >>>>>> and what kind of operation produces >>>>>> "Warning: invalid value encountered in subtract" ? >>>>>> >>>>>> I get hundreds of these >>>>> >>>>> http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html >>>> >>>> Yes, I will switch soon again to general ignore. But I might have a >>>> look at some of the reasons for these warnings in scipy.stats. >>>> >>>> I know zero division error, but I have no idea what's an invalid value >>>> for subtract. >>>> I thought we can subtract anything from anything that might be a >>>> number or "not a number" or infinite. >>> >>> It just means that a NaN popped up somewhere in the calculation when >>> there was no NaN in the inputs, thus setting a floating point >>> exception flag in your FPU, nothing more. Note that the message says >>> "in subtract", not "for subtract". It's not a value judgment about >>> your inputs. >>> >>> [~] >>> |1> np.seterr(all='print') >>> {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'} >>> >>> [~] >>> |2> np.subtract(np.inf, np.inf) >>> Warning: invalid value encountered in subtract >>> nan >> >> Thanks, >> >> I only tried >> >>>>> np.seterr() >> {'over': 'print', 'divide': 'print', 'invalid': 'print', 'under': 'ignore'} >> >>>>> np.inf - np.inf >> -1.#IND >> >> which is obviously something different from np.subtract > > Yes. The operators on scalar objects tend to avoid the ufunc > machinery, and it is the ufunc machinery that issues those errors. > >> A lot of nans in the calculations is not good news. 
I just wonder if >> some of it is because of the switch to logs in some calculations >> (logpdf) >> >>>>> 0**0 >> 1 >>>>> 0*np.log(0) >> Warning: invalid value encountered in double_scalars >> nan > > Could be. Try np.seterr(invalid='raise'), then you will get > tracebacks. Or you can use np.seterr(invalid='call') and > np.seterrcall() to set up a function that gets called that will print > out the current stack trace (or just record it somewhere so you can > aggregate things for easier analysis), but then continue on. > I usually do something like nosetests scipy.stats verbosity=3 &> test_output.txt to see where the warning are coming from. Might need to use 2> on Windows(?). I've also seen these for instance during optimizations with exp or log or some ufunc in the objective function, but the optimum is still reached so they're somewhat spurious (other times they weren't so harmless). Skipper PS Ran 1572 tests in 212.470s From josef.pktd at gmail.com Wed Jan 12 20:51:29 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Jan 2011 20:51:29 -0500 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 7:56 PM, Skipper Seabold wrote: > On Wed, Jan 12, 2011 at 7:32 PM, Robert Kern wrote: >> On Wed, Jan 12, 2011 at 18:21, ? wrote: >>> On Wed, Jan 12, 2011 at 6:58 PM, Robert Kern wrote: >>>> On Wed, Jan 12, 2011 at 17:47, ? wrote: >>>>> On Wed, Jan 12, 2011 at 5:52 PM, Keith Goodman wrote: >>>>>> On Wed, Jan 12, 2011 at 2:42 PM, ? wrote: >>>>>> >>>>>>> and what kind of operation produces >>>>>>> "Warning: invalid value encountered in subtract" ? >>>>>>> >>>>>>> I get hundreds of these >>>>>> >>>>>> http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054128.html >>>>> >>>>> Yes, I will switch soon again to general ignore. But I might have a >>>>> look at some of the reasons for these warnings in scipy.stats. >>>>> >>>>> I know zero division error, but I have no idea what's an invalid value >>>>> for subtract. >>>>> I thought we can subtract anything from anything that might be a >>>>> number or "not a number" or infinite. >>>> >>>> It just means that a NaN popped up somewhere in the calculation when >>>> there was no NaN in the inputs, thus setting a floating point >>>> exception flag in your FPU, nothing more. Note that the message says >>>> "in subtract", not "for subtract". It's not a value judgment about >>>> your inputs. >>>> >>>> [~] >>>> |1> np.seterr(all='print') >>>> {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'} >>>> >>>> [~] >>>> |2> np.subtract(np.inf, np.inf) >>>> Warning: invalid value encountered in subtract >>>> nan >>> >>> Thanks, >>> >>> I only tried >>> >>>>>> np.seterr() >>> {'over': 'print', 'divide': 'print', 'invalid': 'print', 'under': 'ignore'} >>> >>>>>> np.inf - np.inf >>> -1.#IND >>> >>> which is obviously something different from np.subtract >> >> Yes. The operators on scalar objects tend to avoid the ufunc >> machinery, and it is the ufunc machinery that issues those errors. >> >>> A lot of nans in the calculations is not good news. I just wonder if >>> some of it is because of the switch to logs in some calculations >>> (logpdf) >>> >>>>>> 0**0 >>> 1 >>>>>> 0*np.log(0) >>> Warning: invalid value encountered in double_scalars >>> nan >> >> Could be. Try np.seterr(invalid='raise'), then you will get >> tracebacks. 
Or you can use np.seterr(invalid='call') and >> np.seterrcall() to set up a function that gets called that will print >> out the current stack trace (or just record it somewhere so you can >> aggregate things for easier analysis), but then continue on. >> > > I usually do something like > > nosetests scipy.stats verbosity=3 &> test_output.txt > > to see where the warning are coming from. ?Might need to use 2> on > Windows(?). ?I've also seen these for instance during optimizations > with exp or log or some ufunc in the objective function, but the > optimum is still reached so they're somewhat spurious (other times > they weren't so harmless). harmless or not, that's what I would like to know, now that I see the warnings. It looks like the main problems are in fit and optimization. Since all the tests pass, it doesn't look like there are any problems (in the nice test cases), but fit() still needs work. Robert, thanks for the tip with raise using scipy.test(label="slow") with 427 tests, I get ====================================================================== ERROR: test_continuous_extra.test_cont_extra(, (1.8771398388773268,), 'lomax loc, scale test') ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.p y", line 183, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_continuous _extra.py", line 78, in check_loc_scale m,v = distfn.stats(*arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1460, in stats mu, mu2, g1, g2 = self._stats(*args) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 3978, in _stats mu, mu2, g1, g2 = pareto.stats(c, loc=-1.0, moments='mvsk') File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1458, in stats mu, mu2, g1, g2 = self._stats(*args,**{'moments':moments}) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 3939, in _stats vals = 2*(bt+1.0)*sqrt(b-2.0)/((b-3.0)*sqrt(b)) FloatingPointError: invalid value encountered in sqrt ====================================================================== ERROR: test_fit (test_distributions.TestFitMethod) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_distributi ons.py", line 404, in test_fit vals2 = distfunc.fit(res, optimizer='powell') File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1711, in fit vals = optimizer(func,x0,args=(ravel(data),),disp=0) File "C:\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 1519, in fmin_powell fval, x, direc1 = _linesearch_powell(func, x, direc1, tol=xtol*100) File "C:\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 1418, in _linesearch_powell alpha_min, fret, iter, num = brent(myfunc, full_output=1, tol=tol) File "C:\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 1241, in brent brent.optimize() File "C:\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 1142, in optimize tmp2 = (x-v)*(fx-fw) FloatingPointError: invalid value encountered in double_scalars ====================================================================== ERROR: test_fix_fit (test_distributions.TestFitMethod) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_distributi ons.py", line 424, in test_fix_fit vals = distfunc.fit(res,floc=0) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1711, in fit vals = optimizer(func,x0,args=(ravel(data),),disp=0) File "C:\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 280, in fmin and max(abs(fsim[0]-fsim[1:])) <= ftol): FloatingPointError: invalid value encountered in subtract ---------------------------------------------------------------------- Ran 427 tests in 910.359s the non-slow test seem to have nans only when they are supposed to, except for the pareto.stats ====================================================================== ERROR: Failure: FloatingPointError (invalid value encountered in sqrt) ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\loader .py", line 224, in generate for test in g(): File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_continuous _basic.py", line 171, in test_cont_basic m,v = distfn.stats(*arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1460, in stats mu, mu2, g1, g2 = self._stats(*args) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 3978, in _stats mu, mu2, g1, g2 = pareto.stats(c, loc=-1.0, moments='mvsk') File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 1458, in stats mu, mu2, g1, g2 = self._stats(*args,**{'moments':moments}) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\distributions.py", li ne 3939, in _stats vals = 2*(bt+1.0)*sqrt(b-2.0)/((b-3.0)*sqrt(b)) FloatingPointError: invalid value encountered in sqrt ====================================================================== ERROR: Check nanmean when all values are nan. ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_stats.py", line 161, in test_nanmean_all m = stats.nanmean(self.Xall) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\stats.py", line 274, in nanmean return np.mean(x,axis)/factor FloatingPointError: invalid value encountered in double_scalars ====================================================================== ERROR: Check nanstd when all values are nan. 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_stats.py", line 176, in test_nanstd_all s = stats.nanstd(self.Xall) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\stats.py", line 323, in nanstd m1 = np.sum(x,axis)/n FloatingPointError: invalid value encountered in double_scalars ====================================================================== ERROR: test_stats.test_ttest_rel ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.p y", line 183, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_stats.py", line 1355, in test_ttest_rel assert_almost_equal(stats.ttest_rel([0,0,0], [0,0,0]), (1.0, 0.4226497308103 7421)) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\stats.py", line 3002, in ttest_rel t = dm / np.sqrt(v/float(n)) FloatingPointError: invalid value encountered in double_scalars ====================================================================== ERROR: test_stats.test_ttest_ind ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.p y", line 183, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_stats.py", line 1396, in test_ttest_ind assert_almost_equal(stats.ttest_ind([0,0,0], [0,0,0]), (1.0, 0.3739009663000 5898)) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\stats.py", line 2921, in ttest_ind t = d/np.sqrt(svar*(1.0/n1 + 1.0/n2)) FloatingPointError: invalid value encountered in double_scalars ====================================================================== ERROR: test_stats.test_ttest_1samp_new ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.p y", line 183, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\tests\test_stats.py", line 1435, in test_ttest_1samp_new assert_almost_equal(stats.ttest_1samp([0,0,0], 0), (1.0, 0.42264973081037421 )) File "C:\Programs\Python25\Lib\site-packages\scipy\stats\stats.py", line 2831, in ttest_1samp t = d / np.sqrt(v/float(n)) FloatingPointError: invalid value encountered in double_scalars ---------------------------------------------------------------------- Ran 969 tests in 59.281s FAILED (SKIP=1, errors=6) There are still some tests missing compared to nosetests, but I won't be able to set np.seterr for commandline nosetests, so Skipper's recommendation is still todo. There are still additional overflow and division warnings, but I think many of them will be expected. > > Skipper > > PS > > Ran 1572 tests in 212.470s Thanks for checking. In the meantime I found a runaway python process, most likely I hadn't killed the previous fit test completely, and it was using most of my (single) CPU time. So, that's cleared up. 
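Robert's 'call' suggestion, spelled out as a small recorder (a sketch — the handler name and stack depth are arbitrary; the point is that each event is logged with its stack instead of aborting the run the way invalid='raise' does):

import traceback
import numpy as np

events = []

def on_fpe(kind, flag):
    # numpy calls this for every floating-point error routed to 'call'
    events.append((kind, ''.join(traceback.format_stack(limit=6))))

np.seterrcall(on_fpe)
old = np.seterr(invalid='call')

np.subtract(np.inf, np.inf)   # inf - inf -> nan, recorded instead of raised
np.seterr(**old)              # restore the previous error state

print('%d invalid-value event(s) recorded' % len(events))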
Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Wed Jan 12 21:01:05 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Jan 2011 21:01:05 -0500 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 9:37 AM, Ralf Gommers wrote: > > > On Wed, Jan 12, 2011 at 6:25 PM, Ralf Gommers > wrote: >> >> >> On Wed, Jan 12, 2011 at 6:18 PM, Ralf Gommers >> wrote: >>> >>> >>> On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris >>> wrote: >>>> >>>> >>>> On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers >>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I cleaned most of the noise in the test output (only stats module and a >>>>> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >>>>> Can someone please check this and confirm that this can go into trunk >>>>> as well as 0.9.x? Especially for the scipy.special commit. Just a suggestion, Is it possible to use a module global DEBUG and silence the warnings only "if not DEBUG"? Then we would still have a chance to look at the warnings that the tests produce, even though they are silenced for the public. Josef From david at silveregg.co.jp Wed Jan 12 21:20:00 2011 From: david at silveregg.co.jp (David) Date: Thu, 13 Jan 2011 11:20:00 +0900 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: References: Message-ID: <4D2E6150.4090609@silveregg.co.jp> On 01/13/2011 11:01 AM, josef.pktd at gmail.com wrote: > On Wed, Jan 12, 2011 at 9:37 AM, Ralf Gommers > wrote: >> >> >> On Wed, Jan 12, 2011 at 6:25 PM, Ralf Gommers >> wrote: >>> >>> >>> On Wed, Jan 12, 2011 at 6:18 PM, Ralf Gommers >>> wrote: >>>> >>>> >>>> On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris >>>> wrote: >>>>> >>>>> >>>>> On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers >>>>> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I cleaned most of the noise in the test output (only stats module and a >>>>>> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >>>>>> Can someone please check this and confirm that this can go into trunk >>>>>> as well as 0.9.x? Especially for the scipy.special commit. > > Just a suggestion, > > Is it possible to use a module global DEBUG and silence the warnings > only "if not DEBUG"? I would rather not: those are mostly bugs, and are platform dependent. 
Hiding it by default means we will not be aware of it for platforms we don't often use, cheers, David From jsseabold at gmail.com Wed Jan 12 21:32:04 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 12 Jan 2011 21:32:04 -0500 Subject: [SciPy-Dev] review request - test noise clean up In-Reply-To: <4D2E6150.4090609@silveregg.co.jp> References: <4D2E6150.4090609@silveregg.co.jp> Message-ID: On Wed, Jan 12, 2011 at 9:20 PM, David wrote: > On 01/13/2011 11:01 AM, josef.pktd at gmail.com wrote: >> On Wed, Jan 12, 2011 at 9:37 AM, Ralf Gommers >> ?wrote: >>> >>> >>> On Wed, Jan 12, 2011 at 6:25 PM, Ralf Gommers >>> wrote: >>>> >>>> >>>> On Wed, Jan 12, 2011 at 6:18 PM, Ralf Gommers >>>> ?wrote: >>>>> >>>>> >>>>> On Tue, Jan 11, 2011 at 11:44 PM, Charles R Harris >>>>> ?wrote: >>>>>> >>>>>> >>>>>> On Tue, Jan 11, 2011 at 8:32 AM, Ralf Gommers >>>>>> ?wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I cleaned most of the noise in the test output (only stats module and a >>>>>>> few random ones left): https://github.com/rgommers/scipy/tree/testnoise >>>>>>> Can someone please check this and confirm that this can go into trunk >>>>>>> as well as 0.9.x? Especially for the scipy.special commit. >> >> Just a suggestion, >> >> Is it possible to use a module global DEBUG and silence the warnings >> only "if not DEBUG"? > > I would rather not: those are mostly bugs, and are platform dependent. > Hiding it by default means we will not be aware of it for platforms we > don't often use, > I've been setting numpy.errstate context manager with kwargs for a monkey-patched numpy.testing.nosetester.NoseTester.test to suppress floating point warnings. And you could also do the same with warnings.catch_warnings/simplefilter if you *really* want to suppress all warnings. If you want to catch the numpy floating point errors in the stats distributions fit that are known, I think you could just use errstate there. This might cut down on some of the true noise. Skipper From dave.hirschfeld at gmail.com Thu Jan 13 04:22:46 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Thu, 13 Jan 2011 09:22:46 +0000 (UTC) Subject: [SciPy-Dev] speed of nosetests scipy.stats References: Message-ID: Skipper Seabold gmail.com> writes: > > > I usually do something like > > nosetests scipy.stats verbosity=3 &> test_output.txt > > to see where the warning are coming from. Might need to use 2> on > Windows(?). I've also seen these for instance during optimizations > with exp or log or some ufunc in the objective function, but the > optimum is still reached so they're somewhat spurious (other times > they weren't so harmless). > > Skipper > > PS > > Ran 1572 tests in 212.470s > Jeff Hardy has a very useful blog post on how to both print to screen and capture the output to a file on Windows: http://jdhardy.blogspot.com/2010/10/combining-stdout-stderr-and-pipes-on.html HTH, Dave From ralf.gommers at googlemail.com Thu Jan 13 05:14:04 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 13 Jan 2011 18:14:04 +0800 Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 1:29 AM, wrote: > I *finally* updated numpy and scipy. > > For numpy 1.5.1, I used the files on pypi. When I installed scipy > 0.9b1 it crashed (segfaulted) in the testsuite. 
> > With the superpack installer for numpy from the sourceforge download, > scipy 0.9b1 passes all tests > > Ran 4806 tests in 287.843s > > OK (KNOWNFAIL=12, SKIP=35) > > > It looks like the non-superpack numpy files on pypi are not compatible > with scipy in my setup (win32, sse2, python 2.5.2) > Hmm, they should contain only a single SSE (but not SSE2/3) instruction. Can you check by running http://projects.scipy.org/scipy/attachment/ticket/1170/detect_cpu_extensions_wine.py(only need to modify hardcoded path to scipy)? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jan 13 05:49:01 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Jan 2011 05:49:01 -0500 Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 5:14 AM, Ralf Gommers wrote: > > > On Thu, Jan 13, 2011 at 1:29 AM, wrote: >> >> I *finally* updated numpy and scipy. >> >> For numpy 1.5.1, I used the files on pypi. When I installed scipy >> 0.9b1 it crashed (segfaulted) in the testsuite. >> >> With the superpack installer for numpy from the sourceforge download, >> scipy 0.9b1 passes all tests >> >> Ran 4806 tests in 287.843s >> >> OK (KNOWNFAIL=12, SKIP=35) >> >> >> It looks like the non-superpack numpy files on pypi are not compatible >> with scipy in my setup (win32, sse2, python 2.5.2) > > Hmm, they should contain only a single SSE (but not SSE2/3) instruction. Can > you check by running > http://projects.scipy.org/scipy/attachment/ticket/1170/detect_cpu_extensions_wine.py > (only need to modify hardcoded path to scipy)? I don't know if I can run a wine script. In either case, I have now sse2 numpy and sse2 scipy, which both work without problems so far. My conclusion was that sse2 scipy (from the superpack) is not binary compatible with the single SSE numpy from pypi. The single SSE numpy from pypi tested without failures, so it was running fine on it's own. (My computer is sse2 for sure, and all superpacks always installed the sse2 version automatically.) I went through the process twice with the numpy from pypi installer, and it crashed both times. It's possible but very unlikely that I had any files somewhere from a previous numpy install. Josef > > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Thu Jan 13 09:15:14 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 13 Jan 2011 22:15:14 +0800 Subject: [SciPy-Dev] speed of nosetests scipy.stats In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 9:51 AM, wrote: > On Wed, Jan 12, 2011 at 7:56 PM, Skipper Seabold > wrote: > >> > > > > I usually do something like > > > > nosetests scipy.stats verbosity=3 &> test_output.txt > > > > to see where the warning are coming from. Might need to use 2> on > > Windows(?). I've also seen these for instance during optimizations > > with exp or log or some ufunc in the objective function, but the > > optimum is still reached so they're somewhat spurious (other times > > they weren't so harmless). > > harmless or not, that's what I would like to know, now that I see the > warnings. > It looks like the main problems are in fit and optimization. Since all the > tests > pass, it doesn't look like there are any problems (in the nice test cases), > but fit() still needs work. 
> > I've just looked at all the ones not labeled slow, and fixed/silenced them: https://github.com/rgommers/scipy/commit/1eb0f8bc3. I propose that I open a ticket with a summary of the ones that are possibly bugs, attach a verbose test log, and then commit that patch to trunk (and backport it). It will be quite easy to un-silence them when you have time to investigate, basically just deleting a try-finally block. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jan 13 09:37:14 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 13 Jan 2011 22:37:14 +0800 Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 6:49 PM, wrote: > On Thu, Jan 13, 2011 at 5:14 AM, Ralf Gommers > wrote: > > > > > > On Thu, Jan 13, 2011 at 1:29 AM, wrote: > >> > >> I *finally* updated numpy and scipy. > >> > >> For numpy 1.5.1, I used the files on pypi. When I installed scipy > >> 0.9b1 it crashed (segfaulted) in the testsuite. > >> > >> With the superpack installer for numpy from the sourceforge download, > >> scipy 0.9b1 passes all tests > >> > >> Ran 4806 tests in 287.843s > >> > >> OK (KNOWNFAIL=12, SKIP=35) > >> > >> > >> It looks like the non-superpack numpy files on pypi are not compatible > >> with scipy in my setup (win32, sse2, python 2.5.2) > > > > Hmm, they should contain only a single SSE (but not SSE2/3) instruction. > Can > > you check by running > > > http://projects.scipy.org/scipy/attachment/ticket/1170/detect_cpu_extensions_wine.py > > (only need to modify hardcoded path to scipy)? > > I don't know if I can run a wine script. In either case, I have now > sse2 numpy and sse2 scipy, which both work without problems so far. > > My conclusion was that sse2 scipy (from the superpack) is not binary > compatible with the single SSE numpy from pypi. > OK that makes sense. The reason to have the nosse binary on pypi is to make easy_install work on Windows, but this is the downside then. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jan 13 12:44:21 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Jan 2011 12:44:21 -0500 Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 9:37 AM, Ralf Gommers wrote: > > > On Thu, Jan 13, 2011 at 6:49 PM, wrote: >> >> On Thu, Jan 13, 2011 at 5:14 AM, Ralf Gommers >> wrote: >> > >> > >> > On Thu, Jan 13, 2011 at 1:29 AM, wrote: >> >> >> >> I *finally* updated numpy and scipy. >> >> >> >> For numpy 1.5.1, I used the files on pypi. When I installed scipy >> >> 0.9b1 it crashed (segfaulted) in the testsuite. >> >> >> >> With the superpack installer for numpy from the sourceforge download, >> >> scipy 0.9b1 passes all tests >> >> >> >> Ran 4806 tests in 287.843s >> >> >> >> OK (KNOWNFAIL=12, SKIP=35) >> >> >> >> >> >> It looks like the non-superpack numpy files on pypi are not compatible >> >> with scipy in my setup (win32, sse2, python 2.5.2) >> > >> > Hmm, they should contain only a single SSE (but not SSE2/3) instruction. >> > Can >> > you check by running >> > >> > http://projects.scipy.org/scipy/attachment/ticket/1170/detect_cpu_extensions_wine.py >> > (only need to modify hardcoded path to scipy)? >> >> I don't know if I can run a wine script. 
In either case, I have now >> sse2 numpy and sse2 scipy, which both work without problems so far. >> >> My conclusion was that sse2 scipy (from the superpack) is not binary >> compatible with the single SSE numpy from pypi. > > OK that makes sense. The reason to have the nosse binary on pypi is to make > easy_install work on Windows, but this is the downside then. However, I think this is pretty dangerous. If the incompatibility is really present, then everyone that downloads the binaries will end up with install problems when they also want to install scipy. I think there should be at least an indication on the pypi page, that users should go to sourceforge if they want to have a "proper" binary for their system. Is it possible to rename the pypi binaries to xxx_nosse or put the superpacks also on pypi? On the other hand, I'm bit surprised that I'm the first one to report this. Josef > > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From josef.pktd at gmail.com Fri Jan 14 12:09:54 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 12:09:54 -0500 Subject: [SciPy-Dev] one failure in scipy trunk Message-ID: compiled against numpy 1.5.1 Josef ====================================================================== FAIL: Test singular pair ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.p y", line 183, in runTest self.test(*self.arg) File "C:\Josef\_progs\Subversion\scipy_trunk_g09\dist\scipy-0.10.0.dev7027.win 32\Programs\Python25\Lib\site-packages\scipy\linalg\tests\test_decomp.py", line 200, in test_singular self._check_gen_eig(A, B) File "C:\Josef\_progs\Subversion\scipy_trunk_g09\dist\scipy-0.10.0.dev7027.win 32\Programs\Python25\Lib\site-packages\scipy\linalg\tests\test_decomp.py", line 189, in _check_gen_eig err_msg=msg) File "C:\Programs\Python25\Lib\site-packages\numpy\testing\utils.py", line 774 , in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Programs\Python25\Lib\site-packages\numpy\testing\utils.py", line 618 , in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal array([[22, 34, 31, 31, 17], [45, 45, 42, 19, 29], [39, 47, 49, 26, 34], [27, 31, 26, 21, 15], [38, 44, 44, 24, 30]]) array([[13, 26, 25, 17, 24], [31, 46, 40, 26, 37], [26, 40, 19, 25, 25], [16, 25, 27, 14, 23], [24, 35, 18, 21, 22]]) (mismatch 50.0%) x: array([ 3.57527348e-16 -2.30736377e-08j, 3.57527348e-16 +2.30736377e-08j, 2.86330799e-01 +0.00000000e+00j, 2.00000000e+00 +0.00000000e+00j]) y: array([ -1.31566605e-01+0.j, -3.23644701e-08+0.j, 3.23644694e-08+0.j, 2.00000000e+00+0.j]) ---------------------------------------------------------------------- Ran 4810 tests in 277.406s FAILED (KNOWNFAIL=12, SKIP=35, failures=1) From pav at iki.fi Fri Jan 14 12:29:22 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Jan 2011 17:29:22 +0000 (UTC) Subject: [SciPy-Dev] one failure in scipy trunk References: Message-ID: Fri, 14 Jan 2011 12:09:54 -0500, josef.pktd wrote: > compiled against numpy 1.5.1 Can you check if it occurs also in revision r7000? 
-- Pauli Virtanen From pav at iki.fi Fri Jan 14 12:48:15 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Jan 2011 17:48:15 +0000 (UTC) Subject: [SciPy-Dev] one failure in scipy trunk References: Message-ID: Fri, 14 Jan 2011 12:09:54 -0500, josef.pktd wrote: > compiled against numpy 1.5.1 > > Josef > > ====================================================================== > FAIL: Test singular pair > ---------------------------------------------------------------------- [clip] > "...\scipy\linalg\tests\test_decomp.py", line 189, in _check_gen_eig > err_msg=msg) This is actually a bit strange. The test does this: w, vr = eig(A,B) wt = eigvals(A,B) # ... some stuff that does not touch w and wt # The failing assertion: assert_array_almost_equal(sort(w[isfinite(w)]), sort(wt[isfinite(wt)]), err_msg=msg) The code paths in `eig` and `eigvals` are identical as far as the computation of `w` and `wt` is concerned. So perhaps this is another manifestation of the no-determinism of linear algebra which I recall occurred on your computer? It seems this particular is supposed to be ill-conditioned and susceptible to rounding error... *** It would be nice to find out what exactly is causing the non-determinism, since it seems you are not the only one seeing it (other machines where this occurs were reported on the Numpy ML). http://permalink.gmane.org/gmane.comp.python.numeric.general/42082 Are you able to reproduce it using only the `dot` function? What happens if you then use the non-BLAS `numpy.core.multiarray.dot` instead? -- Pauli Virtanen From josef.pktd at gmail.com Fri Jan 14 13:06:05 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 13:06:05 -0500 Subject: [SciPy-Dev] one failure in scipy trunk In-Reply-To: References: Message-ID: On Fri, Jan 14, 2011 at 12:48 PM, Pauli Virtanen wrote: > Fri, 14 Jan 2011 12:09:54 -0500, josef.pktd wrote: > >> compiled against numpy 1.5.1 >> >> Josef >> >> ====================================================================== >> FAIL: Test singular pair >> ---------------------------------------------------------------------- > [clip] >> ? "...\scipy\linalg\tests\test_decomp.py", line 189, in _check_gen_eig >> ? ? err_msg=msg) > > This is actually a bit strange. The test does this: > > ? ?w, vr = eig(A,B) > ? ?wt = eigvals(A,B) > ? ?# ... some stuff that does not touch w and wt > ? ?# The failing assertion: > ? ?assert_array_almost_equal(sort(w[isfinite(w)]), sort(wt[isfinite(wt)]), > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?err_msg=msg) > > The code paths in `eig` and `eigvals` are identical as far as > the computation of `w` and `wt` is concerned. > > So perhaps this is another manifestation of the no-determinism of linear > algebra which I recall occurred on your computer? It seems this particular > is supposed to be ill-conditioned and susceptible to rounding error... > > ? ?*** > > It would be nice to find out what exactly is causing the non-determinism, > since it seems you are not the only one seeing it (other machines where this > occurs were reported on the Numpy ML). > > ? ?http://permalink.gmane.org/gmane.comp.python.numeric.general/42082 > > Are you able to reproduce it using only the `dot` function? What happens if > you then use the non-BLAS `numpy.core.multiarray.dot` instead? I can try a bit later, I'm trying to figure out a stats bug and don't want to mess with my scipy since it takes me more than an hour to build. Looking at the test and try the non atlas dot should be fine. 
Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From bsouthey at gmail.com Fri Jan 14 13:33:42 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 14 Jan 2011 12:33:42 -0600 Subject: [SciPy-Dev] one failure in scipy trunk In-Reply-To: References: Message-ID: <4D309706.7040409@gmail.com> On 01/14/2011 12:06 PM, josef.pktd at gmail.com wrote: > On Fri, Jan 14, 2011 at 12:48 PM, Pauli Virtanen wrote: >> Fri, 14 Jan 2011 12:09:54 -0500, josef.pktd wrote: >> >>> compiled against numpy 1.5.1 >>> >>> Josef >>> >>> ====================================================================== >>> FAIL: Test singular pair >>> ---------------------------------------------------------------------- >> [clip] >>> "...\scipy\linalg\tests\test_decomp.py", line 189, in _check_gen_eig >>> err_msg=msg) >> This is actually a bit strange. The test does this: >> >> w, vr = eig(A,B) >> wt = eigvals(A,B) >> # ... some stuff that does not touch w and wt >> # The failing assertion: >> assert_array_almost_equal(sort(w[isfinite(w)]), sort(wt[isfinite(wt)]), >> err_msg=msg) >> >> The code paths in `eig` and `eigvals` are identical as far as >> the computation of `w` and `wt` is concerned. >> >> So perhaps this is another manifestation of the no-determinism of linear >> algebra which I recall occurred on your computer? It seems this particular >> is supposed to be ill-conditioned and susceptible to rounding error... >> >> *** >> >> It would be nice to find out what exactly is causing the non-determinism, >> since it seems you are not the only one seeing it (other machines where this >> occurs were reported on the Numpy ML). >> >> http://permalink.gmane.org/gmane.comp.python.numeric.general/42082 >> >> Are you able to reproduce it using only the `dot` function? What happens if >> you then use the non-BLAS `numpy.core.multiarray.dot` instead? > I can try a bit later, I'm trying to figure out a stats bug and don't > want to mess with my scipy since it takes me more than an hour to > build. > > Looking at the test and try the non atlas dot should be fine. > > Josef > >> -- >> Pauli Virtanen >> >> _______________________________________________ >> SciPy-Dev mailing list >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Presumably this previously passed under numpy 1.3 (I think) then the change in versions reflects the change in numpy/scipy release managers. So does this occur with Christoph's releases on your system? http://www.lfd.uci.edu/~gohlke/pythonlibs/ If so, then there is something missing in the binary generation. In anycase, we also need more information on the OS (I do presume windows but not version etc), and how numpy and scipy were installed including specific version of the installers if used. Bruce -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Fri Jan 14 13:40:38 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 13:40:38 -0500 Subject: [SciPy-Dev] one failure in scipy trunk In-Reply-To: <4D309706.7040409@gmail.com> References: <4D309706.7040409@gmail.com> Message-ID: On Fri, Jan 14, 2011 at 1:33 PM, Bruce Southey wrote: > On 01/14/2011 12:06 PM, josef.pktd at gmail.com wrote: > > On Fri, Jan 14, 2011 at 12:48 PM, Pauli Virtanen wrote: > > Fri, 14 Jan 2011 12:09:54 -0500, josef.pktd wrote: > > compiled against numpy 1.5.1 > > Josef > > ====================================================================== > FAIL: Test singular pair > ---------------------------------------------------------------------- > > [clip] > > ? "...\scipy\linalg\tests\test_decomp.py", line 189, in _check_gen_eig > ? ? err_msg=msg) > > This is actually a bit strange. The test does this: > > ? ?w, vr = eig(A,B) > ? ?wt = eigvals(A,B) > ? ?# ... some stuff that does not touch w and wt > ? ?# The failing assertion: > ? ?assert_array_almost_equal(sort(w[isfinite(w)]), sort(wt[isfinite(wt)]), > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?err_msg=msg) > > The code paths in `eig` and `eigvals` are identical as far as > the computation of `w` and `wt` is concerned. > > So perhaps this is another manifestation of the no-determinism of linear > algebra which I recall occurred on your computer? It seems this particular > is supposed to be ill-conditioned and susceptible to rounding error... > > ? ?*** > > It would be nice to find out what exactly is causing the non-determinism, > since it seems you are not the only one seeing it (other machines where this > occurs were reported on the Numpy ML). > > ? ?http://permalink.gmane.org/gmane.comp.python.numeric.general/42082 > > Are you able to reproduce it using only the `dot` function? What happens if > you then use the non-BLAS `numpy.core.multiarray.dot` instead? > > I can try a bit later, I'm trying to figure out a stats bug and don't > want to mess with my scipy since it takes me more than an hour to > build. > > Looking at the test and try the non atlas dot should be fine. > > Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > Presumably this previously passed under numpy 1.3 (I think) then the change > in versions reflects the change in numpy/scipy release managers. So does > this occur with Christoph's releases on your system? > http://www.lfd.uci.edu/~gohlke/pythonlibs/ > > If so, then there is something missing in the binary generation. > > In anycase, we also need more information on the OS (I do presume windows > but not version etc), and how numpy and scipy were installed including > specific version of the installers if used. The failure didn't show up yesterday, when I tested, the official 0.9b1 installer. Either it's a recent change in trunk, or it is because my ATLAS, lapack, blas, is an older version. 
If I'm the only one seeing it, then I will look at my setup and the details later. Josef > > > Bruce > > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From pav at iki.fi Fri Jan 14 13:51:41 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Jan 2011 18:51:41 +0000 (UTC) Subject: [SciPy-Dev] one failure in scipy trunk References: <4D309706.7040409@gmail.com> Message-ID: Fri, 14 Jan 2011 13:40:38 -0500, josef.pktd wrote: [clip] > If I'm the only one seeing it, then I will look at my setup and the > details later. I can't reproduce it with the SVN trunk (r7027), so it may be system-specific. -- Pauli Virtanen From josef.pktd at gmail.com Fri Jan 14 16:31:49 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 16:31:49 -0500 Subject: [SciPy-Dev] backporting scipy.stats fixes Message-ID: I added 3 changesets in trunk, that should be backported to 0.9 7030 is a trivial fix to avoid an exception 7028 and 7029 correct the integration bounds in the expect methods and have, I think, sufficient test cases. Thanks, Josef From josef.pktd at gmail.com Fri Jan 14 21:25:04 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 21:25:04 -0500 Subject: [SciPy-Dev] backporting scipy.stats fixes In-Reply-To: References: Message-ID: On Fri, Jan 14, 2011 at 4:31 PM, wrote: > I added 3 changesets in trunk, that should be backported to 0.9 > > 7030 is a trivial fix to avoid an exception > > 7028 and 7029 correct the integration bounds in the expect methods and > have, I think, sufficient test cases. 7031, 7032, 7033 are cases that only show up on the first call to the distribution and are difficult to test. 7031, 7032 add protection if there are no valid arguments, initially reported on stackoverflow 7032, 7033 add argcheck call to moment method and fixes ticket 934 for distributions with support shifted by args, and also protects against the no valid args case They don't break any tests, but I only checked them on the commandline for a few cases. I think, it's pretty safe to backport them also. >python -c "from scipy import stats; print stats.wrapcauchy.moment(2,1.)" 1.#QNAN >python -c "from scipy import stats; print stats.poisson.moment(2,-2)" 1.#QNAN >python -c "from scipy import stats; print stats.truncnorm.moment(2,-3,3)" 0.973336924663 Josef > > Thanks, > > Josef > From josef.pktd at gmail.com Fri Jan 14 22:47:55 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Jan 2011 22:47:55 -0500 Subject: [SciPy-Dev] kde evaluate and dtypes, atleast_float ? Message-ID: http://projects.scipy.org/scipy/ticket/1181 the dtypes in kde evaluate only makes sense if we have at least float integer input produces nonsense evaluate has points = atleast_2d(points).astype(self.dataset.dtype) .... result = zeros((m,), points.dtype) .... result += exp(-energy) I don't know which dtypes kde is supposed to handle, but what's the best way to guarantee that the result is "atleast_float", so that a probability makes sense. I don't know what a complex probability is, but handling complex is useful for complex derivatives. Thanks, Josef From fabian.pedregosa at inria.fr Sat Jan 15 05:41:54 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Sat, 15 Jan 2011 11:41:54 +0100 Subject: [SciPy-Dev] [PATCH] improvements for rq factorization Message-ID: Hi all. 
In this patch I implement several improvements for the RQ matrix factorization in function linalg.rq: - API: now uses the same keywords as qr function: implemented mode='economic' - type support: now supports non-square as well as complex arrays. This way I could uncomment a lot of tests. - performance: now uses LAPACK orgrq for computing the Q matrix instead of plain matrix multiplications. On a 500 x 500 array in my Core Duo, this makes it go from 17.6s s to 0.16s, i.e. gain of a 100x factor. - Also, some docstring fixes in linalg.qr function. Best, Fabian. -------------- next part -------------- A non-text attachment was scrubbed... Name: rq_decomp.patch Type: application/octet-stream Size: 12083 bytes Desc: not available URL: From ralf.gommers at googlemail.com Sat Jan 15 08:52:37 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 15 Jan 2011 21:52:37 +0800 Subject: [SciPy-Dev] updating numpy scipy crash - numpy on pypi In-Reply-To: References: Message-ID: On Fri, Jan 14, 2011 at 1:44 AM, wrote: > On Thu, Jan 13, 2011 at 9:37 AM, Ralf Gommers > wrote: > > > > > > On Thu, Jan 13, 2011 at 6:49 PM, wrote: > >> > >> On Thu, Jan 13, 2011 at 5:14 AM, Ralf Gommers > >> wrote: > >> > > >> > > >> > On Thu, Jan 13, 2011 at 1:29 AM, wrote: > >> >> > >> >> I *finally* updated numpy and scipy. > >> >> > >> >> For numpy 1.5.1, I used the files on pypi. When I installed scipy > >> >> 0.9b1 it crashed (segfaulted) in the testsuite. > >> >> > >> >> With the superpack installer for numpy from the sourceforge download, > >> >> scipy 0.9b1 passes all tests > >> >> > >> >> Ran 4806 tests in 287.843s > >> >> > >> >> OK (KNOWNFAIL=12, SKIP=35) > >> >> > >> >> > >> >> It looks like the non-superpack numpy files on pypi are not > compatible > >> >> with scipy in my setup (win32, sse2, python 2.5.2) > >> > > >> > Hmm, they should contain only a single SSE (but not SSE2/3) > instruction. > >> > Can > >> > you check by running > >> > > >> > > http://projects.scipy.org/scipy/attachment/ticket/1170/detect_cpu_extensions_wine.py > >> > (only need to modify hardcoded path to scipy)? > >> > >> I don't know if I can run a wine script. In either case, I have now > >> sse2 numpy and sse2 scipy, which both work without problems so far. > >> > >> My conclusion was that sse2 scipy (from the superpack) is not binary > >> compatible with the single SSE numpy from pypi. > > > > OK that makes sense. The reason to have the nosse binary on pypi is to > make > > easy_install work on Windows, but this is the downside then. > > However, I think this is pretty dangerous. If the incompatibility is > really present, > then everyone that downloads the binaries will end up with install problems > when > they also want to install scipy. > I'd actually be happy to remove the pypi binaries, since even though it makes easy_install work most people would actually want the faster superpack versions if they realized the difference. But that's not just my call. > > I think there should be at least an indication on the pypi page, that > users should > go to sourceforge if they want to have a "proper" binary for their system. > The pypi page contents come from setup.py, it would seem strange to me to add pypi download instructions there. > > Is it possible to rename the pypi binaries to xxx_nosse or put the > superpacks also on pypi? > That is no problem. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Sat Jan 15 09:07:55 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 15 Jan 2011 22:07:55 +0800 Subject: [SciPy-Dev] backporting scipy.stats fixes In-Reply-To: References: Message-ID: On Sat, Jan 15, 2011 at 10:25 AM, wrote: > On Fri, Jan 14, 2011 at 4:31 PM, wrote: > > I added 3 changesets in trunk, that should be backported to 0.9 > > > > 7030 is a trivial fix to avoid an exception > > > > 7028 and 7029 correct the integration bounds in the expect methods and > > have, I think, sufficient test cases. > > 7031, 7032, 7033 are cases that only show up on the first call to the > distribution and are difficult to test. > 7031, 7032 add protection if there are no valid arguments, initially > reported on stackoverflow > 7032, 7033 add argcheck call to moment method and fixes ticket 934 for > distributions with support shifted by args, and also protects against > the no valid args case > > They don't break any tests, but I only checked them on the commandline > for a few cases. I think, it's pretty safe to backport > them also. > Looks pretty safe to me too, I backported them. Ralf > >python -c "from scipy import stats; print stats.wrapcauchy.moment(2,1.)" > 1.#QNAN > >python -c "from scipy import stats; print stats.poisson.moment(2,-2)" > 1.#QNAN > >python -c "from scipy import stats; print stats.truncnorm.moment(2,-3,3)" > 0.973336924663 > > Josef > > > > Thanks, > > > > Josef > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Jan 15 09:49:56 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 15 Jan 2011 09:49:56 -0500 Subject: [SciPy-Dev] unicode in THANKS.txt Message-ID: What's the policy for unicode in text files? I added some contributers to THANKS.txt, but have problems with an accented name, Ernest Adrogu? Is it possible to "internationalize" THANKS.txt ? Josef From pav at iki.fi Sat Jan 15 09:54:19 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 15 Jan 2011 14:54:19 +0000 (UTC) Subject: [SciPy-Dev] unicode in THANKS.txt References: Message-ID: On Sat, 15 Jan 2011 09:49:56 -0500, josef.pktd wrote: > What's the policy for unicode in text files? > > I added some contributers to THANKS.txt, but have problems with an > accented name, Ernest Adrogu? > > Is it possible to "internationalize" THANKS.txt ? I'd say make it utf-8. I guess lots of names are anyway missing from there... -- Pauli Virtanen From josef.pktd at gmail.com Sat Jan 15 14:25:17 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 15 Jan 2011 14:25:17 -0500 Subject: [SciPy-Dev] unicode in THANKS.txt In-Reply-To: References: Message-ID: On Sat, Jan 15, 2011 at 9:54 AM, Pauli Virtanen wrote: > On Sat, 15 Jan 2011 09:49:56 -0500, josef.pktd wrote: >> What's the policy for unicode in text files? >> >> I added some contributers to THANKS.txt, but have problems with an >> accented name, Ernest Adrogu? >> >> Is it possible to "internationalize" THANKS.txt ? > > I'd say make it utf-8. I guess lots of names are anyway missing from > there... Thanks, I just checked, it is already utf-8 (without BOM) when I read it in a text editor or in python. So, maybe it is just Trac that delivers/displays it with the wrong encoding. 
Josef > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From ralf.gommers at googlemail.com Sun Jan 16 09:44:53 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 16 Jan 2011 22:44:53 +0800 Subject: [SciPy-Dev] correlate complex192 failures on win32 Message-ID: Hi Warren, In r6982 you added correlate/convolve tests that are now failing on Windows for longdouble. Can you please have a look at that? If you don't have time soon I think it can be marked knownfail for 0.9.0 on Windows. Offending code: + +# Create three classes, one for each complex data type: TestCorrelateComplex64, +# TestCorrelateComplex128 and TestCorrelateComplex256. +# The second number in the pairs is used in the 'decimal' keyword argument of +# the array comparisons in the tests. +for i, decimal in [(np.csingle, 5), (np.cdouble, 10), (np.clongdouble, 15)]: Resulting in 16 failures like: ====================================================================== FAIL: test_rank1_full (test_signaltools.TestCorrelateComplex192) ---------------------------------------------------------------------- Traceback (most recent call last): File "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\scipy\signal\tests\test_signaltools.py", line 630, in test_rank1_full assert_array_almost_equal(y, y_r, decimal=self.decimal) File "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\numpy\testing\utils.py", line 774, in assert_array_almost_equal header='Arrays are not almost equal') File "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\numpy\testing\utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 11.7647058824%) x: array([0.17161931+0.20853512j, 0.22923514+0.78246389j, -1.3237712+0.47060531j, -3.2990924+-0.50721318j, -2.9089895+0.084727311j, 1.5646499+-0.56960514j,... y: array([0.17161931+0.20853512j, 0.22923514+0.78246389j, -1.3237712+0.47060531j, -3.2990924+-0.50721318j, -2.9089895+0.084727311j, 1.5646499+-0.56960514j,... Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Jan 16 09:59:41 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 16 Jan 2011 22:59:41 +0800 Subject: [SciPy-Dev] postponing 0.9.0rc1 by one week Message-ID: Hi all, There is still one release blocker for 0.9.0, the change in behavior of convolve/correlate. See the old_behavior keyword in correlate/convolve/correlate2d/convolve2d, and also http://projects.scipy.org/scipy/ticket/1301. Therefore the release of RC1 is postponed by one week. If anyone has time to look at this, that would be great. Three other things that could be fixed as well are the following: 1. Medfilt test failure on linux 64-bit, see http://news.gmane.org/gmane.comp.python.scientific.devel. This will be marked as knownfail for this release if no one has time to investigate. 2. CorrelateComplex192 failures on Windows, see the other email I just sent. 3. Test warnings from interpolate and linalg, see below. Will just be filtered out for 0.9.0 unless they are fixed in time. /Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:673: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. 
Deficiency may strongly depend on the value of eps. warnings.warn(message) ....../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:604: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:91: ComplexWarning: Casting complex values to real discards the imaginary part qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:126: ComplexWarning: Casting complex values to real discards the imaginary part Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) ......................../Users/rgommers/Code/scipy/scipy/linalg/decomp_schur.py:72: ComplexWarning: Casting complex values to real discards the imaginary part result = gees(lambda x: None, a, lwork=result[-2][0], overwrite_a=overwrite_a) Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabian.pedregosa at inria.fr Mon Jan 17 03:14:25 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Mon, 17 Jan 2011 09:14:25 +0100 Subject: [SciPy-Dev] [PATCH] improvements for rq factorization In-Reply-To: References: Message-ID: I've also set up a pull request here: https://github.com/scipy/scipy-svn/pull/1 Fabian. On Sat, Jan 15, 2011 at 11:41 AM, Fabian Pedregosa wrote: > Hi all. > > In this patch I implement several improvements for the RQ matrix > factorization in function linalg.rq: > > ?- API: now uses the same keywords as qr function: implemented mode='economic' > > ?- type support: now supports non-square as well as complex arrays. > This way I could uncomment a lot of tests. > > ?- performance: now uses LAPACK orgrq for computing the Q matrix > instead of plain matrix multiplications. On a 500 x 500 array in my > Core Duo, this makes it go from 17.6s s to 0.16s, i.e. gain of a 100x > factor. > > ?- Also, some docstring fixes in linalg.qr function. > > Best, > > Fabian. > From fabian.pedregosa at inria.fr Mon Jan 17 03:52:40 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Mon, 17 Jan 2011 09:52:40 +0100 Subject: [SciPy-Dev] postponing 0.9.0rc1 by one week In-Reply-To: <1526056402.330425.1295189990675.JavaMail.root@zmbs3.inria.fr> References: <1526056402.330425.1295189990675.JavaMail.root@zmbs3.inria.fr> Message-ID: On Sun, Jan 16, 2011 at 3:59 PM, Ralf Gommers wrote: > Hi all, > > There is still one release blocker for 0.9.0, the change in behavior of > convolve/correlate. See the old_behavior keyword in > correlate/convolve/correlate2d/convolve2d, and also > http://projects.scipy.org/scipy/ticket/1301. Therefore the release of RC1 is > postponed by one week. If anyone has time to look at this, that would be > great. > > Three other things that could be fixed as well are the following: > 1. Medfilt test failure on linux 64-bit, see > http://news.gmane.org/gmane.comp.python.scientific.devel. This will be > marked as knownfail for this release if no one has time to investigate. > 2. CorrelateComplex192 failures on Windows, see the other email I just sent. > 3. Test warnings from interpolate and linalg, see below. Will just be > filtered out for 0.9.0 unless they are fixed in time. 
> > /Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:673: UserWarning: > The coefficients of the spline returned have been computed as the > minimal norm least-squares solution of a (numerically) rank deficient > system (deficiency=7). If deficiency is large, the results may be > inaccurate. Deficiency may strongly depend on the value of eps. > ? warnings.warn(message) > ....../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:604: > UserWarning: > The required storage space exceeds the available storage space: nxest > or nyest too small, or s too small. > The weighted least-squares spline corresponds to the current set of > knots. > ? warnings.warn(message) > > /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:91: ComplexWarning: > Casting complex values to real discards the imaginary part > ? qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) > /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:126: ComplexWarning: > Casting complex values to real discards the imaginary part > ? Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) These warnings come from the fact that lwork returned from a workspace query has the same dtype as the principal matrix, but when we call the computational routine it expects an integer value. A simple workaround is here: https://github.com/fabianp/scipy-svn/commit/5942b71bd321b0960fd179c9785273c2f7ab6d36 Although it would be more correct to convert it also to integer, i.e., lwork = int(work[0].real) Fabian. From fabian.pedregosa at inria.fr Mon Jan 17 13:42:33 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Mon, 17 Jan 2011 19:42:33 +0100 Subject: [SciPy-Dev] postponing 0.9.0rc1 by one week In-Reply-To: <531034492.362425.1295254366939.JavaMail.root@zmbs3.inria.fr> References: <1526056402.330425.1295189990675.JavaMail.root@zmbs3.inria.fr> <531034492.362425.1295254366939.JavaMail.root@zmbs3.inria.fr> Message-ID: On Mon, Jan 17, 2011 at 9:52 AM, Fabian Pedregosa wrote: > On Sun, Jan 16, 2011 at 3:59 PM, Ralf Gommers > wrote: >> Hi all, >> >> There is still one release blocker for 0.9.0, the change in behavior of >> convolve/correlate. See the old_behavior keyword in >> correlate/convolve/correlate2d/convolve2d, and also >> http://projects.scipy.org/scipy/ticket/1301. Therefore the release of RC1 is >> postponed by one week. If anyone has time to look at this, that would be >> great. >> >> Three other things that could be fixed as well are the following: >> 1. Medfilt test failure on linux 64-bit, see >> http://news.gmane.org/gmane.comp.python.scientific.devel. This will be >> marked as knownfail for this release if no one has time to investigate. >> 2. CorrelateComplex192 failures on Windows, see the other email I just sent. >> 3. Test warnings from interpolate and linalg, see below. Will just be >> filtered out for 0.9.0 unless they are fixed in time. >> >> /Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:673: UserWarning: >> The coefficients of the spline returned have been computed as the >> minimal norm least-squares solution of a (numerically) rank deficient >> system (deficiency=7). If deficiency is large, the results may be >> inaccurate. Deficiency may strongly depend on the value of eps. >> ? warnings.warn(message) >> ....../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:604: >> UserWarning: >> The required storage space exceeds the available storage space: nxest >> or nyest too small, or s too small. 
>> The weighted least-squares spline corresponds to the current set of >> knots. >> ? warnings.warn(message) >> >> /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:91: ComplexWarning: >> Casting complex values to real discards the imaginary part >> ? qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) >> /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:126: ComplexWarning: >> Casting complex values to real discards the imaginary part >> ? Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) > > These warnings come from the fact that lwork returned from a workspace > query has the same dtype as the principal matrix, but when we call the > computational routine it expects an integer value. A simple workaround > is here: I've updated my patch to also fix the warning in schur(): https://github.com/scipy/scipy-svn/pull/2 Fabian. From ralf.gommers at googlemail.com Tue Jan 18 10:13:50 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 18 Jan 2011 23:13:50 +0800 Subject: [SciPy-Dev] postponing 0.9.0rc1 by one week In-Reply-To: References: <1526056402.330425.1295189990675.JavaMail.root@zmbs3.inria.fr> <531034492.362425.1295254366939.JavaMail.root@zmbs3.inria.fr> Message-ID: On Tue, Jan 18, 2011 at 2:42 AM, Fabian Pedregosa wrote: > On Mon, Jan 17, 2011 at 9:52 AM, Fabian Pedregosa > wrote: > > On Sun, Jan 16, 2011 at 3:59 PM, Ralf Gommers > > wrote: > >> Hi all, > >> > >> There is still one release blocker for 0.9.0, the change in behavior of > >> convolve/correlate. See the old_behavior keyword in > >> correlate/convolve/correlate2d/convolve2d, and also > >> http://projects.scipy.org/scipy/ticket/1301. Therefore the release of > RC1 is > >> postponed by one week. If anyone has time to look at this, that would be > >> great. > >> > >> Three other things that could be fixed as well are the following: > >> 1. Medfilt test failure on linux 64-bit, see > >> http://news.gmane.org/gmane.comp.python.scientific.devel. This will be > >> marked as knownfail for this release if no one has time to investigate. > >> 2. CorrelateComplex192 failures on Windows, see the other email I just > sent. > >> 3. Test warnings from interpolate and linalg, see below. Will just be > >> filtered out for 0.9.0 unless they are fixed in time. > >> > >> /Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:673: > UserWarning: > >> The coefficients of the spline returned have been computed as the > >> minimal norm least-squares solution of a (numerically) rank deficient > >> system (deficiency=7). If deficiency is large, the results may be > >> inaccurate. Deficiency may strongly depend on the value of eps. > >> warnings.warn(message) > >> ....../Users/rgommers/Code/scipy/scipy/interpolate/fitpack2.py:604: > >> UserWarning: > >> The required storage space exceeds the available storage space: nxest > >> or nyest too small, or s too small. > >> The weighted least-squares spline corresponds to the current set of > >> knots. 
> >> warnings.warn(message) > >> > >> /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:91: ComplexWarning: > >> Casting complex values to real discards the imaginary part > >> qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) > >> /Users/rgommers/Code/scipy/scipy/linalg/decomp_qr.py:126: > ComplexWarning: > >> Casting complex values to real discards the imaginary part > >> Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) > > > > These warnings come from the fact that lwork returned from a workspace > > query has the same dtype as the principal matrix, but when we call the > > computational routine it expects an integer value. A simple workaround > > is here: > > I've updated my patch to also fix the warning in schur(): > > https://github.com/scipy/scipy-svn/pull/2 > > Thanks Fabian! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Jan 18 15:44:25 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 18 Jan 2011 14:44:25 -0600 Subject: [SciPy-Dev] correlate complex192 failures on win32 In-Reply-To: References: Message-ID: On Sun, Jan 16, 2011 at 8:44 AM, Ralf Gommers wrote: > Hi Warren, > > In r6982 you added correlate/convolve tests that are now failing on Windows > for longdouble. Can you please have a look at that? If you don't have time > soon I think it can be marked knownfail for 0.9.0 on Windows. > In r6982, I didn't take into account the platform dependence of longdouble when I modified the complex test cases for correlate. I've updated the tests in trunk (r7067). The correlate tests now pass in XP 32bit and Vista 64. Let me know if you still have any problems. Warren > Offending code: > + > +# Create three classes, one for each complex data type: > TestCorrelateComplex64, > +# TestCorrelateComplex128 and TestCorrelateComplex256. > +# The second number in the pairs is used in the 'decimal' keyword argument > of > +# the array comparisons in the tests. > +for i, decimal in [(np.csingle, 5), (np.cdouble, 10), (np.clongdouble, > 15)]: > > > Resulting in 16 failures like: > > ====================================================================== > FAIL: test_rank1_full (test_signaltools.TestCorrelateComplex192) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\scipy\signal\tests\test_signaltools.py", > line 630, in test_rank1_full > assert_array_almost_equal(y, y_r, decimal=self.decimal) > File > "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\numpy\testing\utils.py", > line 774, in assert_array_almost_equal > header='Arrays are not almost equal') > File > "Z:\Users\rgommers\.wine\drive_c\Python26\lib\site-packages\numpy\testing\utils.py", > line 618, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal > > (mismatch 11.7647058824%) > x: array([0.17161931+0.20853512j, 0.22923514+0.78246389j, > -1.3237712+0.47060531j, -3.2990924+-0.50721318j, > -2.9089895+0.084727311j, 1.5646499+-0.56960514j,... > y: array([0.17161931+0.20853512j, 0.22923514+0.78246389j, > -1.3237712+0.47060531j, -3.2990924+-0.50721318j, > -2.9089895+0.084727311j, 1.5646499+-0.56960514j,... 
> > Ralf > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Jan 18 20:05:12 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Jan 2011 20:05:12 -0500 Subject: [SciPy-Dev] sparse rmatvec and maxentropy Message-ID: I don't know much about these two parts of scipy. Running an example for scipy.maxentropy raises an exception after upgrading to scipy trunk I didn't manage to find out with a Trac search whether rmatvec still exists. scipy.maxentropy maxentutils is still using it. Traceback (most recent call last): File "C:\Josef\eclipsegworkspace\statsmodels-josef-experimental-gsoc\scikits\statsmodels\sandbox\examples\example_maxent.py", line 29, in model.fit(K[i]) File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 226, in fit File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 636, in fmin_cg File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 176, in function_wrapper File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 420, in grad File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 702, in expectations File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 737, in pmf File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 719, in logpmf File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentutils.py", line 381, in innerprodtranspose File "C:\Josef\_progs\Subversion\scipy_trunk_g09\dist\scipy-0.10.0.dev7027.win32\Programs\Python25\Lib\site-packages\scipy\sparse\base.py", line 384, in __getattr__ raise AttributeError(attr + " not found") AttributeError: rmatvec not found Josef From josef.pktd at gmail.com Tue Jan 18 20:12:15 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Jan 2011 20:12:15 -0500 Subject: [SciPy-Dev] sparse rmatvec and maxentropy In-Reply-To: References: Message-ID: On Tue, Jan 18, 2011 at 8:05 PM, wrote: > I don't know much about these two parts of scipy. > > Running an example for scipy.maxentropy raises an exception after > upgrading to scipy trunk > > I didn't manage to find out with a ?Trac search whether rmatvec still exists. > > scipy.maxentropy maxentutils is still using it. > > > Traceback (most recent call last): > ?File "C:\Josef\eclipsegworkspace\statsmodels-josef-experimental-gsoc\scikits\statsmodels\sandbox\examples\example_maxent.py", > line 29, in > ? 
?model.fit(K[i]) > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", > line 226, in fit > ?File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", > line 636, in fmin_cg > ?File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", > line 176, in function_wrapper > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", > line 420, in grad > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", > line 702, in expectations > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", > line 737, in pmf > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", > line 719, in logpmf > ?File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentutils.py", > line 381, in innerprodtranspose > ?File "C:\Josef\_progs\Subversion\scipy_trunk_g09\dist\scipy-0.10.0.dev7027.win32\Programs\Python25\Lib\site-packages\scipy\sparse\base.py", > line 384, in __getattr__ > ? ?raise AttributeError(attr + " not found") > AttributeError: rmatvec not found 3 of the 4 examples in scipy/maxentropy/examples raise an exception. 2 of them with the rmatvec not found error. The other one doesn't find scipy.sandbox Josef > > > Josef > From bigorneault at gmail.com Tue Jan 18 21:07:38 2011 From: bigorneault at gmail.com (=?ISO-8859-1?Q?Th=E9lesphonse_Bigorneault?=) Date: Tue, 18 Jan 2011 21:07:38 -0500 Subject: [SciPy-Dev] request of a FGMRES krylov solver In-Reply-To: References: Message-ID: 2011/1/12 Th?lesphonse Bigorneault > I would like to have a FGMRES Krylov solver in scipy. FGMRES is a variant > of the GMRES method with right preconditioning that enables the use of a > different preconditioner at each step of the Arnoldi process. > > A possibility could be to "borrow" the fgmres function from pyamg (which as > a compatible license) > > http://code.google.com/p/pyamg/source/browse/trunk/pyamg/krylov/_fgmres.py > > Including this code into scipy should be straightforward. > > T?lesphore Bigorneault > Bump. -------------- next part -------------- An HTML attachment was scrubbed... URL: From schmidtc at gmail.com Wed Jan 19 20:31:40 2011 From: schmidtc at gmail.com (Charles R. Schmidt) Date: Wed, 19 Jan 2011 17:31:40 -0800 Subject: [SciPy-Dev] Another Bug in KDTree Message-ID: There appears to be a bug in KDTree.query_ball_point, when using integer points neighbor points that lie on the radius are not always returned. Eg, with a 4x4 grid >>> import scipy.spatial >>> points = [(0, 0), (0, 1), (0, 2), (0, 3), \ ... ? ? ? ? ? ? ?(1, 0), (1, 1), (1, 2), (1, 3), \ ... ? ? ? ? ? ? ?(2, 0), (2, 1), (2, 2), (2, 3), \ ... ? ? ? ? ? ? ?(3, 0), (3, 1), (3, 2), (3, 3)] >>> kd=scipy.spatial.KDTree(points) >>> kd.query_ball_point((2,0),1) [8, 9, 12] >>> points = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0), \ ... (1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0), \ ... (2.0, 0.0), (2.0, 1.0), (2.0, 2.0), (2.0, 3.0), \ ... 
(3.0, 0.0), (3.0, 1.0), (3.0, 2.0), (3.0, 3.0)] >>> kd=scipy.spatial.KDTree(points) >>> kd.query_ball_point((2,0),1) [4, 8, 9, 12] From opossumnano at gmail.com Thu Jan 20 05:12:20 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 20 Jan 2011 11:12:20 +0100 Subject: [SciPy-Dev] [Ann] EuroScipy 2011 - Call for papers Message-ID: <20110120101220.GF31049@tulpenbaum.cognition.tu-berlin.de> ========================= Announcing EuroScipy 2011 ========================= --------------------------------------------- The 4th European meeting on Python in Science --------------------------------------------- **Paris, Ecole Normale Sup?rieure, August 25-28 2011** We are happy to announce the 4th EuroScipy meeting, in Paris, August 2011. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state of the art industry. Main topics =========== - Presentations of scientific tools and libraries using the Python language, including but not limited to: - vector and array manipulation - parallel computing - scientific visualization - scientific data flow and persistence - algorithms implemented or exposed in Python - web applications and portals for science and engineering. - Reports on the use of Python in scientific achievements or ongoing projects. - General-purpose Python tools that can be of special interest to the scientific community. Tutorials ========= There will be two tutorial tracks at the conference, an introductory one, to bring up to speed with the Python language as a scientific tool, and an advanced track, during which experts of the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering... Keynote Speaker: Fernando Perez =============================== We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is the original author of the enhanced interactive python shell IPython and a very active contributor to the Python for Science ecosystem. Important dates =============== Talk submission deadline: Sunday May 8 Program announced: Sunday May 29 Tutorials tracks: Thursday August 25 - Friday August 26 Conference track: Saturday August 27 - Sunday August 28 Call for papers =============== We are soliciting talks that discuss topics related to scientific computing using Python. These include applications, teaching, future development directions, and research. We welcome contributions from the industry as well as the academic world. Indeed, industrial research and development as well academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing your recent breakthroughs using Python! Submission guidelines ===================== - We solicit talk proposals in the form of a one-page long abstract. - Submissions whose main purpose is to promote a commercial product or service will be refused. - All accepted proposals must be presented at the EuroSciPy conference by at least one author. The one-page long abstracts are for conference planing and selection purposes only. We will later select papers for publication of post-proceedings in a peer-reviewed journal. 
How to submit an abstract ========================= To submit a talk to the EuroScipy conference follow the instructions here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers Organizers ========== Chairs: - Ga?l Varoquaux (INSERM, Unicog team, and INRIA, Parietal team) - Nicolas Chauvat (Logilab) Local organization committee: - Emmanuelle Gouillart (Saint-Gobain Recherche) - Jean-Philippe Chauvat (Logilab) Tutorial chair: - Valentin Haenel (MKP, Technische Universit?t Berlin) Program committee: - Chair: Tiziano Zito (MKP, Technische Universit?t Berlin) - Romain Brette (ENS Paris, DEC) - Emmanuelle Gouillart (Saint-Gobain Recherche) - Eric Lebigot (Laboratoire Kastler Brossel, Universit? Pierre et Marie Curie) - Konrad Hinsen (Soleil Synchrotron, CNRS) - Hans Petter Langtangen (Simula laboratories) - Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute) - Mike M?ller (Python Academy) - Didrik Pinte (Enthought Inc) - Marc Poinot (ONERA) - Christophe Pradal (CIRAD/INRIA, Virtual Plantes team) - Andreas Schreiber (DLR) - St?fan van der Walt (University of Stellenbosch) Website ======= http://www.euroscipy.org/conference/euroscipy_2011 From ralf.gommers at googlemail.com Thu Jan 20 06:31:14 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 20 Jan 2011 19:31:14 +0800 Subject: [SciPy-Dev] correlate complex192 failures on win32 In-Reply-To: References: Message-ID: On Wed, Jan 19, 2011 at 4:44 AM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sun, Jan 16, 2011 at 8:44 AM, Ralf Gommers > wrote: > >> Hi Warren, >> >> In r6982 you added correlate/convolve tests that are now failing on >> Windows for longdouble. Can you please have a look at that? If you don't >> have time soon I think it can be marked knownfail for 0.9.0 on Windows. >> > > > In r6982, I didn't take into account the platform dependence of longdouble > when I modified the complex test cases for correlate. I've updated the > tests in trunk (r7067). The correlate tests now pass in XP 32bit and Vista > 64. Let me know if you still have any problems. > > That works for me, thanks. I backported it to 0.9.x in r7068. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Jan 20 08:52:54 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 20 Jan 2011 21:52:54 +0800 Subject: [SciPy-Dev] postponing 0.9.0rc1 by one week In-Reply-To: References: Message-ID: On Sun, Jan 16, 2011 at 10:59 PM, Ralf Gommers wrote: > Hi all, > > There is still one release blocker for 0.9.0, the change in behavior of > convolve/correlate. See the old_behavior keyword in > correlate/convolve/correlate2d/convolve2d, and also > http://projects.scipy.org/scipy/ticket/1301. Therefore the release of RC1 > is postponed by one week. If anyone has time to look at this, that would be > great. > This should be addressed by https://github.com/rgommers/scipy/commit/a8b61ac0d. Can someone please check that that change is correct? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Thu Jan 20 09:13:22 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 20 Jan 2011 09:13:22 -0500 Subject: [SciPy-Dev] sparse rmatvec and maxentropy In-Reply-To: References: Message-ID: On Tue, Jan 18, 2011 at 8:12 PM, wrote: > On Tue, Jan 18, 2011 at 8:05 PM, ? wrote: >> I don't know much about these two parts of scipy. 
>>
>> Running an example for scipy.maxentropy raises an exception after upgrading to scipy trunk.
>>
>> I didn't manage to find out with a Trac search whether rmatvec still exists.
>>
>> scipy.maxentropy maxentutils is still using it.
>>
>> Traceback (most recent call last):
>>   File "C:\Josef\eclipsegworkspace\statsmodels-josef-experimental-gsoc\scikits\statsmodels\sandbox\examples\example_maxent.py", line 29, in <module>
>>     model.fit(K[i])
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 226, in fit
>>   File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 636, in fmin_cg
>>   File "\Programs\Python25\Lib\site-packages\scipy\optimize\optimize.py", line 176, in function_wrapper
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 420, in grad
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 702, in expectations
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 737, in pmf
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentropy.py", line 719, in logpmf
>>   File "\Programs\Python25\Lib\site-packages\scipy\maxentropy\maxentutils.py", line 381, in innerprodtranspose
>>   File "C:\Josef\_progs\Subversion\scipy_trunk_g09\dist\scipy-0.10.0.dev7027.win32\Programs\Python25\Lib\site-packages\scipy\sparse\base.py", line 384, in __getattr__
>>     raise AttributeError(attr + " not found")
>> AttributeError: rmatvec not found
>
> 3 of the 4 examples in scipy/maxentropy/examples raise an exception, 2 of them with the rmatvec not found error. The other one doesn't find scipy.sandbox.

I remember seeing the deprecation warnings for this now, but I never worked much with the bigmodel. I think you can just use * now instead of the rmatvec for the sparse matrices. You can see this in 0.8.0 where both are still there.

http://projects.scipy.org/scipy/browser/tags/0.8.0/scipy/sparse/base.py

I picked up the montecarlo code when I was playing around with these.

http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/

I'm curious if the maxentropy stuff as it is in scipy wouldn't find more use and maintenance in scikits.learn. The implementation is somewhat use-specific (natural language processing), though this is not by any means set in stone.

Skipper

From ralf.gommers at googlemail.com  Sat Jan 22 08:50:49 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 22 Jan 2011 21:50:49 +0800
Subject: [SciPy-Dev] sparse rmatvec and maxentropy

On Thu, Jan 20, 2011 at 10:13 PM, Skipper Seabold wrote:
> On Tue, Jan 18, 2011 at 8:12 PM, wrote:
>> [...]
>>> I didn't manage to find out with a Trac search whether rmatvec still exists.

It was removed in r6996. I missed this one call to it, will clean it up.

>> 3 of the 4 examples in scipy/maxentropy/examples raise an exception, 2 of them with the rmatvec not found error. The other one doesn't find scipy.sandbox.
>
> I remember seeing the deprecation warnings for this now, but I never worked much with the bigmodel. I think you can just use * now instead of the rmatvec for the sparse matrices. You can see this in 0.8.0 where both are still there.

A.rmatvec(v) should be A.conj().transpose() * v. But that does not make the examples work, they just fail later on. The one that imports scipy.sandbox hasn't worked for a while either.
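A minimal sketch of that replacement, assuming a scipy.sparse matrix A and a dense vector v (the matrix here is illustrative):

>>> import numpy as np
>>> from scipy import sparse
>>> A = sparse.csr_matrix(np.array([[1.0, 2.0], [3.0, 4.0]]))
>>> v = np.array([1.0, 1.0])
>>> A.conj().transpose() * v   # what the removed A.rmatvec(v) computed
array([ 4.,  6.])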
> http://projects.scipy.org/scipy/browser/tags/0.8.0/scipy/sparse/base.py
>
> I picked up the montecarlo code when I was playing around with these.
>
> http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/
>
> I'm curious if the maxentropy stuff as it is in scipy wouldn't find more use and maintenance in scikits.learn. The implementation is somewhat use-specific (natural language processing), though this is not by any means set in stone.

Probably, but wouldn't it need a lot of work before it could be moved? It has a grand total of one test, mostly non-working examples, and is obviously hardly used at all (see r6919 and r6920 for more examples of broken code).

Perhaps it's worth asking the scikits.learn guys, and otherwise consider deprecating it if they're not interested?

Cheers,
Ralf

From josef.pktd at gmail.com  Sat Jan 22 09:44:15 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 22 Jan 2011 09:44:15 -0500
Subject: [SciPy-Dev] sparse rmatvec and maxentropy

On Sat, Jan 22, 2011 at 8:50 AM, Ralf Gommers wrote:
> [...]
> Perhaps it's worth asking the scikits.learn guys, and otherwise consider deprecating it if they're not interested?
I haven't seen or heard anyone using it besides Skipper. There are also still some features that were designed for pysparse and never fully updated to scipy.sparse:

http://projects.scipy.org/scipy/ticket/856

I also thought deprecating and removing maxentropy would be the best idea, if nobody volunteers to give it a workout.

Josef

From bravo.loic at gmail.com  Sun Jan 23 06:48:31 2011
From: bravo.loic at gmail.com (Loïc Berthe)
Date: Sun, 23 Jan 2011 12:48:31 +0100
Subject: [SciPy-Dev] Edit rights for doc.scipy.org
Message-ID: <20110123124831.35b84b23@TinyMe>

Hi!

Could you give me edit rights for doc.scipy.org? My username is loic. I would be happy to review some documentation and correct some mistakes.

-- Loic

From gael.varoquaux at normalesup.org  Sun Jan 23 07:28:04 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 23 Jan 2011 13:28:04 +0100
Subject: [SciPy-Dev] Edit rights for doc.scipy.org
Message-ID: <20110123122804.GB11273@phare.normalesup.org>

On Sun, Jan 23, 2011 at 12:48:31PM +0100, Loïc Berthe wrote:
> Could you give me edit rights for doc.scipy.org? My username is loic.

You should be good to go. Thanks for your interest,

Gael

From ralf.gommers at googlemail.com  Sun Jan 23 09:58:42 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 23 Jan 2011 22:58:42 +0800
Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 1

Hi,

I am pleased to announce the availability of the first release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7. Please try this release candidate and report any problems on the scipy-dev mailing list.

Binaries, sources and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc1/. Note that some OS X binaries are not uploaded yet, they should follow in the next few days.

Enjoy,
Ralf

From ralf.gommers at googlemail.com  Mon Jan 24 09:35:54 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 24 Jan 2011 22:35:54 +0800
Subject: [SciPy-Dev] future of maxentropy module (was: sparse rmatvec and maxentropy)

(excuse the cross-post, but this may be of interest to scipy-user and the scikits.learn crowd)

On Sat, Jan 22, 2011 at 10:44 PM, wrote:
> On Sat, Jan 22, 2011 at 8:50 AM, Ralf Gommers wrote:
>> On Thu, Jan 20, 2011 at 10:13 PM, Skipper Seabold wrote:
>>> I picked up the montecarlo code when I was playing around with these.
>>>
>>> http://bazaar.launchpad.net/~jsseabold/statsmodels/statsmodels-skipper-maxent/files/head:/scikits/statsmodels/sandbox/maxentropy/
>>>
>>> I'm curious if the maxentropy stuff as it is in scipy wouldn't find more use and maintenance in scikits.learn.
>>> The implementation is somewhat use-specific (natural language processing), though this is not by any means set in stone.
>>
>> Probably, but wouldn't it need a lot of work before it could be moved? It has a grand total of one test, mostly non-working examples, and is obviously hardly used at all (see r6919 and r6920 for more examples of broken code).
>>
>> Perhaps it's worth asking the scikits.learn guys, and otherwise consider deprecating it if they're not interested?
>
> I haven't seen or heard anyone using it besides Skipper. There are also still some features that were designed for pysparse and never fully updated to scipy.sparse: http://projects.scipy.org/scipy/ticket/856
>
> I also thought deprecating and removing maxentropy would be the best idea, if nobody volunteers to give it a workout.

So I guess we just have to ask this out loud: is anyone using the scipy.maxentropy module or interested in doing so? If you are, would you be interested in putting some work into it, like making the examples work and adding some tests?

The current status is that 3 out of 4 examples are broken, the module has only a single test, and from broken code that went unnoticed for a long time it is clear that there are very few users.

If no one steps up, I propose to deprecate the module for the 0.10 release. If there are any users out there that missed this email and step up then, we can always un-deprecate again.

To the scikits.learn developers: would this code fit better and see more use in scikits.learn than in scipy? Would you be interested to pick it up?

Ralf

From olivier.grisel at ensta.org  Mon Jan 24 09:56:13 2011
From: olivier.grisel at ensta.org (Olivier Grisel)
Date: Mon, 24 Jan 2011 15:56:13 +0100
Subject: [SciPy-Dev] [Scikit-learn-general] future of maxentropy module (was: sparse rmatvec and maxentropy)

2011/1/24 Ralf Gommers:
> To the scikits.learn developers: would this code fit better and see more use in scikits.learn than in scipy? Would you be interested to pick it up?

There is already a maxent model in scikits.learn which is a wrapper for LibLinear: scikits.learn.linear_model.LogisticRegression

AFAIK, LibLinear is pretty much state of the art, so I don't think the scikits.learn project is interested in reusing this code.

Best,

-- Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
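For reference, a minimal usage sketch of that estimator, assuming the scikits.learn API of the time; the data here is purely illustrative:

>>> import numpy as np
>>> from scikits.learn.linear_model import LogisticRegression
>>> X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.5]])
>>> y = np.array([0, 0, 1, 1])
>>> clf = LogisticRegression(penalty='l2', C=1.0).fit(X, y)
>>> clf.predict([[2.5, 3.0]])   # should predict the second class
array([1])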
From jsseabold at gmail.com  Mon Jan 24 10:03:02 2011
From: jsseabold at gmail.com (Skipper Seabold)
Date: Mon, 24 Jan 2011 10:03:02 -0500
Subject: [SciPy-Dev] [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy)

On Mon, Jan 24, 2011 at 9:35 AM, Ralf Gommers wrote:
> [...]
> So I guess we just have to ask this out loud: is anyone using the scipy.maxentropy module or interested in doing so? If you are, would you be interested in putting some work into it, like making the examples work and adding some tests?
>
> The current status is that 3 out of 4 examples are broken, the module has only a single test, and from broken code that went unnoticed for a long time it is clear that there are very few users.

I just checked again, and I do have the examples working in statsmodels with scipy before rmatvec was removed, so it's not so dire. It just depends on the montecarlo code, so we would have to include this in an install if we want the examples to run. I can make a branch that includes this code if there's interest to keep it and have the examples work.

> If no one steps up, I propose to deprecate the module for the 0.10 release. If there are any users out there that missed this email and step up then, we can always un-deprecate again.

I do use things from the code, i.e., scipy.maxentropy.logsumexp, so I wouldn't want to lose that at the very least.

Skipper

From david.kremer.dk at gmail.com  Mon Jan 24 10:15:43 2011
From: david.kremer.dk at gmail.com (David Kremer)
Date: Mon, 24 Jan 2011 16:15:43 +0100
Subject: [SciPy-Dev] [feature request] Smoothing data library

Hello, I'm a scipy user who uses it for academic research.

I would like to ask you if you could design a scipy.smooth module, because currently the smoothing abilities of scipy are rather poor.

Moreover, academic papers on smoothing 1D and 2D data sets do exist, so it would be cool if you would implement some of them.

Thank you very much

From pav at iki.fi  Mon Jan 24 11:46:06 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 24 Jan 2011 16:46:06 +0000 (UTC)
Subject: [SciPy-Dev] [feature request] Smoothing data library

Mon, 24 Jan 2011 16:15:43 +0100, David Kremer wrote:
> Hello, I'm a scipy user who uses it for academic research.

So am I :)

> I would like to ask you if you could design a scipy.smooth module, because currently the smoothing abilities of scipy are rather poor.
>
> Moreover, academic papers on smoothing 1D and 2D data sets do exist, so it would be cool if you would implement some of them.

Data smoothing is not exactly a small field. Here, you need to be more specific: which smoothing methods are you missing? Do you have references to them? What use cases do you have in mind?
-- Pauli Virtanen

From jjstickel at vcn.com  Mon Jan 24 13:26:16 2011
From: jjstickel at vcn.com (Jonathan Stickel)
Date: Mon, 24 Jan 2011 11:26:16 -0700
Subject: [SciPy-Dev] [feature request] Smoothing data library
Message-ID: <4D3DC448.2010504@vcn.com>

On 1/24/11 11:00, scipy-dev-request at scipy.org wrote:
> [...]
> I would like to ask you if you could design a scipy.smooth module, because currently the smoothing abilities of scipy are rather poor.

There are a few things scattered about. Smoothing splines are available in scipy.interpolate:

http://docs.scipy.org/doc/scipy/reference/interpolate.html

I submitted a scikit for smoothing by regularization of 1D arrays:

http://pypi.python.org/pypi/scikits.datasmooth/0.5

I am open to contributions for improved packaging, documentation, and examples. I, too, think it would be nice if things were collected in a scipy module.

Regards,
Jonathan

From pav at iki.fi  Mon Jan 24 15:54:56 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 24 Jan 2011 20:54:56 +0000 (UTC)
Subject: [SciPy-Dev] [feature request] Smoothing data library

On Mon, 24 Jan 2011 11:26:16 -0700, Jonathan Stickel wrote:
[clip]
> I submitted a scikit for smoothing by regularization of 1D arrays:
>
> http://pypi.python.org/pypi/scikits.datasmooth/0.5

For N-d smoothing, there's this:

http://pav.iki.fi/blog/2010-09-19/nd-smoothing.html

which seems a bit similar to regularization smoothing.

[clip]
> I, too, think it would be nice if things were collected in a scipy module.

That might be a good idea. If smoothing is well-defined enough, it could be separated from interpolation. For instance, much of the spline stuff could be copied there, and the corresponding routines in scipy.interpolate be deprecated. Parts of the FITPACK stuff seem more smoothing- than interpolation-oriented...

But really, there would be room for an expert to suggest which smoothing algorithms would be the most needed.

-- Pauli Virtanen

From david.kremer.dk at gmail.com  Tue Jan 25 04:29:04 2011
From: david.kremer.dk at gmail.com (David Kremer)
Date: Tue, 25 Jan 2011 10:29:04 +0100
Subject: [SciPy-Dev] [feature request] Smoothing data library

I'm a PhD student in spectroscopy physics, and while looking for academic smoothing methods, I found this page: [http://vanha.physics.utu.fi/optics/math.html]

Actually, I know that Origin (software) has three algorithms (and more with wavelet transformations). We could start by implementing these three algorithms while creating a new "smooth" module inside of scipy. Obviously data smoothing is a broad field, but we could just provide at first the most widely used methods in, for example, matlab/scilab. I do not know these softwares, but we could start with the methods I gave as examples.

What do you think of it?

PS: References for the three methods previously given are the following.

William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992. pp. 650-655.

and

Savitzky, A. and Golay, M.J.E. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Analytical Chemistry 36(8):1627-1639, July 1964.
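To make the Savitzky-Golay reference concrete, a minimal smoothing sketch built on plain numpy -- an illustration only, not an existing scipy function; the window length and polynomial order are arbitrary choices:

import numpy as np

def savgol_smooth(y, window=11, order=3):
    # Fit a degree-`order` polynomial to each `window`-point neighbourhood
    # by least squares and keep the polynomial's value at the centre point.
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1)        # local design matrix
    coeffs = np.linalg.pinv(A)[-1]     # row that yields the constant term p(0)
    ypad = np.r_[y[half:0:-1], y, y[-2:-half-2:-1]]   # mirror the edges
    return np.convolve(ypad, coeffs[::-1], mode='valid')

# usage sketch: ysmooth = savgol_smooth(noisy_signal, window=11, order=3)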
From ralf.gommers at googlemail.com  Tue Jan 25 05:15:22 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 25 Jan 2011 18:15:22 +0800
Subject: [SciPy-Dev] [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy)

On Mon, Jan 24, 2011 at 11:03 PM, Skipper Seabold wrote:
> [...]
> I just checked again, and I do have the examples working in statsmodels with scipy before rmatvec was removed, so it's not so dire. It just depends on the montecarlo code, so we would have to include this in an install if we want the examples to run. I can make a branch that includes this code if there's interest to keep it and have the examples work.

The montecarlo code was removed for a reason I assume, so that would be even more work to include again.... On the scikits.learn list someone said the maxentropy examples are nice, so perhaps they could be made to work with (translated to) the logistic regression code in scikits.learn.

>> If no one steps up, I propose to deprecate the module for the 0.10 release. If there are any users out there that missed this email and step up then, we can always un-deprecate again.
>
> I do use things from the code, i.e., scipy.maxentropy.logsumexp, so I wouldn't want to lose that at the very least.

That's a 3-line long utility function, I'm sure a place could be found for it. Anyway, I'm not proposing to throw the code out tomorrow - after 0.10 is out for a while we could go through it and move anything useful.

Cheers,
Ralf
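For readers unfamiliar with it, a sketch of what that utility computes -- log(sum(exp(a))) evaluated without overflow; this is an illustration of the idea, not necessarily the exact scipy source:

>>> import numpy as np
>>> def logsumexp(a):
...     a = np.asarray(a)
...     a_max = a.max()          # shift by the max so exp() cannot overflow
...     return a_max + np.log(np.sum(np.exp(a - a_max)))
...
>>> logsumexp([1000.0, 1000.0])  # the naive formula would overflow here
1000.6931471805599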
From bouloumag at gmail.com  Tue Jan 25 11:20:58 2011
From: bouloumag at gmail.com (Darcoux Christine)
Date: Tue, 25 Jan 2011 11:20:58 -0500
Subject: [SciPy-Dev] request of a FGMRES krylov solver

2011/1/12 Thélesphonse Bigorneault:
> I would like to have a FGMRES Krylov solver in scipy. FGMRES is a variant of the GMRES method with right preconditioning that enables the use of a different preconditioner at each step of the Arnoldi process.

I vote for this!

Christine
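For context, a fixed preconditioner can already be passed to the existing GMRES solver; a minimal sketch, assuming scipy.sparse.linalg.gmres and an illustrative Jacobi (diagonal) preconditioner -- FGMRES would additionally allow M to change between iterations:

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import LinearOperator, gmres
>>> A = csr_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
>>> b = np.array([1.0, 2.0])
>>> M = LinearOperator(A.shape, matvec=lambda x: x / A.diagonal())
>>> x, info = gmres(A, b, M=M)
>>> info   # 0 means the iteration converged
0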
From schmidtc at gmail.com  Tue Jan 25 11:32:35 2011
From: schmidtc at gmail.com (Charles R. Schmidt)
Date: Tue, 25 Jan 2011 08:32:35 -0800
Subject: [SciPy-Dev] Another Bug in KDTree

Has anyone been able to verify this bug? Or know if it's already documented somewhere?

Thanks,
Charlie.

On Wed, Jan 19, 2011 at 5:31 PM, Charles R. Schmidt wrote:
> There appears to be a bug in KDTree.query_ball_point: when using integer points, neighbor points that lie on the radius are not always returned.
>
> E.g., with a 4x4 grid:
>
> >>> import scipy.spatial
> >>> points = [(0, 0), (0, 1), (0, 2), (0, 3),
> ...           (1, 0), (1, 1), (1, 2), (1, 3),
> ...           (2, 0), (2, 1), (2, 2), (2, 3),
> ...           (3, 0), (3, 1), (3, 2), (3, 3)]
> >>> kd = scipy.spatial.KDTree(points)
> >>> kd.query_ball_point((2,0), 1)
> [8, 9, 12]
>
> >>> points = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0),
> ...           (1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0),
> ...           (2.0, 0.0), (2.0, 1.0), (2.0, 2.0), (2.0, 3.0),
> ...           (3.0, 0.0), (3.0, 1.0), (3.0, 2.0), (3.0, 3.0)]
> >>> kd = scipy.spatial.KDTree(points)
> >>> kd.query_ball_point((2,0), 1)
> [4, 8, 9, 12]

From gael.varoquaux at normalesup.org  Tue Jan 25 16:08:18 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 25 Jan 2011 22:08:18 +0100
Subject: [SciPy-Dev] [SciPy-User] future of maxentropy module (was: sparse rmatvec and maxentropy)
Message-ID: <20110125210818.GF13877@phare.normalesup.org>

On Tue, Jan 25, 2011 at 06:15:22PM +0800, Ralf Gommers wrote:
> On the scikits.learn list someone said the maxentropy examples are nice, so perhaps they could be made to work with (translated to) the logistic regression code in scikits.learn.

OK, I'll see what we can do. I had a quick look at the examples, and they seemed so synthetic that I couldn't get the point. But then again, I am not a Natural Language Processing guy, so I'll see if I can get an NLP guy to translate (and explain) the examples for the scikit.

Gaël

From ralf.gommers at googlemail.com  Thu Jan 27 06:02:36 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Thu, 27 Jan 2011 19:02:36 +0800
Subject: [SciPy-Dev] Another Bug in KDTree

On Wed, Jan 26, 2011 at 12:32 AM, Charles R. Schmidt wrote:
> Has anyone been able to verify this bug? Or know if it's already documented somewhere?

Same output on my system, looks like a bug. I don't think it's documented anywhere, could you open a ticket for it?

Thanks,
Ralf

From schmidtc at gmail.com  Thu Jan 27 13:32:54 2011
From: schmidtc at gmail.com (Charles R. Schmidt)
Date: Thu, 27 Jan 2011 10:32:54 -0800
Subject: [SciPy-Dev] Another Bug in KDTree

Thanks for confirming. I created a ticket:

http://projects.scipy.org/scipy/ticket/1373

- Charlie.

From P.Schellart at astro.ru.nl  Thu Jan 27 13:48:21 2011
From: P.Schellart at astro.ru.nl (Pim Schellart)
Date: Thu, 27 Jan 2011 19:48:21 +0100
Subject: [SciPy-Dev] Lomb-Scargle periodogram
Message-ID: <24668303-DF8C-4FE2-BA40-519CB3DE3AD7@gmail.com>

Dear SciPy developers,

I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). After some iterations of review, recoding, adding documentation and adding unit tests, the code (according to Ralf Gommers) is ready to go in. Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. Now the time has come to discuss where to put it, and Ralf has suggested having this discussion here:

"I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch: https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good; whether it really should go into signal should be discussed on the mailing list (but it seemed like the logical place for it). ... The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows adding similar methods later."

I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages.
1. It is a short name without underscores (personal preference).
2. It is a category name that can comprise several similar algorithms as desired.
3. It is immediately obvious from this name that this is different from FFT, and therefore people will not be confused looking for FFT functions in this sub-module.
4. The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note, this may help people who want to find more information, or people browsing Wikipedia who are looking for an implementation).

Please let me know what you think.

Kind regards,

Pim Schellart
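A usage sketch of the proposed function, assuming it lands as scipy.signal.lombscargle with the interface from the ticket (sample times x, values y, angular frequencies freqs); the data is illustrative:

>>> import numpy as np
>>> from scipy.signal import lombscargle    # assumes the branch is merged
>>> rng = np.random.RandomState(0)
>>> x = np.sort(10 * rng.rand(200))               # uneven sample times
>>> y = np.sin(2 * np.pi * 1.5 * x)               # 1.5 Hz sinusoid
>>> freqs = 2 * np.pi * np.linspace(0.1, 4, 400)  # angular frequencies
>>> pgram = lombscargle(x, y, freqs)
>>> freqs[pgram.argmax()] / (2 * np.pi)           # peak should be near 1.5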
From matthew.brett at gmail.com  Thu Jan 27 14:44:05 2011
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 27 Jan 2011 11:44:05 -0800
Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 1

Hi Ralf,

> I am pleased to announce the availability of the first release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7. Please try this release candidate and report any problems on the scipy-dev mailing list.

Any chance you'd accept my recent bugfix to the matlab io for the release?

Thanks a lot,

Matthew

From ralf.gommers at googlemail.com  Fri Jan 28 06:37:30 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Fri, 28 Jan 2011 19:37:30 +0800
Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 1

On Fri, Jan 28, 2011 at 3:44 AM, Matthew Brett wrote:
> Any chance you'd accept my recent bugfix to the matlab io for the release?

"fix read of empty char arrays with non-zero leading dimensions"? It sounds like a corner case and it is not in Python, so I'd rather not put it in at the last moment. It looks like we will need a second RC though because of ticket 1210, so if you think it's an important fix and explain why, you could change my mind.

Cheers,
Ralf

From matthew.brett at gmail.com  Fri Jan 28 14:06:57 2011
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 28 Jan 2011 11:06:57 -0800
Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 1

Hi,

>> Any chance you'd accept my recent bugfix to the matlab io for the release?
>
> "fix read of empty char arrays with non-zero leading dimensions"? It sounds like a corner case and it is not in Python, so I'd rather not put it in at the last moment.

I can't say how common the offending input is - but it does look like it's possible to do it from matlab (i.e. the matrices are valid matlab matrices, unlike some more pathological cases we've had). I understand your reluctance, I don't feel very strongly, let's wait then for the next release.

Thanks,

Matthew

From david.kremer.dk at gmail.com  Sat Jan 29 13:35:35 2011
From: david.kremer.dk at gmail.com (David Kremer)
Date: Sat, 29 Jan 2011 19:35:35 +0100
Subject: [SciPy-Dev] Starting a scikit for NUFFT
Message-ID: <201101291935.35724.david.kremer.dk@gmail.com>

Hello,

I would like to start a scikit on NUFFT. I have already gathered some material to start my work.

Source code here:
http://www.cims.nyu.edu/cmcl/nufft/nufft.html

References here:

[DR] Fast Fourier transforms for nonequispaced data, A. Dutt and V. Rokhlin, SIAM J. Sci. Comput. 14, 1368-1383, 1993.

[GL] Accelerating the Nonuniform Fast Fourier Transform, L. Greengard and J.-Y. Lee, SIAM Review 46, 443-454 (2004).

I'm not an expert on it. Actually, I do need to perform a NUFFT, but I would appreciate any comment on the way it should be implemented. Please provide me with good information on the best way to build a scikit.

Thank you very much.

David Kremer

From ralf.gommers at googlemail.com  Sun Jan 30 11:11:08 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 31 Jan 2011 00:11:08 +0800
Subject: [SciPy-Dev] Starting a scikit for NUFFT

On Sun, Jan 30, 2011 at 2:35 AM, David Kremer wrote:
> Hello,
>
> I would like to start a scikit on NUFFT.

That could actually be much easier than putting it in scipy, like you (at least I assume it was you) recently proposed on both the numpy and scipy Trac. A scikit can be GPL, so you could look at the code you link to below, and if it's of reasonable quality just wrap it and be done.

Cheers,
Ralf

> Source code here:
> http://www.cims.nyu.edu/cmcl/nufft/nufft.html

From david.kremer.dk at gmail.com  Sun Jan 30 12:31:34 2011
From: david.kremer.dk at gmail.com (David Kremer)
Date: Sun, 30 Jan 2011 18:31:34 +0100
Subject: [SciPy-Dev] Starting a scikit for NUFFT
Message-ID: <201101301831.34529.david.kremer.dk@gmail.com>

> GPL, so you could look at the code you link to below, and if it's of reasonable quality just wrap it and be done.

Actually, yes, it was me.

Could you please tell me if there is an example of Fortran code being packaged into scipy? It would help me to work from such an example to build the NUFFT library proposed.

Thank you very much.

David

From pav at iki.fi  Sun Jan 30 15:18:07 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 30 Jan 2011 20:18:07 +0000 (UTC)
Subject: [SciPy-Dev] Starting a scikit for NUFFT

On Sun, 30 Jan 2011 18:31:34 +0100, David Kremer wrote:
> Could you please tell me if there is an example of Fortran code being packaged into scipy? It would help me to work from such an example to build the NUFFT library proposed.

These may be helpful:

http://projects.scipy.org/scikits/browser/trunk/example

For wrapping Fortran code, check e.g. setup.py in the following package:

http://pypi.python.org/pypi/scikits.bvp_solver/0.3.0
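For the wrapping step itself, the usual tool is f2py, which ships with numpy. A minimal sketch -- the module and file names here are made up for illustration:

# Given a Fortran source file mysub.f, build a Python extension module:
#
#   f2py -c -m mynufft mysub.f
#
# and then use the generated wrapper from Python:

>>> import mynufft
>>> print mynufft.__doc__   # f2py generates docstrings for each routine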
From ralf.gommers at googlemail.com  Sun Jan 30 20:50:33 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 31 Jan 2011 09:50:33 +0800
Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2

Hi,

I am pleased to announce the availability of the second release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7.

Due to the Sourceforge outage I am not able to put binaries on the normal download site right now, that will probably only happen in a week or so. If you want to try the RC now please build from svn, and report any issues.

Changes since release candidate 1:
- fixes for build problems with MSVC + MKL (#1210, #1376)
- fix pilutil test to work with numpy master branch
- fix constants.codata to be backwards-compatible

Enjoy,
Ralf

From dagss at student.matnat.uio.no  Mon Jan 31 09:48:54 2011
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 31 Jan 2011 15:48:54 +0100
Subject: [SciPy-Dev] git-friendly cythonize script + Tempita templating
Message-ID: <4D46CBD6.50601@student.matnat.uio.no>

It's getting close to the finish of my Fwrap work on SciPy. I'll get something to the list on that soon. Meanwhile, here's a small start re: Cython vs. SciPy:

cythonize.py
------------

I've been working with a lot of Cython files. Since Cython isn't (currently at least) invoked by the build system and is not a build-time dependency, I created a development-time utility script to scan for Cython files and invoke Cython in a way that is fast and hopefully safe with git (I had a Makefile, but that didn't work well with switching git branches).

https://github.com/dagss/scipy-refactor/blob/cythonize/cythonize.py

(Ignore the "-refactor", it is based off scipy/master.) It:

- First, checks if there have been any changes since the last time it was regenerated by you (checked by sha1 hash against a local "cythonize.dat").
- Then, checks with git to see if the .c has been committed in a child commit to the .pyx (it is assumed you use git).
- Finally, resorts to invoking Cython.

I'm sure it needs some polish/Python 3 support etc. (but keep in mind that it is a development-time script only). Also, it won't currently strip comments like, e.g., generate_qhull.py did. Feedback welcome.

Tempita
-------

I needed a template language, and since Robert Kern recommended Tempita, I went with that for the wrappers I made with Fwrap:

http://pythonpaste.org/tempita/

cythonize.py above will interpret ".pyx.in" as being Tempita-templated Cython source code. Later (too late) I discovered that Mako was already in use in SciPy in interpolate/interpnd.pyx. In the "cythonize" branch linked to above I've converted it to use Tempita instead, in case you agree with that. The advantage with Tempita is that it fits in a single ~1000-line Python file, so it should be convenient to bundle it with the build system at a later date.

Dag Sverre
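A small sketch of the Tempita flavour used for such ".pyx.in" files -- the template text is invented for illustration:

>>> import tempita
>>> tmpl = tempita.Template("""
... {{for T in types}}
... def add_{{T}}(x, y):
...     return x + y
... {{endfor}}""")
>>> print tmpl.substitute(types=['float', 'double'])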
From lists at onerussian.com  Mon Jan 31 13:25:18 2011
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Mon, 31 Jan 2011 13:25:18 -0500
Subject: [SciPy-Dev] legendre regression
Message-ID: <20110131182518.GU8747@onerussian.com>

Switching to scipy (0.8.0) + numpy (1.5.1) from scipy (0.7.2) + numpy (1.4.1) reveals a regression:

*(Pdb) print legendre(1)(N.array([-0.999999999999 , -0.6, -0.2,  0.2,  0.6,  1. ]))
[-1.  -0.6 -0.2  0.2  0.6  1. ]

*(Pdb) print legendre(1)(N.array([-1 , -0.6, -0.2,  0.2,  0.6,  1. ]))
[ inf -0.6 -0.2  0.2  0.6  1. ]

while before it was giving the correct boundary value (-1).

Could anyone please check if it is still present in current HEAD? IMHO it is worth a bug report (I haven't run into a similar one upon a Trac query, but decided to check -- maybe it was resolved already).

=------------------------------------------------------------------=
Keep in touch                                     www.onerussian.com
Yaroslav Halchenko                 www.ohloh.net/accounts/yarikoptic

From lists at onerussian.com  Mon Jan 31 13:28:08 2011
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Mon, 31 Jan 2011 13:28:08 -0500
Subject: [SciPy-Dev] legendre regression
Message-ID: <20110131182808.GV8747@onerussian.com>

more fun (and pardon our elderly import numpy as N) ;-)

*(Pdb) legendre(1)(N.array([-1.]))
array([ inf])
(Pdb) legendre(1)([-1.])
array([-1.])

On Mon, 31 Jan 2011, Yaroslav Halchenko wrote:
> Switching to scipy (0.8.0) + numpy (1.5.1) from scipy (0.7.2) + numpy (1.4.1) reveals a regression:
> [...]
> while before it was giving the correct boundary value (-1).

From warren.weckesser at enthought.com  Mon Jan 31 13:32:57 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Mon, 31 Jan 2011 12:32:57 -0600
Subject: [SciPy-Dev] legendre regression

On Mon, Jan 31, 2011 at 12:25 PM, Yaroslav Halchenko wrote:
> Could anyone please check if it is still present in current HEAD? IMHO it is worth a bug report (I haven't run into a similar one upon a Trac query, but decided to check -- maybe it was resolved already).
This has been fixed; see ticket #1298:

http://projects.scipy.org/scipy/ticket/1298

Warren

From josef.pktd at gmail.com  Mon Jan 31 13:38:20 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 31 Jan 2011 13:38:20 -0500
Subject: [SciPy-Dev] legendre regression

On Mon, Jan 31, 2011 at 1:28 PM, Yaroslav Halchenko wrote:
> more fun (and pardon our elderly import numpy as N) ;-)
>
> *(Pdb) legendre(1)(N.array([-1.]))
> array([ inf])
> (Pdb) legendre(1)([-1.])
> array([-1.])

mine looks ok,

>>> special.legendre(1)(np.array([-0.999999999999 , -0.6, -0.2, 0.2, 0.6, 1. ]))
array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ])
>>> special.legendre(1)(np.array([-1 , -0.6, -0.2, 0.2, 0.6, 1. ]))
array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ])
>>> special.legendre(1)(np.array([-1,]))
array([-1.])
>>> special.legendre(1)(1.)
1.0
>>> special.legendre(1)(-1.)
-1.0
>>> special.legendre(1)([-1.])
array([-1.])
>>> import scipy
>>> scipy.__version__
'0.10.0.dev7027'

numpy 1.5.1

Josef

From charlesr.harris at gmail.com  Mon Jan 31 13:43:19 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 31 Jan 2011 11:43:19 -0700
Subject: [SciPy-Dev] legendre regression

On Mon, Jan 31, 2011 at 11:25 AM, Yaroslav Halchenko wrote:
> Switching to scipy (0.8.0) + numpy (1.5.1) from scipy (0.7.2) + numpy (1.4.1) reveals a regression:
> [...]
> Could anyone please check if it is still present in current HEAD? IMHO it is worth a bug report (I haven't run into a similar one upon a Trac query, but decided to check -- maybe it was resolved already).
You might also be interested in the Legendre polynomial series in Numpy.

In [1]: from numpy.polynomial import Legendre as L

In [2]: p = L([0,1])

In [3]: p([-1 , -0.6, -0.2, 0.2, 0.6, 1. ])
Out[3]: array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ])

Chuck

From lists at onerussian.com  Mon Jan 31 13:50:26 2011
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Mon, 31 Jan 2011 13:50:26 -0500
Subject: [SciPy-Dev] legendre regression
Message-ID: <20110131185026.GW8747@onerussian.com>

Thanks everyone! I didn't know about numpy's "copy" of the functionality... I guess it is bloody fresh:

In [1]: from numpy.polynomial import Legendre as L
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/home/yoh/proj/pymvpa/pymvpa/<ipython console> in <module>()
ImportError: cannot import name Legendre

In [2]: import numpy

In [3]: numpy.__version__
Out[3]: '1.5.1'

Meanwhile I worked out a temporary workaround by offsetting the 'inf' locations.
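One possible shape of such a workaround -- nudging the exact +/-1 endpoints inward before evaluating; the epsilon is an arbitrary illustrative choice:

>>> import numpy as np
>>> from scipy.special import legendre
>>> x = np.array([-1.0, -0.6, 0.6, 1.0])
>>> mask = np.abs(x) == 1
>>> x[mask] -= np.sign(x[mask]) * 1e-12   # offset the boundary points
>>> legendre(1)(x)
array([-1. , -0.6,  0.6,  1. ])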
The >> advantage with Tempita is that it fits in a single ~1000-line Python >> file, so that it should be convenient to bundle it with the build system >> on a later date. >> > I don't have strong feelings for one way or another. Since the templates > should be used only for simple stuff, whatever is used probably doesn't > matter too much. (Using them for more complicated stuff would be like > "Hi, I put code generation in your code generation" ;). > :-) Well, ideally Cython should cope with most of your code generation needs, but in practice its preprocessing/macro capabilities can be too limited sometimes. Hopefully we can meet in Munich and talk more about this... Dag Sverre From charlesr.harris at gmail.com Mon Jan 31 14:22:14 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 31 Jan 2011 12:22:14 -0700 Subject: [SciPy-Dev] legendre regression In-Reply-To: <20110131185026.GW8747@onerussian.com> References: <20110131182518.GU8747@onerussian.com> <20110131185026.GW8747@onerussian.com> Message-ID: On Mon, Jan 31, 2011 at 11:50 AM, Yaroslav Halchenko wrote: > Thanks everyone! didn't know about numpy's "copy" of the > functionality... I guess bloody fresh: > > In [1]: from numpy.polynomial import Legendre as L > --------------------------------------------------------------------------- > ImportError Traceback (most recent call last) > > /home/yoh/proj/pymvpa/pymvpa/ in () > > ImportError: cannot import name Legendre > > In [2]: import numpy > > In [3]: numpy.__version__ > Out[3]: '1.5.1' > > Ah, I forgot when it went in. It has more functionality than you get with the scipy special versions, for instance, multiplication/division of series. In [1]: from numpy.polynomial import Legendre as L In [2]: p1 = L([0,0,1]) In [3]: p2 = L([0,0,0,1]) In [4]: p2*p1 Out[4]: Legendre([ 0. , 0.25714286, 0. , 0.26666667, 0. , 0.47619048], [-1., 1.]) In [5]: quo, rem = divmod(p2,p1) In [6]: p1*quo + rem Out[6]: Legendre([ 0., 0., 0., 1.], [-1., 1.]) It's a standalone python/numpy module, you can just copy the folder from trunk and get the current functionality. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Jan 31 16:21:25 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 31 Jan 2011 21:21:25 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 References: Message-ID: Hi, On Mon, 31 Jan 2011 09:50:33 +0800, Ralf Gommers wrote: > I am pleased to announce the availability of the second release > candidate of SciPy 0.9.0. This will be the first SciPy release to > include support for Python 3 (all modules except scipy.weave), as well > as for Python 2.7. > > Due to the Sourceforge outage I am not able to put binaries on the > normal download site right now, that will probably only happen in a week > or so. If you want to try the RC now please build from svn, and report > any issues. When you build from SVN, please test the latest 0.9.x branch (and not the 0.9.0rc2 tag): http://svn.scipy.org/svn/scipy/branches/0.9.x It contains an additional last-minute bug fix, which we will want to have in 0.9.0. -- Pauli Virtanen From charlesr.harris at gmail.com Mon Jan 31 17:01:39 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 31 Jan 2011 15:01:39 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Sun, Jan 30, 2011 at 6:50 PM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the second release candidate > of SciPy 0.9.0. 
This will be the first SciPy release to include support > for Python 3 (all modules except scipy.weave), as well as for Python 2.7. > > Due to the Sourceforge outage I am not able to put binaries on the normal > download site right now, that will probably only happen in a week or so. If > you want to try the RC now please build from svn, and report any issues. > > Changes since release candidate 1: > - fixes for build problems with MSVC + MKL (#1210, #1376) > - fix pilutil test to work with numpy master branch > - fix constants.codata to be backwards-compatible > > I think there should be a fix for ndarray also, the problem with type 5 (int) not being recognized is that it checks for Int32, which on some (all?) 32 bit platforms is a long (7) rather than an int. I think this is a bug and it will cause problems with numpy 1.6. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Mon Jan 31 17:56:28 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 31 Jan 2011 14:56:28 -0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: <4D473E1C.3020503@uci.edu> On 1/30/2011 5:50 PM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the second release > candidate of SciPy 0.9.0. This will be the first SciPy release to > include support for Python 3 (all modules except scipy.weave), as well > as for Python 2.7. > > Due to the Sourceforge outage I am not able to put binaries on the > normal download site right now, that will probably only happen in a week > or so. If you want to try the RC now please build from svn, and report > any issues. > > Changes since release candidate 1: > - fixes for build problems with MSVC + MKL (#1210, #1376) > - fix pilutil test to work with numpy master branch > - fix constants.codata to be backwards-compatible > > Enjoy, > Ralf > I tested msvc9/ifort/MKL builds of scipy 0.9 rc2 with Python 2.6, 2.7, 3.1 and 3.2 on win32 and win-amd64. 
Besides the known problems with the MKL builds (, , ), the following tests fail on win32 only: ====================================================================== FAIL: test_linesearch.TestLineSearch.test_line_search_armijo ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest self.test(*self.arg) File "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line 201, in test_line_search_armijo assert_equal(fv, f(x + s*p)) File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 313, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 1.5675494536393939 DESIRED: 1.5675494536393932 ====================================================================== FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe1 ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest self.test(*self.arg) File "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py", line 164, in test_line_search_wolfe1 assert_equal(fv, f(x + s*p)) File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 313, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 19.185353513927268 DESIRED: 19.185353513927272 ====================================================================== FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe2 ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest self.test(*self.arg) File "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line 184, in test_line_search_wolfe2 assert_equal(fv, f(x + s*p)) File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 313, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 19.185353513927272 DESIRED: 19.185353513927268 ====================================================================== FAIL: Powell (direction set) optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python27\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 123, in test_powell assert_(self.funccalls == 116, self.funccalls) File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: 128 -- Christoph