From ralf.gommers at googlemail.com Tue Feb 1 23:04:18 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 2 Feb 2011 05:04:18 +0100 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Jan 30, 2011 at 6:50 PM, Ralf Gommers < > ralf.gommers at googlemail.com> wrote: > >> Hi, >> >> I am pleased to announce the availability of the second release candidate >> of SciPy 0.9.0. 
This will be the first SciPy release to include support >> for Python 3 (all modules except scipy.weave), as well as for Python 2.7. >> >> Due to the Sourceforge outage I am not able to put binaries on the normal >> download site right now, that will probably only happen in a week or so. If >> you want to try the RC now please build from svn, and report any issues. >> >> Changes since release candidate 1: >> - fixes for build problems with MSVC + MKL (#1210, #1376) >> - fix pilutil test to work with numpy master branch >> - fix constants.codata to be backwards-compatible >> >> > I think there should be a fix for ndarray also, the problem with type 5 > (int) not being recognized is that it checks for Int32, which on some (all?) > 32 bit platforms is a long (7) rather than an int. I think this is a bug and > it will cause problems with numpy 1.6. > You mean ndimage I guess. I don't really understand your explanation, if that's the case then those ndimage tests should have been failing before, right? If this needs to be fixed in scipy then we need an RC3. I would like to get that out by the 12th or so if possible. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Feb 1 23:07:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 2 Feb 2011 05:07:21 +0100 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: <4D473E1C.3020503@uci.edu> References: <4D473E1C.3020503@uci.edu> Message-ID: On Mon, Jan 31, 2011 at 11:56 PM, Christoph Gohlke wrote: > > > On 1/30/2011 5:50 PM, Ralf Gommers wrote: > > Hi, > > > > I am pleased to announce the availability of the second release > > candidate of SciPy 0.9.0. This will be the first SciPy release to > > include support for Python 3 (all modules except scipy.weave), as well > > as for Python 2.7. 
> > > > Due to the Sourceforge outage I am not able to put binaries on the > > normal download site right now, that will probably only happen in a week > > or so. If you want to try the RC now please build from svn, and report > > any issues. > > > > Changes since release candidate 1: > > - fixes for build problems with MSVC + MKL (#1210, #1376) > > - fix pilutil test to work with numpy master branch > > - fix constants.codata to be backwards-compatible > > > > Enjoy, > > Ralf > > > > > I tested msvc9/ifort/MKL builds of scipy 0.9 rc2 with Python 2.6, 2.7, > 3.1 and 3.2 on win32 and win-amd64. Besides the known problems with the > MKL builds (, > , > ), the following tests fail > on win32 only: > > For the first three failures I can adjust the test precision to make the tests pass. For the Powell failure, what is the actual number of funccalls? That type of assertion doesn't work too well, should be <= 116 instead of ==. Ralf > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_armijo > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest > self.test(*self.arg) > File > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > 201, in test_line_search_armijo > assert_equal(fv, f(x + s*p)) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 1.5675494536393939 > DESIRED: 1.5675494536393932 > > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe1 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest > 
self.test(*self.arg) > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py", > line 164, in test_line_search_wolfe1 > assert_equal(fv, f(x + s*p)) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 19.185353513927268 > DESIRED: 19.185353513927272 > > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe2 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in runTest > self.test(*self.arg) > File > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > 184, in test_line_search_wolfe2 > assert_equal(fv, f(x + s*p)) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 19.185353513927272 > DESIRED: 19.185353513927268 > > ====================================================================== > FAIL: Powell (direction set) optimization routine > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_optimize.py", > line 123, in test_powell > assert_(self.funccalls == 116, self.funccalls) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 34, > in assert_ > raise AssertionError(msg) > AssertionError: 128 > > -- > Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cgohlke at uci.edu Tue Feb 1 23:34:49 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 01 Feb 2011 20:34:49 -0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: <4D473E1C.3020503@uci.edu> Message-ID: <4D48DEE9.8020901@uci.edu> On 2/1/2011 8:07 PM, Ralf Gommers wrote: > > > On Mon, Jan 31, 2011 at 11:56 PM, Christoph Gohlke > wrote: > > > > On 1/30/2011 5:50 PM, Ralf Gommers wrote: > > Hi, > > > > I am pleased to announce the availability of the second release > > candidate of SciPy 0.9.0. This will be the first SciPy release to > > include support for Python 3 (all modules except scipy.weave), as well > > as for Python 2.7. > > > > Due to the Sourceforge outage I am not able to put binaries on the > > normal download site right now, that will probably only happen in > a week > > or so. If you want to try the RC now please build from svn, and report > > any issues. > > > > Changes since release candidate 1: > > - fixes for build problems with MSVC + MKL (#1210, #1376) > > - fix pilutil test to work with numpy master branch > > - fix constants.codata to be backwards-compatible > > > > Enjoy, > > Ralf > > > > > I tested msvc9/ifort/MKL builds of scipy 0.9 rc2 with Python 2.6, 2.7, > 3.1 and 3.2 on win32 and win-amd64. Besides the known problems with the > MKL builds (, > , > ), the following tests fail > on win32 only: > > Tor the first three failures I can adjust the test precision to make the > tests pass. For the Powell failure, what are the actual number of > funccalls? That type of assertion doesn't work too well, should be <= > 116 instead of ==. > Ralf I ran the test in a loop: most of the times the number of funccalls is 116, often 117, and sometimes as large as 128. Not sure why this test only fails on win32 and not win-amd64. 
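Ralf's suggestion of an upper bound instead of exact equality can be sketched as follows (the bound of 130 is illustrative, picked only to cover the 116-128 range Christoph reports; the real test would choose its own margin):

```python
from numpy.testing import assert_

# Function-call counts for the Powell test vary from run to run on win32
# (116, 117, sometimes as large as 128), so an exact equality check is
# fragile.
observed_funccalls = 128  # worst case reported in this thread

# Instead of assert_(observed_funccalls == 116, observed_funccalls),
# only guard against a clear efficiency regression:
assert_(observed_funccalls <= 130, repr(observed_funccalls))
```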
Christoph > > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_armijo > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > runTest > self.test(*self.arg) > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > 201, in test_line_search_armijo > assert_equal(fv, f(x + s*p)) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 1.5675494536393939 > DESIRED: 1.5675494536393932 > > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe1 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > runTest > self.test(*self.arg) > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py", > line 164, in test_line_search_wolfe1 > assert_equal(fv, f(x + s*p)) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 19.185353513927268 > DESIRED: 19.185353513927272 > > ====================================================================== > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe2 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > runTest > self.test(*self.arg) > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > 184, in test_line_search_wolfe2 > assert_equal(fv, f(x + s*p)) > File 
"X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 313, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 19.185353513927272 > DESIRED: 19.185353513927268 > > ====================================================================== > FAIL: Powell (direction set) optimization routine > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_optimize.py", > line 123, in test_powell > assert_(self.funccalls == 116, self.funccalls) > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line 34, > in assert_ > raise AssertionError(msg) > AssertionError: 128 > > -- > Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From charlesr.harris at gmail.com Tue Feb 1 23:57:35 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 1 Feb 2011 21:57:35 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Tue, Feb 1, 2011 at 9:04 PM, Ralf Gommers wrote: > > > On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Jan 30, 2011 at 6:50 PM, Ralf Gommers < >> ralf.gommers at googlemail.com> wrote: >> >>> Hi, >>> >>> I am pleased to announce the availability of the second release candidate >>> of SciPy 0.9.0. This will be the first SciPy release to include support >>> for Python 3 (all modules except scipy.weave), as well as for Python 2.7. >>> >>> Due to the Sourceforge outage I am not able to put binaries on the normal >>> download site right now, that will probably only happen in a week or so. 
If >>> you want to try the RC now please build from svn, and report any issues. >>> >>> Changes since release candidate 1: >>> - fixes for build problems with MSVC + MKL (#1210, #1376) >>> - fix pilutil test to work with numpy master branch >>> - fix constants.codata to be backwards-compatible >>> >>> >> I think there should be a fix for ndarray also, the problem with type 5 >> (int) not being recognized is that it checks for Int32, which on some (all?) >> 32 bit platforms is a long (7) rather than an int. I think this is a bug and >> it will cause problems with numpy 1.6. >> > You mean ndimage I guess. I don't really understand your explanation, if > that's the case then those ndimage tests should have been failing before, > right? > > If this needs to be fixed in scipy then we need an RC3. I would like to get > that out by the 12th or so if possible. > > I think it needs to be fixed as it is a bug. The problem is that ndimage checks types, but only by Int16, Int32, etc. On 32 bit systems and windows Int32 is a long (type# 7) and integers (type# 5), are treated as unrecognized types even though they are the same thing. Ndimage should accept both types. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Feb 2 00:11:35 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 2 Feb 2011 13:11:35 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Wed, Feb 2, 2011 at 12:57 PM, Charles R Harris wrote: > > > On Tue, Feb 1, 2011 at 9:04 PM, Ralf Gommers wrote: > >> >> >> On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Jan 30, 2011 at 6:50 PM, Ralf Gommers < >>> ralf.gommers at googlemail.com> wrote: >>> >>>> Hi, >>>> >>>> I am pleased to announce the availability of the second release >>>> candidate of SciPy 0.9.0. 
This will be the first SciPy release to >>>> include support for Python 3 (all modules except scipy.weave), as well as >>>> for Python 2.7. >>>> >>>> Due to the Sourceforge outage I am not able to put binaries on the >>>> normal download site right now, that will probably only happen in a week or >>>> so. If you want to try the RC now please build from svn, and report any >>>> issues. >>>> >>>> Changes since release candidate 1: >>>> - fixes for build problems with MSVC + MKL (#1210, #1376) >>>> - fix pilutil test to work with numpy master branch >>>> - fix constants.codata to be backwards-compatible >>>> >>>> >>> I think there should be a fix for ndarray also, the problem with type 5 >>> (int) not being recognized is that it checks for Int32, which on some (all?) >>> 32 bit platforms is a long (7) rather than an int. I think this is a bug and >>> it will cause problems with numpy 1.6. >>> >> You mean ndimage I guess. I don't really understand your explanation, if >> that's the case then those ndimage tests should have been failing before, >> right? >> >> If this needs to be fixed in scipy then we need an RC3. I would like to >> get that out by the 12th or so if possible. >> >> > > I think it needs to be fixed as it is a bug. The problem is that ndimage > checks types, but only by Int16, Int32, etc. On 32 bit systems and windows > Int32 is a long (type# 7) and integers (type# 5), are treated as > unrecognized types even though they are the same thing. Ndimage should > accept both types. > > I'm still a bit puzzled about why it's only failing after the recent changes in numpy master. Sounds like it's a bug, but it's not a regression and we are at rc2 right now. Do you (or does someone else) have time to fix it in the next week or so? The other option is to fix it in 0.9.1, which can come out at the same time as numpy 1.6.0. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Wed Feb 2 00:17:03 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 2 Feb 2011 13:17:03 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: <4D48DEE9.8020901@uci.edu> References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: On Wed, Feb 2, 2011 at 12:34 PM, Christoph Gohlke wrote: > > > On 2/1/2011 8:07 PM, Ralf Gommers wrote: > > > > > > On Mon, Jan 31, 2011 at 11:56 PM, Christoph Gohlke > > wrote: > > > > > > > > On 1/30/2011 5:50 PM, Ralf Gommers wrote: > > > Hi, > > > > > > I am pleased to announce the availability of the second release > > > candidate of SciPy 0.9.0. This will be the first SciPy release to > > > include support for Python 3 (all modules except scipy.weave), as > well > > > as for Python 2.7. > > > > > > Due to the Sourceforge outage I am not able to put binaries on the > > > normal download site right now, that will probably only happen in > > a week > > > or so. If you want to try the RC now please build from svn, and > report > > > any issues. > > > > > > Changes since release candidate 1: > > > - fixes for build problems with MSVC + MKL (#1210, #1376) > > > - fix pilutil test to work with numpy master branch > > > - fix constants.codata to be backwards-compatible > > > > > > Enjoy, > > > Ralf > > > > > > > > > I tested msvc9/ifort/MKL builds of scipy 0.9 rc2 with Python 2.6, > 2.7, > > 3.1 and 3.2 on win32 and win-amd64. Besides the known problems with > the > > MKL builds (, > > , > > ), the following tests > fail > > on win32 only: > > > > Tor the first three failures I can adjust the test precision to make the > > tests pass. For the Powell failure, what are the actual number of > > funccalls? That type of assertion doesn't work too well, should be <= > > 116 instead of ==. > > Ralf > > I ran the test in a loop: most of the times the number of funccalls is > 116, often 117, and sometimes as large as 128. 
Not sure why this test > only fails on win32 and not win-amd64. > > It looks like an actual problem then, and not just a difference between platforms. I saw something like that recently in a Wolfe line search test too, the problem was floating point ==0.0 comparison. Can you open a ticket? Ralf > > > > > ====================================================================== > > FAIL: test_linesearch.TestLineSearch.test_line_search_armijo > > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > > runTest > > self.test(*self.arg) > > File > > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > > 201, in test_line_search_armijo > > assert_equal(fv, f(x + s*p)) > > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > > 313, in assert_equal > > raise AssertionError(msg) > > AssertionError: > > Items are not equal: > > ACTUAL: 1.5675494536393939 > > DESIRED: 1.5675494536393932 > > > > > ====================================================================== > > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe1 > > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > > runTest > > self.test(*self.arg) > > File > > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py", > > line 164, in test_line_search_wolfe1 > > assert_equal(fv, f(x + s*p)) > > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > > 313, in assert_equal > > raise AssertionError(msg) > > AssertionError: > > Items are not equal: > > ACTUAL: 19.185353513927268 > > DESIRED: 19.185353513927272 > > > > > ====================================================================== > > FAIL: test_linesearch.TestLineSearch.test_line_search_wolfe2 > > > 
---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "X:\Python27\lib\site-packages\nose\case.py", line 187, in > > runTest > > self.test(*self.arg) > > File > > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_linesearch.py",line > > 184, in test_line_search_wolfe2 > > assert_equal(fv, f(x + s*p)) > > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > > 313, in assert_equal > > raise AssertionError(msg) > > AssertionError: > > Items are not equal: > > ACTUAL: 19.185353513927272 > > DESIRED: 19.185353513927268 > > > > > ====================================================================== > > FAIL: Powell (direction set) optimization routine > > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > > "X:\Python27\lib\site-packages\scipy\optimize\tests\test_optimize.py", > > line 123, in test_powell > > assert_(self.funccalls == 116, self.funccalls) > > File "X:\Python27\lib\site-packages\numpy\testing\utils.py", line > 34, > > in assert_ > > raise AssertionError(msg) > > AssertionError: 128 > > > > -- > > Christoph > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Wed Feb 2 00:36:33 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 2 Feb 2011 13:36:33 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Wed, Feb 2, 2011 at 1:11 PM, Ralf Gommers wrote: > > > On Wed, Feb 2, 2011 at 12:57 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Tue, Feb 1, 2011 at 9:04 PM, Ralf Gommers > > wrote: >> >>> >>> >>> On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> I think there should be a fix for ndarray also, the problem with type 5 >>>> (int) not being recognized is that it checks for Int32, which on some (all?) >>>> 32 bit platforms is a long (7) rather than an int. I think this is a bug and >>>> it will cause problems with numpy 1.6. >>>> >>> You mean ndimage I guess. I don't really understand your explanation, if >>> that's the case then those ndimage tests should have been failing before, >>> right? >>> >>> If this needs to be fixed in scipy then we need an RC3. I would like to >>> get that out by the 12th or so if possible. >>> >>> >> >> I think it needs to be fixed as it is a bug. The problem is that ndimage >> checks types, but only by Int16, Int32, etc. On 32 bit systems and windows >> Int32 is a long (type# 7) and integers (type# 5), are treated as >> unrecognized types even though they are the same thing. Ndimage should >> accept both types. >> >> I'm still a bit puzzled about why it's only failing after the recent > changes in numpy master. > > Sounds like it's a bug, but it's not a regression and we are at rc2 right > now. Do you (or does someone else) have time to fix it in the next week or > so? The other option is to fix it in 0.9.1, which can come out at the same > time as numpy 1.6.0. 
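A minimal sketch of the type-number distinction Chuck describes, runnable from Python (the two check functions are hypothetical, written only to contrast the fragile check with a robust one):

```python
import numpy as np

# NPY_INT ('i', C int) is type number 5 and NPY_LONG ('l', C long) is
# type number 7. On many 32-bit platforms both describe a signed 32-bit
# integer, yet they remain distinct types.
print(np.dtype('i').num)  # -> 5
print(np.dtype('l').num)  # -> 7

def check_int32_strictly(arr):
    # Fragile: accepts only the type that Int32 maps to on this platform.
    return arr.dtype == np.dtype(np.int32)

def check_any_int32(arr):
    # Robust: accepts any signed 32-bit integer, whether it is spelled
    # as a C int or a C long.
    return arr.dtype.kind == 'i' and arr.dtype.itemsize == 4
```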
> Sorry, I hadn't noticed that there's already a patch attached to http://projects.scipy.org/numpy/ticket/1724 If you could review and commit it, that would be great though. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From dagss at student.matnat.uio.no Wed Feb 2 06:53:51 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 02 Feb 2011 12:53:51 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy Message-ID: <4D4945CF.60500@student.matnat.uio.no> It's a good time to start a discussion on my Fwrap refactor of Fortran wrappers in SciPy. I'd appreciate feedback from anyone who has an opinion on whether the Fwrap refactor should be merged upstream and, if so, how it should be done. My SciPy branch is now feature complete without using f2py (cblas/clapack missing and only numscons build, not distutils -- the latter is definitely coming soon). What is there instead of f2py is Cython files that call the Fortran code directly. The Cython code should have the same API as f2py and be backwards compatible. The Cython files were generated from the f2py pyf files by using Fwrap, but the generated files were subsequently modified (rationale below) -- I've not really been replacing f2py with Fwrap, it is more correct to say that I used Fwrap to replace f2py with Cython. My branch is here: https://github.com/jasonmccampbell/scipy-refactor/tree/fwrap/ (Currently only numscons build is changed, not distutils. Also I didn't check in generated C sources for now.) I'll provide some more detailed information on specific spots/differences once it's clear whether this is accepted upstream and how review will be done. 
Overall: * ABI: Same as f2py (Fortran 77 using the same blatant assumptions that f2py did) * Wrapper code typically looks like this: https://github.com/jasonmccampbell/scipy-refactor/blob/fwrap/scipy/interpolate/dfitpack.pyx In addition there's dfitpack_fc.h, dfitpack_fc.pxd which now expose the raw Fortran ABI for use from C or Cython code. * In the linalg package, there's pyx.in files, which are templates in the Tempita language (http://pythonpaste.org/tempita/). E.g.: https://github.com/jasonmccampbell/scipy-refactor/blob/fwrap/scipy/linalg/fblas.pyx.in * Cython is more verbose, but also more explicit, than f2py. I also think Cython is friendlier but I'm obviously biased. Two examples to contrast the two: i) "check(m > k)" in f2py code is done like this in Cython: if not (m > k): raise ValueError('Condition on arguments not satisfied: m > k') So more verbose, but OTOH it is now trivial to modify the code to provide a custom exception message, something impossible in f2py. ii) Reordering the arguments in Cython is done by simply reordering the arguments in the wrapper function. And custom code is simply inserted, writing it in Cython. Whereas in f2py you get things like this: callstatement {int i=2*kl+ku+1;(*f2py_func)(&n,&kl,&ku,&nrhs,ab,&i,piv,b,&n,&info);for(i=0;i References: <4D4945CF.60500@student.matnat.uio.no> Message-ID: <4D49469B.4050806@student.matnat.uio.no> On 02/02/2011 12:53 PM, Dag Sverre Seljebotn wrote: > It's a good time to start a discussion on my Fwrap refactor of Fortran > wrappers in SciPy. I'd appreciate feedback from anyone who has an > opinion on whether the Fwrap refactor should be merged upstream and, if > so, how it should be done. > > My SciPy branch is now feature complete without using f2py > (cblas/clapack missing and only numscons build, not distutils -- the > latter is definitely coming soon). What is there instead of f2py is > Cython files that call the Fortran code directly. 
The Cython code should > have the same API as f2py and be backwards compatible. The Cython files > were generated from the f2py pyf files by using Fwrap, but the generated > files were subsequently modified (rationale below) -- I've not really > been replacing f2py with Fwrap, it is more correct to say that I used > Fwrap to replace f2py with Cython. > > My branch is here: > > https://github.com/jasonmccampbell/scipy-refactor/tree/fwrap/ > > (Currently only numscons build is changed, not distutils. Also I didn't > check in generated C sources for now.) > > I'll provide some more detailed information on specific > spots/differences once it's clear whether this is accepted upstream and > how review will be done. Overall: > > * ABI: Same as f2py (Fortran 77 using the same blatant assumptions > that f2py did) > > * Wrapper code typically looks like this: > > https://github.com/jasonmccampbell/scipy-refactor/blob/fwrap/scipy/interpolate/dfitpack.pyx > > In addition there's dfitpack_fc.h, dfitpack_fc.pxd which now expose the > raw Fortran ABI for use from C or Cython code. > > * In the linalg package, there's pyx.in files, which are templates in > the Tempita language (http://pythonpaste.org/tempita/). E.g.: > > https://github.com/jasonmccampbell/scipy-refactor/blob/fwrap/scipy/linalg/fblas.pyx.in > > * Cython is more verbose, but also more explicit, than f2py. I also > think Cython is friendlier but I'm obviously biased. Two examples to > contrast the two: > > i) "check(m> k)" in f2py code is done like this in Cython: > > if not (m> k): > raise ValueError('Condition on arguments not satisfied: m> k') > > So more verbose, but OTOH it is now trivial to modify the code to > provide a custom exception message, something impossible in f2py. > > ii) Reordering the arguments in Cython is done by simply reordering the > arguments in the wrapper function. And custom code is simply inserted, > writing it in Cython. 
Whereas in f2py you get things like this: > > callstatement {int > i=2*kl+ku+1;(*f2py_func)(&n,&kl,&ku,&nrhs,ab,&i,piv,b,&n,&info);for(i=0;i callprotoargument > int*,int*,int*,int*,float*,int*,int*,float*,int*,int* > > So, a (highly biased) summary: f2py code is much briefer for simple > things, but can get very hairy once one needs to do something > non-trivial. And the wrappers in SciPy often are non-trivial, there was > an incredible amount of "secret" tricks employed in the pyf files to > customize the wrappers that are now spelled out explicitly in Cython code. > > However, the resulting code is more verbose, and perhaps more difficult > to quickly scan. > Forgot to say one thing: The fwrap branch does NOT depend on other parts of the refactoring effort; in particular it doesn't need NumPy 2.0. So it can be reviewed and merged independent of the rest...probably best time is right after a release to make sure it gets some more testing... Dag Sverre From pav at iki.fi Wed Feb 2 07:53:11 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 2 Feb 2011 12:53:11 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: Wed, 02 Feb 2011 13:17:03 +0800, Ralf Gommers wrote: [clip: test_optimize.TestOptimize.test_powell] > > I ran the test in a loop: most of the times the number of funccalls is > > 116, often 117, and sometimes as large as 128. Not sure why this test > > only fails on win32 and not win-amd64. > > It looks like an actual problem then, and not just a difference between > platforms. I saw something like that recently in a Wolfe line search > test too, the problem was floating point == 0.0 comparison. > > Can you open a ticket? The implementation of fmin_powell has not changed since Scipy 0.7. I'd imagine that the exact iteration count might be somewhat sensitive to numerical noise (due to the endgame of optimization). So, probably, it is mostly an issue with the test. 
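Pauli's point that iteration counts drift with numerical noise is easy to demonstrate: floating-point addition is not associative, so anything that changes evaluation order (threading, blocking, data alignment) can flip the last bits of a result, and an optimizer that branches on that value may then take a different number of steps. A toy illustration:

```python
# Regrouping the same three terms changes the rounding, and hence the
# low bits of the sum.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False

# A test comparing such results with exact equality passes or fails
# depending on the evaluation order it happens to get.
```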
When I wrote those tests, I hoped (naively) that FP arithmetic would be reproducible across machines, which turns out not to be true. Now it seems that it is not exactly reproducible even across multiple iterations on the same machine. Frankly, I do not understand where the random variation comes from --- the test does not use any random numbers etc. The only explanation I can imagine is that something (MKL?) on Windows platforms does not perform exactly reproducible floating-point arithmetic. @Christoph: Have you been able to reproduce random variation in results from numpy.dot? If yes, do the same variations occur in results from numpy.core.multiarray.dot (which never uses BLAS)? I would be much happier if we could point a finger towards MKL as the culprit for non-reproducible FP results. Pauli From pav at iki.fi Wed Feb 2 09:31:40 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 2 Feb 2011 14:31:40 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: Wed, 02 Feb 2011 12:53:11 +0000, Pauli Virtanen wrote: [clip] > @Christoph: Have you been able to reproduce random variation in results > from numpy.dot? If yes, do the same variations occur in results from > numpy.core.multiarray.dot (which never uses BLAS)? I would be much > happier if we could point a finger towards MKL as the culprit for > non-reproducible FP results. OK, seems to be MKL: http://software.intel.com/sites/products/documentation/hpc/mkl/lin/MKL_UG_coding_tips/Aligning_Data_for_Numerical_Stability.htm I wouldn't be too surprised if data alignment and threading also had a similar effect with ATLAS. The correct solution is probably just to bump the test tolerances so that they pass with MKL.
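The mechanism behind this kind of non-reproducibility is ordinary floating-point non-associativity: once data alignment or threading changes the order in which a BLAS accumulates its partial sums, the last bits of a dot product change. A pure-Python illustration:

```python
# Floating-point addition is not associative, so the grouping
# (i.e. summation order) changes the rounding of the result.
x = (0.1 + 0.2) + 0.3
y = 0.1 + (0.2 + 0.3)
print(x == y)  # False -- the two groupings differ in the last bit
```

No randomness is involved; the same grouping always gives the same bits, which is why the variation only appears when something (alignment, thread count) selects a different accumulation order.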
-- Pauli Virtanen From josef.pktd at gmail.com Wed Feb 2 09:43:04 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Feb 2011 09:43:04 -0500 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: On Wed, Feb 2, 2011 at 9:31 AM, Pauli Virtanen wrote: > Wed, 02 Feb 2011 12:53:11 +0000, Pauli Virtanen wrote: > [clip] >> @Christoph: Have you been able to reproduce random variation in results >> from numpy.dot? If yes, do the same variations occur in results from >> numpy.core.multiarray.dot (which never uses BLAS)? I would be much >> happier if we could point a finger towards MKL as the culprit for >> non-reproducible FP results. > > OK, seems to be MKL > > http://software.intel.com/sites/products/documentation/hpc/mkl/lin/MKL_UG_coding_tips/Aligning_Data_for_Numerical_Stability.htm > > I wouldn't be too surprised if data alignment and threading also had a > similar effect with ATLAS. From this it looks like the problem is more general, but until now we have seen it only on Win32. > > The correct solution is probably just to bump the test tolerances so that > they pass with MKL. It's good to know it's not anything "serious". Is there a good place in the docs to warn about this? Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Wed Feb 2 09:50:12 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 2 Feb 2011 14:50:12 +0000 (UTC) Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: Wed, 02 Feb 2011 09:43:04 -0500, josef.pktd wrote: [clip] > From this it looks like the problem is more general, but until now we > have seen it only on Win32.
If the matter is data alignment, perhaps this is because of memory allocators or the BLAS library functioning differently on different platforms. To get a definite answer if the issue is data alignment, one can perhaps check if the different results are correlated with

    input_array.__array_interface__['data'][0] % 16

(assuming no copies are made inside dot()). -- Pauli Virtanen From cgohlke at uci.edu Wed Feb 2 12:04:54 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 02 Feb 2011 09:04:54 -0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: <4D498EB6.60301@uci.edu> On 2/2/2011 6:50 AM, Pauli Virtanen wrote: > Wed, 02 Feb 2011 09:43:04 -0500, josef.pktd wrote: > [clip] >> From this it looks like the problem is more general, but until now we >> have seen it only on Win32. > > If the matter is data alignment, perhaps this is because of memory > allocators or the BLAS library functioning differently on different > platforms. > > To get a definite answer if the issue is data alignment, one can perhaps > check if the different results are correlated with > > input_array.__array_interface__['data'][0] % 16 > > (assuming no copies are made inside dot()). > Yes, the 'K' array used in the test is not always aligned optimally. If K is aligned, the number of iterations is as expected (116) and the test passes. If K is not aligned, the number of iterations is > 116. Using numpy.core.multiarray.dot instead of numpy.dot in the func() function makes no difference. The 'params' array returned by optimize.fmin_powell() varies when K is unaligned, but it always passes the 'abs(func(params) - func(solution)) < 1e-6' test, so it should be safe to increase the test tolerance (self.funccalls <= 128).
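The aligned/unaligned dichotomy Christoph observes can be provoked deliberately: place a copy of the array at a chosen byte offset inside a raw buffer, then re-check the alignment expression above. The helper names below are ours -- a sketch, not the actual test code:

```python
import numpy as np

def alignment(a, n=16):
    # Byte offset of the array's data pointer from an n-byte boundary.
    return a.__array_interface__['data'][0] % n

def offset_copy(a, offset=8, n=16):
    # Copy `a` so that its data pointer sits `offset` bytes past an
    # n-byte boundary, forcing the unaligned BLAS code path.
    buf = np.empty(a.nbytes + n, dtype=np.uint8)
    start = (offset - buf.__array_interface__['data'][0]) % n
    view = buf[start:start + a.nbytes].view(a.dtype).reshape(a.shape)
    view[:] = a
    return view

x = np.arange(64, dtype=np.float64)
y = offset_copy(x, offset=8)
print(alignment(y))                             # 8
print(np.allclose(np.dot(x, x), np.dot(y, y)))  # True, though not necessarily bit-identical
```

With such a helper one can run the same dot product on aligned and misaligned copies of identical data and see whether the low bits of the result track the alignment.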
Christoph From josef.pktd at gmail.com Wed Feb 2 12:42:11 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Feb 2011 12:42:11 -0500 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: <4D473E1C.3020503@uci.edu> <4D48DEE9.8020901@uci.edu> Message-ID: On Wed, Feb 2, 2011 at 9:50 AM, Pauli Virtanen wrote: > Wed, 02 Feb 2011 09:43:04 -0500, josef.pktd wrote: > [clip] >> From this it looks like the problem is more general, but until now we >> have seen it only on Win32. > > If the matter is data alignment, perhaps this is because of memory > allocators or the BLAS library functioning differently on different > platforms. > > To get a definite answer if the issue is data alignment, one can perhaps > check if the different results are correlated with > >     input_array.__array_interface__['data'][0] % 16 > > (assuming no copies are made inside dot()). Alignment alternates; this is the same script as on the numpy mailing list (numpy.dot), with an additional print:

    print (y[0], m.__array_interface__['data'][0] % 16, x.__array_interface__['data'][0] % 16)

using lapack

    (array([ 5.00486230e-06]), 8, 0)
    (array([ 5.00486408e-06]), 0, 0)
    (array([ 5.00486230e-06]), 8, 0)
    (array([ 5.00486408e-06]), 0, 0)
    (array([ 5.00486230e-06]), 8, 0)
    (array([ 5.00486408e-06]), 0, 0)
    (array([ 5.00486230e-06]), 8, 0)
    (array([ 5.00486408e-06]), 0, 0)
    (array([ 5.00486230e-06]), 8, 0)
    (array([ 5.00486408e-06]), 0, 0)

In some runs the alignment doesn't change between iterations:

using lapack

    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)
    (array([ 5.00486230e-06]), 8, 8)

Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org >
http://mail.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Wed Feb 2 12:57:43 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 2 Feb 2011 10:57:43 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Tue, Feb 1, 2011 at 10:36 PM, Ralf Gommers wrote: > > > On Wed, Feb 2, 2011 at 1:11 PM, Ralf Gommers wrote: > >> >> >> On Wed, Feb 2, 2011 at 12:57 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Tue, Feb 1, 2011 at 9:04 PM, Ralf Gommers < >>> ralf.gommers at googlemail.com> wrote: >>> >>>> >>>> >>>> On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> I think there should be a fix for ndarray also, the problem with type 5 >>>>> (int) not being recognized is that it checks for Int32, which on some (all?) >>>>> 32 bit platforms is a long (7) rather than an int. I think this is a bug and >>>>> it will cause problems with numpy 1.6. >>>>> >>>> You mean ndimage I guess. I don't really understand your explanation, if >>>> that's the case then those ndimage tests should have been failing before, >>>> right? >>>> >>>> If this needs to be fixed in scipy then we need an RC3. I would like to >>>> get that out by the 12th or so if possible. >>>> >>>> >>> >>> I think it needs to be fixed as it is a bug. The problem is that ndimage >>> checks types, but only by Int16, Int32, etc. On 32 bit systems and windows >>> Int32 is a long (type# 7) and integers (type# 5), are treated as >>> unrecognized types even though they are the same thing. Ndimage should >>> accept both types. >>> >>> I'm still a bit puzzled about why it's only failing after the recent >> changes in numpy master. >> >> Sounds like it's a bug, but it's not a regression and we are at rc2 right >> now. Do you (or does someone else) have time to fix it in the next week or >> so? 
The other option is to fix it in 0.9.1, which can come out at the same >> time as numpy 1.6.0. >> > > Sorry, I hadn't noticed that there's already a patch attached to > http://projects.scipy.org/numpy/ticket/1724 > If you could review and commit it, that would be great though. > > I've gone ahead and committed it to the trunk in r7120, the decision to backport it or not I'll leave to you. It could probably be made a bit more bombproof with respect to integers but I think it covers all the platforms I'm familiar with. We can make things more determinate for numpy 2.0. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Feb 2 13:17:00 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 2 Feb 2011 11:17:00 -0700 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 2 In-Reply-To: References: Message-ID: On Wed, Feb 2, 2011 at 10:57 AM, Charles R Harris wrote: > > > On Tue, Feb 1, 2011 at 10:36 PM, Ralf Gommers > wrote: > >> >> >> On Wed, Feb 2, 2011 at 1:11 PM, Ralf Gommers > > wrote: >> >>> >>> >>> On Wed, Feb 2, 2011 at 12:57 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Tue, Feb 1, 2011 at 9:04 PM, Ralf Gommers < >>>> ralf.gommers at googlemail.com> wrote: >>>> >>>>> >>>>> >>>>> On Mon, Jan 31, 2011 at 11:01 PM, Charles R Harris < >>>>> charlesr.harris at gmail.com> wrote: >>>>> >>>>>> >>>>>> I think there should be a fix for ndarray also, the problem with type >>>>>> 5 (int) not being recognized is that it checks for Int32, which on some >>>>>> (all?) 32 bit platforms is a long (7) rather than an int. I think this is a >>>>>> bug and it will cause problems with numpy 1.6. >>>>>> >>>>> You mean ndimage I guess. I don't really understand your explanation, >>>>> if that's the case then those ndimage tests should have been failing before, >>>>> right? >>>>> >>>>> If this needs to be fixed in scipy then we need an RC3. 
I would like to >>>>> get that out by the 12th or so if possible. >>>>> >>>>> >>>> >>>> I think it needs to be fixed as it is a bug. The problem is that ndimage >>>> checks types, but only by Int16, Int32, etc. On 32 bit systems and windows >>>> Int32 is a long (type# 7) and integers (type# 5), are treated as >>>> unrecognized types even though they are the same thing. Ndimage should >>>> accept both types. >>>> >>>> I'm still a bit puzzled about why it's only failing after the recent >>> changes in numpy master. >>> >>> Sounds like it's a bug, but it's not a regression and we are at rc2 right >>> now. Do you (or does someone else) have time to fix it in the next week or >>> so? The other option is to fix it in 0.9.1, which can come out at the same >>> time as numpy 1.6.0. >>> >> >> Sorry, I hadn't noticed that there's already a patch attached to >> http://projects.scipy.org/numpy/ticket/1724 >> If you could review and commit it, that would be great though. >> >> > I've gone ahead and committed it to the trunk in r7120, the decision to > backport it or not I'll leave to you. It could probably be made a bit more > bombproof with respect to integers but I think it covers all the platforms > I'm familiar with. We can make things more determinate for numpy 2.0. > > In particular, the patch should probably be tested on 64 bit SPARC with the sun compilers and maybe on ALPHA and HPPA also. Anyone? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.kremer.dk at gmail.com Sat Feb 5 06:46:03 2011 From: david.kremer.dk at gmail.com (David Kremer) Date: Sat, 5 Feb 2011 12:46:03 +0100 Subject: [SciPy-Dev] Starting a scikit for NUFFT In-Reply-To: References: <201101291935.35724.david.kremer.dk@gmail.com> <201101301831.34529.david.kremer.dk@gmail.com> Message-ID: <201102051246.03557.david.kremer.dk@gmail.com> I would also like to know if you know a good way and a good example to link a C library to scipy. 
I saw that this library is very complete and well documented: http://www-user.tu-chemnitz.de/~potts/nfft/doc.php So I could try to link it to scipy instead of the previously cited Fortran library. Thank you for your feedback. Greetings, David Kremer From pav at iki.fi Sat Feb 5 07:47:09 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 5 Feb 2011 12:47:09 +0000 (UTC) Subject: [SciPy-Dev] Starting a scikit for NUFFT References: <201101291935.35724.david.kremer.dk@gmail.com> <201101301831.34529.david.kremer.dk@gmail.com> <201102051246.03557.david.kremer.dk@gmail.com> Message-ID: On Sat, 05 Feb 2011 12:46:03 +0100, David Kremer wrote: > I would also like to know if you know a good way and a good example to > link a C library to scipy. > > I saw that this library is very complete and well documented: > http://www-user.tu-chemnitz.de/~potts/nfft/doc.php Note that that library is also GPL-ed. > So I could try to link it to scipy instead of the previously cited Fortran > library. You can still use F2Py, provided the C functions provided by that library are callable from Fortran. Or, you can use Cython. See http://www.cython.org/ for details. -- Pauli Virtanen From warren.weckesser at enthought.com Sat Feb 5 17:03:04 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 5 Feb 2011 16:03:04 -0600 Subject: [SciPy-Dev] 'Annotate' broken in the web source code browser. Message-ID: When I click on 'Annotate' while viewing source code on the web (e.g. at http://projects.scipy.org/scipy/browser/trunk/scipy/sparse/dok.py), I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
------------------------------ Apache/2.2.3 (CentOS) Server at projects.scipy.org Port 80 This has been occurring for at least several months. Does 'Annotate' actually work for anyone? Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabian.pedregosa at inria.fr Tue Feb 8 04:52:55 2011 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Tue, 8 Feb 2011 10:52:55 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> Message-ID: On Wed, Feb 2, 2011 at 12:54 PM, Dag Sverre Seljebotn wrote: > It's a good time to start a discussion on my Fwrap refactor of Fortran > wrappers in SciPy. I'd appreciate feedback from anyone who has an > opinion on whether the Fwrap refactor should be merged upstream and, if > so, how it should be done. > > My SciPy branch is now feature complete without using f2py > (cblas/clapack missing and only numscons build, not distutils -- the > latter is definitely coming soon). What is there instead of f2py is > Cython files that call the Fortran code directly. The Cython code should > have the same API as f2py and be backwards compatible. The Cython files > were generated from the f2py pyf files by using Fwrap, but the generated > files were subsequently modified (rationale below) -- I've not really > been replacing f2py with Fwrap, it is more correct to say that I used > Fwrap to replace f2py with Cython. Although not a scipy dev, I've been doing small fixes lately on scipy.linalg and find this stuff really great, as it makes binding missing LAPACK methods a lot more familiar than f2py. Really looking forward to having this merged. Fabian.
From dagss at student.matnat.uio.no Tue Feb 8 04:56:46 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Tue, 08 Feb 2011 10:56:46 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> Message-ID: <4D51135E.6010003@student.matnat.uio.no> On 02/08/2011 10:52 AM, Fabian Pedregosa wrote: > On Wed, Feb 2, 2011 at 12:54 PM, Dag Sverre Seljebotn > wrote: > >> It's a good time to start a discussion on my Fwrap refactor of Fortran >> wrappers in SciPy. I'd appreciate feedback from anyone who has an >> opinion on whether the Fwrap refactor should be merged upstream and, if >> so, how it should be done. >> >> My SciPy branch is now feature complete without using f2py >> (cblas/clapack missing and only numscons build, not distutils -- the >> latter is definitely coming soon). What is there instead of f2py is >> Cython files that call the Fortran code directly. The Cython code should >> have the same API as f2py and be backwards compatible. The Cython files >> were generated from the f2py pyf files by using Fwrap, but the generated >> files were subsequently modified (rationale below) -- I've not really >> been replacing f2py with Fwrap, it is more correct to say that I used >> Fwrap to replace f2py with Cython. >> > Although not a scipy dev, I've been doing small fixes lately on > scipy.linalg and find this stuff really great, as it makes binding > missing LAPACK methods a lot more familiar than f2py. Really looking > foward to have this merged. > Good to hear. I don't have push rights or a track record with the project so I can't really make anything happen on my own accord. I'll meet Pauli in a Cython workshop the first week of April, so hopefully we can talk about it then if nothing happens before. 
Dag Sverre From pav at iki.fi Tue Feb 8 06:09:20 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 8 Feb 2011 11:09:20 +0000 (UTC) Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> Message-ID: Tue, 08 Feb 2011 10:56:46 +0100, Dag Sverre Seljebotn wrote: [clip] > I don't have push rights or a track record with the project so I can't > really make anything happen on my own accord. I'll meet Pauli in a > Cython workshop the first week of April, so hopefully we can talk about > it then if nothing happens before. Sorry, I didn't yet find the time to look deeply into this. Anyway, two quick questions (I'm not yet familiar with Fwrap): 1) Have the *.pyx files generated by Fwrap been manually modified? Or can Fwrap re-generate them directly from the *.f sources? If manual changes are necessary, can they be cleanly separated from the automatic boilerplate? 2) Is Fwrap a Cythoning-time dependency? Pauli From dagss at student.matnat.uio.no Tue Feb 8 15:48:28 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Tue, 08 Feb 2011 21:48:28 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> Message-ID: <4D51AC1C.2060907@student.matnat.uio.no> On 02/08/2011 12:09 PM, Pauli Virtanen wrote: > Tue, 08 Feb 2011 10:56:46 +0100, Dag Sverre Seljebotn wrote: > [clip] > >> I don't have push rights or a track record with the project so I can't >> really make anything happen on my own accord. I'll meet Pauli in a >> Cython workshop the first week of April, so hopefully we can talk about >> it then if nothing happens before. >> > Sorry, I didn't yet find the time to look deeply into this. 
> Anyway, two quick questions (I'm not yet familiar with Fwrap): > > 1) Have the *.pyx files generated by Fwrap been manually modified? > Or can Fwrap re-generate them directly from the *.f sources? > It surely can't do so from .f sources (the .pyf files contain *tons* of additional specifications). It does get it 80% there from the .pyf files, and then manual modification is needed. > If manual changes are necessary, can they be cleanly separated > from the automatic boilerplate? > Not code-wise, but history-wise. That is, I have several times changed the code generation of Fwrap, regenerated the boilerplate, and put the manual changes back in (using git). The way I've done it is that in each .pyx file there are two hashes:

    # Fwrap: self-sha1 0345345bbadc0324...
    # Fwrap: pyf-sha1 dc03240345345bba...

(The first one is related to .f parsing, the second one to a second stage where .pyf changes are added.) Then, git-blame can be used to get the right commit, create a branch from it, regenerate the wrapper, and then you can merge back cleanly. The fwrap command line utility will do some hand-holding with git for you for this purpose, so you don't need to dig for those hashes yourself. > 2) Is Fwrap a Cythoning-time dependency? > No. Dag Sverre From pav at iki.fi Tue Feb 8 16:58:08 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 8 Feb 2011 21:58:08 +0000 (UTC) Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> <4D51AC1C.2060907@student.matnat.uio.no> Message-ID: On Tue, 08 Feb 2011 21:48:28 +0100, Dag Sverre Seljebotn wrote: > On 02/08/2011 12:09 PM, Pauli Virtanen wrote: [clip] > > If manual changes are necessary, can they be cleanly separated from > > the automatic boilerplate? > > > > Not code-wise, but history-wise.
> > That is, I have several times changed the code generation of Fwrap, > regenerated the boilerplate, and put the manual changes back in (using > git). [clip] Clever, that's doable with git. However, it's not very pretty, and I have a nasty feeling about mixing autogenerated code with manual code in the long term. *** It however looks like a viable plan [A] could be to replace all F2Py wrappers in Scipy with more or less manually written Cython wrappers. The wrapper code doesn't look *too* bad, and is in any case almost write-once. This would require factoring out common parts of the Fwrap code pieces (e.g. all the fw_* functions) to a reusable-within-Scipy Cython library. Some other common tasks for such a library: helpers for declaring ufuncs etc. Another plan [B] could be to try to improve Fwrap until it can produce compatible results directly from the .pyf sources, or some other mostly declarative definition. (This would probably be a nice boon for Fwrap itself, too.) Here, one would need to think to which degree this is possible at all; you probably have a pretty good understanding of this. Continuing with stupid questions (although this one wasn't answered in your initial mail :) -- What are those *_fc.f files for? It seems their purpose is to deal with assumed-shape arrays. However, unlike F90, AFAIK for F77 one should just assume that double precision :: a(10,10) is compatible with a(10*10). So as far as I see, all *_fc.f could be dropped. It seems this also would strip out all of the fw_copyshape() stuff in the pyx files, useful for plan [A].
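The F77 compatibility point here -- that a `double precision a(10,10)` and an `a(10*10)` describe the same contiguous column-major block -- is easy to confirm from the NumPy side, since Fortran-ordered arrays use the same storage convention:

```python
import numpy as np

a = np.zeros((10, 10), order='F')   # column-major storage, as in Fortran
flat = a.reshape(100, order='F')    # rank-1 view of the very same memory
flat[23] = 1.0                      # column-major offset 23 = 3 + 10*2
print(a[3, 2])                      # 1.0
print(np.shares_memory(a, flat))    # True
```

Because the rank-2 and rank-1 descriptions share one buffer, no copy-in/copy-out shim is needed to pass such an array to an F77 routine declared with either shape.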
Pauli From pav at iki.fi Tue Feb 8 18:20:08 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 8 Feb 2011 23:20:08 +0000 (UTC) Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> <4D51AC1C.2060907@student.matnat.uio.no> Message-ID: On Tue, 08 Feb 2011 21:58:08 +0000, Pauli Virtanen wrote: > On Tue, 08 Feb 2011 21:48:28 +0100, Dag Sverre Seljebotn wrote: [clip] >> Not code-wise, but history-wise. >> >> That is, I have several times changed the code generation of Fwrap, >> regenerated the boilerplate, and put the manual changes back in (using >> git). > [clip] > > Clever, that's doable with git. However, it's not very pretty, and I > have a nasty feeling about mixing autogenerated code with manual code in > the long term. Ok, some backing arguments for the gut feeling: the approach moves a part of the logical content of the code out of the files, to the version history. The top-level meaning of the code cannot be understood by reading it, as boilerplate cannot be distinguished from customizations. Instead, one needs to use "git blame" or some such tool to spot points that contain more information than the *.pyf files. Using some sort of "inline patch" approach would put the information back. Before a manual edit:

    some
    automatically
    generated
    code

and after:

    some
    #< automatically
    #< generated
    manually edited
    #> code

or:

    some
    #<
    manually edited
    #>
    code

A suitably smart code generator could in principle extract and re-apply such inline patches automatically, but the main point IMHO would be to preserve readability.
-- Pauli Virtanen From dagss at student.matnat.uio.no Wed Feb 9 04:11:32 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 09 Feb 2011 10:11:32 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> <4D51AC1C.2060907@student.matnat.uio.no> Message-ID: <4D525A44.4060805@student.matnat.uio.no> On 02/08/2011 10:58 PM, Pauli Virtanen wrote: > On Tue, 08 Feb 2011 21:48:28 +0100, Dag Sverre Seljebotn wrote: > >> On 02/08/2011 12:09 PM, Pauli Virtanen wrote: >> > [clip] > >>> If manual changes are necessary, can they be cleanly separated from >>> the automatic boilerplate? >>> >>> >> Not code-wise, but history-wise. >> >> That is, I have several times changed the code generation of Fwrap, >> regenerated the boilerplate, and put the manual changes back in (using >> git). >> > [clip] > First: Are you sure you are looking at the right branch? There should be no *_fc.f files now (if so it is a bug), I did get rid of them. I'm pushing to Jason's account. I guess I should remove scipy-refactor on my account... https://github.com/jasonmccampbell/scipy-refactor/tree/fwrap > Clever, that's doable with git. However, it's not very pretty, and I have > a nasty feeling about mixing autogenerated code with manual code in the > long term. > The thing is, this isn't really different from f2py. The pyf files are *also* first generated automatically from Fortran files, and then manually modified. So it is a change of language (and one may argue that pyf files are shorter and more declarative -- but also more full of ugly hacks to get around limitations). But it is not really a change of approach. To be clear, I was hoping to rip out/remove pyf files eventually, and consider the Cython wrappers the "final" wrappers. If the Fortran code does change, one can either rewrite that part of the wrapper, or use git. 
Just like f2py. > It however looks like a viable plan [A] could be to replace all F2Py > wrappers in Scipy with more or less manually written Cython wrappers. The > wrapper code doesn't look *too* bad, and is in any case almost > write-once. This would require factoring out common parts of the Fwrap code > pieces (e.g. all the fw_* functions) to a reusable-within-Scipy Cython > library. Some other common tasks for such a library: helpers for > declaring ufuncs etc. That's a good point. To be honest, I wasn't sure where to put such cross-package code, since e.g. the f2py build does copy the shared sources into every package in the build, and since there have been pushes now and then for breaking up the packaging. Creating such a shared "scipy.core" or "scipy.utils" seemed orthogonal to what I was doing *shrug*. > Another plan [B] could be to try to improve Fwrap until it can produce > compatible results directly from the .pyf sources, or some other mostly > declarative definition. (This would probably be a nice boon for Fwrap > itself, too.) Here, one would need to think to which degree this is > possible at all; you probably have a pretty good understanding of this. Well, the pyf sources as-is are a no-go for the .NET port, as they contain some CPython-specific code. Augmenting the pyf files with inline Cython code didn't seem pretty. But even if we did, getting the amount of manual changes down to 0% from pyf files is realistically not going to happen, as I think nobody would be interested in doing it. Let's exclude that as a possibility. Now, if one only considers CPython, you could stick with f2py. I'm not saying this merge *has* to happen (as I said initially, I can see a case either way). The thing is, pyf is declarative, but severely limited.
This leads to hundreds of ugly hacks (many of them invisible to the untrained eye -- such as casting a real Fortran-side to a complex Cython-side by passing the address of a complex variable as a real pointer to Fortran), many of them intertwined with the exact f2py implementation. So I guess the lesson I took from pyf files is that such a simple declarative language is *not* powerful enough for wrapping Fortran source (or at least, people weren't satisfied with 1:1 wrappers, but added lots of customizations to the wrappers -- you *could* do "ipiv += 1" for the PLU factorization Python-side, but that's not what people did). And in light of that, I kind of looked at manually modifying Cython wrappers as a feature, not a mis-feature. (Long-term, I'd much rather spend my time improving Cython so that the Cython wrappers can contain less boilerplate, just because of more things being built into Cython. After all, if you wrap C code directly in Cython, there's much of the same boilerplate, and it should be easier.) I could invent a totally new declarative language for Fwrap (or a dialect of pyf that fixed its problems). But that would have the same problems that pyf has currently (see Fabian's post -- dropping pyf is one less language to learn). BTW, if Fortran sources change, you should be able to do

    fwrap update mywrapper.pyx

and it will parse the configuration stored in there, reparse the Fortran file, do a git blame, do proper branching, and tell you to merge. > Continuing with stupid questions (although this one wasn't answered in > your initial mail :) -- > > What are those *_fc.f files for? It seems their purpose is to deal > with assumed-shape arrays. However, unlike F90, AFAIK for F77 one should > just assume that double precision :: a(10,10) is compatible with > a(10*10). So as far as I see, all *_fc.f could be dropped. > Yes, you got the wrong branch, sorry about that.
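The "ipiv += 1" remark refers to the off-by-one between LAPACK's 1-based pivot indices and Python's 0-based ones -- one of the fix-ups the pyf files hide inside the wrapper. As a hypothetical Python-side sketch (the function name is ours), turning a getrf-style pivot vector, already shifted to 0-based, into an explicit row permutation looks like this:

```python
def pivots_to_permutation(ipiv):
    # ipiv[i] = j records that rows i and j were swapped at step i
    # of the LU factorization (indices already shifted to 0-based).
    perm = list(range(len(ipiv)))
    for i, j in enumerate(ipiv):
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(pivots_to_permutation([1, 2, 2]))  # [1, 2, 0]
```

Whether the index shift happens in the generated wrapper or in Python afterwards is exactly the kind of customization the pyf files bury, which is the point being made above.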
Dag Sverre From dagss at student.matnat.uio.no Wed Feb 9 04:53:40 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 09 Feb 2011 10:53:40 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: <4D525A44.4060805@student.matnat.uio.no> References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> <4D51AC1C.2060907@student.matnat.uio.no> <4D525A44.4060805@student.matnat.uio.no> Message-ID: <4D526424.7050504@student.matnat.uio.no> On 02/09/2011 10:11 AM, Dag Sverre Seljebotn wrote: > On 02/08/2011 10:58 PM, Pauli Virtanen wrote: > >> On Tue, 08 Feb 2011 21:48:28 +0100, Dag Sverre Seljebotn wrote: >> >> >>> On 02/08/2011 12:09 PM, Pauli Virtanen wrote: >>> >>> >> [clip] >> >> >>>> If manual changes are necessary, can they be cleanly separated from >>>> the automatic boilerplate? >>>> >>>> >>>> >>> Not code-wise, but history-wise. >>> >>> That is, I have several times changed the code generation of Fwrap, >>> regenerated the boilerplate, and put the manual changes back in (using >>> git). >>> >>> >> [clip] >> >> > First: Are you sure you are looking at the right branch? There should be > no *_fc.f files now (if so it is a bug), I did get rid of them. > > I'm pushing to Jason's account. I guess I should remove scipy-refactor > on my account... > > https://github.com/jasonmccampbell/scipy-refactor/tree/fwrap > Ah, seems the problem is you looked at the 'refactor' branch. The 'fwrap' branch is separate and does not depend on the refactor. ('fwrap' is routinely merged into 'refactor', but something has now gone wrong in 'refactor', will probably be fixed soon.)
Dag Sverre From P.Schellart at astro.ru.nl Wed Feb 9 09:01:16 2011 From: P.Schellart at astro.ru.nl (Pim Schellart) Date: Wed, 9 Feb 2011 15:01:16 +0100 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module Message-ID: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> Dear SciPy developers, I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). After some iterations of review, recoding, adding documentation and adding unit tests the code (according to Ralf Gommers) is ready to go in. Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. Now the time has come to discuss where to put it and Ralf has suggested to have this discussion here: "I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch:https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good, whether it really should go into signal should be discussed on the mailing list (but seemed like the logical place for it). ... The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows to add similar methods later. " I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages. 1. It is a short name without underscores (personal preference) 2. It is a category name that can comprise several similar algorithms as desired. 3. It is immediately obvious from this name that this is different than FFT and therefore people will not be confused looking for FFT functions in this sub-module. 4. 
The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note this may help people who want to find more information, or people browsing Wikipedia who are looking for an implementation). Please let me know what you think. Kind regards, Pim Schellart From gael.varoquaux at normalesup.org Wed Feb 9 09:04:02 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 9 Feb 2011 15:04:02 +0100 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> References: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> Message-ID: <20110209140402.GB31177@phare.normalesup.org> LSSA is incomprehensible to me. Acronyms are bad for communication, IMHO. I suggest 'spectral'. Gael PS: and Aubergine for the bikeshed, of course. On Wed, Feb 09, 2011 at 03:01:16PM +0100, Pim Schellart wrote: > Dear SciPy developers, > I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). > After some iterations of review, recoding, adding documentation and adding unit tests the code (according to Ralf Gommers) is ready to go in. > Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. > Now the time has come to discuss where to put it and Ralf has suggested to have this discussion here: > "I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch:https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good, whether it really should go into signal should be discussed on the mailing list (but seemed like the logical place for it). > ... > The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows to add similar methods later. 
> " > I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages. > 1. It is a short name without underscores (personal preference) > 2. It is a category name that can comprise several similar algorithms as desired. > 3. It is immediately obvious from this name that this is different than FFT and therefore people will not be confused looking for FFT functions in this sub-module. > 4. The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note this may help for people that want to find more information, or for people browsing wikipedia that are looking for an implementation). > Please let me know what you think. > Kind regards, > Pim Schellart > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -- Gael Varoquaux Research Fellow, INSERM Associate researcher, INRIA Laboratoire de Neuro-Imagerie Assistee par Ordinateur NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-78-35 Mobile: ++ 33-6-28-25-64-62 http://gael-varoquaux.info From bsouthey at gmail.com Wed Feb 9 10:19:49 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 09 Feb 2011 09:19:49 -0600 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: <20110209140402.GB31177@phare.normalesup.org> References: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> <20110209140402.GB31177@phare.normalesup.org> Message-ID: <4D52B095.2000408@gmail.com> On 02/09/2011 08:04 AM, Gael Varoquaux wrote: > LSSA is incomprehensible to me. Acronymes are bad for communication, > IMHO. I suggest 'spectral'. > > Gael > > PS: and Aubergine for the bikeshed, of course. 
> > On Wed, Feb 09, 2011 at 03:01:16PM +0100, Pim Schellart wrote: >> Dear SciPy developers, >> I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). >> After some iterations of review, recoding, adding documentation and adding unit tests the code (according to Ralf Gommers) is ready to go in. >> Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. >> Now the time has come to discuss where to put it and Ralf has suggested to have this discussion here: >> "I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch:https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good, whether it really should go into signal should be discussed on the mailing list (but seemed like the logical place for it). >> ... >> The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows to add similar methods later. >> " >> I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages. >> 1. It is a short name without underscores (personal preference) >> 2. It is a category name that can comprise several similar algorithms as desired. >> 3. It is immediately obvious from this name that this is different than FFT and therefore people will not be confused looking for FFT functions in this sub-module. >> 4. The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note this may help for people that want to find more information, or for people browsing wikipedia that are looking for an implementation). >> Please let me know what you think. 
>> Kind regards, >> Pim Schellart >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev I do agree with Gael about acronyms and initialisms. If people know what lssa is in this context, then they certainly know words like 'Lomb-Scargle', 'periodogram' and signal density! Rather, the audience you need to serve are the ones who need help going from signal density to periodogram to Lomb-Scargle to '[l]east-squares spectral analysis' to lssa. So using lssa is not much help to anyone except people who cannot enter more than a few letters. It is not my area, but Wikipedia's entry (http://en.wikipedia.org/wiki/Periodogram) implies that there is more to this particular area than this very specific implementation. So you need to provide a general framework for how everything relates to signal processing, with an object-oriented view. This allows people not only to use your code but also to contribute in a unified manner. Also, I probably incorrectly presume that 'spectral' is rather redundant when talking about signals in this context. Given that '[t]he periodogram is an estimate of the spectral density of a signal' (from the Wikipedia periodogram entry), 'scipy.signal.density' would seem an appropriate location. Is there another way of doing 'spectral analysis' that does not involve least-squares? If so, then you need to allow for these approaches. Also, the Wikipedia entry (http://en.wikipedia.org/wiki/Least-squares_spectral_analysis) gives multiple methods, so technically the design should allow for these if someone wanted to add them in the future. (Of course this entry talks about 'estimating a frequency spectrum', not a density as per the periodogram entry - I do not consider frequency to be always equivalent to a density, but that could be true here.) 
Anyhow, I am sure you know the differences, so I would suggest either a single general periodogram/density/fit/estimation class/function (that permits different approaches) or a set of functions similar to numpy's polynomials. Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Feb 9 10:38:01 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 9 Feb 2011 08:38:01 -0700 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> References: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> Message-ID: On Wed, Feb 9, 2011 at 7:01 AM, Pim Schellart wrote: > Dear SciPy developers, > > I have recently submitted code to calculate the Lomb-Scargle periodogram > (see ticket http://projects.scipy.org/scipy/ticket/1352 for more > information). > After some iterations of review, recoding, adding documentation and adding > unit tests the code (according to Ralf Gommers) is ready to go in. > Ralf has nicely integrated the code into SciPy trunk in his git branch ( > https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from > there. > Now the time has come to discuss where to put it and Ralf has suggested to > have this discussion here: > > "I've played with your code a bit, changed a few things, and added it to > scipy.signal in a github branch: > https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks > good, whether it really should go into signal should be discussed on the > mailing list (but seemed like the logical place for it). > > ... > > The best name for the module I could think of was spectral_analysis, maybe > there's a better one? It allows to add similar methods later. > " > > I would like to suggest the module name "lssa" for "Least-squares spectral > analysis". So the full path would be scipy.signal.lssa. This has several > advantages. > 1. 
It is a short name without underscores (personal preference) > 2. It is a category name that can comprise several similar algorithms as > desired. > 3. It is immediately obvious from this name that this is different than FFT > and therefore people will not be confused looking for FFT functions in this > sub-module. > 4. The name corresponds to the Wikipedia entry for this category of > algorithms :) (but on a serious note this may help for people that want to > find more information, or for people browsing wikipedia that are looking for > an implementation). > > Please let me know what you think. > > The signal module is collecting a lot of various stuff. Maybe at some point we should have separate spectral_analysis/curve_fitting/filter_design modules. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Wed Feb 9 17:22:22 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 9 Feb 2011 14:22:22 -0800 Subject: [SciPy-Dev] scipy github migration Message-ID: Hi, I noticed today that ohloh just fixed the problem that was causing NumPy (after the move to github) to appear to have a negative number of lines of code: http://www.ohloh.net/blog/Latest_Updates_to_Project_Statistics_and_Line_Counts And that got me wondering whether it was time to move SciPy over to git/github as well. Now that ETS has moved and matplotlib is planning to move in the near future, it seems like a good time to make a plan for moving SciPy as well. I believe that Pauli has already taken care of most of the work? What is left to do? 
Thanks, Jarrod From warren.weckesser at enthought.com Wed Feb 9 17:37:57 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Feb 2011 16:37:57 -0600 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> References: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> Message-ID: On Wed, Feb 9, 2011 at 8:01 AM, Pim Schellart wrote: > Dear SciPy developers, > > I have recently submitted code to calculate the Lomb-Scargle periodogram > (see ticket http://projects.scipy.org/scipy/ticket/1352 for more > information). > After some iterations of review, recoding, adding documentation and adding > unit tests the code (according to Ralf Gommers) is ready to go in. > Ralf has nicely integrated the code into SciPy trunk in his git branch ( > https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from > there. > Now the time has come to discuss where to put it and Ralf has suggested to > have this discussion here: > > "I've played with your code a bit, changed a few things, and added it to > scipy.signal in a github branch: > https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks > good, whether it really should go into signal should be discussed on the > mailing list (but seemed like the logical place for it). > > ... > > The best name for the module I could think of was spectral_analysis, maybe > there's a better one? It allows to add similar methods later. > " > > I would like to suggest the module name "lssa" for "Least-squares spectral > analysis". So the full path would be scipy.signal.lssa. This has several > advantages. > 1. It is a short name without underscores (personal preference) > 2. It is a category name that can comprise several similar algorithms as > desired. > 3. It is immediately obvious from this name that this is different than FFT > and therefore people will not be confused looking for FFT functions in this > sub-module. 
> 4. The name corresponds to the Wikipedia entry for this category of > algorithms :) (but on a serious note this may help for people that want to > find more information, or for people browsing wikipedia that are looking for > an implementation). > > Please let me know what you think. > As in many scipy subpackages, users will access this function as scipy.signal.lombscargle (because __init__.py in scipy.signal imports the function from the submodule and makes it available in __all__). I don't know if there is an official policy, but this seems to be the preferred packaging style in many scipy packages. So the name of the submodule doesn't really matter that much. I would just call the submodule lombscargle also. Warren > > Kind regards, > > Pim Schellart > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From P.Schellart at astro.ru.nl Wed Feb 9 17:48:56 2011 From: P.Schellart at astro.ru.nl (Pim Schellart) Date: Wed, 9 Feb 2011 23:48:56 +0100 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: References: Message-ID: Yes, I see your point, using an acronym probably makes it more complicated for people to find the routines if they are not familiar with the subject. I guess that having longer module names is a small price to pay for clarity. I especially like the suggestion of making separate spectral_analysis/curve_fitting/filter_design modules within signal however that is obviously not up to me to decide :) The suggestion of similarity to the numpy.polynomials module is also quite interesting. I think the following naming scheme (similar to the one used in scipy.optimize) would be nice: signal.spectral_analysis.periodogram_classical signal.spectral_analysis.periodogram_lombscargle signal.spectral_analysis.periodogram_ribicky ... 
perhaps with `signal.spectral_analysis.periodogram` as an alias to the most useful form. I intend to implement at least these two additional functions as soon as I have time. Kind regards, Pim Schellart On Feb 9, 2011, at 7:00 PM, scipy-dev-request at scipy.org wrote: > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: Including Lomb-Scargle code in scipy.signal (sub)module > (Bruce Southey) > 2. Re: Including Lomb-Scargle code in scipy.signal (sub)module > (Charles R Harris) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 09 Feb 2011 09:19:49 -0600 > From: Bruce Southey > Subject: Re: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal > (sub)module > To: scipy-dev at scipy.org > Message-ID: <4D52B095.2000408 at gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > On 02/09/2011 08:04 AM, Gael Varoquaux wrote: >> LSSA is incomprehensible to me. Acronymes are bad for communication, >> IMHO. I suggest 'spectral'. >> >> Gael >> >> PS: and Aubergine for the bikeshed, of course. >> >> On Wed, Feb 09, 2011 at 03:01:16PM +0100, Pim Schellart wrote: >>> Dear SciPy developers, >>> I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). >>> After some iterations of review, recoding, adding documentation and adding unit tests the code (according to Ralf Gommers) is ready to go in. 
>>> Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. >>> Now the time has come to discuss where to put it and Ralf has suggested to have this discussion here: >>> "I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch:https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good, whether it really should go into signal should be discussed on the mailing list (but seemed like the logical place for it). >>> ... >>> The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows to add similar methods later. >>> " >>> I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages. >>> 1. It is a short name without underscores (personal preference) >>> 2. It is a category name that can comprise several similar algorithms as desired. >>> 3. It is immediately obvious from this name that this is different than FFT and therefore people will not be confused looking for FFT functions in this sub-module. >>> 4. The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note this may help for people that want to find more information, or for people browsing wikipedia that are looking for an implementation). >>> Please let me know what you think. >>> Kind regards, >>> Pim Schellart >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev > I do agree with Gael about acronyms and initialisms. If the people know > what lssa is in this context, then they should certainly know words like > 'Lomb-Scargle', 'periodogram' and signal density! 
Rather the audience > you need are the ones that need help to go from signal density to > periodogram to Lomb-Scargle to '[l]east-squares spectral analysis' to > lssa. So using lssa is not much of a help to anyone except people who > can not enter more than a few letters. > > It is not my area but Wikipedia's entry > (http://en.wikipedia.org/wiki/Periodogram) implies that there is more to > this particular area than this very specific implementation. So you need > to provide a general framework of how every relates to signal processing > with an object-orientated view. This allows not only people to use your > code but also to contribute in a unified manner. Also I probably > incorrectly presume that 'spectral' is rather redundant when talking > about signals in this context. > > Given that '[t]he periodogram is an estimate of the spectral density of > a signal' (from the Wikipedia periodogram entry) so > 'scipy.signal.density' would seem appropriate location. > > Is there another way of doing 'spectral analysis' that does not involve > least-squares? If so then you need to allow for these approaches. > Also, Wikipedia entry > (http://en.wikipedia.org/wiki/Least-squares_spectral_analysis) gives > multiple methods so technically the design should allow for these if > someone wanted to add them in the future. (Of course this entry talks > about 'estimating a frequency spectrum' not a density as per the > periodogram entry - I do not consider frequency to be always equivalent > to a density but could be true here.) > > Anyhow, I am sure you know the differences, so I would suggest that > there is either a single general periodogram/density/fit/estimation > class/function (that permits different approaches) or set of functions > that are similar numpy's polynomials. > > Bruce > > > > > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20110209/24055ef2/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Wed, 9 Feb 2011 08:38:01 -0700 > From: Charles R Harris > Subject: Re: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal > (sub)module > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Wed, Feb 9, 2011 at 7:01 AM, Pim Schellart wrote: > >> Dear SciPy developers, >> >> I have recently submitted code to calculate the Lomb-Scargle periodogram >> (see ticket http://projects.scipy.org/scipy/ticket/1352 for more >> information). >> After some iterations of review, recoding, adding documentation and adding >> unit tests the code (according to Ralf Gommers) is ready to go in. >> Ralf has nicely integrated the code into SciPy trunk in his git branch ( >> https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from >> there. >> Now the time has come to discuss where to put it and Ralf has suggested to >> have this discussion here: >> >> "I've played with your code a bit, changed a few things, and added it to >> scipy.signal in a github branch: >> https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks >> good, whether it really should go into signal should be discussed on the >> mailing list (but seemed like the logical place for it). >> >> ... >> >> The best name for the module I could think of was spectral_analysis, maybe >> there's a better one? It allows to add similar methods later. >> " >> >> I would like to suggest the module name "lssa" for "Least-squares spectral >> analysis". So the full path would be scipy.signal.lssa. This has several >> advantages. >> 1. It is a short name without underscores (personal preference) >> 2. It is a category name that can comprise several similar algorithms as >> desired. >> 3. 
It is immediately obvious from this name that this is different than FFT >> and therefore people will not be confused looking for FFT functions in this >> sub-module. >> 4. The name corresponds to the Wikipedia entry for this category of >> algorithms :) (but on a serious note this may help for people that want to >> find more information, or for people browsing wikipedia that are looking for >> an implementation). >> >> Please let me know what you think. >> >> > > The signal module is collecting a lot of various stuff. Maybe at some point > we should have separate spectral_analysis/curve_fitting/filter_design > modules. > > Chuck > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20110209/334a842d/attachment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 88, Issue 11 > ***************************************** From gael.varoquaux at normalesup.org Wed Feb 9 18:07:21 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Feb 2011 00:07:21 +0100 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: References: <4FBDF01D-588D-4D41-B8ED-45DF15BA96A4@gmail.com> Message-ID: <20110209230721.GA29775@phare.normalesup.org> On Wed, Feb 09, 2011 at 04:37:57PM -0600, Warren Weckesser wrote: > As in many scipy subpackages, users will access this function as > scipy.signal.lombscargle (because __init__.py in scipy.signal > imports the function from the submodule and makes it available > in __all__). I don't know if there is an official policy, but > this seems to be the preferred packaging style in many scipy > packages. So the name of the submodule doesn't really matter > that much. I would just call the submodule lombscargle also. 
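The packaging style described above can be simulated without building a real package, using synthetic in-memory modules; "pkg" and the placeholder function below are stand-ins for illustration, not the actual scipy layout:

```python
import sys
import types

# Build a fake package in memory: "pkg" plays the role of scipy.signal,
# with a submodule "pkg.lombscargle" defining a function of the same name.
pkg = types.ModuleType("pkg")
sub = types.ModuleType("pkg.lombscargle")
sub.lombscargle = lambda t, y, freqs: "periodogram values"  # placeholder
sys.modules["pkg"] = pkg
sys.modules["pkg.lombscargle"] = sub
pkg.lombscargle = sub  # what "import pkg.lombscargle" would bind

# The __init__-style re-export then rebinds the attribute to the
# function, so users reach it directly as pkg.lombscargle ...
pkg.lombscargle = sub.lombscargle
pkg.__all__ = ["lombscargle"]

# ... at the cost that the submodule of the same name is now shadowed
# as an attribute of the package.
from pkg import lombscargle
print(callable(lombscargle))  # → True
```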
I don't think that you should be using the same name for the module and the function because, as a result, the function imported in the __init__ will shadow the module and make it impossible to import (e.g. for testing). My 2 cents, Gaël From pav at iki.fi Wed Feb 9 19:37:41 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 10 Feb 2011 00:37:41 +0000 (UTC) Subject: [SciPy-Dev] scipy github migration References: Message-ID: On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: [clip] > And that got me wondering whether it was time to move SciPy over to > git/github as well. Now that ETS has moved and matplotlib is planning > to move in the near future, it seems like a good time to make a plan for > moving SciPy as well. I believe that Pauli has already taken care of > most of the work? What is left to do? Not much is left to do. In principle we would be mostly ready already now. The only question is whether making the jump now would bring more work for Ralf getting 0.9.0 out. -- Pauli Virtanen From scopatz at gmail.com Wed Feb 9 23:36:23 2011 From: scopatz at gmail.com (Anthony Scopatz) Date: Thu, 10 Feb 2011 04:36:23 +0000 Subject: [SciPy-Dev] scipy github migration In-Reply-To: References: Message-ID: On Thu, Feb 10, 2011 at 12:37 AM, Pauli Virtanen wrote: > On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: > [clip] > > And that got me wondering whether it was time to move SciPy over to > > git/github as well. Now that ETS has moved and matplotlib is planning > > to move in the near future, it seems like a good time to make a plan for > > moving SciPy as well. I believe that Pauli has already taken care of > > most of the work? What is left to do? > > Not much is left to do. In principle we would be mostly ready already now. > > The only question is whether making the jump now would bring more work > for Ralf getting 0.9.0 out. > If you need any help with this, please let me know. Ilan and I just transferred all of ETS over from svn a couple of weeks ago. 
I would be happy to do it for SciPy. Be Well Anthony > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Thu Feb 10 05:20:59 2011 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 10 Feb 2011 04:20:59 -0600 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: References: Message-ID: <94688049-03E4-4C05-8BB7-5BE3CB6BE7AF@enthought.com> This looks like a very nice addition to the signal toolbox in SciPy. And, yes, it should definitely go in the signal toolbox. I suggest creating a namespace periodogram (or spectrum or spectral) in the signal toolbox: signal.periodogram.welch signal.periodogram.lombscargle signal.periodogram.ribicky etc. I'm not a big fan of very long names with a lot of redundancy (i.e. what is the categorical difference between spectral and periodogram?) -Travis On Feb 9, 2011, at 4:48 PM, Pim Schellart wrote: > Yes, I see your point, using an acronym probably makes it more complicated for people to find the routines if they are not familiar with the subject. > I guess that having longer module names is a small price to pay for clarity. > I especially like the suggestion of making separate spectral_analysis/curve_fitting/filter_design modules within signal however that is obviously not up to me to decide :) > The suggestion of similarity to the numpy.polynomials module is also quite interesting. > I think the following naming scheme (similar to the one used in scipy.optimize) would be nice: > > signal.spectral_analysis.periodogram_classical > signal.spectral_analysis.periodogram_lombscargle > signal.spectral_analysis.periodogram_ribicky > ... > > perhaps with `signal.spectral_analysis.periodogram` as an alias to the most useful form. 
> I intend to implement at least these two additional functions as soon as I have time. > > Kind regards, > > Pim Schellart > > On Feb 9, 2011, at 7:00 PM, scipy-dev-request at scipy.org wrote: > >> Send SciPy-Dev mailing list submissions to >> scipy-dev at scipy.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> or, via email, send a message with subject or body 'help' to >> scipy-dev-request at scipy.org >> >> You can reach the person managing the list at >> scipy-dev-owner at scipy.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of SciPy-Dev digest..." >> >> >> Today's Topics: >> >> 1. Re: Including Lomb-Scargle code in scipy.signal (sub)module >> (Bruce Southey) >> 2. Re: Including Lomb-Scargle code in scipy.signal (sub)module >> (Charles R Harris) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Wed, 09 Feb 2011 09:19:49 -0600 >> From: Bruce Southey >> Subject: Re: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal >> (sub)module >> To: scipy-dev at scipy.org >> Message-ID: <4D52B095.2000408 at gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> On 02/09/2011 08:04 AM, Gael Varoquaux wrote: >>> LSSA is incomprehensible to me. Acronymes are bad for communication, >>> IMHO. I suggest 'spectral'. >>> >>> Gael >>> >>> PS: and Aubergine for the bikeshed, of course. >>> >>> On Wed, Feb 09, 2011 at 03:01:16PM +0100, Pim Schellart wrote: >>>> Dear SciPy developers, >>>> I have recently submitted code to calculate the Lomb-Scargle periodogram (see ticket http://projects.scipy.org/scipy/ticket/1352 for more information). >>>> After some iterations of review, recoding, adding documentation and adding unit tests the code (according to Ralf Gommers) is ready to go in. 
>>>> Ralf has nicely integrated the code into SciPy trunk in his git branch (https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from there. >>>> Now the time has come to discuss where to put it and Ralf has suggested to have this discussion here: >>>> "I've played with your code a bit, changed a few things, and added it to scipy.signal in a github branch:https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks good, whether it really should go into signal should be discussed on the mailing list (but seemed like the logical place for it). >>>> ... >>>> The best name for the module I could think of was spectral_analysis, maybe there's a better one? It allows to add similar methods later. >>>> " >>>> I would like to suggest the module name "lssa" for "Least-squares spectral analysis". So the full path would be scipy.signal.lssa. This has several advantages. >>>> 1. It is a short name without underscores (personal preference) >>>> 2. It is a category name that can comprise several similar algorithms as desired. >>>> 3. It is immediately obvious from this name that this is different than FFT and therefore people will not be confused looking for FFT functions in this sub-module. >>>> 4. The name corresponds to the Wikipedia entry for this category of algorithms :) (but on a serious note this may help for people that want to find more information, or for people browsing wikipedia that are looking for an implementation). >>>> Please let me know what you think. >>>> Kind regards, >>>> Pim Schellart >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> I do agree with Gael about acronyms and initialisms. If the people know >> what lssa is in this context, then they should certainly know words like >> 'Lomb-Scargle', 'periodogram' and signal density! 
Rather the audience >> you need are the ones that need help to go from signal density to >> periodogram to Lomb-Scargle to '[l]east-squares spectral analysis' to >> lssa. So using lssa is not much of a help to anyone except people who >> can not enter more than a few letters. >> >> It is not my area but Wikipedia's entry >> (http://en.wikipedia.org/wiki/Periodogram) implies that there is more to >> this particular area than this very specific implementation. So you need >> to provide a general framework of how every relates to signal processing >> with an object-orientated view. This allows not only people to use your >> code but also to contribute in a unified manner. Also I probably >> incorrectly presume that 'spectral' is rather redundant when talking >> about signals in this context. >> >> Given that '[t]he periodogram is an estimate of the spectral density of >> a signal' (from the Wikipedia periodogram entry) so >> 'scipy.signal.density' would seem appropriate location. >> >> Is there another way of doing 'spectral analysis' that does not involve >> least-squares? If so then you need to allow for these approaches. >> Also, Wikipedia entry >> (http://en.wikipedia.org/wiki/Least-squares_spectral_analysis) gives >> multiple methods so technically the design should allow for these if >> someone wanted to add them in the future. (Of course this entry talks >> about 'estimating a frequency spectrum' not a density as per the >> periodogram entry - I do not consider frequency to be always equivalent >> to a density but could be true here.) >> >> Anyhow, I am sure you know the differences, so I would suggest that >> there is either a single general periodogram/density/fit/estimation >> class/function (that permits different approaches) or set of functions >> that are similar numpy's polynomials. >> >> Bruce >> >> >> >> >> >> >> >> >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20110209/24055ef2/attachment-0001.html >> >> ------------------------------ >> >> Message: 2 >> Date: Wed, 9 Feb 2011 08:38:01 -0700 >> From: Charles R Harris >> Subject: Re: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal >> (sub)module >> To: SciPy Developers List >> Message-ID: >> >> Content-Type: text/plain; charset="iso-8859-1" >> >> On Wed, Feb 9, 2011 at 7:01 AM, Pim Schellart wrote: >> >>> Dear SciPy developers, >>> >>> I have recently submitted code to calculate the Lomb-Scargle periodogram >>> (see ticket http://projects.scipy.org/scipy/ticket/1352 for more >>> information). >>> After some iterations of review, recoding, adding documentation and adding >>> unit tests the code (according to Ralf Gommers) is ready to go in. >>> Ralf has nicely integrated the code into SciPy trunk in his git branch ( >>> https://github.com/rgommers/scipy/tree/lomb-scargle) and you may pull from >>> there. >>> Now the time has come to discuss where to put it and Ralf has suggested to >>> have this discussion here: >>> >>> "I've played with your code a bit, changed a few things, and added it to >>> scipy.signal in a github branch: >>> https://github.com/rgommers/scipy/tree/lomb-scargle. I think it looks >>> good, whether it really should go into signal should be discussed on the >>> mailing list (but seemed like the logical place for it). >>> >>> ... >>> >>> The best name for the module I could think of was spectral_analysis, maybe >>> there's a better one? It allows to add similar methods later. >>> " >>> >>> I would like to suggest the module name "lssa" for "Least-squares spectral >>> analysis". So the full path would be scipy.signal.lssa. This has several >>> advantages. >>> 1. It is a short name without underscores (personal preference) >>> 2. It is a category name that can comprise several similar algorithms as >>> desired. >>> 3. 
It is immediately obvious from this name that this is different than FFT >>> and therefore people will not be confused looking for FFT functions in this >>> sub-module. >>> 4. The name corresponds to the Wikipedia entry for this category of >>> algorithms :) (but on a serious note this may help for people that want to >>> find more information, or for people browsing wikipedia that are looking for >>> an implementation). >>> >>> Please let me know what you think. >>> >>> >> >> The signal module is collecting a lot of various stuff. Maybe at some point >> we should have separate spectral_analysis/curve_fitting/filter_design >> modules. >> >> Chuck >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20110209/334a842d/attachment-0001.html >> >> ------------------------------ >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> End of SciPy-Dev Digest, Vol 88, Issue 11 >> ***************************************** > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev --- Travis Oliphant Enthought, Inc. oliphant at enthought.com 1-512-536-1057 http://www.enthought.com From ralf.gommers at googlemail.com Thu Feb 10 06:34:52 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 10 Feb 2011 19:34:52 +0800 Subject: [SciPy-Dev] scipy github migration In-Reply-To: References: Message-ID: On Thu, Feb 10, 2011 at 8:37 AM, Pauli Virtanen wrote: > On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: > [clip] > > And that got me wondering whether it was time to move SciPy over to > > git/github as well. Now that ETS has moved and matplotlib is planning > > to move in the near future, it seems like a good time to make a plan for > > moving SciPy as well. 
I believe that Pauli has already taken care of > > most of the work? What is left to do? > > Not much is left to do. In principle we would be mostly ready already now. > > The only question is whether making the jump now would bring more work > for Ralf getting 0.9.0 out. > Probably not much, but just in case I'd prefer you wait till it's done. I will backport the last few patches (the ndimage one, the most recent fixes for builds with MKL and your fixes for python 2.4), tag rc3 and if everything is okay then 0.9.0 should be released in ten days or so. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From scopatz at gmail.com Thu Feb 10 09:21:22 2011 From: scopatz at gmail.com (Anthony Scopatz) Date: Thu, 10 Feb 2011 14:21:22 +0000 Subject: [SciPy-Dev] scipy github migration In-Reply-To: References: Message-ID: On Thu, Feb 10, 2011 at 11:34 AM, Ralf Gommers wrote: > > > On Thu, Feb 10, 2011 at 8:37 AM, Pauli Virtanen wrote: > >> On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: >> [clip] >> > And that got me wondering whether it was time to move SciPy over to >> > git/github as well. Now that ETS has moved and matplotlib is planning >> > to move in the near future, it seems like a good time to make a plan for >> > moving SciPy as well. I believe that Pauli has already taken care of >> > most of the work? What is left to do? >> >> Not much is left to do. In principle we would be mostly ready already now. >> >> The only question is whether making the jump now would bring more work >> for Ralf getting 0.9.0 out. >> > > Probably not much, but just in case I'd prefer you wait till it's done. I > will backport the last few patches (the ndimage one, the most recent fixes > for builds with MKL and your fixes for python 2.4), tag rc3 and if > everything is okay then 0.9.0 should be released in ten days or so. > I agree, waiting until right after the release is probably the best. 
When it gets to be about a week away, we should send out a post that states that the svn repo will become read-only after the following the release and other details about the transition. After 0.9 would be a perfect time for this. Be Well Anthony > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Thu Feb 10 09:57:26 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 10 Feb 2011 08:57:26 -0600 Subject: [SciPy-Dev] scipy github migration In-Reply-To: References: Message-ID: <4D53FCD6.5030705@gmail.com> On 02/10/2011 08:21 AM, Anthony Scopatz wrote: > > > On Thu, Feb 10, 2011 at 11:34 AM, Ralf Gommers > > wrote: > > > > On Thu, Feb 10, 2011 at 8:37 AM, Pauli Virtanen > wrote: > > On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: > [clip] > > And that got me wondering whether it was time to move SciPy > over to > > git/github as well. Now that ETS has moved and matplotlib > is planning > > to move in the near future, it seems like a good time to > make a plan for > > moving SciPy as well. I believe that Pauli has already > taken care of > > most of the work? What is left to do? > > Not much is left to do. In principle we would be mostly ready > already now. > > The only question is whether making the jump now would bring > more work > for Ralf getting 0.9.0 out. > > > Probably not much, but just in case I'd prefer you wait till it's > done. I will backport the last few patches (the ndimage one, the > most recent fixes for builds with MKL and your fixes for python > 2.4), tag rc3 and if everything is okay then 0.9.0 should be > released in ten days or so. > > > I agree, waiting until right after the release is probably the best. 
> > When it gets to be about a week away, we should send out a post that > states that the svn repo will become read-only after the following the > release and other details about the transition. After 0.9 would be a > perfect time for this. > > Be Well > Anthony > > Cheers, > Ralf > > _______________________________________________ > Please make sure that any information about the switch is included in the 0.9 release notes. Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.andersson at esat.kuleuven.be Thu Feb 10 10:55:38 2011 From: joel.andersson at esat.kuleuven.be (Joel Andersson) Date: Thu, 10 Feb 2011 16:55:38 +0100 Subject: [SciPy-Dev] Python interface to SUNDIALS (CVODES and IDAS) via CasADi Message-ID: Dear SciPy developers, It is my pleasure to announce to you the first public release of CasADi, a minimalistic computer algebra system implementing automatic differentiation in forward and adjoint modes by means of a hybrid symbolic/numeric approach. It is designed to be a low-level tool for quick, yet highly efficient implementation of algorithms for dynamic optimization. It is written in completely self-contained C++ code and is released under the LGPL license. Maybe the most interesting for you is that it comes with a full-featured Python interface, which is auto-generated using SWIG. The C++ classes have been designed in a way to make the interface between Python and C++ as complete and easy to maintain as possible. No matter if the tool is used from C++ or Python, all functions will be evaluated on CasADi's virtual machine, meaning that there is little to no speed penalty when using the tool from Python, over a pure C/C++-approach. This beta release contains a rather full-featured interface to CVODES and IDAS, the two sensitivity capable integrators from the Sundials suite, which relieves the user for much of the painstaking work when using these tools efficiently. 
In particular, the following is supported: * Automatic generation of the forward and adjoint right hand side when calculating ODE/DAE sensitivities * Automatic generation of Jacobian information in dense, banded or general sparse format * Interface to a sparse direct linear solver (SuperLU), to be used as an alternative linear solver or as a preconditioner module for Sundial's iterative linear solvers * Second and higher order sensitivities via a forward-over-forward or forward-over-adjoint approach In addition to the Sundials interface, there are interfaces to KNITRO and IPOPT, two excellent NLP solvers. These can be used together with the Sundials interface or be used separately, enabling very efficient implementation of a wide range of methods from the field of dynamic optimization (in particular direct multiple-shooting and direct collocation). Right now, the interface is quite different from the Scipy ODE/DAE integrators, but you might be interested anyway. The only Numpy/Scipy integration so far in the package, is automatic type conversions between CasADi and Numpy/Sciypy datatypes. The software is located on Sourceforge, please visit www.casadi.org for instructions on how to download and install the code. Best regards, Joel Andersson on behalf of the CasADi team -- Joel Andersson, PhD Student Electrical Engineering Department (ESAT-SCD), Room 05.11, K.U.Leuven, Kasteelpark Arenberg 10 - bus 2446, 3001 Heverlee, Belgium Phone: +32-16-321819 Mobile: +32-486-672874 (Belgium) / +34-63-4452111 (Spain) / +46-727-365878 (Sweden) Private address: Justus Lipsiusstraat 59/202, 3000 Leuven, Belgium -------------- next part -------------- An HTML attachment was scrubbed... 
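[The forward-mode automatic differentiation that CasADi builds on can be shown in miniature with dual numbers. This is a generic textbook construction in plain Python, not CasADi's actual API — its symbolic types and virtual machine are far more elaborate.]

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 == 0: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (a + a'eps)(b + b'eps) = ab + (a'b + ab')eps
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def sin(x):
    # chain rule: d(sin u) = cos(u) * du
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def deriv(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).der

# d/dx [x * sin(x)] = sin(x) + x*cos(x)
g = lambda v: v * sin(v)
x = 0.7
ad = deriv(g, x)
exact = math.sin(x) + x * math.cos(x)
```

[Evaluating `g` on `Dual(x, 1.0)` propagates the derivative alongside the value in one pass; the forward sensitivity right-hand sides CasADi generates for CVODES/IDAS are built from sweeps analogous to this.]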
URL: From ralf.gommers at googlemail.com Fri Feb 11 09:13:01 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 11 Feb 2011 22:13:01 +0800 Subject: [SciPy-Dev] scipy github migration In-Reply-To: <4D53FCD6.5030705@gmail.com> References: <4D53FCD6.5030705@gmail.com> Message-ID: On Thu, Feb 10, 2011 at 10:57 PM, Bruce Southey wrote: > On 02/10/2011 08:21 AM, Anthony Scopatz wrote: > > > > On Thu, Feb 10, 2011 at 11:34 AM, Ralf Gommers < > ralf.gommers at googlemail.com> wrote: > >> >> >> On Thu, Feb 10, 2011 at 8:37 AM, Pauli Virtanen wrote: >> >>> On Wed, 09 Feb 2011 14:22:22 -0800, Jarrod Millman wrote: >>> [clip] >>> > And that got me wondering whether it was time to move SciPy over to >>> > git/github as well. Now that ETS has moved and matplotlib is planning >>> > to move in the near future, it seems like a good time to make a plan >>> for >>> > moving SciPy as well. I believe that Pauli has already taken care of >>> > most of the work? What is left to do? >>> >>> Not much is left to do. In principle we would be mostly ready already >>> now. >>> >>> The only question is whether making the jump now would bring more work >>> for Ralf getting 0.9.0 out. >>> >> >> Probably not much, but just in case I'd prefer you wait till it's done. I >> will backport the last few patches (the ndimage one, the most recent fixes >> for builds with MKL and your fixes for python 2.4), tag rc3 and if >> everything is okay then 0.9.0 should be released in ten days or so. >> > > I agree, waiting until right after the release is probably the best. > > When it gets to be about a week away, we should send out a post that > states that the svn repo will become read-only after the following the > release and other details about the transition. After 0.9 would be a > perfect time for this. 
> > Be Well > Anthony > > >> Cheers, >> Ralf >> >> _______________________________________________ >> > > Please make sure that any information about the switch is included in the > 0.9 release notes. > > Makes sense, I can include it like: Scipy source code location to be changed ======================================== Soon after this release, Scipy will stop using SVN as the version control system, and move to Git. The development source code for Scipy can from then on be found at http://github.com/scipy/scipy Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Fri Feb 11 09:30:33 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 11 Feb 2011 22:30:33 +0800 Subject: [SciPy-Dev] Including Lomb-Scargle code in scipy.signal (sub)module In-Reply-To: <94688049-03E4-4C05-8BB7-5BE3CB6BE7AF@enthought.com> References: <94688049-03E4-4C05-8BB7-5BE3CB6BE7AF@enthought.com> Message-ID: On Thu, Feb 10, 2011 at 6:20 PM, Travis Oliphant wrote: > This looks like a very nice addition to the signal toolbox in SciPy. > And, yes, it should definitely go in the signal toolbox. > > I suggest creating a namespace periodogram (or spectrum or spectral) in the > signal toolbox: > > signal.periodogram.welch > signal.periodogram.lombscargle > signal.periodogram.ribicky > > etc. > It seems there is reasonable agreement, with most suggestions for "spectral". So I plan to change the name to that and commit it in the next few days. > > I'm not a big fan of very long names with a lot of redundancy (i.e. what is > the categorical difference between spectral and periodogram?) > > Spectral seems to be a slightly broader term, i.e. the least squares spectral analysis Wikipedia page includes a number of techniques only some of which qualify as periodograms. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at enthought.com Sat Feb 12 13:53:04 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 12 Feb 2011 12:53:04 -0600 Subject: [SciPy-Dev] Question about subpackage/submodule API Message-ID: The recent discussion about what to call the module in Pim Schellart's nice Lomb-Scargle contribution brought up a question for me that I have not seen discussed before. In many cases, a subpackage contains many modules, and the assorted functions and classes in these modules are made available in the (sub)package-level namespace by importing them from within __init__.py. For example, scipy/stats contains distributions.py, kde.py, morestats.py, mstats.py, rv.py, stats.py and vonmises.py. Except for mstats.py, the contents of all these are imported into the __init__.py namespace and included in __all__, so users can say, for example, >>> from scipy.stats import bayes_mvs instead of >>> from scipy.stats.more_stats import bayes_mvs Is this an *intentional* definition of an API? For example, is it "wrong" for a user to refer to the submodule 'more_stats' explicitly when importing? I think the answer is yes, because I don't think any promise is being made that the submodule won't be renamed or refactored as scipy development progresses. But I'd like to be sure that that is everyone else's understanding. If that is not the case, then changes that I have made in the past have violated the deprecation policy. For example, a while back I moved the functions in signal/filter_design.py that were related to FIR filters to their own module, fir_filter_design.py. By making appropriate changes to signal/__init__.py, all the function were still importable from scipy.signal. So, from my point of view, I didn't change the public API and no deprecation was necessary. 
But if scipy.signal.filter_design is part of the public API, then I should have made sure that something like >>> from scipy.signal.filter_design import firwin still worked after the refactor. Note: I'm not saying that *all* submodules should be private and have their objects exposed only through __init__.py. I'm just looking for some clarification of existing policy. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Feb 12 14:35:40 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 12 Feb 2011 14:35:40 -0500 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 1:53 PM, Warren Weckesser wrote: > The recent discussion about what to call the module in > Pim Schellart's nice Lomb-Scargle contribution brought up > a question for me that I have not seen discussed before. > In many cases, a subpackage contains many modules, > and the assorted functions and classes in these modules > are made available in the (sub)package-level namespace > by importing them from within __init__.py. For example, > scipy/stats contains distributions.py, kde.py, morestats.py, > mstats.py, rv.py, stats.py and vonmises.py. Except for > mstats.py, the contents of all these are imported into > the __init__.py namespace and included in __all__, so > users can say, for example, > >>>> from scipy.stats import bayes_mvs > > instead of > >>>> from scipy.stats.more_stats import bayes_mvs > > Is this an *intentional* definition of an API? For example, > is it "wrong" for a user to refer to the submodule 'more_stats' > explicitly when importing? I think the answer is yes, because > I don't think any promise is being made that the submodule > won't be renamed or refactored as scipy development > progresses. But I'd like to be sure that that is everyone > else's understanding.
> > If that is not the case, then changes that I have made in the > past have violated the deprecation policy. For example, a > while back I moved the functions in signal/filter_design.py > that were related to FIR filters to their own module, > fir_filter_design.py. By making appropriate changes to > signal/__init__.py, all the function were still importable > from scipy.signal. So, from my point of view, I didn't > change the public API and no deprecation was necessary. > But if scipy.signal.filter_design is part of the public > API, then I should have made sure that something like > >>>> from scipy.signal.filter_design import firwin > > still worked after the refactor. > > Note: I'm not saying that *all* submodules should be private > and have their objects exposed only through __init__.py. > I'm just looking for some clarification of existing > policy. I haven't seen a discussion before either. I think the main reason to rename any modules is if there are not all objects or functions exposed in the namespace for the subpackage. I would never try to rename stats.distributions, because even though the distribution instances are exposed in stats, the classes itself are not. When we added mstats to the import I added mstats as import module fore the actual modules, as a sub namespace. Some modules have unexposed helper function that are sometimes useful for users to use. On the other hand, I don't think there is many import by users that is directly from stats.stats or stats.morestats or stats.mstats_extras. I think until scipy 1.0 there is still more streamlining in the scipy structure necessary, but I think we should move to a policy that also internal module reorganization is deprecated, even if this wasn't so in the past. If I remember correctly, there were other cases besides your changes. As a related aside: If we are ever allowed to make bigger changes, then I'd love to break up scipy.stats.
I find the import time for scipy.stats (and the accompanying kitchen sink) pretty awful. Josef > > Warren > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From josef.pktd at gmail.com Sat Feb 12 14:39:07 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 12 Feb 2011 14:39:07 -0500 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 2:35 PM, wrote: > On Sat, Feb 12, 2011 at 1:53 PM, Warren Weckesser > wrote: >> The recent discussion about what to call the module in >> Pim Schellart's nice Lomb-Scargle contribution brought up >> a question for me that I have not seen discussed before. >> In many cases, a subpackage contains many modules, >> and the assorted functions and classes in these modules >> are made available in the (sub)package-level namespace >> by importing them from within __init__.py. For example, >> scipy/stats contains distributions.py, kde.py, morestats.py, >> mstats.py, rv.py, stats.py and vonmises.py. Except for >> mstats.py, the contents of all these are imported into >> the __init__.py namespace and included in __all__, so >> users can say, for example, >> >>>>> from scipy.stats import bayes_mvs >> >> instead of >> >>>>> from scipy.stats.more_stats import bayes_mvs >> >> Is this an *intentional* definition of an API? For example, >> is it "wrong" for a user to refer to the submodule 'more_stats' >> explicitly when importing? I think the answer is yes, because >> I don't think any promise is being made that the submodule >> won't be renamed or refactored as scipy development >> progresses. But I'd like to be sure that that is everyone >> else's understanding. >> >> If that is not the case, then changes that I have made in the >> past have violated the deprecation policy.
For example, a >> while back I moved the functions in signal/filter_design.py >> that were related to FIR filters to their own module, >> fir_filter_design.py. By making appropriate changes to >> signal/__init__.py, all the function were still importable >> from scipy.signal. So, from my point of view, I didn't >> change the public API and no deprecation was necessary. >> But if scipy.signal.filter_design is part of the public >> API, then I should have made sure that something like >> >>>>> from scipy.signal.filter_design import firwin >> >> still worked after the refactor. >> >> Note: I'm not saying that *all* submodules should be private >> and have their objects exposed only through __init__.py. >> I'm just looking for some clarification of existing >> policy. > > I haven't seen a discussion before either. I think the main reason to > rename any modules is if there are not all objects or functions > exposed in the namespace for the subpackage. (proofreading after the send) I think the main reason *not* to rename a module is if it contains objects, functions and classes, that are not exposed in the namespace of the subpackage.
> > As a related aside: If we are ever allowed to make bigger changes, > then I'd love to break up scipy.stats. I find the import time for > scipy.stats (and the accompanying kitchen sink) pretty awful. > > Josef > > >> >> Warren >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > From oliphant at enthought.com Sat Feb 12 15:12:44 2011 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 12 Feb 2011 14:12:44 -0600 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: The policy in the past has been that the stable API is only one level down from the scipy namespace. So, developers should import the name from the top level namespace. As a subpackage grows, I could see justification for one more level in the stable API --- e.g. scipy.signal.spectral. But, I would be opposed to an API that is deeper than that. Travis -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 12, 2011, at 1:39 PM, josef.pktd at gmail.com wrote: > On Sat, Feb 12, 2011 at 2:35 PM, wrote: >> On Sat, Feb 12, 2011 at 1:53 PM, Warren Weckesser >> wrote: >>> The recent discussion about what to call the module in >>> Pim Schellart's nice Lomb-Scargle contribution brought up >>> a question for me that I have not seen discussed before. >>> In many cases, a subpackage contains many modules, >>> and the assorted functions and classes in these modules >>> are made available in the (sub)package-level namespace >>> by importing them from within __init__.py. For example, >>> scipy/stats contains distributions.py, kde.py, morestats.py, >>> mstats.py, rv.py, stats.py and vonmises.py. 
Except for >>> mstats.py, the contents of all these are imported into >>> the __init__.py namespace and included in __all__, so >>> users can say, for example, >>> >>>>>> from scipy.stats import bayes_mvs >>> >>> instead of >>> >>>>>> from scipy.stats.more_stats import bayes_mvs >>> >>> Is this an *intentional* definition of an API? For example, >>> is it "wrong" for a user to refer to the submodule 'more_stats' >>> explicitly when importing? I think the answer is yes, because >>> I don't think any promise is being made that the submodule >>> won't be renamed or refactored as scipy development >>> progresses. But I'd like to be sure that that is everyone >>> else's understanding. >>> >>> If that is not the case, then changes that I have made in the >>> past have violated the deprecation policy. For example, a >>> while back I moved the functions in signal/filter_design.py >>> that were related to FIR filters to their own module, >>> fir_filter_design.py. By making appropriate changes to >>> signal/__init__.py, all the function were still importable >>> from scipy.signal. So, from my point of view, I didn't >>> change the public API and no deprecation was necessary. >>> But if scipy.signal.filter_design is part of the public >>> API, then I should have made sure that something like >>> >>>>>> from scipy.signal.filter_design import firwin >>> >>> still worked after the refactor. >>> >>> Note: I'm not saying that *all* submodules should be private >>> and have their objects exposed only through __init__.py. >>> I'm just looking for some clarification of existing >>> policy. >> >> I haven't seen a discussion before either. I think the main reason to >> rename any modules is if there are not all objects or functions >> exposed in the namespace for the subpackage. > > (proofreading after the send) > I think the main reason *not* to rename a module is if it contains > objects, functions and classes, that are not exposed in the namespace > of the subpackage. 
>
> Josef
>
>> I would never try to
>> rename stats.distributions, because even though the distribution
>> instances are exposed in stats, the classes themselves are not. When we
>> added mstats to the import I added mstats as an import module for the
>> actual modules, as a sub namespace.
>>
>> Some modules have unexposed helper functions that are sometimes useful
>> for users to use.
>>
>> On the other hand, I don't think many users import directly
>> from stats.stats or stats.morestats or stats.mstats_extras.
>>
>> I think until scipy 1.0 there is still more streamlining in the scipy
>> structure necessary, but I think we should move to a policy where
>> internal module reorganization is also deprecated, even if this wasn't so
>> in the past. If I remember correctly, there were other cases besides
>> your changes.
>>
>> As a related aside: If we are ever allowed to make bigger changes,
>> then I'd love to break up scipy.stats. I find the import time for
>> scipy.stats (and the accompanying kitchen sink) pretty awful.
>>
>> Josef
>>
>>> Warren

From pav at iki.fi Sat Feb 12 16:05:05 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 12 Feb 2011 21:05:05 +0000 (UTC)
Subject: [SciPy-Dev] Question about subpackage/submodule API
References: Message-ID:

On Sat, 12 Feb 2011 14:12:44 -0600, Travis Oliphant wrote:
> The policy in the past has been that the stable API is only one level
> down from the scipy namespace.
>
> So, developers should import the name from the top level namespace.
>
> As a subpackage grows, I could see justification for one more level in
> the stable API --- e.g. scipy.signal.spectral.
>
> But, I would be opposed to an API that is deeper than that.

Agreed.

One wild idea to make this clearer could be to prefix all internal
sub-package names with the usual '_'. In the long run, it probably
wouldn't be as bad as it initially sounds. For instance, `numpy._core`,
`scipy.special._orthogonal`, `scipy.linalg._decomp` etc.

Pauli

From ralf.gommers at googlemail.com Sat Feb 12 20:28:53 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 13 Feb 2011 09:28:53 +0800
Subject: [SciPy-Dev] Question about subpackage/submodule API
In-Reply-To: References: Message-ID:

On Sun, Feb 13, 2011 at 5:05 AM, Pauli Virtanen wrote:

> On Sat, 12 Feb 2011 14:12:44 -0600, Travis Oliphant wrote:
> > The policy in the past has been that the stable API is only one level
> > down from the scipy namespace.
> >
> > So, developers should import the name from the top level namespace.
>
Is this written down somewhere? As far as I understand this is not standard
practice in Python (one should use underscores). It's also at the moment not
correct in parts of scipy, for example scipy.sparse.linalg.__all__ contains
functions that are not available in sparse.__all__.

Furthermore it would be quite natural to do something like:
# 4 levels deep, from an actual bug report
import scipy.sparse.linalg.eigen as eigen
import scipy.signal.filter_design as filt

> > As a subpackage grows, I could see justification for one more level in
> > the stable API --- e.g. scipy.signal.spectral.
> >
> > But, I would be opposed to an API that is deeper than that.
>
> Agreed.
>
> One wild idea to make this clearer could be to prefix all internal
> sub-package names with the usual '_'. In the long run, it probably
> wouldn't be as bad as it initially sounds.
>
This is not a wild idea at all, I think it should be done. I consider
all modules without a '_' prefix to be public API.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ralf.gommers at googlemail.com Sun Feb 13 03:36:06 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 13 Feb 2011 16:36:06 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 3 Message-ID: Hi, I am pleased to announce the availability of the third release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7. Sources, binaries and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc3/. Note that due to the issues Sourceforge is still having the binaries are not visible at this moment, even though they are uploaded. They should appear within a day I expect. Changes since release candidate 2: - a high-priority bugfix for fftpack (#1353) - a change in ndimage for compatibility with upcoming numpy 1.6 - fixes for compatibility with Python 2.4 - fixed test failures reported for RC2 built against MKL If no more issues are reported, 0.9.0 will be released in the weekend of 19/20 February. 
Enjoy, Ralf From nwagner at iam.uni-stuttgart.de Sun Feb 13 06:18:50 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Feb 2011 12:18:50 +0100 Subject: [SciPy-Dev] ERROR: test_rvs (test_distributions.TestRvDiscrete) Message-ID: Hi, I am seeing test failures in >>> scipy.__version__ '0.10.0.dev7141' ====================================================================== ERROR: test_rvs (test_distributions.TestRvDiscrete) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_distributions.py", line 254, in test_rvs r = stats.rv_discrete(name='sample',values=(states,probability)) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/distributions.py", line 5095, in __init__ self._construct_doc() File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/distributions.py", line 5126, in _construct_doc self.__doc__ = doccer.docformat(self.__doc__, tempdict) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/misc/doccer.py", line 62, in docformat return docstring % indented ValueError: unsupported format character 'p' (0x70) at index 3707 ====================================================================== FAIL: Real-valued Bessel domains ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1712, in test_ticket_854 assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1)) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: (nan, nan, nan, nan) ---------------------------------------------------------------------- Ran 4773 tests in 136.997s FAILED (KNOWNFAIL=12, SKIP=28, errors=1, failures=1) From ralf.gommers at googlemail.com Sun Feb 13 
07:27:13 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 13 Feb 2011 20:27:13 +0800
Subject: [SciPy-Dev] ERROR: test_rvs (test_distributions.TestRvDiscrete)
In-Reply-To: References: Message-ID:

On Sun, Feb 13, 2011 at 7:18 PM, Nils Wagner wrote:
> Hi,
>
> I am seeing test failures in
>
>>>> scipy.__version__
> '0.10.0.dev7141'
>
> ======================================================================
> ERROR: test_rvs (test_distributions.TestRvDiscrete)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/tests/test_distributions.py", line 254, in test_rvs
>     r = stats.rv_discrete(name='sample',values=(states,probability))
>   File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/distributions.py", line 5095, in __init__
>     self._construct_doc()
>   File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/distributions.py", line 5126, in _construct_doc
>     self.__doc__ = doccer.docformat(self.__doc__, tempdict)
>   File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/misc/doccer.py", line 62, in docformat
>     return docstring % indented
> ValueError: unsupported format character 'p' (0x70) at index 3707

Fixed this one in r7142.
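The failure mode above can be reproduced in isolation: `doccer.docformat` applies old-style `%` interpolation to the whole docstring (the `docstring % indented` line in the traceback), so any literal percent sign that is not doubled breaks the substitution. A minimal sketch; the template text here is invented and is not scipy's actual docstring content:

```python
# A docstring template containing a literal "%" (here "5% trimmed")
# alongside a %(name)s substitution. Old-style interpolation parses
# every "%", so the undoubled one raises the same kind of ValueError
# as in the traceback above.
template = "Compute the 5% trimmed mean.\n\nNotes\n-----\n%(notes)s\n"

error_message = ""
try:
    template % {"notes": "generated text"}
except ValueError as exc:
    error_message = str(exc)  # "unsupported format character ..."

# Doubling the literal percent sign is the standard fix:
fixed = template.replace("5%", "5%%") % {"notes": "generated text"}
print(fixed)
```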
Ralf From dagss at student.matnat.uio.no Mon Feb 14 09:37:27 2011 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Mon, 14 Feb 2011 15:37:27 +0100 Subject: [SciPy-Dev] On pulling fwrap refactor into upstream SciPy In-Reply-To: References: <544976556.1195337.1296647649985.JavaMail.root@zmbs3.inria.fr> <4D51135E.6010003@student.matnat.uio.no> <4D51AC1C.2060907@student.matnat.uio.no> Message-ID: <4D593E27.2000104@student.matnat.uio.no> On 02/09/2011 12:20 AM, Pauli Virtanen wrote: > On Tue, 08 Feb 2011 21:58:08 +0000, Pauli Virtanen wrote: > >> On Tue, 08 Feb 2011 21:48:28 +0100, Dag Sverre Seljebotn wrote: >> > [clip] > >>> Not code-wise, but history-wise. >>> >>> That is, I have several times changed the code generation of Fwrap, >>> regenerated the boilerplate, and put the manual changes back in (using >>> git). >>> >> [clip] >> >> Clever, that's doable with git. However, it's not very pretty, and I >> have a nasty feeling about mixing autogenerated code with manual code in >> the long term. >> > Ok, some backing arguments for the gut feeling: the approach moves a part > of the logical content of the code out of the files, to the version > history. The top-level meaning of the code cannot be understood by > reading it, as boilerplate cannot be distinguished from customizations. > Instead, one needs to use "git blame" or some such tool to spot points > that contain more information than the *.pyf files. > > Using some sort of "inline patch" approach would put the information > back. Before a manual edit: > > some > automatically > generated > code > > and after: > > some > #< automatically > #< generated > manually edited > #> > code > > or: > > some > #< > manually edited > #> > code > > Suitably smart code generator could in principle extract and re-apply > such inline patches automatically, but the main point IMHO would be to > preserve readability. 
>

First: When I had to go another round with regenerating my wrappers, I
realized that while merging back changes works beautifully, the git
history becomes incredibly messy and difficult to track with my approach
((a) every Fwrap-generated file has its own branch which *does* show up
in the history, (b) I become too afraid to do any rebasing lest I lose
the auto-generated version (and I managed to break the scheme in a merge
I did), and thus the tool got in my way, which is never good).

So what I did now is to have a subdirectory in every package (which is
currently named ".fwrap+pyf", we can get back to that) which stores the
pristine auto-generated copies. "fwrap update" invokes Unix "merge", and
"fwrap mergetool" currently invokes kdiff3 with the right arguments.

This works much better with my workflow, since I decouple how I work
with git from how I work with fwrap. And the merges are as painless as
they used to be. And you can always get the manual modifications by

diff wrapper.pyx .fwrap+pyf/wrapper.pyx

(which I suppose I could roll into "fwrap diff").

Commenting on your suggestion: Assuming the (much more failsafe) merging
strategy I described above is kept, this just boils down to a SciPy
project convention, right?

A problem is that many of the manual patches change only some parts of a
line (e.g., change one argument of the call to Fortran). How about just
a comment at the top of the function body listing higher-level
descriptions of every manual change done, and then one can consult
'fwrap diff' for further details? Not sure if I'll have (paid) time to
insert such comments though...
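For readers without fwrap at hand, the pristine-copy workflow described above can be sketched with stock diffutils in place of `fwrap update`: `diff3 -m` performs the same three-way merge. The directory name follows the `.fwrap+pyf` convention from the message, but the file contents are invented:

```shell
# Keep a pristine copy of the last auto-generated wrapper next to the
# hand-edited one, so a regeneration can be merged in three-way style.
mkdir -p demo/.fwrap+pyf
cd demo

printf 'line A\nline B\nline C\n'          > .fwrap+pyf/wrapper.pyx  # old pristine copy
printf 'line A\nline B edited\nline C\n'   > wrapper.pyx             # manual edit on top of it
printf 'line A\nline B\nline C\nline D\n'  > regenerated.pyx         # fresh generator output

# Replay the manual edit on top of the regenerated file
# (arguments: mine = edited file, old = pristine copy, yours = regenerated).
diff3 -m wrapper.pyx .fwrap+pyf/wrapper.pyx regenerated.pyx > merged.pyx

# Refresh the pristine copy and adopt the merge for the next round.
cp regenerated.pyx .fwrap+pyf/wrapper.pyx
mv merged.pyx wrapper.pyx
cat wrapper.pyx
```

With non-overlapping changes, as here, the merge is conflict-free and the result keeps both the manual edit and the newly generated line.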
Dag Sverre From ralf.gommers at googlemail.com Mon Feb 14 10:24:24 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 14 Feb 2011 23:24:24 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 3 In-Reply-To: References: Message-ID: On Sun, Feb 13, 2011 at 4:36 PM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the third release > candidate of SciPy 0.9.0. This will be the first SciPy release to > include support for Python 3 (all modules except scipy.weave), as well > as for Python 2.7. > > Sources, binaries and release notes can be found at > http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc3/. Note that > due to the issues Sourceforge is still having the binaries are not > visible at this moment, even though they are uploaded. They should > appear within a day I expect. > All binaries are visible now. Cheers, Ralf > Changes since release candidate 2: > - a high-priority bugfix for fftpack (#1353) > - a change in ndimage for compatibility with upcoming numpy 1.6 > - fixes for compatibility with Python 2.4 > - fixed test failures reported for RC2 built against MKL > > If no more issues are reported, 0.9.0 will be released in the weekend > of 19/20 February. 
> > Enjoy, > Ralf > From gokhansever at gmail.com Mon Feb 14 18:32:09 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Mon, 14 Feb 2011 16:32:09 -0700 Subject: [SciPy-Dev] One scipy.test() failure on rev7143 Message-ID: Hello, Installing scipy on a fresh laptop (using Fedora 14 - x86_64): >>> scipy.test() Running unit tests for scipy NumPy version 1.6.0.dev-af1e833 NumPy is installed in /usr/lib64/python2.7/site-packages/numpy SciPy version 0.10.0.dev7143 SciPy is installed in /usr/lib64/python2.7/site-packages/scipy Python version 2.7 (r27:82500, Sep 16 2010, 18:02:00) [GCC 4.5.1 20100907 (Red Hat 4.5.1-3)] nose version 1.0.0 .............................................................................................................................................................................................................K........................................................................................../usr/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:673: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....../usr/lib64/python2.7/site-packages/scipy/interpolate/fitpack2.py:604: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. 
warnings.warn(message) ......................K..K....................................................................................................................................................................................................................................................................................................................................................................................................................../usr/lib64/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning) /usr/lib64/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood warnings.warn("chunk not understood", WavFileWarning) ...............................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS............................................................................................................................./usr/lib64/python2.7/site-packages/scipy/linalg/decomp.py:57: ComplexWarning: Casting complex values to real discards the imaginary part overwrite_a, overwrite_b) ..........................................................................................................................................................................................K.................................................................................................................................................................................................................................................................................................................................................................../usr/lib64/python2.7/site-packages/scipy/ndimage/morphology.py:254: PendingDeprecationWarning: The CObject type is marked Pending Deprecation in Python 2.7. 
Please use capsule objects instead. structure, mask, output, border_value, origin, invert, cit, 1) ..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSSSSSSS................................................................................................................................................................K...............................................................K...........................................................................................................................................................KK......................................................................................................................................................................................................................................................................................................................................................................................................................F........K.K........................................................................................................................................................................................................................................................................................................................................................................................K........K.........SSSSSSS......................................................................................
............................................................................................................................................................................................................................................................................................................Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp Warning: overflow encountered in exp .................................Warning: invalid value encountered in sqrt .........................................................................................................................................................................................................................................................................................................S............................................................................................................................................................................................Warning: divide by zero encountered in divide Warning: divide by zero encountered in divide Warning: divide by zero encountered in divide ................................................................................................................................................................................................................................................................................................................................................................................................................. 
======================================================================
FAIL: Real-valued Bessel domains
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/scipy/special/tests/test_basic.py", line 1712, in test_ticket_854
    assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1))
  File "/usr/lib64/python2.7/site-packages/numpy/testing/utils.py", line 34, in assert_
    raise AssertionError(msg)
AssertionError: (nan, nan, nan, nan)

----------------------------------------------------------------------
Ran 4773 tests in 40.854s

FAILED (KNOWNFAIL=12, SKIP=35, failures=1)

Anything to worry?

Thanks.

--
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From warren.weckesser at enthought.com Mon Feb 14 18:53:59 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Mon, 14 Feb 2011 17:53:59 -0600
Subject: [SciPy-Dev] Question about subpackage/submodule API
In-Reply-To: References: Message-ID:

On Sat, Feb 12, 2011 at 7:28 PM, Ralf Gommers wrote:

> On Sun, Feb 13, 2011 at 5:05 AM, Pauli Virtanen wrote:
>
>> On Sat, 12 Feb 2011 14:12:44 -0600, Travis Oliphant wrote:
>> > The policy in the past has been that the stable API is only one level
>> > down from the scipy namespace.
>> >
>> > So, developers should import the name from the top level namespace.
>>
> Is this written down somewhere? As far as I understand this is not standard
> practice in Python (one should use underscores). It's also at the moment not
> correct in parts of scipy, for example scipy.sparse.linalg.__all__ contains
> functions that are not available in sparse.__all__.
> > Furthermore it would be quite natural to do something like: > # 4 levels deep, from an actual bug report > import scipy.sparse.linalg.eigen as eigen > import scipy.signal.filter_design as filt > > > As a subpackage grows, I could see justification for one more level in >> > the stable API --- e.g. scipy.signal.spectral. >> > >> > But, I would be opposed to an API that is deeper than that. >> >> Agreed. >> >> One wild idea to make this clearer could be to prefix all internal sub- >> package names with the usual '_'. In the long run, it probably wouldn't >> be as bad as it initially sounds like. >> >> This is not a wild idea at all, I think it should be done. I considered > all modules without '_' prefix public API. > > Agreed (despite what I said in my initial post). To actually do this, we'll need to check which packages have modules that should be private. These can be renamed in 0.10 to have an underscore, and new public versions created that contain a deprecation warning and that import everything from the private version. The deprecated public modules can be removed in 0.11. Some modules will require almost no changes. For example, scipy.cluster *only* exposes two modules, vq and hierarchy, so no changes are needed. (Well, there is also the module info.py that all packages have. That should become _info.py--there's no need for that to be public, is there?) Other packages will probably require some discussion about what modules should be public. Consider the above a proposed change for 0.10 and 0.11--what do you think? Warren -------------- next part -------------- An HTML attachment was scrubbed... 
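The rename-plus-shim transition proposed above can be sketched in code. All package, module, and function names below are invented for illustration and none of this is actual scipy source; the point is only the mechanism: the implementation moves to an underscored module, the package `__init__.py` keeps re-exporting the public names, and the old public module name survives one release cycle as a shim that warns and star-imports the private version.

```python
import os
import sys
import tempfile
import textwrap
import warnings

# Build a throwaway package on disk to demonstrate the rename scheme.
root = tempfile.mkdtemp()
pkgdir = os.path.join(root, "demo_pkg")
os.mkdir(pkgdir)

def write(name, source):
    with open(os.path.join(pkgdir, name), "w") as fh:
        fh.write(textwrap.dedent(source))

# The real code lives in the private module ...
write("_morestats.py", """\
    __all__ = ["bayes_mvs"]

    def bayes_mvs(data):
        return min(data), max(data)   # stand-in body
""")

# ... the package namespace re-exports it (the supported import path) ...
write("__init__.py", """\
    from ._morestats import *
""")

# ... and the old public module name becomes a noisy deprecation shim.
write("morestats.py", """\
    import warnings
    warnings.warn("demo_pkg.morestats is deprecated; import from demo_pkg",
                  DeprecationWarning, stacklevel=2)
    from ._morestats import *
""")

sys.path.insert(0, root)

from demo_pkg import bayes_mvs          # preferred spelling keeps working

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import demo_pkg.morestats           # old spelling still works, but warns

print(bayes_mvs([3, 1, 2]))
print(caught[0].category.__name__)
```

Removing the shim module in the following release then completes the transition without ever breaking the package-level import path.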
URL: From cgohlke at uci.edu Mon Feb 14 18:54:44 2011
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Mon, 14 Feb 2011 15:54:44 -0800
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: References: Message-ID: <4D59C0C4.8020305@uci.edu>

On 2/14/2011 3:32 PM, Gökhan Sever wrote:
> Hello,
>
> Installing scipy on a fresh laptop (using Fedora 14 - x86_64):
>
> [...full quoted test log snipped; it duplicates the message above...]
>
> FAILED (KNOWNFAIL=12, SKIP=35, failures=1)
>
> Anything to worry?
>
> Thanks.
>
> --
> Gökhan

This is a known bug.
Christoph

From gokhansever at gmail.com  Tue Feb 15 12:09:56 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Tue, 15 Feb 2011 10:09:56 -0700
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: <4D59C0C4.8020305@uci.edu>
References: <4D59C0C4.8020305@uci.edu>
Message-ID: 

I could reproduce the issue both using:

python -c"import scipy;scipy.test()"
python -c"import scipy.special;scipy.special.test()"

Using 64-bit local builds.

On Mon, Feb 14, 2011 at 4:54 PM, Christoph Gohlke wrote:

> > On 2/14/2011 3:32 PM, Gökhan Sever wrote:
> > [clip: full scipy.test() output]
> >
> > > Anything to worry?
> > >
> > > Thanks.
> > >
> > > --
> > > Gökhan
> >
> > This is a know bug .
> > Christoph
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

-- 
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsouthey at gmail.com  Tue Feb 15 13:13:04 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 15 Feb 2011 12:13:04 -0600
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: 
References: <4D59C0C4.8020305@uci.edu>
Message-ID: <4D5AC230.9000604@gmail.com>

On 02/15/2011 11:09 AM, Gökhan Sever wrote:
> I could reproduce the issue both using:
>
> python -c"import scipy;scipy.test()"
> python -c"import scipy.special;scipy.special.test()"
>
> Using 64-bit local builds.
>
> On Mon, Feb 14, 2011 at 4:54 PM, Christoph Gohlke wrote:
>
> > On 2/14/2011 3:32 PM, Gökhan Sever wrote:
> > [clip: full scipy.test() output]
> >
> > This is a know bug .
> > Christoph
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-dev
>
> --
> Gökhan
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

I did not see this with scipy 0.9rc3 using any of the various Python
versions (still need to get Python 3.2) on my 64-bit Fedora 14 system,
which does not have any Intel compilers or libraries. Do you get this
error with Python 3.1?

Also, with regards to Pauli's last comment on ticket 1233, it would be
interesting to see whether using fwrap instead of f2py helps.

Bruce
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cgohlke at uci.edu  Tue Feb 15 13:43:36 2011
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Tue, 15 Feb 2011 10:43:36 -0800
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: 
References: <4D59C0C4.8020305@uci.edu>
Message-ID: <4D5AC958.8010506@uci.edu>

On 2/15/2011 9:09 AM, Gökhan Sever wrote:
> I could reproduce the issue both using:
>
> python -c"import scipy;scipy.test()"
>
> python -c"import scipy.special;scipy.special.test()"
>
> Using 64-bit local builds.
>
> On Mon, Feb 14, 2011 at 4:54 PM, Christoph Gohlke wrote:
>
> > On 2/14/2011 3:32 PM, Gökhan Sever wrote:
> > [clip: full scipy.test() output]
> >
> > This is a know bug .
> > Christoph
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-dev
>
> --
> Gökhan

Could you try the patch at ? I use the patch
for my scipy builds and have not seen the test failure since.

Christoph

From pav at iki.fi  Tue Feb 15 14:22:47 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 15 Feb 2011 19:22:47 +0000 (UTC)
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
References: <4D59C0C4.8020305@uci.edu> <4D5AC958.8010506@uci.edu>
Message-ID: 

On Tue, 15 Feb 2011 10:43:36 -0800, Christoph Gohlke wrote:
[clip]
> Could you try the patch at
> ? I
> use the patch for my scipy builds and have not seen the test failure
> since.

The problem is that the prototype for ZBIRY on the C-side does not match
the Fortran routine. I must have been blind earlier to not notice this.
Should be fixed in r7144

	Pauli

From cgohlke at uci.edu  Tue Feb 15 15:10:34 2011
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Tue, 15 Feb 2011 12:10:34 -0800
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: 
References: <4D59C0C4.8020305@uci.edu> <4D5AC958.8010506@uci.edu>
Message-ID: <4D5ADDBA.8090405@uci.edu>

On 2/15/2011 11:22 AM, Pauli Virtanen wrote:
> On Tue, 15 Feb 2011 10:43:36 -0800, Christoph Gohlke wrote:
> [clip]
>> Could you try the patch at
>> ? I
>> use the patch for my scipy builds and have not seen the test failure
>> since.
>
> The problem is that the prototype for ZBIRY on the C-side does not match
> the Fortran routine. I must have been blind earlier to not notice this.
> Should be fixed in r7144
>
> 	Pauli
>

r7144 works for me. Thank you! Could this make it into the 0.9 release?
Christoph

From gokhansever at gmail.com  Tue Feb 15 16:25:05 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Tue, 15 Feb 2011 14:25:05 -0700
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: <4D5ADDBA.8090405@uci.edu>
References: <4D59C0C4.8020305@uci.edu> <4D5AC958.8010506@uci.edu>
	<4D5ADDBA.8090405@uci.edu>
Message-ID: 

r7144 works fine here as well.

On Tue, Feb 15, 2011 at 1:10 PM, Christoph Gohlke wrote:

> On 2/15/2011 11:22 AM, Pauli Virtanen wrote:
> > On Tue, 15 Feb 2011 10:43:36 -0800, Christoph Gohlke wrote:
> > [clip]
> >> Could you try the patch at
> >> ? I
> >> use the patch for my scipy builds and have not seen the test failure
> >> since.
> >
> > The problem is that the prototype for ZBIRY on the C-side does not match
> > the Fortran routine. I must have been blind earlier to not notice this.
> > Should be fixed in r7144
> >
> > Pauli
>
> r7144 works for me. Thank you! Could this make it into the 0.9 release?
>
> Christoph
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

-- 
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at googlemail.com  Wed Feb 16 09:58:35 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 16 Feb 2011 22:58:35 +0800
Subject: [SciPy-Dev] One scipy.test() failure on rev7143
In-Reply-To: <4D5ADDBA.8090405@uci.edu>
References: <4D59C0C4.8020305@uci.edu> <4D5AC958.8010506@uci.edu>
	<4D5ADDBA.8090405@uci.edu>
Message-ID: 

On Wed, Feb 16, 2011 at 4:10 AM, Christoph Gohlke wrote:
>
> On 2/15/2011 11:22 AM, Pauli Virtanen wrote:
>> On Tue, 15 Feb 2011 10:43:36 -0800, Christoph Gohlke wrote:
>> [clip]
>>> Could you try the patch at
>>> ? I
>>> use the patch for my scipy builds and have not seen the test failure
>>> since.
>> The problem is that the prototype for ZBIRY on the C-side does not match
>> the Fortran routine. I must have been blind earlier to not notice this.
>> Should be fixed in r7144
>>
>> 	Pauli
>>

r7144 works for me. Thank you! Could this make it into the 0.9 release?

I'd prefer not to include it at this stage unless there is another
reason for one more RC. Somehow the problem became worse in trunk, so it
showed up for everyone; in the 0.9.x branch it apparently only occurs
for builds with MKL.

Ralf

From josef.pktd at gmail.com  Wed Feb 16 10:23:11 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 16 Feb 2011 10:23:11 -0500
Subject: [SciPy-Dev] Breaking up scipy.stats or How to avoid importing
	the kitchen sink (when we are not in the kitchen)
Message-ID: 

Warren's thread on scipy's subpackages made me realize that we can
break up the imports in scipy.stats in a backward-compatible way.

Problem

"from scipy import stats" is slow unless scipy is already in the disk
cache.

len(beforenp), len(beforesp), len(beforestats), len(after)
125 261 341 569
>>> 569 - 341
228

E.g. who imports scipy.sparse if there is no sparse code in
scipy.stats? If I only want to use some tests, then all I need is
scipy.stats.stats and scipy.special.

Proposal

Keep scipy.stats as the public API import subpackage, especially for
interactive work.

Move all modules from scipy.stats into another directory, scipy.stats_
or scipy.statslib or something; keep its __init__ empty and create the
API one level down:
- stats_basic: current stats.stats plus tests from morestats, (name?):
  imports only scipy.special
- stats_other: rest of morestats and other extras (plots, boxcox, ...),
  (name?)
- mstats
- kde
- distributions: imports the kitchen sink; no lazy imports possible
  because distributions are instances and not just classes

Then we can do
"from scipy.statlib import stats_basic"
and we get the ttests with an import of scipy.special plus one module
instead of plus 215 modules.
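A minimal sketch of the mechanism (all names here — statlib, stats_basic, distributions — are hypothetical illustrations, not the actual scipy layout): with an empty __init__.py, importing one submodule of a package does not drag in its siblings.

```python
# Hypothetical sketch: an empty package __init__.py means that
# "from statlib import stats_basic" loads only stats_basic, not the
# heavier sibling modules. We build a throwaway package to show it.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "statlib")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # deliberately empty
with open(os.path.join(pkg, "stats_basic.py"), "w") as f:
    f.write("def ttest_1samp(a, popmean):\n    return None  # stub\n")
with open(os.path.join(pkg, "distributions.py"), "w") as f:
    f.write("HEAVY = True  # stands in for the kitchen-sink module\n")

sys.path.insert(0, tmp)
from statlib import stats_basic

print("statlib.stats_basic" in sys.modules)    # True
print("statlib.distributions" in sys.modules)  # False: sibling untouched
```

The sibling module only pays its import cost when somebody explicitly asks for it, which is the point of the proposal.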
This is currently just an idea, and I won't pursue it further if we don't want to go this way. Notes I don't understand some things about the imports, why do I get some distutils and enthought modules with the stats import. (I don't understand the lazy import machinery.) statsmodels just switched to separating API from package imports. import sys, copy beforenp = copy.copy(sys.modules) import numpy beforesp = copy.copy(sys.modules) import scipy beforestats = copy.copy(sys.modules) from scipy import stats after = copy.copy(sys.modules) print 'len(beforenp), len(beforesp), len(beforestats), len(after)' print len(beforenp), len(beforesp), len(beforestats), len(after) from pprint import pprint pprint(sorted(set(('.'.join(i.split('.')[:2]) for i in set(after)-set(beforestats))))) ##pprint(sorted(set(('.'.join(i.split('.')[:2]) for i in ## set(beforestats)-set(beforesp))))) >python -i stats_imports.py len(beforenp), len(beforesp), len(beforestats), len(after) 125 261 341 569 ['_bisect', 'bisect', 'dis', 'distutils', 'distutils.dep_util', 'distutils.distutils', 'distutils.errors', 'distutils.log', 'distutils.os', 'distutils.re', 'distutils.spawn', 'distutils.string', 'distutils.sys', 'distutils.util', 'enthought', 'enthought.envisage', 'enthought.modulefinder', 'enthought.plugins', 'enthought.pyface', 'enthought.traits', 'inspect', 'modulefinder', 'mpl_toolkits', 'new', 'numpy.core', 'numpy.dual', 'opcode', 'paste', 'paste.modulefinder', 'paste.pkg_resources', 'pkg_resources', 'pkgutil', 'scikits', 'scipy.integrate', 'scipy.lib', 'scipy.linalg', 'scipy.misc', 'scipy.optimize', 'scipy.sparse', 'scipy.special', 'scipy.stats', 'swig_runtime_data4', 'token', 'tokenize', 'zlib'] Josef From bsouthey at gmail.com Wed Feb 16 12:09:37 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 16 Feb 2011 11:09:37 -0600 Subject: [SciPy-Dev] Breaking up scipy.stats or How to avoid importing the kitchen sink (when we are not in the kitchen) In-Reply-To: References: Message-ID: 
<4D5C04D1.2000404@gmail.com> On 02/16/2011 09:23 AM, josef.pktd at gmail.com wrote: > Warren's thread on scipy's subpackages made me realize that we can > break up the imports in scipy.stats in a backward compatible way. > > Problem > "from scipy import stats" is slow unless scipy is already in the disk cache > > len(beforenp), len(beforesp), len(beforestats), len(after) > 125 261 341 569 >>>> 569 - 341 > 228 > > e.g. who import scipy.sparse if there is no sparse code in scipy.stats > If I only want to use some tests, then all I need is scipy.stats.stats > and scipy.special > > Proposal > > keep scipy.stats as API import subpackage as public API especially for > interactive work > > move all modules from scipy.stats into another directory, scipy.stats_ > or scipy.statslib or something: > keep it's __import__ empty > create API one level down > - stats_basic: current stats.stats plus tests from morestats, > (name?): imports only scipy.special > - stats_other: rest of morestats and other extras (plots, > boxcox,...), (name?) > - mstats > - kde > - distributions: imports the kitchen sink > no lazy imports possible because distributions are instances and > not just classes > > then we can do > "from scipy.statlib import stats_basic" > and we get the ttests with an import of scipy.special plus one module > instead of plus 215 modules. > > > This is currently just an idea, and I won't pursue it further if we > don't want to go this way. Ignoring backwards compatibility, we can do something about the current __init__.py: " from info import __doc__ from stats import * from distributions import * from rv import * from morestats import * from kde import gaussian_kde import mstats " Most items appear to come from stats (27) and distributions (70). So, without addressing the impacts, three 'easy' things that could be done are: 1) avoiding or changing the distribution import would help. 2) use 'import morestats' instead of 'from morestats import *'. 
3) move less common functions in stats.py to morestats.py and just do 'import morestats'. A possible list for things to move from stats.py are: MOMENTS HANDLING NAN: nanmean nanmedian nanstd ALTERED VERSIONS: tmean tvar tstd tsem describe TRIMMING FCNS: threshold (for arrays only) trimboth trim1 around (round all vals to 'n' decimals) Bruce > > Notes > > I don't understand some things about the imports, > why do I get some distutils and enthought modules with the stats > import. (I don't understand the lazy import machinery.) > > statsmodels just switched to separating API from package imports. > > import sys, copy > beforenp = copy.copy(sys.modules) > import numpy > beforesp = copy.copy(sys.modules) > import scipy > beforestats = copy.copy(sys.modules) > from scipy import stats > after = copy.copy(sys.modules) > > print 'len(beforenp), len(beforesp), len(beforestats), len(after)' > print len(beforenp), len(beforesp), len(beforestats), len(after) > > from pprint import pprint > pprint(sorted(set(('.'.join(i.split('.')[:2]) for i in > set(after)-set(beforestats))))) > > ##pprint(sorted(set(('.'.join(i.split('.')[:2]) for i in > ## set(beforestats)-set(beforesp))))) > > > >> python -i stats_imports.py > len(beforenp), len(beforesp), len(beforestats), len(after) > 125 261 341 569 > ['_bisect', > 'bisect', > 'dis', > 'distutils', > 'distutils.dep_util', > 'distutils.distutils', > 'distutils.errors', > 'distutils.log', > 'distutils.os', > 'distutils.re', > 'distutils.spawn', > 'distutils.string', > 'distutils.sys', > 'distutils.util', > 'enthought', > 'enthought.envisage', > 'enthought.modulefinder', > 'enthought.plugins', > 'enthought.pyface', > 'enthought.traits', > 'inspect', > 'modulefinder', > 'mpl_toolkits', > 'new', > 'numpy.core', > 'numpy.dual', > 'opcode', > 'paste', > 'paste.modulefinder', > 'paste.pkg_resources', > 'pkg_resources', > 'pkgutil', > 'scikits', > 'scipy.integrate', > 'scipy.lib', > 'scipy.linalg', > 'scipy.misc', > 'scipy.optimize', > 
'scipy.sparse', > 'scipy.special', > 'scipy.stats', > 'swig_runtime_data4', > 'token', > 'tokenize', > 'zlib'] > > > Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From robert.kern at gmail.com Wed Feb 16 12:31:12 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Feb 2011 11:31:12 -0600 Subject: [SciPy-Dev] Breaking up scipy.stats or How to avoid importing the kitchen sink (when we are not in the kitchen) In-Reply-To: References: Message-ID: On Wed, Feb 16, 2011 at 09:23, wrote: > I don't understand some things about the imports, > why do I get some distutils and enthought modules with the stats > import. (I don't understand the lazy import machinery.) There is no lazy import machinery at work here. scipy.stats imports scipy.sparse (through an intermediate, probably scipy.linalg). scipy.sparse tries to import scikits.umfpack to provide optional functionality. scikits is a namespace package, which uses pkg_resources to implement that namespace behavior. The way that happens to be implemented in your version of pkg_resources is to import all namespace packages the first time you import any namespace package (just their mostly-empty __init__.py files, not the actual code). It's probably using some support code from distutils and pkgutil in the process. This is also why you get mpl_toolkits, paste, and scikits. By the way, you might want to use this function to get the set of module names that have been imported at a particular time. It handles an important subtlety that can confuse the results. def get_modules(): return set(m for m in sys.modules if sys.modules[m] is not None) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From josef.pktd at gmail.com Wed Feb 16 12:54:18 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 16 Feb 2011 12:54:18 -0500 Subject: [SciPy-Dev] Breaking up scipy.stats or How to avoid importing the kitchen sink (when we are not in the kitchen) In-Reply-To: References: Message-ID: On Wed, Feb 16, 2011 at 12:31 PM, Robert Kern wrote: > On Wed, Feb 16, 2011 at 09:23, ? wrote: > >> I don't understand some things about the imports, >> why do I get some distutils and enthought modules with the stats >> import. (I don't understand the lazy import machinery.) > > There is no lazy import machinery at work here. scipy.stats imports > scipy.sparse (through an intermediate, probably scipy.linalg). > scipy.sparse tries to import scikits.umfpack to provide optional > functionality. scikits is a namespace package, which uses > pkg_resources to implement that namespace behavior. The way that > happens to be implemented in your version of pkg_resources is to > import all namespace packages the first time you import any namespace > package (just their mostly-empty __init__.py files, not the actual > code). It's probably using some support code from distutils and > pkgutil in the process. This is also why you get mpl_toolkits, paste, > and scikits. > > By the way, you might want to use this function to get the set of > module names that have been imported at a particular time. It handles > an important subtlety that can confuse the results. > > ?def get_modules(): > ? 
?return set(m for m in sys.modules if sys.modules[m] is not None) Thanks, this explanation is very useful if I add import scikits to get the namespace imports out of the way, and use the None filter, then scipy.stats imports only 132 modules >python -i stats_imports_1.py len(beforenp), len(beforesp), len(beforestats), len(after) 81 160 242 374 ['_bisect', 'bisect', 'inspect', 'numpy.dual', 'scipy.integrate', 'scipy.lib', 'scipy.linalg', 'scipy.misc', 'scipy.optimize', 'scipy.sparse', 'scipy.special', 'scipy.stats', 'swig_runtime_data4', 'token', 'tokenize'] Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Wed Feb 16 17:26:35 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 16 Feb 2011 22:26:35 +0000 (UTC) Subject: [SciPy-Dev] Release blocker? References: <4D5ABEAD.4000603@gmail.com> Message-ID: On Tue, 15 Feb 2011 21:39:57 -0600, Bruce Southey wrote: [clip] > On 64-bit Win7 with 32-bit Python 2.6.3 and 3.1 , I get a crashes with > both binary installers at > 'test_interpnd.TestCloughTocher2DInterpolator.test_dense '. A window > pops up asking to close the application. The problem appears to be free()-ing memory allocated in a different DLL, which apparently isn't allowed on this platform (didn't anyone test the previous release candidates on Win7? :) That is, the Cython module scipy.spatial.qhull allocated memory, passed it on to the Cython module scipy.interpolate.interpnd, which tried to free it -> boom. This should fix it: http://github.com/pv/scipy-work/tree/bug/interpnd-windows I'll see if I also manage to build Windows binaries... 
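For context, the crashing code path is easy to reach from Python; below is a minimal sketch using the interpolator class named in the failing test (the data is made up — any scattered 2-D point set exercises the qhull-to-interpnd handoff Pauli describes):

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# arbitrary scattered data on the unit square
rng = np.random.RandomState(0)
pts = rng.rand(50, 2)
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

# construction triangulates the points via scipy.spatial.qhull;
# evaluation then happens in scipy.interpolate.interpnd
ip = CloughTocher2DInterpolator(pts, vals)
print(float(ip(0.5, 0.5)))
```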
-- Pauli Virtanen From cgohlke at uci.edu Wed Feb 16 17:49:51 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 16 Feb 2011 14:49:51 -0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> Message-ID: <4D5C548F.2040005@uci.edu> On 2/16/2011 2:26 PM, Pauli Virtanen wrote: > On Tue, 15 Feb 2011 21:39:57 -0600, Bruce Southey wrote: > [clip] >> On 64-bit Win7 with 32-bit Python 2.6.3 and 3.1 , I get a crashes with >> both binary installers at >> 'test_interpnd.TestCloughTocher2DInterpolator.test_dense '. A window >> pops up asking to close the application. > > The problem appears to be free()-ing memory allocated in a different DLL, > which apparently isn't allowed on this platform (didn't anyone test the > previous release candidates on Win7? :) > > That is, the Cython module scipy.spatial.qhull allocated memory, passed > it on to the Cython module scipy.interpolate.interpnd, which tried to > free it -> boom. > > This should fix it: > > http://github.com/pv/scipy-work/tree/bug/interpnd-windows > > I'll see if I also manage to build Windows binaries... > I tested msvc9/MKL builds of rc3 with 8 different Python versions and did not notice this issue. I think the problem is not Win7 but that the official scipy binaries link against two different C runtime libraries, which is a bad idea on any platform. For example, interpnd.pyd correctly uses msvcr90.dll, while qhull.pyd uses msvcrt.dll. Christoph From cgohlke at uci.edu Wed Feb 16 18:02:29 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 16 Feb 2011 15:02:29 -0800 Subject: [SciPy-Dev] Release blocker? 
In-Reply-To: <4D5C548F.2040005@uci.edu> References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: <4D5C5785.2040402@uci.edu> On 2/16/2011 2:49 PM, Christoph Gohlke wrote: > > > On 2/16/2011 2:26 PM, Pauli Virtanen wrote: >> On Tue, 15 Feb 2011 21:39:57 -0600, Bruce Southey wrote: >> [clip] >>> On 64-bit Win7 with 32-bit Python 2.6.3 and 3.1 , I get a crashes with >>> both binary installers at >>> 'test_interpnd.TestCloughTocher2DInterpolator.test_dense '. A window >>> pops up asking to close the application. >> >> The problem appears to be free()-ing memory allocated in a different DLL, >> which apparently isn't allowed on this platform (didn't anyone test the >> previous release candidates on Win7? :) >> >> That is, the Cython module scipy.spatial.qhull allocated memory, passed >> it on to the Cython module scipy.interpolate.interpnd, which tried to >> free it -> boom. >> >> This should fix it: >> >> http://github.com/pv/scipy-work/tree/bug/interpnd-windows >> >> I'll see if I also manage to build Windows binaries... >> > > I tested msvc9/MKL builds of rc3 with 8 different Python versions and > did not notice this issue. I think the problem is not Win7 but that the > official scipy binaries link against two different C runtime libraries, > which is a bad idea on any platform. For example, interpnd.pyd correctly > uses msvcr90.dll, while qhull.pyd uses msvcrt.dll. > > Christoph Actually, interpnd.pyd uses functions from both C runtime libraries. Christoph From pav at iki.fi Wed Feb 16 18:23:49 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 16 Feb 2011 23:23:49 +0000 (UTC) Subject: [SciPy-Dev] Release blocker? References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, 16 Feb 2011 14:49:51 -0800, Christoph Gohlke wrote: [clip] > I tested msvc9/MKL builds of rc3 with 8 different Python versions and > did not notice this issue. Thanks a lot for your trouble!
> I think the problem is not Win7 but that the > official scipy binaries link against two different C runtime libraries, > which is a bad idea on any platform. For example, interpnd.pyd correctly > uses msvcr90.dll, while qhull.pyd uses msvcrt.dll. Hmm, yes, different C runtimes would make sense as the cause -- I was wondering how just crossing DLL borders could cause problems for malloc/free... > Actually, interpnd.pyd uses functions from both C runtime libraries. That's probably not a good sign. I managed to reproduce the issue by building Scipy with Mingw against the official Numpy 1.5.1 binary, so at least that sort of build setup appears to produce strange binaries. The build is driven by distutils, so I've little idea what's happening in it. This might be some sort of Mingw problem. So at least two solutions seem possible: (i) apply the patches from my github branch to replace malloc/free by static allocations, or (ii) fix the build setup. Pauli From robert.kern at gmail.com Wed Feb 16 18:53:45 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Feb 2011 17:53:45 -0600 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, Feb 16, 2011 at 17:23, Pauli Virtanen wrote: > I managed to reproduce the issue by building Scipy with Mingw against the > official Numpy 1.5.1 binary, so at least that sort of build setup appears > to produce strange binaries. The build is driven by distutils, so I've > little idea what's happening in it. This might be some sort of Mingw > problem. > > So at least two solutions seem possible: (i) apply the patches from my > github branch to replace malloc/free by static allocations, or > (ii) fix the build setup. You should never use raw malloc/free in C extensions, but you don't have to rely on static allocations. Always use PyMem_Malloc/PyMem_Free. 
This will make sure that everything uses the C runtime that Python was built with regardless of what CRT is available at build time for the extension. http://docs.python.org/c-api/memory.html#memory-interface That said, distutils should already be passing the right flags to mingw to link against msvcr90.dll unless if we are interfering. Can you show us the command lines that are being executed on your machine? I believe that using the environment variable DISTUTILS_DEBUG=1 will cause setup.py to print them out as it goes along. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From charlesr.harris at gmail.com Wed Feb 16 19:04:18 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 16 Feb 2011 17:04:18 -0700 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, Feb 16, 2011 at 4:53 PM, Robert Kern wrote: > On Wed, Feb 16, 2011 at 17:23, Pauli Virtanen wrote: > > > I managed to reproduce the issue by building Scipy with Mingw against the > > official Numpy 1.5.1 binary, so at least that sort of build setup appears > > to produce strange binaries. The build is driven by distutils, so I've > > little idea what's happening in it. This might be some sort of Mingw > > problem. > > > > So at least two solutions seem possible: (i) apply the patches from my > > github branch to replace malloc/free by static allocations, or > > (ii) fix the build setup. > > You should never use raw malloc/free in C extensions, but you don't > have to rely on static allocations. Always use > PyMem_Malloc/PyMem_Free. This will make sure that everything uses the > C runtime that Python was built with regardless of what CRT is > available at build time for the extension. 
> > http://docs.python.org/c-api/memory.html#memory-interface > > That said, distutils should already be passing the right flags to > mingw to link against msvcr90.dll unless if we are interfering. Can > you show us the command lines that are being executed on your machine? > I believe that using the environment variable DISTUTILS_DEBUG=1 will > cause setup.py to print them out as it goes along. > > Hmm, numpy uses malloc/free in some places. Would this also be a problem with the private memory management in the refactor branch? I think the best solution here would be to fix the build process. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Feb 16 19:25:10 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Feb 2011 00:25:10 +0000 (UTC) Subject: [SciPy-Dev] Release blocker? References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: [clip] > You should never use raw malloc/free in C extensions, [clip] Live and learn. [clip] > That said, distutils should already be passing the right flags to mingw > to link against msvcr90.dll unless if we are interfering. Can you show > us the command lines that are being executed on your machine? I believe > that using the environment variable DISTUTILS_DEBUG=1 will cause > setup.py to print them out as it goes along. Build log is here: http://pav.iki.fi/tmp/build.log It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. Apparently, what happens is that modules linked with g++ get -lmsvcr90, whereas modules linked with gfortran (due to BLAS, apparently) don't get it. -- Pauli Virtanen From robert.kern at gmail.com Wed Feb 16 19:30:44 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Feb 2011 18:30:44 -0600 Subject: [SciPy-Dev] Release blocker? 
In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, Feb 16, 2011 at 18:04, Charles R Harris wrote: > > On Wed, Feb 16, 2011 at 4:53 PM, Robert Kern wrote: >> >> On Wed, Feb 16, 2011 at 17:23, Pauli Virtanen wrote: >> >> > I managed to reproduce the issue by building Scipy with Mingw against >> > the >> > official Numpy 1.5.1 binary, so at least that sort of build setup >> > appears >> > to produce strange binaries. The build is driven by distutils, so I've >> > little idea what's happening in it. This might be some sort of Mingw >> > problem. >> > >> > So at least two solutions seem possible: (i) apply the patches from my >> > github branch to replace malloc/free by static allocations, or >> > (ii) fix the build setup. >> >> You should never use raw malloc/free in C extensions, but you don't >> have to rely on static allocations. Always use >> PyMem_Malloc/PyMem_Free. This will make sure that everything uses the >> C runtime that Python was built with regardless of what CRT is >> available at build time for the extension. >> >> http://docs.python.org/c-api/memory.html#memory-interface >> >> That said, distutils should already be passing the right flags to >> mingw to link against msvcr90.dll unless if we are interfering. Can >> you show us the command lines that are being executed on your machine? >> I believe that using the environment variable DISTUTILS_DEBUG=1 will >> cause setup.py to print them out as it goes along. > > Hmm, numpy uses malloc/free in some places. Would this also be a problem > with the private memory management in the refactor branch? Yes. At least it's centralized to one place that needs to be configured/fixed. It should at least be possible to make npy_malloc() use PyMem_Malloc() when libndarray is being built for numpy itself. Grepping the numpy source, I see we have a few uses of raw malloc(). 
numpy already tries to do the right thing by providing a PyArray_malloc() macro (_pya_malloc(), internally). > I think the best > solution here would be to fix the build process. We need to do both. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From robert.kern at gmail.com Wed Feb 16 19:39:25 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Feb 2011 18:39:25 -0600 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: > On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >> That said, distutils should already be passing the right flags to mingw >> to link against msvcr90.dll unless if we are interfering. Can you show >> us the command lines that are being executed on your machine? I believe >> that using the environment variable DISTUTILS_DEBUG=1 will cause >> setup.py to print them out as it goes along. > > Build log is here: http://pav.iki.fi/tmp/build.log > It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. > > Apparently, what happens is that modules linked with g++ get -lmsvcr90, > whereas modules linked with gfortran (due to BLAS, apparently) don't get > it. Ah, that makes sense. numpy.distutils owns the link flags for Fortran modules. I think the problem is in fcompiler/gnu.py:get_libraries() (both of them). We don't add the right runtime library unless if the configured C compiler is 'msvc'. I think we need to add it unconditionally. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From wnbell at gmail.com Fri Feb 18 01:55:28 2011 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 18 Feb 2011 01:55:28 -0500 Subject: [SciPy-Dev] request of a FGMRES krylov solver In-Reply-To: References: Message-ID: On Wed, Jan 12, 2011 at 5:23 PM, Thélesphonse Bigorneault < bigorneault at gmail.com> wrote: > I would like to have a FGMRES Krylov solver in scipy. FGMRES is a variant > of the GMRES method with right preconditioning that enables the use of a > different preconditioner at each step of the Arnoldi process. > > A possibility could be to "borrow" the fgmres function from pyamg (which has > a compatible license) > > http://code.google.com/p/pyamg/source/browse/trunk/pyamg/krylov/_fgmres.py > > Including this code into scipy should be straightforward. > > Télesphore Bigorneault > Hi Télesphore, The only obstacle to moving the PyAMG solvers like fgmres into SciPy is a handful of C++ routines here [1]. In principle we could add this C++ code to scipy.sparsetools, but I'd strongly prefer that the Krylov solvers be pure-Python codes for simplicity. It shouldn't be too hard to implement reasonably fast versions of the routines in [1] using NumPy/SciPy. So, if you could contribute implementations of those routines that were close in performance to the existing C++ I'd migrate the solvers from PyAMG to SciPy. [1] http://code.google.com/p/pyamg/source/browse/trunk/pyamg/amg_core/krylov.h -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From filip.dominec at gmail.com Fri Feb 18 05:21:09 2011 From: filip.dominec at gmail.com (Filip Dominec) Date: Fri, 18 Feb 2011 11:21:09 +0100 Subject: [SciPy-Dev] New modules for important tasks (that are missing in SciPy) Message-ID: <20110218112109.25ef6654@diana> Hi, recently I was wondering if I could use SciPy for basic spectral data processing.
I needed just to subtract a linear background, determine the peaks and calculate their intensity and width. I was puzzled that there are probably no readily available functions in SciPy or NumPy to do this task in a moment. Therefore, I have written a tiny module containing these essential functions and published it as "dataproc.py" in the "Python modules" section at: http://fzu.cz/~dominecf/index.html Another important feature is the simplex optimisation, which I need to use now and then in the interactive console. This requires the syntax to be as simple as possible.
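On the simplex point: SciPy already ships a terse Nelder-Mead simplex driver, `scipy.optimize.fmin` — a quick sketch minimizing the classic Rosenbrock test function (the function and starting point are just for illustration):

```python
import numpy as np
from scipy.optimize import fmin

def rosenbrock(x):
    """Banana-shaped test function with its minimum of 0 at (1, 1)."""
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

# interactive-friendly: just a callable and a starting guess
xmin = fmin(rosenbrock, [0.0, 0.0], disp=False)
print(xmin)  # close to [1, 1]
```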
Maybe the "simplex.py" module should be > incorporated into SciPy, too. scipy.optimize.fmin implements the Nelder-Mead algorithm. From ralf.gommers at googlemail.com Sat Feb 19 03:22:05 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Feb 2011 16:22:05 +0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: > On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: > >>> That said, distutils should already be passing the right flags to mingw >>> to link against msvcr90.dll unless if we are interfering. Can you show >>> us the command lines that are being executed on your machine? I believe >>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>> setup.py to print them out as it goes along. >> >> Build log is here: http://pav.iki.fi/tmp/build.log >> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. >> >> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >> whereas modules linked with gfortran (due to BLAS, apparently) don't get >> it. > > Ah, that makes sense. numpy.distutils owns the link flags for Fortran > modules. I think the problem is in fcompiler/gnu.py:get_libraries() > (both of them). We don't add the right runtime library unless if the > configured C compiler is 'msvc'. I think we need to add it > unconditionally. > I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. Then building scipy against this patched numpy 1.5.1 shows that also g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. The tests all pass for me under Wine. I don't have access to a Win7 system though, so can someone who does try the installer in http://sourceforge.net/projects/scipy/files/scipy/temp/? The above doesn't get rid of references to msvcrt.dll though. 
See interpnd.txt and qhull.txt (output of Dependency Walker) in https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also wondering if this dual runtime is really the issue here - it has worked fine with this setup for a long time, and the segfault occurs for recently added code (interpnd). Perhaps getting rid of free/malloc is enough. There is also an unclear warning at line 181 of fcompiler/gnu.py (originally written by Pearu): # the following code is not needed (read: breaks) when using MinGW # in case want to link F77 compiled code with MSVC I'm not sure if that just refers to the line "opt.append('gcc')" and not also to the runtime_lib that you propose to change now. A note for future reference: Dependency Walker has a problem with msvcr90.dll, it claims it can not find it. It is installed anyway, in ~/__wine/drive_c/windows/winsxs/x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375/. Ralf From cgohlke at uci.edu Sat Feb 19 04:09:34 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 19 Feb 2011 01:09:34 -0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> Message-ID: <4D5F88CE.9020201@uci.edu> On 2/19/2011 12:22 AM, Ralf Gommers wrote: > On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: >> On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >>> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >> >>>> That said, distutils should already be passing the right flags to mingw >>>> to link against msvcr90.dll unless if we are interfering. Can you show >>>> us the command lines that are being executed on your machine? I believe >>>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>>> setup.py to print them out as it goes along. >>> >>> Build log is here: http://pav.iki.fi/tmp/build.log >>> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. 
>>> >>> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >>> whereas modules linked with gfortran (due to BLAS, apparently) don't get >>> it. >> >> Ah, that makes sense. numpy.distutils owns the link flags for Fortran >> modules. I think the problem is in fcompiler/gnu.py:get_libraries() >> (both of them). We don't add the right runtime library unless if the >> configured C compiler is 'msvc'. I think we need to add it >> unconditionally. >> > I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. > Then building scipy against this patched numpy 1.5.1 shows that also > g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. > The tests all pass for me under Wine. I don't have access to a Win7 > system though, so can someone who does try the installer in > http://sourceforge.net/projects/scipy/files/scipy/temp/? Just tried it on Windows 7. Qhull.pyd fails to load with "ImportError: DLL load failed". Process Monitor reports it is failing to find/load a file named ''. Most unusual. > > The above doesn't get rid of references to msvcrt.dll though. See > interpnd.txt and qhull.txt (output of Dependency Walker) in > https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also > wondering if this dual runtime is really the issue here - it has > worked fine with this setup for a long time, and the segfault occurs > for recently added code (interpnd). Perhaps getting rid of free/malloc > is enough. The dual runtime is an issue in this case. Scipy.spatial.qhull uses msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcr90 to free it. At this point it is probably best to use PyMem_Malloc/PyMem_Free, as Robert Kern suggested. 
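[Editor's illustration, not part of the original thread or the actual SciPy patch.] The PyMem_Malloc/PyMem_Free pairing can be demonstrated from pure Python with ctypes: both calls resolve into the interpreter's own shared library, so the allocation and the free always happen in the same C runtime, regardless of whether an individual extension was linked against msvcrt or msvcr90:

```python
import ctypes

# Both symbols live in the Python DLL itself, so the same runtime that
# hands out the memory is the one that takes it back.
PyMem_Malloc = ctypes.pythonapi.PyMem_Malloc
PyMem_Malloc.restype = ctypes.c_void_p
PyMem_Malloc.argtypes = [ctypes.c_size_t]

PyMem_Free = ctypes.pythonapi.PyMem_Free
PyMem_Free.restype = None
PyMem_Free.argtypes = [ctypes.c_void_p]

buf = PyMem_Malloc(128)
assert buf is not None  # a NULL return would come back as None
PyMem_Free(buf)         # freed by the allocator that allocated it
print("allocated and freed through the interpreter's own allocator")
```

In extension code the same calls are made directly from C; the ctypes detour here is only to make the pairing rule visible from a plain Python session.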
> > There is also an unclear warning at line 181 of fcompiler/gnu.py > (originally written by Pearu): > # the following code is not needed (read: breaks) when using MinGW > # in case want to link F77 compiled code with MSVC > I'm not sure if that just refers to the line "opt.append('gcc')" and > not also to the runtime_lib that you propose to change now. > > A note for future reference: Dependency Walker has a problem with > msvcr90.dll, it claims it can not find it. It is installed anyway, in > ~/__wine/drive_c/windows/winsxs/x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375/. It is expected that Dependency Walker can not find msvcr90.dll. As of Python 2.6.5 msvcrt manifests are no longer embedded into pyd files. The msvcr90 DLL is loaded by python.exe, which has the manifest embedded. > > Ralf Christoph From ralf.gommers at googlemail.com Sat Feb 19 04:34:09 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Feb 2011 17:34:09 +0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: <4D5F88CE.9020201@uci.edu> References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, Feb 19, 2011 at 5:09 PM, Christoph Gohlke wrote: > > > On 2/19/2011 12:22 AM, Ralf Gommers wrote: >> On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: >>> On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >>>> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >>> >>>>> That said, distutils should already be passing the right flags to mingw >>>>> to link against msvcr90.dll unless if we are interfering. Can you show >>>>> us the command lines that are being executed on your machine? I believe >>>>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>>>> setup.py to print them out as it goes along. >>>> >>>> Build log is here: http://pav.iki.fi/tmp/build.log >>>> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. 
>>>> >>>> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >>>> whereas modules linked with gfortran (due to BLAS, apparently) don't get >>>> it. >>> >>> Ah, that makes sense. numpy.distutils owns the link flags for Fortran >>> modules. I think the problem is in fcompiler/gnu.py:get_libraries() >>> (both of them). We don't add the right runtime library unless if the >>> configured C compiler is 'msvc'. I think we need to add it >>> unconditionally. >>> >> I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. >> Then building scipy against this patched numpy 1.5.1 shows that also >> g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. >> The tests all pass for me under Wine. I don't have access to a Win7 >> system though, so can someone who does try the installer in >> http://sourceforge.net/projects/scipy/files/scipy/temp/? > > Just tried it on Windows 7. Qhull.pyd fails to load with "ImportError: > DLL load failed". Process Monitor reports it is failing to find/load a > file named ''. Most unusual. > No idea. Perhaps that's what the comment on line 181 in fcompiler/gnu.py was about? >> >> The above doesn't get rid of references to msvcrt.dll though. See >> interpnd.txt and qhull.txt (output of Dependency Walker) in >> https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also >> wondering if this dual runtime is really the issue here - it has >> worked fine with this setup for a long time, and the segfault occurs >> for recently added code (interpnd). Perhaps getting rid of free/malloc >> is enough. > > The dual runtime is an issue in this case. Scipy.spatial.qhull uses > msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 > to free it. Yes, that's clear. But I meant that we just need to resolve this memory allocation issue, not the much broader issue of using two runtimes at all - this happens in all .pyd files. 
But in most cases only a few functions like _assert, _isatty, _tempnam are used from msvcrt.dll. This is harmless as far as I can tell and probably can't be avoided with MinGW. > At this point it is probably best to use > PyMem_Malloc/PyMem_Free, as Robert Kern suggested. Agreed. Ralf >> >> There is also an unclear warning at line 181 of fcompiler/gnu.py >> (originally written by Pearu): >> ? ? ?# the following code is not needed (read: breaks) when using MinGW >> ? ? ?# in case want to link F77 compiled code with MSVC >> I'm not sure if that just refers to the line "opt.append('gcc')" and >> not also to the runtime_lib that you propose to change now. From cgohlke at uci.edu Sat Feb 19 04:44:09 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 19 Feb 2011 01:44:09 -0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: <4D5F90E9.5030409@uci.edu> On 2/19/2011 1:34 AM, Ralf Gommers wrote: > On Sat, Feb 19, 2011 at 5:09 PM, Christoph Gohlke wrote: >> >> >> On 2/19/2011 12:22 AM, Ralf Gommers wrote: >>> On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: >>>> On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >>>>> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >>>> >>>>>> That said, distutils should already be passing the right flags to mingw >>>>>> to link against msvcr90.dll unless if we are interfering. Can you show >>>>>> us the command lines that are being executed on your machine? I believe >>>>>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>>>>> setup.py to print them out as it goes along. >>>>> >>>>> Build log is here: http://pav.iki.fi/tmp/build.log >>>>> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. >>>>> >>>>> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >>>>> whereas modules linked with gfortran (due to BLAS, apparently) don't get >>>>> it. 
>>>> >>>> Ah, that makes sense. numpy.distutils owns the link flags for Fortran >>>> modules. I think the problem is in fcompiler/gnu.py:get_libraries() >>>> (both of them). We don't add the right runtime library unless if the >>>> configured C compiler is 'msvc'. I think we need to add it >>>> unconditionally. >>>> >>> I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. >>> Then building scipy against this patched numpy 1.5.1 shows that also >>> g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. >>> The tests all pass for me under Wine. I don't have access to a Win7 >>> system though, so can someone who does try the installer in >>> http://sourceforge.net/projects/scipy/files/scipy/temp/? >> >> Just tried it on Windows 7. Qhull.pyd fails to load with "ImportError: >> DLL load failed". Process Monitor reports it is failing to find/load a >> file named ''. Most unusual. >> > No idea. Perhaps that's what the comment on line 181 in > fcompiler/gnu.py was about? > >>> >>> The above doesn't get rid of references to msvcrt.dll though. See >>> interpnd.txt and qhull.txt (output of Dependency Walker) in >>> https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also >>> wondering if this dual runtime is really the issue here - it has >>> worked fine with this setup for a long time, and the segfault occurs >>> for recently added code (interpnd). Perhaps getting rid of free/malloc >>> is enough. >> >> The dual runtime is an issue in this case. Scipy.spatial.qhull uses >> msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 >> to free it. > > Yes, that's clear. But I meant that we just need to resolve this > memory allocation issue, not the much broader issue of using two > runtimes at all - this happens in all .pyd files. I agree. >But in most cases > only a few functions like _assert, _isatty, _tempnam are used from > msvcrt.dll. 
This is harmless as far as I can tell and probably can't > be avoided with MinGW. The remaining dependencies on msvcrt probably come from linking to libmoldname.a instead of libmoldname90.a. Pygame went through some trouble to remove all msvcrt dependencies. Christoph > >> At this point it is probably best to use >> PyMem_Malloc/PyMem_Free, as Robert Kern suggested. > > Agreed. > > Ralf > >>> >>> There is also an unclear warning at line 181 of fcompiler/gnu.py >>> (originally written by Pearu): >>> # the following code is not needed (read: breaks) when using MinGW >>> # in case want to link F77 compiled code with MSVC >>> I'm not sure if that just refers to the line "opt.append('gcc')" and >>> not also to the runtime_lib that you propose to change now. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Sat Feb 19 05:09:14 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Feb 2011 18:09:14 +0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: <4D5F90E9.5030409@uci.edu> References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> <4D5F90E9.5030409@uci.edu> Message-ID: On Sat, Feb 19, 2011 at 5:44 PM, Christoph Gohlke wrote: > > > On 2/19/2011 1:34 AM, Ralf Gommers wrote: >> On Sat, Feb 19, 2011 at 5:09 PM, Christoph Gohlke wrote: >>> >>> >>> On 2/19/2011 12:22 AM, Ralf Gommers wrote: >>>> On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: >>>>> On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >>>>>> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >>>>> >>>>>>> That said, distutils should already be passing the right flags to mingw >>>>>>> to link against msvcr90.dll unless if we are interfering. Can you show >>>>>>> us the command lines that are being executed on your machine? 
I believe >>>>>>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>>>>>> setup.py to print them out as it goes along. >>>>>> >>>>>> Build log is here: http://pav.iki.fi/tmp/build.log >>>>>> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. >>>>>> >>>>>> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >>>>>> whereas modules linked with gfortran (due to BLAS, apparently) don't get >>>>>> it. >>>>> >>>>> Ah, that makes sense. numpy.distutils owns the link flags for Fortran >>>>> modules. I think the problem is in fcompiler/gnu.py:get_libraries() >>>>> (both of them). We don't add the right runtime library unless if the >>>>> configured C compiler is 'msvc'. I think we need to add it >>>>> unconditionally. >>>>> >>>> I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. >>>> Then building scipy against this patched numpy 1.5.1 shows that also >>>> g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. >>>> The tests all pass for me under Wine. I don't have access to a Win7 >>>> system though, so can someone who does try the installer in >>>> http://sourceforge.net/projects/scipy/files/scipy/temp/? >>> >>> Just tried it on Windows 7. Qhull.pyd fails to load with "ImportError: >>> DLL load failed". Process Monitor reports it is failing to find/load a >>> file named ''. Most unusual. >>> >> No idea. Perhaps that's what the comment on line 181 in >> fcompiler/gnu.py was about? >> >>>> >>>> The above doesn't get rid of references to msvcrt.dll though. See >>>> interpnd.txt and qhull.txt (output of Dependency Walker) in >>>> https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also >>>> wondering if this dual runtime is really the issue here - it has >>>> worked fine with this setup for a long time, and the segfault occurs >>>> for recently added code (interpnd). Perhaps getting rid of free/malloc >>>> is enough. >>> >>> The dual runtime is an issue in this case. 
Scipy.spatial.qhull uses >>> msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 >>> to free it. >> >> Yes, that's clear. But I meant that we just need to resolve this >> memory allocation issue, not the much broader issue of using two >> runtimes at all - this happens in all .pyd files. > > I agree. > >>But in most cases >> only a few functions like _assert, _isatty, _tempnam are used from >> msvcrt.dll. This is harmless as far as I can tell and probably can't >> be avoided with MinGW. > > The remaining dependencies on msvcrt probably come from linking to > libmoldname.a instead of libmoldname90.a. Pygame went through some > trouble to remove all msvcrt dependencies > . I don't like that approach for two reasons: 1. It's manual tweaking that makes the build environment harder to reproduce on other machines. 2. Pygame now links to a specific library, msvcr71.dll. For us this could be msvcr90.dll. But what if the next version of Python is built with Visual Studio 2010? Then you want msvcr100.dll, so you end up with two runtimes again. In my opinion we should only follow the Pygame example if there's a really good reason to do so. Ralf From pav at iki.fi Sat Feb 19 05:22:08 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 19 Feb 2011 10:22:08 +0000 (UTC) Subject: [SciPy-Dev] Release blocker? References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, 19 Feb 2011 01:09:34 -0800, Christoph Gohlke wrote: [clip] > The dual runtime is an issue in this case. Scipy.spatial.qhull uses > msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 > to free it. At this point it is probably best to use > PyMem_Malloc/PyMem_Free, as Robert Kern suggested. Then the fix is here: http://github.com/pv/scipy-work/tree/bug/interpnd-windows No need to use any heap allocation scheme --- the structure is better off stored on the stack anyway. 
Pauli From cgohlke at uci.edu Sat Feb 19 05:23:42 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 19 Feb 2011 02:23:42 -0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> <4D5F90E9.5030409@uci.edu> Message-ID: <4D5F9A2E.4020601@uci.edu> On 2/19/2011 2:09 AM, Ralf Gommers wrote: > On Sat, Feb 19, 2011 at 5:44 PM, Christoph Gohlke wrote: >> >> >> On 2/19/2011 1:34 AM, Ralf Gommers wrote: >>> On Sat, Feb 19, 2011 at 5:09 PM, Christoph Gohlke wrote: >>>> >>>> >>>> On 2/19/2011 12:22 AM, Ralf Gommers wrote: >>>>> On Thu, Feb 17, 2011 at 8:39 AM, Robert Kern wrote: >>>>>> On Wed, Feb 16, 2011 at 18:25, Pauli Virtanen wrote: >>>>>>> On Wed, 16 Feb 2011 17:53:45 -0600, Robert Kern wrote: >>>>>> >>>>>>>> That said, distutils should already be passing the right flags to mingw >>>>>>>> to link against msvcr90.dll unless if we are interfering. Can you show >>>>>>>> us the command lines that are being executed on your machine? I believe >>>>>>>> that using the environment variable DISTUTILS_DEBUG=1 will cause >>>>>>>> setup.py to print them out as it goes along. >>>>>>> >>>>>>> Build log is here: http://pav.iki.fi/tmp/build.log >>>>>>> It's a simple Numpy 1.5.1 + current mingw32 + python.org 3.1 setup. >>>>>>> >>>>>>> Apparently, what happens is that modules linked with g++ get -lmsvcr90, >>>>>>> whereas modules linked with gfortran (due to BLAS, apparently) don't get >>>>>>> it. >>>>>> >>>>>> Ah, that makes sense. numpy.distutils owns the link flags for Fortran >>>>>> modules. I think the problem is in fcompiler/gnu.py:get_libraries() >>>>>> (both of them). We don't add the right runtime library unless if the >>>>>> configured C compiler is 'msvc'. I think we need to add it >>>>>> unconditionally. >>>>>> >>>>> I tried that, https://github.com/rgommers/numpy/tree/mingw-runtime-1.5.x. 
>>>>> Then building scipy against this patched numpy 1.5.1 shows that also >>>>> g77 receives '-lmsvcr90'. Build log at http://pastebin.com/Zu4PB4ss. >>>>> The tests all pass for me under Wine. I don't have access to a Win7 >>>>> system though, so can someone who does try the installer in >>>>> http://sourceforge.net/projects/scipy/files/scipy/temp/? >>>> >>>> Just tried it on Windows 7. Qhull.pyd fails to load with "ImportError: >>>> DLL load failed". Process Monitor reports it is failing to find/load a >>>> file named ''. Most unusual. >>>> >>> No idea. Perhaps that's what the comment on line 181 in >>> fcompiler/gnu.py was about? >>> >>>>> >>>>> The above doesn't get rid of references to msvcrt.dll though. See >>>>> interpnd.txt and qhull.txt (output of Dependency Walker) in >>>>> https://sourceforge.net/projects/scipy/files/scipy/temp/. I am also >>>>> wondering if this dual runtime is really the issue here - it has >>>>> worked fine with this setup for a long time, and the segfault occurs >>>>> for recently added code (interpnd). Perhaps getting rid of free/malloc >>>>> is enough. >>>> >>>> The dual runtime is an issue in this case. Scipy.spatial.qhull uses >>>> msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 >>>> to free it. >>> >>> Yes, that's clear. But I meant that we just need to resolve this >>> memory allocation issue, not the much broader issue of using two >>> runtimes at all - this happens in all .pyd files. >> >> I agree. >> >>> But in most cases >>> only a few functions like _assert, _isatty, _tempnam are used from >>> msvcrt.dll. This is harmless as far as I can tell and probably can't >>> be avoided with MinGW. >> >> The remaining dependencies on msvcrt probably come from linking to >> libmoldname.a instead of libmoldname90.a. Pygame went through some >> trouble to remove all msvcrt dependencies >> . > > I don't like that approach for two reasons: > 1. 
It's manual tweaking that makes the build environment harder to > reproduce on other machines. > 2. Pygame now links to a specific library, msvcr71.dll. For us this > could be msvcr90.dll. But what if the next version of Python is built > with Visual Studio 2010? Then you want msvcr100.dll, so you end up > with two runtimes again. > > In my opinion we should only follow the Pygame example if there's a > really good reason to do so. Actually that document is a little outdated. Msvcr71 is only needed for Python 2.5. Also libmoldname71.a and libmoldname90.a are included in recent Mingw distributions (not mingw64 though). Christoph > > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Sat Feb 19 06:58:37 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Feb 2011 19:58:37 +0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, Feb 19, 2011 at 6:22 PM, Pauli Virtanen wrote: > On Sat, 19 Feb 2011 01:09:34 -0800, Christoph Gohlke wrote: > [clip] >> The dual runtime is an issue in this case. Scipy.spatial.qhull uses >> msvcrt to allocate memory and scipy.interpolate.interpnd uses msvcrt90 >> to free it. At this point it is probably best to use >> PyMem_Malloc/PyMem_Free, as Robert Kern suggested. > > Then the fix is here: > > ? ?http://github.com/pv/scipy-work/tree/bug/interpnd-windows > > No need to use any heap allocation scheme --- the structure is better off > stored on the stack anyway. > I uploaded a new binary with that fix included and built against numpy 1.5.1 (without the distutils change) to https://sourceforge.net/projects/scipy/files/scipy/temp/. Christoph or another Win7 user, can you please test again? If that works it should become RC4. 
Ralf From pav at iki.fi Sat Feb 19 08:00:22 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 19 Feb 2011 13:00:22 +0000 (UTC) Subject: [SciPy-Dev] Release blocker? References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, 19 Feb 2011 19:58:37 +0800, Ralf Gommers wrote: [clip] > I uploaded a new binary with that fix included and built against numpy > 1.5.1 (without the distutils change) to > https://sourceforge.net/projects/scipy/files/scipy/temp/. > > Christoph or another Win7 user, can you please test again? Works for me. Pauli From ralf.gommers at googlemail.com Sat Feb 19 08:21:46 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Feb 2011 21:21:46 +0800 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, Feb 19, 2011 at 9:00 PM, Pauli Virtanen wrote: > On Sat, 19 Feb 2011 19:58:37 +0800, Ralf Gommers wrote: > [clip] >> I uploaded a new binary with that fix included and built against numpy >> 1.5.1 (without the distutils change) to >> https://sourceforge.net/projects/scipy/files/scipy/temp/. >> >> Christoph or another Win7 user, can you please test again? > > Works for me. Thanks. I'll go ahead then and tag that as rc4. Ralf From bsouthey at gmail.com Sat Feb 19 09:14:20 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Sat, 19 Feb 2011 08:14:20 -0600 Subject: [SciPy-Dev] Release blocker? In-Reply-To: References: <4D5ABEAD.4000603@gmail.com> <4D5C548F.2040005@uci.edu> <4D5F88CE.9020201@uci.edu> Message-ID: On Sat, Feb 19, 2011 at 7:00 AM, Pauli Virtanen wrote: > On Sat, 19 Feb 2011 19:58:37 +0800, Ralf Gommers wrote: > [clip] >> I uploaded a new binary with that fix included and built against numpy >> 1.5.1 (without the distutils change) to >> https://sourceforge.net/projects/scipy/files/scipy/temp/. >> >> Christoph or another Win7 user, can you please test again? 
> > Works for me. > > ? ? ? ?Pauli > You lot are as impressive as usual! Also for me, just get the Bessel error that has been previously noted. Bruce IDLE 2.6.3 >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.5.1 NumPy is installed in E:\Python26\lib\site-packages\numpy SciPy version 0.9.0rc3.dev SciPy is installed in E:\Python26\lib\site-packages\scipy Python version 2.6.3 (r263rc1:75186, Oct 2 2009, 20:40:30) [MSC v.1500 32 bit (Intel)] nose version 0.11.1 .............................................................................................................................................................................................................K..............................................................................................................................K..K....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS.................................................................S................................................................................................................................................................................................................K...........................................................................................................................................................................................SSSSS.........S.....................................
........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSSSSSSS................................................................................................................................................................K...............................................................K...........................................................................................................................................................KK......................................................................................................................................................................................................................................................................................................................................................................................................................F........K.K........................................................................................................................................................................................................................................................................................................................................................................................K........K.........SSSSSSS...........................................................................
.................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................S.........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................error removing e:\users\me\appdata\local\temp\tmpiopytmcat_test: e:\users\me\appdata\local\temp\tmpiopytmcat_test: The directory is not empty ................................................................................................... 
====================================================================== FAIL: Real-valued Bessel domains ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\Python26\lib\site-packages\scipy\special\tests\test_basic.py", line 1712, in test_ticket_854 assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1)) File "E:\Python26\lib\site-packages\numpy\testing\utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: (nan, nan, nan, nan) ---------------------------------------------------------------------- Ran 4736 tests in 126.442s FAILED (KNOWNFAIL=12, SKIP=42, failures=1) >>> From pav at iki.fi Sat Feb 19 09:38:09 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 19 Feb 2011 14:38:09 +0000 (UTC) Subject: [SciPy-Dev] griddata problem in 0.9rc3 References: <4D5FC5AD.5090005@gmail.com> Message-ID: On Sat, 19 Feb 2011 21:29:17 +0800, Wolfgang Kerzendorf wrote: [clip] > There is a problem with scipy.interpolate: > > In [2]: interpolate.griddata([-1.5,-1.0], [5,6],[[-1.12]]) Out[2]: > array([[ 5.76]]) > > In [3]: interpolate.griddata([-1.0,-1.5], [5,6],[[-1.12]]) Out[3]: > array([[ nan]]) > > It depends on the order of the input values. That's the behavior of interp1d, which griddata inherits. The x-values must be in ascending order. >>> interp1d([-1.0, -1.5], [5, 6], bounds_error=False)([-1.12]) array([ nan]) >>> interp1d([-1.5, -1.0], [5, 6], bounds_error=False)([-1.12]) array([ 5.76]) Documentation issue, mainly. -- Pauli Virtanen From cgohlke at uci.edu Sat Feb 19 14:39:01 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 19 Feb 2011 11:39:01 -0800 Subject: [SciPy-Dev] Problem with form feed character in interpolate/polyint.py Message-ID: <4D601C55.6060102@uci.edu> Hello, when building scipy 0.9rc4 on Python 3.2rc3 the file interpolate/polyint.py gets truncated by the 2to3 tool at line 382, right before `class BarycentricInterpolator(object):`. 
Line 382 contains an ASCII form feed (0x0C) character. Removing the form feed character fixes this issue (patch attached). This is probably a bug in recent builds of Python 3.2. Christoph -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: formfeed.diff URL: From ralf.gommers at googlemail.com Sat Feb 19 22:32:12 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 Feb 2011 11:32:12 +0800 Subject: [SciPy-Dev] Problem with form feed character in interpolate/polyint.py In-Reply-To: <4D601C55.6060102@uci.edu> References: <4D601C55.6060102@uci.edu> Message-ID: On Sun, Feb 20, 2011 at 3:39 AM, Christoph Gohlke wrote: > Hello, > > when building scipy 0.9rc4 on Python 3.2rc3 the file interpolate/polyint.py > gets truncated by the 2to3 tool at line 382, right before `class > BarycentricInterpolator(object):`. Line 382 contains an ASCII form feed > (0x0C) character. Removing the form feed character fixes this issue (patch > attached). > > This is probably a bug in recent builds of Python 3.2. > This indeed wasn't an issue with Python 3.2rc1. But it doesn't matter, we'd better fix it anyway. The extra half a day doesn't matter so much now, looks like we missed the Ubuntu deadline in any case. I'll commit your patch, backport r7144 that Bruce indicated and tag rc5 today. Are you filing a bug report against 3.2rc3? 
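[Editor's illustration, not part of the original thread.] Stray control characters like this are easy to scan for; here is a small sketch (function and demo file are my own, not the scrubbed formfeed.diff) that reports the position of the first form feed (0x0C) byte on each line of a source file:

```python
import os
import tempfile

def find_form_feeds(path):
    """Return (line_number, column) of the first 0x0C byte on each line."""
    hits = []
    with open(path, "rb") as f:
        # bytes.splitlines() only splits on \r and \n, so a form feed
        # stays embedded in its line and can be located by column.
        for lineno, line in enumerate(f.read().splitlines(), 1):
            col = line.find(b"\x0c")
            if col != -1:
                hits.append((lineno, col))
    return hits

# Demo on a throwaway file with one form feed right before a class statement,
# mimicking the polyint.py situation:
fd, name = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "wb") as f:
    f.write(b"x = 1\n\x0cclass BarycentricInterpolator(object):\n    pass\n")
hits = find_form_feeds(name)
os.remove(name)
print(hits)  # -> [(2, 0)]
```

Running a scan like this over a source tree is a cheap way to catch such characters before a tool like 2to3 trips over them.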
Cheers, Ralf From cgohlke at uci.edu Sat Feb 19 23:22:09 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 19 Feb 2011 20:22:09 -0800 Subject: [SciPy-Dev] Problem with form feed character in interpolate/polyint.py In-Reply-To: References: <4D601C55.6060102@uci.edu> Message-ID: <4D6096F1.40605@uci.edu> On 2/19/2011 7:32 PM, Ralf Gommers wrote: > On Sun, Feb 20, 2011 at 3:39 AM, Christoph Gohlke wrote: >> Hello, >> >> when building scipy 0.9rc4 on Python 3.2rc3 the file interpolate/polyint.py >> gets truncated by the 2to3 tool at line 382, right before `class >> BarycentricInterpolator(object):`. Line 382 contains a ASCII form feed >> (0x0C) character. Removing the form feed character fixes this issue (patch >> attached). >> >> This is probably a bug in recent builds of Python 3.2. >> > This indeed wasn't an issue with Python 3.2rc1. But it doesn't matter, > we'd better fix it anyway. The extra half a day doesn't matter so much > now, looks like we missed the Ubuntu deadline in any case. I'll commit > your patch, backport r7144 that Bruce indicated and tag rc5 today. > > Are you filing a bug report against 3.2rc3? > http://bugs.python.org/issue11250 Christoph From ralf.gommers at googlemail.com Sun Feb 20 02:33:34 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 Feb 2011 15:33:34 +0800 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: On Tue, Feb 15, 2011 at 7:53 AM, Warren Weckesser wrote: > > > On Sat, Feb 12, 2011 at 7:28 PM, Ralf Gommers > wrote: >> >> >> On Sun, Feb 13, 2011 at 5:05 AM, Pauli Virtanen wrote: >>> >>> One wild idea to make this clearer could be to prefix all internal sub- >>> package names with the usual '_'. In the long run, it probably wouldn't >>> be as bad as it initially sounds like. >>> >> This is not a wild idea at all, I think it should be done. I considered >> all modules without '_' prefix public API. >> > > > Agreed (despite what I said in my initial post). 
> > To actually do this, we'll need to check which packages have modules that > should be private.? These can be renamed in 0.10 to have an underscore,? and > new public versions created that contain a deprecation warning and that > import everything from the private version.?? The deprecated public modules > can be removed in 0.11. > > Some modules will require almost no changes.? For example, scipy.cluster > *only* exposes two modules, vq and hierarchy, so no changes are needed. > (Well, there is also the module info.py that all packages have.? That should > become _info.py--there's no need for that to be public, is there?) Agreed, rename to _info.py > Other packages will probably require some discussion about what modules should be > public. > > Consider the above a proposed change for 0.10 and 0.11--what do you think? > Sounds good. Attached is a file that goes through scipy sub-packages and checks their __all__ for modules. Those are public by definition (but this doesn't give you the whole API). It's pretty messy, for example: signal ====== bsplines filter_design fir_filter_design integrate interpolate linalg ltisys np numpy optimize scipy signaltools sigtools special spline types warnings waveforms wavelets windows That should be cleaned up. Then there are also public modules that don't show up of course (for example odr.models). How about doing the following? : 1. Start a doc, perhaps on the wiki, with a full list of public modules. 2. Put that doc at the beginning of the reference guide, as well as the relevant part in the docstring for each sub-package. 3. Clean up existing __all__, and add __all__ to sub-packages that don't have them yet. 4. Rename private modules, with suitable deprecation warning. Cheers, Ralf -------------- next part -------------- A non-text attachment was scrubbed... 
Name: check_for_modules.py Type: application/octet-stream Size: 943 bytes Desc: not available URL: From ralf.gommers at googlemail.com Sun Feb 20 06:16:04 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 Feb 2011 19:16:04 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 5 Message-ID: Hi, I am pleased to announce the availability of the fifth - and hopefully final - release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7. Sources and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc5/. Binaries will follow there within a day. Changes since release candidate 3: - a fix for a segfault on Windows 7 - a fix for a bug introduced in Python 3.2rc3 - a fix for a bug in scipy.special with MKL builds If no more issues are reported, 0.9.0 will be released in one week. Enjoy, Ralf From robert.kern at gmail.com Sun Feb 20 12:40:27 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 20 Feb 2011 11:40:27 -0600 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 19:28, Ralf Gommers wrote: > > On Sun, Feb 13, 2011 at 5:05 AM, Pauli Virtanen wrote: >> >> On Sat, 12 Feb 2011 14:12:44 -0600, Travis Oliphant wrote: >> > The policy in the past has been that the stable API is only one level >> > down from the scipy namespace. >> > >> > So, developers should import the name from the top level namespace. > > Is this written down somewhere? As far as I understand this is not standard > practice in Python (one should use underscores). Actually, it is de facto standard practice. The presence of underscores do mark something as private, but the absence of underscores do not mark something as public. Almost no one prepends underscores to module names regardless of whether they are considered "public" or "private". 
The public API of a package is determined by any number of conventions which may or may not be documented explicitly. For scipy, the convention is that public functions are exposed in the __init__.py. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From pav at iki.fi Sun Feb 20 13:14:44 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 20 Feb 2011 18:14:44 +0000 (UTC) Subject: [SciPy-Dev] Question about subpackage/submodule API References: Message-ID: On Sun, 20 Feb 2011 11:40:27 -0600, Robert Kern wrote: [clip] > Actually, it is de facto standard practice. The presence of underscores > do mark something as private, but the absence of underscores do not mark > something as public. Almost no one prepends underscores to module names > regardless of whether they are considered "public" or "private". The > public API of a package is determined by any number of conventions which > may or may not be documented explicitly. For scipy, the convention is > that public functions are exposed in the __init__.py. True, it is what is typically done. The reason why it's done like this is however usually more laziness than some well-thought-out design principle. This does not have to be so. I don't foresee prefixing an underscore to "private" modules to be a PITA in practice. Some people also add a "packagename.api" module to expose a well-defined API. It's true that this doesn't matter very much, but carefully painted bike sheds look nice. Pauli From alan.isaac at gmail.com Sun Feb 20 15:50:23 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Sun, 20 Feb 2011 15:50:23 -0500 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 5 In-Reply-To: References: Message-ID: <4D617E8F.3050301@gmail.com> Still getting the directory removal error. Otherwise looks ok on Vista with Python 2.7. 
Alan Isaac NumPy version 1.5.0 NumPy is installed in c:\Python27\lib\site-packages\numpy SciPy version 0.9.0rc5 SciPy is installed in c:\Python27\lib\site-packages\scipy Python version 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] nose version 0.11.0 error removing c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: The directory is not empty ................................................................................................... ---------------------------------------------------------------------- Ran 4728 tests in 72.356s OK (KNOWNFAIL=12, SKIP=42) From ralf.gommers at googlemail.com Sun Feb 20 19:19:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 21 Feb 2011 08:19:21 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 5 In-Reply-To: <4D617E8F.3050301@gmail.com> References: <4D617E8F.3050301@gmail.com> Message-ID: On Mon, Feb 21, 2011 at 4:50 AM, Alan G Isaac wrote: > Still getting the directory removal error. > Otherwise looks ok on Vista with Python 2.7. Thanks for testing Alan. This dir removal message does indicate a problem that should be addressed, but I don't consider it urgent to do so for 0.9.0. Could you please file a ticket? Thanks, Ralf > > NumPy version 1.5.0 > NumPy is installed in c:\Python27\lib\site-packages\numpy > SciPy version 0.9.0rc5 > SciPy is installed in c:\Python27\lib\site-packages\scipy > Python version 2.7 (r27:82525, Jul ?4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] > nose version 0.11.0 > error removing c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: The directory is not empty > ................................................................................................... 
> ---------------------------------------------------------------------- > Ran 4728 tests in 72.356s > > OK (KNOWNFAIL=12, SKIP=42) > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From cgohlke at uci.edu Mon Feb 21 03:11:39 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 21 Feb 2011 00:11:39 -0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 5 In-Reply-To: References: <4D617E8F.3050301@gmail.com> Message-ID: <4D621E3B.5060008@uci.edu> On 2/20/2011 4:19 PM, Ralf Gommers wrote: > On Mon, Feb 21, 2011 at 4:50 AM, Alan G Isaac wrote: >> Still getting the directory removal error. >> Otherwise looks ok on Vista with Python 2.7. > > Thanks for testing Alan. > > This dir removal message does indicate a problem that should be > addressed, but I don't consider it urgent to do so for 0.9.0. Could > you please file a ticket? > > Thanks, > Ralf > >> >> NumPy version 1.5.0 >> NumPy is installed in c:\Python27\lib\site-packages\numpy >> SciPy version 0.9.0rc5 >> SciPy is installed in c:\Python27\lib\site-packages\scipy >> Python version 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] >> nose version 0.11.0 >> error removing c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: The directory is not empty >> ................................................................................................... >> ---------------------------------------------------------------------- >> Ran 4728 tests in 72.356s >> >> OK (KNOWNFAIL=12, SKIP=42) The message comes from scipy/weave/tests/test_catalog.py test_create_catalog(). If the catalog is closed before removing the test directory everything is good. A patch is attached. Christoph -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: test_catalog.diff URL: From ralf.gommers at googlemail.com Mon Feb 21 08:16:03 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 21 Feb 2011 21:16:03 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 release candidate 5 In-Reply-To: <4D621E3B.5060008@uci.edu> References: <4D617E8F.3050301@gmail.com> <4D621E3B.5060008@uci.edu> Message-ID: On Mon, Feb 21, 2011 at 4:11 PM, Christoph Gohlke wrote: > > > On 2/20/2011 4:19 PM, Ralf Gommers wrote: >> >> On Mon, Feb 21, 2011 at 4:50 AM, Alan G Isaac >> ?wrote: >>> >>> Still getting the directory removal error. >>> Otherwise looks ok on Vista with Python 2.7. >> >> Thanks for testing Alan. >> >> This dir removal message does indicate a problem that should be >> addressed, but I don't consider it urgent to do so for 0.9.0. Could >> you please file a ticket? >> >> Thanks, >> Ralf >> >>> >>> NumPy version 1.5.0 >>> NumPy is installed in c:\Python27\lib\site-packages\numpy >>> SciPy version 0.9.0rc5 >>> SciPy is installed in c:\Python27\lib\site-packages\scipy >>> Python version 2.7 (r27:82525, Jul ?4 2010, 09:01:59) [MSC v.1500 32 bit >>> (Intel)] >>> nose version 0.11.0 >>> error removing c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: >>> c:\users\alanis~1\appdata\local\temp\tmpmd729qcat_test: The directory is not >>> empty >>> >>> ................................................................................................... >>> ---------------------------------------------------------------------- >>> Ran 4728 tests in 72.356s >>> >>> OK (KNOWNFAIL=12, SKIP=42) > > The message comes from scipy/weave/tests/test_catalog.py > test_create_catalog(). If the catalog is closed before removing the test > directory everything is good. A patch is attached. > Thanks, applied to trunk in r7170. Should be fine to apply to 0.9.0 final also. 
Ralf From ralf.gommers at googlemail.com Mon Feb 21 09:32:46 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 21 Feb 2011 22:32:46 +0800 Subject: [SciPy-Dev] Question about subpackage/submodule API In-Reply-To: References: Message-ID: On Mon, Feb 21, 2011 at 2:14 AM, Pauli Virtanen wrote: > On Sun, 20 Feb 2011 11:40:27 -0600, Robert Kern wrote: > [clip] >> Actually, it is de facto standard practice. The presence of underscores >> do mark something as private, but the absence of underscores do not mark >> something as public. Almost no one prepends underscores to module names >> regardless of whether they are considered "public" or "private". The >> public API of a package is determined by any number of conventions which >> may or may not be documented explicitly. I tried to find a clear explanation somewhere in the Python docs but failed. So you're right. On the other hand Python itself seems to be consistent with underscored modules, and in the first Stackoverflow hit for "python API __init__" Alex Martelli says the same thing I did. I could be in worse company:) >> For scipy, the convention is that public functions are exposed in the __init__.py. Only one level down from the scipy namespace. Unless it's two. With a lot of stuff exposed that obviously isn't part of the API. It's pretty inconsistent. > True, it is what is typically done. The reason why it's done like this is > however usually more laziness than some well-thought-out design principle. > > This does not have to be so. I don't foresee prefixing an underscore to > "private" modules to be a PITA in practice. Some people also add a > "packagename.api" module to expose a well-defined API. It's true that > this doesn't matter very much, but carefully painted bike sheds look nice. Once you accidentally break users' code with a refactor like the one that started this thread, it may matter a little.... 
Cheers,
Ralf

From warren.weckesser at gmail.com  Mon Feb 21 18:30:40 2011
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Mon, 21 Feb 2011 17:30:40 -0600
Subject: [SciPy-Dev] Scipy : Weave tutorial
In-Reply-To: <20110221104108.152500@gmx.com>
References: <20110221104108.152500@gmx.com>
Message-ID:

Benoît,

I hope you don't mind that I cc'ed this to the scipy-dev mailing list.
I'm sure other people are interested in this issue.

Weave is not obsolete, but--as you have discovered--the tutorial is old
and not up-to-date.  The change that I made was basically a global
search-and-replace of a very common spelling mistake that existed in many
places in scipy.  I did not take a close look at the tutorial.

Warren


On Mon, Feb 21, 2011 at 4:41 AM,  wrote:

> Hello Warren,
>
> On Scipy Wiki, I've noticed that you've fixed a misspelling on the file
> "tutorial.txt" :
>
>
> http://projects.scipy.org/scipy/changeset/7143/trunk/scipy/weave/doc/tutorial.txt
>
> Do you think this file has only spelling mistakes? Can you tell me if the
> code example on line 641 works:
> from scipy.weave.blitz_tools import blitz_type_factories
> from scipy.weave import scalar_spec
> ...
>
> And if it doesn't, do you know how to correct it? (I've tried some
> experiments here : http://projects.scipy.org/scipy/ticket/1368 )
>
> Thanks,
>
> Benoît (aka bscipy on Scipy Wiki)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joosep.pata at gmail.com  Tue Feb 22 02:37:17 2011
From: joosep.pata at gmail.com (Joosep Pata)
Date: Tue, 22 Feb 2011 09:37:17 +0200
Subject: [SciPy-Dev] SciPy and the Google Summer of Code
Message-ID: <4D6367AD.4070607@gmail.com>

Hi,

This is my first post to the mailing list. My name is Joosep and I'm a
physics major from Estonia.

Are SciPy and NumPy taking part in GSOC this year? If so, how would one
go about applying for it? I'd really like to get into open-source
development and SciPy/NumPy seems like the way to go.
I have some previous experience with C++ and Python and I have done numerical methods before. What would need doing? I'd of course be happy to do development outside of GSOC as well. Cheers, Joosep From james.hensman at gmail.com Wed Feb 23 10:07:56 2011 From: james.hensman at gmail.com (James Hensman) Date: Wed, 23 Feb 2011 15:07:56 +0000 Subject: [SciPy-Dev] scipy.integrate's dopri5 methods doesn't pass additional arguments Message-ID: in scipy/integrate/ode.py, line 745 in the dopri5 class. the following is needed in order to pass additional arguments to the function to be integrated: < x,y,iwork,idid = self.runner(*((f,t0,y0,t1) + tuple(self.call_args))) --- > x,y,iwork,idid = self.runner(*((f,t0,y0,t1) + tuple(self.call_args)+(f_params,))) Please excuse any faux pas I may have made: this is my first post to such a list. James. -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Wed Feb 23 11:40:31 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 23 Feb 2011 10:40:31 -0600 Subject: [SciPy-Dev] scipy.integrate's dopri5 methods doesn't pass additional arguments In-Reply-To: References: Message-ID: On Wed, Feb 23, 2011 at 9:07 AM, James Hensman wrote: > in scipy/integrate/ode.py, line 745 in the dopri5 class. the following is > needed in order to pass additional arguments to the function to be > integrated: > > < x,y,iwork,idid = self.runner(*((f,t0,y0,t1) + > tuple(self.call_args))) > --- > > x,y,iwork,idid = self.runner(*((f,t0,y0,t1) + > tuple(self.call_args)+(f_params,))) > > Please excuse any faux pas I may have made: this is my first post to such a > list. > > Hi James, Welcome to the list. Thanks for reporting the problem, and even better, for suggesting the fix! Could you create a ticket about this on the wiki? http://projects.scipy.org/scipy/wiki Once you register and login, you'll see a link for "New Ticket". Warren > James. 
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scopatz at gmail.com Wed Feb 23 18:33:54 2011 From: scopatz at gmail.com (Anthony Scopatz) Date: Wed, 23 Feb 2011 23:33:54 +0000 Subject: [SciPy-Dev] ANN: inSCIght, The Scientific Computing Podcast Message-ID: Hello All, I am very pleased to announce inSCIght, a new scientific computing podcast (press release below). I apologize for those of you in the intersection of these lists that may receive this message multiple times. As I mention in the press release, we are very open to your contributions! Be Well Anthony inSCIght: The Scientific Computing Podcast ========================================== 'inSCIght' is a podcast that focuses on scientific computing in all of its various forms. Every week we have a few panelists engage head-to-head on poignant and interesting topics. The panelists are drawn from all across the scientific computing community. From embedded systems experts to very high level language gurus, biologists and nuclear engineers, the hosts of inSCIght use computers to solve science and engineering problems everyday. This podcast throws people, ideas, and opinions into an audio-blender hoping to educate and entice each other and the world. You can find us at: * inSCIght.org (http://inscight.org/), * Twitter (http://twitter.com/inscight/), * Convore (https://convore.com/inscight/), * and GitHub (https://github.com/inscight/). Furthermore, we are are always looking to supplement our current repertoire of hosts and special guests. So if you would like to contribute to inSCIght or have something interesting to present on a show, feel free to email us at info_AT_inscight.org. We'd love to have you join the conversation! The inSCIght podcast is a co-production of Enthought, Software Carpentry, and The Hacker Within. 
Thanks for listening! The inSCIght podcast is licensed under the Creative Commons Attribution 3.0 Unported (CC BY 3.0) license. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joris.vankerschaver at gmail.com Thu Feb 24 01:38:08 2011 From: joris.vankerschaver at gmail.com (Joris Vankerschaver) Date: Wed, 23 Feb 2011 22:38:08 -0800 Subject: [SciPy-Dev] optimize.fsolve too accurate Message-ID: <4D65FCD0.9060809@gmail.com> Hi all, I'm relatively new to SciPy, so I hope you will excuse the occasional inaccuracies, but I noticed something slightly strange with scipy.optimize.fsolve: the following snippet import scipy.optimize fun = lambda x: x**2 scipy.optimize.fsolve(fun, 0.5) returns "Warning: The number of calls to function has reached maxfev = 400", while the answer is presented as 2.8405269166788003e-84 (you might have to change the initial conditions somewhat to obtain this error). As the default tolerance for fsolve is roughly 1.4e-8, the computation should have terminated long before reaching this level of precision. My question: is this a bug or am I invoking fsolve in the wrong way? Secondly, is there a quick and easy way to build parts of the scipy library in place for testing? I looked at the implementation and there are a few things I would like to experiment with, but I don't know how to go about this other than by rebuilding scipy in its entirety. Thanks! Joris From ralf.gommers at googlemail.com Thu Feb 24 09:01:00 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Feb 2011 22:01:00 +0800 Subject: [SciPy-Dev] optimize.fsolve too accurate In-Reply-To: <4D65FCD0.9060809@gmail.com> References: <4D65FCD0.9060809@gmail.com> Message-ID: On Thu, Feb 24, 2011 at 2:38 PM, Joris Vankerschaver wrote: > > Secondly, is there a quick and easy way to build parts of the scipy > library in place for testing? 
I looked at the implementation and there
> are a few things I would like to experiment with, but I don't know how
> to go about this other than by rebuilding scipy in its entirety.
>
If you want to only experiment with Python code, you can build in-place
with
$ python setup.py build_ext -i

For compiled code, you're much better off with numscons:
$ # in-place build
$ python setupscons.py scons -i
$ # partial rebuild of optimize module
$ python setupscons.py scons -i --package-list=scipy.optimize
You'll need to install numscons, see
http://projects.scipy.org/numpy/wiki/NumScons.

Ralf

From mellerf at netvision.net.il  Thu Feb 24 09:44:02 2011
From: mellerf at netvision.net.il (Yosef Meller)
Date: Thu, 24 Feb 2011 16:44:02 +0200
Subject: Re: [SciPy-Dev] optimize.fsolve too accurate
In-Reply-To: <4D65FCD0.9060809@gmail.com>
References: <4D65FCD0.9060809@gmail.com>
Message-ID: <4D666EB2.5070605@netvision.net.il>

On 24/02/11 08:38, Joris Vankerschaver wrote:
> I'm relatively new to SciPy, so I hope you will excuse the occasional
> inaccuracies, but I noticed something slightly strange with
> scipy.optimize.fsolve: the following snippet
>
> import scipy.optimize
> fun = lambda x: x**2
> scipy.optimize.fsolve(fun, 0.5)
>
> returns "Warning: The number of calls to function has reached maxfev =
> 400", while the answer is presented as 2.8405269166788003e-84 (you
> might have to change the initial conditions somewhat to obtain this
> error). As the default tolerance for fsolve is roughly 1.4e-8, the
> computation should have terminated long before reaching this level of
> precision.
>
> My question: is this a bug or am I invoking fsolve in the wrong way?

I can confirm that this happens on the first time the code is run.
Second call to fsolve with the same arguments runs as expected. There
might be something wrong with module initialization or something of the
sort.
From pav at iki.fi Thu Feb 24 09:48:17 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Feb 2011 14:48:17 +0000 (UTC) Subject: [SciPy-Dev] optimize.fsolve too accurate References: <4D65FCD0.9060809@gmail.com> <4D666EB2.5070605@netvision.net.il> Message-ID: Thu, 24 Feb 2011 16:44:02 +0200, Yosef Meller wrote: [clip] > I can confirm that this happens on the first time the code is run. > Second call to fsolve with the same arguments runs as expected. There > might be something wrong with module initialization or something of the > sort. Python by default shows each warning only once per occurrence. The interactive session is treated as a single location, and so each warning shows up only once, even though it is issued every time. The main issue here is that MINPACK only terminates based on the relative tolerance of the result --- which does not make much sense if the exact solution happens to be 0. -- Pauli Virtanen From charlesr.harris at gmail.com Thu Feb 24 09:54:38 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 24 Feb 2011 07:54:38 -0700 Subject: [SciPy-Dev] optimize.fsolve too accurate In-Reply-To: <4D65FCD0.9060809@gmail.com> References: <4D65FCD0.9060809@gmail.com> Message-ID: On Wed, Feb 23, 2011 at 11:38 PM, Joris Vankerschaver < joris.vankerschaver at gmail.com> wrote: > Hi all, > > I'm relatively new to SciPy, so I hope you will excuse the occasional > inaccuracies, but I noticed something slightly strange with > scipy.optimize.fsolve: the following snippet > > import scipy.optimize > fun = lambda x: x**2 > scipy.optimize.fsolve(fun, 0.5) > > Just a side note, but the zero solvers are generally much better for the one dimensional case. Although I suspect the problem here is the double zero. 
In [1]: import scipy.optimize In [2]: fun = lambda x: x**2 - 1e-10 In [3]: scipy.optimize.fsolve(fun, 0.5) Out[3]: array([ 1.00000000e-05]) > returns "Warning: The number of calls to function has reached maxfev = > 400", while the answer is presented as 2.8405269166788003e-84 (you > might have to change the initial conditions somewhat to obtain this > error). As the default tolerance for fsolve is roughly 1.4e-8, the > computation should have terminated long before reaching this level of > precision. > > My question: is this a bug or am I invoking fsolve in the wrong way? > > It looks like fsolve only uses relative errors, so that is probably a problem near zero. Newton does better In [5]: fun = lambda x: x**2 In [6]: scipy.optimize.newton(fun, 0.5) Out[6]: 2.0701071395499238e-08 Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Feb 24 22:24:58 2011 From: cournape at gmail.com (David Cournapeau) Date: Fri, 25 Feb 2011 12:24:58 +0900 Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format Message-ID: Hi there, I have added support for simple read/write for HB matrix format. The updated branch is on github: https://github.com/cournape/scipy3/compare/master..._new_hb_io It adds a high-level API (read_hb/write_hb) as well as a lower-level API for fine-grained control. The functions live in scipy.sparse.io. I already put this patch last year, without any reaction, so I will assume that no reaction means yes :) cheers, David From pav at iki.fi Fri Feb 25 04:09:24 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 25 Feb 2011 09:09:24 +0000 (UTC) Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format References: Message-ID: Fri, 25 Feb 2011 12:24:58 +0900, David Cournapeau wrote: > I have added support for simple read/write for HB matrix format. 
The > updated branch is on github: > https://github.com/cournape/scipy3/compare/master..._new_hb_io > > It adds a high-level API (read_hb/write_hb) as well as a lower-level API > for fine-grained control. The functions live in scipy.sparse.io. I > already put this patch last year, without any reaction, so I will assume > that no reaction means yes :) How about putting them in scipy.io? That's where the e.g. matrix market functions are, and it would make sense to put also HB there. Pauli From nwagner at iam.uni-stuttgart.de Fri Feb 25 04:22:08 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 25 Feb 2011 10:22:08 +0100 Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format In-Reply-To: References: Message-ID: On Fri, 25 Feb 2011 09:09:24 +0000 (UTC) Pauli Virtanen wrote: >Fri, 25 Feb 2011 12:24:58 +0900, David Cournapeau wrote: >> I have added support for simple read/write for HB matrix >>format. The >> updated branch is on github: >> https://github.com/cournape/scipy3/compare/master..._new_hb_io >> >> It adds a high-level API (read_hb/write_hb) as well as a >>lower-level API >> for fine-grained control. The functions live in >>scipy.sparse.io. I >> already put this patch last year, without any reaction, >>so I will assume >> that no reaction means yes :) > > How about putting them in scipy.io? That's where the >e.g. matrix market > functions are, and it would make sense to put also HB >there. > > Pauli +1 Nils From joris.vankerschaver at gmail.com Fri Feb 25 14:34:06 2011 From: joris.vankerschaver at gmail.com (J Vankerschaver) Date: Fri, 25 Feb 2011 11:34:06 -0800 Subject: [SciPy-Dev] optimize.fsolve too accurate Message-ID: Hi guys, Thanks for the great replies to my optimize.fsolve query! I'm still not too happy with minpack.hybrd only dealing with relative tolerances, so I'll stick to using the Newton solver for my problem (it doesn't have a double zero -- that only came up when I was looking for a toy problem). 
Numscons looks like an easy to use build system, so I'm glad that I can play with the scipy internals too! All the best, Joris > Message: 3 > Date: Thu, 24 Feb 2011 22:01:00 +0800 > From: Ralf Gommers > Subject: Re: [SciPy-Dev] optimize.fsolve too accurate > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > On Thu, Feb 24, 2011 at 2:38 PM, Joris Vankerschaver > wrote: > > > > Secondly, is there a quick and easy way to build parts of the scipy > > library in place for testing? ?I looked at the implementation and there > > are a few things I would like to experiment with, but I don't know how > > to go about this other than by rebuilding scipy in its entirety. > > > If you want to only experiment with Python code, you can build in-place > with > $ python setup.py build_ext -i > > For compiled code, you're much better off with numscons: > $ # in-place build > $ python setupscons.py scons -i > $ # partial rebuild of optimize module > $ python setupscons.py scons -i --package-list=scipy.optimize > You'll need to install numscons, see > http://projects.scipy.org/numpy/wiki/NumScons. > > Ralf > > > ------------------------------ > > Message: 4 > Date: Thu, 24 Feb 2011 16:44:02 +0200 > From: Yosef Meller > Subject: Re: [SciPy-Dev] optimize.fsolve too accurate > To: scipy-dev at scipy.org > Message-ID: <4D666EB2.5070605 at netvision.net.il> > Content-Type: text/plain; charset=UTF-8; format=flowed > > ?????? 24/02/11 08:38, ????? 
Joris Vankerschaver: > > I'm relatively new to SciPy, so I hope you will excuse the occasional > > inaccuracies, but I noticed something slightly strange with > > scipy.optimize.fsolve: the following snippet > > > > import scipy.optimize > > fun = lambda x: x**2 > > scipy.optimize.fsolve(fun, 0.5) > > > > returns "Warning: The number of calls to function has reached maxfev = > > 400", while the answer is presented as 2.8405269166788003e-84 (you > > might have to change the initial conditions somewhat to obtain this > > error). As the default tolerance for fsolve is roughly 1.4e-8, the > > computation should have terminated long before reaching this level of > > precision. > > > > My question: is this a bug or am I invoking fsolve in the wrong way? > > I can confirm that this happens on the first time the code is run. > Second call to fsolve with the same arguments runs as expected. There > might be something wrong with module initialization or something of the > sort. > > > ------------------------------ > > Message: 5 > Date: Thu, 24 Feb 2011 14:48:17 +0000 (UTC) > From: Pauli Virtanen > Subject: Re: [SciPy-Dev] optimize.fsolve too accurate > To: scipy-dev at scipy.org > Message-ID: > Content-Type: text/plain; charset=UTF-8 > > Thu, 24 Feb 2011 16:44:02 +0200, Yosef Meller wrote: > [clip] > > I can confirm that this happens on the first time the code is run. > > Second call to fsolve with the same arguments runs as expected. There > > might be something wrong with module initialization or something of the > > sort. > > Python by default shows each warning only once per occurrence. The > interactive session is treated as a single location, and so each warning > shows up only once, even though it is issued every time. > > The main issue here is that MINPACK only terminates based on the relative > tolerance of the result --- which does not make much sense if the exact > solution happens to be 0. 
> > -- > Pauli Virtanen > > > > ------------------------------ > > Message: 6 > Date: Thu, 24 Feb 2011 07:54:38 -0700 > From: Charles R Harris > Subject: Re: [SciPy-Dev] optimize.fsolve too accurate > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Wed, Feb 23, 2011 at 11:38 PM, Joris Vankerschaver < > joris.vankerschaver at gmail.com> wrote: > > > Hi all, > > > > I'm relatively new to SciPy, so I hope you will excuse the occasional > > inaccuracies, but I noticed something slightly strange with > > scipy.optimize.fsolve: the following snippet > > > > import scipy.optimize > > fun = lambda x: x**2 > > scipy.optimize.fsolve(fun, 0.5) > > > > > Just a side note, but the zero solvers are generally much better for the > one > dimensional case. Although I suspect the problem here is the double zero. > > In [1]: import scipy.optimize > > In [2]: fun = lambda x: x**2 - 1e-10 > > In [3]: scipy.optimize.fsolve(fun, 0.5) > Out[3]: array([ 1.00000000e-05]) > > > > > returns "Warning: The number of calls to function has reached maxfev = > > 400", while the answer is presented as 2.8405269166788003e-84 (you > > might have to change the initial conditions somewhat to obtain this > > error). As the default tolerance for fsolve is roughly 1.4e-8, the > > computation should have terminated long before reaching this level of > > precision. > > > > My question: is this a bug or am I invoking fsolve in the wrong way? > > > > > It looks like fsolve only uses relative errors, so that is probably a > problem near zero. Newton does better > > In [5]: fun = lambda x: x**2 > > In [6]: scipy.optimize.newton(fun, 0.5) > Out[6]: 2.0701071395499238e-08 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
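The double-zero point is easy to make concrete without SciPy: at a double root f'(x) vanishes together with f(x), and for f(x) = x**2 the Newton update x - f(x)/f'(x) collapses to x/2, i.e. only linear convergence (one bit per step) instead of the usual quadratic rate. A self-contained sketch:

```python
def newton_iterates(f, fprime, x0, steps):
    """Plain Newton-Raphson, keeping every iterate."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

# Double root at 0: each step exactly halves the iterate.
xs = newton_iterates(lambda x: x**2, lambda x: 2.0 * x, 0.5, 20)
print(xs[-1])  # 0.5 * 2**-20, about 4.8e-07 -- still well above a 1.4e-8 tolerance
```

With the root shifted away from zero (the ``x**2 - 1e-10`` variant above), the roots at x = ±1e-5 are simple, f' is non-zero there, and quadratic convergence returns.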
URL: From cournape at gmail.com Sat Feb 26 02:37:47 2011 From: cournape at gmail.com (David Cournapeau) Date: Sat, 26 Feb 2011 16:37:47 +0900 Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format In-Reply-To: References: Message-ID: On Fri, Feb 25, 2011 at 6:09 PM, Pauli Virtanen wrote: > Fri, 25 Feb 2011 12:24:58 +0900, David Cournapeau wrote: >> I have added support for simple read/write for HB matrix format. The >> updated branch is on github: >> https://github.com/cournape/scipy3/compare/master..._new_hb_io >> >> It adds a high-level API (read_hb/write_hb) as well as a lower-level API >> for fine-grained control. The functions live in scipy.sparse.io. I >> already put this patch last year, without any reaction, so I will assume >> that no reaction means yes :) > > How about putting them in scipy.io? That's where the e.g. matrix market > functions are, and it would make sense to put also HB there. Indeed. I wonder how we want to export this into scipy.io: I think for file format it actually makes sense to say from scipy.io.format_name import function instead of putting everything into scipy.io, but I don't feel strongly about it either. cheers, David From cgohlke at uci.edu Sat Feb 26 15:57:50 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 26 Feb 2011 12:57:50 -0800 Subject: [SciPy-Dev] 2to3 backup files in scipy installers Message-ID: <4D69694E.2030207@uci.edu> Hello, the scipy 0.9 rc5 installers for Python 3.1 contain many *.bak files created during the 2to3 conversion. A patch is attached. Other than that, for the first time I got scipy.test() to run without any crashes, errors, or failures on win-amd64-py2.7. Much better than the 16 errors and 9 failures reported for scipy 0.8b1 . Thank you! Christoph -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: 2to3tool.diff URL: From wnbell at gmail.com Sat Feb 26 20:48:44 2011 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 26 Feb 2011 20:48:44 -0500 Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format In-Reply-To: References: Message-ID: On Sat, Feb 26, 2011 at 2:37 AM, David Cournapeau wrote: > > Indeed. I wonder how we want to export this into scipy.io: I think for > file format it actually makes sense to say from scipy.io.format_name > import function instead of putting everything into scipy.io, but I > don't feel strongly about it either. > > cheers, > +1 Organizing the routines into scipy.io.harwell_boeing.* scipy.io.matrix_market.* ... would be clearer and provide a logical home for high-level and low-level APIs. Ideally there'd be some level of uniformity among the highest-level APIs so that one could do scipy.io.{format_name}.read(source) and obtain a semi-standardized result (e.g. dictionary of name->value pairs, where the names can be invented if not specified in the format). If that was feasible, we could additionally provide scipy.io.read(source) which could (under many circumstances) dispatch the appropriate reader. -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Feb 26 23:17:47 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 27 Feb 2011 12:17:47 +0800 Subject: [SciPy-Dev] 2to3 backup files in scipy installers In-Reply-To: <4D69694E.2030207@uci.edu> References: <4D69694E.2030207@uci.edu> Message-ID: On Sun, Feb 27, 2011 at 4:57 AM, Christoph Gohlke wrote: > Hello, > > the scipy 0.9 rc5 installers for Python 3.1 contain many *.bak files created > during the 2to3 conversion. A patch is attached. That looks fine to me for both trunk and 0.9.x, the backups don't seem to be used for anything. But I'll wait for Pauli to confirm, he's the 2to3 expert. 
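The diff itself is scrubbed from the archive; its goal is simply to keep 2to3's ``*.bak`` backups out of the shipped tree. As an illustration only (hypothetical code, not the actual ``2to3tool.diff``, which adjusts the build tooling), a stand-alone cleanup pass with the same effect could look like:

```python
import os

def remove_backup_files(root, suffix=".bak"):
    """Delete 2to3 backup files under ``root`` and return the removed paths."""
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                path = os.path.join(dirpath, name)
                os.remove(path)
                removed.append(path)
    return removed
```

The real fix operates before the installer is assembled rather than post hoc; this sketch only shows the cleanup idea.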
> Other than that, for the first time I got scipy.test() to run without any > crashes, errors, or failures on win-amd64-py2.7. Much better than the 16 > errors and 9 failures reported for scipy 0.8b1 > . Great. Many of those improvements are due to your testing work and patches, so thank you! Cheers, Ralf From pav at iki.fi Sun Feb 27 06:15:27 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 27 Feb 2011 11:15:27 +0000 (UTC) Subject: [SciPy-Dev] 2to3 backup files in scipy installers References: <4D69694E.2030207@uci.edu> Message-ID: On Sun, 27 Feb 2011 12:17:47 +0800, Ralf Gommers wrote: > On Sun, Feb 27, 2011 at 4:57 AM, Christoph Gohlke > wrote: >> the scipy 0.9 rc5 installers for Python 3.1 contain many *.bak files >> created during the 2to3 conversion. A patch is attached. > > That looks fine to me for both trunk and 0.9.x, the backups don't seem > to be used for anything. But I'll wait for Pauli to confirm, he's the > 2to3 expert. The proposed fix is OK, AFAICS. Pauli From ralf.gommers at googlemail.com Mon Feb 28 00:56:31 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 28 Feb 2011 13:56:31 +0800 Subject: [SciPy-Dev] ANN: SciPy 0.9.0 Message-ID: I'm pleased to announce the release of SciPy 0.9.0. SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This release comes seven months after the 0.8.0 release and contains several new features, numerous bug-fixes, improved test coverage, and better documentation. This is the first release that supports Python 3 (with the exception of the scipy.weave module). Sources, binaries, documentation and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.9.0/ Thank you to everybody who contributed to this release. 
Enjoy, The SciPy developers ========================= SciPy 0.9.0 Release Notes ========================= .. contents:: SciPy 0.9.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.9.x branch, and on adding new features on the development trunk. This release requires Python 2.4 - 2.7 or 3.1 - and NumPy 1.5 or greater. Please note that SciPy is still considered to have "Beta" status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have "Beta" status, we are committed to making them as bug-free as possible. However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function's call signatures. Python 3 ======== Scipy 0.9.0 is the first SciPy release to support Python 3. The only module that is not yet ported is ``scipy.weave``. Scipy source code location to be changed ======================================== Soon after this release, Scipy will stop using SVN as the version control system, and move to Git. 
The development source code for Scipy will from then on be found at http://github.com/scipy/scipy New features ============ Delaunay tessellations (``scipy.spatial``) ------------------------------------------ Scipy now includes routines for computing Delaunay tessellations in N dimensions, powered by the Qhull_ computational geometry library. Such calculations can now make use of the new ``scipy.spatial.Delaunay`` interface. .. _Qhull: http://www.qhull.org/ N-dimensional interpolation (``scipy.interpolate``) --------------------------------------------------- Support for scattered data interpolation is now significantly improved. This version includes a ``scipy.interpolate.griddata`` function that can perform linear and nearest-neighbour interpolation for N-dimensional scattered data, in addition to cubic spline (C1-smooth) interpolation in 2D and 1D. An object-oriented interface to each interpolator type is also available. Nonlinear equation solvers (``scipy.optimize``) ----------------------------------------------- Scipy includes new routines for large-scale nonlinear equation solving in ``scipy.optimize``. The following methods are implemented: * Newton-Krylov (``scipy.optimize.newton_krylov``) * (Generalized) secant methods: - Limited-memory Broyden methods (``scipy.optimize.broyden1``, ``scipy.optimize.broyden2``) - Anderson method (``scipy.optimize.anderson``) * Simple iterations (``scipy.optimize.diagbroyden``, ``scipy.optimize.excitingmixing``, ``scipy.optimize.linearmixing``) The ``scipy.optimize.nonlin`` module was completely rewritten, and some of the functions were deprecated (see below). New linear algebra routines (``scipy.linalg``) ---------------------------------------------- Scipy now contains routines for efficiently solving triangular equation systems (``scipy.linalg.solve_triangular``). 
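``solve_triangular`` exists because a triangular system needs no factorisation step: it is solved directly by substitution (SciPy delegates the actual work to LAPACK). A pure-Python sketch of forward substitution for the lower-triangular case:

```python
def forward_substitution(L, b):
    """Solve L @ x = b where L is lower triangular (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        partial = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - partial) / L[i][i]  # assumes a non-zero diagonal
    return x

# 2*x0 = 2 and x0 + 3*x1 = 5  ->  x = [1.0, 4/3]
print(forward_substitution([[2.0, 0.0], [1.0, 3.0]], [2.0, 5.0]))
```

Upper-triangular systems work the same way, iterating from the last row backwards.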
Improved FIR filter design functions (``scipy.signal``) ------------------------------------------------------- The function ``scipy.signal.firwin`` was enhanced to allow the design of highpass, bandpass, bandstop and multi-band FIR filters. The function ``scipy.signal.firwin2`` was added. This function uses the window method to create a linear phase FIR filter with an arbitrary frequency response. The functions ``scipy.signal.kaiser_atten`` and ``scipy.signal.kaiser_beta`` were added. Improved statistical tests (``scipy.stats``) -------------------------------------------- A new function ``scipy.stats.fisher_exact`` was added, that provides Fisher's exact test for 2x2 contingency tables. The function ``scipy.stats.kendalltau`` was rewritten to make it much faster (O(n log(n)) vs O(n^2)). Deprecated features =================== Obsolete nonlinear solvers (in ``scipy.optimize``) -------------------------------------------------- The following nonlinear solvers from ``scipy.optimize`` are deprecated: - ``broyden_modified`` (bad performance) - ``broyden1_modified`` (bad performance) - ``broyden_generalized`` (equivalent to ``anderson``) - ``anderson2`` (equivalent to ``anderson``) - ``broyden3`` (obsoleted by new limited-memory broyden methods) - ``vackar`` (renamed to ``diagbroyden``) Removed features ================ The deprecated modules ``helpmod``, ``pexec`` and ``ppimport`` were removed from ``scipy.misc``. The ``output_type`` keyword in many ``scipy.ndimage`` interpolation functions has been removed. The ``econ`` keyword in ``scipy.linalg.qr`` has been removed. The same functionality is still available by specifying ``mode='economic'``. Old correlate/convolve behavior (in ``scipy.signal``) ----------------------------------------------------- The old behavior for ``scipy.signal.convolve``, ``scipy.signal.convolve2d``, ``scipy.signal.correlate`` and ``scipy.signal.correlate2d`` was deprecated in 0.8.0 and has now been removed. 
Convolve and correlate used to swap their arguments if the second argument has dimensions larger than the first one, and the mode was relative to the input with the largest dimension. The current behavior is to never swap the inputs, which is what most people expect, and is how correlation is usually defined. ``scipy.stats`` --------------- Many functions in ``scipy.stats`` that are either available from numpy or have been superseded, and have been deprecated since version 0.7, have been removed: `std`, `var`, `mean`, `median`, `cov`, `corrcoef`, `z`, `zs`, `stderr`, `samplestd`, `samplevar`, `pdfapprox`, `pdf_moments` and `erfc`. These changes are mirrored in ``scipy.stats.mstats``. ``scipy.sparse`` ---------------- Several methods of the sparse matrix classes in ``scipy.sparse`` which had been deprecated since version 0.7 were removed: `save`, `rowcol`, `getdata`, `listprint`, `ensure_sorted_indices`, `matvec`, `matmat` and `rmatvec`. The functions ``spkron``, ``speye``, ``spidentity``, ``lil_eye`` and ``lil_diags`` were removed from ``scipy.sparse``. The first three functions are still available as ``scipy.sparse.kron``, ``scipy.sparse.eye`` and ``scipy.sparse.identity``. The `dims` and `nzmax` keywords were removed from the sparse matrix constructor. The `colind` and `rowind` attributes were removed from CSR and CSC matrices respectively. ``scipy.sparse.linalg.arpack.speigs`` ------------------------------------- A duplicated interface to the ARPACK library was removed. Other changes ============= ARPACK interface changes ------------------------ The interface to the ARPACK eigenvalue routines in ``scipy.sparse.linalg`` was changed for more robustness. The eigenvalue and SVD routines now raise ``ArpackNoConvergence`` if the eigenvalue iteration fails to converge. 
If partially converged results are desired, they can be accessed as follows:: import numpy as np from scipy.sparse.linalg import eigs, ArpackNoConvergence m = np.random.randn(30, 30) try: w, v = eigs(m, 6) except ArpackNoConvergence, err: partially_converged_w = err.eigenvalues partially_converged_v = err.eigenvectors Several bugs were also fixed. The routines were moreover renamed as follows: - eigen --> eigs - eigen_symmetric --> eigsh - svd --> svds From bsouthey at gmail.com Mon Feb 28 09:50:16 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 28 Feb 2011 08:50:16 -0600 Subject: [SciPy-Dev] Initial support for Harwell Boeing sparse matrix format In-Reply-To: References: Message-ID: <4D6BB628.7040003@gmail.com> On 02/26/2011 07:48 PM, Nathan Bell wrote: > On Sat, Feb 26, 2011 at 2:37 AM, David Cournapeau > wrote: > > > Indeed. I wonder how we want to export this into scipy.io > : I think for > file format it actually makes sense to say from scipy.io.format_name > import function instead of putting everything into scipy.io > , but I > don't feel strongly about it either. > > cheers, > > > +1 > > Organizing the routines into > scipy.io.harwell_boeing.* > scipy.io.matrix_market.* > ... > would be clearer and provide a logical home for high-level and > low-level APIs. > > Ideally there'd be some level of uniformity among the highest-level > APIs so that one could do > scipy.io.{format_name}.read(source) > and obtain a semi-standardized result (e.g. dictionary of name->value > pairs, where the names can be invented if not specified in the format). > > If that was feasible, we could additionally provide > scipy.io.read(source) > which could (under many circumstances) dispatch the appropriate reader. 
> > -- > Nathan Bell wnbell at gmail.com > http://www.wnbell.com/ > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Based on the reasoning in numpy, should the io stuff be renamed to something like spio? Start of thread: http://mail.scipy.org/pipermail/numpy-discussion/2010-March/049543.html Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL:
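Nathan's top-level dispatch suggestion upthread could be sketched roughly as follows — a hypothetical illustration, not an API that SciPy ships: each format module registers a header-sniffing predicate together with its reader, and ``read`` tries them in order.

```python
# Hypothetical reader registry; all names here are illustrative only.
_READERS = []

def register(sniff, reader):
    _READERS.append((sniff, reader))

def read(source):
    """Dispatch to the first registered reader whose sniff accepts the header."""
    with open(source) as fh:
        header = fh.readline()
    for sniff, reader in _READERS:
        if sniff(header):
            return reader(source)
    raise ValueError("no registered reader recognises %r" % source)

# Matrix Market files do start with a '%%MatrixMarket' banner line, so a
# sniffer only needs the first line; the reader stands in for a real parser.
register(lambda h: h.startswith("%%MatrixMarket"),
         lambda src: {"format": "matrix_market", "source": src})
```

Each per-format reader would return the "semi-standardized" name-to-value result Nathan describes; unrecognised formats fall through to an error instead of guessing.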