From nwagner at mecha.uni-stuttgart.de Wed Dec 1 02:53:48 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Dec 2004 08:53:48 +0100 Subject: [SciPy-dev] building lib.lapack without optimization In-Reply-To: References: <41AC4D01.9010000@mecha.uni-stuttgart.de> <41AC5A3F.1010509@mecha.uni-stuttgart.de> <41AC6DED.9030906@mecha.uni-stuttgart.de> Message-ID: <41AD788C.1050101@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > > On Tue, 30 Nov 2004, Nils Wagner wrote: > >>> Also, if your lapack libraries are built with -O3 then rebuilding >>> them with -O2 should fix the segmentation faults. >>> >> This is my make.inc. So, I will replace >> >> OPTS = -funroll-all-loops -fno-f2c -O3 >> >> with >> >> OPTS = -funroll-all-loops -fno-f2c -O2 >> >> Is that o.k. ? > > > Yes. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev First of all, I have rebuild my lapack library with -O2. Secondly, I have installed ATLAS from scratch. From my ATLAS directory, issue : make killall arch=ARCH make startup arch=ARCH make install arch=ARCH Then I build a complete lapack library ATLAS does not provide a full LAPACK library. However, there is a simple way to get ATLAS to provide its faster LAPACK routines to a full LAPACK library. ATLAS's internal routines are distinct from LAPACK's, so it is safe to compile ATLAS's LAPACK routines directly into a netlib-style LAPACK library. First, download and install the standard LAPACK library from the LAPACK homepage . Then, in your ATLAS/lib/ARCH directory (where you should have a liblapack.a), issue the following commands: mkdir tmp cd tmp ar x ../liblapack.a cp ../liblapack.a ar r ../liblapack.a *.o cd .. rm -rf tmp Again, scipy.test() failed with a segmentation fault. Am I missing something ? Nils From pearu at scipy.org Wed Dec 1 03:33:36 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 1 Dec 2004 02:33:36 -0600 (CST) Subject: [SciPy-dev] building lib.lapack without optimization In-Reply-To: <41AD788C.1050101@mecha.uni-stuttgart.de> References: <41AC4D01.9010000@mecha.uni-stuttgart.de> <41AC5A3F.1010509@mecha.uni-stuttgart.de> <41AC6DED.9030906@mecha.uni-stuttgart.de> <41AD788C.1050101@mecha.uni-stuttgart.de> Message-ID: On Wed, 1 Dec 2004, Nils Wagner wrote: > First of all, I have rebuild my lapack library with -O2. > > Secondly, I have installed ATLAS from scratch. > From my ATLAS directory, issue : > > make killall arch=ARCH > make startup arch=ARCH > make install arch=ARCH > > Then I build a complete lapack library > > ATLAS does not provide a full LAPACK library. However, there is a simple way > to get ATLAS to provide its faster LAPACK routines to a full LAPACK library. > ATLAS's internal routines are distinct from LAPACK's, so it is safe to > compile ATLAS's LAPACK routines directly into a netlib-style LAPACK library. > First, download and install the standard LAPACK library from the LAPACK > homepage . Then, in your ATLAS/lib/ARCH > directory (where you should have a liblapack.a), issue the following > commands: > > mkdir tmp > cd tmp > ar x ../liblapack.a > cp ../liblapack.a > ar r ../liblapack.a *.o > cd .. > rm -rf tmp > > > Again, scipy.test() failed with a segmentation fault. Am I missing something > ? I think from the success of using Fortran blas/lapack libraries it is now clear that the issue is not in scipy. 
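For illustration, the suspect routine can also be exercised in isolation with a few lines of Python. This is an editorial sketch, not code posted in the thread; it assumes a build in which scipy.lib.lapack.flapack is importable, as in the interpreter session shown further down, and it reuses the Hermitian test matrix that triggers the crash there.

from scipy.lib.lapack import flapack
from scipy import linalg

# Hermitian matrix of the kind used by the failing check_heev test.
a = [[1, 2, 3],
     [2, 2, 3],
     [3, 3, 6]]

# Independent reference result through the generic eigensolver.
print(linalg.eigvals(a))

# Single-precision Hermitian eigensolver wrapped directly from LAPACK/ATLAS;
# if this call crashes while the line above succeeds, the fault lies in the
# library behind the binding (ATLAS or the compiler used to build it),
# not in the calling code.
w, v, info = flapack.cheev(a)
print(info)            # 0 means LAPACK reported success
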
If you update scipy CVS tree, install scipy_core, then running in Lib/lapack python tests/test_lapack.py -v 10 should show test name before crashing python. Then study the corresponding test and try to find the simplest way to reproduce the segmentation fault. This probably means calling some function from flapack.so. Try using different arguments for this function to see if some combination does succeed. If not, create a C program that is using the corresponding routine to prove a possible bug in ATLAS or compiler. Then report your findings to scipy-dev and we'll try to find a workaround for you. Another option for you would be arrange a temporary account to your machine so that I could login and try to do some diagnostics myself. Regards, Pearu From nwagner at mecha.uni-stuttgart.de Wed Dec 1 05:42:42 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Dec 2004 11:42:42 +0100 Subject: [SciPy-dev] building lib.lapack without optimization In-Reply-To: References: <41AC4D01.9010000@mecha.uni-stuttgart.de> <41AC5A3F.1010509@mecha.uni-stuttgart.de> <41AC6DED.9030906@mecha.uni-stuttgart.de> <41AD788C.1050101@mecha.uni-stuttgart.de> Message-ID: <41ADA022.8000300@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > > On Wed, 1 Dec 2004, Nils Wagner wrote: > >> First of all, I have rebuild my lapack library with -O2. >> >> Secondly, I have installed ATLAS from scratch. >> From my ATLAS directory, issue : >> >> make killall arch=ARCH >> make startup arch=ARCH >> make install arch=ARCH >> >> Then I build a complete lapack library >> >> ATLAS does not provide a full LAPACK library. However, there is a >> simple way to get ATLAS to provide its faster LAPACK routines to a >> full LAPACK library. ATLAS's internal routines are distinct from >> LAPACK's, so it is safe to compile ATLAS's LAPACK routines directly >> into a netlib-style LAPACK library. First, download and install the >> standard LAPACK library from the LAPACK homepage >> . Then, in your ATLAS/lib/ARCH >> directory (where you should have a liblapack.a), issue the following >> commands: >> >> mkdir tmp >> cd tmp >> ar x ../liblapack.a >> cp ../liblapack.a >> ar r ../liblapack.a *.o >> cd .. >> rm -rf tmp >> >> >> Again, scipy.test() failed with a segmentation fault. Am I missing >> something ? > > > I think from the success of using Fortran blas/lapack libraries it is > now clear that the issue is not in scipy. > > If you update scipy CVS tree, install scipy_core, then running in > Lib/lapack > > python tests/test_lapack.py -v 10 > > should show test name before crashing python. > python2.3 tests/test_lapack.py -v 10 Found 66 tests for __main__ check_gebal (__main__.test_flapack_complex) ... ok check_heev (__main__.test_flapack_complex)Segmentation fault > Then study the corresponding test and try to find the simplest way to > reproduce the segmentation fault. This probably means calling some > function from flapack.so. Try using different arguments for this > function to see if some combination does succeed. Python 2.3.3 (#1, Apr 6 2004, 01:47:39) [GCC 3.3.3 (SuSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.lib.lapack.flapack.cheev([[1,2],[2,2]]) (array([-0.56155282, 3.56155276],'f'), array([[-0.78820544+0.j, 0.61541224+0.j], [ 0.61541224+0.j, 0.78820544+0.j]],'F'), 0) >>> scipy.lib.lapack.flapack.cheev([[1,2,3],[2,2,3],[3,3,6]]) Segmentation fault >>> scipy.lib.lapack.flapack.cheev([[1j,2j],[3,2]]) Segmentation fault > If not, create a C program that is using the corresponding routine to > prove a possible bug in ATLAS or compiler. Then report your findings > to scipy-dev and we'll try to find a workaround for you. gcc main.c -L/var/tmp/LAPACK -llapack -lg2c /home/nwagner> gdb a.out GNU gdb 6.1 Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i586-suse-linux"...Using host libthread_db library "/lib/tls/libthread_db.so.1". (gdb) run Starting program: /home/nwagner/a.out Program received signal SIGSEGV, Segmentation fault. 0x400961db in cheev_ () from /usr/lib/liblapack.so.3 I do not understand the usage of /usr/lib/liblapack.so.3. This library comes with SuSE as an rpm... Nils > > Another option for you would be arrange a temporary account to your > machine so that I could login and try to do some diagnostics myself. > > Regards, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- Dr.-Ing. Nils Wagner Institut A f?r Mechanik Universit?t Stuttgart Pfaffenwaldring 9 D-70550 Stuttgart Tel.: (+49) 0711 685 6262 Fax.: (+49) 0711 685 6282 E-mail: nwagner at mecha.uni-stuttgart.de URL : http://www.mecha.uni-stuttgart.de -------------- next part -------------- A non-text attachment was scrubbed... Name: main.c Type: text/x-c Size: 415 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Wed Dec 1 07:15:23 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Dec 2004 13:15:23 +0100 Subject: [SciPy-dev] Problems with building scipy Message-ID: <41ADB5DB.30400@mecha.uni-stuttgart.de> Hi all, Finally, I decided to use FORTRAN LAPACK/BLAS libraries instead of ATLAS. The environmental variables are set as follows vs/scipy> echo $ATLAS None cvs/scipy> echo $BLAS None cvs/scipy> echo $LAPACK None cvs/scipy> echo $LAPACK_SRC /var/tmp/src/lapack cvs/scipy> echo $BLAS_SRC /var/tmp/src/blas Now python2.3 setup.py build yields Traceback (most recent call last): File "setup.py", line 112, in ? 
setup_package(ignore_packages) File "setup.py", line 99, in setup_package url = "http://www.scipy.org", File "scipy_core/scipy_distutils/core.py", line 73, in setup return old_setup(**new_attr) File "/usr/lib/python2.3/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.3/distutils/dist.py", line 907, in run_commands self.run_command(cmd) File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command cmd_obj.run() File "/usr/lib/python2.3/distutils/command/build.py", line 107, in run self.run_command(cmd_name) File "/usr/lib/python2.3/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command cmd_obj.run() File "scipy_core/scipy_distutils/command/build_src.py", line 81, in run self.build_sources() File "scipy_core/scipy_distutils/command/build_src.py", line 88, in build_sources self.build_extension_sources(ext) File "scipy_core/scipy_distutils/command/build_src.py", line 122, in build_extension_sources sources = self.generate_sources(sources, ext) File "scipy_core/scipy_distutils/command/build_src.py", line 164, in generate_sources source = func(extension, build_dir) File "Lib/lib/blas/setup_blas.py", line 94, in get_cblas_source f = open(source,'w') NameError: global name 'source' is not defined Nils From aisaac at american.edu Wed Dec 1 09:28:41 2004 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 1 Dec 2004 09:28:41 -0500 (Eastern Standard Time) Subject: [SciPy-dev] feature request: expressions in mat strings Message-ID: >>> print mat('1/2') Matrix([ [12]) i. This is both unexpected and as far as I can tell undocumented. Unexpected is always a problem... ii. Parsing lines on spaces or commas and evaluating the resulting tokens seems more reasonable and makes 'mat' more useful. Please consider this. Thank you, Alan Isaac From pearu at scipy.org Wed Dec 1 12:33:05 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 1 Dec 2004 11:33:05 -0600 (CST) Subject: [SciPy-dev] Problems with building scipy In-Reply-To: <41ADB5DB.30400@mecha.uni-stuttgart.de> References: <41ADB5DB.30400@mecha.uni-stuttgart.de> Message-ID: On Wed, 1 Dec 2004, Nils Wagner wrote: > Hi all, > > Finally, I decided to use FORTRAN LAPACK/BLAS libraries instead of ATLAS. > The environmental variables are set as follows > vs/scipy> echo $ATLAS > None > cvs/scipy> echo $BLAS > None > cvs/scipy> echo $LAPACK > None > cvs/scipy> echo $LAPACK_SRC > /var/tmp/src/lapack > cvs/scipy> echo $BLAS_SRC > /var/tmp/src/blas > > Now python2.3 setup.py build yields > > Traceback (most recent call last): > f = open(source,'w') > NameError: global name 'source' is not defined Fixed in CVS. Pearu From pearu at scipy.org Wed Dec 1 12:40:55 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 1 Dec 2004 11:40:55 -0600 (CST) Subject: [SciPy-dev] building lib.lapack without optimization In-Reply-To: <41ADA022.8000300@mecha.uni-stuttgart.de> References: <41AC4D01.9010000@mecha.uni-stuttgart.de> <41AC5A3F.1010509@mecha.uni-stuttgart.de> <41AD788C.1050101@mecha.uni-stuttgart.de> <41ADA022.8000300@mecha.uni-stuttgart.de> Message-ID: On Wed, 1 Dec 2004, Nils Wagner wrote: >> If not, create a C program that is using the corresponding routine to prove >> a possible bug in ATLAS or compiler. Then report your findings to scipy-dev >> and we'll try to find a workaround for you. 
> > gcc main.c -L/var/tmp/LAPACK -llapack -lg2c > > /home/nwagner> gdb a.out > > (gdb) run > Starting program: /home/nwagner/a.out > > Program received signal SIGSEGV, Segmentation fault. > 0x400961db in cheev_ () from /usr/lib/liblapack.so.3 > > I do not understand the usage of /usr/lib/liblapack.so.3. This library comes > with SuSE as an rpm... Does anyone have a good reference about linking basics for Nils? Pearu From nwagner at mecha.uni-stuttgart.de Thu Dec 2 03:04:04 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Dec 2004 09:04:04 +0100 Subject: [SciPy-dev] Problems with building scipy In-Reply-To: References: <41ADB5DB.30400@mecha.uni-stuttgart.de> Message-ID: <41AECC74.8010501@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > > On Wed, 1 Dec 2004, Nils Wagner wrote: > >> Hi all, >> >> Finally, I decided to use FORTRAN LAPACK/BLAS libraries instead of >> ATLAS. >> The environmental variables are set as follows >> vs/scipy> echo $ATLAS >> None >> cvs/scipy> echo $BLAS >> None >> cvs/scipy> echo $LAPACK >> None >> cvs/scipy> echo $LAPACK_SRC >> /var/tmp/src/lapack >> cvs/scipy> echo $BLAS_SRC >> /var/tmp/src/blas >> >> Now python2.3 setup.py build yields >> >> Traceback (most recent call last): >> f = open(source,'w') >> NameError: global name 'source' is not defined > > > Fixed in CVS. > Thank you. Now there is only one "failure" in scipy.test() using FORTRAN LAPACK/BLAS. ====================================================================== FAIL: check_heev_complex (scipy.lib.lapack.test_lapack.test_flapack_complex) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 31, in check_heev_complex assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/usr/lib/python2.3/site-packages/scipy_test/testing.py", line 742, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [-1.2905481 -4.3758251e+00j -2.0410486 +1.3645462e+00j 3.5935487 +2.8312206e-07j] Array 2: [-1.2905486-4.3758267j -2.041049 +1.3645459j 3.5935484-0.j ] ---------------------------------------------------------------------- Ran 1172 tests in 2.816s FAILED (failures=1) Meanwhile, I have also test gcc3.3.4, gcc3.3.2 and gcc3.3.1 (SuSE9.2/SuSE9.1 /SuSE9.0). Unfortunately, I got the same segmentation faults as mentioned earlier when using ATLAS 3.6 / 3.7.8. This fact is (at least from my point of view) very unsatisfactory. Any comment would be appreciated. Nils > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From nwagner at mecha.uni-stuttgart.de Thu Dec 2 04:04:20 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Dec 2004 10:04:20 +0100 Subject: [SciPy-dev] Feature request : Comparison of sparse matrices is not implemented. 
Message-ID: <41AEDA94.70908@mecha.uni-stuttgart.de> Hi all, I tried to visualize the structure of large and sparse matrices using from matplotlib.colors import LinearSegmentedColormap from matplotlib.matlab import * from scipy import * import IPython def spy2(Z): """ SPY(Z) plots the sparsity pattern of the matrix S as an image """ #binary colormap min white, max black cmapdata = { 'red' : ((0., 1., 1.), (1., 0., 0.)), 'green': ((0., 1., 1.), (1., 0., 0.)), 'blue' : ((0., 1., 1.), (1., 0., 0.)) } binary = LinearSegmentedColormap('binary', cmapdata, 2) Z = where(Z>0,1.,0.) imshow(transpose(Z), interpolation='nearest', cmap=binary) rows, cols, entries, rep, field, symm = io.mminfo('k0.mtx') print 'number of rows, cols and entries', rows, cols, entries print 'Start reading matrix - this may take a minute' ma = io.mmread('k0.mtx') print 'Finished' flag = 1 if flag == 1: spy2(ma) show() It failed. Is it somehow possible to visualize sparse matrices ? Any suggestion would be appreciated. Thanks in advance Nils number of rows, cols and entries 67986 67986 4222171 Start reading matrix - this may take a minute Finished Traceback (most recent call last): File "spy.py", line 29, in ? spy2(ma) File "spy.py", line 19, in spy2 Z = where(Z>0,1.,0.) File "/usr/lib/python2.3/site-packages/scipy/sparse/Sparse.py", line 145, in __cmp__ raise TypeError, "Comparison of sparse matrices is not implemented." TypeError: Comparison of sparse matrices is not implemented. From nwagner at mecha.uni-stuttgart.de Thu Dec 2 04:10:12 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Dec 2004 10:10:12 +0100 Subject: [SciPy-dev] IPYthon's who() and sparse matrices Message-ID: <41AEDBF4.7090609@mecha.uni-stuttgart.de> Hi all, from scipy import * import IPython a = rand(4,4) who() speye=sparse.csc_matrix(identity(4)) print speye who() Name Shape Bytes Type =========================================================== a 4 x 4 128 double Upper bound on total bytes = 128 (0, 0) 1.0 (1, 1) 1.0 (2, 2) 1.0 (3, 3) 1.0 Name Shape Bytes Type =========================================================== a 4 x 4 128 double Upper bound on total bytes = 128 speye is not displayed by who(). For what reason ? Nils From pearu at scipy.org Thu Dec 2 06:04:51 2004 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 2 Dec 2004 05:04:51 -0600 (CST) Subject: [SciPy-dev] Problems with building scipy In-Reply-To: <41AECC74.8010501@mecha.uni-stuttgart.de> References: <41ADB5DB.30400@mecha.uni-stuttgart.de> <41AECC74.8010501@mecha.uni-stuttgart.de> Message-ID: On Thu, 2 Dec 2004, Nils Wagner wrote: > Now there is only one "failure" in scipy.test() using FORTRAN LAPACK/BLAS. > > ====================================================================== > FAIL: check_heev_complex (scipy.lib.lapack.test_lapack.test_flapack_complex) > ---------------------------------------------------------------------- Fixed in CVS. This failure appeared when using Fortran BLAS. > Meanwhile, I have also test gcc3.3.4, gcc3.3.2 and gcc3.3.1 > (SuSE9.2/SuSE9.1 /SuSE9.0). Unfortunately, I got the same segmentation > faults as mentioned earlier when using ATLAS 3.6 / 3.7.8. This fact is > (at least from my point of view) very unsatisfactory. Comments: (i) Have you tried using ATLAS libraries from http://www.scipy.org/download/atlasbinaries/linux/ ? (ii) Always use ldd on flapack.so to check that system blas/lapack libraries are not being used. Also verify the linking command for flapack.so and the existence of .a files used in -l.. and that -L.. 
options are properly ordered so that correct .a files are picked up. Pearu From rkern at ucsd.edu Thu Dec 2 06:19:56 2004 From: rkern at ucsd.edu (Robert Kern) Date: Thu, 02 Dec 2004 03:19:56 -0800 Subject: [SciPy-dev] Feature request : Comparison of sparse matrices is not implemented. In-Reply-To: <41AEDA94.70908@mecha.uni-stuttgart.de> References: <41AEDA94.70908@mecha.uni-stuttgart.de> Message-ID: <41AEFA5C.1080408@ucsd.edu> Nils Wagner wrote: > Hi all, > > I tried to visualize the structure of large and sparse matrices using > > from matplotlib.colors import LinearSegmentedColormap > from matplotlib.matlab import * > from scipy import * > import IPython > > def spy2(Z): > """ > SPY(Z) plots the sparsity pattern of the matrix S as an image > """ > > #binary colormap min white, max black > cmapdata = { > 'red' : ((0., 1., 1.), (1., 0., 0.)), > 'green': ((0., 1., 1.), (1., 0., 0.)), > 'blue' : ((0., 1., 1.), (1., 0., 0.)) > } > binary = LinearSegmentedColormap('binary', cmapdata, 2) > > Z = where(Z>0,1.,0.) > imshow(transpose(Z), interpolation='nearest', cmap=binary) Since imshow() doesn't deal with sparse matrices (to my knowledge), but only dense matrices, you need to use Z.todense() regardless. Z = Z.transp().todense() > 0 imshow(Z, ...) If you really need the array to be Float, you can explicitly cast it. Otherwise, where(, 1.0, 0.0) is extraneous. I'm sure sparse matrix comparisons are already "on the list," as it were. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Thu Dec 2 06:55:11 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Dec 2004 12:55:11 +0100 Subject: [SciPy-dev] Feature request : Comparison of sparse matrices is not implemented. In-Reply-To: <41AEFA5C.1080408@ucsd.edu> References: <41AEDA94.70908@mecha.uni-stuttgart.de> <41AEFA5C.1080408@ucsd.edu> Message-ID: <41AF029F.4050106@mecha.uni-stuttgart.de> Robert Kern wrote: > Nils Wagner wrote: > >> Hi all, >> >> I tried to visualize the structure of large and sparse matrices using >> >> from matplotlib.colors import LinearSegmentedColormap >> from matplotlib.matlab import * >> from scipy import * >> import IPython >> >> def spy2(Z): >> """ >> SPY(Z) plots the sparsity pattern of the matrix S as an image >> """ >> >> #binary colormap min white, max black >> cmapdata = { >> 'red' : ((0., 1., 1.), (1., 0., 0.)), >> 'green': ((0., 1., 1.), (1., 0., 0.)), >> 'blue' : ((0., 1., 1.), (1., 0., 0.)) >> } >> binary = LinearSegmentedColormap('binary', cmapdata, 2) >> >> Z = where(Z>0,1.,0.) >> imshow(transpose(Z), interpolation='nearest', cmap=binary) > > > Since imshow() doesn't deal with sparse matrices (to my knowledge), > but only dense matrices, you need to use Z.todense() regardless. > This might be a memory problem for such large matrices arising in my applications... Just, to receive an impression the number of rows, cols and entries are 67986, 67986 and 4222171, respectively. Nils > Z = Z.transp().todense() > 0 > imshow(Z, ...) > > If you really need the array to be Float, you can explicitly cast it. > Otherwise, where(, 1.0, 0.0) is extraneous. > > I'm sure sparse matrix comparisons are already "on the list," as it were. 
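For a sense of scale, here is a back-of-the-envelope check using the dimensions quoted above; it is an editorial illustration, not a calculation made in the thread.

# Cost of densifying a 67986 x 67986 matrix that has about 4.2 million nonzeros.
n = 67986
nnz = 4222171

print(n * n * 8 / 1e9)          # ~37 GB as a dense double-precision array
print(n * n * 1 / 1e9)          # ~4.6 GB even at one byte per entry
print(100.0 * nnz / (n * n))    # ~0.09 percent of the entries are nonzero

So a straight todense() is out of reach on 2004-era memory, which is why working only with the nonzero pattern is attractive here.
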
> From nwagner at mecha.uni-stuttgart.de Thu Dec 2 07:47:30 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Dec 2004 13:47:30 +0100 Subject: [SciPy-dev] Problems with building scipy In-Reply-To: References: <41ADB5DB.30400@mecha.uni-stuttgart.de> <41AECC74.8010501@mecha.uni-stuttgart.de> Message-ID: <41AF0EE2.504@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > > On Thu, 2 Dec 2004, Nils Wagner wrote: > >> Now there is only one "failure" in scipy.test() using FORTRAN >> LAPACK/BLAS. >> >> ====================================================================== >> FAIL: check_heev_complex >> (scipy.lib.lapack.test_lapack.test_flapack_complex) >> ---------------------------------------------------------------------- > > > Fixed in CVS. This failure appeared when using Fortran BLAS. > >> Meanwhile, I have also test gcc3.3.4, gcc3.3.2 and gcc3.3.1 >> (SuSE9.2/SuSE9.1 /SuSE9.0). Unfortunately, I got the same >> segmentation faults as mentioned earlier when using ATLAS 3.6 / >> 3.7.8. This fact is (at least from my point of view) very >> unsatisfactory. > > > Comments: > (i) Have you tried using ATLAS libraries from > http://www.scipy.org/download/atlasbinaries/linux/ > ? No, I did not. > (ii) Always use ldd on flapack.so to check that system blas/lapack > libraries are not being used. I have removed the rpm's containing liblapack and libblas. BTW, I found two different flapack.so on my system. -rwxr-xr-x 1 root root 2342056 2004-12-02 12:59 /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so -rwxr-xr-x 1 root root 1979744 2004-12-02 13:01 /usr/lib/python2.3/site-packages/scipy/linalg/flapack.so Is that o.k. ? cvs/scipy> ldd /usr/lib/python2.3/site-packages/scipy/linalg/flapack.so linux-gate.so.1 => (0xffffe000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0x4028e000) libm.so.6 => /lib/tls/libm.so.6 (0x402ac000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x402ce000) libc.so.6 => /lib/tls/libc.so.6 (0x402d7000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000) cvs/scipy> ldd /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so linux-gate.so.1 => (0xffffe000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0x40305000) libm.so.6 => /lib/tls/libm.so.6 (0x40323000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x40345000) libc.so.6 => /lib/tls/libc.so.6 (0x4034e000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000) Nils > Also verify the linking command for flapack.so and the existence of .a > files used in -l.. and that -L.. > options are properly ordered so that correct .a files are picked up. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From rkern at ucsd.edu Thu Dec 2 14:24:12 2004 From: rkern at ucsd.edu (Robert Kern) Date: Thu, 02 Dec 2004 11:24:12 -0800 Subject: [SciPy-dev] Feature request : Comparison of sparse matrices is not implemented. 
In-Reply-To: <41AF029F.4050106@mecha.uni-stuttgart.de> References: <41AEDA94.70908@mecha.uni-stuttgart.de> <41AEFA5C.1080408@ucsd.edu> <41AF029F.4050106@mecha.uni-stuttgart.de> Message-ID: <41AF6BDC.4080504@ucsd.edu> Nils Wagner wrote: > Robert Kern wrote: > >> Nils Wagner wrote: >> >>> Hi all, >>> >>> I tried to visualize the structure of large and sparse matrices using >>> >>> from matplotlib.colors import LinearSegmentedColormap >>> from matplotlib.matlab import * >>> from scipy import * >>> import IPython >>> >>> def spy2(Z): >>> """ >>> SPY(Z) plots the sparsity pattern of the matrix S as an image >>> """ >>> >>> #binary colormap min white, max black >>> cmapdata = { >>> 'red' : ((0., 1., 1.), (1., 0., 0.)), >>> 'green': ((0., 1., 1.), (1., 0., 0.)), >>> 'blue' : ((0., 1., 1.), (1., 0., 0.)) >>> } >>> binary = LinearSegmentedColormap('binary', cmapdata, 2) >>> >>> Z = where(Z>0,1.,0.) >>> imshow(transpose(Z), interpolation='nearest', cmap=binary) >> >> >> >> Since imshow() doesn't deal with sparse matrices (to my knowledge), >> but only dense matrices, you need to use Z.todense() regardless. >> > This might be a memory problem for such large matrices arising in my > applications... > Just, to receive an impression the number of rows, cols and entries are > 67986, 67986 and 4222171, respectively. Then you'll have to patch imshow to deal with sparse matrices. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Fri Dec 3 05:43:17 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Dec 2004 11:43:17 +0100 Subject: [SciPy-dev] Feature request : Comparison of sparse matrices is not implemented. In-Reply-To: <41AF6BDC.4080504@ucsd.edu> References: <41AEDA94.70908@mecha.uni-stuttgart.de> <41AEFA5C.1080408@ucsd.edu> <41AF029F.4050106@mecha.uni-stuttgart.de> <41AF6BDC.4080504@ucsd.edu> Message-ID: <41B04345.3010003@mecha.uni-stuttgart.de> Robert Kern wrote: > Nils Wagner wrote: > >> Robert Kern wrote: >> >>> Nils Wagner wrote: >>> >>>> Hi all, >>>> >>>> I tried to visualize the structure of large and sparse matrices using >>>> >>>> from matplotlib.colors import LinearSegmentedColormap >>>> from matplotlib.matlab import * >>>> from scipy import * >>>> import IPython >>>> >>>> def spy2(Z): >>>> """ >>>> SPY(Z) plots the sparsity pattern of the matrix S as an image >>>> """ >>>> >>>> #binary colormap min white, max black >>>> cmapdata = { >>>> 'red' : ((0., 1., 1.), (1., 0., 0.)), >>>> 'green': ((0., 1., 1.), (1., 0., 0.)), >>>> 'blue' : ((0., 1., 1.), (1., 0., 0.)) >>>> } >>>> binary = LinearSegmentedColormap('binary', cmapdata, 2) >>>> >>>> Z = where(Z>0,1.,0.) >>>> imshow(transpose(Z), interpolation='nearest', cmap=binary) >>> >>> >>> >>> >>> Since imshow() doesn't deal with sparse matrices (to my knowledge), >>> but only dense matrices, you need to use Z.todense() regardless. >>> >> This might be a memory problem for such large matrices arising in my >> applications... >> Just, to receive an impression the number of rows, cols and entries >> are 67986, 67986 and 4222171, respectively. > > > Then you'll have to patch imshow to deal with sparse matrices. > Is there someone who can help me with that task ? 
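One way to get a spy-style picture without ever building the dense array is to bin the nonzero coordinates onto a coarse grid. The sketch below is an editorial illustration, not code from the thread, and it uses present-day numpy/scipy/matplotlib names (numpy.histogram2d, the row/col arrays of a COO matrix, pyplot), which postdate the 2004 APIs discussed here.

import numpy as np
import matplotlib.pyplot as plt
from scipy import io, sparse

def spy_binned(A, bins=1000):
    # Histogram the nonzero (row, col) coordinates instead of densifying A.
    A = sparse.coo_matrix(A)          # exposes .row and .col index arrays
    counts, _, _ = np.histogram2d(A.row, A.col, bins=bins,
                                  range=[[0, A.shape[0]], [0, A.shape[1]]])
    plt.imshow(counts > 0, cmap='gray_r', interpolation='nearest')
    plt.show()

# spy_binned(io.mmread('k0.mtx'))     # mmread returns a sparse (COO) matrix

A 1000 x 1000 occupancy grid is about a megabyte, versus tens of gigabytes for the full dense array, so even very large matrices can be inspected this way.
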
Nils From nwagner at mecha.uni-stuttgart.de Fri Dec 3 07:02:18 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Dec 2004 13:02:18 +0100 Subject: [SciPy-dev] building lib.lapack without optimization In-Reply-To: References: <41AC4D01.9010000@mecha.uni-stuttgart.de> <41AC5A3F.1010509@mecha.uni-stuttgart.de> <41AD788C.1050101@mecha.uni-stuttgart.de> <41ADA022.8000300@mecha.uni-stuttgart.de> Message-ID: <41B055CA.3020708@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > > On Wed, 1 Dec 2004, Nils Wagner wrote: > >>> If not, create a C program that is using the corresponding routine >>> to prove a possible bug in ATLAS or compiler. Then report your >>> findings to scipy-dev and we'll try to find a workaround for you. >> >> >> gcc main.c -L/var/tmp/LAPACK -llapack -lg2c >> >> /home/nwagner> gdb a.out >> >> (gdb) run >> Starting program: /home/nwagner/a.out >> >> Program received signal SIGSEGV, Segmentation fault. >> 0x400961db in cheev_ () from /usr/lib/liblapack.so.3 >> >> I do not understand the usage of /usr/lib/liblapack.so.3. This >> library comes with SuSE as an rpm... > > > Does anyone have a good reference about linking basics for Nils? > I found one. http://sourceforge.net/tracker/index.php?func=detail&aid=1067122&group_id=1369&atid=101369 Nils > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From jmiller at stsci.edu Tue Dec 7 18:11:36 2004 From: jmiller at stsci.edu (Todd Miller) Date: 07 Dec 2004 18:11:36 -0500 Subject: [SciPy-dev] Re: Fundamental scipy testing changes In-Reply-To: References: Message-ID: <1102461096.17245.458.camel@halloween.stsci.edu> The discussion below relates to porting scipy to numarray, starting with some small changes to scipy_test.testing. The discussion began as private e-mail but has crossed over to scipy-dev at Pearu's suggestion. On Tue, 2004-12-07 at 16:16, Pearu Peterson wrote: > Hi Todd, > > On 7 Dec 2004, Todd Miller wrote: > > > I took a closer look at how my scipy_base changes affect scipy.test() > > and found a couple things in my scipy_base changes that needed fixing. > > Sorry I didn't think of this sooner. These fixes are committed now on > > the numarray branch in CVS. I also honed in on what I think should be > > the starting point for discussion: the changes I made to > > scipy_test/testing.py. > > Note that assert_equal, assert_almost_equal, assert_approx_equal > were not meant to be used with array arguments (I didn't implement them > but its obvious from reading the code). Thanks for pointing this out... I noticed the array versions only peripherally and didn't understand the distinction. > For checking the equality of > array arguments assert_array_equal or assert_array_almost_equal should be > used. If some scipy test suite uses assert_equal, etc with array > arguments then I think this is a bug of this particular test suite, > not of testing.py. So, using scipy_base.all in assert_equal, etc is not > necessary (unless we want to drop assert_array_* functions). Understood. Are we agreed that it is appropriate to use all() in the assert_array_* functions? > > In order to do the numarray scipy_base changes and pass > > scipy_base.test(), I modified scipy_test/testing.py in ways which I > > assert are good. Mostly, I used all() in contexts where an array result > > was being used as a truth value. In those contexts, the array > > __nonzero__() function is executed. 
NumArray.__nonzero__() raises an > > exception causing many tests to fail that should succeed. Using all() > > reduces the "truth array" to a single scalar value, so > > NumArray.__nonzero__() is never called. > > > > Where I think early agreement needs to happen is that my testing.py > > changes are good even though they expose a handful of latent scipy bugs > > or unit test bugs because Numeric's __nonzero__() has the meaning of > > any() and not all(). So, for say an equality test, if any of the > > values were equal, the test would succeed even if most of the arrays' > > values were *not* equal. Switching to all() as I am advocating exposes > > hidden problems like these. > > At the moment I have no comments on this as I haven't tried the numarray > branch of scipy yet. I think we should square away the scipy_test.testing changes before anyone messes with the numarray branch. > Btw, your discussion about the changes to scipy > sounds reasonable. However, why do you say that using alltrue is an error? > I thought all(x) is equivalent to alltrue(ravel(x)) and in assert_array_* > functions the arguments to alltrue are already ravel'ed. I looked at my diffs more carefully and see that you're right, the alltrues are not bugs because they're only fed 1D arrays. I have not reexamined all my scipy_base changes for instances of alltrue. > > If my testing.py changes are agreed upon and the exposed bugs are either > > fixed or acknowledged as known, we have a better basis for examining > > the rest of the numarray changes to scipy_base. > > > > I attached 3 files for you to look at: > > > > 1. testing.diffs (changes to -r HEAD of scipy_test/testing.py) > > > > 2. results.HEAD (scipy.test() results against the HEAD of CVS using > > Numeric) > > I got 4 failures and can't see the nightly scoreboard yet so I don't > > know if these are expected. > > > > 3. results.testing.changes (scipy.test() results with testing.diffs > > applied) > > I got 11 failures (including the original 4) most of which I believe are > > either bugs or unit test bugs. > > The failures, where you see almost equal arrays, may occur when using > Fortran blas; My scipy.test() does complain about not finding clapack so maybe I am using a Fortran blas too. > these errors should be fixed by using proper decimal > argument in assert_array_almost_equal call of the corresponding tests (I > thought I have fixed these errors in scipy HEAD already). As you said I think, the (incorrect) changes to assert_equal exposed (incorrect) uses of assert_equal in array contexts. > Other failures need more attention. > > > If any of you would rather not be included in my future numarray > > mailings, or you have suggestions for a better venue for them, please > > let me know. > > I suggest using scipy-dev. > > Thanks, > Pearu I re-attached the attachments in case anyone on scipy-dev wants to look as well; there's nothing new there. Thanks for looking it over, Todd -------------- next part -------------- A non-text attachment was scrubbed... 
Name: testing.diffs Type: text/x-patch Size: 5846 bytes Desc: not available URL: -------------- next part -------------- ====================================================================== FAIL: check_heevr_irange_high (scipy.lib.lapack.test_lapack.test_flapack_complex) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 80, in check_heevr_irange_high def check_heevr_irange_high(self): self.check_syevr_irange(sym='he',irange=[1,2]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 68, in check_syevr_irange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 742, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269+0.j 4.0951085+0.j 7.3420534+0.j] Array 2: [ 3.6929276+0.j 4.0951101+0.j 7.3420528+0.j] ====================================================================== FAIL: check_heevr_vrange (scipy.lib.lapack.test_lapack.test_flapack_complex) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 102, in check_heevr_vrange def check_heevr_vrange(self): self.check_syevr_vrange(sym='he') File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 94, in check_syevr_vrange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 742, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269+0.j 4.0951085+0.j 7.3420534+0.j] Array 2: [ 3.6929276+0.j 4.0951101+0.j 7.3420528+0.j] ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.lapack.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 68, in check_syevr_irange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 742, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269 4.0951085 7.3420534] Array 2: [ 3.6929276 4.0951101 7.3420528] ====================================================================== FAIL: check_syevr_vrange (scipy.lib.lapack.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 94, in check_syevr_vrange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 742, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 
3.6929269 4.0951085 7.3420534] Array 2: [ 3.6929276 4.0951101 7.3420528] ---------------------------------------------------------------------- Ran 1172 tests in 7.154s FAILED (failures=4) [413475 refs] -------------- next part -------------- ====================================================================== FAIL: check_basic (scipy_base.function_base.test_function_base.test_amax) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy_base/tests/test_function_base.py", line 72, in check_basic assert_equal(amax(b),[8.0,10.0,9.0]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: DESIRED: [8.0, 10.0, 9.0] ACTUAL: array([ 9., 10., 8.]) ====================================================================== FAIL: check_basic (scipy_base.function_base.test_function_base.test_amin) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy_base/tests/test_function_base.py", line 82, in check_basic assert_equal(amin(b),[3.0,3.0,2.0]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: DESIRED: [3.0, 3.0, 2.0] ACTUAL: array([ 3., 4., 2.]) ====================================================================== FAIL: check_heevr_irange_high (scipy.lib.lapack.test_lapack.test_flapack_complex) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 80, in check_heevr_irange_high def check_heevr_irange_high(self): self.check_syevr_irange(sym='he',irange=[1,2]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 68, in check_syevr_irange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 744, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269+0.j 4.0951085+0.j 7.3420534+0.j] Array 2: [ 3.6929276+0.j 4.0951101+0.j 7.3420528+0.j] ====================================================================== FAIL: check_heevr_vrange (scipy.lib.lapack.test_lapack.test_flapack_complex) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 102, in check_heevr_vrange def check_heevr_vrange(self): self.check_syevr_vrange(sym='he') File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 94, in check_syevr_vrange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 744, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269+0.j 4.0951085+0.j 7.3420534+0.j] Array 2: [ 3.6929276+0.j 4.0951101+0.j 7.3420528+0.j] ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.lapack.test_lapack.test_flapack_float) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 68, in check_syevr_irange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 744, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269 4.0951085 7.3420534] Array 2: [ 3.6929276 4.0951101 7.3420528] ====================================================================== FAIL: check_syevr_vrange (scipy.lib.lapack.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 94, in check_syevr_vrange assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 744, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 33.3333333333%): Array 1: [ 3.6929269 4.0951085 7.3420534] Array 2: [ 3.6929276 4.0951101 7.3420528] ====================================================================== FAIL: check_arange (scipy.special.basic.test_basic.test_arange) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 495, in check_arange assert_equal(numstring,array([0.,0.1,0.2,0.3, File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: ====================================================================== FAIL: check_genlaguerre (scipy.special.basic.test_basic.test_laguerre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1573, in check_genlaguerre assert_equal(lag2.c,array([1,-2*(k+2),(k+1.)*(k+2.)])/2.0) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: DESIRED: array([ 0.5 , -3.58365527, 4.62946491]) ACTUAL: array([ 0.5 , -3.58365527, 4.62946491]) ====================================================================== FAIL: check_legendre (scipy.special.basic.test_basic.test_legendre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1590, in check_legendre assert_equal(leg3.c,array([5,0,-3,0])/2.0) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: DESIRED: array([ 2.5, 0. , -1.5, 0. 
]) ACTUAL: array([ 2.50000000e+00, 0.00000000e+00, -1.50000000e+00, 3.10191936e-17]) ====================================================================== FAIL: check_diag (scipy.linalg.basic.test_basic.test_tri) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 427, in check_diag assert_equal(tri(4,k=1),array([[1,1,0,0], File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: DESIRED: array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 1, 1]]) ACTUAL: array([[1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1]],'b') ====================================================================== FAIL: check_diag (scipy.linalg.basic.test_basic.test_triu) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 494, in check_diag assert_equal(tril(a,k=-2),b) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 649, in assert_equal assert _sb.all(desired == actual), msg AssertionError: Items are not equal: ---------------------------------------------------------------------- Ran 1172 tests in 6.999s FAILED (failures=11) [455000 refs] >>> From pearu at scipy.org Wed Dec 8 06:53:28 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 8 Dec 2004 05:53:28 -0600 (CST) Subject: [SciPy-dev] Re: Fundamental scipy testing changes In-Reply-To: <1102461096.17245.458.camel@halloween.stsci.edu> References: <1102461096.17245.458.camel@halloween.stsci.edu> Message-ID: On Tue, 7 Dec 2004, Todd Miller wrote: >> Note that assert_equal, assert_almost_equal, assert_approx_equal >> were not meant to be used with array arguments (I didn't implement them >> but its obvious from reading the code). > > Thanks for pointing this out... I noticed the array versions only > peripherally and didn't understand the distinction. May be we need to review assert_equal, etc so that they will handle array inputs similar to assert_array_equal but on Python objects they will not use unnecessary scipy_base.all. And then define assert_array_equal = assert_equal assert_array_almost_equal = assert_almost_equal in testing.py for backward compability and declare their use as depreciated. >> For checking the equality of >> array arguments assert_array_equal or assert_array_almost_equal should be >> used. If some scipy test suite uses assert_equal, etc with array >> arguments then I think this is a bug of this particular test suite, >> not of testing.py. So, using scipy_base.all in assert_equal, etc is not >> necessary (unless we want to drop assert_array_* functions). > > Understood. Are we agreed that it is appropriate to use all() in the > assert_array_* functions? Yes. But be careful when replacing `if obj:` with `if all(obj):` in other parts of scipy as it may also mean `if any(obj):` or `if obj is not None:`, in fact, I think these are being assumed in most cases. And if not, then it should be a bug. I agree that the usage of `if obj:` is a bug and should be fixed either to `if any(obj):` or `if obj is not None:` or rarely `if all(obj):`. 
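The distinction being drawn here is easy to see in a few lines. This is an editorial sketch written against current numpy, which, like numarray, refuses to treat a multi-element array as a single truth value; it is not code from the thread.

import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 0, 3])

print(a == b)            # element-wise result: [ True False  True]

# As noted above, Numeric's __nonzero__ behaved roughly like any();
# numarray (and numpy today) raise instead, which is what exposed the
# tests that relied on the old behaviour.
try:
    if a == b:
        print('never reached')
except ValueError as exc:
    print(exc)

# The unambiguous spellings:
print(np.all(a == b))    # False: is every element equal?
print(np.any(a == b))    # True:  is at least one element equal?
print(a is not None)     # True:  the object is simply present
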
Pearu From jmiller at stsci.edu Wed Dec 8 10:33:46 2004 From: jmiller at stsci.edu (Todd Miller) Date: 08 Dec 2004 10:33:46 -0500 Subject: [SciPy-dev] Re: Fundamental scipy testing changes In-Reply-To: References: <1102461096.17245.458.camel@halloween.stsci.edu> Message-ID: <1102520025.21898.105.camel@halloween.stsci.edu> On Wed, 2004-12-08 at 06:53, Pearu Peterson wrote: > On Tue, 7 Dec 2004, Todd Miller wrote: > > >> Note that assert_equal, assert_almost_equal, assert_approx_equal > >> were not meant to be used with array arguments (I didn't implement them > >> but its obvious from reading the code). > > > > Thanks for pointing this out... I noticed the array versions only > > peripherally and didn't understand the distinction. > > May be we need to review assert_equal, etc so that they will > handle array inputs similar to assert_array_equal but on Python objects > they will not use unnecessary scipy_base.all. And then define > > assert_array_equal = assert_equal > assert_array_almost_equal = assert_almost_equal > > in testing.py for backward compability and declare their use as > depreciated. This sounds like a good interface simplification. I looked at doing the deprecation (sticking in warnings) but was quickly intimidated by the array equality functions. I think as an alternative to deprecation, we should consider having assert_equal delegate to assert_array_equal for arrays. That way, arrays are handled as they have been, but future testers won't have to distinguish between array and non-array contexts. > >> For checking the equality of > >> array arguments assert_array_equal or assert_array_almost_equal should be > >> used. If some scipy test suite uses assert_equal, etc with array > >> arguments then I think this is a bug of this particular test suite, > >> not of testing.py. So, using scipy_base.all in assert_equal, etc is not > >> necessary (unless we want to drop assert_array_* functions). > > > > Understood. Are we agreed that it is appropriate to use all() in the > > assert_array_* functions? > > Yes. > > But be careful when replacing `if obj:` with `if all(obj):` in other parts > of scipy as it may also mean `if any(obj):` or `if obj is not None:`, in > fact, I think these are being assumed in most cases. And if not, then > it should be a bug. Sounds good. > I agree that the usage of `if obj:` is a bug and should be fixed either to > `if any(obj):` or `if obj is not None:` or rarely `if all(obj):`. Something else worth explicitly mentioning is array comparisons and logical expressions, something like "if A I could not help noticing that OpenCD has no scientific tools. It seems the Enthought enhance Python distribution meets the inclusion criteria: http://theopencd.sunsite.dk/criteria.php fwiw, Alan Isaac From joe at enthought.com Sun Dec 12 21:36:42 2004 From: joe at enthought.com (Joe Cooper) Date: Sun, 12 Dec 2004 20:36:42 -0600 Subject: [SciPy-dev] OpenCD In-Reply-To: References: Message-ID: <41BD003A.3060307@enthought.com> Alan G Isaac wrote: > I could not help noticing that OpenCD has no > scientific tools. It seems the Enthought > enhance Python distribution meets the inclusion criteria: > http://theopencd.sunsite.dk/criteria.php You're probably right that it would be nice for some folks. However, as it stands, I don't know how eager they'd be to add ~100MB of software (compressed...uncompressed it's probably twice that) that only scientific and mathematics folks would have a serious interest in. 
Number four on the list of qualifications: 4) Be mainstream and functional. It should compare favorably with proprietary alternatives. While the packages in Enthon compare favorably to proprietary alternatives, and the next release will be quite slick, as such things go, with the addition of the IPython shell and some other new additions...I just don't know that "mainstream" is a word one would use to describe it. Looking over the current applications, I don't see how we'd really fit into the mix...it's word processing, email and web, consumer-targetted multimedia apps, games, compression, etc. Combine that with a huge package size, and we've got the makings of a big fat "NO!" from the OpenCD folks. That said, if there is any interest, I'd be happy to work with someone involved in that group to make it happen. Maybe with the next release coming in the next week or so, we should just focus on getting out the word to the folks who would have a real interest in it who may not be familiar with it, and making sure they are able to get it easily. From jdhunter at ace.bsd.uchicago.edu Sun Dec 12 22:02:44 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sun, 12 Dec 2004 21:02:44 -0600 Subject: [SciPy-dev] OpenCD In-Reply-To: <41BD003A.3060307@enthought.com> (Joe Cooper's message of "Sun, 12 Dec 2004 20:36:42 -0600") References: <41BD003A.3060307@enthought.com> Message-ID: >>>>> "Joe" == Joe Cooper writes: Joe> Maybe with the next release coming in the next week or so, we Joe> should just focus on getting out the word to the folks who Joe> would have a real interest in it who may not be familiar with Joe> it, and making sure they are able to get it easily. What might be nice would be for enthought to press a CD release for it, and charge a nominal fee ($10-15?) to recover some of the cost you put into the thing. I've converted a former adviser to using python plus the standard goodies in a course he is teaching in place of matlab. 95% of his students are win32. I suggested enthon to him but he was put off by the download size, so there might be a small market for a CD. Granted, any semi-literate person could simply download and burn it themselves, but we shouldn't overestimate literacy in the wild. For people like my adviser, he would probably rather order 15 copies for his next course rather than figure out how to download it and burn it. But I suspect he is an exception case: I sometimes refer to him as the only Luddite I know with a cluster of Sun workstations. JDH From joe at enthought.com Sun Dec 12 22:41:24 2004 From: joe at enthought.com (Joe Cooper) Date: Sun, 12 Dec 2004 21:41:24 -0600 Subject: [SciPy-dev] OpenCD In-Reply-To: References: <41BD003A.3060307@enthought.com> Message-ID: <41BD0F64.90303@enthought.com> John Hunter wrote: >>>>>>"Joe" == Joe Cooper writes: > > > Joe> Maybe with the next release coming in the next week or so, we > Joe> should just focus on getting out the word to the folks who > Joe> would have a real interest in it who may not be familiar with > Joe> it, and making sure they are able to get it easily. > > What might be nice would be for enthought to press a CD release for > it, and charge a nominal fee ($10-15?) to recover some of the cost you > put into the thing. I've converted a former adviser to using python > plus the standard goodies in a course he is teaching in place of > matlab. 95% of his students are win32. I suggested enthon to him but > he was put off by the download size, so there might be a small market > for a CD. 
> > Granted, any semi-literate person could simply download and burn it > themselves, but we shouldn't overestimate literacy in the wild. For > people like my adviser, he would probably rather order 15 copies for > his next course rather than figure out how to download it and burn it. > But I suspect he is an exception case: I sometimes refer to him as the > only Luddite I know with a cluster of Sun workstations. Hmmm..."any semi-literate person"...Is that your way of volunteering? ;-) Eric and I have discussed this very idea and I believe there is a small but critical non-technical problem with the concept: As soon as money changes hands, no matter how cheap we sell those CDs or how largely a "No Warranty and No Support" statement is printed on the label, folks who buy it will start calling us and emailing us directly, asking questions and expecting answers. And if something doesn't work right someone will want it fixed, NOW! It sounds ridiculous to most folks (me too), but I've been involved in Open Source software on a lot of fronts and I /know/ it would happen. In fact, the most demanding users are nearly always the ones who've given the least to a project, in terms of effort or money. Not that I'm bitter or anything. ;-) Anyway, this is not the official Enthought party line, by any means, but I don't think selling unsupported Enthon CDs will happen. I do think it is likely that CD giveaways at conferences might occur, and I think it has been done before, though I wasn't there to witness it. And I don't think anyone would hunt you down or threaten you with violence if you wanted to burn CDs and distribute them in whatever way you like to whomever you like. From jdhunter at ace.bsd.uchicago.edu Sun Dec 12 23:31:03 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sun, 12 Dec 2004 22:31:03 -0600 Subject: [SciPy-dev] OpenCD In-Reply-To: <41BD0F64.90303@enthought.com> (Joe Cooper's message of "Sun, 12 Dec 2004 21:41:24 -0600") References: <41BD003A.3060307@enthought.com> <41BD0F64.90303@enthought.com> Message-ID: >>>>> "Joe" == Joe Cooper writes: Joe> As soon as money changes hands, no matter how cheap we sell Joe> those CDs or how largely a "No Warranty and No Support" Joe> statement is printed on the label, folks who buy it will Joe> start calling us and emailing us directly, asking questions Joe> and expecting answers. And if something doesn't work right Joe> someone will want it fixed, NOW! It sounds ridiculous to Joe> most folks (me too), but I've been involved in Open Source Joe> software on a lot of fronts and I /know/ it would happen. In Joe> fact, the most demanding users are nearly always the ones Joe> who've given the least to a project, in terms of effort or Joe> money. Not that I'm bitter or anything. ;-) It doesn't sound ridiculous to me. In fact I wouldn't be surprised if you sometimes get a similar reaction from folks you provide the goods to for free. End users struggle with things that are obvious to developers, and in the process can shed light on what is wrong with the software or documentation. If it's worth releasing, it's probably worth supporting. My experience with enthought python is that it works amazingly well, presumably because you've configured and tested the hell out of it. So the support commitment would be manageable, I'm guessing. Sure there will always be some crank who'll make your life miserable, but it might be worth it. 
"the most demanding users are nearly always the ones who've given the least to a project" (amen, but not naming any names) JDH From aisaac at american.edu Sun Dec 12 23:59:47 2004 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 12 Dec 2004 23:59:47 -0500 (Eastern Standard Time) Subject: [SciPy-dev] OpenCD In-Reply-To: <41BD003A.3060307@enthought.com> References: <41BD003A.3060307@enthought.com> Message-ID: > Alan G Isaac wrote: >> I could not help noticing that OpenCD has no >> scientific tools. It seems the Enthought >> enhance Python distribution meets the inclusion criteria: >> http://theopencd.sunsite.dk/criteria.php On Sun, 12 Dec 2004, Joe Cooper apparently wrote: > Looking over the current applications, I don't see how > we'd really fit into the mix...it's word processing, email and web, > consumer-targetted multimedia apps, games, compression, etc. Blender? Hardly seems broadly targetted to me. http://theopencd.sunsite.dk/programs/blender.php Or even SciTE? A programmers editor, I'd say. http://theopencd.sunsite.dk/programs/scite.php I'd guess universities are going to see the largest distribution of TheOpenCD, and so deviation from the inclusion criterion is not so obvious. fwiw, Alan Isaac From jdhunter at ace.bsd.uchicago.edu Mon Dec 13 10:05:57 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 13 Dec 2004 09:05:57 -0600 Subject: [SciPy-dev] OpenCD In-Reply-To: (Alan G Isaac's message of "Sun, 12 Dec 2004 23:59:47 -0500 (Eastern Standard Time)") References: <41BD003A.3060307@enthought.com> Message-ID: >>>>> "Alan" == Alan G Isaac writes: Alan> Blender? Hardly seems broadly targetted to me. Alan> http://theopencd.sunsite.dk/programs/blender.php Over thanksgiving I was visiting a friend who's 13 year old kid was playing Halo. He was using photoshop to make customized skins for his vehicles and 3D tools to make new weapons etc. I can't remember what he was using, but I did ask him if he'd heard of blender and he said he'd downloaded it and used it. So it might be a wider audience than you think. JDH From loredo at astro.cornell.edu Mon Dec 13 14:22:47 2004 From: loredo at astro.cornell.edu (Tom Loredo) Date: Mon, 13 Dec 2004 14:22:47 -0500 Subject: [SciPy-dev] Two fortran/f2py questions In-Reply-To: <20041213062131.75FDF3EB68@www.scipy.com> References: <20041213062131.75FDF3EB68@www.scipy.com> Message-ID: <1102965767.41bdec0777a41@astrosun2.astro.cornell.edu> Hi folks- Is there a way to use f2py directives to indicate that a few subprograms in a fortran77 source file need not be wrapped/exposed? Right now I'm just using the "only:" keyword on the command line, but I would imagine there's a way to embed this info in the source. I just haven't been able to figure it out from the Users Guide. Is there documentation anywhere showing how one can have fortran source for a Python extension access a C or fortran function that is part of Scipy? The specific example driving this is a module I'm working on that uses some special functions. It seems there should be some way for me to call the cephes functions directly. But I'm not sure how to make code that will automatically link to the appropriate library, and do it portably (so the module can be distributed). If there's a good example of this within Scipy (as I suspect), just give me a pointer and I'll "go to the source." I thought there might be some docs on this within scipy_distutils, but I haven't found it if it's there. 
Thanks, Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From pearu at scipy.org Mon Dec 13 14:37:16 2004 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 13 Dec 2004 13:37:16 -0600 (CST) Subject: [SciPy-dev] Two fortran/f2py questions In-Reply-To: <1102965767.41bdec0777a41@astrosun2.astro.cornell.edu> References: <20041213062131.75FDF3EB68@www.scipy.com> <1102965767.41bdec0777a41@astrosun2.astro.cornell.edu> Message-ID: On Mon, 13 Dec 2004, Tom Loredo wrote: > Is there a way to use f2py directives to indicate that a few subprograms > in a fortran77 source file need not be wrapped/exposed? Right > now I'm just using the "only:" keyword on the command line, but > I would imagine there's a way to embed this info in the source. > I just haven't been able to figure it out from the Users Guide. No. But I think it would be a useful feature. So I'll add this request to my todo list. Btw, if you would use scipy_distutils-style setup.py file then you can add the "only: .. :" part to f2py_options to avoid using it on the command line. > Is there documentation anywhere showing how one can have fortran > source for a Python extension access a C or fortran function > that is part of Scipy? The specific example driving this is > a module I'm working on that uses some special functions. It > seems there should be some way for me to call the cephes > functions directly. But I'm not sure how to make code that > will automatically link to the appropriate library, and do it > portably (so the module can be distributed). If there's a good > example of this within Scipy (as I suspect), just give me a pointer > and I'll "go to the source." I thought there might be some > docs on this within scipy_distutils, but I haven't found it > if it's there. I am not sure if I follow you correctly here but when using f2py from its CVS, all fortran objects have _cpointer attribute that is a C pointer to the actual Fortran or C function. Using such a _cpointer as an callback argument will speed up calling the Fortran or C function because the f2py generated wrapper layer is avoided but at the risk of crashing Python when using _cpointer feature incorrectly. Is this what you are looking for? Pearu From david.grant at telus.net Mon Dec 13 14:47:56 2004 From: david.grant at telus.net (David Grant) Date: Mon, 13 Dec 2004 11:47:56 -0800 Subject: [SciPy-dev] OpenCD In-Reply-To: <41BD0F64.90303@enthought.com> References: <41BD003A.3060307@enthought.com> <41BD0F64.90303@enthought.com> Message-ID: <41BDF1EC.2090405@telus.net> Joe Cooper wrote: > As soon as money changes hands, no matter how cheap we sell those CDs > or how largely a "No Warranty and No Support" statement is printed on > the label, folks who buy it will start calling us and emailing us > directly, asking questions and expecting answers. And if something > doesn't work right someone will want it fixed, NOW! It sounds > ridiculous to most folks (me too), but I've been involved in Open > Source software on a lot of fronts and I /know/ it would happen. In > fact, the most demanding users are nearly always the ones who've given > the least to a project, in terms of effort or money. Not that I'm > bitter or anything. ;-) > Have an automated email sent to them telling them about the various support options such as mailing lists, IRC, bug reporting, and possibly some non-free support option (set up a 1-900 number or something? :-) ). I for one think that the more users using scipy, the better it will get. 
If that means selling some CDs, having some disastified "customers" along the way, so be it. Overall, getting more people to use python for scientific applications will help the community and help improve scipy in the long run. Dave From loredo at astro.cornell.edu Mon Dec 13 15:34:30 2004 From: loredo at astro.cornell.edu (Tom Loredo) Date: Mon, 13 Dec 2004 15:34:30 -0500 Subject: [SciPy-dev] Re: Two fortran/f2py questions In-Reply-To: <20041213062131.75FDF3EB68@www.scipy.com> References: <20041213062131.75FDF3EB68@www.scipy.com> Message-ID: <1102970070.41bdfcd6c351d@astrosun2.astro.cornell.edu> Hi Pearu- Thanks a lot for the quick response! Regarding ignoring wrapping some utility subprograms via a directive: > But I think it would be a useful feature. So I'll add this request to > my todo list. Thanks! > Btw, if you would use scipy_distutils-style setup.py file > then you can add the "only: .. :" part to f2py_options I thought there must be something like this; thanks for spelling it out. > I am not sure if I follow you correctly here but when using f2py from > its CVS, all fortran objects have _cpointer attribute that is > a C pointer to the actual Fortran or C function. Well, I'm not sure I've followed you, either. 8-) Here's what I'm trying to do. I have fortran functions and subroutines that need to evaluate special functions as part of what they're doing. E.g., for gamma functions, right now I just define a function "gammaln" that returns the log of a gamma function, within the fortran source of my module. But Scipy already has such a function somewhere (in the Cephes library). Is there a way I can use that Cephes function? For example, could I just figure out the appropriate Cephes call (sorting out the appropriate underscores and pointers) and write a setup.py that will let my fortran routine use the Cephes library that Scipy has installed somewhere? For performance, I'd rather not have to send my fortran module a Python callback; after all, the raw Cephes routine is sitting *somewhere*. There are other examples; another module I have needs a Cholesky decomposition in the middle of a calculation. Right now it's just hard-coded in the C for the module. I should just call the appropriate lapack/atlas routine for this, and I'd love to know how to write the C code and accompanying setup.py file to be able to do this portably without the overhead of a Python callback. Thanks, Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From pearu at scipy.org Mon Dec 13 16:04:57 2004 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 13 Dec 2004 15:04:57 -0600 (CST) Subject: [SciPy-dev] Re: Two fortran/f2py questions In-Reply-To: <1102970070.41bdfcd6c351d@astrosun2.astro.cornell.edu> References: <20041213062131.75FDF3EB68@www.scipy.com> <1102970070.41bdfcd6c351d@astrosun2.astro.cornell.edu> Message-ID: On Mon, 13 Dec 2004, Tom Loredo wrote: >> I am not sure if I follow you correctly here but when using f2py from >> its CVS, all fortran objects have _cpointer attribute that is >> a C pointer to the actual Fortran or C function. > > Well, I'm not sure I've followed you, either. 8-) Here's what > I'm trying to do. I have fortran functions and subroutines that > need to evaluate special functions as part of what they're doing. > E.g., for gamma functions, right now I just define a function > "gammaln" that returns the log of a gamma function, within the > fortran source of my module. 
But Scipy already has such a > function somewhere (in the Cephes library). Is there a way > I can use that Cephes function? For example, could I just > figure out the appropriate Cephes call (sorting out the > appropriate underscores and pointers) and write a setup.py > that will let my fortran routine use the Cephes library that > Scipy has installed somewhere? For performance, I'd rather > not have to send my fortran module a Python callback; after > all, the raw Cephes routine is sitting *somewhere*. Thanks for the explanation. Now I get the problem and using _cpointer is not a solution indeed. A working solution could be figured out from Lib/special/setup_special.py that compiles the cephes library sources. Assuming that you have the scipy source tree somewhere, then in your setup.py file use

import os
from glob import glob

scipy_src_path = '/path/to/scipy/Lib'
cephes = glob(os.path.join(scipy_src_path,'special','cephes','*.c'))
ext_args = {}
dict_append(ext_args,
            name = 'extname',
            sources = [...],
            libraries = ['cephes'])
ext = Extension(**ext_args)
setup(ext_modules = [ext],
      libraries = [('cephes',cephes)])

See Lib/special/setup_special.py for details and various aspects making the above example more portable. A better solution (using get_info like below for lapack) has to wait until we have moved various 3rd party libraries in scipy to Lib/lib where get_info could pick up the cephes library, for instance, and return an appropriate dictionary of libraries and paths to be used when linking your extension module. > There are other examples; another module I have needs a > Cholesky decomposition in the middle of a calculation. Right > now it's just hard-coded in the C for the module. I should > just call the appropriate lapack/atlas routine for this, > and I'd love to know how to write the C code and accompanying > setup.py file to be able to do this portably without the > overhead of a Python callback. Ok, that problem is easier to solve if you have lapack/atlas libraries installed in your system. In your setup.py file use

from scipy_distutils.system_info import get_info, dict_append

lapack_opt = get_info('lapack_opt',notfound_action=2)
ext_args = {}
dict_append(ext_args,
            name = 'extname',
            sources = [...])
dict_append(ext_args,**lapack_opt)
ext = Extension(**ext_args)

to define your Extension module that will be linked against optimized lapack libraries. See Lib/lib/{lapack,blas}/setup_*.py files for more examples. Pearu From jmiller at stsci.edu Tue Dec 14 13:01:04 2004 From: jmiller at stsci.edu (Todd Miller) Date: 14 Dec 2004 13:01:04 -0500 Subject: [SciPy-dev] [Fwd: [Numpy-discussion] started work on new version of LAPACK] Message-ID: <1103047263.3501.170.camel@halloween.stsci.edu> The LAPACK/ScaLAPACK team is soliciting input for a new version. This was originally posted to numpy-discussion but I thought I should pass it on to SciPy. Regards, Todd -------------- next part -------------- An embedded message was scrubbed... From: Piotr Luszczek Subject: [Numpy-discussion] started work on new version of LAPACK Date: Tue, 14 Dec 2004 12:52:29 -0500 Size: 4246 URL: From Fernando.Perez at colorado.edu Tue Dec 21 20:26:21 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 21 Dec 2004 18:26:21 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box Message-ID: <41C8CD3D.9000401@colorado.edu> Hi all, I just updated to current CVS of f2py/scipy and Numeric 23.6, to try to root out some problems I was encountering with scipy.test().
Now I'm having a different problem, scipy is segfaulting on me :) I've narrowed it down to this: planck[pylab]> python -c 'import scipy;scipy.lib.lapack.test(verbosity=2)' !! No test file 'test_flapack.py' found for !! No test file 'test_clapack.py' found for Found 78 tests for scipy.lib.lapack !! No test file 'test_calc_lwork.py' found for Found 0 tests for __main__ check_gebal (scipy.lib.lapack.test_lapack.test_flapack_complex) ... ok check_heev (scipy.lib.lapack.test_lapack.test_flapack_complex)Segmentation fault If I go manually into test_lapack.py and comment out around line 93: #class test_flapack_complex(_test_lapack): # lapack = PrefixWrapper(flapack,'c') # decimal = 5 The whole suite runs fine: planck[pylab]> python -c 'import scipy;scipy.test(level=10,verbosity=2)' [...] check_basic (scipy.signal.signaltools.test_signaltools.test_wiener) ... ok ---------------------------------------------------------------------- Ran 1166 tests in 83.200s OK By digging a bit deeper into the testing code, I was able to track the segfault to this (a defined here is copied from the test code, the matrix so defined is correctly hermitian as required by cheev, so this shouldn't be a problem): In [1]: a = [[1,2,3],[2,2,3],[3,3,6]] In [2]: import scipy In [3]: scipy.lib.lapack.flapack.cheev(a) Segmentation fault The coredump is not terribly informative (unfortunately I don't have a debug build of python around to produce more details with): #0 0x55cd67c2 in ATL_cdotc_xp0yp0aXbX () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #1 0xfee241b8 in ?? () #2 0x09bfb880 in ?? () #3 0xfee241f8 in ?? () #4 0x55cc4228 in cdotc_ () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #5 0x4083e1dc in ?? () #6 0x560f1358 in ?? () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #7 0x3ed413ce in ?? () #8 0x560f1358 in ?? () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #9 0x3ed413ce in ?? () #10 0xfee24100 in ?? () #11 0x00000000 in ?? () Other calls to complex routines also appear to segfault, though I didn't test too many more. I wonder if anyone knows what may be going on here. This machine has in the past run successfully all scipy tests, and I did not change my ATLAS installation. For reference, here are all my version numbers (I just pulled scipy and f2py from CVS minutes ago, and grabbed Numpy from Sourceforge): In [9]: print sys.version 2.3.3 (#1, May 7 2004, 10:31:40) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] In [10]: Numeric.__version__ Out[10]: '23.6' In [11]: f2py2e.__version__.version Out[11]: '2.44.240_1906' In [12]: scipy.__version__ Out[12]: '0.3.2_300.4521' I installed scipy with a straight 'setup.py install', without touching any of the config files (as I've done successfully in the past). Thanks for any help. I'll gladly provide more info if needed, f From pearu at scipy.org Tue Dec 21 21:35:51 2004 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 21 Dec 2004 20:35:51 -0600 (CST) Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: <41C8CD3D.9000401@colorado.edu> References: <41C8CD3D.9000401@colorado.edu> Message-ID: On Tue, 21 Dec 2004, Fernando Perez wrote: > Hi all, > > I just updated to current CVS of f2py/scipy and Numeric 23.6, to try to root > out some problems I was encountering with scipy.test(). Now I'm having a > different problem, scipy is segfaulting on me :) > > I've narrowed it down to this: > > planck[pylab]> python -c 'import scipy;scipy.lib.lapack.test(verbosity=2)' > !! 
No test file 'test_flapack.py' found for 'scipy.lib.lapack.flapack' from '...es/scipy/lib/lapack/flapack.so'> > !! No test file 'test_clapack.py' found for 'scipy.lib.lapack.clapack' from '...es/scipy/lib/lapack/clapack.so'> > Found 78 tests for scipy.lib.lapack > !! No test file 'test_calc_lwork.py' found for 'scipy.lib.lapack.calc_lwork' from '...scipy/lib/lapack/calc_lwork.so'> > Found 0 tests for __main__ > check_gebal (scipy.lib.lapack.test_lapack.test_flapack_complex) ... ok > check_heev (scipy.lib.lapack.test_lapack.test_flapack_complex)Segmentation > fault Todd had similar problem and my response: """ Second, some gcc Fortran compilers produce incorrect code when using -O3 optimization flag and there have been reports that they cause segfaults in heev tests. See the get_flags_opt method in scipy_distutils/gnufcompiler.py and replace the line if self.get_version()=='3.3.3': with if self.get_version()<='3.3.3': Do `rm -rf build` and rebuild scipy. Note that if this issue is related to g77 optimization bug then you should also rebuild Fortran lapack library (that used in completing atlas lapack library) with -O2 flag and before rebuilding scipy. """ seemed to help him. Todd was using gcc 3.2.2. > Other calls to complex routines also appear to segfault, though I didn't test > too many more. I wonder if anyone knows what may be going on here. This > machine has in the past run successfully all scipy tests, and I did not > change my ATLAS installation. These are new tests, so the issue were not discovered before. HTH, Pearu From Fernando.Perez at colorado.edu Tue Dec 21 21:56:09 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 21 Dec 2004 19:56:09 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: References: <41C8CD3D.9000401@colorado.edu> Message-ID: <41C8E249.6070102@colorado.edu> Hi Pearu, Pearu Peterson schrieb: > Todd had similar problem and my response: > > """ > Second, some gcc Fortran compilers produce incorrect code when using > -O3 optimization flag and there have been reports that they cause > segfaults in heev tests. > See the get_flags_opt method in scipy_distutils/gnufcompiler.py and > replace the line > > if self.get_version()=='3.3.3': > > with > > if self.get_version()<='3.3.3': > > Do `rm -rf build` and rebuild scipy. > > Note that if this issue is related to g77 optimization bug then you should > also rebuild Fortran lapack library (that used in completing atlas lapack > library) with -O2 flag and before rebuilding scipy. > """ > > seemed to help him. Todd was using gcc 3.2.2. Well, I'm using 3.3.3, so the code is already building with -02 only. I hardcoded this change (for 3.3.3) to plain -O instead, and this thing segfaults all the same. However, I did NOT rebuild ATLAS/LAPACK myself, I'm using those from the scipy ATLAS binaries page. Oh well, I'm switching to FC3 very soon, so I'll report again if the problem persists there. For now it's not an issue for me, since I don't need those routines. I figured I'd let you guys know about it, at least. Regards, and many thanks for the help, f From charles.harris at sdl.usu.edu Wed Dec 22 00:59:42 2004 From: charles.harris at sdl.usu.edu (Charles Harris) Date: Tue, 21 Dec 2004 22:59:42 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box Message-ID: Hi Fernando, Am I missing something? I don't get scipy.lib when I install from cvs, so I am wondering if something has changed and I need to checkout scipy again? 
Anyway, scipy.linalg.lapack.flapack.cheev(a) runs fine here on Fedora Core 3, but I am using the blas libraries that came with the distro. Is Atlas to be preferred to blas? As to optimization, I usually find -O2 runs faster than -O3 and that -mtune can make a big difference. -----Original Message----- From: scipy-dev-bounces at scipy.net on behalf of Fernando Perez Sent: Tue 12/21/2004 6:26 PM To: scipy Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box Hi all, I just updated to current CVS of f2py/scipy and Numeric 23.6, to try to root out some problems I was encountering with scipy.test(). Now I'm having a different problem, scipy is segfaulting on me :) I've narrowed it down to this: planck[pylab]> python -c 'import scipy;scipy.lib.lapack.test(verbosity=2)' !! No test file 'test_flapack.py' found for !! No test file 'test_clapack.py' found for Found 78 tests for scipy.lib.lapack !! No test file 'test_calc_lwork.py' found for Found 0 tests for __main__ check_gebal (scipy.lib.lapack.test_lapack.test_flapack_complex) ... ok check_heev (scipy.lib.lapack.test_lapack.test_flapack_complex)Segmentation fault If I go manually into test_lapack.py and comment out around line 93: #class test_flapack_complex(_test_lapack): # lapack = PrefixWrapper(flapack,'c') # decimal = 5 The whole suite runs fine: planck[pylab]> python -c 'import scipy;scipy.test(level=10,verbosity=2)' [...] check_basic (scipy.signal.signaltools.test_signaltools.test_wiener) ... ok ---------------------------------------------------------------------- Ran 1166 tests in 83.200s OK By digging a bit deeper into the testing code, I was able to track the segfault to this (a defined here is copied from the test code, the matrix so defined is correctly hermitian as required by cheev, so this shouldn't be a problem): In [1]: a = [[1,2,3],[2,2,3],[3,3,6]] In [2]: import scipy In [3]: scipy.lib.lapack.flapack.cheev(a) Segmentation fault The coredump is not terribly informative (unfortunately I don't have a debug build of python around to produce more details with): #0 0x55cd67c2 in ATL_cdotc_xp0yp0aXbX () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #1 0xfee241b8 in ?? () #2 0x09bfb880 in ?? () #3 0xfee241f8 in ?? () #4 0x55cc4228 in cdotc_ () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #5 0x4083e1dc in ?? () #6 0x560f1358 in ?? () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #7 0x3ed413ce in ?? () #8 0x560f1358 in ?? () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so #9 0x3ed413ce in ?? () #10 0xfee24100 in ?? () #11 0x00000000 in ?? () Other calls to complex routines also appear to segfault, though I didn't test too many more. I wonder if anyone knows what may be going on here. This machine has in the past run successfully all scipy tests, and I did not change my ATLAS installation. For reference, here are all my version numbers (I just pulled scipy and f2py from CVS minutes ago, and grabbed Numpy from Sourceforge): In [9]: print sys.version 2.3.3 (#1, May 7 2004, 10:31:40) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] In [10]: Numeric.__version__ Out[10]: '23.6' In [11]: f2py2e.__version__.version Out[11]: '2.44.240_1906' In [12]: scipy.__version__ Out[12]: '0.3.2_300.4521' I installed scipy with a straight 'setup.py install', without touching any of the config files (as I've done successfully in the past). Thanks for any help. 
I'll gladly provide more info if needed, f _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.net http://www.scipy.net/mailman/listinfo/scipy-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 4716 bytes Desc: not available URL: From Fernando.Perez at colorado.edu Wed Dec 22 02:07:34 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 22 Dec 2004 00:07:34 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: References: Message-ID: <41C91D36.3060209@colorado.edu> Charles Harris wrote: > Hi Fernando, > > Am I missing something? I don't get scipy.lib when I install from cvs, so I > am wondering if something has changed and I need to checkout scipy again? I did a fresh checkout, b/c I recalled hearing about a discussion on a directory reorg. Since CVS sucks so badly with layout operations, I played it safe and did a clean checkout. > Anyway, scipy.linalg.lapack.flapack.cheev(a) runs fine here on Fedora Core > 3, but I am using the blas libraries that came with the distro. Is Atlas to > be preferred to blas? ATLAS provides a tuned BLAS, optimized for your specific architecture. The functionality is the same as that in Fedora's generic BLAS, but the performance is higher. Cheers, f From pearu at scipy.org Wed Dec 22 03:51:34 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 22 Dec 2004 02:51:34 -0600 (CST) Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: References: Message-ID: On Tue, 21 Dec 2004, Charles Harris wrote: > Am I missing something? I don't get scipy.lib when I install from cvs, > so I am wondering if something has changed and I need to checkout scipy > again? Try cvs update -Pd This should retrieve also new directories from the CVS repository. Pearu From bgoli at sun.ac.za Wed Dec 22 05:23:47 2004 From: bgoli at sun.ac.za (Brett Olivier) Date: Wed, 22 Dec 2004 12:23:47 +0200 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: <41C8E249.6070102@colorado.edu> References: <41C8CD3D.9000401@colorado.edu> <41C8E249.6070102@colorado.edu> Message-ID: <200412221223.48186.bgoli@sun.ac.za> Hi I've been having the same problems with gcc version 3.4.1 (Mandrakelinux 10.1 3.4.1-4mdk). As suggested I recompiled BLAS/LAPACK with -O2 updated my ATLAS libs and changed gnufcompiler.py to: if self.get_version()<='3.4.2': to rebuild SciPy with -O2. However, no luck, I still get the segfault in scipy.test() with heev. Has anyone else tried compiling SciPy with 3.4.x compilers (I've had additional adventures with MinGW GCC 3.4.2 but will post these a bit later) ? Brett On Wednesday 22 December 2004 04:56, Fernando Perez wrote: > > Well, I'm using 3.3.3, so the code is already building with -02 only. I > hardcoded this change (for 3.3.3) to plain -O instead, and this thing > segfaults all the same. However, I did NOT rebuild ATLAS/LAPACK myself, > I'm using those from the scipy ATLAS binaries page. > > Oh well, I'm switching to FC3 very soon, so I'll report again if the > problem persists there. For now it's not an issue for me, since I don't > need those routines. I figured I'd let you guys know about it, at least. > > Regards, and many thanks for the help, > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- Brett G. 
Olivier National Bioinformatics Network - Stellenbosch Node Triple-J Group for Molecular Cell Physiology, Stellenbosch University bgoli at sun dot ac dot za http://glue.jjj.sun.ac.za/~bgoli Tel +27-21-8082697 Fax +27-21-8085863 Mobile +27-82-7329306 There are many intelligent species in the universe, and they all own cats. From pearu at scipy.org Wed Dec 22 05:51:56 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 22 Dec 2004 04:51:56 -0600 (CST) Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: <200412221223.48186.bgoli@sun.ac.za> References: <41C8CD3D.9000401@colorado.edu> <41C8E249.6070102@colorado.edu> <200412221223.48186.bgoli@sun.ac.za> Message-ID: On Wed, 22 Dec 2004, Brett Olivier wrote: > I've been having the same problems with gcc version 3.4.1 (Mandrakelinux 10.1 > 3.4.1-4mdk). As suggested I recompiled BLAS/LAPACK with -O2 updated my ATLAS > libs and changed gnufcompiler.py to: > if self.get_version()<='3.4.2': > to rebuild SciPy with -O2. > > However, no luck, I still get the segfault in scipy.test() with heev. Has > anyone else tried compiling SciPy with 3.4.x compilers (I've had additional > adventures with MinGW GCC 3.4.2 but will post these a bit later) ? I have gcc version 3.4.4, 3.3.5 in my debian sid box and have no trouble at all with scipy tests when building against these compilers. I am using also debian ATLAS 3.6.0 that is built against g77 3.3.3 using -O optimization. I'd suggest building scipy against Fortran BLAS/LAPACK libraries and see if the problem percist. For that, get blas and lapack sources from netlib, unpack them, and set up the following environment for building scipy: ATLAS=None LAPACK=None BLAS=None BLAS_SRC= LAPACK_SRC= And in cvs/scipy/Lib/lib/lapack directory execute: rm -rf build python setup_lapack.py build The last command could be modified also to python setup_lapack.py config_fc --noopt build python setup_lapack.py config_fc --noopt --noarch build python setup_lapack.py config_fc --opt="-O" build so that you don't have to modify gnufcompiler.py and reinstall scipy_core all the time. To run lapack tests without installing scipy, execute python tests/test_lapack.py -v 10 HTH, Pearu From charles.harris at sdl.usu.edu Wed Dec 22 10:36:06 2004 From: charles.harris at sdl.usu.edu (Charles Harris) Date: Wed, 22 Dec 2004 08:36:06 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box Message-ID: Ok, I've got the reorganized version from cvs and still don't get a segfault using the distro BLAS and gcc 3.4.2. -----Original Message----- From: scipy-dev-bounces at scipy.net on behalf of Fernando Perez Sent: Wed 12/22/2004 12:07 AM To: SciPy Developers List Subject: Re: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box Charles Harris wrote: > Hi Fernando, > > Am I missing something? I don't get scipy.lib when I install from cvs, so I > am wondering if something has changed and I need to checkout scipy again? I did a fresh checkout, b/c I recalled hearing about a discussion on a directory reorg. Since CVS sucks so badly with layout operations, I played it safe and did a clean checkout. > Anyway, scipy.linalg.lapack.flapack.cheev(a) runs fine here on Fedora Core > 3, but I am using the blas libraries that came with the distro. Is Atlas to > be preferred to blas? ATLAS provides a tuned BLAS, optimized for your specific architecture. The functionality is the same as that in Fedora's generic BLAS, but the performance is higher. 
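A quick way to see which libraries the scipy build machinery actually detects on a given box is something like the following (a sketch: 'lapack_opt' is the system_info section used elsewhere in this thread, 'atlas' is assumed to also be a valid section name, and an empty dictionary means the library was not found):

from scipy_distutils.system_info import get_info
print get_info('atlas')
print get_info('lapack_opt')

That makes it easy to tell whether a build would really pick up the tuned ATLAS libraries or fall back to the generic BLAS/LAPACK.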
Cheers, f _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.net http://www.scipy.net/mailman/listinfo/scipy-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3260 bytes Desc: not available URL: From Fernando.Perez at colorado.edu Wed Dec 22 13:42:11 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 22 Dec 2004 11:42:11 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: References: Message-ID: <41C9C003.9000901@colorado.edu> Charles Harris wrote: > Ok, > > I've got the reorganized version from cvs and still don't get a segfault > using the distro BLAS and gcc 3.4.2. I suspect the problem lies with ATLAS. I am using atlas3.6.0_Linux_P4SSE2_2HT.tgz from the scipy website, which from reading the included makefile, was built as follows: F77 = /home/pearu/bin/g77 F77FLAGS = -fomit-frame-pointer -O -fno-second-underscore I think the problem is ATLAS, b/c the top of the backtrace I included yesterday had this: #0 0x55cd67c2 in ATL_cdotc_xp0yp0aXbX () from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so That first symbol is, I imagine, an ATLAS call. But Pearu seems to have built these binaries with pretty conservative flags (-O), and when you use the distro BLAS (no ATLAS) you don't get any segfault. So perhaps it's a bug in ATLAS and/or g77 which appears even with basic optimizations. Cheers, f From Fernando.Perez at colorado.edu Wed Dec 22 13:44:53 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 22 Dec 2004 11:44:53 -0700 Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: References: <41C8CD3D.9000401@colorado.edu> <41C8E249.6070102@colorado.edu> <200412221223.48186.bgoli@sun.ac.za> Message-ID: <41C9C0A5.7060002@colorado.edu> Pearu Peterson wrote: > I have gcc version 3.4.4, 3.3.5 in my debian sid box and have no trouble > at all with scipy tests when building against these compilers. I am using > also debian ATLAS 3.6.0 that is built against g77 3.3.3 using -O > optimization. Mmh, weird. Those seem to be the flags in the ATLAS binaries you shipped at scipy.org. > I'd suggest building scipy against Fortran BLAS/LAPACK libraries and see > if the problem percist. For that, get blas and lapack sources from netlib, > unpack them, and set up the following environment for building scipy: As I said, I'll be soon updating these boxes to Fedora3, so I'd rather not spend too much time on this right now (I'm pretty behind on other things as it is :). Many thanks for the feedback though. At least I know I'm not the only one having these problems. Cheers, f From pearu at scipy.org Wed Dec 22 13:59:35 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 22 Dec 2004 12:59:35 -0600 (CST) Subject: [SciPy-dev] CVS scipy.test() segfaulting on me on a Fedora2 box In-Reply-To: <41C9C003.9000901@colorado.edu> References: <41C9C003.9000901@colorado.edu> Message-ID: On Wed, 22 Dec 2004, Fernando Perez wrote: > Charles Harris wrote: >> Ok, >> >> I've got the reorganized version from cvs and still don't get a segfault >> using the distro BLAS and gcc 3.4.2. > > I suspect the problem lies with ATLAS. 
I am using > atlas3.6.0_Linux_P4SSE2_2HT.tgz from the scipy website, which from reading > the included makefile, was built as follows: > > F77 = /home/pearu/bin/g77 > F77FLAGS = -fomit-frame-pointer -O -fno-second-underscore > > I think the problem is ATLAS, b/c the top of the backtrace I included > yesterday had this: > > #0 0x55cd67c2 in ATL_cdotc_xp0yp0aXbX () > from /usr/lib/python2.3/site-packages/scipy/lib/lapack/flapack.so > > That first symbol is, I imagine, an ATLAS call. But Pearu seems to have > built these binaries with pretty conservative flags (-O), and when you use > the distro BLAS (no ATLAS) you don't get any segfault. So perhaps it's a bug > in ATLAS and/or g77 which appears even with basic optimizations. Actually the Fortran lapack library that was used to complete ATLAS lapack, may have been compiled with -O3, I don't remember exactly anymore. If somebody could confirm that compiling lapack library with -O2 will fix the issue, then I'll look for rebuilding atlas binaries at scipy site. Pearu From jmiller at stsci.edu Wed Dec 22 17:25:25 2004 From: jmiller at stsci.edu (Todd Miller) Date: 22 Dec 2004 17:25:25 -0500 Subject: [SciPy-dev] scipy testing changes revisited In-Reply-To: <1102520025.21898.105.camel@halloween.stsci.edu> References: <1102461096.17245.458.camel@halloween.stsci.edu> <1102520025.21898.105.camel@halloween.stsci.edu> Message-ID: <1103754325.18822.300.camel@halloween.stsci.edu> A week or so ago we were discussing the set of changes to scipy's testing framework needed to add support for numarray. The changes I'm talking about here should be made to the main trunk of scipy CVS. After further review, I now think the array versions of the assert family of functions (e.g. assert_array_equal) are correct with respect to truth value testing. So, the only change I think is needed is the addition of "delegation code" so that calls to assert_equal, etc. defer to assert_array_equal, etc. when passed array parameters. Here's the patch: Index: testing.py =================================================================== RCS file: /home/cvsroot/world/scipy_core/scipy_test/testing.py,v retrieving revision 1.49 retrieving revision 1.49.2.3 diff -c -r1.49 -r1.49.2.3 *** testing.py 1 Dec 2004 07:08:51 -0000 1.49 --- testing.py 17 Dec 2004 18:20:28 -0000 1.49.2.3 *************** *** 634,639 **** --- 633,640 ---- """ Raise an assertion if two items are not equal. I think this should be part of unittest.py """ + if isinstance(actual, ArrayType): + return assert_array_equal(actual, desired, err_msg) msg = '\nItems are not equal:\n' + err_msg try: if ( verbose and len(repr(desired)) < 100 and len(repr(actual)) ): *************** *** 651,656 **** --- 652,659 ---- """ Raise an assertion if two items are not equal. I think this should be part of unittest.py """ + if isinstance(actual, ArrayType): + return assert_array_almost_equal(actual, desired, decimal, err_msg) msg = '\nItems are not equal:\n' + err_msg try: if ( verbose and len(repr(desired)) < 100 and len(repr(actual)) ): These changes are necessary for adding numarray support to scipy because without them there are 12 testing failures (in scipy_base.test()). The failures are all related to numarray's "outlawed" __nonzero__() and testers apparently calling the wrong assertion function. With these changes, 7 of the numarray failures go away and 5 identical failures remain for both numarray and Numeric. 
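A minimal illustration of the __nonzero__ difference referred to above (a sketch, assuming numarray is importable; the exact exception message will vary):

import numarray
a = numarray.array([1, 2, 3])
b = numarray.array([1, 2, 4])
try:
    if a == b:  # truth-testing a multi-element comparison result
        print "arrays considered equal"
except Exception, msg:
    # numarray refuses to reduce an array to a single truth value;
    # Numeric's sometrue()-style __nonzero__ would have answered
    # "true" here because some of the elements do match.
    print "numarray raised:", msg

This is why assert_equal and assert_almost_equal need to hand array arguments off to their assert_array_* counterparts.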
I'm arguing that the 5 remaining failures are either real problems or testing bugs which were masked by testers calling the wrong assert functions and by Numeric's sometrue() definition of __nonzero__(). Here are the test failures I see: ====================================================================== FAIL: check_basic (scipy_base.function_base.test_function_base.test_amax) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy_base/tests/test_function_base.py", line 72, in check_basic assert_equal(amax(b),[8.0,10.0,9.0]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 638, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 721, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 66.6666666667%): Array 1: [ 9. 10. 8.] Array 2: [ 8. 10. 9.] ====================================================================== FAIL: check_basic (scipy_base.function_base.test_function_base.test_amin) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy_base/tests/test_function_base.py", line 82, in check_basic assert_equal(amin(b),[3.0,3.0,2.0]) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 638, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 721, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 33.3333333333%): Array 1: [ 3. 4. 2.] Array 2: [ 3. 3. 2.] ====================================================================== FAIL: check_arange (scipy.special.basic.test_basic.test_arange) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 495, in check_arange assert_equal(numstring,array([0.,0.1,0.2,0.3, File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 638, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 721, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 30.4347826087%): Array 1: [ 0. 0.1 0.2 0.3 0.4 0.5 ... Array 2: [ 0. 0.1 0.2 0.3 0.4 0.5 ... 
====================================================================== FAIL: check_genlaguerre (scipy.special.basic.test_basic.test_laguerre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1573, in check_genlaguerre assert_equal(lag2.c,array([1,-2*(k+2),(k+1.)*(k+2.)])/2.0) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 638, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 721, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 66.6666666667%): Array 1: [ 0.5 -2.5842644468705798 2.0470791422443622] Array 2: [ 0.5 -2.5842644468705802 2.047079142244363 ] ====================================================================== FAIL: check_legendre (scipy.special.basic.test_basic.test_legendre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jmiller/work/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1590, in check_legendre assert_equal(leg3.c,array([5,0,-3,0])/2.0) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 638, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/home/jmiller/work/lib/python2.4/site-packages/scipy_test/testing.py", line 721, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 75.0%): Array 1: [ 2.5000000000000000e+00 -8.3266726846886741e-16 -1.4999999999999991e+00 1.1340467527089692e-17] Array 2: [ 2.5 0. -1.5 0. ] ---------------------------------------------------------------------- Ran 690 tests in 3.949s FAILED (failures=5) In each case you can see that assert_array_equal has been called from the new delegation code in assert_equal. At this point, I guess I have two questions: 1. Is this patch acceptable for the main trunk now? 2. If so, who should fix the test failures? Regards, Todd -------------- next part -------------- A non-text attachment was scrubbed... Name: testing.patch Type: text/x-patch Size: 1164 bytes Desc: not available URL: From Norbert.Nemec.list at gmx.de Fri Dec 31 11:53:27 2004 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Fri, 31 Dec 2004 17:53:27 +0100 Subject: [SciPy-dev] Patch: Tiny bugfixes Message-ID: <200412311753.27759.Norbert.Nemec.list@gmx.de> Hi there, see attached a patch with three miniscule bugfixes for the current CVS version The two fixes about real and imag are intended to make real(3.0) return 3.0 and not array([3.0]) Greetings, Norbert -- _________________________________________Norbert Nemec Bernhardstr. 2 ... D-93053 Regensburg Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199 eMail: -- _________________________________________Norbert Nemec Bernhardstr. 2 ... D-93053 Regensburg Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199 eMail: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy-bugfixes.diff Type: text/x-diff Size: 1980 bytes Desc: not available URL:
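For reference, the behaviour the real/imag fixes aim for, assuming the functions in question are the ones scipy exposes as real() and imag() (an illustration of the intended result only, not the patch itself):

import scipy
print scipy.real(3.0)  # intended: the plain scalar 3.0, not array([ 3.])
print scipy.imag(3.0)  # presumably a plain scalar 0.0 as well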