From arnd.baecker at web.de Wed Mar 1 08:22:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 1 Mar 2006 14:22:28 +0100 (CET) Subject: [SciPy-dev] some sparse errors Message-ID: Hi, I just updated my numpy/scipy installation and get some errors Out[2]: '0.4.7.1625' In [3]: scipy.test(1) import io -> failed: invalid syntax (sparse.py, line 2119) [...] Warning: FAILURE importing tests for /home/abaecker/NBB/SOFTWARE/morepub/pub_scipy/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py:23: ImportError: cannot import name csc_matrix (in ?) Note that I am running python 2.3.5 sparse/sparse.py", line 2119 return sum(len(rowvals) for rowvals in self.vals) ^ SyntaxError: invalid syntax This looks like it is some 2.4 syntax being used ? Best, Arnd From nwagner at mecha.uni-stuttgart.de Wed Mar 1 08:25:38 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 14:25:38 +0100 Subject: [SciPy-dev] some sparse errors In-Reply-To: References: Message-ID: <4405A0D2.207@mecha.uni-stuttgart.de> Arnd Baecker wrote: > Hi, > > I just updated my numpy/scipy installation and get some errors > > Out[2]: '0.4.7.1625' > In [3]: scipy.test(1) > import io -> failed: invalid syntax (sparse.py, line 2119) > > [...] > > Warning: FAILURE importing tests for '...ages/scipy/sparse/__init__.pyc'> > /home/abaecker/NBB/SOFTWARE/morepub/pub_scipy/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py:23: > ImportError: cannot import name csc_matrix (in ?) > > > Note that I am running python 2.3.5 > sparse/sparse.py", line 2119 > return sum(len(rowvals) for rowvals in self.vals) > ^ > SyntaxError: invalid syntax > > This looks like it is some 2.4 syntax being used ? 
> > Best, Arnd > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > It works fine with python2.4 Nils From schofield at ftw.at Wed Mar 1 11:03:30 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 01 Mar 2006 17:03:30 +0100 Subject: [SciPy-dev] some sparse errors In-Reply-To: References: Message-ID: <4405C5D2.3080309@ftw.at> Arnd Baecker wrote: > Hi, > > I just updated my numpy/scipy installation and get some errors > [SNIP] > This looks like it is some 2.4 syntax being used ? > Yes, thanks for letting me know. I'll fix it now. -- Ed From schofield at ftw.at Wed Mar 1 11:14:24 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 01 Mar 2006 17:14:24 +0100 Subject: [SciPy-dev] Python 2.3 support [Was: some sparse errors] In-Reply-To: <4405C5D2.3080309@ftw.at> References: <4405C5D2.3080309@ftw.at> Message-ID: <4405C860.7030907@ftw.at> Ed Schofield wrote: > Arnd Baecker wrote: > >> Hi, >> >> I just updated my numpy/scipy installation and get some errors >> >> > [SNIP] > >> This looks like it is some 2.4 syntax being used ? >> Okay, I've fixed the offending line. Please let me know if there are any more Python 2.4-isms ;) -- Ed From robert.kern at gmail.com Wed Mar 1 11:14:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 01 Mar 2006 10:14:59 -0600 Subject: [SciPy-dev] some sparse errors In-Reply-To: References: Message-ID: <4405C883.7050302@gmail.com> Arnd Baecker wrote: > Note that I am running python 2.3.5 > sparse/sparse.py", line 2119 > return sum(len(rowvals) for rowvals in self.vals) > ^ > SyntaxError: invalid syntax Correct. numpy and scipy are trying to maintain 2.3 compatibility, so this needs to be fixed. I will enter a ticket. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From cimrman3 at ntc.zcu.cz Wed Mar 1 11:32:36 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 01 Mar 2006 17:32:36 +0100 Subject: [SciPy-dev] [SciPy-user] New sparse matrix functionality In-Reply-To: <664793EC-254A-4381-906F-72AD5CD1A918@ftw.at> References: <4402C496.90204@ftw.at> <4402D324.4090207@ntc.zcu.cz> <664793EC-254A-4381-906F-72AD5CD1A918@ftw.at> Message-ID: <4405CCA4.1080105@ntc.zcu.cz> >>Do you also plan to add the c-based linked-list matrix as in PySparse >>(ll_mat.c there)? This could be even faster than using the Python >>lists >>(IMHO...). > > > Well, I guess it would be nice to have, and the code's already > written, but I don't know how we'd make it derive from the spmatrix > base class, which is written in Python. Travis mentioned back in > October that this is possible but not easy. So it would require some well, we might eventually write a llmat class in Python that will 'borrow' (and appreciate, of course ;-)) relevant ll_mat functions - we do not need the ll_mat Python object. I can live without it for now, though :-) > work. I don't need the extra speed personally -- the new class seems > to be fast enough for my needs (the bottleneck for my work is now > elsewhere :) OK, I see... Can you tell me where? Just curious :-) > An update: I've changed the matrix.__mul__ function in NumPy SVN to > return NotImplemented if the right operand defines __rmul__ and isn't > a NumPy-compatible type. This seems to work fine for * now. > Functions like numpy.dot() still won't work on sparse matrices, but I > don't really have a problem with this ;) Fine with me... In the meantime, I have added a rudimentary umfpack support to the sparse module - it is used when present by 'solve' (and can be switched off). I have also fixed the umfpack module in the sandbox for complex matrices. 
(At least I hope so :)) Still, the umfpack must be installed separately, doing the classical 'python setup.py install' in its sandbox home, because I am still struggling with a proper system_info class to detect the umfpack libraries in the system. Any help/ideas would be appreciated. r. From arnd.baecker at web.de Wed Mar 1 12:45:02 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 1 Mar 2006 18:45:02 +0100 (CET) Subject: [SciPy-dev] Python 2.3 support [Was: some sparse errors] In-Reply-To: <4405C860.7030907@ftw.at> References: <4405C5D2.3080309@ftw.at> <4405C860.7030907@ftw.at> Message-ID: On Wed, 1 Mar 2006, Ed Schofield wrote: > Ed Schofield wrote: > > Arnd Baecker wrote: [...] > >> This looks like it is some 2.4 syntax being used ? > >> > Okay, I've fixed the offending line. Please let me know if there are > any more Python 2.4-isms ;) Looks fine - Ran 1116 tests in 201.748s without problems. Many thanks! Arnd From nwagner at mecha.uni-stuttgart.de Thu Mar 2 03:35:22 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 09:35:22 +0100 Subject: [SciPy-dev] Program exited with code 012 Message-ID: <4406AE4A.1050406@mecha.uni-stuttgart.de> Hi all, The latest attempt to install numpy/scipy with ATLAS support on a 64 bit machine works but scipy.test(1,10) results in check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_algebraic_log_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad)STOP 778 Program exited with code 012. (gdb) bt No stack. I have never observed such an error before. Google http://doc.bughunter.net/format-string/format-bugs.html Is there any alternative to ATLAS ? Can I use ACML ? What is necessary to use the ACML within numpy/scipy ? 
Nils cat /proc/cpuinfo yields processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 47 model name : AMD Athlon(tm) 64 Processor 3200+ stepping : 2 cpu MHz : 2000.141 cache size : 512 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni lahf_lm bogomips : 4009.73 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc From nwagner at mecha.uni-stuttgart.de Thu Mar 2 05:41:05 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 11:41:05 +0100 Subject: [SciPy-dev] Segmentation fault Message-ID: <4406CBC1.40605@mecha.uni-stuttgart.de> I tried to track down the problem with scipy.test(1,10). I changed the lower and upper limits of the integrals by accident. Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 16384 (LWP 15072)] 0x00002aaab17fdc25 in dqc25f_ () from /usr/lib64/python2.4/site-packages/scipy/integrate/_quadpack.so (gdb) bt #0 0x00002aaab17fdc25 in dqc25f_ () from /usr/lib64/python2.4/site-packages/scipy/integrate/_quadpack.so #1 0x00002aaab17f9b64 in dqawoe_ () from /usr/lib64/python2.4/site-packages/scipy/integrate/_quadpack.so #2 0x00002aaab17f0552 in quadpack_qawoe (dummy=, args=) at __quadpack.h:420 #3 0x00002aaaaac5496a in PyEval_EvalFrame (f=0x774bf0) at ceval.c:3547 #4 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x8ac7b0) at ceval.c:3629 #5 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae039c70, globals=, locals=, args=0xb, argcount=3, kws=0x5066a8, kwcount=3, defs=0x2aaaae0570f8, defcount=11, closure=0x0) at ceval.c:2730 #6 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x506500) at ceval.c:3640 #7 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab23c70, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #8 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #9 0x00002aaaaac67cc7 in PyImport_ExecCodeModuleEx (name=0x7fffffbbfb50 "quadtest", co=0x2aaaaab23c70, pathname=) at import.c:619 Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: quadtest.py Type: text/x-python Size: 663 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Thu Mar 2 08:25:39 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 14:25:39 +0100 Subject: [SciPy-dev] Another segfault Message-ID: <4406F253.7060304@mecha.uni-stuttgart.de> Hi all, I have no idea why this test results in a segfault. Maybe its a compiler issue ? (g77 versus gfortran) I have installed numpy/scipy via python setup.py config_fc --fcompiler=gnu95 install >>> import sparse_test.py Use minimum degree ordering on A'+A. Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 16384 (LWP 2632)] 0x00002aaab23e50b3 in dpivotL (jcol=0, u=1, usepr=0x7fffffb2a204, perm_r=0xa0fa40, iperm_r=0x816b50, iperm_c=0x9ffed0, pivrow=0x7fffffb2a210, Glu=0x9a0ac0, stat=0xffffffffab80bb68) at dpivotL.c:120 120 perm_r[*pivrow] = jcol; (gdb) bt #0 0x00002aaab23e50b3 in dpivotL (jcol=0, u=1, usepr=0x7fffffb2a204, perm_r=0xa0fa40, iperm_r=0x816b50, iperm_c=0x9ffed0, pivrow=0x7fffffb2a210, Glu=0x9a0ac0, stat=0xffffffffab80bb68) at dpivotL.c:120 #1 0x00002aaab23dadd0 in dgstrf (options=, A=0x7fffffb2a330, drop_tol=, relax=, panel_size=10, etree=, work=, lwork=10549824, perm_c=0x945350, perm_r=0xa0fa40, L=0x2aaab508c7d0, U=0x2aaab508c7f0, stat=0x7fffffb2a310, info=0x7fffffb2a35c) at dgstrf.c:310 #2 0x00002aaab23bdb4b in newSciPyLUObject (A=0x7fffffb2a3f0, diag_pivot_thresh=1, drop_tol=0, relax=1, panel_size=10, permc_spec=2, intype=) at _superluobject.c:372 #3 0x00002aaab23bc9ba in Py_dgstrf (self=, args=, keywds=) at _dsuperlumodule.c:187 #4 0x00002aaaaac5496a in PyEval_EvalFrame (f=0xace630) at ceval.c:3547 #5 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae7e01f0, globals=, locals=, args=0x5, argcount=1, kws=0x506698, kwcount=0, defs=0x2aaaae7cef08, defcount=5, closure=0x0) at ceval.c:2730 #6 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x506500) at ceval.c:3640 #7 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab23b90, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #8 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #9 0x00002aaaaac67cc7 in PyImport_ExecCodeModuleEx (name=0x7fffffb2b370 "sparse_test", co=0x2aaaaab23b90, pathname=) at import.c:619 #10 0x00002aaaaac68e3d in load_source_module (name=0x7fffffb2b370 "sparse_test", pathname=0x7fffffb2aeb0 "sparse_test.py", fp=) at import.c:893 #11 0x00002aaaaac6a854 in import_submodule (mod=0x2aaaaadbee30, subname=0x7fffffb2b370 "sparse_test", fullname=0x7fffffb2b370 "sparse_test") at 
import.c:2250 #12 0x00002aaaaac6aa72 in load_next (mod=0x2aaaaadbee30, altmod=0x2aaaaadbee30, p_name=, buf=0x7fffffb2b370 "sparse_test", p_buflen=0x7fffffb2b78c) at import.c:2070 #13 0x00002aaaaac6af07 in PyImport_ImportModuleEx (name=0x2aaaaaae0360 "\002", globals=0x2aaaaaae0384, locals=, fromlist=0x2aaaaadbee30) at import.c:1905 #14 0x00002aaaaac47d73 in builtin___import__ (self=, args=) at bltinmodule.c:45 Any suggestion how to resolve this problem ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: sparse_test.py Type: text/x-python Size: 320 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Thu Mar 2 10:05:58 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 16:05:58 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <4406F253.7060304@mecha.uni-stuttgart.de> References: <4406F253.7060304@mecha.uni-stuttgart.de> Message-ID: <440709D6.2040305@mecha.uni-stuttgart.de> Nils Wagner wrote: > Hi all, > > I have no idea why this test results in a segfault. Maybe its a compiler > issue ? (g77 versus gfortran) > > I have installed numpy/scipy via > > python setup.py config_fc --fcompiler=gnu95 install > > >>>> import sparse_test.py >>>> > Use minimum degree ordering on A'+A. > > Program received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread 16384 (LWP 2632)] > 0x00002aaab23e50b3 in dpivotL (jcol=0, u=1, usepr=0x7fffffb2a204, > perm_r=0xa0fa40, iperm_r=0x816b50, iperm_c=0x9ffed0, > pivrow=0x7fffffb2a210, Glu=0x9a0ac0, stat=0xffffffffab80bb68) at > dpivotL.c:120 > 120 perm_r[*pivrow] = jcol; > (gdb) bt > #0 0x00002aaab23e50b3 in dpivotL (jcol=0, u=1, usepr=0x7fffffb2a204, > perm_r=0xa0fa40, iperm_r=0x816b50, iperm_c=0x9ffed0, > pivrow=0x7fffffb2a210, Glu=0x9a0ac0, stat=0xffffffffab80bb68) at > dpivotL.c:120 > #1 0x00002aaab23dadd0 in dgstrf (options=, > A=0x7fffffb2a330, drop_tol=, > relax=, panel_size=10, etree= out>, work=, lwork=10549824, > perm_c=0x945350, perm_r=0xa0fa40, L=0x2aaab508c7d0, > U=0x2aaab508c7f0, stat=0x7fffffb2a310, info=0x7fffffb2a35c) > at dgstrf.c:310 > #2 0x00002aaab23bdb4b in newSciPyLUObject (A=0x7fffffb2a3f0, > diag_pivot_thresh=1, drop_tol=0, relax=1, panel_size=10, > permc_spec=2, intype=) at _superluobject.c:372 > #3 0x00002aaab23bc9ba in Py_dgstrf (self=, > args=, keywds=) > at _dsuperlumodule.c:187 > #4 0x00002aaaaac5496a in PyEval_EvalFrame (f=0xace630) at ceval.c:3547 > #5 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae7e01f0, > globals=, > locals=, args=0x5, argcount=1, kws=0x506698, > kwcount=0, defs=0x2aaaae7cef08, defcount=5, > closure=0x0) at ceval.c:2730 > #6 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x506500) at ceval.c:3640 > #7 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab23b90, > globals=, > locals=, args=0x0, argcount=0, kws=0x0, > kwcount=0, defs=0x0, defcount=0, closure=0x0) > at ceval.c:2730 > #8 0x00002aaaaac556d2 in PyEval_EvalCode (co=, > globals=, > locals=) at ceval.c:484 > #9 0x00002aaaaac67cc7 in PyImport_ExecCodeModuleEx (name=0x7fffffb2b370 > "sparse_test", co=0x2aaaaab23b90, > pathname=) at import.c:619 > #10 0x00002aaaaac68e3d in load_source_module (name=0x7fffffb2b370 > "sparse_test", pathname=0x7fffffb2aeb0 "sparse_test.py", > fp=) at import.c:893 > #11 0x00002aaaaac6a854 in import_submodule 
(mod=0x2aaaaadbee30, > subname=0x7fffffb2b370 "sparse_test", > fullname=0x7fffffb2b370 "sparse_test") at import.c:2250 > #12 0x00002aaaaac6aa72 in load_next (mod=0x2aaaaadbee30, > altmod=0x2aaaaadbee30, p_name=, > buf=0x7fffffb2b370 "sparse_test", p_buflen=0x7fffffb2b78c) at > import.c:2070 > #13 0x00002aaaaac6af07 in PyImport_ImportModuleEx (name=0x2aaaaaae0360 > "\002", globals=0x2aaaaaae0384, > locals=, fromlist=0x2aaaaadbee30) at import.c:1905 > #14 0x00002aaaaac47d73 in builtin___import__ (self= out>, args=) at bltinmodule.c:45 > > Any suggestion how to resolve this problem ? > > Nils > > > ------------------------------------------------------------------------ > > from scipy import * > from scipy.sparse import * > n = 20 > A = csc_matrix((n,n)) > x = rand(n) > # > # Segmentation fault for complex entries > # > y = rand(n-1)+1j*rand(n-1) > r = rand(n) > for i in range(len(x)): > A[i,i] = x[i] > for i in range(len(y)): > A[i,i+1] = y[i] > A[i+1,i] = conjugate(y[i]) > xx = sparse.lu_factor(A).solve(r) > > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > I have disabled ATLAS and recompiled numpy/scipy from scratch using g77 instead of gfortran. The segfault w.r.t. sparse_test.py persists but the problem w.r.t. to quadtest.py vanishes. Nils From robert.kern at gmail.com Thu Mar 2 10:26:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Mar 2006 09:26:24 -0600 Subject: [SciPy-dev] Another segfault In-Reply-To: <4406F253.7060304@mecha.uni-stuttgart.de> References: <4406F253.7060304@mecha.uni-stuttgart.de> Message-ID: <44070EA0.5090609@gmail.com> Nils Wagner wrote: > Hi all, > > I have no idea why this test results in a segfault. Maybe its a compiler > issue ? 
(g77 versus gfortran) I have gotten segfaults using gfortran on code that does not segfault with a decent Fortran 90 compiler. I would not recommend using gfortran for anything at this point. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Thu Mar 2 10:40:06 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 16:40:06 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <44070EA0.5090609@gmail.com> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> Message-ID: <440711D6.5050502@mecha.uni-stuttgart.de> Robert Kern wrote: > Nils Wagner wrote: > >> Hi all, >> >> I have no idea why this test results in a segfault. Maybe it's a compiler >> issue ? (g77 versus gfortran) >> > > I have gotten segfaults using gfortran on code that does not segfault with a > decent Fortran 90 compiler. I would not recommend using gfortran for anything at > this point. > > Robert, Thank you for your advice. This note should be available on the Wiki as a warning for other users. I have spent more than one day on this - pure waste of time. Anyway the segfault w.r.t. sparse_test.py persists independent from g77/gfortran. Can you reproduce this segfault ? Nils From schofield at ftw.at Thu Mar 2 11:05:22 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 02 Mar 2006 17:05:22 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <440711D6.5050502@mecha.uni-stuttgart.de> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> Message-ID: <440717C2.9040004@ftw.at> Nils Wagner wrote: > Thank you for your advice. This note should be available on the Wiki as a > warning for other users. I have spent more than one day on this - pure > waste of time. > Anyway the segfault w.r.t.
sparse_test.py persists independent from > g77/gfortran. > Can you reproduce this segfault ? > I can't. I'm using g77 v3.4.6 with a default distutils config. Does it work if you install scipy from scratch this way? It may be a bug, rather than a configuration error, but we need to reproduce it to fix it. -- Ed From nwagner at mecha.uni-stuttgart.de Thu Mar 2 11:11:24 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Mar 2006 17:11:24 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <440717C2.9040004@ftw.at> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> Message-ID: <4407192C.1030808@mecha.uni-stuttgart.de> Ed Schofield wrote: > Nils Wagner wrote: > >> Thank you for your advice. This note should be available on the Wiki as a >> warning for other users. I have spend more than one day on this - pure >> waste of time. >> Anyway the segfault w.r.t. sparse_test.py persists independent from >> g77/gfortran. >> Can you reproduce this segfault ? >> >> > I can't. I'm using g77 v3.4.6 with a default distutils config. Does it > work if you install scipy from scratch this way? It may be a bug, > rather than a configuration error, but we need to reproduce it to fix it. > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Ed, Please find attached my configuration. Are you on a 64 bit machine ? 
Reading specs from /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/specs Configured with: ../configure --enable-threads=posix --prefix=/usr --with-local-prefix=/usr/local --infodir=/usr/share/info --mandir=/usr/share/man --enable-languages=c,c++,f77,objc,java,ada --disable-checking --libdir=/usr/lib64 --enable-libgcj --with-slibdir=/lib64 --with-system-zlib --enable-shared --enable-__cxa_atexit x86_64-suse-linux Thread model: posix gcc version 3.3.5 20050117 (prerelease) (SUSE Linux) Linux rachel 2.6.11.4-21.11-default #1 Thu Feb 2 20:54:26 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux Numpy version 0.9.6.2192 Scipy version 0.4.7.1627 dfftw_info: NOT AVAILABLE fft_opt_info: NOT AVAILABLE mkl_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] language = f77 fftw2_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] language = c atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] atlas_threads_info: NOT AVAILABLE Just now I have tested it on a 32 bit machine. It works fine there ! 
Linux amanda 2.6.11.4-21.10-default #1 Tue Nov 29 14:32:49 UTC 2005 i686 athlon i386 GNU/Linux Numpy version 0.9.6.2191 Scipy version 0.4.7.1625 atlas_threads_info: NOT AVAILABLE fft_opt_info: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] atlas_blas_threads_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = f77 fftw3_info: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] mkl_info: NOT AVAILABLE So I guess its a 64 bit issue ? Nils From zpincus at stanford.edu Thu Mar 2 14:50:01 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 2 Mar 2006 11:50:01 -0800 Subject: [SciPy-dev] Stats t-test broken on 1D input Message-ID: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> Hi folks, Sorry to re-post, but nobody in scipy-user seemed too concerned that the most basic form of the most basic statistical test -- the T-test on 1D input -- is broken in scipy 0.4.6. 
In [1]: import scipy.stats In [2]: scipy.version.version Out[2]: '0.4.6' In [3]: scipy.stats.ttest_ind([1,2,3],[1,2,3]) TypeError: len() of unsized object In [4]: scipy.stats.ttest_rel([1,2,3],[1,2,3]) TypeError: len() of unsized object The problem is at lines 1463 and 1512 of stats.py, where len() is applied to a scalar result. This can be fixed by changing the blocks that look like:

if type(t) == ArrayType:
    probs = reshape(probs,t.shape)
if len(probs) == 1:
    probs = probs[0]

to look like:

if type(t) == ArrayType:
    probs = reshape(probs,t.shape)
    if len(probs) == 1:
        probs = probs[0]

That is, probs is guaranteed to be a scalar (I think) if t is not an array. I assume that in a previous version of scipy this was not the case, but now it is. Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine From schofield at ftw.at Thu Mar 2 17:13:47 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 2 Mar 2006 23:13:47 +0100 Subject: [SciPy-dev] Stats t-test broken on 1D input In-Reply-To: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> References: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> Message-ID: <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> On 02/03/2006, at 8:50 PM, Zachary Pincus wrote: > Hi folks, > > Sorry to re-post, but nobody in scipy-user seemed too concerned that > the most basic form of the most basic statistical test -- the T-test > on 1D input -- is broken in scipy 0.4.6. Thanks for your insistence in posting this. I've now fixed it (I hope) in SVN. I actually think this is an old bug. We don't have ANY unit tests for the T-test (or normality test, etc.), and I think you've stumbled across it because you're the first one to have used it since long, long ago ... The ttest_ind function actually has a comment that it's "from Numerical Recipes". So I think, even though it's been Pythonized, we don't have permission to distribute it and we need to remove it altogether. 
So we really need some more help with the stats module ... -- Ed From zpincus at stanford.edu Thu Mar 2 17:25:59 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 2 Mar 2006 14:25:59 -0800 Subject: [SciPy-dev] Stats t-test broken on 1D input In-Reply-To: <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> References: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> Message-ID: <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> > The ttest_ind function actually has a comment that it's "from > Numerical Recipes". So I think, even though it's been Pythonized, we > don't have permission to distribute it and we need to remove > it altogether. So we really need some more help with the stats > module ... Re-writing something in a completely different language isn't transformative enough (especially given its short length) to make the resulting code no longer copyrighted by Numerical Recipes? I am of course NAL, but I find that surprising. Zach From robert.kern at gmail.com Thu Mar 2 17:52:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Mar 2006 16:52:59 -0600 Subject: [SciPy-dev] Stats t-test broken on 1D input In-Reply-To: <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> References: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> Message-ID: <4407774B.3040502@gmail.com> [Ed Schofield:] >>The ttest_ind function actually has a comment that it's "from >>Numerical Recipes". So I think, even though it's been Pythonized, we >>don't have permission to distribute it and we need to remove >>it altogether. So we really need some more help with the stats >>module ... [Zachary Pincus:] > Re-writing something in a completely different language isn't > transformative enough (especially given its short length) to make the > resulting code no longer copyrighted by Numerical Recipes? 
> > I am of course NAL, but I find that surprising. I don't think there is a problem. The algorithm is so small and well-known that, for all the points of similarity between our code and Numerical Recipes', no competent programmer would implement it any differently. Copying NR code verbatim would be legally iffy, and I would reject it immediately, but this seems okay. IANAL. TINLA. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Thu Mar 2 18:18:22 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 02 Mar 2006 16:18:22 -0700 Subject: [SciPy-dev] Stats t-test broken on 1D input In-Reply-To: <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> References: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> Message-ID: <44077D3E.30302@ee.byu.edu> Zachary Pincus wrote: >>The ttest_ind function actually has a comment that it's "from >>Numerical Recipes". So I think, even though it's been Pythonized, we >>don't have permission to distribute it and we need to remove >>it altogether. So we really need some more help with the stats >>module ... >> >> > > > The stats module has a lot of code adapted from Gary Strangman's original code. It could definitely use more cleaning up. Some of the functions should be removed entirely. Any help with this one is very much appreciated. 
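[Editorial note: Robert's point that the algorithm is "so small and well-known" is easy to make concrete. The sketch below is not SciPy's implementation (and it computes only the t statistic, not the p-value), but it shows the textbook equal-variance two-sample formula working on exactly the plain 1-D lists from the bug report; list comprehensions keep it Python 2.3-friendly, in the spirit of the earlier thread.]

```python
import math

def ttest_ind(a, b):
    """Equal-variance two-sample t statistic for plain 1-D sequences."""
    na, nb = len(a), len(b)
    ma = sum(a) / float(na)
    mb = sum(b) / float(nb)
    # Unbiased sample variances.
    va = sum([(x - ma) ** 2 for x in a]) / (na - 1)
    vb = sum([(x - mb) ** 2 for x in b]) / (nb - 1)
    # Pooled variance, then the standard error of the mean difference.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return (ma - mb) / se

print(ttest_ind([1, 2, 3], [1, 2, 3]))  # 0.0
```

Turning the t statistic into a p-value needs the incomplete beta function, which is where the Numerical Recipes-derived code in stats.py came in; that part is exactly what the licensing discussion above is about.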
From schofield at ftw.at Thu Mar 2 18:24:43 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 3 Mar 2006 00:24:43 +0100 Subject: [SciPy-dev] Stats t-test broken on 1D input In-Reply-To: <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> References: <81F11E44-BA5B-40A7-94D5-F80B8924D424@stanford.edu> <9AF83E8D-8266-4A10-A63C-E88B2684E4E4@ftw.at> <9A91DC14-68EA-4CC7-A62E-0870432FF2B0@stanford.edu> Message-ID: > Re-writing something in a completely different language isn't > transformative enough (especially given its short length) to make the > resulting code no longer copyrighted by Numerical Recipes? > > I am of course NAL, but I find that surprising. Well, this is my interpretation of the book's License Information section (at http://www.library.cornell.edu/nr/bookcpdf/c0-1.pdf). Translations of novels into other languages are also considered "derivative works" and require permission of the copyright holder. But it's hard to draw the line between idea and expression in computer code. If it's really a straightforward implementation of a well-known algorithm, like Robert argues, then perhaps we'd just want to remove the references to NR, rename the variables, add some nice comments -- and stamp out the bugs ... -- Ed From mmetz at astro.uni-bonn.de Fri Mar 3 03:34:24 2006 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Fri, 03 Mar 2006 09:34:24 +0100 Subject: [SciPy-dev] Kuipers statistics In-Reply-To: <43FAF3EE.2020408@ftw.at> References: <43F97DAD.1010905@astro.uni-bonn.de> <43FAF3EE.2020408@ftw.at> Message-ID: <4407FF90.5050906@astro.uni-bonn.de> Hm, I didn't know that 'cephes' is a netlib library (which I found out now). Then, maybe yes, Lib/special/c_misc/ would be a good place - I guess. What about the inverse function? Is it needed? And what about the Python code? Should I also try to provide a patch? 
Manuel (note that I actually have not compiled the code in the SciPy framework, since I don't have numpy installed yet - so it may need some testing) Ed Schofield wrote: > Manuel Metz wrote: > > >>Hi, >>I have attached a patch for the file /Lib/special/cephes/kolmogorov.c. >>This patch adds a function 'kuiper(y)' which is very similar to the >>function 'kolmogorov(y)', which is required for the K-S-Test. Kuipers >>statistics is a variant of K-S statistics which is invariant on a circle. >> >>I did not write an inverse function yet (it's not monotonic in its >>first derivative) and also did not include patches for the corresponding >>.py files. >> >>Is there any interest to include the Kuiper statistics into SciPy ??? >> >> > > Thanks for the patch! I don't know much about this field, but I suspect > we'd be glad to include it. > > I'm not sure which directory the function should go in, since it's not > part of the cephes library. Perhaps Lib/special/c_misc/? > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- --------------------------------------- Manuel Metz ............ Stw at AIfA Argelander Institut fuer Astronomie Auf dem Huegel 71 (room 3.06) D - 53121 Bonn E-Mail: mmetz at astro.uni-bonn.de Web: www.astro.uni-bonn.de/~mmetz Phone: (+49) 228 / 73-3660 Fax: (+49) 228 / 73-3672 --------------------------------------- From cimrman3 at ntc.zcu.cz Fri Mar 3 06:17:05 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 03 Mar 2006 12:17:05 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <440717C2.9040004@ftw.at> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> Message-ID: <440825B1.7060705@ntc.zcu.cz> Ed Schofield wrote: > Nils Wagner wrote: > >>Thank you for your advice. 
This note should be available on the Wiki as a
>>warning for other users. I have spent more than one day on this - pure
>>waste of time.
>>Anyway the segfault w.r.t. sparse_test.py persists independent of
>>g77/gfortran.
>>Can you reproduce this segfault ?
>>
> I can't. I'm using g77 v3.4.6 with a default distutils config. Does it
> work if you install scipy from scratch this way? It may be a bug,
> rather than a configuration error, but we need to reproduce it to fix it.

The sparse_test.py works ok for me, but if one runs sparse.py module as
a script, a segfault occurs in solve() when using LU. (UMFPACK works ok,
if present, that is.) Can you reproduce this, Ed?

r.

From nwagner at mecha.uni-stuttgart.de Fri Mar 3 07:08:20 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 03 Mar 2006 13:08:20 +0100
Subject: [SciPy-dev] Another segfault
In-Reply-To: <440825B1.7060705@ntc.zcu.cz>
References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz>
Message-ID: <440831B4.2040703@mecha.uni-stuttgart.de>

Robert Cimrman wrote:
> [SNIP]
Robert,

Good to hear that you can reproduce the segfault. I have posted the gdb
output to the list before. Isn't it useful?

Cheers,
Nils

BTW, I have recompiled ATLAS from scratch using gcc4.0.2 and g77. Now all
tests numpy.test(1,10) and scipy.test(1,10) passed. Great !! I have
attached the platform dependent Makefile for completeness.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Make.Linux_HAMMER64SSE3
URL:

From schofield at ftw.at Fri Mar 3 09:42:25 2006
From: schofield at ftw.at (Ed Schofield)
Date: Fri, 03 Mar 2006 15:42:25 +0100
Subject: [SciPy-dev] Another segfault
In-Reply-To: <440825B1.7060705@ntc.zcu.cz>
References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz>
Message-ID: <440855D1.7060506@ftw.at>

Robert Cimrman wrote:
> [SNIP]
> The sparse_test.py works ok for me, but if one runs sparse.py module as
> a script, a segfault occurs in solve() when using LU. (UMFPACK works ok,
> if present, that is.) Can you reproduce this, Ed?
>
No, I can't. (I only have a 32-bit machine.) I've added a unit test
that runs Nils' code.
If others start reporting segfaults we're in business ;)

I've fixed some bugs in the handling of data type arguments and added a
small unit test for tocsc() conversion with complex data types. This is
unlikely to fix the segfault problem though ...

-- Ed

From nwagner at mecha.uni-stuttgart.de Fri Mar 3 10:20:23 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 03 Mar 2006 16:20:23 +0100
Subject: [SciPy-dev] Another segfault
In-Reply-To: <440855D1.7060506@ftw.at>
References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at>
Message-ID: <44085EB7.8060903@mecha.uni-stuttgart.de>

Ed Schofield wrote:
> [SNIP]
Hi Ed,

Thanks for adding the unit test. I have installed numpy/scipy from scratch
using a 32 and a 64 bit machine. The segfault is definitely a 64 bit
problem.

Nils

On a 32 bit machine

scipy.test(1,10)
Ran 1111 tests in 7.764s
OK
>>> scipy.__version__
'0.4.7.1631'
>>>

On a 64 bit machine

scipy.test(1,10)
check_rmatvec (scipy.sparse.tests.test_sparse.test_csc) ... ok
check_setelement (scipy.sparse.tests.test_sparse.test_csc) ... ok
Test whether the lu_solve command segfaults, as reported by NilsUse minimum degree ordering on A'+A.
Segmentation fault

From jesper.friis at material.ntnu.no Fri Mar 3 10:56:46 2006
From: jesper.friis at material.ntnu.no (Jesper Friis)
Date: Fri, 03 Mar 2006 16:56:46 +0100
Subject: [SciPy-dev] Patch fixing problems in integrate/ode.py for banded systems
Message-ID: <4408673E.5040704@material.ntnu.no>

Here is a small patch (just 2 lines) that enters the size of the upper
and lower diagonals in the work arrays before calling DVODE. Without
this patch DVODE will return immediately, complaining about illegal input.

Another minor problem with the interface to DVODE is that the function
calculating the Jacobian is expected to return a matrix of size NROWPD
by len(y), where NROWPD unfortunately is not provided as an argument to
the Jacobian function in the python interface. For tridiagonal systems
NROWPD is e.g. 4 and not 3 as one would expect. On the other hand, an
error message is printed when a matrix of the wrong size is returned,
so this problem is easy to fix...
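The NROWPD surprise Jesper describes comes from DVODE's banded-Jacobian storage: the packed array has 2*ML+MU+1 rows, the extra ML rows being workspace for LU fill-in, so a tridiagonal system (ML = MU = 1) needs 4 rows rather than 3. A stdlib-Python sketch of that packing convention (element (i, j) stored at row i - j + mu, 0-based; an illustration of the storage scheme, not scipy code):

```python
def pack_banded(dense, ml, mu):
    # Pack a dense n x n matrix with lower bandwidth ml and upper
    # bandwidth mu into DVODE-style band storage: element (i, j) goes
    # to row i - j + mu of a (2*ml + mu + 1) x n array.  The bottom
    # ml rows stay zero -- workspace for the LU factorization, which
    # is why NROWPD is 4 rather than 3 for a tridiagonal system.
    n = len(dense)
    nrowpd = 2 * ml + mu + 1
    pd = [[0.0] * n for _ in range(nrowpd)]
    for j in range(n):
        for i in range(max(0, j - mu), min(n, j + ml + 1)):
            pd[i - j + mu][j] = dense[i][j]
    return pd

# A tridiagonal example: ml = mu = 1 gives a 4-row packed Jacobian.
tri = [[2.0, -1.0, 0.0, 0.0],
       [-1.0, 2.0, -1.0, 0.0],
       [0.0, -1.0, 2.0, -1.0],
       [0.0, 0.0, -1.0, 2.0]]
pd = pack_banded(tri, 1, 1)
```

Row mu holds the main diagonal, the rows above it the superdiagonals, the rows below it the subdiagonals, and the final ml rows stay zero.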
--- scipy-0.4.6/Lib/integrate/ode.py.org 2006-03-03 13:20:36.000000000 +0100
+++ scipy-0.4.6/Lib/integrate/ode.py 2006-03-03 16:17:17.000000000 +0100
@@ -337,6 +337,8 @@
         rwork[6] = self.min_step
         self.rwork = rwork
         iwork = zeros((liw,),'i')
+        if self.ml != None: iwork[0] = self.ml
+        if self.mu != None: iwork[1] = self.mu
         iwork[4] = self.order
         iwork[5] = self.nsteps
         iwork[6] = 2 # mxhnil

From cimrman3 at ntc.zcu.cz Fri Mar 3 10:58:17 2006
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 03 Mar 2006 16:58:17 +0100
Subject: [SciPy-dev] Another segfault
In-Reply-To: <440855D1.7060506@ftw.at>
References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at>
Message-ID: <44086799.7050802@ntc.zcu.cz>

Ed Schofield wrote:
> Robert Cimrman wrote:
>>The sparse_test.py works ok for me, but if one runs sparse.py module as
>>a script, a segfault occurs in solve() when using LU. (UMFPACK works ok,
>>if present, that is.) Can you reproduce this, Ed?
>>
> No, I can't. (I only have a 32-bit machine.) I've added a unit test
> that runs Nils' code. If others start reporting segfaults we're in
> business ;)

I have a 32-bit machine too. Nils' code runs ok for me, but running
'python /Lib/sparse/sparse.py' causes a segfault,
strange, see below. BTW. I have modified the getdtype() function so that
the dtype is always one of 'fdFD', as it is required by the superLU code
(_transtabl...). I hope it will not break anything :-)

r.

gdb --exec=/usr/bin/python
(gdb) r /home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py

...

Solve: single precision:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 16384 (LWP 20727)]
0xb7e8ce75 in PyDict_DelItem (op=0x0, key=0x80b04b4) at dictobject.c:579
579 dictobject.c: není souborem ani adresářem.
in dictobject.c (gdb) bt #0 0xb7e8ce75 in PyDict_DelItem (op=0x0, key=0x80b04b4) at dictobject.c:579 #1 0xb6c2a925 in superlu_python_module_free () from /home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/_dsuperlu.so #2 0xb6c40e62 in Destroy_SuperMatrix_Store () from /home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/_dsuperlu.so #3 0xb6c2a4d1 in Py_dgssv () from /home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/_dsuperlu.so #4 0xb7e90220 in PyCFunction_Call (func=0xb7adf10c, arg=0xb79eb17c, kw=0x0) at methodobject.c:77 #5 0xb7ed2857 in call_function (pp_stack=0xbfb8b0d8, oparg=8) at ceval.c:3558 #6 0xb7ecf9a6 in PyEval_EvalFrame (f=0x80856ac) at ceval.c:2163 #7 0xb7ed0e3e in PyEval_EvalCodeEx (co=0xb7893560, globals=0xb7bc1824, locals=0x0, args=0x807d6dc, argcount=2, kws=0x807d6e4, kwcount=0, defs=0xb79e3a58, defcount=1, closure=0x0) at ceval.c:2736 #8 0xb7ed2b88 in fast_function (func=0xb79eb09c, pp_stack=0xbfb8b318, n=2, na=2, nk=0) at ceval.c:3651 #9 0xb7ed292e in call_function (pp_stack=0xbfb8b318, oparg=2) at ceval.c:3579 #10 0xb7ecf9a6 in PyEval_EvalFrame (f=0x807d58c) at ceval.c:2163 #11 0xb7ed0e3e in PyEval_EvalCodeEx (co=0xb7893720, globals=0xb7bc1824, locals=0xb7bc1824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, ---Type to continue, or q to quit--- defcount=0, closure=0x0) at ceval.c:2736 #12 0xb7ecc2ae in PyEval_EvalCode (co=0xb7893720, globals=0xb7bc1824, locals=0xb7bc1824) at ceval.c:484 #13 0xb7ef77c1 in run_node (n=0xb7ba92d8, filename=0xbfb8cf55 "/home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", globals=0xb7bc1824, locals=0xb7bc1824, flags=0xbfb8b560) at pythonrun.c:1265 #14 0xb7ef7758 in run_err_node (n=0xb7ba92d8, filename=0xbfb8cf55 "/home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", globals=0xb7bc1824, locals=0xb7bc1824, flags=0xbfb8b560) at pythonrun.c:1252 #15 0xb7ef7716 in PyRun_FileExFlags (fp=0x804ad88, filename=0xbfb8cf55 
"/home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", start=257, globals=0xb7bc1824, locals=0xb7bc1824, closeit=1, flags=0xbfb8b560) at pythonrun.c:1243 #16 0xb7ef6647 in PyRun_SimpleFileExFlags (fp=0x804ad88, filename=0xbfb8cf55 "/home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", closeit=1, flags=0xbfb8b560) at pythonrun.c:860 #17 0xb7ef5ee1 in PyRun_AnyFileExFlags (fp=0x804ad88, filename=0xbfb8cf55 "/home/share/software/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", closeit=1, flags=0xbfb8b560) at pythonrun.c:664 #18 0xb7efe703 in Py_Main (argc=2, argv=0xbfb8b614) at main.c:484 #19 0x080486b2 in ?? () From nwagner at mecha.uni-stuttgart.de Fri Mar 3 14:21:11 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 20:21:11 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <44086799.7050802@ntc.zcu.cz> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> Message-ID: On Fri, 03 Mar 2006 16:58:17 +0100 Robert Cimrman wrote: > Ed Schofield wrote: >> Robert Cimrman wrote: >>>The sparse_test.py works ok for me, but if one runs >>>sparse.py module as >>>a script, a segfault occurs in solve() when using LU. >>>(UMFPACK works ok, >>>if present, that is.) Can you reproduce this, Ed? >>> >> >> No, I can't. (I only have a 32-bit machine.) I've >>added a unit test >> that runs Nils' code. If others start reporting >>segfaults we're in >> business ;) > > I have a 32-bit machine too. Nils' code runs ok for me, >but running > 'python /Lib/sparse/sparse.py' causes >a segfault, > strange, see below. BTW. I have modified the getdtype() >function so that > the dtype is always one of 'fdFD', as it is required by >the superLU code > (_transtabl...). I hope it will not break anything :-) > > r. 
> [SNIP]

On a 32-bit machine

Solve: single precision complex:
Solve: double precision complex:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1076175008 (LWP 22590)]
0x08081cdb in ?? ()
(gdb) bt
#0 0x08081cdb in ?? ()
#1 0x4265913c in ?? () from /usr/local/lib/python2.4/site-packages/scipy/sparse/_csuperlu.so
#2 0x081b7118 in ?? ()
#3 0xbfffe5b8 in ??
() #4 0x420c2f39 in superlu_python_module_free (ptr=0x4038c34c) at _superlu_utils.c:61 #5 0x420c2f39 in superlu_python_module_free (ptr=0x81b7118) at _superlu_utils.c:61 #6 0x420d72df in Destroy_SuperMatrix_Store (A=0x4038c34c) at util.c:56 #7 0x420c2d4a in Py_cgssv (self=0x0, args=0x4029de2c, kwdict=0x0) at _csuperlumodule.c:94 #8 0x0811eb56 in ?? () #9 0x00000000 in ?? () #10 0x4029de2c in ?? () #11 0x00000000 in ?? () #12 0x00000003 in ?? () #13 0xffffffff in ?? () #14 0x405b8240 in ?? () #15 0x420c2a30 in Py_cgstrf () at _csuperlumodule.c:154 #16 0x080c74ed in ?? () #17 0x4037f3ac in ?? () #18 0x4029de2c in ?? () #19 0x00000000 in ?? () #20 0x0814d940 in ?? () #21 0x0000000b in ?? () #22 0x2a080909 in ?? () #23 0x4035735c in ?? () #24 0x4038c40c in ?? () #25 0x4064bd60 in _PyCLongDouble_ArrFuncs () from /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so #26 0x403898ac in ?? () #27 0xbfffe7b8 in ?? () #28 0x0805f312 in ?? () #29 0x403898ac in ?? () #30 0x405af0e0 in ?? () #31 0x4064bd60 in _PyCLongDouble_ArrFuncs () from /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so Nils From nwagner at mecha.uni-stuttgart.de Fri Mar 3 14:47:26 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 20:47:26 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <44086799.7050802@ntc.zcu.cz> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> Message-ID: On Fri, 03 Mar 2006 16:58:17 +0100 Robert Cimrman wrote: > Ed Schofield wrote: >> Robert Cimrman wrote: >>>The sparse_test.py works ok for me, but if one runs >>>sparse.py module as >>>a script, a segfault occurs in solve() when using LU. >>>(UMFPACK works ok, >>>if present, that is.) Can you reproduce this, Ed? >>> >> >> No, I can't. (I only have a 32-bit machine.) 
I've
>> added a unit test that runs Nils' code. If others start reporting
>> segfaults we're in business ;)
>
> I have a 32-bit machine too. Nils' code runs ok for me, but running
> 'python /Lib/sparse/sparse.py' causes a segfault,
> strange, see below. BTW. I have modified the getdtype() function so that
> the dtype is always one of 'fdFD', as it is required by the superLU code
> (_transtabl...). I hope it will not break anything :-)
>
> r.
>
> [SNIP]

On a 64 bit machine scipy.test(1,10) results in a segfault.
The bt message differs from the bt on a 32 bit machine.
For what reason?
Test whether the lu_solve command segfaults, as reported by NilsUse minimum degree ordering on A'+A. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 16384 (LWP 5170)] 0x00002aaaae34d787 in zpivotL (jcol=0, u=1, usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, stat=0xffffffffab80bb68) at zpivotL.c:121 121 perm_r[*pivrow] = jcol; (gdb) bt #0 0x00002aaaae34d787 in zpivotL (jcol=0, u=1, usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, stat=0xffffffffab80bb68) at zpivotL.c:121 #1 0x00002aaaae33f2f4 in zgstrf (options=, A=0x7fffffc62390, drop_tol=, relax=, panel_size=10, etree=, work=, lwork=13223888, perm_c=0xdfd420, perm_r=0xc9c7d0, L=0x2aaab56b21b8, U=0x2aaab56b21d8, stat=0x7fffffc62370, info=0x7fffffc623bc) at zgstrf.c:310 #2 0x00002aaaae33bbf5 in newSciPyLUObject (A=0x7fffffc62450, diag_pivot_thresh=1, drop_tol=0, relax=1, panel_size=10, permc_spec=2, intype=) at _superluobject.c:382 #3 0x00002aaaae33a91a in Py_zgstrf (self=, args=, keywds=) at _zsuperlumodule.c:149 #4 0x00002aaaaac5496a in PyEval_EvalFrame (f=0xe4dbc0) at ceval.c:3547 #5 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae09b2d0, globals=, locals=, args=0x5, argcount=1, kws=0x9d1b78, kwcount=0, defs=0x2aaaae095668, defcount=5, closure=0x0) at ceval.c:2730 #6 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x9d19a0) at ceval.c:3640 #7 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x9bbe60) at ceval.c:3629 #8 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbedce0, globals=, locals=, args=0x2aaab5a449f8, argcount=2, kws=0xa267c0, kwcount=0, defs=0x2aaaabc05068, defcount=1, closure=0x0) at ceval.c:2730 #9 0x00002aaaaac0e9af in function_call (func=0x2aaaabc047d0, arg=0x2aaab5a449e0, kw=) at funcobject.c:548 #10 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #11 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0xb83530) at ceval.c:3824 #12 
0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbedd50, globals=, locals=, args=0x2aaab5a76e78, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #13 0x00002aaaaac0e9af in function_call (func=0x2aaaabc04848, arg=0x2aaab5a76e60, kw=) at funcobject.c:548 #14 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #15 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5a76e60, kw=0x0) at classobject.c:2431 #16 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, ---Type to continue, or q to quit--- kw=) at abstract.c:1751 #17 0x00002aaaaac5380d in PyEval_EvalFrame (f=0xb5b810) at ceval.c:3755 #18 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe25e0, globals=, locals=, args=0x2aaab5a81338, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaabc05ae8, defcount=1, closure=0x0) at ceval.c:2730 #19 0x00002aaaaac0e9af in function_call (func=0x2aaaabc08d70, arg=0x2aaab5a81320, kw=) at funcobject.c:548 #20 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #21 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5a81320, kw=0x0) at classobject.c:2431 #22 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #23 0x00002aaaaac33b0a in slot_tp_call (self=, args=0x2aaab56b0590, kwds=0x0) at typeobject.c:4526 #24 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #25 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x674100) at ceval.c:3755 #26 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbf47a0, globals=, locals=, args=0x2aaab5a71ad0, argcount=2, kws=0xd46290, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #27 0x00002aaaaac0e9af in function_call (func=0x2aaaabc060c8, arg=0x2aaab5a71ab8, kw=) at funcobject.c:548 #28 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #29 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0x626c00) at ceval.c:3824 #30 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbf4810, globals=, 
locals=, args=0x2aaab5a6ca40, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #31 0x00002aaaaac0e9af in function_call (func=0x2aaaabc06140, arg=0x2aaab5a6ca28, kw=) at funcobject.c:548 #32 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #33 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5a6ca28, kw=0x0) ---Type to continue, or q to quit--- at classobject.c:2431 #34 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #35 0x00002aaaaac33b0a in slot_tp_call (self=, args=0x2aaaaab28c50, kwds=0x0) at typeobject.c:4526 #36 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #37 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x5cfc30) at ceval.c:3755 #38 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x76cbb0) at ceval.c:3629 #39 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe2d50, globals=, locals=, args=0x5b8998, argcount=3, kws=0x5b89b0, kwcount=0, defs=0x2aaaabc07b60, defcount=2, closure=0x0) at ceval.c:2730 #40 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x5b87f0) at ceval.c:3640 #41 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab22c00, globals=, locals=, args=0x5cae20, argcount=2, kws=0x5cae30, kwcount=0, defs=0x2aaaadd64188, defcount=2, closure=0x0) at ceval.c:2730 #42 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x5cac90) at ceval.c:3640 #43 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaadd3ee30, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #44 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #45 0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503b50, locals=0x503b50, flags=) at pythonrun.c:1265 #46 0x00002aaaaac71bc7 in PyRun_InteractiveOneFlags (fp=, filename=0x2aaaaac95e73 "", flags=0x7fffffc64cf0) at pythonrun.c:762 #47 0x00002aaaaac71cbe in PyRun_InteractiveLoopFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", flags=0x7fffffc64cf0) 
at pythonrun.c:695 #48 0x00002aaaaac7221c in PyRun_AnyFileExFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", closeit=0, flags=0x7fffffc64cf0) at pythonrun.c:658 #49 0x00002aaaaac77b25 in Py_Main (argc=, argv=0x7fffffc66b12) at main.c:484 #50 0x00002aaaab603ced in __libc_start_main () from /lib64/libc.so.6 #51 0x00000000004006ea in _start () at start.S:113 #52 0x00007fffffc64d88 in ?? () From cookedm at physics.mcmaster.ca Fri Mar 3 16:09:59 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 03 Mar 2006 16:09:59 -0500 Subject: [SciPy-dev] Patch fixing problems in integrate/ode.py for banded systems In-Reply-To: <4408673E.5040704@material.ntnu.no> (Jesper Friis's message of "Fri, 03 Mar 2006 16:56:46 +0100") References: <4408673E.5040704@material.ntnu.no> Message-ID: Jesper Friis writes: > Here is a small patch (just 2 lines) that enters the size of the upper > and lower diagonals in the work arrays before calling DVODE. Without > this patch DVODE will return immediately, complaining about illegal input. I've applied it to svn. > Another minor problem with the interface to DVODE is that the function > calculating the Jacobian is expected to return a matrix of size NROWPD > by len(y), where NROWPD unfortunately is not provided as an argument to > the Jacobian function in the python interface. For trigonal systems > NROWPD is e.g. 4 and not 3 as one would expect. On the other hand is an > error message is printed when a matrix of the wrong size is returned, > so this problem is easy to fix... Open a ticket for this on our Trac wiki at http://projects.scipy.org/scipy/scipy/wiki so we don't lose track. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri Mar 3 16:12:30 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke)
Date: Fri, 03 Mar 2006 16:12:30 -0500
Subject: [SciPy-dev] Another segfault
In-Reply-To: (Nils Wagner's message of "Fri, 03 Mar 2006 20:47:26 +0100")
References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz>
Message-ID:

"Nils Wagner" writes:
> On a 64 bit machine scipy.test(1,10) results in a segfault
> The bt message differs from the bt on a 32 bit machine.
> For what reason ?
>
> Test whether the lu_solve command segfaults, as reported
> by NilsUse minimum degree ordering on A'+A.
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 16384 (LWP 5170)]
> 0x00002aaaae34d787 in zpivotL (jcol=0, u=1,
> usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60,
> iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0,
> stat=0xffffffffab80bb68) at zpivotL.c:121
> 121 perm_r[*pivrow] = jcol;

btw, I can reproduce this on my 64-bit machine (grr, can't run
scipy.test() right now because of it).

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From cookedm at physics.mcmaster.ca Fri Mar 3 16:25:39 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 03 Mar 2006 16:25:39 -0500
Subject: [SciPy-dev] Fortran type objects
Message-ID:

I think scipy should define some fortran scalar types: fint, fsingle,
fdouble, etc. One reason for this is that I believe (don't quote me) that
sizeof(INTEGER) == 4 on my 64-bit machine in g77. This means arrays passed
to Fortran code (through f2py, for instance) should have a dtype of 'i4',
not 'i8', which is what int would give.

From a documentation standpoint, this is also useful. I don't know,
however, how to code up a test for how large these types are.
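The C side of the mismatch can at least be seen with ctypes (the link to Fortran is an assumption here: g77's default INTEGER is generally taken to match C int, which stays 4 bytes under the LP64 model of 64-bit Unix, while C long grows to 8):

```python
import ctypes

# On LP64 platforms (64-bit Linux/Unix) 'int' stays 4 bytes while 'long'
# becomes 8.  If g77's default INTEGER tracks C 'int' (an assumption, not
# something this sketch can prove), arrays built with the platform default
# integer are twice as wide as the Fortran code expects -- the 'i4' vs
# 'i8' mismatch described above.
print("sizeof(int)  =", ctypes.sizeof(ctypes.c_int))
print("sizeof(long) =", ctypes.sizeof(ctypes.c_long))
```

Verifying the Fortran side would still need a tiny compiled test program, which is presumably what a build-time probe would do.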
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Fri Mar 3 16:29:56 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 14:29:56 -0700 Subject: [SciPy-dev] Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> Message-ID: <4408B554.6050902@ee.byu.edu> David M. Cooke wrote: >"Nils Wagner" writes: > > > >>On a 64 bit machine scipy.test(1,10) results in a segfault >>The bt message differs from the bt on a 32 bit machine. >>For what reason ? >> >>Test whether the lu_solve command segfaults, as reported >>by NilsUse minimum degree ordering on A'+A. >> >>Program received signal SIGSEGV, Segmentation fault. >>[Switching to Thread 16384 (LWP 5170)] >>0x00002aaaae34d787 in zpivotL (jcol=0, u=1, >>usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, >> iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, >>stat=0xffffffffab80bb68) at zpivotL.c:121 >>121 perm_r[*pivrow] = jcol; >> >> > >btw, I can reproduce this on my 64-bit machine (grr, can't run >scipy.test() right now because of it). > > I'm wondering if this has to do with the two integer arrays that make up the csc and csr representations of a sparse matrix. I'm looking for where they are checked to make sure they are the right type (need to be int). And I'm not finding it. 
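A check of the sort being searched for could look roughly like the following — a hypothetical helper mirroring the "must be of type cint" error that appears later in the thread, not the code that actually landed in scipy's sparse module:

```python
import numpy as np

def check_index_arrays(colptr, rowind):
    """Reject CSC/CSR index arrays whose dtype is not the C int that the
    SuperLU wrappers expect. Hypothetical sketch; names are illustrative."""
    for name, arr in (("colptr", colptr), ("rowind", rowind)):
        if arr.dtype != np.intc:
            raise TypeError("colptr and rowind must be of type cint "
                            "(%s is %s)" % (name, arr.dtype))

# C-int index arrays pass the check...
check_index_arrays(np.array([0, 1, 2], dtype=np.intc),
                   np.array([0, 1], dtype=np.intc))

# ...while the 8-byte default ints produced on a 64-bit machine are rejected.
try:
    check_index_arrays(np.array([0, 1, 2], dtype=np.int64),
                       np.array([0, 1], dtype=np.int64))
except TypeError:
    pass
```

On a 32-bit machine the default integer dtype already is the C int, which is why the missing check only bites on 64-bit systems.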
-Travis From nwagner at mecha.uni-stuttgart.de Fri Mar 3 17:07:24 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 23:07:24 +0100 Subject: [SciPy-dev] Another segfault In-Reply-To: <4408B554.6050902@ee.byu.edu> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> Message-ID: On Fri, 03 Mar 2006 14:29:56 -0700 Travis Oliphant wrote: > David M. Cooke wrote: > >>"Nils Wagner" writes: >> >> >> >>>On a 64 bit machine scipy.test(1,10) results in a >>>segfault >>>The bt message differs from the bt on a 32 bit machine. >>>For what reason ? >>> >>>Test whether the lu_solve command segfaults, as reported >>>by NilsUse minimum degree ordering on A'+A. >>> >>>Program received signal SIGSEGV, Segmentation fault. >>>[Switching to Thread 16384 (LWP 5170)] >>>0x00002aaaae34d787 in zpivotL (jcol=0, u=1, >>>usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, >>> iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, >>>stat=0xffffffffab80bb68) at zpivotL.c:121 >>>121 perm_r[*pivrow] = jcol; >>> >>> >> >>btw, I can reproduce this on my 64-bit machine (grr, >>can't run >>scipy.test() right now because of it). >> >> > > I'm wondering if this has to do with the two integer >arrays that make up > the csc and csr representations of a sparse matrix. I'm >looking for > where they are checked to make sure they are the right >type (need to be > int). And I'm not finding it. 
> > -Travis > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev No longer a segfault on my 64-bit machine, but ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 2518, in lu_factor diag_pivot_thresh, drop_tol, relax, panel_size) TypeError: colptr and rowind must be of type cint ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1351, in tocsc return self.tocoo().tocsc() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1347, in tocoo data, col, row = func(self.data, self.colind, self.indptr) ValueError: failed to create intent(cache|hide)|optional array-- must have defined dimensions but got (0,) ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 
2518, in lu_factor diag_pivot_thresh, drop_tol, relax, panel_size) TypeError: colptr and rowind must be of type cint ---------------------------------------------------------------------- Ran 1109 tests in 2.124s FAILED (errors=3) Cheers, Nils From oliphant at ee.byu.edu Fri Mar 3 17:14:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 15:14:04 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> Message-ID: <4408BFAC.1010706@ee.byu.edu> Nils Wagner wrote: >On Fri, 03 Mar 2006 14:29:56 -0700 > Travis Oliphant wrote: > > >>David M. Cooke wrote: >> >> >> >>>"Nils Wagner" writes: >>> >>> >>> >>> >>> >>>>On a 64 bit machine scipy.test(1,10) results in a >>>>segfault >>>>The bt message differs from the bt on a 32 bit machine. >>>>For what reason ? >>>> >>>>Test whether the lu_solve command segfaults, as reported >>>>by NilsUse minimum degree ordering on A'+A. >>>> >>>>Program received signal SIGSEGV, Segmentation fault. >>>>[Switching to Thread 16384 (LWP 5170)] >>>>0x00002aaaae34d787 in zpivotL (jcol=0, u=1, >>>>usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, >>>> iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, >>>>stat=0xffffffffab80bb68) at zpivotL.c:121 >>>>121 perm_r[*pivrow] = jcol; >>>> >>>> >>>> >>>> >>>btw, I can reproduce this on my 64-bit machine (grr, >>>can't run >>>scipy.test() right now because of it). >>> >>> >>> >>> >>I'm wondering if this has to do with the two integer >>arrays that make up >>the csc and csr representations of a sparse matrix. I'm >>looking for >>where they are checked to make sure they are the right >>type (need to be >>int). And I'm not finding it. 
>> >>-Travis >> >> >>_______________________________________________ >>Scipy-dev mailing list >>Scipy-dev at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-dev >> >> > > >No longer a segfault on my 64-bit machine, but >====================================================================== >ERROR: Test whether the lu_solve command segfaults, as >reported by Nils >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >line 255, in check_solve > xx = lu_factor(A.tocsc()).solve(r) > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >line 2518, in lu_factor > diag_pivot_thresh, drop_tol, relax, panel_size) >TypeError: colptr and rowind must be of type cint > >====================================================================== >ERROR: Test whether the lu_solve command segfaults, as >reported by Nils >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >line 255, in check_solve > xx = lu_factor(A.tocsc()).solve(r) > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >line 1351, in tocsc > return self.tocoo().tocsc() > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >line 1347, in tocoo > data, col, row = func(self.data, self.colind, >self.indptr) >ValueError: failed to create intent(cache|hide)|optional >array-- must have defined dimensions but got (0,) > >====================================================================== >ERROR: Test whether the lu_solve command segfaults, as >reported by Nils >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >line 255, in check_solve > xx = lu_factor(A.tocsc()).solve(r) > 
File >"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >line 2518, in lu_factor > diag_pivot_thresh, drop_tol, relax, panel_size) >TypeError: colptr and rowind must be of type cint > >---------------------------------------------------------------------- >Ran 1109 tests in 2.124s > >FAILED (errors=3) > > > Good. At least two of those errors are due to the explicit check I put in there. Now, we just have to figure out why the index pointers are of the wrong type (they need to be of type int). Notice, that this means we are limiting our matrix sizes to 2^31 x 2^31 even for 64-bit machines. -Travis From nwagner at mecha.uni-stuttgart.de Fri Mar 3 17:31:35 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 23:31:35 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: Another segfault In-Reply-To: <4408BFAC.1010706@ee.byu.edu> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> Message-ID: On Fri, 03 Mar 2006 15:14:04 -0700 Travis Oliphant wrote: > Nils Wagner wrote: > >>On Fri, 03 Mar 2006 14:29:56 -0700 >> Travis Oliphant wrote: >> >> >>>David M. Cooke wrote: >>> >>> >>> >>>>"Nils Wagner" writes: >>>> >>>> >>>> >>>> >>>> >>>>>On a 64 bit machine scipy.test(1,10) results in a >>>>>segfault >>>>>The bt message differs from the bt on a 32 bit machine. >>>>>For what reason ? >>>>> >>>>>Test whether the lu_solve command segfaults, as reported >>>>>by NilsUse minimum degree ordering on A'+A. >>>>> >>>>>Program received signal SIGSEGV, Segmentation fault. 
>>>>>[Switching to Thread 16384 (LWP 5170)] >>>>>0x00002aaaae34d787 in zpivotL (jcol=0, u=1, >>>>>usepr=0x7fffffc62264, perm_r=0xc9c7d0, iperm_r=0xbd8c60, >>>>> iperm_c=0xdb55c0, pivrow=0x7fffffc62270, Glu=0x0, >>>>>stat=0xffffffffab80bb68) at zpivotL.c:121 >>>>>121 perm_r[*pivrow] = jcol; >>>>> >>>>> >>>>> >>>>> >>>>btw, I can reproduce this on my 64-bit machine (grr, >>>>can't run >>>>scipy.test() right now because of it). >>>> >>>> >>>> >>>> >>>I'm wondering if this has to do with the two integer >>>arrays that make up >>>the csc and csr representations of a sparse matrix. I'm >>>looking for >>>where they are checked to make sure they are the right >>>type (need to be >>>int). And I'm not finding it. >>> >>>-Travis >>> >>> >>>_______________________________________________ >>>Scipy-dev mailing list >>>Scipy-dev at scipy.net >>>http://www.scipy.net/mailman/listinfo/scipy-dev >>> >>> >> >> >>No longer a segfault on my 64-bit machine, but >>====================================================================== >>ERROR: Test whether the lu_solve command segfaults, as >>reported by Nils >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >>line 255, in check_solve >> xx = lu_factor(A.tocsc()).solve(r) >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >>line 2518, in lu_factor >> diag_pivot_thresh, drop_tol, relax, panel_size) >>TypeError: colptr and rowind must be of type cint >> >>====================================================================== >>ERROR: Test whether the lu_solve command segfaults, as >>reported by Nils >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >>line 255, in check_solve >> xx = lu_factor(A.tocsc()).solve(r) >> File 
>>"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >>line 1351, in tocsc >> return self.tocoo().tocsc() >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >>line 1347, in tocoo >> data, col, row = func(self.data, self.colind, >>self.indptr) >>ValueError: failed to create intent(cache|hide)|optional >>array-- must have defined dimensions but got (0,) >> >>====================================================================== >>ERROR: Test whether the lu_solve command segfaults, as >>reported by Nils >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", >>line 255, in check_solve >> xx = lu_factor(A.tocsc()).solve(r) >> File >>"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", >>line 2518, in lu_factor >> diag_pivot_thresh, drop_tol, relax, panel_size) >>TypeError: colptr and rowind must be of type cint >> >>---------------------------------------------------------------------- >>Ran 1109 tests in 2.124s >> >>FAILED (errors=3) >> >> >> > > Good. At least two of those errors are due to the >explicit check I put > in there. Now, we just have to figure out why the index >pointers are of > the wrong type (they need to be of type int). Notice, >that this means > we are limiting our matrix sizes to > > 2^31 x 2^31 > > even for 64-bit machines. > > -Travis > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Travis, On a 32 bit machine I obtain with gdb --exec=python GNU gdb 6.3 Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. 
This GDB was configured as "i586-suse-linux". (gdb) r /usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py Starting program: /usr/local/bin/python /usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py snip Solve: single precision complex: Solve: double precision complex: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 1076175008 (LWP 31853)] 0x08081cdb in ?? () (gdb) bt #0 0x08081cdb in ?? () #1 0x4265913c in ?? () from /usr/local/lib/python2.4/site-packages/scipy/sparse/_csuperlu.so #2 0x081b7118 in ?? () #3 0xbfffe5b8 in ?? () #4 0x420c2fa9 in superlu_python_module_free (ptr=0x4038c34c) at _superlu_utils.c:61 #5 0x420c2fa9 in superlu_python_module_free (ptr=0x81b7118) at _superlu_utils.c:61 #6 0x420d734f in Destroy_SuperMatrix_Store (A=0x4038c34c) at util.c:56 #7 0x420c2db6 in Py_cgssv (self=0x0, args=0x4029de2c, kwdict=0x0) at _csuperlumodule.c:99 #8 0x0811eb56 in ?? () #9 0x00000000 in ?? () #10 0x4029de2c in ?? () #11 0x00000000 in ?? () #12 0x00000003 in ?? () #13 0xffffffff in ?? () #14 0x405b8240 in ?? () #15 0x420c2a60 in Py_cgstrf () at _csuperlumodule.c:163 #16 0x080c74ed in ?? () #17 0x4037f3ac in ?? () #18 0x4029de2c in ?? () #19 0x00000000 in ?? () #20 0x0814d940 in ?? () #21 0x0000000b in ?? () #22 0x2a080909 in ?? () #23 0x4035735c in ?? () #24 0x4038c40c in ?? () #25 0x4064bd60 in _PyCLongDouble_ArrFuncs () from /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so Can you reproduce this ? 
Nils From oliphant at ee.byu.edu Fri Mar 3 17:39:14 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 15:39:14 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> Message-ID: <4408C592.9040704@ee.byu.edu> Nils Wagner wrote: > >Travis, > >On a 32 bit machine I obtain with > >gdb --exec=python >GNU gdb 6.3 >Copyright 2004 Free Software Foundation, Inc. >GDB is free software, covered by the GNU General Public >License, and you are >welcome to change it and/or distribute copies of it under >certain conditions. >Type "show copying" to see the conditions. >There is absolutely no warranty for GDB. Type "show >warranty" for details. >This GDB was configured as "i586-suse-linux". >(gdb) r >/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >Starting program: /usr/local/bin/python >/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py > >snip >Solve: single precision complex: >Solve: double precision complex: > > I'm not sure what you are doing. Is this just running Python? Presumably it is picking up some PYTHONSTARTUP file and running it. But, I have no idea what that is on your system. Doing exactly what you do works for me. What is actually being run to cause the segfault? What directory are you running from? Have you re-built scipy after upgrading numpy?
-Travis From nwagner at mecha.uni-stuttgart.de Fri Mar 3 17:45:12 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 23:45:12 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: <4408C592.9040704@ee.byu.edu> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> Message-ID: On Fri, 03 Mar 2006 15:39:14 -0700 Travis Oliphant wrote: > Nils Wagner wrote: > >> >>Travis, >> >>On a 32 bit machine I obtain with >> >>gdb --exec=python >>GNU gdb 6.3 >>Copyright 2004 Free Software Foundation, Inc. >>GDB is free software, covered by the GNU General Public >>License, and you are >>welcome to change it and/or distribute copies of it under >>certain conditions. >>Type "show copying" to see the conditions. >>There is absolutely no warranty for GDB. Type "show >>warranty" for details. >>This GDB was configured as "i586-suse-linux". >>(gdb) r >>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>Starting program: /usr/local/bin/python >>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >> >>snip >>Solve: single precision complex: >>Solve: double precision complex: >> >> > > I'm not sure what you are doing. Is this just running >Pyhthon? > Presumably it is picking up some PYTHONSTARTUP file and >running it. > But, I have no idea what that is on your system. > > Doing exactly what you do works for me. > > What is actually being run to cause the segfault? > python /usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py > What directory are you runing from? > > Have you re-built scipy after upgrading numpy? Yes. 
Numpy version 0.9.6.2194 Scipy version 0.4.7.1635 Nils > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From nwagner at mecha.uni-stuttgart.de Fri Mar 3 17:52:52 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 23:52:52 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> Message-ID: On Fri, 03 Mar 2006 23:45:12 +0100 "Nils Wagner" wrote: > On Fri, 03 Mar 2006 15:39:14 -0700 > Travis Oliphant wrote: >> Nils Wagner wrote: >> >>> >>>Travis, >>> >>>On a 32 bit machine I obtain with >>> >>>gdb --exec=python >>>GNU gdb 6.3 >>>Copyright 2004 Free Software Foundation, Inc. >>>GDB is free software, covered by the GNU General Public >>>License, and you are >>>welcome to change it and/or distribute copies of it under >>>certain conditions. >>>Type "show copying" to see the conditions. >>>There is absolutely no warranty for GDB. Type "show >>>warranty" for details. >>>This GDB was configured as "i586-suse-linux". >>>(gdb) r >>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>>Starting program: /usr/local/bin/python >>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>> >>>snip >>>Solve: single precision complex: >>>Solve: double precision complex: >>> >>> >> >> I'm not sure what you are doing. Is this just running >>Pyhthon? >> Presumably it is picking up some PYTHONSTARTUP file and >>running it. >> But, I have no idea what that is on your system. >> >> Doing exactly what you do works for me. 
>> >> What is actually being run to cause the segfault? >> > python > /usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py > >> What directory are you runing from? >> >> Have you re-built scipy after upgrading numpy? > > Yes. > Numpy version 0.9.6.2194 > Scipy version 0.4.7.1635 > > > Nils > > >> >> -Travis >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-dev > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Here is a backtrace (64 bit machine) gdb python r /usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 16384 (LWP 10303)] PyDict_DelItem (op=0x0, key=0x577378) at dictobject.c:579 579 if (!PyDict_Check(op)) { (gdb) bt #0 PyDict_DelItem (op=0x0, key=0x577378) at dictobject.c:579 #1 0x00002aaaae9f7bff in superlu_python_module_free (ptr=0x7fffffe424d0) at _superlu_utils.c:61 #2 0x00002aaaae9f75b2 in Py_cgssv (self=, args=, kwdict=) at _csuperlumodule.c:99 #3 0x00002aaaaac5496a in PyEval_EvalFrame (f=0x53efd0) at ceval.c:3547 #4 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbfe490, globals=, locals=, args=0x1, argcount=2, kws=0x506070, kwcount=0, defs=0x2aaaab903568, defcount=1, closure=0x0) at ceval.c:2730 #5 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x505ed0) at ceval.c:3640 #6 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbfe6c0, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #7 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #8 0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503db0, locals=0x503db0, flags=) at pythonrun.c:1265 #9 0x00002aaaaac71843 in PyRun_SimpleFileExFlags (fp=, filename=0x7fffffe441c7 
"/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", closeit=1, flags=0x7fffffe42b80) at pythonrun.c:860 #10 0x00002aaaaac77b25 in Py_Main (argc=, argv=0x7fffffe42c28) at main.c:484 #11 0x00002aaaab603ced in __libc_start_main () from /lib64/libc.so.6 #12 0x00000000004006ea in _start () at start.S:113 Is this output useful ? Nils From oliphant at ee.byu.edu Fri Mar 3 18:47:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 16:47:26 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> Message-ID: <4408D58E.5080007@ee.byu.edu> Nils Wagner wrote: >> >>Nils >> >> I was able to reproduce the problem. Several issues. Basically, the error-handling in certain segments of _superlu module was bad (things being called that shouldn't be called and a couple of missing checks for NULL). Then the error that was being caught badly was a result of a corner-case incompatibility in the PyArray_CopyFromObject function which I've fixed in NumPy now. So, now I'm not getting segfaults (but still errors that I'm trying to figure out). Thanks very much for the tests. -Travis From zpincus at stanford.edu Fri Mar 3 20:28:03 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 3 Mar 2006 17:28:03 -0800 Subject: [SciPy-dev] numpy.linalg prevents use of scipy.linalg? Message-ID: Hi folks, I can't seem to use any of the functions in scipy.linalg because numpy defines its own linalg which shadows that of scipy! Specifically, scipy.linalg defines 'norm' (in linalg/basic.py), and numpy doesn't. (This among other differences, I assume.) 
In [1]: import scipy In [2]: scipy.linalg.norm AttributeError: 'module' object has no attribute 'norm' In [3]: scipy.linalg.__file__ Out[3]: '[...]/python2.4/site-packages/numpy/linalg/__init__.pyc' ^^^^^^^ Now, look what happens when we import scipy.linalg directly: In [1]: import scipy.linalg In [2]: scipy.linalg.norm Out[2]: <function norm at ...> In [3]: scipy.linalg.__file__ Out[3]: '[...]/python2.4/site-packages/scipy/linalg/__init__.pyc' ^^^^^^^ Something needs to be done to fix this -- but what? Scipy historically imports * from numpy. Scipy historically has a linalg module. The new thing is that numpy has a linalg module, too, which is loaded by default into the numpy namespace. (Compared to Numeric's LinearAlgebra module.) The only thing I can think of is to fold all of scipy.linalg into numpy.linalg, and remove the former. This provides for backwards compatibility for everyone, but perhaps a bit of work. Zach From oliphant at ee.byu.edu Fri Mar 3 20:49:15 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 18:49:15 -0700 Subject: [SciPy-dev] numpy.linalg prevents use of scipy.linalg? In-Reply-To: References: Message-ID: <4408F21B.7010609@ee.byu.edu> Zachary Pincus wrote: >Hi folks, > >I can't seem to use any of the functions in scipy.linalg because >numpy defines its own linalg which shadows that of scipy! > >Specifically, scipy.linalg defines 'norm' (in linalg/basic.py), and >numpy doesn't. (This among other differences, I assume.) > >In [1]: import scipy >In [2]: scipy.linalg.norm >AttributeError: 'module' object has no attribute 'norm' > > What version are you using? This was a problem in the __init__ file that was fixed. Just go to [...]/python2.4/site-packages/scipy/__init__.py and right after the line with 'del lib' enter a line with 'del linalg' Things should work much better after that.
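The mechanism behind both the bug and the fix can be reproduced without scipy at all: a star-import in scipy/__init__.py leaves numpy's `linalg` bound as an attribute of the scipy package, and `del linalg` removes that binding so a later `import scipy.linalg` attaches scipy's own subpackage instead. A toy sketch with stand-in module objects (hypothetical, not the real packages):

```python
import types

# Stand-ins for the packages involved (illustration only).
numpy_linalg = types.ModuleType("numpy.linalg")       # has no `norm`
scipy_linalg = types.ModuleType("scipy.linalg")
scipy_linalg.norm = lambda x: max(abs(v) for v in x)  # scipy's extra function

scipy_pkg = types.ModuleType("scipy")
scipy_pkg.linalg = numpy_linalg              # effect of `from numpy import *`
assert not hasattr(scipy_pkg.linalg, "norm") # the reported AttributeError

del scipy_pkg.linalg                         # the `del linalg` fix
scipy_pkg.linalg = scipy_linalg              # what `import scipy.linalg` then binds
assert scipy_pkg.linalg.norm([3, -4]) == 4
```

This is also why `import scipy.linalg` worked all along: the import machinery rebinds the subpackage on the parent, clobbering the numpy module that the star-import left there.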
-Travis From zpincus at stanford.edu Fri Mar 3 21:00:42 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 3 Mar 2006 18:00:42 -0800 Subject: [SciPy-dev] numpy.linalg prevents use of scipy.linalg? In-Reply-To: <4408F21B.7010609@ee.byu.edu> References: <4408F21B.7010609@ee.byu.edu> Message-ID: <2B470833-497C-47BF-A3A5-D8EDC11BF972@stanford.edu> > What version are you using? This was a problem in the __init__ file > that was fixed. Sorry, forgot to mention that. This was with scipy 0.4.6 and numpy 0.9.5. I haven't been keeping up with the svn. > Just go to [...]/python2.4/site-packages/scipy/__init__.py > > and right after the line with 'del lib' enter a line with 'del > linalg' > > Things should work much better after that. Thanks, Zach From robert.kern at gmail.com Fri Mar 3 20:47:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 03 Mar 2006 19:47:42 -0600 Subject: [SciPy-dev] numpy.linalg prevents use of scipy.linalg? In-Reply-To: References: Message-ID: Zachary Pincus wrote: > Hi folks, > > I can't seem to use any of the functions in scipy.linalg because > numpy defines its own linalg which shadows that of scipy! I believe I've already fixed this. http://projects.scipy.org/scipy/scipy/changeset/1617 What version of scipy are you using? Before reporting a bug, please make sure it exists in the current SVN revision. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From oliphant at ee.byu.edu Fri Mar 3 21:33:59 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 19:33:59 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> Message-ID: <4408FC97.8030509@ee.byu.edu> Nils Wagner wrote: >On Fri, 03 Mar 2006 23:45:12 +0100 > "Nils Wagner" wrote: > > >>On Fri, 03 Mar 2006 15:39:14 -0700 >> Travis Oliphant wrote: >> >> >>>Nils Wagner wrote: >>> >>> >>> >>>> >>>>Travis, >>>> >>>>On a 32 bit machine I obtain with >>>> >>>>gdb --exec=python >>>>GNU gdb 6.3 >>>>Copyright 2004 Free Software Foundation, Inc. >>>>GDB is free software, covered by the GNU General Public >>>>License, and you are >>>>welcome to change it and/or distribute copies of it under >>>>certain conditions. >>>>Type "show copying" to see the conditions. >>>>There is absolutely no warranty for GDB. Type "show >>>>warranty" for details. >>>>This GDB was configured as "i586-suse-linux". >>>>(gdb) r >>>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>>>Starting program: /usr/local/bin/python >>>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>>> >>>>snip >>>>Solve: single precision complex: >>>>Solve: double precision complex: >>>> >>>> This should work now. Give it a try. 
-Travis From nwagner at mecha.uni-stuttgart.de Sat Mar 4 02:31:46 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 04 Mar 2006 08:31:46 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: <4408FC97.8030509@ee.byu.edu> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> <4408FC97.8030509@ee.byu.edu> Message-ID: On Fri, 03 Mar 2006 19:33:59 -0700 Travis Oliphant wrote: > Nils Wagner wrote: > >>On Fri, 03 Mar 2006 23:45:12 +0100 >> "Nils Wagner" wrote: >> >> >>>On Fri, 03 Mar 2006 15:39:14 -0700 >>> Travis Oliphant wrote: >>> >>> >>>>Nils Wagner wrote: >>>> >>>> >>>> >>>>> >>>>>Travis, >>>>> >>>>>On a 32 bit machine I obtain with >>>>> >>>>>gdb --exec=python >>>>>GNU gdb 6.3 >>>>>Copyright 2004 Free Software Foundation, Inc. >>>>>GDB is free software, covered by the GNU General Public >>>>>License, and you are >>>>>welcome to change it and/or distribute copies of it under >>>>>certain conditions. >>>>>Type "show copying" to see the conditions. >>>>>There is absolutely no warranty for GDB. Type "show >>>>>warranty" for details. >>>>>This GDB was configured as "i586-suse-linux". >>>>>(gdb) r >>>>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>>>>Starting program: /usr/local/bin/python >>>>>/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py >>>>> >>>>>snip >>>>>Solve: single precision complex: >>>>>Solve: double precision complex: >>>>> >>>>> > > This should work now. Give it a try. 
> > -Travis > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Hi Travis, No longer any segfault on 32/64 bit systems. w.r.t. to sparse.py. Thank you very much !! But on 64 bit systems scipy.test(1,10) still results in ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 2521, in lu_factor diag_pivot_thresh, drop_tol, relax, panel_size) TypeError: colptr and rowind must be of type cint ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1353, in tocsc return self.tocoo().tocsc() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1349, in tocoo data, col, row = func(self.data, self.colind, self.indptr) ValueError: failed to create intent(cache|hide)|optional array-- must have defined dimensions but got (0,) ====================================================================== ERROR: Test whether the lu_solve command segfaults, as reported by Nils ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 255, in check_solve 
xx = lu_factor(A.tocsc()).solve(r) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 2521, in lu_factor diag_pivot_thresh, drop_tol, relax, panel_size) TypeError: colptr and rowind must be of type cint ---------------------------------------------------------------------- Ran 1109 tests in 2.220s FAILED (errors=3) Nils From oliphant.travis at ieee.org Sat Mar 4 03:58:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 04 Mar 2006 01:58:21 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> <4408FC97.8030509@ee.byu.edu> Message-ID: <440956AD.1000807@ieee.org> Nils Wagner wrote: > > Hi Travis, > > No longer any segfault on 32/64 bit systems. w.r.t. to > sparse.py. Thank you very much !! > But on 64 bit systems scipy.test(1,10) still results > in > Try it now. I hadn't tried to fix those 64-bit issues beyond the segfaulting. Hopefully my recent fixes will improve the ERROR situation. 
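For reference, the "colptr and rowind must be of type cint" error above is a 64-bit index-dtype mismatch: the sparse index arrays were 64-bit integers while the wrapped C code expected 32-bit ints. A minimal sketch using the modern scipy.sparse attribute names (`indices`/`indptr` are today's equivalents of the old `rowind`/`colptr`; current SuperLU wrappers accept 64-bit indices, so the downcast is purely illustrative of the mismatch being discussed):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Build a small CSC matrix; on a 64-bit platform the index
# arrays may be created as int64.
A = csc_matrix(np.eye(4))

# Downcast the index arrays to 32-bit ints ("cint"), which is
# what the 2006-era SuperLU wrappers required.
A.indices = A.indices.astype(np.int32)
A.indptr = A.indptr.astype(np.int32)

# Factor and solve, mirroring the lu_factor(...).solve(r) test.
x = splu(A).solve(np.ones(4))
```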
-Travis From nwagner at mecha.uni-stuttgart.de Sat Mar 4 04:13:30 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 04 Mar 2006 10:13:30 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: <440956AD.1000807@ieee.org> References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> <4408FC97.8030509@ee.byu.edu> <440956AD.1000807@ieee.org> Message-ID: On Sat, 04 Mar 2006 01:58:21 -0700 Travis Oliphant wrote: > Nils Wagner wrote: >> >> Hi Travis, >> >> No longer any segfault on 32/64 bit systems. w.r.t. to >> sparse.py. Thank you very much !! >> But on 64 bit systems scipy.test(1,10) still results >> in >> > > Try it now. I hadn't tried to fix those 64-bit issues >beyond the > segfaulting. Hopefully my recent fixes will improve >the ERROR situation. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Great ! Thank you very much ! Ran 1109 tests in 2.130s OK >>> scipy.__version__ '0.4.7.1639' Nils From zpincus at stanford.edu Sat Mar 4 04:41:48 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sat, 4 Mar 2006 01:41:48 -0800 Subject: [SciPy-dev] [Numpy-discussion] Re: numpy.linalg prevents use of scipy.linalg? In-Reply-To: References: Message-ID: > What version of scipy are you using? Before reporting a bug, please > make sure it > exists in the current SVN revision. Sorry I didn't include the version (0.4.6, by the way) in my first email. My oversight. As to your latter statement, are you sure about that? 
You're only interested in getting bug reports from people who rebuild scipy from the svn every night, or who have tracked down the problem to the line so they know where in the svn to see if it's fixed? If that's the case, it's certainly your prerogative. Zach From robert.kern at gmail.com Sat Mar 4 05:04:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 04 Mar 2006 04:04:16 -0600 Subject: [SciPy-dev] [Numpy-discussion] Re: numpy.linalg prevents use of scipy.linalg? In-Reply-To: References: Message-ID: <44096620.8080505@gmail.com> Zachary Pincus wrote: >>What version of scipy are you using? Before reporting a bug, please >>make sure it >>exists in the current SVN revision. > > Sorry I didn't include the version (0.4.6, by the way) in my first > email. My oversight. > > As to your latter statement, are you sure about that? You're only > interested in getting bug reports from people who rebuild scipy from > the svn every night, or who have tracked down the problem to the line > so they know where in the svn to see if it's fixed? No, I don't expect everyone to rebuild scipy every day. However, when you think you've found a bug, it is standard procedure for you to build the most recent version to test it and see if the bug is still there. So yes, I do expect people to build from SVN when they're about to report a bug, but no more often. Locating the line of code causing the bug doesn't enter into it. Just try the operation that failed under the previous version. > If that's the case, it's certainly your prerogative. It's just standard operating procedure for many projects. E.g. http://bugs.php.net/how-to-report.php http://tortoisesvn.tigris.org/reportbugs.html -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From schofield at ftw.at Sat Mar 4 08:14:49 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 04 Mar 2006 14:14:49 +0100 Subject: [SciPy-dev] [Numpy-discussion] Re: numpy.linalg prevents use of scipy.linalg? In-Reply-To: <44096620.8080505@gmail.com> References: <44096620.8080505@gmail.com> Message-ID: <440992C9.6090801@ftw.at> Robert Kern wrote: > Zachary Pincus wrote: > >>> What version of scipy are you using? Before reporting a bug, please >>> make sure it >>> exists in the current SVN revision. >>> >> Sorry I didn't include the version (0.4.6, by the way) in my first >> email. My oversight. >> >> As to your latter statement, are you sure about that? You're only >> interested in getting bug reports from people who rebuild scipy from >> the svn every night, or who have tracked down the problem to the line >> so they know where in the svn to see if it's fixed? >> > > No, I don't expect everyone to rebuild scipy every day. However, when you think > you've found a bug, it is standard procedure for you to build the most recent > version to test it and see if the bug is still there. So yes, I do expect people > to build from SVN when they're about to report a bug, but no more often. > I don't expect this. I think a bug report against the latest released version is better than no bug report. If we think we've seen and fixed the bug in SVN, it takes us less time to reply "Hmmm, check this with the latest SVN" than it would take Zachary or another would-be bug reporter to check out and build the whole tree from SVN, just on the off-chance we've fixed it since then. This lowers the barrier to entry for helpers. Reading bug reports about the current released version can also be a helpful reminder to us of which bugs are still outstanding in the most recent release. 
-- Ed From oliphant.travis at ieee.org Sat Mar 4 12:07:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 04 Mar 2006 10:07:16 -0700 Subject: [SciPy-dev] [Numpy-discussion] Re: numpy.linalg prevents use of scipy.linalg? In-Reply-To: <440992C9.6090801@ftw.at> References: <44096620.8080505@gmail.com> <440992C9.6090801@ftw.at> Message-ID: <4409C944.1020806@ieee.org> Ed Schofield wrote: >> No, I don't expect everyone to rebuild scipy every day. However, when you think >> you've found a bug, it is standard procedure for you to build the most recent >> version to test it and see if the bug is still there. So yes, I do expect people >> to build from SVN when they're about to report a bug, but no more often. >> >> > > I don't expect this. I think a bug report against the latest released > version is better than no bug report. If we think we've seen and fixed > the bug in SVN, it takes us less time to reply "Hmmm, check this with > the latest SVN" than it would take Zachary or another would-be bug > reporter to check out and build the whole tree from SVN, just on the > off-chance we've fixed it since then. This lowers the barrier to entry > for helpers. Reading bug reports about the current released version can > also be a helpful reminder to us of which bugs are still outstanding in > the most recent release. > > I don't want potential bug reports not to get filed, either. This is another advantage of the ticket system on the trac pages. The bug-reporter could just check that page to see if the problem has been fixed. Alternatively, the bug-reporter can do a search through the newsgroup to see if the problem has been talked about and reported as fixed. I think requiring all bug-reporters to build from SVN is a bit of a barrier that we don't want to push at this stage. Especially, when there may be many little bugs lurking from the transition. 
-Travis From nwagner at mecha.uni-stuttgart.de Sat Mar 4 12:31:13 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 04 Mar 2006 18:31:13 +0100 Subject: [SciPy-dev] Bug in linalg.cgs Message-ID: x0,info = linalg.cgs(A,r) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 501, in cgs work[slice2] += sclr1*matvec(work[slice1]) ValueError: invalid return array shape >>> A matrix([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) >>> r array([ 0.17858324, 0.42785877, 0.48033897, 0.85571193]) >>> help (linalg.cgs) >>> shape(r) (4,) >>> shape(A) (4, 4) >>> type(A) >>> type(r) From nwagner at mecha.uni-stuttgart.de Sat Mar 4 12:37:46 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 04 Mar 2006 18:37:46 +0100 Subject: [SciPy-dev] Iterative solvers cannot handle complex rhs Message-ID: x0,info = linalg.cgs(A,r) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 474, in cgs b = sb.asarray(b,typ) File "/usr/local/lib/python2.4/site-packages/numpy/core/numeric.py", line 74, in asarray return array(a, dtype, copy=False, fortran=fortran, ndmin=ndmin) TypeError: array cannot be safely cast to required type >>> type(A) >>> shape(A) (4, 4) >>> A <4x4 sparse matrix of type '' with 4 stored elements (space for 100) in Compressed Sparse Column format> >>> r array([ 0.87207973+0.45474362j, 0.41920682+0.12267298j, 0.59096886+0.53802559j, 0.3652316 +0.86013843j]) >>> type(r) >>> shape(r) (4,) Nils From nwagner at mecha.uni-stuttgart.de Sat Mar 4 12:42:48 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 04 Mar 2006 18:42:48 +0100 Subject: [SciPy-dev] KeyError: ('d', 'l') Message-ID: x0,info = linalg.cgs(A,r) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 465, in cgs typ = _coerce_rules[b.dtype.char,atyp] KeyError: ('d', 'l') >>> A array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) >>> r array([ 0.61542588, 
0.53775049, 0.29953529, 0.50247666]) >>> type(A) A is defined as A = identity(4) From loredo at astro.cornell.edu Sun Mar 5 20:08:36 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Sun, 5 Mar 2006 20:08:36 -0500 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy Message-ID: <1141607316.440b8b9494825@astrosun2.astro.cornell.edu> Hi folks, I asked about this some time ago and was told there was no straightforward way to accomplish it, but that was over a year ago and before the recent numpy/scipy split, etc.. So I thought I'd ask again. I'm writing a package with several C and Fortran extensions, some of which call functions or subroutines that do things already done by Scipy routines, e.g., evaluate a gamma function, perform an LU decomposition, etc.. There are repeated calls, in the course of a more complex calculation, and the most sensible way to do the calculations would be to call Scipy's library routines directly, and not go back through Python. In some cases as a placeholder I have a version derived from Numerical Recipes, and this is a no-no as far as public distribution is concerned. I'd like to be able to call & link to the C or Fortran libraries that Scipy is accessing, but I need to do it in a way that will build and operate reliably across the various platforms Scipy builds on. As specific examples, I'd like to access functions in cephes/gamma.c, and lapack routines for LU decomposition and determinant calcuation, e.g., DGETRF, DGBTRF or DGTTRF. I suspect this isn't hard for gamma.c, but since Scipy is sometimes built with BLAS/LAPACK, and sometimes with ATLAS, I suspect it's harder for the LU decomposition. So... is there a way to do this portably? If so, is it documented somewhere (example code and an example setup.py)? Are there examples I could look at? Do some particular Scipy modules provide good examples of this (which ones)? 
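As a sketch of what is reachable from Python with a present-day scipy (module paths are today's `scipy.linalg.lapack` and `scipy.special`, not the 2006 `flapack`/cephes layout): this is the Python-level route rather than the C-level linkage being asked about, but the `_cpointer` attribute that f2py attaches to each wrapped routine is the raw function pointer a C extension could consume.

```python
import numpy as np
from scipy.linalg import lapack
from scipy.special import gamma

# LU factorization via the f2py-wrapped LAPACK routine DGETRF.
a = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv, info = lapack.dgetrf(a)  # info == 0 means success

# f2py-wrapped routines also expose the underlying C function
# pointer, usable from a C extension without going through Python.
ptr = lapack.dgetrf._cpointer

# The gamma function (historically from cephes): gamma(5) = 4! = 24.
val = gamma(5.0)
```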
Thanks for any pointers, Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Sun Mar 5 20:36:25 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 05 Mar 2006 19:36:25 -0600 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy In-Reply-To: <1141607316.440b8b9494825@astrosun2.astro.cornell.edu> References: <1141607316.440b8b9494825@astrosun2.astro.cornell.edu> Message-ID: <440B9219.1020107@gmail.com> Tom Loredo wrote: > Hi folks, > > I asked about this some time ago and was told there was no > straightforward way to accomplish it, but that was over a > year ago and before the recent numpy/scipy split, etc.. So > I thought I'd ask again. > > I'm writing a package with several C and Fortran extensions, > some of which call functions or subroutines that do things > already done by Scipy routines, e.g., evaluate a gamma function, > perform an LU decomposition, etc.. There are repeated calls, > in the course of a more complex calculation, and the most > sensible way to do the calculations would be to call Scipy's library > routines directly, and not go back through Python. In some cases as > a placeholder I have a version derived from Numerical Recipes, and > this is a no-no as far as public distribution is concerned. I'd like > to be able to call & link to the C or Fortran libraries that Scipy > is accessing, but I need to do it in a way that will build > and operate reliably across the various platforms Scipy builds on. > > As specific examples, I'd like to access functions in > cephes/gamma.c, and lapack routines for LU decomposition > and determinant calcuation, e.g., DGETRF, DGBTRF or DGTTRF. > > I suspect this isn't hard for gamma.c, but since Scipy > is sometimes built with BLAS/LAPACK, and sometimes with ATLAS, > I suspect it's harder for the LU decomposition. > > So... is there a way to do this portably? 
If so, is it > documented somewhere (example code and an example setup.py)? > Are there examples I could look at? Do some particular > Scipy modules provide good examples of this (which ones)? I don't think anything has changed since the last time you asked. Sorry. To do this reliably across platforms, you have to do the equivalent of numpy's import_array() trick. The "exporting" extension has to be written specifically to do it, and nothing in scipy has been. Possibly, some should. It occurs to me that it might be possible to modify f2py to add the necessary function tables to the extension module. That would be a neat trick. Also, I remember asking Pearu to expose the raw function pointer in the fortran object wrapper. The idea was to be able to allow f2py callback functions to be f2py'ed subroutines themselves, and have everything go really, really fast. In [6]: linalg.flapack.dgetrf._cpointer Out[6]: I believe this actually gives you the function pointer to DGETRF_. Or you could just accept the fortran object itself, and access the function pointer from the PyFortranObject structure itself. Ah, now I've rambled myself to the right answer. Don't pay attention to anything but the last sentence of the previous paragraph. AFAICT, this is a new approach, so you can be our guinea pig. For gamma, I recommend just yoinking the routine from specfun. It's tiny. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.leslie at gmail.com Mon Mar 6 01:12:40 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 6 Mar 2006 17:12:40 +1100 Subject: [SciPy-dev] broken scipy.optimize.anneal Message-ID: Hi all, I posted to scipy-users earlier today asking about using multiple parameters with scipy.optimize.anneal.
After fiddling with the code for a while to get it to work I've decided that the whole module is fairly broken. pylint and pychecker showed up a number of unused variables and imports, reading the code led me to missing imports, uncommenting the test code in the __main__ section caused breakage and there are any number of problems if you try to use multiple parameters for your function :-) I've started work on fixing all these issues, I was wondering if anyone on the list wanted to claim rights over this module and have a say in any of the changes that might happen. The comments at the top of the file claim: ## Automatically adapted for scipy Oct 07, 2005 by convertcode.py # Author: Travis Oliphant 2002 If noone has any objections I'll open a ticket and submit a patch when I'm done. Cheers, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.leslie at gmail.com Mon Mar 6 02:33:44 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 6 Mar 2006 18:33:44 +1100 Subject: [SciPy-dev] broken scipy.optimize.anneal In-Reply-To: References: Message-ID: On 3/6/06, Tim Leslie wrote: > > If noone has any objections I'll open a ticket and submit a patch when I'm > done. > Patch submitted. http://projects.scipy.org/scipy/scipy/ticket/16 Cheers, Tim Cheers, > > Tim > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cimrman3 at ntc.zcu.cz Mon Mar 6 04:09:02 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 06 Mar 2006 10:09:02 +0100 Subject: [SciPy-dev] ***[Possible UCE]*** Re: ***[Possible UCE]*** Re: Another segfault In-Reply-To: References: <4406F253.7060304@mecha.uni-stuttgart.de> <44070EA0.5090609@gmail.com> <440711D6.5050502@mecha.uni-stuttgart.de> <440717C2.9040004@ftw.at> <440825B1.7060705@ntc.zcu.cz> <440855D1.7060506@ftw.at> <44086799.7050802@ntc.zcu.cz> <4408B554.6050902@ee.byu.edu> <4408BFAC.1010706@ee.byu.edu> <4408C592.9040704@ee.byu.edu> <4408FC97.8030509@ee.byu.edu> <440956AD.1000807@ieee.org> Message-ID: <440BFC2E.2040806@ntc.zcu.cz> Nils Wagner wrote: > On Sat, 04 Mar 2006 01:58:21 -0700 > Travis Oliphant wrote: > >>Nils Wagner wrote: >> >>> >>>Hi Travis, >>> >>>No longer any segfault on 32/64 bit systems. w.r.t. to >>>sparse.py. Thank you very much !! >>>But on 64 bit systems scipy.test(1,10) still results >>>in >>> >> >>Try it now. I hadn't tried to fix those 64-bit issues >>beyond the >>segfaulting. Hopefully my recent fixes will improve >>the ERROR situation. >> >>-Travis >> >>_______________________________________________ >>Scipy-dev mailing list >>Scipy-dev at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-dev > > > > Great ! Thank you very much ! > Ran 1109 tests in 2.130s > > OK > > Yeah, Travis is the ultimate solution for all bugs! :-) r. From jesper.friis at material.ntnu.no Mon Mar 6 04:17:20 2006 From: jesper.friis at material.ntnu.no (Jesper Friis) Date: Mon, 06 Mar 2006 10:17:20 +0100 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems Message-ID: <440BFE20.8090105@material.ntnu.no> I hope this is the correct place to post messages like this. Last Friday I submitted a small patch for integrate/ode.py making it working for banded systems. I have now successfully applied the solver to the banded example problem included in the cvode package. 
The attached script might both work as a test and as an example on how to solve banded systems. I think, especially the following comment considering the Jacobian might be of general interest: # The Jacobian. # For banded systems this function returns a matrix pd of # size ml+mu*2+1 by neq containing the partial derivatives # df[k]/du[u]. Here f is the right hand side function in # udot=f(t,u), ml and mu are the lower and upper half bandwidths # and neq the number of equations. The derivatives df[k]/du[l] # are loaded into pd[mu+k-l,k], i.e. the diagonals are loaded into # the rows of pd from top down (fortran indexing). # # Confusingly, the number of rows VODE expect is not ml+mu+1, but # given by a parameter nrowpd, which unfortunately is left out in # the python interface. However, it seems that VODE expect that # nrowpd = ml+2*mu+1. E.g. for our system with ml=mu=5 VODE expect # 16 rows. Fortunately the f2py interface prints out an error if # the number of rows is wrong, so as long ml and mu are known in # beforehand one can always determine nrowpd by trial and error... Regards /Jesper -------------- next part -------------- A non-text attachment was scrubbed... Name: test_ode_banded.py Type: text/x-python Size: 5695 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Mon Mar 6 06:22:45 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 12:22:45 +0100 Subject: [SciPy-dev] Bug w.r.t. Addition of sparse matrices Message-ID: <440C1B85.7070201@mecha.uni-stuttgart.de> The addition of sparse matrices results in wrong results (see add.py for details). Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: add.py Type: text/x-python Size: 210 bytes Desc: not available URL: From cimrman3 at ntc.zcu.cz Mon Mar 6 06:49:10 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 06 Mar 2006 12:49:10 +0100 Subject: [SciPy-dev] Bug w.r.t. 
Addition of sparse matrices In-Reply-To: <440C1B85.7070201@mecha.uni-stuttgart.de> References: <440C1B85.7070201@mecha.uni-stuttgart.de> Message-ID: <440C21B6.1030601@ntc.zcu.cz> Nils Wagner wrote: > The addition of sparse matrices results in wrong results (see add.py for > details). > > Nils > > > ------------------------------------------------------------------------ > > from scipy import * > from scipy.sparse import * > A = rand(3,3) > B = rand(3,3) > A_csr = csr_matrix(A) > B_csr = csr_matrix(B) > C_csr = A_csr+B_csr > C = A + B > print C[0,0], C_csr[0,0], 'should be zero',C[0,0]-C_csr[0,0] I think it works ok: $ python add.py 0.393709556483 0.393709570169 should be zero -1.36865185851e-08 1e-8 is the float precision... Try using doubles. r. From nwagner at mecha.uni-stuttgart.de Mon Mar 6 06:59:22 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 12:59:22 +0100 Subject: [SciPy-dev] Bug w.r.t. Addition of sparse matrices In-Reply-To: <440C21B6.1030601@ntc.zcu.cz> References: <440C1B85.7070201@mecha.uni-stuttgart.de> <440C21B6.1030601@ntc.zcu.cz> Message-ID: <440C241A.9010704@mecha.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> The addition of sparse matrices results in wrong results (see add.py for >> details). >> >> Nils >> >> >> ------------------------------------------------------------------------ >> >> from scipy import * >> from scipy.sparse import * >> A = rand(3,3) >> B = rand(3,3) >> A_csr = csr_matrix(A) >> B_csr = csr_matrix(B) >> C_csr = A_csr+B_csr >> C = A + B >> print C[0,0], C_csr[0,0], 'should be zero',C[0,0]-C_csr[0,0] >> > > I think it works ok: > > $ python add.py > 0.393709556483 0.393709570169 should be zero -1.36865185851e-08 > > 1e-8 is the float precision... Try using doubles. > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > How do I change the precision ? 
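For context, the 'f' and 'd' typecodes in this thread are today's float32 and float64; a minimal numpy-only sketch of why a ~1e-8 discrepancy is ordinary rounding in single precision, using the modern dtype API (`dtype=` at construction replaces the astype('d') step; the array sizes here are illustrative):

```python
import numpy as np

# Machine epsilon, i.e. the resolution of each floating-point type.
eps32 = np.finfo(np.float32).eps   # ~1.2e-7  (~7 decimal digits)
eps64 = np.finfo(np.float64).eps   # ~2.2e-16 (~16 decimal digits)

# The same sum in single vs. double precision differs by O(eps32),
# so a difference around 1e-8 is rounding error, not a bug.
rng = np.random.default_rng(0)
a = rng.random((3, 3)).astype(np.float32)
b = rng.random((3, 3)).astype(np.float32)
diff = np.abs((a + b) - (a.astype(np.float64) + b.astype(np.float64)))

# Choosing the precision explicitly at construction time:
c = np.zeros((3, 3), dtype=np.float64)
```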
Nils From nwagner at mecha.uni-stuttgart.de Mon Mar 6 07:19:06 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 13:19:06 +0100 Subject: [SciPy-dev] Bug w.r.t. Addition of sparse matrices In-Reply-To: <440C21B6.1030601@ntc.zcu.cz> References: <440C1B85.7070201@mecha.uni-stuttgart.de> <440C21B6.1030601@ntc.zcu.cz> Message-ID: <440C28BA.5030207@mecha.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> The addition of sparse matrices results in wrong results (see add.py for >> details). >> >> Nils >> >> >> ------------------------------------------------------------------------ >> >> from scipy import * >> from scipy.sparse import * >> A = rand(3,3) >> B = rand(3,3) >> A_csr = csr_matrix(A) >> B_csr = csr_matrix(B) >> C_csr = A_csr+B_csr >> C = A + B >> print C[0,0], C_csr[0,0], 'should be zero',C[0,0]-C_csr[0,0] >> > > I think it works ok: > > $ python add.py > 0.393709556483 0.393709570169 should be zero -1.36865185851e-08 > > 1e-8 is the float precision... Try using doubles. > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Am I missing something ? I have used astype('d') to switch to double precision. BTW, is there any reason why default is single precision ? python -i add.py 0.561831531319 0.561831533909 should be zero -2.59017984838e-09 Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: add.py Type: text/x-python Size: 234 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Mon Mar 6 08:12:49 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 14:12:49 +0100 Subject: [SciPy-dev] Bug w.r.t. 
Addition of sparse matrices In-Reply-To: <440C28BA.5030207@mecha.uni-stuttgart.de> References: <440C1B85.7070201@mecha.uni-stuttgart.de> <440C21B6.1030601@ntc.zcu.cz> <440C28BA.5030207@mecha.uni-stuttgart.de> Message-ID: <440C3551.2050904@mecha.uni-stuttgart.de> Nils Wagner wrote: > Robert Cimrman wrote: > >> Nils Wagner wrote: >> >> >>> The addition of sparse matrices results in wrong results (see add.py for >>> details). >>> >>> Nils >>> >>> >>> ------------------------------------------------------------------------ >>> >>> from scipy import * >>> from scipy.sparse import * >>> A = rand(3,3) >>> B = rand(3,3) >>> A_csr = csr_matrix(A) >>> B_csr = csr_matrix(B) >>> C_csr = A_csr+B_csr >>> C = A + B >>> print C[0,0], C_csr[0,0], 'should be zero',C[0,0]-C_csr[0,0] >>> >>> >> I think it works ok: >> >> $ python add.py >> 0.393709556483 0.393709570169 should be zero -1.36865185851e-08 >> >> 1e-8 is the float precision... Try using doubles. >> >> r. >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-dev >> >> > Am I missing something ? > I have used astype('d') to switch to double precision. > > BTW, is there any reason why default is single precision ? 
> > python -i add.py > 0.561831531319 0.561831533909 should be zero -2.59017984838e-09 > > Nils > > > ------------------------------------------------------------------------ > > from scipy import * > from scipy.sparse import * > A = rand(3,3) > B = rand(3,3) > A_csr = csr_matrix(A).astype('d') > B_csr = csr_matrix(B).astype('d') > C_csr = A_csr+B_csr > C = A + B > print C[0,0], C_csr[0,0], 'should be zero',C[0,0]-C_csr[0,0] > > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > It works with astype('D') double precision complex but astype('d') doesn't work. For what reason ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: add.py Type: text/x-python Size: 253 bytes Desc: not available URL: From strawman at astraw.com Mon Mar 6 11:57:09 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 06 Mar 2006 08:57:09 -0800 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: <440BFE20.8090105@material.ntnu.no> References: <440BFE20.8090105@material.ntnu.no> Message-ID: <440C69E5.9050402@astraw.com> Hi Jesper, Thanks, this looks interesting. I've copied your email to the following location so it doesn't slip through the cracks: http://www.scipy.org/Developer_Zone/Recipes/ODE_Integration_on_Banded_Systems You indicated it might serve as an example, so I hope after any potential refinement in the "Developer Zone" you (or someone with more knowledge of integration on banded systems than I) can move it over to the "Cookbook". Cheers! Andrew Jesper Friis wrote: > I hope this is the correct place to post messages like this. Last > Friday I submitted a small patch for integrate/ode.py making it > working for banded systems. I have now successfully applied the solver > to the banded example problem included in the cvode package. 
The > attached script might both work as a test and as an example on how to > solve banded systems. > > I think, especially the following comment considering the Jacobian > might be of general interest: > # The Jacobian. > # For banded systems this function returns a matrix pd of > # size ml+mu*2+1 by neq containing the partial derivatives > # df[k]/du[u]. Here f is the right hand side function in > # udot=f(t,u), ml and mu are the lower and upper half bandwidths > # and neq the number of equations. The derivatives df[k]/du[l] > # are loaded into pd[mu+k-l,k], i.e. the diagonals are loaded into > # the rows of pd from top down (fortran indexing). > # > # Confusingly, the number of rows VODE expect is not ml+mu+1, but > # given by a parameter nrowpd, which unfortunately is left out in > # the python interface. However, it seems that VODE expect that > # nrowpd = ml+2*mu+1. E.g. for our system with ml=mu=5 VODE expect > # 16 rows. Fortunately the f2py interface prints out an error if > # the number of rows is wrong, so as long ml and mu are known in > # beforehand one can always determine nrowpd by trial and error... > > > > Regards > /Jesper > >------------------------------------------------------------------------ > >#!/usr/bin/env python > ># Test provided provided by Jesper Friis, > >from scipy import arange,zeros,exp >from scipy.integrate import ode > >def test_ode_banded(): > """Banded example copied from CVODE. > > The following is a simple example problem with a banded Jacobian, > with the program for its solution by CVODE. > The problem is the semi-discrete form of the advection-diffusion > equation in 2-D: > du/dt = d^2 u / dx^2 + 0.5 du/dx + d^2 u / dy^2 > on the rectangle 0 <= x <= 2, 0 <= y <= 1, and the time > interval 0 <= t <= 1. Homogeneous Dirichlet boundary conditions > are posed, and the initial condition is > u(x,y,t=0) = x(2-x)y(1-y)exp(5xy) . 
> The PDE is discretized on a uniform MX+2 by MY+2 grid with > central differencing, and with boundary values eliminated, > leaving an ODE system of size NEQ = MX*MY. > > Assuming that MY < MX a minimum bandwidth banded system can be > constructed by arranging the grid points along columns. This > results in the lower and upper bandwidths ml = mu = MY. > > This function solves the problem with the BDF method, Newton > iteration with the VODE band linear solver, and a user-supplied > Jacobian routine. It uses scalar relative and absolute tolerances. > Output is printed at t = 0., .1, .2, ..., 1. > """ > # Some constants > xmax = 2.0 # domain boundaries > ymax = 1.0 > mx = 10 # mesh dimensions > my = 5 > dx = xmax/(mx+1.) # grid spacing > dy = ymax/(my+1.) > neq = mx*my # number of equations > mu = my # half bandwidths > ml = my > atol = 1.e-5 # scalar absolute tolerance > nrowpd = ml+2*mu+1 # number of rows in storage of banded Jacobian > x = dx*(arange(mx)+1.0) # inner grid points > y = dy*(arange(my)+1.0) > t = 0.1*arange(11) # the times we want to print the solution > > # The right hand side function in udot = f(t,u) > def f(t,u): > for j in range(mx): > for i in range(my): > # Get index of gridpoint i,j in u and udot > k = j*my+i > # Extract u at x_j, y_i and four neighboring points > ult = urt = uup = udn = 0.0 > uij = u[k] > if j>0: ult = u[k-my] > if j if i>0: uup = u[k-1] > if i # Set diffusion and advection terms and load into udot > hdiff = (ult - 2.0*uij + urt)/(dx*dx) > vdiff = (uup - 2.0*uij + udn)/(dy*dy) > hadv = 0.5*(urt - ult)/(2.0*dx) > udot[j*my+i] = hdiff + hadv + vdiff > return udot > > > # The Jacobian. > # For banded systems this function returns a matrix pd of > # size ml+mu*2+1 by neq containing the partial derivatives > # df[k]/du[u]. Here f is the right hand side function in > # udot=f(t,u), ml and mu are the lower and upper half bandwidths > # and neq the number of equations. 
The derivatives df[k]/du[l]
>     # are loaded into pd[mu+k-l,k], i.e. the diagonals are loaded into
>     # the rows of pd from top down (fortran indexing).
>     #
>     # Confusingly, the number of rows VODE expect is not ml+mu+1, but
>     # given by a parameter nrowpd, which unfortunately is left out in
>     # the python interface. However, it seems that VODE expect that
>     # nrowpd = ml+2*mu+1. E.g. for our system with ml=mu=5 VODE expect
>     # 16 rows. Fortunately the f2py interface prints out an error if
>     # the number of rows is wrong, so as long ml and mu are known in
>     # beforehand one can always determine nrowpd by trial and error...
>     def jac(t,u):
>         # The components of u that f[i,j] = udot_ij depends on are:
>         # u[i,j], u[i,j-1], u[i,j+1], u[i-1,j] and u[i+1,j], with
>         # df[i,j]/du[i,j]   = -2 (1/dx^2 + 1/dy^2),    l=k
>         # df[i,j]/du[i,j-1] = 1/dx^2 - .25/dx, j>0,    l=k-my
>         # df[i,j]/du[i,j+1] = 1/dx^2 + .25/dx, j<mx-1, l=k+my
>         # df[i,j]/du[i-1,j] = 1/dy^2,          i>0,    l=k-1
>         # df[i,j]/du[i+1,j] = 1/dy^2,          i<my-1, l=k+1
>         # where k=j*my+i.
>         for j in range(mx):
>             for i in range(my):
>                 k = j*my+i
>                 pd[mu,k] = -2.0*(1.0/(dx*dx) + 1.0/(dy*dy))
>                 if j > 0: pd[mu-my,k] = 1.0/(dx*dx) + 0.25/dx
>                 if j < mx-1: pd[mu+my,k] = 1.0/(dx*dx) + 0.25/dx
>                 if i > 0: pd[mu-1,k] = 1.0/(dy*dy)
>                 if i < my-1: pd[mu+1,k] = 1.0/(dy*dy)
>         return pd
>
>     # Initial value
>     u = zeros(neq,float)
>     for j in range(mx):
>         u[j*my:(j+1)*my] = x[j]*(xmax - x[j])*y*(ymax - y)*exp(5*x[j]*y)
>
>     # Allocate global work arrays pd and udot
>     pd = zeros((nrowpd,neq),float)
>     udot = zeros(neq,float)
>
>     # Solve the problem
>     print "2-D advection-diffusion equation, mesh dimensions =%3d %3d" %(mx,my)
>     print "Banded solution, bandwidth = %d" % (ml+mu+1)
>     r = ode(f, jac)
>     r.set_integrator('vode',atol=atol,lband=ml,uband=mu,method='bdf')
>     r.set_initial_value(u, t=t[0])
>     print 'At t=%4.2f max.norm(u) = %-12.4e'%(r.t, max(u))
>     for tout in t[1:]:
>         u = r.integrate(tout)
>         print 'At t=%4.2f max.norm(u) = %-12.4e'%(r.t, max(u))
>         if not r.successful():
>             print "An error occurred during integration"
>             break
>
> test_ode_banded()
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From ndbecker2 at gmail.com Mon Mar 6 12:53:27 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 06 Mar 2006 12:53:27 -0500 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements Message-ID:

I just tried a little test on Fedora FC5 (x86_64) that used integrate.quad and the function special.i0e, and got:

Adjust D1MACH by uncommenting data statements
appropriate for your machine.
STOP 779

From robert.kern at gmail.com Mon Mar 6 12:56:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 06 Mar 2006 11:56:41 -0600 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements In-Reply-To: References: Message-ID: <440C77D9.8090009@gmail.com>

Neal Becker wrote:
> I just tried a little test on Fedora FC5 (x86_64) that used integrate.quad
> and the function special.i0e, and got:
>
> Adjust D1MACH by uncommenting data statements
> appropriate for your machine.
> STOP 779

Someone else ran into this a bit earlier. He tracked down the problem to gfortran. Using g77 instead solved his problem.

-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From nwagner at mecha.uni-stuttgart.de Mon Mar 6 13:55:28 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 19:55:28 +0100 Subject: [SciPy-dev] 0.9.6.2201 NameError: global name 'pi' is not defined Message-ID:

Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/local/lib/python2.4/site-packages/numpy/__init__.py", line 46, in test
    return NumpyTest().test(level, verbosity)
File "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", line 422, in test
    suites.extend(self._get_module_tests(module, abs(level), verbosity))
File "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", line 362, in _get_module_tests
    return self._get_suite_list(test_module, level, module.__name__)
File "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", line 377, in _get_suite_list
    suite = obj(mthname)
File "/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_ma.py", line 17, in __init__
    self.setUp()
File "/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_ma.py", line 20, in setUp
    x=numpy.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.])
NameError: global name 'pi' is not defined

From byrnes at bu.edu Mon Mar 6 14:07:24 2006 From: byrnes at bu.edu (John Byrnes) Date: Mon, 6 Mar 2006 14:07:24 -0500 Subject: [SciPy-dev] 0.9.6.2201 NameError: global name 'pi' is not defined In-Reply-To: References: Message-ID: <20060306190724.GF17191@localhost.localdomain>

You need to refer to pi as numpy.pi.

Regards,
John

On Mon, Mar 06, 2006 at 07:55:28PM +0100, Nils Wagner wrote:
> Traceback (most recent call last):
> File "<stdin>", line 1, in ?
> File > "/usr/local/lib/python2.4/site-packages/numpy/__init__.py", > line 46, in test > return NumpyTest().test(level, verbosity) > File > "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", > line 422, in test > suites.extend(self._get_module_tests(module, > abs(level), verbosity)) > File > "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", > line 362, in _get_module_tests > return self._get_suite_list(test_module, level, > module.__name__) > File > "/usr/local/lib/python2.4/site-packages/numpy/testing/numpytest.py", > line 377, in _get_suite_list > suite = obj(mthname) > File > "/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_ma.py", > line 17, in __init__ > self.setUp() > File > "/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_ma.py", > line 20, in setUp > x=numpy.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., > 10., 1., 2., 3.]) > NameError: global name 'pi' is not defined > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- An honest man can feel no pleasure in the exercise of power over his fellow citizens. -- Thomas Jefferson, letter to John Melish, January 13, 1813 From aisaac at american.edu Mon Mar 6 18:14:46 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Mar 2006 18:14:46 -0500 Subject: [SciPy-dev] Fwd: Re: CSparse license Message-ID: I hesitate to speak to this, but I am willing to convey the developer view back to Tim Davis. Alan Isaac ------ Forwarded message ------ From: Tim Davis Date: Mon, 06 Mar 2006 11:36:30 -0500 Subject: Re: CSparse license To: Alan Isaac CSparse uses LGPL, not GPL. Isn't that compatible with SciPy? 
-------- End of message ------- From robert.kern at gmail.com Mon Mar 6 18:23:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 06 Mar 2006 17:23:53 -0600 Subject: [SciPy-dev] Fwd: Re: CSparse license In-Reply-To: References: Message-ID: <440CC489.9010707@gmail.com> Alan G Isaac wrote: > I hesitate to speak to this, but I am willing to convey the > developer view back to Tim Davis. > > Alan Isaac > > ------ Forwarded message ------ > From: Tim Davis > Date: Mon, 06 Mar 2006 11:36:30 -0500 > Subject: Re: CSparse license > To: Alan Isaac > > CSparse uses LGPL, not GPL. Isn't that compatible with SciPy? CSparse code and SciPy code can live side by side in a project just fine. The licenses are compatible in that respect. However, the SciPy project tries not to include code with licenses more restrictive than the BSD license. Among other reasons, the BSD license is short enough to be understandable by non-lawyers. I still think that there is value to keeping the scipy package wholly BSD-licensed as much as possible. Using BSD-licensed code is essentially a no-brainer; it's compatible with essentially everything and entails only the smallest of commitments. If people want to wrap CSparse for SciPy, this would be an excellent seed for Fernando's SciPy Kits idea. scikits.csparse, anyone? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Doug.LATORNELL at mdsinc.com Mon Mar 6 20:21:46 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Mon, 6 Mar 2006 17:21:46 -0800 Subject: [SciPy-dev] More General BSD Support in ufuncobject.h Message-ID: <34090E25C2327C4AA5D276799005DDE00101055B@SMDMX0501.mds.mdsinc.com> For what it's worth... I just svn updated and rebuilt my NumPy installation on my OpenBSD machine. 
Before building I changed the defined() at line 251 of ufuncobject.h:

-#elif defined(sun) || defined(__OpenBSD__) || defined(__FreeBSD__)
+#elif defined(sun) || defined(__BSD__)

Numpy builds, installs, and tests fine (see below). This change *should* work for all of the BSD OSs (OpenBSD, FreeBSD, NetBSD, DragonflyBSD, ...), though I have only tested it on OpenBSD 3.8.

Doug

=========== Post-install test =============

In [1]: import numpy
In [2]: numpy.__version__
Out[3]: '0.9.6.2204'
In [4]: numpy.test()
Found 11 tests for numpy.core.umath
Found 8 tests for numpy.lib.arraysetops
Found 29 tests for numpy.core.ma
Found 1 tests for numpy.lib.ufunclike
Found 1 tests for numpy.lib.polynomial
Found 6 tests for numpy.core.records
Found 14 tests for numpy.core.numeric
Found 4 tests for numpy.distutils.misc_util
Found 3 tests for numpy.lib.getlimits
Found 30 tests for numpy.core.numerictypes
Found 9 tests for numpy.lib.twodim_base
Found 1 tests for numpy.core.oldnumeric
Found 44 tests for numpy.lib.shape_base
Found 4 tests for numpy.lib.index_tricks
Found 42 tests for numpy.lib.type_check
Found 3 tests for numpy.dft.helper
Found 87 tests for numpy.core.multiarray
Found 8 tests for numpy.core.defmatrix
Found 33 tests for numpy.lib.function_base
Found 0 tests for __main__
........................................................................
........................................................................
........................................................................
........................................................................
..................................................
----------------------------------------------------------------------
Ran 338 tests in 0.886s

OK
Out[5]:

This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient.
If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From ndbecker2 at gmail.com Mon Mar 6 13:20:41 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 06 Mar 2006 13:20:41 -0500 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements References: <440C77D9.8090009@gmail.com> Message-ID: Robert Kern wrote: > Neal Becker wrote: >> I just tried a little test on Fedora FC5 (x86_64) that used >> integrate.quad and the function special.i0e, and go: >> >> Adjust D1MACH by uncommenting data statements >> appropriate for your machine. >> STOP 779 > > Someone else ran into this a bit earlier. He tracked down the problem to > gfortran. Using g77 instead solved his problem. > Thanks. I wonder if anyone knows of a way to fix this so it works with gfortran? From robert.kern at gmail.com Mon Mar 6 23:26:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 06 Mar 2006 22:26:00 -0600 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements In-Reply-To: References: <440C77D9.8090009@gmail.com> Message-ID: <440D0B58.4040103@gmail.com> Neal Becker wrote: > Robert Kern wrote: > >>Neal Becker wrote: >> >>>I just tried a little test on Fedora FC5 (x86_64) that used >>>integrate.quad and the function special.i0e, and go: >>> >>> Adjust D1MACH by uncommenting data statements >>> appropriate for your machine. >>>STOP 779 >> >>Someone else ran into this a bit earlier. He tracked down the problem to >>gfortran. Using g77 instead solved his problem. > > Thanks. I wonder if anyone knows of a way to fix this so it works with gfortran? 
Possibly a more recent version of gfortran will help. Of course, I'm willing to bet that the only reason most people here are using gfortran is because their Linux distro had gcc 4 installed by default. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18518 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From loredo at astro.cornell.edu Tue Mar 7 00:55:35 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 7 Mar 2006 00:55:35 -0500 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy In-Reply-To: References: Message-ID: <1141710935.440d205795a1b@astrosun2.astro.cornell.edu> Robert, thanks for the interesting suggestions! Robert Kern wrote: > Also, I remember asking Pearu to expose the raw function pointer in the fortran > object wrapper. The idea was to be able to allow f2py callback functions to be > f2py'ed subroutines themselves, and have everything go really, really fast. > > In [6]: linalg.flapack.dgetrf._cpointer > Out[6]: > > I believe this actually gives you the function pointer to DGETRF_. Alas, this is what I see on OS X (10.3.9): In [9]: from scipy import linalg In [10]: linalg.flapack.dgetrf._cpointer AttributeError: 'module' object has no attribute 'flapack' In [11]: linalg.lapack.dgetrf._cpointer AttributeError: 'module' object has no attribute 'lapack' In [12]: linalg.clapack.dgetrf._cpointer AttributeError: 'module' object has no attribute 'clapack' This is presumably related to whatever is behind the scipy.test() warnings about missing blas/lapack on OS X that I and others have reported here, presumably reflecting the use of Apple veclib stuff. Any ideas on a portable way to try this? > Or you could > just accept the fortran object itself, and access the function pointer from the > PyFortranObject structure itself. 
> > Ah, now I've rambled myself to the right answer. Don't pay attention to anything > but the last sentence of the previous paragraph. AFAICT, this is a new approach, > so you can be our guinea pig. I don't know how to pursue this. I'll have to dig more into the f2py docs to understand it. And since I have the attention of the person behind random/mtrand 8-), I am also wrapping some RNGs in Fortran (e.g., multivariate t), and wondering if there is a way to have them call Numpy's "rand" (or standard_normal, etc.). I thought perhaps I could try your trick, but: In [13]: random.rand Out[13]: In [14]: random.rand._cpointer AttributeError: 'builtin_function_or_method' object has no attribute '_cpointer' I presume this is because mtrand is not built with f2py. Right now I'm just sending my Fortran RNGs arrays of input rands from Python, rather than having them call a uniform or normal RNG directly. Fortunately what I'm working on now requires a deterministic # of input randoms. I'm not sure what I could do in other cases without going through the overhead of a Python callback. But this should work for now. -Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Tue Mar 7 01:16:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 07 Mar 2006 00:16:17 -0600 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy In-Reply-To: <1141710935.440d205795a1b@astrosun2.astro.cornell.edu> References: <1141710935.440d205795a1b@astrosun2.astro.cornell.edu> Message-ID: <440D2531.4010001@gmail.com> Tom Loredo wrote: > Robert, thanks for the interesting suggestions! > > Robert Kern wrote: > >>Also, I remember asking Pearu to expose the raw function pointer in the fortran >>object wrapper. The idea was to be able to allow f2py callback functions to be >>f2py'ed subroutines themselves, and have everything go really, really fast. 
>> >>In [6]: linalg.flapack.dgetrf._cpointer >>Out[6]: >> >>I believe this actually gives you the function pointer to DGETRF_. > > Alas, this is what I see on OS X (10.3.9): > > In [9]: from scipy import linalg > In [10]: linalg.flapack.dgetrf._cpointer > AttributeError: 'module' object has no attribute 'flapack' > > In [11]: linalg.lapack.dgetrf._cpointer > AttributeError: 'module' object has no attribute 'lapack' > > In [12]: linalg.clapack.dgetrf._cpointer > AttributeError: 'module' object has no attribute 'clapack' > > This is presumably related to whatever is behind the scipy.test() > warnings about missing blas/lapack on OS X that I and others have reported > here, presumably reflecting the use of Apple veclib stuff. Any > ideas on a portable way to try this? I dunno. I'm using OS X 10.4, although I think I build with ATLAS. I think you should at least have flapack. Can you successfully call any of the functions from scipy.linalg? What versions of numpy/scipy are you using? Can you try "from scipy.linalg import flapack"? What does In [3]: from numpy import linalg as n_linalg In [4]: from scipy import linalg as s_linalg In [5]: n_linalg is s_linalg Out[5]: False give you? >>Or you could >>just accept the fortran object itself, and access the function pointer from the >>PyFortranObject structure itself. >> >>Ah, now I've rambled myself to the right answer. Don't pay attention to anything >>but the last sentence of the previous paragraph. AFAICT, this is a new approach, >>so you can be our guinea pig. > > I don't know how to pursue this. I'll have to dig more into > the f2py docs to understand it. Probably the most straightforward way is to use the ._cpointer, actually. You can read about the PyCObject structure in the Python documentation, I believe. My final suggestion probably requires #includeing the fortranobject.h header from f2py, which makes things a bit more complicated. 
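On a current SciPy the wrapper being discussed can be exercised directly — the f2py-generated LAPACK wrappers now live in scipy.linalg.lapack. A minimal sketch; the `_cpointer` attribute is an f2py implementation detail that may vary across versions, hence the getattr guard:

```python
import numpy as np
from scipy.linalg import lapack

# Call the f2py wrapper for DGETRF (LU factorization with partial pivoting).
a = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv, info = lapack.dgetrf(a)
assert info == 0  # info == 0 means LAPACK reported success

# The raw function pointer discussed above; guarded because exposing it
# as _cpointer is an f2py detail, not a documented SciPy API.
ptr = getattr(lapack.dgetrf, "_cpointer", None)
print(lu[0])  # first row of the combined L/U factors
```

Since |6| > |4|, partial pivoting swaps the rows first, so the first row of the returned factor holds the pivot row [6, 3].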
> And since I have the attention of the person behind random/mtrand 8-), > I am also wrapping some RNGs in Fortran (e.g., multivariate t), and > wondering if there is a way to have them call Numpy's "rand" (or > standard_normal, etc.). I thought perhaps I could try your trick, but: > > In [13]: random.rand > Out[13]: > In [14]: random.rand._cpointer > AttributeError: 'builtin_function_or_method' object has no attribute '_cpointer' > > I presume this is because mtrand is not built with f2py. Indeed it is not. The module itself is written using Pyrex, and I don't think there is any reasonable hope of getting Pyrex to do a similar trick. One of these weekends, I am going to rewrite it in pure C and expose an API with a similar mechanism as numpy's. I think the extension has stabilized enough that the benefits of keeping it in Pyrex don't outweigh the usefulness of a C API. Well, maybe that's not quite true. Let me think about it. If anyone wants to make a stab at this, let me know. I have a few crazy ideas floating in my head now. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jesper.friis at material.ntnu.no Tue Mar 7 06:46:23 2006 From: jesper.friis at material.ntnu.no (Jesper Friis) Date: Tue, 07 Mar 2006 12:46:23 +0100 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: <440C69E5.9050402@astraw.com> References: <440BFE20.8090105@material.ntnu.no> <440C69E5.9050402@astraw.com> Message-ID: <440D728F.1060908@material.ntnu.no> Hi Andrew, you are welcome to use it as an example in the "Cookbook". What kind of refinements are you thinking about? Validation of the code or conversion into a form similar to the other Cookbook examples? Regards /Jesper Andrew Straw wrote: > Hi Jesper, > > Thanks, this looks interesting. 
I've copied your email to the following
> location so it doesn't slip through the cracks:
>
> http://www.scipy.org/Developer_Zone/Recipes/ODE_Integration_on_Banded_Systems
>
> You indicated it might serve as an example, so I hope after any
> potential refinement in the "Developer Zone" you (or someone with more
> knowledge of integration on banded systems than I) can move it over to
> the "Cookbook".
>
> Cheers!
> Andrew
>
> Jesper Friis wrote:
>
>> I hope this is the correct place to post messages like this. Last
>> Friday I submitted a small patch for integrate/ode.py making it
>> working for banded systems.
>> [...]

From ndbecker2 at gmail.com Tue Mar 7 07:44:37 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 07 Mar 2006
07:44:37 -0500 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements References: <440C77D9.8090009@gmail.com> <440D0B58.4040103@gmail.com> Message-ID: Robert Kern wrote: > Neal Becker wrote: >> Robert Kern wrote: >> >>>Neal Becker wrote: >>> >>>>I just tried a little test on Fedora FC5 (x86_64) that used >>>>integrate.quad and the function special.i0e, and go: >>>> >>>> Adjust D1MACH by uncommenting data statements >>>> appropriate for your machine. >>>>STOP 779 >>> >>>Someone else ran into this a bit earlier. He tracked down the problem to >>>gfortran. Using g77 instead solved his problem. >> >> Thanks. I wonder if anyone knows of a way to fix this so it works with >> gfortran? > > Possibly a more recent version of gfortran will help. Of course, I'm > willing to bet that the only reason most people here are using gfortran is > because their Linux distro had gcc 4 installed by default. > > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18518 > Problem is still there when I rebuilt scipy with: gcc version 4.1.0 20060304 (Red Hat 4.1.0-2) I don't think you'll find much newer gfortran :( From ndbecker2 at gmail.com Tue Mar 7 08:30:09 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 07 Mar 2006 08:30:09 -0500 Subject: [SciPy-dev] scipy-0.4.6 Adjust D1MACH by uncommenting data statements References: <440C77D9.8090009@gmail.com> <440D0B58.4040103@gmail.com> Message-ID: Robert Kern wrote: > Neal Becker wrote: >> Robert Kern wrote: >> >>>Neal Becker wrote: >>> >>>>I just tried a little test on Fedora FC5 (x86_64) that used >>>>integrate.quad and the function special.i0e, and go: >>>> >>>> Adjust D1MACH by uncommenting data statements >>>> appropriate for your machine. >>>>STOP 779 >>> >>>Someone else ran into this a bit earlier. He tracked down the problem to >>>gfortran. Using g77 instead solved his problem. >> >> Thanks. I wonder if anyone knows of a way to fix this so it works with >> gfortran? 
> > Possibly a more recent version of gfortran will help. Of course, I'm > willing to bet that the only reason most people here are using gfortran is > because their Linux distro had gcc 4 installed by default. > > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18518 > Compiling d1mach.f with -O rather than -O2 fixes it also. How can I patch scipy to compile only d1mach.f using -O? From ndbecker2 at gmail.com Tue Mar 7 09:41:27 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 07 Mar 2006 09:41:27 -0500 Subject: [SciPy-dev] [PATCH] d1mach problem Message-ID: I think these patches will fix the problem with d1mach miscompiling with gcc4.1: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-build_clib-patch Type: text/x-diff Size: 689 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy-d1mach.patch Type: text/x-diff Size: 1201 bytes Desc: not available URL: From strawman at astraw.com Tue Mar 7 11:51:47 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 07 Mar 2006 08:51:47 -0800 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: <440D728F.1060908@material.ntnu.no> References: <440BFE20.8090105@material.ntnu.no> <440C69E5.9050402@astraw.com> <440D728F.1060908@material.ntnu.no> Message-ID: <440DBA23.1050907@astraw.com> Jesper Friis wrote: >Hi Andrew, >you are welcome to use it as an example in the "Cookbook". What kind of >refinements are you thinking about? Validation of the code or conversion >into a form similar to the other Cookbook examples? > > Hi Jesper, There's been a suggestion that new Cookbook entries don't go directly into the Cookbook but rather have an incubation period where they're vetted. 
The thinking, which seems reasonable to me, is that the quality of the Cookbook section should be very high -- thus, it would be good if a 2nd set of eyeballs (of someone with an understanding of integrating banded systems) scanned the page you wrote. Of course, this criterion is only going to work if someone actually looks at it, so if that doesn't happen in a reasonable period of time, I say we move it over anyway... I didn't have any specific refinements in mind, having merely copied the email you sent. Cheers! Andrew From strawman at astraw.com Tue Mar 7 11:57:22 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 07 Mar 2006 08:57:22 -0800 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: <440DBA23.1050907@astraw.com> References: <440BFE20.8090105@material.ntnu.no> <440C69E5.9050402@astraw.com> <440D728F.1060908@material.ntnu.no> <440DBA23.1050907@astraw.com> Message-ID: <440DBB72.1000606@astraw.com> And now, upon reading the comments in the code of that page, it seems you've uncovered what may be a bug in the VODE wrapper. Would you say it is? If so, we should report the bug, hopefully fix it either ourselves or have a developer do it, and fix up the page appropriately. Cheers! Andrew From loredo at astro.cornell.edu Tue Mar 7 12:53:03 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 7 Mar 2006 12:53:03 -0500 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy In-Reply-To: References: Message-ID: <1141753983.440dc87f931e1@astrosun2.astro.cornell.edu> Robert Kern wrote: > I'm using OS X 10.4, although I think I build with ATLAS. I think you > should at least have flapack. Can you successfully call any of the functions > from scipy.linalg? What versions of numpy/scipy are you using? Can you try "from > scipy.linalg import flapack"? 
What does > > In [3]: from numpy import linalg as n_linalg > > In [4]: from scipy import linalg as s_linalg > > In [5]: n_linalg is s_linalg > Out[5]: False > > give you? The following is with OS 10.3.9, MacPython 2.4.1, Numpy 0.9.5, and Scipy 0.4.6: In [1]: from numpy import linalg as n_linalg In [2]: from scipy import linalg as s_linalg In [3]: n_linalg is s_linalg Out[3]: True In [4]: from scipy.linalg import flapack In [5]: scipy.linalg.flapack.dgetrf._cpointer Out[6]: Huh? So I tried: In [10]: from scipy import linalg In [11]: linalg.flapack.dgetrf._cpointer Out[11]: So I quit IPython, started it afresh, and: In [1]: from scipy import linalg In [2]: linalg.flapack.dgetrf._cpointer AttributeError: 'module' object has no attribute 'flapack' In [3]: from scipy.linalg import flapack In [4]: linalg.flapack.dgetrf._cpointer AttributeError: 'module' object has no attribute 'flapack' In [5]: flapack.dgetrf._cpointer Out[5]: In [9]: dir(linalg) Out[9]: ['Heigenvalues', 'Heigenvectors', 'LinAlgError', 'ScipyTest', '__builtins__', '__doc__', '__file__', '__name__', '__path__', 'cholesky', 'cholesky_decomposition', 'det', 'determinant', 'eig', 'eigenvalues', 'eigenvectors', 'eigh', 'eigvals', 'eigvalsh', 'generalized_inverse', 'inv', 'inverse', 'lapack_lite', 'linalg', 'linear_least_squares', 'lstsq', 'pinv', 'singular_value_decomposition', 'solve', 'solve_linear_equations', 'svd', 'test'] No "flapack" there---should there be? Is some import magic happening that I don't understand? Is the presence of "lapack_lite" a hint of something amiss? 
-Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Tue Mar 7 13:04:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 07 Mar 2006 12:04:29 -0600 Subject: [SciPy-dev] Extensions linking to libraries used by numpy/scipy In-Reply-To: <1141753983.440dc87f931e1@astrosun2.astro.cornell.edu> References: <1141753983.440dc87f931e1@astrosun2.astro.cornell.edu> Message-ID: <440DCB2D.7090603@gmail.com> Tom Loredo wrote: > Robert Kern wrote: > > >>I'm using OS X 10.4, although I think I build with ATLAS. I think you >>should at least have flapack. Can you successfully call any of the functions >>from scipy.linalg? What versions of numpy/scipy are you using? Can you try "from >>scipy.linalg import flapack"? What does >> >>In [3]: from numpy import linalg as n_linalg >> >>In [4]: from scipy import linalg as s_linalg >> >>In [5]: n_linalg is s_linalg >>Out[5]: False >> >>give you? > > The following is with OS 10.3.9, MacPython 2.4.1, Numpy 0.9.5, > and Scipy 0.4.6: > > In [1]: from numpy import linalg as n_linalg > > In [2]: from scipy import linalg as s_linalg > > In [3]: n_linalg is s_linalg > Out[3]: True > > In [4]: from scipy.linalg import flapack > > In [5]: scipy.linalg.flapack.dgetrf._cpointer > Out[6]: http://projects.scipy.org/scipy/scipy/changeset/1617 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From jonathan.taylor at stanford.edu Tue Mar 7 16:22:24 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Tue, 07 Mar 2006 13:22:24 -0800 Subject: [SciPy-dev] minor fix to ndimage setup.py Message-ID: <440DF990.2010903@stanford.edu> In building scipy.sandbox.nd_image I noticed that simply uncommenting the "nd_image" line in sandbox/setup.py doesn't create a proper installation because there is a package_path='Lib' missing in sandbox/nd_image/setup.py. Embarrassingly simple patch attached. -- Jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ndimage.patch URL: From schofield at ftw.at Tue Mar 7 16:42:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 7 Mar 2006 22:42:01 +0100 Subject: [SciPy-dev] minor fix to ndimage setup.py In-Reply-To: <440DF990.2010903@stanford.edu> References: <440DF990.2010903@stanford.edu> Message-ID: On 07/03/2006, at 10:22 PM, Jonathan Taylor wrote: > In building > > scipy.sandbox.nd_image > > I noticed that simply uncommenting the "nd_image" line in sandbox/ > setup.py doesn't create a proper installation because there is a > package_path='Lib' missing in sandbox/nd_image/setup.py. > > Embarrassingly simple patch attached. > Thanks :) I've applied it in SVN. 
-- Ed From jesper.friis at material.ntnu.no Wed Mar 8 03:31:50 2006 From: jesper.friis at material.ntnu.no (Jesper Friis) Date: Wed, 08 Mar 2006 09:31:50 +0100 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems Message-ID: <440E9676.1080009@material.ntnu.no> Andrew Straw wrote: > And now, upon reading the comments in the code of that page, it seems > you've uncovered what may be a bug in the VODE wrapper. Would you say it > is? If so, we should report the bug, hopefully fix it either ourselves > or have a developer do it, and fix up the page appropriately. > > Cheers! > Andrew I have to admit that I just started last week to use python instead of Fortran, so I think that you are in a better position to say if it should be considered a bug or not that the nrowpd argument is missing in the python interface. Searching through the function integrate/odepack/vode.f revealed that the Jacobian function is only called in two places. For full systems it is called with nrowpd=neq while for banded systems it is called with nrowpd=ml*2+mu+1. So, as long as these values for nrowpd are provided in the documentation there should be no problems using the python interface. Note however that the lower and upper bandwidths, ml and mu are interchanged compared to what I wrote in the comment in the example script. The reason for this turned out to be another little miss in integrate/ode.py, which is corrected with the patch below. It might also be interesting (at least for me) to add a way to obtain information about the solution, like the number of steps that was needed to reach the solution. What is the best way to do this? Maybe one could add a method called infodict() that returns a dictionary containing all optional output from the solver? In the end of the ode module there are two simple tests for non-banded systems. Maybe they should be added to the Cookbook together with my banded example. What do you think? 
Regards /Jesper
--------------------------------------------------------------
--- scipy-0.4.6/Lib/integrate/ode.py.org	2006-03-07 23:23:17.000000000 +0100
+++ scipy-0.4.6/Lib/integrate/ode.py	2006-03-07 23:23:33.000000000 +0100
@@ -276,8 +276,8 @@
         self.with_jacobian = with_jacobian
         self.rtol = rtol
         self.atol = atol
-        self.mu = lband
-        self.ml = uband
+        self.mu = uband
+        self.ml = lband
         self.order = order
         self.nsteps = nsteps

From nwagner at mecha.uni-stuttgart.de Wed Mar 8 04:44:55 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Mar 2006 10:44:55 +0100 Subject: [SciPy-dev] Trouble when installing scipy via latest svn Message-ID: <440EA797.9060209@mecha.uni-stuttgart.de>
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC'
compile options: '-DATLAS_INFO="\"3.6.0\"" -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c'
gcc: Lib/linalg/atlas_version.c
/usr/bin/g77 -shared build/temp.linux-i686-2.4/Lib/linalg/atlas_version.o -L/usr/local/lib/atlas -Lbuild/temp.linux-i686-2.4 -llapack -lf77blas -lcblas -latlas -lg2c -o build/lib.linux-i686-2.4/scipy/linalg/atlas_version.so
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: BFD 2.15.94.0.2.2 20041220 (SuSE Linux) internal error, aborting at ../../bfd/elf32-i386.c line 3127 in elf_i386_finish_dynamic_symbol
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: Please report this bug.
collect2: ld returned 1 exit status
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: BFD 2.15.94.0.2.2 20041220 (SuSE Linux) internal error, aborting at ../../bfd/elf32-i386.c line 3127 in elf_i386_finish_dynamic_symbol
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: Please report this bug.
collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -shared build/temp.linux-i686-2.4/Lib/linalg/atlas_version.o -L/usr/local/lib/atlas -Lbuild/temp.linux-i686-2.4 -llapack -lf77blas -lcblas -latlas -lg2c -o build/lib.linux-i686-2.4/scipy/linalg/atlas_version.so" failed with exit status 1 Any pointer ? Nils From ndbecker2 at gmail.com Wed Mar 8 12:51:23 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 08 Mar 2006 12:51:23 -0500 Subject: [SciPy-dev] netcdf supported? Message-ID: Does numpy-0.9.5/scipy-0.6.4 support netcdf? From jswhit at fastmail.fm Wed Mar 8 13:30:41 2006 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Wed, 08 Mar 2006 11:30:41 -0700 Subject: [SciPy-dev] netcdf supported? In-Reply-To: References: Message-ID: <440F22D1.9010800@fastmail.fm> Neal Becker wrote: >Does numpy-0.9.5/scipy-0.6.4 support netcdf? > > > > Neal: No, but I've got a netcdf module that supports numpy http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/netCDF4.html -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From stephen.walton at csun.edu Wed Mar 8 14:15:39 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 08 Mar 2006 11:15:39 -0800 Subject: [SciPy-dev] Trouble when installing scipy via latest svn In-Reply-To: <440EA797.9060209@mecha.uni-stuttgart.de> References: <440EA797.9060209@mecha.uni-stuttgart.de> Message-ID: <440F2D5B.8000606@csun.edu> Nils Wagner wrote: >/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: >Please report this bug. > > > Well, offhand it looks like this is a bug in the ld command which should be reported to SuSE. 
From nwagner at mecha.uni-stuttgart.de Wed Mar 8 14:30:34 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Mar 2006 20:30:34 +0100 Subject: [SciPy-dev] Trouble when installing scipy via latest svn In-Reply-To: <440F2D5B.8000606@csun.edu> References: <440EA797.9060209@mecha.uni-stuttgart.de> <440F2D5B.8000606@csun.edu> Message-ID: On Wed, 08 Mar 2006 11:15:39 -0800 Stephen Walton wrote: > Nils Wagner wrote: > >>/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: >>Please report this bug. >> >> >> > Well, offhand it looks like this is a bug in the ld >command which should > be reported to SuSE. > But this message is new. I install latest numpy/scipy from scratch nearly every day. Maybe it was introduced by some recent changes, e.g. U numpy/numpy/distutils/misc_util.py U numpy/numpy/distutils/ccompiler.py Can someone reproduce the previously posted error message on a 32-bit system running SuSE 9.3? Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From ndbecker2 at gmail.com Wed Mar 8 16:54:10 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 08 Mar 2006 16:54:10 -0500 Subject: [SciPy-dev] netcdf supported? References: <440F22D1.9010800@fastmail.fm> Message-ID: Jeff Whitaker wrote: > Neal Becker wrote: > >>Does numpy-0.9.5/scipy-0.6.4 support netcdf? >> >> >> >> > Neal: No, but I've got a netcdf module that supports numpy > > http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/netCDF4.html > > -Jeff > Thanks, Jeff. I'm having some trouble installing this, though.
I built/installed the latest netcdf4, which is netcdf-4.0-alpha13, and then I get this, any ideas?:
building 'netCDF4' extension
compiling C sources
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC'
compile options: '-I/usr/include -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c'
gcc: netCDF4.c
netCDF4.c: In function '__pyx_f_7netCDF4__get_grps':
netCDF4.c:237: warning: implicit declaration of function 'nc_inq_grps'
netCDF4.c:302: warning: implicit declaration of function 'nc_inq_grpname'
netCDF4.c:347: warning: label '__pyx_L6' defined but not used
netCDF4.c:345: warning: label '__pyx_L5' defined but not used
netCDF4.c: In function '__pyx_f_7netCDF4__get_dims':
netCDF4.c:447: warning: implicit declaration of function 'nc_inq_dimids'
netCDF4.c:535: warning: label '__pyx_L9' defined but not used
netCDF4.c:533: warning: label '__pyx_L8' defined but not used
netCDF4.c:479: warning: label '__pyx_L7' defined but not used
netCDF4.c:477: warning: label '__pyx_L6' defined but not used
netCDF4.c: In function '__pyx_f_7netCDF4__get_vars':
netCDF4.c:699: warning: implicit declaration of function 'nc_inq_varids'
netCDF4.c:803: warning: implicit declaration of function 'nc_inq_user_type'
netCDF4.c:822: error: 'NC_VLEN' undeclared (first use in this function)
netCDF4.c:822: error: (Each undeclared identifier is reported only once
netCDF4.c:822: error: for each function it appears in.)
[lots more...]
From cookedm at physics.mcmaster.ca Wed Mar 8 17:46:15 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Wed, 08 Mar 2006 17:46:15 -0500 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: <440E9676.1080009@material.ntnu.no> (Jesper Friis's message of "Wed, 08 Mar 2006 09:31:50 +0100") References: <440E9676.1080009@material.ntnu.no> Message-ID: Jesper Friis writes: > Andrew Straw wrote: >> And now, upon reading the comments in the code of that page, it seems >> you've uncovered what may be a bug in the VODE wrapper. Would you say it >> is? If so, we should report the bug, hopefully fix it either ourselves >> or have a developer do it, and fix up the page appropriately. >> >> Cheers! >> Andrew > > > I have to admit that I just started last week to use > python instead of Fortran, so I think that you are in a better > position to say if it should be considered a bug or not that the > nrowpd argument is missing in the python interface. > > Searching through the function integrate/odepack/vode.f revealed that > the Jacobian function is only called in two places. For full systems > it is called with nrowpd=neq while for banded systems it is called > with nrowpd=ml*2+mu+1. So, as long as these values for nrowpd are > provided in the documentation there should be no problems using the > python interface. > > Note however that the lower and upper bandwidths, ml and mu are > interchanged compared to what I wrote in the comment in the example > script. The reason for this turned out to be another little miss in > integrate/ode.py, which is corrected with the patch below. Done. > It might also be interesting (at least for me) to add a way to obtain > information about the solution, like the number of steps that was > needed to reach the solution. What is the best way to do this? Maybe > one could add a method called infodict() that returns a dictionary > containing all optional output from the solver? Come up with a good solution and a patch :-) > In the end of the ode module there are two simple tests for non-banded > systems. 
Maybe they should be added to the Cookbook together with my > banded example. What do you think? Works for me. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From russel at appliedminds.net Wed Mar 8 22:27:13 2006 From: russel at appliedminds.net (Russel) Date: Wed, 8 Mar 2006 19:27:13 -0800 Subject: [SciPy-dev] Patch for memory leak in optimize.leastsq References: <6275D2EA-214D-40F3-B67E-1A78AFA24581@appliedminds.com> Message-ID: <1CCE77AE-21D5-477C-A0AB-19F3D54CBE3C@appliedminds.net> The lmder call had 3 duplicated lines that leaked m floating-point values on every call. Here is a patch. I added a bug report before I really looked at the code, sorry to clutter the database. http://projects.scipy.org/scipy/scipy/ticket/35 I have attached the same patch. Russel -------------- next part -------------- A non-text attachment was scrubbed... Name: lmder_leak_patch Type: application/octet-stream Size: 731 bytes Desc: not available URL: -------------- next part -------------- From mmetz at astro.uni-bonn.de Thu Mar 9 06:06:03 2006 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Thu, 09 Mar 2006 12:06:03 +0100 Subject: [SciPy-dev] Numeric In-Reply-To: <43F9BFD6.1000009@astro.uni-bonn.de> References: <43F9BFD6.1000009@astro.uni-bonn.de> Message-ID: <44100C1B.6060204@astro.uni-bonn.de> Hi, I figured out where the bug is, see attached patch. Travis, could you please fix the bug !!! Manuel Manuel Metz wrote: > Hi, > I know that development of Numeric has ceased and we should switch to > numpy. Nevertheless there will be some code around which still uses > Numeric, maybe for some years. Therefore: > > I found a memory leak in Numeric 24.2. I noticed it on Windows and Linux > (Debian). The following code consumes lots of memory which it should not > do !!!
>
>
> from Numeric import *
> for i in xrange(1000000):
>     a = array( [1,2,3] )
>     b = a.tolist() # <- here seems to be the memory leak
>
>
> Could anyone fix that !? PLEASE !
>
> Manuel
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
-------------- next part -------------- A non-text attachment was scrubbed... Name: arrayobject.patch Type: text/x-patch Size: 92 bytes Desc: not available URL: From mmetz at astro.uni-bonn.de Thu Mar 9 06:13:52 2006 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Thu, 09 Mar 2006 12:13:52 +0100 Subject: [SciPy-dev] Numeric In-Reply-To: <44100C1B.6060204@astro.uni-bonn.de> References: <43F9BFD6.1000009@astro.uni-bonn.de> <44100C1B.6060204@astro.uni-bonn.de> Message-ID: <44100DF0.8060407@astro.uni-bonn.de> Hm ... patch is for Src/arrayobject.c ;-) ... -------------- next part -------------- A non-text attachment was scrubbed... Name: arrayobject.patch Type: text/x-patch Size: 251 bytes Desc: not available URL: From bgoli at sun.ac.za Thu Mar 9 09:07:07 2006 From: bgoli at sun.ac.za (Brett Olivier) Date: Thu, 9 Mar 2006 16:07:07 +0200 Subject: [SciPy-dev] sandbox/gplt import error Message-ID: <200603091607.07672.bgoli@sun.ac.za> Hi I have some legacy code that uses the scipy/gnuplot interface and found two "scimath" import errors when trying to use sandbox/gplt. The workaround I use (scipy "0.4.7.1657" and numpy "0.9.6.2206"):

-- interface.py --
new line 4 (old "from numpy.scimath import *")
from numpy.lib.scimath import *

-- new_plot.py --
new line 3 (old "from numpy.scimath import *")
from numpy.lib.scimath import *

Thanks in advance Brett -- Brett G.
Olivier Triple-J Group for Molecular Cell Physiology Stellenbosch University From jesper.friis at material.ntnu.no Thu Mar 9 09:10:12 2006 From: jesper.friis at material.ntnu.no (Jesper Friis) Date: Thu, 09 Mar 2006 15:10:12 +0100 Subject: [SciPy-dev] Test case for integrate/ode.py: banded systems In-Reply-To: References: <440E9676.1080009@material.ntnu.no> Message-ID: <44103744.3040504@material.ntnu.no> David M. Cooke wrote: > Jesper Friis writes: > >>It might also be interesting (at least for me) to add a way to obtain >>information about the solution, like the number of steps that was >>needed to reach the solution. What is the best way to do this? Maybe >>one could add a method called infodict() that returns a dictionary >>containing all optional output from the solver? > > Come up with a good solution and a patch :-) > OK, here is a patch which implements an infodict() method. In the header of the module I have also added a description of the individual keys in the returned dict as well as the size of matrix that the Jacobian function is expected to return. However, typing >>>help(ode) does not show this documentation, so I am not sure whether it is the correct place to have added it. Regards /Jesper -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy-0.4.6.patch.3 URL: From nwagner at mecha.uni-stuttgart.de Thu Mar 9 10:06:47 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 09 Mar 2006 16:06:47 +0100 Subject: [SciPy-dev] Trouble when installing scipy via latest svn In-Reply-To: <440F2D5B.8000606@csun.edu> References: <440EA797.9060209@mecha.uni-stuttgart.de> <440F2D5B.8000606@csun.edu> Message-ID: <44104487.9070909@mecha.uni-stuttgart.de> Stephen Walton wrote: > Nils Wagner wrote: > > >> /usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: >> Please report this bug.
>> >> >> >> > Well, offhand it looks like this is a bug in the ld command which should > be reported to SuSE. > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > It mysteriously vanished after a new svn update. Nils From tim.leslie at gmail.com Fri Mar 10 01:23:47 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 10 Mar 2006 17:23:47 +1100 Subject: [SciPy-dev] [integrate|interpolate|optimize]/common_routines.py Message-ID: There is a common_routines.py file in 3 packages. Looking at the diffs between the 3 of them it seems to me that they all originated in a single file at one point which was moved into each package at some point and each has evolved on its own since then. The integrate package doesn't use common_routines.py at all, so it seems it could at least be removed from there. myasarray() is only used in interpolate/fitpack.py check_func() is only used in optimize/minpack.py Does someone "more official" than me want to look into this? Should I open a ticket+patch with a possible solution? Cheers, Timl -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.taylor at stanford.edu Fri Mar 10 04:19:23 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 10 Mar 2006 01:19:23 -0800 Subject: [SciPy-dev] ndimage & numpy Message-ID: <4411449B.6030207@stanford.edu> Hi all, I spent some time today trying to get scipy.sandbox.ndimage working with numpy instead of it requiring numarray (and numpy as well)..... I have found some issues with types in the ndimage extension code. For the time being, I have fixed these shamefully by copying things as necessary to Float64 and converting back later -- these are the two functions in the attached file numpyfix.py.
If anyone is interested, I have put a patch here http://www-stat.stanford.edu/~jtaylo/ndimage.patch and attached a simple file numpyfix.py which should be placed in the scipy/Lib/sandbox/nd_image/Lib directory. The patch passes all but 11 unittests in scipy.sandbox.ndimage.test at least by my reckoning, and 10 of these are about ndimage.label (so effectively only one function is failing). The other failure concerns accuracy for gaussian_filter. In theory, I would imagine that the fix in numpyfix.py can be rewritten so that no actual copying takes place, but I haven't gotten that far yet. BUT, even if I try to call .astype, copying to something that the ndimage extension code seems to accept, I get the same errors. This seems to mean that for an input array input->descr->type_num is somehow not being set properly as I am calling it. -- Jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: numpyfix.py Type: text/x-python Size: 391 bytes Desc: not available URL: From oliphant.travis at ieee.org Fri Mar 10 04:42:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 02:42:52 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** [integrate|interpolate|optimize]/common_routines.py In-Reply-To: References: Message-ID: <44114A1C.6050606@ieee.org> Tim Leslie wrote: > There is a common_routines.py file in 3 packages. 
Looking at the diffs > between the 3 of them it seems to me that they all originated in a > single file at one point which was moved into each package at some > point and each has evolved on its own since then. > > The integrate package doesn't use common_routines.py at all, so it > seems it could at least be removed from there. > > myasarray() is only used in interpolate/fitpack.py > check_func() is only used in optimize/minpack.py > > Does someone "more official" than me want to look into this? Should I > open a ticket+patch with a possible solution? Thanks for looking over this. I've committed a fix that removes the common_routines.py files and moves the functions to the files that use them (if necessary). The myasarray function is replaced with atleast_1d. -Travis From tim.leslie at gmail.com Fri Mar 10 06:37:18 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 10 Mar 2006 22:37:18 +1100 Subject: [SciPy-dev] ***[Possible UCE]*** [integrate|interpolate|optimize]/common_routines.py In-Reply-To: <44114A1C.6050606@ieee.org> References: <44114A1C.6050606@ieee.org> Message-ID: On 3/10/06, Travis Oliphant wrote: > > > I've committed a fix that removes the common_routines.py files and moves > the functions to the files that use them (if necessary). The myasarray > function is replaced with atleast_1d. Looks good. Could someone fill me in on what the "Possible UCE" in the subject is about? Cheers, Tim -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant.travis at ieee.org Fri Mar 10 07:24:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 05:24:28 -0700 Subject: [SciPy-dev] ndimage & numpy In-Reply-To: <4411449B.6030207@stanford.edu> References: <4411449B.6030207@stanford.edu> Message-ID: <44116FFC.6040001@ieee.org> Jonathan Taylor wrote: > Hi all, > > I spent some time today trying to get > > scipy.sandbox.ndimage > > working with numpy instead of it requiring numarray (and numpy as > well)..... > Thanks for your work. I've taken your work and removed the typecasting (the image processing works with a lot of types). The type problem was due to the fact that numpy.int32 did not correspond exactly with the c-type PyArray_INT32 (which nd_image is checking against). Thus, numpy needed fixing to accommodate this use case. The fix is in SVN (revision 2214). I checked in the bulk of your changes to SciPy. I'm getting 6 errors for the test.py program now. This is very great news. As soon as we squash those errors we will move nd_image up to mainstream scipy. -Travis From oliphant.travis at ieee.org Fri Mar 10 08:12:19 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 06:12:19 -0700 Subject: [SciPy-dev] [integrate|interpolate|optimize]/common_routines.py In-Reply-To: References: <44114A1C.6050606@ieee.org> Message-ID: <44117B33.6020805@ieee.org> Tim Leslie wrote: > On 3/10/06, *Travis Oliphant* > wrote: > > > I've committed a fix that removes the common_routines.py files and > moves > the functions to the files that use them (if necessary). The > myasarray > function is replaced with atleast_1d. > > > > Looks good. Could someone fill me in on what the "Possible UCE" in the > subject is about? > Sorry, it's the IEEE spam-flagging mechanism. It tags my email that it thinks are spam with this in the header. I'll try to remember to remove it when responding...
-Travis From oliphant.travis at ieee.org Fri Mar 10 08:15:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 06:15:04 -0700 Subject: [SciPy-dev] ndimage & numpy In-Reply-To: <44116FFC.6040001@ieee.org> References: <4411449B.6030207@stanford.edu> <44116FFC.6040001@ieee.org> Message-ID: <44117BD8.9060304@ieee.org> Travis Oliphant wrote: > Jonathan Taylor wrote: > >> Hi all, >> >> I spent some time today trying to get >> >> scipy.sandbox.ndimage >> >> working with numpy instead of it requiring numarray (and numpy as >> well)..... >> >> > > Thanks for your work. > > I've taken your work and removed the typecasting (the image processing > works with a lot of types). The type problem was due to the fact that > numpy.int32 did not correspond exactly with the c-type PyArray_INT32 > (which nd_image is checking against). > > Thus, numpy needed fixing to accommodate this use case. > > The fix is in SVN (revision 2214) > > I checked in the bulk of your changes to SciPy. I'm getting 6 errors > for the test.py program now. > > This is very great news. As soon as we squash those errors we will move > nd_image up to mainstream scipy. > And done. The remaining errors were due to using complex64 instead of complex128 and the small gaussian filter error is due to the fact that .sum() in numarray defaults to .sum(dtype='d') even for float32 arrays. All tests of nd_image pass for me and I've moved nd_image as a scipy package. This is a great day indeed. I've wanted this for some time..... -Travis From tim.leslie at gmail.com Fri Mar 10 08:34:12 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 00:34:12 +1100 Subject: [SciPy-dev] [integrate|interpolate|optimize]/common_routines.py In-Reply-To: <44117B33.6020805@ieee.org> References: <44114A1C.6050606@ieee.org> <44117B33.6020805@ieee.org> Message-ID: On 3/11/06, Travis Oliphant wrote: > > Tim Leslie wrote: > > Looks good.
Could someone fill me in on what the "Possible UCE" in the > > subject is about? > > > Sorry, it's the IEEE spam-flagging mechanism. It tags my emails that it > thinks are spam with this in the header. I'll try to remember to > remove it when responding... Cool. While I've got your ear, I've been reading through a lot of the numpy and scipy code the past week, doing trivial cleanups and such, and one thing that I keep seeing is "from numpy import *". Is this just a result of laziness (I understand :-) or is there some reason we want the whole numpy namespace in some places? I'm happy to clean these up and only import the stuff that's needed but I thought I should check first that this isn't going to break something on a fundamental level. Cheers, Tim -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Fri Mar 10 08:53:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 06:53:45 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: [integrate|interpolate|optimize]/common_routines.py In-Reply-To: References: <44114A1C.6050606@ieee.org> <44117B33.6020805@ieee.org> Message-ID: <441184E9.1010707@ieee.org> Tim Leslie wrote: > > > On 3/11/06, *Travis Oliphant* > wrote: > > Tim Leslie wrote: > > Looks good. Could someone fill me in on what the "Possible UCE" > in the > > subject is about? > > > Sorry, it's the IEEE spam-flagging mechanism. It tags my emails > that it > thinks are spam with this in the header. I'll try to remember to > remove it when responding... > > > > Cool. While I've got your ear, I've been reading through a lot of the > numpy and scipy code the past week, doing trivial cleanups and > such, and one thing that I keep seeing is "from numpy import *".
Is > this just a result of laziness (I understand :-) or is there some > reason we want the whole numpy namespace in some places? Laziness. The from numpy import * stuff is great for interactive work, but we should really control our namespaces better in numpy (and scipy). > > I'm happy to clean these up and only import the stuff that's needed > but I thought I should check first that this isn't going to break > something on a fundamental level. Please fix it. As long as all the names are resolved I don't see how it would break anything. -Travis From tim.leslie at gmail.com Fri Mar 10 09:33:38 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 01:33:38 +1100 Subject: [SciPy-dev] nd_image compile error Message-ID: In the latest svn the function prototype for int NI_ExtendLine(double*, int, int, int, NI_ExtendMode, double); at Lib/nd_image/Src/ni_support.h:181 conflicts with the definition of int NI_ExtendLine(double *line, maybelong length, maybelong size1, maybelong size2, NI_ExtendMode mode, double constant_value) at Lib/nd_image/Src/ni_support.c:163 and causes the build to fail. Changing the ints in the header to maybelongs seems to fix it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.u.r.e.l.i.a.n at gmx.net Fri Mar 10 11:36:49 2006 From: a.u.r.e.l.i.a.n at gmx.net (=?ISO-8859-1?Q?=22Johannes_L=F6hnert=22?=) Date: Fri, 10 Mar 2006 17:36:49 +0100 (MET) Subject: [SciPy-dev] numpy.random.randint Message-ID: <6915.1142008609@www082.gmx.net> Hi, I just noticed that in function numpy.random.randint, the shape=... keyword was replaced by size=... I find this somewhat counterintuitive, as it denotes the _shape_ of the desired array (as tuple), although it also takes integer values. Also it is an unnecessary break with backwards compatibility (to RandomArray). I would suggest to allow both "shape" & "size", with "shape" having precedence. Johannes -- "Feel free" mit GMX FreeMail!
Monat für Monat 10 FreeSMS inklusive! http://www.gmx.net From robert.kern at gmail.com Fri Mar 10 12:57:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Mar 2006 11:57:11 -0600 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <6915.1142008609@www082.gmx.net> References: <6915.1142008609@www082.gmx.net> Message-ID: <4411BDF7.2020301@gmail.com> Johannes Löhnert wrote: > Hi, > > I just noticed that in function numpy.random.randint, > the shape=... keyword was replaced by size=... > > I find this somewhat counterintuitive, as it denotes > the _shape_ of the desired array (as tuple), > although it also takes integer values. Also it is an > unnecessary break with backwards compatibility (to RandomArray). It was a choice for compatibility with the scipy PRNGs all of which use "size". "shape" was not a good argument name because many of the probability distributions have a "shape" parameter. > I would suggest to allow both "shape" & "size", with "shape" > having precedence. I don't think it's wise to make more ways to do exactly the same thing. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Fri Mar 10 13:17:46 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 10 Mar 2006 13:17:46 -0500 Subject: [SciPy-dev] numpy API help Message-ID: I need some help with numpy-0.9.5. I don't really have any API docs, so maybe that's my problem. #include <Python.h> #include <numpy/arrayobject.h> int main() { intp dims[1]; dims[0] = 2; PyObject* o = PyArray_SimpleNew (1, dims, PyArray_INT); } (gdb) run Starting program: /home/nbecker/Test Program received signal SIGSEGV, Segmentation fault.
0x0000000000400525 in main () at Test.cc:8 I notice that the precompiled output is: int main() { intp dims[1]; dims[0] = 2; PyObject* o = (*(PyObject * (*)(PyTypeObject *, int, intp *, int, intp *, void *, int, int, PyObject *)) PyArray_API[82])(&(*(PyTypeObject *)PyArray_API[1]), 1, dims, PyArray_INT, __null, __null, 0, 0, __null); } From aisaac at american.edu Fri Mar 10 13:26:47 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 10 Mar 2006 13:26:47 -0500 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <4411BDF7.2020301@gmail.com> References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> Message-ID: > Johannes Löhnert wrote: >> I just noticed that in function numpy.random.randint, the >> shape=... keyword was replaced by size=... >> I find this somewhat counterintuitive, as it denotes >> the shape of the desired array (as tuple), >> although it also takes integer values. Also it is an >> unnecessary break with backwards compatibility (to RandomArray). >> I would suggest to allow both "shape" & "size", with >> "shape" having precedence. On Fri, 10 Mar 2006, Robert Kern apparently wrote: > It was a choice for compatibility with the scipy PRNGs all of which use "size". > "shape" was not a good argument name because many of the probability > distributions have a "shape" parameter. > I don't think it's wise to make more ways to do exactly > the same thing. But this is really horrible: >>> import numpy as N >>> x = N.random.standard_normal(size=(3,2)) >>> x.size 6 >>> x.shape (3, 2) >>> x = N.random.standard_normal(shape=(3,2)) Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: 'shape' is an invalid keyword argument for this function Since numpy is less than version 1, this really should be changed for consistency, and allowing both for a while with a deprecation warning on 'size' is reasonable.
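The size/shape collision in the session above can be reproduced in a few lines. A short sketch (editorial illustration against a present-day NumPy, not part of the original mail; the exact error text varies by version):

```python
import numpy as np

# size= requests the output array's shape, as a tuple (or an int for 1-d)...
x = np.random.standard_normal(size=(3, 2))
assert x.shape == (3, 2)
assert x.size == 6  # ...while the .size attribute is the total element count

# "shape" is already taken in probability parlance: gamma's first
# argument is its shape parameter k, independent of the output size.
y = np.random.gamma(2.0, size=(3, 2))
assert y.shape == (3, 2)
```

This is the clash Robert points to: with a keyword named shape, a call like gamma(shape=2.0, shape=(3, 2)) would be impossible to express.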
I do not think any of the numpy distributions have a competing 'shape' parameter, but if so that just makes the situation *worse*. A user's opinion, Alan Isaac From robert.kern at gmail.com Fri Mar 10 13:45:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Mar 2006 12:45:26 -0600 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> Message-ID: <4411C946.3070908@gmail.com> Alan G Isaac wrote: >>Johannes L?hnert wrote: >> >>>I just noticed that in function numpy.random.randint, the >>>shape=... keyword was replaced by size=... > > >>>I find this somewhat counterintuitive, as it denotes >>>the shape of the desired array (as tuple), >>>although it also takes integer values. Also it is an >>>unnecessary break with backwards compatibility (to RandomArray). > > >>>I would suggest to allow both "shape" & "size", with >>>"shape" having precedence. > > > On Fri, 10 Mar 2006, Robert Kern apparently wrote: > >>It was a choice for compatibility with the scipy PRNGs all of which use "size". >>"shape" was not a good argument name because many of the probability >>distributions have a "shape" parameter. > > >>I don't think it's wise to make more ways to do exactly >>the same thing. > > But this is really horrible: > > >>> import numpy as N > >>> x = N.random.standard_normal(size=(3,2)) > >>> x.size > 6 > >>> x.shape > (3, 2) > >>> x = N.random.standard_normal(shape=(3,2)) > Traceback (most recent call last): > File "", line 1, in ? > TypeError: 'shape' is an invalid keyword argument for this function > > Since numpy is less than version 1, this really should be > changed for consistency, and allowing both for a while with > a deprecation warning on 'size' is reasonable. The keyword in the functions does not necessarily match with the attribute names on the objects. I don't find this particularly shocking. Or bad. But that's just my tastes. 
So I'm -1 on changing numpy.random and all of scipy.stats. > I do not think any of the numpy distributions have a competing 'shape' > parameter, but if so that just makes the situation *worse*. I also avoided the "shape" terminology for the "shape" parameters, too, to avoid conflation. However, a lot of the probability distributions do have a "shape" parameter. lognormal, gamma, beta, pareto, and on and on and on. Anything that isn't a location or a scale parameter is called a shape parameter in probability theory. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Fernando.Perez at colorado.edu Fri Mar 10 13:50:25 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 10 Mar 2006 11:50:25 -0700 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <4411C946.3070908@gmail.com> References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> <4411C946.3070908@gmail.com> Message-ID: <4411CA71.5070706@colorado.edu> Robert Kern wrote: > I also avoided the "shape" terminology for the "shape" parameters, too, to avoid > conflation. However, a lot of the probability distributions do have a "shape" > parameter. lognormal, gamma, beta, pareto, and on and on and on. Anything that > isn't a location or a scale parameter is called a shape parameter in probability > theory. Mmh, 'dshape' as a compromise solution, for the distribution shape parameter? I understand that 'shape' may be the time-honored convention, but given that here there is a clash between array.shape and the distribution one, and that arrays 'come first' (being the most basic object in all of numpy/scipy), perhaps a compromise is acceptable? 
It seems easier to tell users of distributions "the parameter typically named 'shape' in distribution theory is called dshape in our library, to avoid conflicts with the 'shape' attribute that all array objects have" than to constantly have to sort out the confusion of two uses of the same word. At least it seems so to me. But I have no strong feelings on this topic, so take this as a simple suggestion, and feel free to ignore it. Cheers, f From aisaac at american.edu Fri Mar 10 14:05:02 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 10 Mar 2006 14:05:02 -0500 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <4411C946.3070908@gmail.com> References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> <4411C946.3070908@gmail.com> Message-ID: On Fri, 10 Mar 2006, Robert Kern apparently wrote: > The keyword in the functions does not necessarily match > with the attribute names on the objects. I don't find this > particularly shocking. Or bad. But that's just my tastes. > So I'm -1 on changing numpy.random and all of scipy.stats. I'll make a last comment and then let this go. The point you make above seems fine as an abstract point. But I think in this concrete case, where *all* we are doing is generating arrays with core attributes that are well known, the current size keyword and lack of shape keyword are quite "surprising" and bad for that reason. I.e., will the current design produce questions from and confusions for new users? Cheers, Alan From oliphant.travis at ieee.org Fri Mar 10 14:17:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 10 Mar 2006 12:17:32 -0700 Subject: [SciPy-dev] numpy API help In-Reply-To: References: Message-ID: <4411D0CC.2050907@ieee.org> Neal Becker wrote: > I need some help with numpy-0.9.5. I don't really have any API docs, so > maybe that's my problem.
> > #include <Python.h> > #include <numpy/arrayobject.h> > > > int main() { > intp dims[1]; > dims[0] = 2; > PyObject* o = PyArray_SimpleNew (1, dims, PyArray_INT); > } > (gdb) run > Starting program: /home/nbecker/Test > > Program received signal SIGSEGV, Segmentation fault. > 0x0000000000400525 in main () at Test.cc:8 > > I notice that the precompiled output is: > int main() { > intp dims[1]; > dims[0] = 2; > PyObject* o = (*(PyObject * (*)(PyTypeObject *, int, intp *, int, intp *, > void *, int, int, PyObject *)) PyArray_API[82])(&(*(PyTypeObject > *)PyArray_API[1]), 1, dims, PyArray_INT, __null, __null, 0, 0, __null); > } > > Did you forget to use import_array? That's the command that fills in PyArray_API so that PyArray_API[82] makes sense... -Travis From robert.kern at gmail.com Fri Mar 10 19:25:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Mar 2006 18:25:53 -0600 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> <4411C946.3070908@gmail.com> Message-ID: <44121911.8060207@gmail.com> Alan G Isaac wrote: > On Fri, 10 Mar 2006, Robert Kern apparently wrote: > >>The keyword in the functions does not necessarily match >>with the attribute names on the objects. I don't find this >>particularly shocking. Or bad. But that's just my tastes. > >>So I'm -1 on changing numpy.random and all of scipy.stats. > > I'll make a last comment and then let this go. > The point you make above seems fine as an abstract point. > But I think in this concrete case, where *all* we are doing > is generating arrays with core attributes that are well > known, the current size keyword and lack of shape keyword > are quite "surprising" and bad for that reason. > > I.e., will the current design produce questions from and > confusions for new users? It hasn't for the several years that all of the scipy.stats distribution objects have been using this convention.
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 10 19:29:18 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Mar 2006 18:29:18 -0600 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <4411CA71.5070706@colorado.edu> References: <6915.1142008609@www082.gmx.net><4411BDF7.2020301@gmail.com> <4411C946.3070908@gmail.com> <4411CA71.5070706@colorado.edu> Message-ID: <441219DE.70102@gmail.com> Fernando Perez wrote: > Robert Kern wrote: > >>I also avoided the "shape" terminology for the "shape" parameters, too, to avoid >>conflation. However, a lot of the probability distributions do have a "shape" >>parameter. lognormal, gamma, beta, pareto, and on and on and on. Anything that >>isn't a location or a scale parameter is called a shape parameter in probability >>theory. > > Mmh, 'dshape' as a compromise solution, for the distribution shape parameter? > I understand that 'shape' may be the time-honored convention, but given that > here there is a clash between array.shape and the distribution one, and that > arrays 'come first' (being the most basic object in all of numpy/scipy), > perhaps a compromise is acceptable? It seems easier to tell users of > distributions > > "the parameter typically named 'shape' in distribution theory is called dshape > in our library, to avoid conflicts with the 'shape' attribute that all array > objects have" > > than to constantly have to sort out the confusion of two uses of the same word. > > At least it seems so to me. But I have no strong feelings on this topic, so > take this as a simple suggestion, and feel free to ignore it. I don't use "shape" as a keyword argument at all. I wanted to avoid both of the possible confusions. Actually, Travis O.
wanted to avoid the confusions, and so he wrote the scipy.stats distribution objects using size=, and I just followed his lead. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Fri Mar 10 20:23:42 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 10 Mar 2006 18:23:42 -0700 Subject: [SciPy-dev] numpy.random.randint In-Reply-To: <44121911.8060207@gmail.com> References: <6915.1142008609@www082.gmx.net> <4411BDF7.2020301@gmail.com> <4411C946.3070908@gmail.com> <44121911.8060207@gmail.com> Message-ID: The keyword 'shape' is, IMHO, more descriptive than 'size'. I suppose 'size' has the advantage of Matlab compatibility but since we are talking about dimensions, why not use the keyword 'dims'? Chuck From tim.leslie at gmail.com Sat Mar 11 03:58:13 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 19:58:13 +1100 Subject: [SciPy-dev] nonexistant numpy.fastumath used in scipy Message-ID: The numpy.fastumath module is used in the following places timl at penrose:~/scipy$ rgrep fastumath * | grep -v svn Lib/cluster/tests/vq_test.py:from numpy.fastumath import * Lib/sandbox/ga/parallel_pop.py:from numpy.fastumath import * Lib/sandbox/ga/gene.py:from numpy.fastumath import * Lib/sandbox/ga/scaling.py:from numpy.fastumath import * Lib/sandbox/ga/selection.py:from numpy.fastumath import * but doesn't seem to exist in numpy timl at penrose:~/scipy$ rgrep fastumath ../numpy/* | grep -v svn timl at penrose:~/scipy$ Not sure how to resolve this, but I thought I'd bring it up for someone to look into. Cheers, Timl -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim.leslie at gmail.com Sat Mar 11 04:12:34 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 20:12:34 +1100 Subject: [SciPy-dev] function imresize defined twice in misc/pilutil.py Message-ID: At lines 213 and 240 there are two quite different definitions of "imresize". Does someone want to make a choice as to which to keep? Cheers, Timl -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat Mar 11 04:34:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 11 Mar 2006 03:34:29 -0600 Subject: [SciPy-dev] nonexistant numpy.fastumath used in scipy In-Reply-To: References: Message-ID: <441299A5.6060104@gmail.com> Tim Leslie wrote: > The numpy.fastumath is used in the following places > > timl at penrose:~/scipy$ rgrep fastumath * | grep -v svn > Lib/cluster/tests/vq_test.py:from numpy.fastumath import * > Lib/sandbox/ga/parallel_pop.py:from numpy.fastumath import * > Lib/sandbox/ga/gene.py:from numpy.fastumath import * > Lib/sandbox/ga/scaling.py:from numpy.fastumath import * > Lib/sandbox/ga/selection.py:from numpy.fastumath import * > > but doesn't seem to exist in numpy > > timl at penrose:~/scipy$ rgrep fastumath ../numpy/* | grep -v svn > timl at penrose:~/scipy$ > > Not sure how to resolve this, but I thought I'd bring it up for someone > to look into. Just delete those import lines. They are obsolete. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From schofield at ftw.at Sat Mar 11 04:41:40 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 11 Mar 2006 10:41:40 +0100 Subject: [SciPy-dev] ndimage & numpy In-Reply-To: <44117BD8.9060304@ieee.org> References: <4411449B.6030207@stanford.edu> <44116FFC.6040001@ieee.org> <44117BD8.9060304@ieee.org> Message-ID: <8C2006FF-A3F9-4C0A-99D1-01D3CBAD6AF0@ftw.at> On 10/03/2006, at 2:15 PM, Travis Oliphant wrote: > Travis Oliphant wrote: >> Jonathan Taylor wrote: >> >>> Hi all, >>> >>> I spent some time today trying to get >>> >>> scipy.sandbox.ndimage >>> >>> working with numpy instead of it requiring numarray (and numpy as >>> well)..... >>> >>> >> > And done... Well done, guys! I have a minor suggestion: that we rename the package from 'nd_image' to 'image' for consistency with the other packages, like 'signal', 'optimize' and 'sparse'. There's currently an 'image' package in the sandbox, but that looks much less mature, and should probably be integrated with the existing package when it actually does come into the main tree. 
-- Ed From tim.leslie at gmail.com Sat Mar 11 04:45:00 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 20:45:00 +1100 Subject: [SciPy-dev] nonexistant numpy.fastumath used in scipy In-Reply-To: <441299A5.6060104@gmail.com> References: <441299A5.6060104@gmail.com> Message-ID: On 3/11/06, Robert Kern wrote: > > Tim Leslie wrote: > > The numpy.fastumath is used in the following places > > > > timl at penrose:~/scipy$ rgrep fastumath * | grep -v svn > > Lib/cluster/tests/vq_test.py:from numpy.fastumath import * > > Lib/sandbox/ga/parallel_pop.py:from numpy.fastumath import * > > Lib/sandbox/ga/gene.py:from numpy.fastumath import * > > Lib/sandbox/ga/scaling.py:from numpy.fastumath import * > > Lib/sandbox/ga/selection.py:from numpy.fastumath import * > > > > but doesn't seem to exist in numpy > > > > timl at penrose:~/scipy$ rgrep fastumath ../numpy/* | grep -v svn > > timl at penrose:~/scipy$ > > > > Not sure how to resolve this, but I thought I'd bring it up for someone > > to look into. > > Just delete those import lines. They are obsolete. Patch submitted as ticket #36 Tim -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Sat Mar 11 05:24:23 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 11 Mar 2006 11:24:23 +0100 Subject: [SciPy-dev] nonexistent numpy.fastumath used in scipy In-Reply-To: References: <441299A5.6060104@gmail.com> Message-ID: <26A462EC-C798-46C7-9E94-B635A6FDAAFE@ftw.at> On 11/03/2006, at 10:45 AM, Tim Leslie wrote: > Patch submitted as ticket #36 Thanks! 
I appreciate all the patches you've been submitting recently to help us fix bugs and clean up crusty old code. Would you like SVN write access? If so, you'd have my vote on this ... -- Ed From tim.leslie at gmail.com Sat Mar 11 05:56:22 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 11 Mar 2006 21:56:22 +1100 Subject: [SciPy-dev] nonexistent numpy.fastumath used in scipy In-Reply-To: <26A462EC-C798-46C7-9E94-B635A6FDAAFE@ftw.at> References: <441299A5.6060104@gmail.com> <26A462EC-C798-46C7-9E94-B635A6FDAAFE@ftw.at> Message-ID: On 3/11/06, Ed Schofield wrote: > > > On 11/03/2006, at 10:45 AM, Tim Leslie wrote: > > > Patch submitted as ticket #36 > > Thanks! > > I appreciate all the patches you've been submitting recently to help > us fix bugs and clean up crusty old code. Would you like SVN write > access? If so, you'd have my vote on this ... If you guys are happy for me to commit mostly trivial changes then I'm happy to do it. Of course I'd consult on list before doing anything non-trivial. Cheers, Timl -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Norbert.Nemec.list at gmx.de Sun Mar 12 08:58:19 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Sun, 12 Mar 2006 14:58:19 +0100 Subject: [SciPy-dev] Patches for numpy.linalg and scipy.linalg Message-ID: <441428FB.4080003@gmx.de> Over the past few days I worked a bit on the linalg packages in both numpy and scipy. Attached are patches for both numpy and scipy. Both are a bit interdependent, so I'm submitting all in one cross-posted mail. I don't know about the SVN-write-access policy of numpy/scipy. If it is not kept too restrictive, it might be convenient for everyone to give me write access. 
I cannot promise to find much time working on the code, but linalg sure needs some cleanup and I might continue doing a bit of it. Here are a few explanations about the attached patches: numpy-1-cleanup.diff just a bit of cosmetics to make the other patches cleaner numpy-2-rename-functions.diff numpy.linalg used to contain a number of aliases (like 'inv'=='inverse' and others); this is pointless and more confusing than helpful. Unless there is a really good reason, a library should offer one name only. I chose the short version of each name as the "official" name, because I think all the abbreviations are well-known and easy to understand for anyone handling numerical linear algebra. Currently all the definitions that would be needed for backwards compatibility are deactivated in an "if False:" block. If people think they should be deprecated slowly, this block could easily be re-activated. numpy-3-svd-compute_uv.diff numpy.linalg.svd now also has the compute_uv option that scipy.linalg.svd already had. Default behavior is the same as before. numpy-4-svd-bug-workaround.diff the dgesdd function of the lapack library installed on my system seems to contain a strange bug. Should probably be investigated and fixed. For the moment I just included a workaround. numpy-5-norm-copy-from-scipy.diff copied the "norm" function from scipy.linalg and adjusted internals to work in the new environment. numpy-6-norm-change-default.diff the 'scipy.linalg.norm' function was sub-optimal: for matrices, the Frobenius norm is faster than the "max(svd())" by about an order of magnitude for all the test cases that I came up with. It also is invariant under orthogonal/unitary transformations, which makes it the best candidate to be the default in any case that I could come up with.
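The properties claimed here are easy to check numerically. A small sketch (editorial illustration using today's NumPy spellings, not part of the original patches): the Frobenius norm agrees with the one-line sum-of-squares computation, dominates the spectral norm max(svd(x)), and is unchanged by a unitary transformation:

```python
import numpy as np

x = np.arange(6.0).reshape(3, 2) + 1j   # any matrix, real or complex

# Frobenius norm via sqrt(sum(conjugate(x)*x)):
fro = np.sqrt(np.sum((np.conjugate(x) * x).ravel()).real)

# the old matrix default, the spectral norm max(svd(x)):
spec = np.linalg.svd(x, compute_uv=False).max()

# sqrt of the sum of squared singular values always dominates
# the largest singular value alone, and matches numpy's 'fro' norm.
assert np.isclose(fro, np.linalg.norm(x, 'fro'))
assert spec <= fro

# unitary invariance: multiplying by a unitary Q leaves it unchanged.
q, _ = np.linalg.qr(np.random.randn(3, 3) + 0j)
fro_qx = np.sqrt(np.sum((np.conjugate(q @ x) * (q @ x)).ravel()).real)
assert np.isclose(fro_qx, fro)
```

The speed claim depends on the LAPACK build, but the structural point holds: the Frobenius computation is a single elementwise pass, while the spectral norm requires a full SVD.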
The computation of the Frobenius norm can be done with the same line of code used for the vector-square norm and can even be generalized to arrays of arbitrary rank, always giving a reasonable norm for both real and complex numbers. The most straightforward way to clean up a number of inefficiencies in the definition of "norm" was to introduce a norm with "ord=None" which always calculates sqrt(sum((conjugate(x)*x).ravel())), which is generally the most efficient implementation and does not even need a check of the rank of the array. I also made this choice the default of the "norm" function. I know that this may cause hard-to-find errors in existing code if people explicitly want the "svd"-norm for a matrix and relied on the old default setting. Personally, I don't believe this is a real danger, because I don't believe there are many algorithms that depend on using the svd-norm and break if Frobenius is used. If people are strictly against changing the default behavior, the patch should still be accepted and only the function header changed back. numpy-7-castCopyAndTranspose.diff simplified this function for purely aesthetic reasons: I believe a function that can be written cleanly in two lines should not be extended unnecessarily to 6 lines... numpy-8-dual.diff added 'norm', 'eigh' and 'eigvalsh' to the list of dual functions. scipy-1-eigh-eigvalsh.diff added two new functions to scipy.linalg. All the interfaces were there already... scipy-2-dual-norm.diff added 'norm' to the list of dual functions. scipy-3-norm-change-default.diff changed the norm in the same way as "numpy.linalg.norm" was changed. See comments above. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-1-cleanup.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: numpy-2-rename-functions.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-3-svd-compute_uv.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-4-svd-bug-workaround.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-5-norm-copy-from-scipy.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-6-norm-change-default.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-7-castCopyAndTranspose.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpy-8-dual.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy-1-eigh-eigvalsh.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy-2-dual-norm.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy-3-norm-change-default.diff URL: From Fernando.Perez at colorado.edu Sun Mar 12 19:33:51 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 12 Mar 2006 17:33:51 -0700 Subject: [SciPy-dev] Access to C/C++, typemaps, pyrex... Message-ID: <4414BDEF.70001@colorado.edu> Hi all, there are recurrent questions on the topic of low-level access to numpy, wrapping libraries, etc. Many people have home-brewed SWIG typemaps, the cookbook has some (very nice) pyrex examples, etc. I think that we could use a bit of unification work on this front, to make this particular issue easier to deal with for newcomers, and to reduce duplication of effort. 1. 
SWIG ======= At the last scipy conference, I collected typemaps from a number of people who had them around and all I could find on the net. I grabbed Scott Ransom's, Michel Sanner contributed his, Eric Jones gave us some as well, etc. John Hunter worked on updating Eric's (which had some nice features) to work with plain C (they were originally C++), and Bill Spotz from Sandia NL agreed to do the real work of writing some clean, documented examples of typemap use starting with this codebase. He gave me this code some time ago, and I shamefully just sat on it. I'm finally doing something about it, so today I updated this code to work with numpy instead of Numeric, and all the tests pass. You can grab it from here with an SVN client: http://wavelets.scipy.org/svn/multiresolution/wavelets/trunk/numpy_swig It is clearly documented, and while I'm sure it can use improvements and extensions, I think it would be a good starting point to have distributed /officially/ with numpy. That way others can improve /this/ code, rather than reimplementing numpy typemaps for the millionth time. D. Beazley (the SWIG author) has even indicated that he'd be happy to officially ship numpy typemaps with SWIG if something accepted by the community were sent to him. 2. Pyrex ======== After reading the excellent cookbook examples on pyrex usage, which I recently needed for a project, I decided to complete the code so it could be used 'out of the box'. This simply meant a bit of organization work, updating the code to use the 'modern' cimport statement and .pxd files, and writing a proper setup.py file. The result is here: http://wavelets.scipy.org/svn/multiresolution/wavelets/trunk/numpyx/ I also think that pyrex should be 'officially' supported. In particular, I think that the c_numpy.pxd file I put there, which is just a copy of that in mtrand, should be completed and shipped in the same headers directory as the rest of numpy's .h files. 
This would make it a LOT easier for newcomers to use pyrex for writing numpy
extensions, since all headers would be complete and available with all numpy
installations.

Finally, if anyone is interested, that repository actually contains in the
full checkout

http://wavelets.scipy.org/svn/multiresolution/wavelets/trunk/

examples of how to wrap a more complex library which requires the creation
of C structs from python objects. Getting this right took some time, as not
all the details for this kind of construction are clearly documented
anywhere. I hope it may be useful to others.

3. Wrapup
=========

I think it would be a good idea to ship the numpy_swig example directory
(and perhaps also numpyx, if somewhat expanded) listed above with the
standard distribution, as well as completing the .pxd file to expose all the
numpy C API for pyrex.

I am not claiming to have done any of the hard work in numpyx or numpy_swig
(though I did spend a lot of time on the wavelets stuff recently :), but I
think we would all benefit from having this kind of 'infrastructure' code in
the core. Once there, improvements can be made over time, with less effort
wasted.

If we agree on this, it's as simple as dropping those two directories
somewhere on the numpy codebase (sandbox?, doc?). Just having them
'officially' available will be a benefit, I think.

I'd like to thank Bill for the time he put into the swig module, as well as
others who contributed code for this. Hopefully it will benefit everyone.

Regards,

f

From prabhu_r at users.sf.net Mon Mar 13 00:51:14 2006
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Mon, 13 Mar 2006 11:21:14 +0530
Subject: [SciPy-dev] Access to C/C++, typemaps, pyrex...
In-Reply-To: <4414BDEF.70001@colorado.edu>
References: <4414BDEF.70001@colorado.edu>
Message-ID: <17429.2130.250319.896367@prpc.aero.iitb.ac.in>

>>>>> "Fernando" == Fernando Perez writes:

    Fernando> Hi all, there are recurrent questions on the topic of
    Fernando> low-level access to numpy, wrapping libraries, etc.
    Fernando> Many people have home-brewed SWIG typemaps, the cookbook
    Fernando> has some (very nice) pyrex examples, etc. I think that

[...]

    Fernando> ... I think we would all benefit from having this kind
    Fernando> of 'infrastructure' code in the core. Once there,
    Fernando> improvements can be made over time, with less effort
    Fernando> wasted.

+3 (one for each).

    Fernando> If we agree on this, it's as simple as dropping those
    Fernando> two directories somewhere on the numpy codebase
    Fernando> (sandbox?, doc?). Just having them 'officially'
    Fernando> available will be a benefit, I think.

Yes, this will certainly be very useful.

regards,
prabhu

From nwagner at mecha.uni-stuttgart.de Mon Mar 13 03:14:19 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Mon, 13 Mar 2006 09:14:19 +0100
Subject: [SciPy-dev] cannot find -lX11
Message-ID: <441529DB.7000806@mecha.uni-stuttgart.de>

python setup.py install failed

/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld:
cannot find -lX11
collect2: ld returned 1 exit status
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld:
cannot find -lX11
collect2: ld returned 1 exit status
error: Command "gcc -pthread -shared
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/pygist/gistCmodule.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/gist.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/tick.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/tick60.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/engine.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/gtext.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/draw.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/draw0.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/clip.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/gread.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/gcntr.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/hlevel.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/ps.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/cgm.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/eps.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/style.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/xfancy.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/gist/xbasic.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/dir.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/files.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/fpuset.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/pathnm.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/timew.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/uevent.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/ugetc.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/umain.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/usernm.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/unix/slinks.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/colors.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/connect.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/cursors.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/errors.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/events.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/fills.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/fonts.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/images.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/lines.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/pals.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/pwin.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/resource.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/rgbread.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/textout.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/rect.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/clips.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/x11/points.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/hash.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/hash0.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/mm.o build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/alarms.o 
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/pstrcpy.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/pstrncat.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/p595.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/bitrev.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/bitlrot.o
build/temp.linux-x86_64-2.4/Lib/sandbox/xplt/src/play/all/bitmrot.o
-LLib/sandbox/xplt/. -LLib/sandbox/xplt/src -L/usr/lib
-Lbuild/temp.linux-x86_64-2.4 -lX11 -lm -o
build/lib.linux-x86_64-2.4/scipy/sandbox/xplt/gistC.so" failed with exit
status 1

lisa:/usr/local/svn/scipy # locate libX11
/usr/lib/NX/lib/libX11.so.6
/usr/lib/NX/lib/libX11.so.6.2
/usr/X11R6/lib/libX11.so.6
/usr/X11R6/lib/libX11.so.6.2
/usr/X11R6/lib64/libX11.a
/usr/X11R6/lib64/libX11.so
/usr/X11R6/lib64/libX11.so.6
/usr/X11R6/lib64/libX11.so.6.2

Any idea how to fix this problem would be appreciated?

Nils

From jswhit at fastmail.fm Mon Mar 13 15:41:32 2006
From: jswhit at fastmail.fm (Jeff Whitaker)
Date: Mon, 13 Mar 2006 13:41:32 -0700
Subject: [SciPy-dev] cannot find -lX11
In-Reply-To: <441529DB.7000806@mecha.uni-stuttgart.de>
References: <441529DB.7000806@mecha.uni-stuttgart.de>
Message-ID: <4415D8FC.405@fastmail.fm>

Nils Wagner wrote:
> python setup.py install failed
>
> /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld:
> cannot find -lX11
> collect2: ld returned 1 exit status
>
> [...]
>
> Any idea how to fix this problem would be appreciated?
>
> Nils

Nils: Looks like xplt is assuming that there is a symlink /usr/lib/X11 ->
../X11R6/lib/X11. Most linux distros (and MacOS) have this. You could put
the symlink there yourself, but preferably /usr/X11R6/lib should be added
to the search path in the setup.py file.

-Jeff

--
Jeffrey S. Whitaker         Phone  : (303)497-6313
Meteorologist               FAX    : (303)497-6449
NOAA/OAR/PSD R/PSD1         Email  : Jeffrey.S.Whitaker at noaa.gov
325 Broadway                Office : Skaggs Research Cntr 1D-124
Boulder, CO, USA 80303-3328 Web    : http://tinyurl.com/5telg

From jonathan.taylor at stanford.edu Tue Mar 14 14:59:36 2006
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Tue, 14 Mar 2006 11:59:36 -0800
Subject: [SciPy-dev] unique hanging?
Message-ID: <441720A8.2040004@stanford.edu>

hi all, apologize if this is not the right list -- in the latest svn of
numpy, unique seems to be taking a long time with "nan"

reproducible error:
-------------------------------------------------------------------------------------------
import numpy as N
import numpy.random as R

x = R.standard_normal((100000,))
x = N.around(x * 100.) / 100.
print len(N.unique(x))

x[0] = N.nan
print len(N.unique(x))

x[0:50000] = N.nan
print 'OK'
print len(N.unique(x))

--
------------------------------------------------------------------------
I'm part of the Team in Training: please support our efforts for the
Leukemia and Lymphoma Society!
http://www.active.com/donate/tntsvmb/tntsvmbJTaylor
GO TEAM !!!
------------------------------------------------------------------------
Jonathan Taylor                      Tel: 650.723.9230
Dept. of Statistics                  Fax: 650.725.8977
Sequoia Hall, 137                    www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jonathan.taylor.vcf
Type: text/x-vcard
Size: 329 bytes
Desc: not available
URL:

From tim.leslie at gmail.com Tue Mar 14 23:27:19 2006
From: tim.leslie at gmail.com (Tim Leslie)
Date: Wed, 15 Mar 2006 15:27:19 +1100
Subject: [SciPy-dev] undefined function adm() used in Lib/stats/_support.py:linexand
Message-ID: 

The last line of the function linexand in stats/_support.py is

    return adm(a,criterion)

but I can't for the life of me find where adm is defined or quite what it
should do. Anyone have any ideas? The following test shows the error in all
its glory:

>>> from scipy.stats import paired
>>> from numpy import array
>>> a = array([1,2])
>>> b = array([2,3])
>>> paired(a, b)
Independent or related samples, or correlation (i,r,c): c
Is the data Continuous, Ranked, or Dichotomous (c,r,d): d
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", line 1201, in paired
    r,p = pointbiserialr(x,y)
  File "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", line 1273, in pointbiserialr
    x = _support.linexand(data,0,categories[0])
  File "/usr/lib/python2.4/site-packages/scipy/stats/_support.py", line 148, in linexand
    return adm(a,criterion)
NameError: global name 'adm' is not defined

AFAICT this bug affects the following stats interface functions: paired,
pointbiserialr, findwithin, anova. If no one knows a quick fix for this,
I'll open a ticket.

Cheers,

Timl
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oliphant.travis at ieee.org Wed Mar 15 00:03:35 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 14 Mar 2006 22:03:35 -0700
Subject: [SciPy-dev] undefined function adm() used in Lib/stats/_support.py:linexand
In-Reply-To: 
References: 
Message-ID: <4417A027.4020508@ieee.org>

Tim Leslie wrote:
> The last line of the function linexand in stats/_support.py is
>
>     return adm(a,criterion)
>
> but I can't for the life of me find where adm is defined or quite what
> it should do. Anyone have any ideas?

_support.py is culled from Gary's pstat.py which I still had lying around.
Apparently the adm function was not brought over. I did that and fixed a
few other missing imports I found in the process.

The big problem with stats.py is that it was borrowed from Gary and
"adapted" to SciPy. Obviously, the "adaptation" had/has holes in it.
Thanks for helping with it.

From tim.leslie at gmail.com Wed Mar 15 01:04:14 2006
From: tim.leslie at gmail.com (Tim Leslie)
Date: Wed, 15 Mar 2006 17:04:14 +1100
Subject: [SciPy-dev] scipy.linalg.test() fails
Message-ID: 

Can anyone shed some light on why this might be failing?

>>> scipy.__version__
'0.4.7.1703'

FAIL: check_lu (scipy.linalg.tests.test_decomp.test_lu_solve)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 138, in check_lu
    assert_array_equal(x1,x2)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 204, in assert_array_equal
    assert cond,\
AssertionError:
Arrays are not equal (mismatch 100.0%):
        Array 1: [ 0.2672282898014736  0.6317074274685486 -0.4493205812802842
        -0.0999462182678617 -0.0754480913110158  0.234721331058834...
        Array 2: [ 8.6304512412502579 -10.8231189706396229  4.9229305341360465
        -4.8356760108200953  2.6001607049224096  6.416518808...

Cheers,

Timl
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nwagner at mecha.uni-stuttgart.de Wed Mar 15 02:50:31 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 15 Mar 2006 08:50:31 +0100
Subject: [SciPy-dev] import pylab --> ImportError: cannot import name inverse
Message-ID: <4417C747.7010800@mecha.uni-stuttgart.de>

>>> matplotlib.__version__
'0.87.2svn'

Python 2.4.1 (#1, Sep 12 2005, 23:33:18)
[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pylab
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ?
    from matplotlib.pylab import *
  File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line 200, in ?
    from axes import Axes, PolarAxes
  File "/usr/lib64/python2.4/site-packages/matplotlib/axes.py", line 14, in ?
    from artist import Artist, setp
  File "/usr/lib64/python2.4/site-packages/matplotlib/artist.py", line 4, in ?
    from transforms import identity_transform
  File "/usr/lib64/python2.4/site-packages/matplotlib/transforms.py", line 193, in ?
    from matplotlib.numerix.linear_algebra import inverse
ImportError: cannot import name inverse

Nils

From mmetz at astro.uni-bonn.de Wed Mar 15 04:53:00 2006
From: mmetz at astro.uni-bonn.de (Manuel Metz)
Date: Wed, 15 Mar 2006 10:53:00 +0100
Subject: [SciPy-dev] Bugfixes for Numeric ???
Message-ID: <4417E3FC.8070605@astro.uni-bonn.de>

Hi,
is there someone who still maintains bugfixes for Numeric? If so, where is
the right place to report bugfixes?

I think Numeric will still be used for a long time, even though numpy has
arrived.
At least, Debian (sid) has not updated to numpy yet.

Manuel

see bugfix in tolist() and patch:
http://www.scipy.net/pipermail/scipy-dev/2006-March/005492.html
http://www.scipy.net/pipermail/scipy-dev/2006-March/005493.html

--
---------------------------------------
Manuel Metz ............ Stw at AIfA
Argelander Institut fuer Astronomie
Auf dem Huegel 71 (room 3.06)
D - 53121 Bonn
E-Mail: mmetz at astro.uni-bonn.de
Web: www.astro.uni-bonn.de/~mmetz
Phone: (+49) 228 / 73-3660
Fax: (+49) 228 / 73-3672
---------------------------------------

From nwagner at mecha.uni-stuttgart.de Wed Mar 15 07:33:33 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 15 Mar 2006 13:33:33 +0100
Subject: [SciPy-dev] Bug in lu_solve
Message-ID: <4418099D.307@mecha.uni-stuttgart.de>

python -i lu.py
0.4.7.1711
0.9.7.2245
solve 1.75541673429e-16
lu_factor - lu_solve 12.9336917843
[[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]
[0 0 0 0 0 0 0 0 0 0]
flapack 1.75541673429e-16
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lu.py
Type: text/x-python
Size: 532 bytes
Desc: not available
URL:

From robert.kern at gmail.com Wed Mar 15 12:05:14 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 15 Mar 2006 11:05:14 -0600
Subject: [SciPy-dev] Bugfixes for Numeric ???
In-Reply-To: <4417E3FC.8070605@astro.uni-bonn.de>
References: <4417E3FC.8070605@astro.uni-bonn.de>
Message-ID: <4418494A.3080302@gmail.com>

Manuel Metz wrote:
> Hi,
> is there someone who still maintains bugfixes for Numeric?

I don't think so. Would you like to?
--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cookedm at physics.mcmaster.ca Wed Mar 15 16:40:14 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 15 Mar 2006 16:40:14 -0500
Subject: [SciPy-dev] Bugfixes for Numeric ???
In-Reply-To: <4417E3FC.8070605@astro.uni-bonn.de> (Manuel Metz's message of "Wed, 15 Mar 2006 10:53:00 +0100")
References: <4417E3FC.8070605@astro.uni-bonn.de>
Message-ID: 

Manuel Metz writes:
> Hi,
> is there someone who still maintains bugfixes for Numeric? If so,
> where is the right place to report bugfixes?
>
> I think Numeric will still be used for a long time, even though numpy
> has arrived. At least, Debian (sid) has not updated to numpy yet.
>
> Manuel
>
> see bugfix in tolist() and patch:
> http://www.scipy.net/pipermail/scipy-dev/2006-March/005492.html
> http://www.scipy.net/pipermail/scipy-dev/2006-March/005493.html

Put them into the Numeric bugtracker; then at least someone who still
wants to use it can find them. If you've got patches, put them in the
patches area at
http://sourceforge.net/tracker/?atid=301369&group_id=1369&func=browse
(it's a shorter list than the bugs ;-)

At some point, someone may commit them to CVS there, but I wouldn't
expect a new release.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From zpincus at stanford.edu Wed Mar 15 17:22:25 2006
From: zpincus at stanford.edu (Zachary Pincus)
Date: Wed, 15 Mar 2006 14:22:25 -0800
Subject: [SciPy-dev] Periodic spline interpolation bug / memory error?
Message-ID: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu>

Hi folks,

I'm trying to estimate smooth contours with scipy's parametric splines,
given a set of (x, y) points known to lie along the contour. Now, because
contours are closed, special care must be taken to ensure that the
interpolated value at the last point be the same as the interpolated value
at the first point. I assume that the 'per' option to
scipy.interpolate.splprep is designed to allow for this, as per the
documentation:

> per -- If non-zero, data points are considered periodic with period
>        x[m-1] - x[0] and a smooth periodic spline approximation is
>        returned. Values of y[m-1] and w[m-1] are not used.

However, I cannot get this to work on my computer. Setting 'per = true'
always results in memory errors or other problems. Here is a simple example
to reproduce the problem:

# first make x and y points along the unit circle, from zero to just
# below two pi
In [1]: import numpy, scipy, scipy.interpolate
In [2]: twopi = numpy.arange(0, 2 * numpy.pi, 0.1)
In [3]: xs = numpy.cos(twopi)
In [4]: ys = numpy.sin(twopi)
In [5]: tck, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = True)
Warning: Setting x[0][63]=x[0][0]
Warning: Setting x[1][63]=x[1][0]
...[here my machine grinds for 2-3 minutes]...
Warning: The required storage space exceeds the available strorage space.
Probably causes: nest to small or s is too small. (fp>s)

At this point, the returned tck arrays are just all zeros. Sometimes I get
other malloc errors printed to stdout and memory error exceptions, e.g.:

Python(6820,0xa000ed68) malloc: *** vm_allocate(size=3716243456) failed (error code=3)
Python(6820,0xa000ed68) malloc: *** error: can't allocate region
Python(6820,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug

During the time that my machine is grinding, python is using very little
CPU -- the grind is all because python is allocating huge amounts of
memory, causing the pager to go nuts.
If I explicitly make the last value of the x and y input arrays equal to
the first value (as the warnings say that the function is doing), I get the
same problem:

In [6]: xs[-1] = xs[0]
In [7]: ys[-1] = ys[0]
In [8]: tck, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = True)
# same thing

Any thoughts?

Zach Pincus
Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

From Doug.LATORNELL at mdsinc.com Wed Mar 15 18:32:29 2006
From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug)
Date: Wed, 15 Mar 2006 15:32:29 -0800
Subject: [SciPy-dev] import pylab --> ImportError: cannot import name inverse
Message-ID: <34090E25C2327C4AA5D276799005DDE001010664@SMDMX0501.mds.mdsinc.com>

I'm seeing this problem too.

I re-built NumPy and SciPy from SVN this afternoon. My matplotlib has been
static for several weeks.

IsoInfoCompute:python$ ipython
Python 2.4.1 (#1, Sep 3 2005, 13:08:59)
Type "copyright", "credits" or "license" for more information.

IPython 0.6.15 -- An enhanced Interactive Python.
?       -> Introduction to IPython's features.
%magic  -> Information about IPython's 'magic' % functions.
help    -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import numpy
In [2]: numpy.__version__
Out[2]: '0.9.7.2247'
In [3]: import scipy
In [4]: scipy.__version__
Out[4]: '0.4.7.1711'
In [5]: import matplotlib
In [6]: matplotlib.__version__
Out[6]: '0.86.1'

Doug

> -----Original Message-----
> From: scipy-dev-bounces at scipy.net
> [mailto:scipy-dev-bounces at scipy.net] On Behalf Of Nils Wagner
> Sent: March 14, 2006 23:51
> To: matplotlib-users at lists.sourceforge.net; SciPy Developers List
> Subject: [SciPy-dev] import pylab --> ImportError: cannot
> import name inverse
>
> >>> matplotlib.__version__
> '0.87.2svn'
>
> >>> import pylab
>
> [...]
>
> ImportError: cannot import name inverse
>
> Nils
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

This email and any files transmitted with it may contain privileged or
confidential information and may be read or used only by the intended
recipient.
If you are not the intended recipient of the email or any of its
attachments, please be advised that you have received this email in error
and any use, dissemination, distribution, forwarding, printing or copying
of this email or any attached files is strictly prohibited. If you have
received this email in error, please immediately purge it and all
attachments and notify the sender by reply email or contact the sender at
the number listed.

From robert.kern at gmail.com Wed Mar 15 18:39:54 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 15 Mar 2006 17:39:54 -0600
Subject: [SciPy-dev] import pylab --> ImportError: cannot import name inverse
In-Reply-To: <34090E25C2327C4AA5D276799005DDE001010664@SMDMX0501.mds.mdsinc.com>
References: <34090E25C2327C4AA5D276799005DDE001010664@SMDMX0501.mds.mdsinc.com>
Message-ID: <4418A5CA.4080008@gmail.com>

LATORNELL, Doug wrote:
> I'm seeing this problem too.
>
> I re-built NumPy and SciPy from SVN this afternoon. My matplotlib has
> been static for several weeks.

Travis has committed a fix to matplotlib's SVN repository.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From Doug.LATORNELL at mdsinc.com Wed Mar 15 18:41:51 2006
From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug)
Date: Wed, 15 Mar 2006 15:41:51 -0800
Subject: [SciPy-dev] import pylab --> ImportError: cannot import name inverse
Message-ID: <34090E25C2327C4AA5D276799005DDE001010665@SMDMX0501.mds.mdsinc.com>

Thanks, Robert. I'll update my matplotlib build.
Doug

> -----Original Message-----
> From: scipy-dev-bounces at scipy.net
> [mailto:scipy-dev-bounces at scipy.net] On Behalf Of Robert Kern
> Sent: March 15, 2006 15:40
> To: SciPy Developers List
> Subject: Re: [SciPy-dev] import pylab --> ImportError: cannot import
> name inverse
>
> LATORNELL, Doug wrote:
> > I'm seeing this problem too.
> >
> > I re-built NumPy and SciPy from SVN this afternoon. My
> > matplotlib has been static for several weeks.
>
> Travis has committed a fix to matplotlib's SVN repository.
>
> --
> Robert Kern
> robert.kern at gmail.com
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From tim.leslie at gmail.com Wed Mar 15 22:44:09 2006
From: tim.leslie at gmail.com (Tim Leslie)
Date: Thu, 16 Mar 2006 14:44:09 +1100
Subject: [SciPy-dev] Module level docstrings
Message-ID: 

Looking through the code for both numpy and scipy, there are lots of cases
where modules have header comments of the form:

#
# This is the foobar module for doing stuff.
# Rather than having these blocks as comments, would it be worth changing them to be docstrings so they could be parsed by tools like pydoc and doxygen? Cheers, Timl -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Mar 15 22:46:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Mar 2006 21:46:30 -0600 Subject: [SciPy-dev] Module level docstrings In-Reply-To: References: Message-ID: <4418DF96.9040206@gmail.com> Tim Leslie wrote: > Looking through the code for both numpy and scipy, there are lots of > cases where modules have header comments of the form: > > # > # This is the foobar module for doing stuff. <more detail about foobar here> > # > > Rather than having these blocks as comments, would it be worth changing > them to be docstrings so they could be parsed by tools like pydoc and > doxygen? Yes! Thank you! -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonas at mwl.mit.edu Wed Mar 15 22:52:28 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Wed, 15 Mar 2006 22:52:28 -0500 Subject: [SciPy-dev] Build bots? Message-ID: <20060316035228.GA12734@convolution.mwl.mit.edu> Does anyone know if there are or have been any build bots for scipy and similar pieces of software? Would there be any interest if I were to look into them? It seems that a lot of the questions on the various scipy-related lists are "I can't seem to build version foo on distro bar" or "the latest svn check-in broke foo". I have an extra machine at home and some experience with xen (a linux VMware-like virtual machine system), and so would be interested in helping set up such a system.
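The comment-to-docstring change Tim proposed earlier in this batch is mechanical: move the header text into a string literal that is the module's first statement, and it becomes the module's `__doc__`, which pydoc and help() can display. A small self-contained sketch (`foobar` is the thread's made-up module name, simulated with `types.ModuleType` rather than a real file):

```python
import types

# A '#' comment block at the top of a module is invisible to pydoc/help().
# The same text as the module's first statement, written as a string
# literal, becomes the module's __doc__ attribute, which those tools show.

# Equivalent to a file foobar.py that starts with
# """This is the foobar module for doing stuff.""":
mod = types.ModuleType("foobar", "This is the foobar module for doing stuff.")

print(mod.__doc__)  # This is the foobar module for doing stuff.
```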
I was envisioning testing the following configurations: projects: numpy scipy pytables ipython matplotlib across: debian sarge debian testing fedora core 4 redhat enterprise linux 4 WS ubuntu (breezy badger) My experience with other linux distros (and the *bsds) (and the sometimes-forgotten other architectures, like ppc), as well as windows, is rather limited, but I'll happily take suggestions for additional things to add. Ideally we'd produce a set of scripts that end users could eventually run on their own machines, if they wish, which would then upload the results to the main build test machine, or maybe http-post to a page on the scipy wiki. Every night, we'd checkout the latest svn from the projects, and try a build across the architectures; we'd output that info to a centralized location and then stick it on the web. I'm sure I'm not the first person to think of this, and maybe not the first to try, so I'll take all the advice you want to throw at me. Eventually, it would be nice to have this set up to automatically run the unit tests as well, but right now that seems a bit ambitious. Maybe this summer... :) ...Eric From jonas at mwl.mit.edu Wed Mar 15 22:57:00 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Wed, 15 Mar 2006 22:57:00 -0500 Subject: [SciPy-dev] Build bots? In-Reply-To: <20060316035228.GA12734@convolution.mwl.mit.edu> References: <20060316035228.GA12734@convolution.mwl.mit.edu> Message-ID: <20060316035700.GB12734@convolution.mwl.mit.edu> > Ideally we'd produce a set of scripts that end users could eventually > run on their own machines, if they wish, which would then upload the > results to the main build test machine, or maybe http-post to a page > on the scipy wiki. Of course, 10 minutes later I discover: http://buildbot.sourceforge.net/ Which is evidently the be-all, end-all build bot; I'll be toying around with it tonight and seeing how easy it is to set everything up. 
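The publishing half of the nightly scheme described above (gather per-project, per-distro results in one place and render them for the web) can be prototyped independently of any build tool. A hypothetical sketch; the project and distro names echo the lists in this message, and the pass/fail data is made up:

```python
def build_matrix(results):
    """Render {(project, distro): passed} as a plain-text status table."""
    projects = sorted({p for p, _ in results})
    distros = sorted({d for _, d in results})
    width = max(len(d) for d in distros) + 2
    header = " " * 12 + "".join(d.ljust(width) for d in distros)
    rows = [header]
    for p in projects:
        cells = "".join(
            ("OK" if results[(p, d)] else "FAIL").ljust(width) for d in distros
        )
        rows.append(p.ljust(12) + cells)
    return "\n".join(rows)

# Made-up nightly results for two projects on two distros:
results = {
    ("numpy", "sarge"): True,
    ("numpy", "fc4"): True,
    ("scipy", "sarge"): False,
    ("scipy", "fc4"): True,
}
print(build_matrix(results))
```

A nightly cron job would fill `results` from the actual checkout-and-build runs and push the rendered table to the wiki or a web page.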
...Eric From bhendrix at enthought.com Thu Mar 16 00:02:06 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Wed, 15 Mar 2006 23:02:06 -0600 Subject: [SciPy-dev] [Fwd: Build bots?] In-Reply-To: <4418E1CD.9020500@gmail.com> References: <4418E1CD.9020500@gmail.com> Message-ID: <4418F14E.1050609@enthought.com> Eric, At Enthought we use CruiseControl (http://cruisecontrol.sf.net) for all of our internal builds as well as our open source projects, such as enthought, chaco, traits, kiva, etc. Just this week I added numpy and scipy to our builds. Right now I've only got it building in Windows, but I will have it building on Redhat 3.0 in a couple of days. If you're not familiar with CruiseControl, it's a build system that allows for continuous and scheduled builds with support for several SCM systems. It's a build system with a lot of features, but at the same time it's kind of ill-suited for building python packages. The problem with using CruiseControl is that it's targeted at continuous builds and detecting build breaks, which isn't all that useful for python projects. We've taken it a bit further and use CruiseControl to do continuous builds with unit & coverage testing, plus scheduled builds which build installers, generate html and perform other tasks. So while CruiseControl is ill-suited for building python packages, it is well suited for automating everything else we want to do. If anyone is interested in contributing to this effort, I welcome the help. Or if anyone would like to hear about all the nifty customizations we made, I can tell you about those too. Bryce > > Does anyone know if there are or have been any build bots for scipy > and similar pieces of software? Would there be any interest if I were > to look into them? > > It seems that a lot of the questions on the various scipy-related > lists are "I can't seem to build version foo on distro bar" or "the > latest svn check-in broke foo".
> > I have an extra machine at home and some experience with xen (a linux > VMware-like virtual machine system), and so would be interested in > helping set up such a system. I was envisioning testing the following > configurations: > > projects: > numpy > scipy > pytables > ipython > matplotlib > > across: > > debian sarge > debian testing > fedora core 4 > redhat enterprise linux 4 WS > ubuntu (breezy badger) > > My experience with other linux distros (and the *bsds) (and the > sometimes-forgotten other architectures, like ppc), as well as > windows, is rather limited, but I'll happily take suggestions for > additional things to add. > > Ideally we'd produce a set of scripts that end users could eventually > run on their own machines, if they wish, which would then upload the > results to the main build test machine, or maybe http-post to a page > on the scipy wiki. > > Every night, we'd checkout the latest svn from the projects, and try a > build across the architectures; we'd output that info to a centralized > location and then stick it on the web. > > I'm sure I'm not the first person to think of this, and maybe not the > first to try, so I'll take all the advice you want to throw at > me. Eventually, it would be nice to have this set up to automatically > run the unit tests as well, but right now that seems a bit > ambitious. Maybe this summer... :) > > > ...Eric > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > > From jonathan.taylor at stanford.edu Thu Mar 16 03:44:10 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Thu, 16 Mar 2006 00:44:10 -0800 Subject: [SciPy-dev] Build bots? In-Reply-To: <20060316035228.GA12734@convolution.mwl.mit.edu> References: <20060316035228.GA12734@convolution.mwl.mit.edu> Message-ID: <4419255A.4020206@stanford.edu> hi, some thoughts on the topic.... 
for those of us who work in linux and want to distribute python modules with extensions in windows, this can be a pain. i spent some time tricking python on linux to build windows installers (with extension C modules) with wine.... i never built scipy or anything but numarray and Numeric were pretty easy... if there was a more robust way of doing this, this would be great. basically, for numarray and Numeric, all i had to do was build a mingw cross-compiler, and set sys.platform = 'win32'; os.name = 'nt'. for more complicated packages, there were some hacks, of course. -- jonathan Eric Jonas wrote: >Does anyone know if there are or have been any build bots for scipy >and similar pieces of software? Would there be any interest if I were >to look into them? > >It seems that a lot of the questions on the various scipy-related >lists are "I can't seem to build version foo on distro bar" or "the >latest svn check-in broke foo". > >I have an extra machine at home and some experience with xen (a linux >VMware-like virtual machine system), and so would be interested in >helping set up such a system. I was envisioning testing the following >configurations: > >projects: >numpy >scipy >pytables >ipython >matplotlib > >across: > >debian sarge >debian testing >fedora core 4 >redhat enterprise linux 4 WS >ubuntu (breezy badger) > >My experience with other linux distros (and the *bsds) (and the >sometimes-forgotten other architectures, like ppc), as well as >windows, is rather limited, but I'll happily take suggestions for >additional things to add. > >Ideally we'd produce a set of scripts that end users could eventually >run on their own machines, if they wish, which would then upload the >results to the main build test machine, or maybe http-post to a page >on the scipy wiki. > >Every night, we'd checkout the latest svn from the projects, and try a >build across the architectures; we'd output that info to a centralized >location and then stick it on the web.
> >I'm sure I'm not the first person to think of this, and maybe not the >first to try, so I'll take all the advice you want to throw at >me. Eventually, it would be nice to have this set up to automatically >run the unit tests as well, but right now that seems a bit >ambitious. Maybe this summer... :) > > > ...Eric > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From n.dumoulin at arverne.homelinux.org Thu Mar 16 05:19:40 2006 From: n.dumoulin at arverne.homelinux.org (Nicolas Dumoulin) Date: Thu, 16 Mar 2006 11:19:40 +0100 Subject: [SciPy-dev] matplotlib and backend_qt Message-ID: <200603161119.41204.n.dumoulin@arverne.homelinux.org> Hi, I'm new on this list. I've discovered matplotlib, which allows plotting data, and I'm particularly interested in the Qt backend for integrating plotting functionality into my scripts. I've followed this tutorial: http://www.scipy.org/Wiki/Cookbook/Matplotlib/Qt_with_IPython_and_Designer It is suitable for use in ipython, but I would also like to make standalone scripts. So I've looked at the example examples/embedding_in_qt.py in the matplotlib archive, and read at the beginning: ----------- [...] # The QApplication has to be created before backend_qt is imported, otherwise # it will create one itself. # Note: color-intensive applications may require a different color allocation # strategy.
QApplication.setColorSpec(QApplication.NormalColor) app = QApplication(sys.argv) from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas [...] ------------ It's annoying because I would like to make independent GUI components and build my QApplication in the main script. To achieve this, I've hacked matplotlib/backends/backend_qt.py to comment out the last 4 lines that create the (so annoying in my case) qt.QApplication, and it works fine. I suppose that this QApplication is created to simplify the use of the qt backend in ipython, doesn't it? My main questions are: * Would it be possible to modify this behaviour, for example by adding a separate module that enables QApplication creation for use in ipython? * Have I missed something? If you agree with my reasoning, I could add a page to the Wiki/Cookbook to describe how I do it... Thanks -- Nicolas Dumoulin (french) http://bobuse.webhop.net http://eucd.info : sauvons le droit d'auteur ! -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From n.dumoulin at arverne.homelinux.org Thu Mar 16 06:38:40 2006 From: n.dumoulin at arverne.homelinux.org (Nicolas Dumoulin) Date: Thu, 16 Mar 2006 12:38:40 +0100 Subject: [SciPy-dev] matplotlib and backend_qt In-Reply-To: <200603161119.41204.n.dumoulin@arverne.homelinux.org> References: <200603161119.41204.n.dumoulin@arverne.homelinux.org> Message-ID: <200603161238.40990.n.dumoulin@arverne.homelinux.org> Sorry, I thought that matplotlib was part of the scipy project, but apparently not. I'll go ask my questions in the right place. best regards -- Nicolas Dumoulin http://bobuse.webhop.net http://eucd.info : sauvons le droit d'auteur ! -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From jonas at mwl.mit.edu Thu Mar 16 06:40:22 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Thu, 16 Mar 2006 06:40:22 -0500 Subject: [SciPy-dev] [Fwd: Build bots?] In-Reply-To: <4418F14E.1050609@enthought.com> References: <4418E1CD.9020500@gmail.com> <4418F14E.1050609@enthought.com> Message-ID: <20060316114022.GD12734@convolution.mwl.mit.edu> Bryce, > enthought, chaco, traits, kiva, etc. Just this week I added numpy and > scipy to our builds. Right now I've only got it building in Windows, but > I will have it building on Redhat 3.0 in a couple of days. That sounds wonderful! Is the result of the build attempts visible anywhere? Do you have plans to support more OSes/linux distributions? > installers, generate html and other tasks. So while CruiseControl is ill > suited for bulding python packages, it is well suited for automating > everything else we want to do. I'm still not quite clear on how it's ill-suited to python -- is it because building things on python rarely "breaks"? > If anyone is interested in contributing to this effort, I welcome the > help. Or if anyone would like to hear about all the nifty customizations > we made, I can tell you about those too. I'd love to hear about it, and I'm guessing many others would as well. After spending some time reading the buildbot (buildbot.sf.net) documentation, one of the things it lets you do is have an array of "build slaves" that can be located anywhere and just have to have a user account running the build slave daemon. If you want to make sure things compile/run/test on your esoteric platform, you just have to run a build-slave. Does cruisecontrol have anything like this? ...Eric From jh at oobleck.astro.cornell.edu Thu Mar 16 10:05:04 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu, 16 Mar 2006 10:05:04 -0500 Subject: [SciPy-dev] Build bots? 
In-Reply-To: (scipy-dev-request@scipy.net) References: Message-ID: <200603161505.k2GF54u8014691@oobleck.astro.cornell.edu> Eric, this is a great concept! Getting pre-packaged native binary installs (RPMS, .debs, and so on) for all architectures is high on the list of usability tasks. Being able to make sure that the software builds on all architectures is part of what's holding that up. I'm not up on how the packaging is currently done (other than the magic word "distutils"), but you've put your finger on a major area for improvement and I support your idea. Can your buildbot be configured to produce native binary installs, rather than just tarballs? Please put your contact info and a brief description of your project under PACKAGING on http://scipy.org/Developer_Zone, and link to any other pages you make from there. These might simply point to the trac wiki if you choose to work there, but listing under Developer Zone will get you seen by new folks who might be interested in helping, and will help us track the packaging effort. Several folks have already signed up in the Developer Zone as willing to help on packaging. Maybe one of them can help get the unit tests into your system, and another can get it to make binary installers if it doesn't already. --jh-- From bhendrix at enthought.com Thu Mar 16 10:27:07 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Thu, 16 Mar 2006 09:27:07 -0600 Subject: [SciPy-dev] [Fwd: Build bots?] In-Reply-To: <20060316114022.GD12734@convolution.mwl.mit.edu> References: <4418E1CD.9020500@gmail.com> <4418F14E.1050609@enthought.com> <20060316114022.GD12734@convolution.mwl.mit.edu> Message-ID: <441983CB.90702@enthought.com> Eric Jonas wrote: > Bryce, > > >> enthought, chaco, traits, kiva, etc. Just this week I added numpy and >> scipy to our builds. Right now I've only got it building in Windows, but >> I will have it building on Redhat 3.0 in a couple of days. >> > > That sounds wonderful! 
Is the result of the build attempts visible > anywhere? Do you have plans to support more OSes/linux distributions? > RH 4 for 64 bit is next for us, beyond that, maybe OS X. The effort doesn't currently have a public face; I've got a bit of work to do on authentication before I can open it up. This should happen very soon. > >> installers, generate html and other tasks. So while CruiseControl is ill >> suited for building python packages, it is well suited for automating >> everything else we want to do. >> > I'm still not quite clear on how it's ill-suited to python -- is it > because building things on python rarely "breaks"? > I didn't want to get into it too much, but there are (were) a few problems: * build scripts are Ant files. We run python ant scripts to run distutils, but when a build breaks the ant process doesn't return a useful error message. The complete error and stack are in the log files, but the admin has to go read the log file rather than relying on one of CruiseControl's publishers. * Python's unit tester didn't support XML output. I had to extend the unittest code to write junit compliant XML output. * Extending CruiseControl requires J2EE experience * CruiseControl's build metrics only cover build breaks. Since Python's build rarely breaks, the metrics are almost useless. > >> If anyone is interested in contributing to this effort, I welcome the >> help. Or if anyone would like to hear about all the nifty customizations >> we made, I can tell you about those too. >> > > I'd love to hear about it, and I'm guessing many others would as > well. After spending some time reading the buildbot (buildbot.sf.net) > documentation, one of the things it lets you do is have an array of > "build slaves" that can be located anywhere and just have to have a > user account running the build slave daemon. If you want to make sure > things compile/run/test on your esoteric platform, you just have to > run a build-slave.
Does cruisecontrol have anything like this? > > ...Eric > CruiseControl and Ant both have hacks that allow for distributed builds which may be on varying platforms. I'm not sure we're going to go that way; instead we'll probably run CruiseControl on each platform and aggregate the build results on a single server. Bryce -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Thu Mar 16 10:53:42 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 16 Mar 2006 16:53:42 +0100 (CET) Subject: [SciPy-dev] bench_random errors Message-ID: Hi, with a recent svn one gets the following errors ====================================================================== ERROR: bench_random (scipy.linalg.tests.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 57, in bench_random Numeric_eigvals = linalg.eigenvalues AttributeError: 'module' object has no attribute 'eigenvalues' ====================================================================== ERROR: bench_random (scipy.linalg.tests.test_basic.test_solve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 160, in bench_random basic_solve = linalg.solve_linear_equations AttributeError: 'module' object has no attribute 'solve_linear_equations' ====================================================================== FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 57, in
check_expon assert_array_less(A, crit[-2:]) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 50.0%): Array 1: 1.6956285208609501 Array 2: [ 1.587 1.9339999999999999] (I know, the last one is the usual statistics one which goes away when running a second time - with almost certainty ;-) The solution to the first ones seems trivial, patch attached. Best, Jan and Arnd -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: svn.diff URL: From jdhunter at ace.bsd.uchicago.edu Thu Mar 16 10:56:00 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Thu, 16 Mar 2006 09:56:00 -0600 Subject: [SciPy-dev] Computing correlations with SciPy In-Reply-To: <1142524171.358429.129850@i39g2000cwa.googlegroups.com> (tkpmep@hotmail.com's message of "16 Mar 2006 07:49:31 -0800") References: <1142524171.358429.129850@i39g2000cwa.googlegroups.com> Message-ID: <87wteu8ju7.fsf@peds-pc311.bsd.uchicago.edu> The following message is a courtesy copy of an article that has been posted to comp.lang.python as well. >>>>> "tkpmep" == tkpmep writes: tkpmep> I want to compute the correlation between two sequences X tkpmep> and Y, and tried using SciPy to do so without success. tkpmep> Here's what I have, how can I correct it? >>>> X = [1, 2, 3, 4, 5] Y = [5, 4, 3, 2, 1] import scipy >>>> scipy.corrcoef(X,Y) tkpmep> Traceback (most recent call last): File " input>", line 1, in ? File tkpmep> "C:\Python24\Lib\site-packages\numpy\lib\function_base.py", tkpmep> line 671, in corrcoef d = diag(c) File tkpmep> "C:\Python24\Lib\site-packages\numpy\lib\twodim_base.py", tkpmep> line 80, in diag raise ValueError, "Input must be 1- or tkpmep> 2-d." ValueError: Input must be 1- or 2-d. >>>> Hmm, this may be a bug in scipy.
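Until whichever `corrcoef` is at fault gets fixed, the coefficient itself is simple to compute directly; a minimal sketch with plain NumPy:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation of two equal-length 1-d sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.dot(xm, ym) / np.sqrt(np.dot(xm, xm) * np.dot(ym, ym)))

X = [1, 2, 3, 4, 5]
Y = [5, 4, 3, 2, 1]
print(pearson_r(X, Y))  # -1.0: perfectly anti-correlated
```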
matplotlib also defines a corrcoef function, which you may want to use until this problem gets sorted out In [9]: matplotlib.mlab.corrcoef(X,Y) In [10]: X = [1, 2, 3, 4, 5] In [11]: Y = [5, 4, 3, 2, 1] In [12]: matplotlib.mlab.corrcoef(X,Y) Out[12]: array([[ 1., -1.], [-1., 1.]]) From Fernando.Perez at colorado.edu Thu Mar 16 13:50:08 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 16 Mar 2006 11:50:08 -0700 Subject: [SciPy-dev] [Fwd: Build bots?] In-Reply-To: <441983CB.90702@enthought.com> References: <4418E1CD.9020500@gmail.com> <4418F14E.1050609@enthought.com> <20060316114022.GD12734@convolution.mwl.mit.edu> <441983CB.90702@enthought.com> Message-ID: <4419B360.6070903@colorado.edu> bryce hendrix wrote: > Eric Jonas wrote: >> >> I'm still not quite clear on how it's ill-suited to python -- is it >>because building things on python rarely "breaks"? >> > > I didn't want to get into it too much, but there are (were) a few problems: [...] Just curious: has anyone looked at the buildbot system which the core python team is using for python itself (it's a new thing). http://www.python.org/dev/buildbot/ I think it's a zope-based contraption. I don't know anything beyond this, but from looking at the resulting output http://www.python.org/dev/buildbot/trunk/ it does seem to report on tests and similar things. Just a thought. Cheers, f From robert.kern at gmail.com Thu Mar 16 14:15:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Mar 2006 13:15:13 -0600 Subject: [SciPy-dev] [Fwd: Build bots?] 
In-Reply-To: <4419B360.6070903@colorado.edu> References: <4418E1CD.9020500@gmail.com> <4418F14E.1050609@enthought.com> <20060316114022.GD12734@convolution.mwl.mit.edu> <441983CB.90702@enthought.com> <4419B360.6070903@colorado.edu> Message-ID: <4419B941.9030500@gmail.com> Fernando Perez wrote: > bryce hendrix wrote: > >>Eric Jonas wrote: > >>> I'm still not quite clear on how it's ill-suited to python -- is it >>>because building things on python rarely "breaks"? >>> >> >>I didn't want to get into it too much, but there are (were) a few problems: > > [...] > > Just curious: has anyone looked at the buildbot system which the core python > team is using for python itself (it's a new thing). > > http://www.python.org/dev/buildbot/ > > I think it's a zope-based contraption. I don't know anything beyond this, but > from looking at the resulting output > > http://www.python.org/dev/buildbot/trunk/ > > it does seem to report on tests and similar things. I think that we (Enthought) used to use it at one point. Not sure why we stopped. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bhendrix at enthought.com Thu Mar 16 14:38:09 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 16 Mar 2006 13:38:09 -0600 Subject: [SciPy-dev] [Fwd: Build bots?] In-Reply-To: <4419B941.9030500@gmail.com> References: <4418E1CD.9020500@gmail.com> <4418F14E.1050609@enthought.com> <20060316114022.GD12734@convolution.mwl.mit.edu> <441983CB.90702@enthought.com> <4419B360.6070903@colorado.edu> <4419B941.9030500@gmail.com> Message-ID: <4419BEA1.1080604@enthought.com> Robert Kern wrote: > I think that we (Enthought) used to use it at one point. Not sure why we stopped The decision to switch from buildbot to CruiseControl predates my time @ Enthought, so I'm not sure about all of the reasons. 
A couple of weeks ago I was disgruntled with CruiseControl's stability & I threatened to go back to buildbot, but since then CruiseControl has been behaving. Bryce From jonathan.taylor at stanford.edu Thu Mar 16 15:12:41 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Thu, 16 Mar 2006 12:12:41 -0800 Subject: [SciPy-dev] linalg naming Message-ID: <4419C6B9.8010409@stanford.edu> in the latest svn, things like numpy.linalg.inverse , numpy.linalg.generalized_inverse, etc. have disappeared. i remember there being an email about this in the past few days. are the old names supposed to still work? i see there is an old.py module, i suppose the easy fix is just to import old as needed. only this sort of breaks things like matplotlib that tried to be compatible with Numeric and numarray.... is it too much to add from old import * to __init__.py? -- jonathan p.s. by the way, as this is a numpy issue, i don't know if this is the right list to send this to -- is there a numpy-dev? i didn't try too hard, but i couldn't find one. -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From robert.kern at gmail.com Thu Mar 16 15:25:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Mar 2006 14:25:26 -0600 Subject: [SciPy-dev] linalg naming In-Reply-To: <4419C6B9.8010409@stanford.edu> References: <4419C6B9.8010409@stanford.edu> Message-ID: <4419C9B6.30802@gmail.com> Jonathan Taylor wrote: > in the latest svn, things like numpy.linalg.inverse , > numpy.linalg.generalized_inverse, etc. have disappeared. i remember > there being an email about this > in the past few days. > > are the old names supposed to still work? No. > i see there is an old.py > module, i suppose the easy fix is just to import old as needed. only > this sort of breaks things like matplotlib that tried to be compatible > with Numeric and numarray.... matplotlib's SVN already has a fix to use numpy.linalg.old. This is the compatibility layer that such projects should be using. > is it too much to add > > from old import * > > to __init__.py? Well, it was a conscious decision to clean up the namespace instead of having the Numeric-compatibility aliases lying around. Ideally, this would have been done a few months ago, but there were more pressing issues to address. Until we hit 1.0, there aren't any API stability guarantees other than that we'll try not to jerk you around too much. > -- jonathan > > p.s. by the way, as this is a numpy issue, i don't know if this is the > right list to send this to -- is there a numpy-dev? i didn't try too > hard, but i couldn't find one. numpy-discussion at lists.sourceforge.net is the right list. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From jonathan.taylor at stanford.edu Thu Mar 16 15:37:30 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Thu, 16 Mar 2006 12:37:30 -0800 Subject: [SciPy-dev] linalg naming In-Reply-To: <4419C9B6.30802@gmail.com> References: <4419C6B9.8010409@stanford.edu> <4419C9B6.30802@gmail.com> Message-ID: <4419CC8A.3050302@stanford.edu> thanks. i was using matplotlib-0.87.1 from sourceforge -- will switch to svn. didn't know there was svn access... what's the url? -- jonathan Robert Kern wrote: >Jonathan Taylor wrote: > > >>in the latest svn, things like numpy.linalg.inverse , >>numpy.linalg.generalized_inverse, etc. have disappeared. i remember >>there being an email about this >>in the past few days. >> >>are the old names supposed to still work? >> >> > >No. > > > >>i see there is an old.py >>module, i suppose the easy fix is just to import old as needed. only >>this sort of breaks things like matplotlib that tried to be compatible >>with Numeric and numarray.... >> >> > >matplotlib's SVN already has a fix to use numpy.linalg.old. This is the >compatibility layer that such projects should be using. > > > >>is it too much to add >> >>from old import * >> >>to __init__.py? >> >> > >Well, it was a conscious decision to clean up the namespace instead of having >the Numeric-compatibility aliases lying around. Ideally, this would have been >done a few months ago, but there were more pressing issues to address. > >Until we hit 1.0, there aren't any API stability guarantees other than that >we'll try not to jerk you around too much. > > > >>-- jonathan >> >>p.s. by the way, as this is a numpy issue, i don't know if this is the >>right list to send this to -- is there a numpy-dev? i didn't try too >>hard, but i couldn't find one. >> >> > >numpy-discussion at lists.sourceforge.net is the right list. 
> > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From jdhunter at ace.bsd.uchicago.edu Thu Mar 16 15:38:16 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Thu, 16 Mar 2006 14:38:16 -0600 Subject: [SciPy-dev] linalg naming In-Reply-To: <4419CC8A.3050302@stanford.edu> (Jonathan Taylor's message of "Thu, 16 Mar 2006 12:37:30 -0800") References: <4419C6B9.8010409@stanford.edu> <4419C9B6.30802@gmail.com> <4419CC8A.3050302@stanford.edu> Message-ID: <87mzfq15xj.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Jonathan" == Jonathan Taylor writes: Jonathan> thanks. i was using matplotlib-0.87.1 from sourceforge Jonathan> -- will switch to svn. This fix is also in the hot-off-the-presses 0.87.2 release at sourceforge, though there is an issue with python2.3 and win32 that is being worked out. The svn incantation is > svn co https://svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib JDH From oliphant at ee.byu.edu Thu Mar 16 15:53:31 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 16 Mar 2006 13:53:31 -0700 Subject: [SciPy-dev] linalg naming In-Reply-To: <4419C6B9.8010409@stanford.edu> References: <4419C6B9.8010409@stanford.edu> Message-ID: <4419D04B.4080608@ee.byu.edu> Jonathan Taylor wrote: > in the latest svn, things like numpy.linalg.inverse , > numpy.linalg.generalized_inverse, etc.
> have disappeared. i remember
> there being an email about this
> in the past few days.
>
> are the old names supposed to still work? i see there is an old.py
> module, i suppose the easy fix is just to import old as needed. only
> this sort of breaks things like matplotlib that tried to be compatible
> with Numeric and numarray....
>
> is it too much to add
>
> from old import *

Of course this could be done. But, we are trying to see if we can
standardize the interface a bit better. Compatibility layers can easily
include this command (which matplotlib now does).

> p.s. by the way, as this is a numpy issue, i don't know if this is the
> right list to send this to -- is there a numpy-dev? i didn't try too
> hard, but i couldn't find one.

numpy-discussion at lists.sourceforge.net

-Travis

From jonathan.taylor at stanford.edu  Thu Mar 16 18:12:14 2006
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Thu, 16 Mar 2006 15:12:14 -0800
Subject: [SciPy-dev] linalg naming
In-Reply-To: <87mzfq15xj.fsf@peds-pc311.bsd.uchicago.edu>
References: <4419C6B9.8010409@stanford.edu> <4419C9B6.30802@gmail.com>
	<4419CC8A.3050302@stanford.edu> <87mzfq15xj.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <4419F0CE.8060309@stanford.edu>

thanks for all the help. have checked out matplotlib and will keep it
current :)

-- jonathan

John Hunter wrote:

>>>>>> "Jonathan" == Jonathan Taylor writes:
>
>     Jonathan> thanks. i was using matplotlib-0.87.1 from sourceforge
>     Jonathan> -- will switch to svn.
>
> This fix is also in the hot-off-the-presses 0.87.2 release at
> sourceforge, though there is an issue with python2.3 and win32 that is
> being worked out.
>
> The svn incantation is
>
>   > svn co https://svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib
>
> JDH
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From oliphant.travis at ieee.org  Fri Mar 17 03:44:47 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 17 Mar 2006 01:44:47 -0700
Subject: [SciPy-dev] Computing correlations with SciPy
In-Reply-To: <87wteu8ju7.fsf@peds-pc311.bsd.uchicago.edu>
References: <1142524171.358429.129850@i39g2000cwa.googlegroups.com>
	<87wteu8ju7.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <441A76FF.9080200@ieee.org>

John Hunter wrote:

> The following message is a courtesy copy of an article
> that has been posted to comp.lang.python as well.
>
>>>>>> "tkpmep" == tkpmep writes:
>
>     tkpmep> I want to compute the correlation between two sequences X
>     tkpmep> and Y, and tried using SciPy to do so without success.
>     tkpmep> Here's what I have, how can I correct it?
> >>>> X = [1, 2, 3, 4, 5]
> >>>> Y = [5, 4, 3, 2, 1]
> >>>> import scipy
> >>>> scipy.corrcoef(X,Y)
>
>     tkpmep> Traceback (most recent call last):
>     tkpmep>   File "<input>", line 1, in ?
>     tkpmep>   File "C:\Python24\Lib\site-packages\numpy\lib\function_base.py",
>     tkpmep>     line 671, in corrcoef
>     tkpmep>     d = diag(c)
>     tkpmep>   File "C:\Python24\Lib\site-packages\numpy\lib\twodim_base.py",
>     tkpmep>     line 80, in diag
>     tkpmep>     raise ValueError, "Input must be 1- or 2-d."
>     tkpmep> ValueError: Input must be 1- or 2-d.
> >>>>
>
> Hmm, this may be a bug in scipy. matplotlib also defines a corrcoef
> function, which you may want to use until this problem gets sorted out.

The problem is now sorted out in SVN of numpy. The problem was inherited
from Numeric's MLab. I revamped the cov function (which corrcoef
depended on) so it should hopefully work better than it did.

Thanks for the heads up.

-Travis

From nwagner at mecha.uni-stuttgart.de  Fri Mar 17 04:45:13 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 17 Mar 2006 10:45:13 +0100
Subject: [SciPy-dev] iterative.py
Message-ID: <441A8529.7060801@mecha.uni-stuttgart.de>

Hi all,

Just curious: the first lines in iterative.py read

## Automatically adapted for scipy Oct 18, 2005 by

# Iterative methods using reverse-communication raw material
# These methods solve
# Ax = b for x
# where A must have A.matvec(x,*args) defined
#    or be a numeric array

Why is A restricted to A.matvec(x,*args) and numeric arrays ?
I think a matrix input should be possible as well ;-)

Nils

From oliphant at ee.byu.edu  Fri Mar 17 14:37:28 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 17 Mar 2006 12:37:28 -0700
Subject: [SciPy-dev] Call for posters to gmane.comp.python.devel newsgroup
Message-ID: <441B0FF8.3000606@ee.byu.edu>

I'm trying to start discussions on python dev about getting a simple
object into Python that at least exposes the array interface and/or has
the basic C-structure of NumPy arrays.
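[The array interface Travis mentions is a small protocol: an object describes its memory through an `__array_interface__` attribute, and any consumer can wrap that memory without copying. Below is a minimal sketch using a hypothetical `Ramp` class -- an illustration of the protocol, not part of the proposal itself.]

```python
import numpy as np

class Ramp:
    """Toy object exposing the array interface (hypothetical example)."""
    def __init__(self, n):
        self._buf = bytearray(range(n))      # raw storage this object owns
        self.__array_interface__ = {
            'shape': (n,),      # one-dimensional, n elements
            'typestr': '|u1',   # unsigned 8-bit integers, no byte order
            'data': self._buf,  # any object exposing the buffer protocol
            'version': 3,
        }

# NumPy consumes the interface without Ramp knowing anything about NumPy.
a = np.asarray(Ramp(5))
print(a)  # [0 1 2 3 4]
```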
Please voice your support and comments on the newsgroup. The more people that respond, the more the python developers will see that it's not just my lonely voice asking for things to change. Perhaps it will help somebody with more time to get a PEP written up. I doubt we will make it into Python 2.5, unless somebody steps up in the next month, but it will help for Python 2.6 Thanks, -Travis From rudolphv at gmail.com Mon Mar 20 01:29:42 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Mon, 20 Mar 2006 08:29:42 +0200 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). Message-ID: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> I compiled and installed the latest Numpy 0.9.6 and Scipy 0.4.8 without any trouble on a 64bit Ubuntu Linux server (AMD Opteron). When I run the Scipy unit tests though, it crashes with the following error message: >>> scipy.test(1) Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 95 tests for scipy.stats.stats Found 36 tests for scipy.linalg.decomp Found 89 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 70 tests for scipy.stats.distributions Found 12 tests for scipy.io.mmio Found 1 tests for scipy.integrate Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 1 tests for scipy.optimize.zeros Found 5 tests for scipy.interpolate.fitpack Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 341 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 16 tests for 
scipy.lib.blas Found 10 tests for scipy.stats.morestats Found 14 tests for scipy.linalg.blas Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .......caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..FFFFFSegmentation fault What could possibly be the reason for the Segmentation Fault? -- Rudolph van der Merwe From robert.kern at gmail.com Mon Mar 20 01:35:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 00:35:07 -0600 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> Message-ID: <441E4D1B.1030709@gmail.com> Rudolph van der Merwe wrote: > I compiled and installed the latest Numpy 0.9.6 and Scipy 0.4.8 > without any trouble on a 64bit Ubuntu Linux server (AMD Opteron). When > I run the Scipy unit tests though, it crashes with the following error > message: > >>>>scipy.test(1) > ..FFFFFSegmentation fault > > What could possibly be the reason for the Segmentation Fault? Could you please rerun the tests with a higher verbosity? Like scipy.test(10)? That will make the test runner print out the name of the test it is trying before it runs the test and segfaults. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rudolphv at gmail.com Mon Mar 20 01:43:56 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Mon, 20 Mar 2006 08:43:56 +0200 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441E4D1B.1030709@gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> Message-ID: <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> Robert, This is what I get for a verbosity level of 10... looks pretty much the same to me: Python 2.4.1 (#2, Mar 30 2005, 20:41:35) [GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.4.8' >>> scipy.test(10) Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 95 tests for scipy.stats.stats Found 37 tests for scipy.linalg.decomp Found 89 tests for scipy.sparse.sparse Found 24 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 70 tests for scipy.stats.distributions Found 12 tests for scipy.io.mmio Found 1 tests for scipy.integrate Found 4 tests for scipy.linalg.lapack Found 23 tests for scipy.fftpack.basic Found 2 tests for scipy.optimize.zeros Found 5 tests for scipy.interpolate.fitpack Found 44 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 341 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack 
Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 10 tests for scipy.stats.morestats Found 14 tests for scipy.linalg.blas Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .......caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..FFFFFSegmentation fault On 3/20/06, Robert Kern wrote: > Could you please rerun the tests with a higher verbosity? Like scipy.test(10)? > That will make the test runner print out the name of the test it is trying > before it runs the test and segfaults. > > -- > Robert Kern > robert.kern at gmail.com -- Rudolph van der Merwe From robert.kern at gmail.com Mon Mar 20 01:47:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 00:47:17 -0600 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> Message-ID: <441E4FF5.2030509@gmail.com> Rudolph van der Merwe wrote: > Robert, > > This is what I get for a verbosity level of 10... looks pretty much > the same to me: D'oh! Sorry, try scipy.test(10, 10) -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From rudolphv at gmail.com Mon Mar 20 02:19:37 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Mon, 20 Mar 2006 09:19:37 +0200 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441E4FF5.2030509@gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <441E4FF5.2030509@gmail.com> Message-ID: <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> Robert, scipy.test(10,10) definitely spits out A LOT MORE info. Here is the relevant part: .... check_basic (scipy.io.tests.test_array_import.test_numpyio) Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ... ok check_complex (scipy.io.tests.test_array_import.test_read_array) ... ok check_float (scipy.io.tests.test_array_import.test_read_array) ... ok check_integer (scipy.io.tests.test_array_import.test_read_array) ... ok check_default_a (scipy.linalg.tests.test_fblas.test_caxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_caxpy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_caxpy)caxpy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_caxpy)caxpy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_ccopy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_ccopy)ccopy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_ccopy)ccopy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok check_default_beta_y (scipy.linalg.tests.test_fblas.test_cgemv) ... 
ok check_simple (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_simple_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_x_stride_assert (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_y_stride_assert (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_cscal) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_cscal)cscal:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_cscal) ... ok check_simple (scipy.linalg.tests.test_fblas.test_cswap) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_cswap)cswap:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_cswap)cswap:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok check_default_a (scipy.linalg.tests.test_fblas.test_daxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_daxpy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_daxpy)daxpy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_daxpy)daxpy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_dcopy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_dcopy)dcopy:n=4 ... 
ok check_x_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_dcopy)dcopy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok check_default_beta_y (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_simple_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_x_stride_assert (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_y_stride_assert (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_dscal) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_dscal)dscal:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_dscal) ... ok check_simple (scipy.linalg.tests.test_fblas.test_dswap) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_dswap)dswap:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_dswap)dswap:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok check_default_a (scipy.linalg.tests.test_fblas.test_saxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_saxpy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_saxpy)saxpy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_saxpy)saxpy:n=3 ... 
ok check_y_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_scopy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_scopy)scopy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_scopy)scopy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok check_default_beta_y (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_simple_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_x_stride_assert (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_y_stride_assert (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_sscal) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_sscal)sscal:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_sscal) ... ok check_simple (scipy.linalg.tests.test_fblas.test_sswap) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_sswap)sswap:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_sswap)sswap:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok check_default_a (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zaxpy) ... 
ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zaxpy)zaxpy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_zaxpy)zaxpy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zcopy) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zcopy)zcopy:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_zcopy)zcopy:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok check_default_beta_y (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_simple_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_x_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_y_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zscal) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zscal)zscal:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zscal) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zswap) ... 
ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=3 ... ok check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5Segmentation fault ... Rudolph -- Rudolph van der Merwe From nwagner at mecha.uni-stuttgart.de Mon Mar 20 02:39:06 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 08:39:06 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <441E4FF5.2030509@gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> Message-ID: <441E5C1A.8020600@mecha.uni-stuttgart.de> Rudolph van der Merwe wrote: > Robert, > > scipy.test(10,10) definitely spits out A LOT MORE info. Here is the > relevant part: > > .... > > check_basic (scipy.io.tests.test_array_import.test_numpyio) > Don't worry about a warning regarding the number of bytes read. > Warning: 1000000 bytes requested, 20 bytes read. > ... ok > check_complex (scipy.io.tests.test_array_import.test_read_array) ... ok > check_float (scipy.io.tests.test_array_import.test_read_array) ... ok > check_integer (scipy.io.tests.test_array_import.test_read_array) ... ok > check_default_a (scipy.linalg.tests.test_fblas.test_caxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_caxpy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_caxpy)caxpy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_caxpy)caxpy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_caxpy) ... 
ok > check_simple (scipy.linalg.tests.test_fblas.test_ccopy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_ccopy)ccopy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_ccopy)ccopy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_ccopy) ... ok > check_default_beta_y (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_simple_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_x_stride_assert (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_y_stride_assert (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_cgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_cscal) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_cscal)cscal:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_cscal) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_cswap) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_cswap)cswap:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_cswap)cswap:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_cswap) ... ok > check_default_a (scipy.linalg.tests.test_fblas.test_daxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_daxpy) ... 
ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_daxpy)daxpy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_daxpy)daxpy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_daxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_dcopy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_dcopy)dcopy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_dcopy)dcopy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_dcopy) ... ok > check_default_beta_y (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_simple_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_x_stride_assert (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_y_stride_assert (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_dgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_dscal) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_dscal)dscal:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_dscal) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_dswap) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_dswap)dswap:n=4 > ... 
ok > check_x_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_dswap)dswap:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_dswap) ... ok > check_default_a (scipy.linalg.tests.test_fblas.test_saxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_saxpy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_saxpy)saxpy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_saxpy)saxpy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_saxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_scopy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_scopy)scopy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_scopy)scopy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_scopy) ... ok > check_default_beta_y (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_simple_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_x_stride_assert (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_y_stride_assert (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_sgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_sscal) ... 
ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_sscal)sscal:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_sscal) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_sswap) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_sswap)sswap:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_sswap)sswap:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_sswap) ... ok > check_default_a (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zaxpy)zaxpy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_zaxpy)zaxpy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_zaxpy) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zcopy) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zcopy)zcopy:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_zcopy)zcopy:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_zcopy) ... ok > check_default_beta_y (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_simple_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_simple_transpose_conj (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_x_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... 
ok > check_x_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_y_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zscal) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zscal)zscal:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zscal) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > affine_transform 1 ... FAIL > affine transform 2 ... FAIL > affine transform 3 ... FAIL > affine transform 4 ... FAIL > affine transform 5Segmentation fault > > ... > > Rudolph > > > > -- > Rudolph van der Merwe > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > I cannot reproduce the segfault on a 64-bit machine. Did you try gdb (the GNU debugger) to get further information w.r.t. the segfault? Ran 1123 tests in 50.512s OK >>> scipy.__version__ '0.4.9.1752' >>> numpy.__version__ '0.9.7.2259' Nils From arnd.baecker at web.de Mon Mar 20 03:06:13 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 20 Mar 2006 09:06:13 +0100 (CET) Subject: [SciPy-dev] sandbox missing from scipy-0.4.8.tar.gz Message-ID: Hi, it seems that almost all of the contents of the sandbox are not included in the released tarball. I am not arguing that they really should be included, but just wanted to point this out. In particular, exmplpackage is not included.
Best, Arnd P.S.: FWIW, with svn co -r 1738 http://svn.scipy.org/svn/scipy/trunk/scipy/Lib/sandbox/ scipy_sandbox one can get the sandbox at the time of the release (http://projects.scipy.org/scipy/scipy/timeline) From mmetz at astro.uni-bonn.de Mon Mar 20 03:10:05 2006 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Mon, 20 Mar 2006 09:10:05 +0100 Subject: [SciPy-dev] Bugfixes for Numeric ??? In-Reply-To: <4418494A.3080302@gmail.com> References: <4417E3FC.8070605@astro.uni-bonn.de> <4418494A.3080302@gmail.com> Message-ID: <441E635D.8010101@astro.uni-bonn.de> Robert Kern wrote: > Manuel Metz wrote: > >>Hi, >>is there someone who still maintains bugfixes for Numeric ??? > > > I don't think so. Would you like to? > Hm - sure - I think that at least *real* bugfixes should be patched in for some time ... Hope that Debian and others will switch to numpy soon ... From schofield at ftw.at Mon Mar 20 03:20:43 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 20 Mar 2006 09:20:43 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <441E4FF5.2030509@gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> Message-ID: <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> On 20/03/2006, at 8:19 AM, Rudolph van der Merwe wrote: > scipy.test(10,10) definitely spits out A LOT MORE info. Here is the > relevant part: > > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > affine_transform 1 ... FAIL > affine transform 2 ... FAIL > affine transform 3 ... FAIL > affine transform 4 ... FAIL > affine transform 5Segmentation fault This is occurring in the tests of the new ndimage package.
Try deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/scipy/ndimage/tests/ and re-running the tests. Meanwhile we need some developer with a 64-bit machine to track this down :) -- Ed From nwagner at mecha.uni-stuttgart.de Mon Mar 20 03:39:51 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 09:39:51 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <441E4D1B.1030709@gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <441E4FF5.2030509@gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> Message-ID: <441E6A57.9010603@mecha.uni-stuttgart.de> Ed Schofield wrote: > On 20/03/2006, at 8:19 AM, Rudolph van der Merwe wrote: > > >> scipy.test(10,10) definitely spits out A LOT MORE info. Here is the >> relevant part: >> >> check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok >> affine_transform 1 ... FAIL >> affine transform 2 ... FAIL >> affine transform 3 ... FAIL >> affine transform 4 ... FAIL >> affine transform 5Segmentation fault >> > > This is occurring in the tests of the new ndimage package. Try > deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/scipy/ndimage/tests/ and re-running the tests. > > Meanwhile we need some developer with a 64-bit machine to track this > down :) > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Ed, I would like to assist you with the bug tracking, but I cannot reproduce the segfault. BTW, I cannot get any information about ndimage using help.
Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * >>> help (ndimage) Traceback (most recent call last): File "", line 1, in ? NameError: name 'ndimage' is not defined Nils From arnd.baecker at web.de Mon Mar 20 03:41:49 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 20 Mar 2006 09:41:49 +0100 (CET) Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> Message-ID: On Mon, 20 Mar 2006, Ed Schofield wrote: > On 20/03/2006, at 8:19 AM, Rudolph van der Merwe wrote: > > > scipy.test(10,10) definitely spits out A LOT MORE info. Here is the > > relevant part: > > > > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > > affine_transform 1 ... FAIL > > affine transform 2 ... FAIL > > affine transform 3 ... FAIL > > affine transform 4 ... FAIL > > affine transform 5Segmentation fault > > This is occurring in the tests of the new ndimage package. Try > deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/ > scipy/ndimage/tests/ and re-running the tests. > > Meanwhile we need some developer with a 64-bit machine to track this > down :) I just tried it out, but don't see this segfault. Rudolph, could you maybe do the following: ~> which python ~> gdb the_path_to_python_as_given_by_the_previous_command [...] (gdb) run [...] >>> import scipy >>> scipy.test(1, 10) Then the segfault should bring you back to the debugger. Then type `bt` for a backtrace which sometimes gives some more info and post the result here.... 
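[Editor's note: on Python versions much newer than the 2.3/2.4 interpreters in this thread (3.3 and later), the gdb procedure above can often be approximated with the standard-library faulthandler module — a minimal sketch, not part of the original exchange:]

```python
import faulthandler

# Install C-level handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and
# SIGILL; on a crash they write the Python stack of every thread to
# stderr, which is often enough to locate the failing test.
faulthandler.enable()
print(faulthandler.is_enabled())  # True

# The same effect without editing any code, passed on the command line:
#   python -X faulthandler -c "import scipy; scipy.test()"
```

This only gives the Python-level stack, not the C frames that gdb's `bt` shows, so for pinpointing a line inside an extension module the gdb recipe above remains the right tool.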
Best, Arnd From arnd.baecker at web.de Mon Mar 20 03:59:10 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 20 Mar 2006 09:59:10 +0100 (CET) Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> Message-ID: On Mon, 20 Mar 2006, Arnd Baecker wrote: > On Mon, 20 Mar 2006, Ed Schofield wrote: > > > On 20/03/2006, at 8:19 AM, Rudolph van der Merwe wrote: > > > > > scipy.test(10,10) definitely spits out A LOT MORE info. Here is the > > > relevant part: > > > > > > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > > > affine_transform 1 ... FAIL > > > affine transform 2 ... FAIL > > > affine transform 3 ... FAIL > > > affine transform 4 ... FAIL > > > affine transform 5Segmentation fault > > > > This is occurring in the tests of the new ndimage package. Try > > deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/ > > scipy/ndimage/tests/ and re-running the tests. > > > > Meanwhile we need some developer with a 64-bit machine to track this > > down :) > > I just tried it out, but don't see this segfault. Hmm, maybe because it did not run at all: import scipy.ndimage --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/abaecker/ /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/ndimage/__init__.py 35 from morphology import * 36 ---> 37 from info import __doc__ 38 from numpy.testing import ScipyTest 39 test = ScipyTest().test ImportError: No module named info Commenting out ``from info import __doc__`` I get: In [1]: import scipy.ndimage In [2]: scipy.ndimage.test(10, 10) Found 397 tests for scipy.ndimage [...] 
Found 0 tests for __main__ affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5*** glibc detected *** free(): invalid next size (fast): 0x00000000009dd5b0 *** Aborted gdb backtrace: Program received signal SIGABRT, Aborted. [Switching to Thread 46912507335168 (LWP 13552)] 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 (gdb) bt #0 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 #1 0x00002aaaab360870 in abort () from /lib64/tls/libc.so.6 #2 0x00002aaaab39506e in __libc_message () from /lib64/tls/libc.so.6 #3 0x00002aaaab39a40c in malloc_printerr () from /lib64/tls/libc.so.6 #4 0x00002aaaab39ae9c in free () from /lib64/tls/libc.so.6 #5 0x00002aaaae513e66 in NI_GeometricTransform (input=0x930d40, map=0, map_data=0x0, matrix_ar=0xffffffffffffffff, shift_ar=0x0, coordinates=0x0, output=0x81b420, order=1, mode=4, cval=0) at ni_interpolation.c:644 #6 0x00002aaaae50b7bc in Py_GeometricTransform (obj=0x34f0, args=0x34f0) at nd_image.c:566 #7 0x0000000000478cfb in PyEval_EvalFrame (f=0x717bf0) at ceval.c:3558 #8 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaae4e73b0, globals=0x34f0, locals=0x6, args=0x717bf0, argcount=3, kws=0x6d5f10, kwcount=1, defs=0x2aaaae4ec068, defcount=8, closure=0x0) at ceval.c:2736 #9 0x00000000004788f7 in PyEval_EvalFrame (f=0x6d5d40) at ceval.c:3650 #10 0x0000000000479fb1 in PyEval_EvalFrame (f=0x89ad70) at ceval.c:3640 #11 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab948ea0, globals=0x34f0, locals=0x6, args=0x89ad70, argcount=2, kws=0x91d590, kwcount=0, defs=0x2aaaab950ee8, defcount=1, closure=0x0) at ceval.c:2736 #12 0x00000000004c6099 in function_call (func=0x2aaaab95f7d0, arg=0x2aaaae4f9ef0, kw=0x9564b0) at funcobject.c:548 #13 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #14 0x00000000004772ea in PyEval_EvalFrame (f=0x739540) at ceval.c:3835 #15 0x000000000047ad2f in PyEval_EvalCodeEx 
(co=0x2aaaab948f10, globals=0x34f0, locals=0x6, args=0x739540, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #16 0x00000000004c6099 in function_call (func=0x2aaaab95f848, arg=0x2aaaae4f9ea8, kw=0x0) at funcobject.c:548 #17 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #18 0x0000000000420ee0 in instancemethod_call (func=0x34f0, arg=0x2aaaae4f9ea8, kw=0x0) at classobject.c:2447 #19 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #20 0x00000000004777d9 in PyEval_EvalFrame (f=0x730390) at ceval.c:3766 #21 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab93a490, globals=0x34f0, locals=0x6, args=0x730390, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaab9609a8, defcount=1, closure=0x0) at ceval.c:2736 #22 0x00000000004c6099 in function_call (func=0x2aaaab962d70, arg=0x2aaaae4f9e60, kw=0x0) at funcobject.c:548 #23 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #24 0x0000000000420ee0 in instancemethod_call (func=0x34f0, arg=0x2aaaae4f9e60, kw=0x0) at classobject.c:2447 #25 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #26 0x000000000044fd80 in slot_tp_call (self=0x2aaaae6ad0d0, args=0x2aaaae6b3510, kwds=0x0) at typeobject.c:4536 #27 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #28 0x00000000004777d9 in PyEval_EvalFrame (f=0x7acb10) at ceval.c:3766 #29 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab94d960, globals=0x34f0, locals=0x6, args=0x7acb10, argcount=2, kws=0x924cf0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #30 0x00000000004c6099 in function_call (func=0x2aaaab9610c8, arg=0x2aaaae4f9e18, kw=0x956270) at funcobject.c:548 #31 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #32 0x00000000004772ea in PyEval_EvalFrame (f=0x6e9ec0) at ceval.c:3835 #33 
0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab94d9d0, globals=0x34f0, locals=0x6, args=0x6e9ec0, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #34 0x00000000004c6099 in function_call (func=0x2aaaab961140, arg=0x2aaaae4f9dd0, kw=0x0) at funcobject.c:548 #35 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #36 0x0000000000420ee0 in instancemethod_call (func=0x34f0, arg=0x2aaaae4f9dd0, kw=0x0) at classobject.c:2447 #37 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #38 0x000000000044fd80 in slot_tp_call (self=0x2aaaae4f7150, args=0x2aaaae69bf90, kwds=0x0) at typeobject.c:4536 #39 0x0000000000417700 in PyObject_Call (func=0x34f0, arg=0x34f0, kw=0x6) at abstract.c:1756 #40 0x00000000004777d9 in PyEval_EvalFrame (f=0x80a600) at ceval.c:3766 #41 0x0000000000479fb1 in PyEval_EvalFrame (f=0x6e8bd0) at ceval.c:3640 #42 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab93af10, globals=0x34f0, locals=0x6, args=0x6e8bd0, argcount=3, kws=0x6e8ae0, kwcount=0, defs=0x2aaaab964728, defcount=2, closure=0x0) at ceval.c:2736 #43 0x00000000004788f7 in PyEval_EvalFrame (f=0x6e8940) at ceval.c:3650 #44 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaadb37340, globals=0x34f0, locals=0x6, args=0x6e8940, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #45 0x000000000047af72 in PyEval_EvalCode (co=0x34f0, globals=0x34f0, locals=0x6) at ceval.c:484 #46 0x00000000004a1c72 in PyRun_InteractiveOneFlags (fp=0x2aaaaab132d0, filename=0x4cbf24 "", flags=0x7fffffe053ac) at pythonrun.c:1265 Does this already help? Best, Arnd From rudolphv at gmail.com Mon Mar 20 04:55:20 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Mon, 20 Mar 2006 11:55:20 +0200 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). 
In-Reply-To: References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> Message-ID: <97670e910603200155p56de2928kf207376b2b3c1aba@mail.gmail.com> Arnd, Please see attached a full text dump of the GDB backtrace you requested. The last part of the file is shown below: affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 182902372528 (LWP 19427)] 0x00000000004284b1 in PyList_AsTuple () (gdb) bt #0 0x00000000004284b1 in PyList_AsTuple () #1 0x00000000004665d0 in PyEval_CallObjectWithKeywords () #2 0x0000000000464ad8 in PyEval_EvalFrame () #3 0x00000000004669a4 in PyEval_CallObjectWithKeywords () #4 0x00000000004665f2 in PyEval_CallObjectWithKeywords () #5 0x0000000000464ad8 in PyEval_EvalFrame () #6 0x0000000000465618 in PyEval_EvalCodeEx () #7 0x00000000004ba73a in PyStaticMethod_New () #8 0x0000000000417020 in PyObject_Call () #9 0x0000000000466cf8 in PyEval_CallObjectWithKeywords () #10 0x0000000000464ec2 in PyEval_EvalFrame () #11 0x0000000000465618 in PyEval_EvalCodeEx () #12 0x00000000004ba73a in PyStaticMethod_New () #13 0x0000000000417020 in PyObject_Call () #14 0x000000000041d52e in PyMethod_Fini () #15 0x0000000000417020 in PyObject_Call () #16 0x0000000000466a84 in PyEval_CallObjectWithKeywords () #17 0x0000000000466575 in PyEval_CallObjectWithKeywords () #18 0x0000000000464ad8 in PyEval_EvalFrame () #19 0x0000000000465618 in PyEval_EvalCodeEx () #20 0x00000000004ba73a in PyStaticMethod_New () #21 0x0000000000417020 in PyObject_Call () #22 0x000000000041d52e in PyMethod_Fini () #23 0x0000000000417020 in PyObject_Call () #24 0x000000000044950f in _PyObject_SlotCompare () #25 0x0000000000417020 in PyObject_Call () #26 
0x0000000000466a84 in PyEval_CallObjectWithKeywords () #27 0x0000000000466575 in PyEval_CallObjectWithKeywords () #28 0x0000000000464ad8 in PyEval_EvalFrame () #29 0x0000000000465618 in PyEval_EvalCodeEx () #30 0x00000000004ba73a in PyStaticMethod_New () #31 0x0000000000417020 in PyObject_Call () #32 0x0000000000466cf8 in PyEval_CallObjectWithKeywords () #33 0x0000000000464ec2 in PyEval_EvalFrame () #34 0x0000000000465618 in PyEval_EvalCodeEx () #35 0x00000000004ba73a in PyStaticMethod_New () #36 0x0000000000417020 in PyObject_Call () #37 0x000000000041d52e in PyMethod_Fini () #38 0x0000000000417020 in PyObject_Call () #39 0x000000000044950f in _PyObject_SlotCompare () #40 0x0000000000417020 in PyObject_Call () #41 0x0000000000466a84 in PyEval_CallObjectWithKeywords () #42 0x0000000000466575 in PyEval_CallObjectWithKeywords () #43 0x0000000000464ad8 in PyEval_EvalFrame () #44 0x00000000004669a4 in PyEval_CallObjectWithKeywords () #45 0x00000000004665f2 in PyEval_CallObjectWithKeywords () #46 0x0000000000464ad8 in PyEval_EvalFrame () #47 0x0000000000465618 in PyEval_EvalCodeEx () #48 0x0000000000466921 in PyEval_CallObjectWithKeywords () #49 0x00000000004665f2 in PyEval_CallObjectWithKeywords () #50 0x0000000000464ad8 in PyEval_EvalFrame () #51 0x0000000000465618 in PyEval_EvalCodeEx () #52 0x0000000000466921 in PyEval_CallObjectWithKeywords () #53 0x00000000004665f2 in PyEval_CallObjectWithKeywords () #54 0x0000000000464ad8 in PyEval_EvalFrame () #55 0x0000000000465618 in PyEval_EvalCodeEx () #56 0x00000000004680e2 in PyEval_EvalCode () #57 0x0000000000494c39 in PyRun_FileExFlags () #58 0x000000000049426b in PyRun_InteractiveOneFlags () #59 0x000000000049405e in PyRun_InteractiveLoopFlags () #60 0x0000000000495563 in PyRun_AnyFileExFlags () #61 0x00000000004104cd in Py_Main () #62 0x0000002a95b293c1 in __libc_start_main () from /lib/libc.so.6 #63 0x000000000040fe6a in _start () #64 0x0000007fbffffac8 in ?? 
() Rudolph On 3/20/06, Arnd Baecker wrote: > On Mon, 20 Mar 2006, Ed Schofield wrote: > > > On 20/03/2006, at 8:19 AM, Rudolph van der Merwe wrote: > > > > > scipy.test(10,10) definitely spits out A LOT MORE info. Here is the > > > relevant part: > > > > > > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > > > affine_transform 1 ... FAIL > > > affine transform 2 ... FAIL > > > affine transform 3 ... FAIL > > > affine transform 4 ... FAIL > > > affine transform 5Segmentation fault > > > > This is occurring in the tests of the new ndimage package. Try > > deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/ > > scipy/ndimage/tests/ and re-running the tests. > > > > Meanwhile we need some developer with a 64-bit machine to track this > > down :) > > I just tried it out, but don't see this segfault. > > Rudolph, could you maybe do the following: > > ~> which python > > ~> gdb the_path_to_python_as_given_by_the_previous_command > [...] > (gdb) run > [...] > >>> import scipy > >>> scipy.test(1, 10) > > Then the segfault should bring you back to the debugger. > Then type `bt` for a backtrace which sometimes > gives some more info and post the result here.... > > Best, Arnd > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- Rudolph van der Merwe -------------- next part -------------- A non-text attachment was scrubbed... Name: dump.zip Type: application/zip Size: 5612 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Mon Mar 20 07:39:46 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 13:39:46 +0100 Subject: [SciPy-dev] logarithmic determinant in an absolute value sense Message-ID: <441EA292.2000406@mecha.uni-stuttgart.de> Hi all, I hope this function will be of interest. Any comment would be appreciated. 
Nils

from scipy import *

n = 200
A_1 = 1.e-2*(rand(n,n)+rand(n,n)*1j)
A_2 = 1.e-3*(rand(n,n)+rand(n,n)*1j)

def logabsdet(A):
    "Return the logarithmic determinant in an absolute value sense"
    p,l,u = linalg.lu(A)
    return sum(log(abs(diag(u))))

res_1 = logabsdet(A_1)
res_2 = logabsdet(A_2)

print res_1, log(abs(linalg.det(A_1)))
print res_2, log(abs(linalg.det(A_2)))

From robert.kern at gmail.com Mon Mar 20 12:17:52 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 11:17:52 -0600 Subject: [SciPy-dev] sandbox missing from scipy-0.4.8.tar.gz In-Reply-To: References: Message-ID: <441EE3C0.7090700@gmail.com> Arnd Baecker wrote: > Hi, > > it seems that almost all of the contents of the sandbox are > not included in the released tarball. > > I am not arguing that they really should be included, > but just wanted to point this out. > In particular, exmplpackage is not included. The tarball is probably created using the distutils command sdist. sdist only knows about the files that are listed in setup.py. Since the packages in the sandbox are not built by default, they don't show up in the sdist either. That's not ideal, we know. It might be worthwhile to make source tarballs using "svn export" instead. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Tue Mar 21 00:37:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 20 Mar 2006 22:37:37 -0700 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu).
In-Reply-To: References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> Message-ID: <441F9121.40307@ieee.org> Arnd Baecker wrote: > >>>> check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok >>>> affine_transform 1 ... FAIL >>>> affine transform 2 ... FAIL >>>> affine transform 3 ... FAIL >>>> affine transform 4 ... FAIL >>>> affine transform 5Segmentation fault >>>> >>> This is occurring in the tests of the new ndimage package. Try >>> deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/ >>> scipy/ndimage/tests/ and re-running the tests. >>> >>> Commenting out ``from info import __doc__`` I get: >>> In [1]: import scipy.ndimage >>> In [2]: scipy.ndimage.test(10, 10) >>> Found 397 tests for scipy.ndimage >>> [...] >>> Found 0 tests for __main__ >>> affine_transform 1 ... FAIL >>> affine transform 2 ... FAIL >>> affine transform 3 ... FAIL >>> affine transform 4 ... FAIL >>> affine transform 5*** glibc detected *** free(): invalid next size (fast): >>> 0x00000000009dd5b0 *** >>> Aborted >>> >>> gdb backtrace: >>> >>> Program received signal SIGABRT, Aborted. 
>>> [Switching to Thread 46912507335168 (LWP 13552)] >>> 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 >>> (gdb) bt >>> #0 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 >>> #1 0x00002aaaab360870 in abort () from /lib64/tls/libc.so.6 >>> #2 0x00002aaaab39506e in __libc_message () from /lib64/tls/libc.so.6 >>> #3 0x00002aaaab39a40c in malloc_printerr () from /lib64/tls/libc.so.6 >>> #4 0x00002aaaab39ae9c in free () from /lib64/tls/libc.so.6 >>> #5 0x00002aaaae513e66 in NI_GeometricTransform (input=0x930d40, map=0, >>> map_data=0x0, matrix_ar=0xffffffffffffffff, >>> shift_ar=0x0, coordinates=0x0, output=0x81b420, order=1, mode=4, >>> cval=0) at ni_interpolation.c:644 >>> #6 0x00002aaaae50b7bc in Py_GeometricTransform (obj=0x34f0, args=0x34f0) >>> at nd_image.c:566 >>> Yes, this helps. But, it would be nice if you could post all the compilation warnings for the ndimage package encountered during a scipy build. It's an issue with maybelong being used in the definition and then int* later. I've fixed what I found, but other issues may linger. It looks like ndimage was not 64-bit ready. -Travis From nwagner at mecha.uni-stuttgart.de Tue Mar 21 02:37:16 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 08:37:16 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441F9121.40307@ieee.org> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> Message-ID: <441FAD2C.905@mecha.uni-stuttgart.de> Travis Oliphant wrote: > Arnd Baecker wrote: > >> >> >>>>> check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok >>>>> affine_transform 1 ... FAIL >>>>> affine transform 2 ... FAIL >>>>> affine transform 3 ... FAIL >>>>> affine transform 4 ... 
FAIL >>>>> affine transform 5Segmentation fault >>>>> >>>>> >>>> This is occurring in the tests of the new ndimage package. Try >>>> deleting the file test_ndimage.py in /usr/lib/python2.4/site-packages/ >>>> scipy/ndimage/tests/ and re-running the tests. >>>> >>>> Commenting out ``from info import __doc__`` I get: >>>> In [1]: import scipy.ndimage >>>> In [2]: scipy.ndimage.test(10, 10) >>>> Found 397 tests for scipy.ndimage >>>> [...] >>>> Found 0 tests for __main__ >>>> affine_transform 1 ... FAIL >>>> affine transform 2 ... FAIL >>>> affine transform 3 ... FAIL >>>> affine transform 4 ... FAIL >>>> affine transform 5*** glibc detected *** free(): invalid next size (fast): >>>> 0x00000000009dd5b0 *** >>>> Aborted >>>> >>>> gdb backtrace: >>>> >>>> Program received signal SIGABRT, Aborted. >>>> [Switching to Thread 46912507335168 (LWP 13552)] >>>> 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 >>>> (gdb) bt >>>> #0 0x00002aaaab35f43a in raise () from /lib64/tls/libc.so.6 >>>> #1 0x00002aaaab360870 in abort () from /lib64/tls/libc.so.6 >>>> #2 0x00002aaaab39506e in __libc_message () from /lib64/tls/libc.so.6 >>>> #3 0x00002aaaab39a40c in malloc_printerr () from /lib64/tls/libc.so.6 >>>> #4 0x00002aaaab39ae9c in free () from /lib64/tls/libc.so.6 >>>> #5 0x00002aaaae513e66 in NI_GeometricTransform (input=0x930d40, map=0, >>>> map_data=0x0, matrix_ar=0xffffffffffffffff, >>>> shift_ar=0x0, coordinates=0x0, output=0x81b420, order=1, mode=4, >>>> cval=0) at ni_interpolation.c:644 >>>> #6 0x00002aaaae50b7bc in Py_GeometricTransform (obj=0x34f0, args=0x34f0) >>>> at nd_image.c:566 >>>> >>>> > > Yes, this helps. > > But, it would be nice if you could post all the compilation warnings for > the ndimage package encountered during a scipy build. It's an issue > with maybelong being used in the definition and then int* later. > > I've fixed what I found, but other issues may linger. It looks like > ndimage was not 64-bit ready. 
> > -Travis > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > >>> scipy.__version__ '0.4.9.1757' scipy.test(1,10) (on a 64bit machine) results in geometric transform 1 ... FAIL geometric transform 2 ... FAIL geometric transform 3 ... FAIL geometric transform 4 ... FAIL geometric transform 5 ... FAIL geometric transform 6 ... FAIL geometric transform 7 ... FAIL geometric transform 8 ... FAIL geometric transform 10 ... FAIL geometric transform 13 ... FAIL geometric transform 14 ... FAIL geometric transform 15 ... FAIL geometric transform 16 ... FAIL geometric transform 17 ... FAIL geometric transform 18 ... FAIL geometric transform 19 ... FAIL geometric transform 20 ... FAIL geometric transform 21 ... FAIL geometric transform 22 ... FAIL geometric transform 23 ... FAIL geometric transform 24 ... FAIL histogram 1 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 16384 (LWP 16492)] 0x00002aaaadf4c6d6 in NI_Histogram (input=0xd3d7d0, labels=0x0, min_label=-1, max_label=2, indices=0x0, n_results=1, histograms=0x9b35b0, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 752 ph[jj][kk] = 0; (gdb) bt #0 0x00002aaaadf4c6d6 in NI_Histogram (input=0xd3d7d0, labels=0x0, min_label=-1, max_label=2, indices=0x0, n_results=1, histograms=0x9b35b0, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 #1 0x00002aaaadf4224d in Py_Histogram (obj=, args=) at nd_image.c:1103 #2 0x00002aaaaac5496a in PyEval_EvalFrame (f=0x606030) at ceval.c:3547 #3 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae05c340, globals=, locals=, args=0x2, argcount=4, kws=0x980578, kwcount=0, defs=0x2aaaade17a88, defcount=2, closure=0x0) at ceval.c:2730 #4 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x9803a0) at ceval.c:3640 #5 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x976960) at ceval.c:3629 #6 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeed50, globals=, 
locals=, args=0x2aaab6555de8, argcount=2, kws=0xb31d10, kwcount=0, defs=0x2aaaabc060e8, defcount=1, closure=0x0) at ceval.c:2730 #7 0x00002aaaaac0e9af in function_call (func=0x2aaaabc05758, arg=0x2aaab6555dd0, kw=) at funcobject.c:548 #8 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #9 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0xc3d4a0) at ceval.c:3824 #10 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeedc0, globals=, locals=, args=0x2aaab5b43890, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #11 0x00002aaaaac0e9af in function_call (func=0x2aaaabc057d0, arg=0x2aaab5b43878, kw=) at funcobject.c:548 #12 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #13 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5b43878, kw=0x0) at classobject.c:2431 #14 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #15 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x6283e0) at ceval.c:3755 #16 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe4340, globals=, locals=, args=0x2aaab5b710b0, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaabc06b68, defcount=1, closure=0x0) at ceval.c:2730 #17 0x00002aaaaac0e9af in function_call (func=0x2aaaabc09cf8, arg=0x2aaab5b71098, kw=) at funcobject.c:548 #18 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 Nils From arnd.baecker at web.de Tue Mar 21 04:00:29 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 21 Mar 2006 10:00:29 +0100 (CET) Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441F9121.40307@ieee.org> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> Message-ID: Hi, On Mon, 20 Mar 2006, Travis Oliphant wrote: [...] > Yes, this helps. 
> > But, it would be nice if you could post all the compilation warnings for > the ndimage package encountered during a scipy build. It's an issue > with maybelong being used in the definition and then int* later. > > I've fixed what I found, but other issues may linger. It looks like > ndimage was not 64-bit ready. There seem to be some further ones: Found 0 tests for __main__ affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5 ... FAIL affine transform 6 ... FAIL affine transform 7 ... FAIL affine transform 8 ... FAIL affine transform 9 ... FAIL affine transform 10 ... FAIL affine transform 11 ... FAIL affine transform 12 ... FAIL affine transform 13 ... FAIL affine transform 14 ... FAIL affine transform 15 ... FAIL affine transform 16 ... FAIL affine transform 17 ... FAIL affine transform 18 ... FAIL affine transform 19 ... FAIL affine transform 20 ... FAIL affine transform 21 ... FAIL binary closing 1 ... ok [...] correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ERROR brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1Segmentation fault The backtrace gives: brute force distance transform 6 ... ok chamfer type distance transform 1 Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 46912507335168 (LWP 29654)] NI_DistanceTransformOnePass (strct=0x20, distances=0x2, features=0x894790) at ni_morphology.c:732 732 maybelong offset = oo[ii]; (gdb) bt #0 NI_DistanceTransformOnePass (strct=0x20, distances=0x2, features=0x894790) at ni_morphology.c:732 #1 0x00002aaaae50f235 in Py_DistanceTransformOnePass (obj=0xffffffff, args=0xffffffff) at nd_image.c:1154 #2 0x0000000000478cfb in PyEval_EvalFrame (f=0x718290) at ceval.c:3558 #3 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaae4f5960, globals=0x40, locals=0x1, args=0x718290, argcount=2, kws=0x6d6170, kwcount=1, defs=0x2aaaae4f15a8, defcount=5, closure=0x0) at ceval.c:2736 #4 0x00000000004788f7 in PyEval_EvalFrame (f=0x6d5f90) at ceval.c:3650 #5 0x0000000000479fb1 in PyEval_EvalFrame (f=0x89af10) at ceval.c:3640 #6 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab948ea0, globals=0x40, locals=0x1, args=0x89af10, argcount=2, kws=0x6eac40, kwcount=0, defs=0x2aaaab94ff68, defcount=1, closure=0x0) at ceval.c:2736 #7 0x00000000004c6099 in function_call (func=0x2aaaab9607d0, arg=0x2aaaae4fbe60, kw=0x956d50) at funcobject.c:548 #8 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #9 0x00000000004772ea in PyEval_EvalFrame (f=0x739960) at ceval.c:3835 #10 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab948f10, globals=0x40, locals=0x1, args=0x739960, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #11 0x00000000004c6099 in function_call (func=0x2aaaab960848, arg=0x2aaaae4fbe18, kw=0x0) at funcobject.c:548 #12 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #13 0x0000000000420ee0 in instancemethod_call (func=0xffffffff, arg=0x2aaaae4fbe18, kw=0x0) at classobject.c:2447 #14 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #15 0x00000000004777d9 in PyEval_EvalFrame (f=0x7307b0) at ceval.c:3766 #16 0x000000000047ad2f in 
PyEval_EvalCodeEx (co=0x2aaaab93a490, globals=0x40, locals=0x1, args=0x7307b0, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaab961a28, defcount=1, closure=0x0) at ceval.c:2736 #17 0x00000000004c6099 in function_call (func=0x2aaaab963d70, arg=0x2aaaae4fbdd0, kw=0x0) at funcobject.c:548 #18 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #19 0x0000000000420ee0 in instancemethod_call (func=0xffffffff, arg=0x2aaaae4fbdd0, kw=0x0) at classobject.c:2447 #20 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #21 0x000000000044fd80 in slot_tp_call (self=0x2aaaae6b1590, args=0x2aaaae6b5690, kwds=0x0) at typeobject.c:4536 #22 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #23 0x00000000004777d9 in PyEval_EvalFrame (f=0x7ad090) at ceval.c:3766 #24 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab94c960, globals=0x40, locals=0x1, args=0x7ad090, argcount=2, kws=0x925590, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #25 0x00000000004c6099 in function_call (func=0x2aaaab9620c8, arg=0x2aaaae4fbd88, kw=0x956b10) at funcobject.c:548 #26 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #27 0x00000000004772ea in PyEval_EvalFrame (f=0x6ea9d0) at ceval.c:3835 #28 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab94c9d0, globals=0x40, locals=0x1, args=0x6ea9d0, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #29 0x00000000004c6099 in function_call (func=0x2aaaab962140, arg=0x2aaaae4fbd40, kw=0x0) at funcobject.c:548 #30 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #31 0x0000000000420ee0 in instancemethod_call (func=0xffffffff, arg=0x2aaaae4fbd40, kw=0x0) at classobject.c:2447 #32 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #33 0x000000000044fd80 in slot_tp_call 
(self=0x2aaaae6a40d0, args=0x2aaaae6a4150, kwds=0x0) at typeobject.c:4536 #34 0x0000000000417700 in PyObject_Call (func=0xffffffff, arg=0x40, kw=0x1) at abstract.c:1756 #35 0x00000000004777d9 in PyEval_EvalFrame (f=0x80ab20) at ceval.c:3766 #36 0x0000000000479fb1 in PyEval_EvalFrame (f=0x6e8e20) at ceval.c:3640 #37 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab93af10, globals=0x40, locals=0x1, args=0x6e8e20, argcount=3, kws=0x6e8d30, kwcount=0, defs=0x2aaaab964728, defcount=2, closure=0x0) at ceval.c:2736 #38 0x00000000004788f7 in PyEval_EvalFrame (f=0x6e8b90) at ceval.c:3650 #39 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaadb3a3b0, globals=0x40, locals=0x1, args=0x6e8b90, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #40 0x000000000047af72 in PyEval_EvalCode (co=0xffffffff, globals=0x40, locals=0x1) at ceval.c:484 #41 0x00000000004a1c72 in PyRun_InteractiveOneFlags (fp=0x2aaaaab132d0, filename=0x4cbf24 "", flags=0x7fffffe7903c) at pythonrun.c:1265 #42 0x00000000004a1e04 in PyRun_InteractiveLoopFlags (fp=0x2aaaab556b00, filename=0x4cbf24 "", flags=0x7fffffe7903c) at pythonrun.c:695 #43 0x00000000004a2350 in PyRun_AnyFileExFlags (fp=0x2aaaab556b00, filename=0x40
, closeit=0, flags=0x7fffffe7903c) at pythonrun.c:658 #44 0x0000000000410788 in Py_Main (argc=0, argv=0x7fffffe7a937) at main.c:484 #45 0x00002aaaab34d5aa in __libc_start_main () from /lib64/tls/libc.so.6 #46 0x000000000040fdfa in _start () at start.S:113 #47 0x00007fffffe79138 in ?? () #48 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2 #49 0x0000000000000001 in ?? () And the compile part of ndimage: building 'scipy.ndimage._nd_image' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' creating build/temp.linux-x86_64-2.4/Lib/ndimage creating build/temp.linux-x86_64-2.4/Lib/ndimage/src compile options: '-ILib/ndimage/src -I/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/include -I/scr/python/include/python2.4 -c' gcc: Lib/ndimage/src/ni_morphology.c Lib/ndimage/src/ni_morphology.c: In function `NI_BinaryErosion': Lib/ndimage/src/ni_morphology.c:114: warning: passing arg 7 of `NI_InitFilterOffsets' from incompatible pointer type Lib/ndimage/src/ni_morphology.c: In function `NI_DistanceTransformBruteForce': Lib/ndimage/src/ni_morphology.c:546: warning: assignment from incompatible pointer type Lib/ndimage/src/ni_morphology.c: In function `NI_DistanceTransformOnePass': Lib/ndimage/src/ni_morphology.c:714: warning: passing arg 7 of `NI_InitFilterOffsets' from incompatible pointer type gcc: Lib/ndimage/src/ni_support.c Lib/ndimage/src/ni_support.c: In function `NI_CoordinateListAddBlock': Lib/ndimage/src/ni_support.c:707: warning: assignment from incompatible pointer type gcc: Lib/ndimage/src/ni_fourier.c gcc: Lib/ndimage/src/numcompat.c gcc: Lib/ndimage/src/ni_measure.c gcc: Lib/ndimage/src/ni_interpolation.c Lib/ndimage/src/ni_interpolation.c: In function `NI_GeometricTransform': Lib/ndimage/src/ni_interpolation.c:463: warning: passing arg 1 of pointer to function from incompatible pointer type Lib/ndimage/src/ni_interpolation.c:551: warning: 
initialization from incompatible pointer type Lib/ndimage/src/ni_interpolation.c:570: warning: initialization from incompatible pointer type Lib/ndimage/src/ni_interpolation.c: In function `NI_ZoomShift': Lib/ndimage/src/ni_interpolation.c:709: warning: assignment from incompatible pointer type Lib/ndimage/src/ni_interpolation.c:843: warning: initialization from incompatible pointer type Lib/ndimage/src/ni_interpolation.c:864: warning: initialization from incompatible pointer type gcc: Lib/ndimage/src/nd_image.c Lib/ndimage/src/nd_image.c: In function `Py_RankFilter': Lib/ndimage/src/nd_image.c:253: warning: passing arg 7 of `NI_RankFilter' from incompatible pointer type gcc: Lib/ndimage/src/ni_filters.c Lib/ndimage/src/ni_filters.c: In function `NI_RankFilter': Lib/ndimage/src/ni_filters.c:629: warning: passing arg 4 of `NI_InitFilterOffsets' from incompatible pointer type Lib/ndimage/src/ni_filters.c:633: warning: passing arg 5 of `NI_InitFilterIterator' from incompatible pointer type gcc -pthread -shared build/temp.linux-x86_64-2.4/Lib/ndimage/src/nd_image.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_filters.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_fourier.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_interpolation.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_measure.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/numcompat.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_morphology.o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_support.o -Lbuild/temp.linux-x86_64-2.4 -o build/lib.linux-x86_64-2.4/scipy/ndimage/_nd_image.so /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['mkl', 'vml', 'guide'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] found_libs=[] 
warnings.warn("Library error: libs=%s found_libs=%s" % \ Looking through the build log shows two further possible pointer problems: gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-DATLAS_INFO="\"3.7.11\"" -I/scr/python/include -Ibuild/src -I/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/include -I/scr/python/include/python2.4 -c' gcc: build/src/build/src/scipy/linalg/flapackmodule.c build/src/build/src/scipy/linalg/flapackmodule.c: In function `f2py_rout_flapack_cheev': build/src/build/src/scipy/linalg/flapackmodule.c:9761: warning: passing arg 6 of pointer to function from incompatible pointer type build/src/build/src/scipy/linalg/flapackmodule.c: In function `f2py_rout_flapack_zheev': build/src/build/src/scipy/linalg/flapackmodule.c:9945: warning: passing arg 6 of pointer to function from incompatible pointer type /scr/python/bin/g77 -shared build/temp.linux-x86_64-2.4/build/src/build/src/scipy/linalg/flapackmodule.o build/temp.linux-x86_64-2.4/build/src/fortranobject.o -L/scr/python/lib64 -Lbuild/temp.linux-x86_64-2.4 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.4/scipy/linalg/flapack.so building 'scipy.linalg.clapack' extension Unfortunately I don't have time to look into any of these myself as I have some pressing deadlines. (Though I will be able to squeeze the above type of testing in, so just let me know when I should give it another try). Best, Arnd From oliphant.travis at ieee.org Tue Mar 21 05:11:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 21 Mar 2006 03:11:51 -0700 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). 
In-Reply-To: References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> Message-ID: <441FD167.4060109@ieee.org> Arnd Baecker wrote: > [...] > Try the test again with new SVN. -Travis From nwagner at mecha.uni-stuttgart.de Tue Mar 21 05:25:55 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 11:25:55 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441FD167.4060109@ieee.org> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> <441FD167.4060109@ieee.org> Message-ID: <441FD4B3.2090906@mecha.uni-stuttgart.de> Travis Oliphant wrote: > Arnd Baecker wrote: > >> [...] >> >> > > Try the test again with new SVN. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > >>> scipy.__version__ '0.4.9.1757' affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5 ... FAIL affine transform 6 ... FAIL affine transform 7 ... FAIL affine transform 8 ... FAIL affine transform 9 ... FAIL affine transform 10 ... FAIL affine transform 11 ... FAIL affine transform 12 ... FAIL affine transform 13 ... FAIL affine transform 14 ... FAIL affine transform 15 ... FAIL affine transform 16 ... FAIL affine transform 17 ... FAIL affine transform 18 ... FAIL affine transform 19 ... FAIL affine transform 20 ... FAIL affine transform 21 ... FAIL brute force distance transform 1 ... ok brute force distance transform 2 ... 
ok brute force distance transform 3 ... ok brute force distance transform 4 ... ERROR brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ERROR euclidean distance transform 1 ... ok euclidean distance transform 2 ... ERROR euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok geometric transform 1 ... FAIL geometric transform 2 ... FAIL geometric transform 3 ... FAIL geometric transform 4 ... FAIL geometric transform 5 ... FAIL geometric transform 6 ... FAIL geometric transform 7 ... FAIL geometric transform 8 ... FAIL geometric transform 10 ... FAIL geometric transform 13 ... FAIL geometric transform 14 ... FAIL geometric transform 15 ... FAIL geometric transform 16 ... FAIL geometric transform 17 ... FAIL geometric transform 18 ... FAIL geometric transform 19 ... FAIL geometric transform 20 ... FAIL geometric transform 21 ... FAIL geometric transform 22 ... FAIL geometric transform 23 ... FAIL geometric transform 24 ... FAIL histogram 1 Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 16384 (LWP 22494)] 0x00002aaaadf4c6d6 in NI_Histogram (input=0x9d71e0, labels=0x0, min_label=-1, max_label=2, indices=0x0, n_results=1, histograms=0xdb18b0, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 752 ph[jj][kk] = 0; (gdb) bt #0 0x00002aaaadf4c6d6 in NI_Histogram (input=0x9d71e0, labels=0x0, min_label=-1, max_label=2, indices=0x0, n_results=1, histograms=0xdb18b0, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 #1 0x00002aaaadf4224d in Py_Histogram (obj=, args=) at nd_image.c:1103 #2 0x00002aaaaac5496a in PyEval_EvalFrame (f=0x605f10) at ceval.c:3547 #3 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaae05c340, globals=, locals=, args=0x2, argcount=4, kws=0x980638, kwcount=0, defs=0x2aaaade17a88, defcount=2, closure=0x0) at ceval.c:2730 #4 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x980460) at ceval.c:3640 #5 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x976a20) at ceval.c:3629 #6 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeed50, globals=, locals=, args=0x2aaab6555de8, argcount=2, kws=0x69afc0, kwcount=0, defs=0x2aaaabc060a8, defcount=1, closure=0x0) at ceval.c:2730 #7 0x00002aaaaac0e9af in function_call (func=0x2aaaabc05758, arg=0x2aaab6555dd0, kw=) at funcobject.c:548 #8 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #9 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0xc43c20) at ceval.c:3824 #10 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeedc0, globals=, locals=, args=0x2aaab5b43890, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #11 0x00002aaaaac0e9af in function_call (func=0x2aaaabc057d0, arg=0x2aaab5b43878, kw=) at funcobject.c:548 #12 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #13 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5b43878, kw=0x0) at classobject.c:2431 #14 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #15 0x00002aaaaac5380d in PyEval_EvalFrame 
(f=0x628480) at ceval.c:3755 #16 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe4340, globals=, locals=, args=0x2aaab5b710b0, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaabc06b28, defcount=1, closure=0x0) at ceval.c:2730 #17 0x00002aaaaac0e9af in function_call (func=0x2aaaabc09cf8, arg=0x2aaab5b71098, kw=) at funcobject.c:548 #18 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #19 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaab5b71098, kw=0x0) at classobject.c:2431 #20 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #21 0x00002aaaaac33b0a in slot_tp_call (self=, args=0x2aaab577fed0, kwds=0x0) at typeobject.c:4526 #22 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #23 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x61f560) at ceval.c:3755 #24 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbf5810, globals=, locals=, args=0x2aaab5b3bda0, argcount=2, kws=0xdcb140, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #25 0x00002aaaaac0e9af in function_call (func=0x2aaaabc07050, arg=0x2aaab5b3bd88, kw=) at funcobject.c:548 #26 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #27 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0xb5d960) at ceval.c:3824 From nwagner at mecha.uni-stuttgart.de Tue Mar 21 05:37:16 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 11:37:16 +0100 Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu). In-Reply-To: <441FD167.4060109@ieee.org> References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> <441FD167.4060109@ieee.org> Message-ID: <441FD75C.60905@mecha.uni-stuttgart.de> Travis Oliphant wrote: > Arnd Baecker wrote: > >> [...] 
>>
>>
>
> Try the test again with new SVN.
>
> -Travis
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

Now python setup.py install results in

Lib/ndimage/src/ni_interpolation.c:348: error: conflicting types for 'NI_GeometricTransform'
Lib/ndimage/src/ni_interpolation.h:39: error: previous declaration of 'NI_GeometricTransform' was here
Lib/ndimage/src/ni_interpolation.c:348: error: conflicting types for 'NI_GeometricTransform'
Lib/ndimage/src/ni_interpolation.h:39: error: previous declaration of 'NI_GeometricTransform' was here
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -ILib/ndimage/src -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/ndimage/src/ni_interpolation.c -o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_interpolation.o" failed with exit status 1

Nils

From arnd.baecker at web.de Tue Mar 21 08:08:09 2006
From: arnd.baecker at web.de (Arnd Baecker)
Date: Tue, 21 Mar 2006 14:08:09 +0100 (CET)
Subject: [SciPy-dev] Scipy 0.4.8 segfaults on 64bit Linux (Ubuntu).
In-Reply-To: <441FD167.4060109@ieee.org>
References: <97670e910603192229y5ea996e6x2dcc0d4b849f4b5b@mail.gmail.com> <97670e910603192243u134d80c8o1696896083962962@mail.gmail.com> <97670e910603192319t21d9a5b9n32c223d526829d09@mail.gmail.com> <13CDFC09-77F2-40B7-B61F-74821F91B993@ftw.at> <441F9121.40307@ieee.org> <441FD167.4060109@ieee.org>
Message-ID:

On Tue, 21 Mar 2006, Travis Oliphant wrote:

> Arnd Baecker wrote:
> > [...]
> >
>
> Try the test again with new SVN.

I fixed the header files (hope I got it right - please check) and now the compile works fine, but still a segfault on test:

[...]
brute force distance transform 4 ... ERROR
brute force distance transform 5 ... ok
brute force distance transform 6 ... ok
chamfer type distance transform 1 ... ok
chamfer type distance transform 2 ...
ok chamfer type distance transform 3 ... ERROR euclidean distance transform 1 ... ok euclidean distance transform 2 ... ERROR [...] grey erosion 3 ... ok grey opening 1 ... ok grey opening 2 ... ok histogram 1Segmentation fault The backtrace gives grey opening 2 ... ok histogram 1 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912507335168 (LWP 22377)] NI_Histogram (input=0x96bfc0, labels=0x0, min_label=-1, max_label=46912557167512, indices=0x0, n_results=1, histograms=0x935430, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 752 ph[jj][kk] = 0; (gdb) print jj $1 = 0 (gdb) print kk $2 = 32612 (gdb) print ph $3 = (Int32 **) 0x9313a0 (gdb) bt #0 NI_Histogram (input=0x96bfc0, labels=0x0, min_label=-1, max_label=46912557167512, indices=0x0, n_results=1, histograms=0x935430, min=0, max=10, nbins=46909632806922) at ni_measure.c:752 #1 0x00002aaaae50ef81 in Py_Histogram (obj=0x96c050, args=0x96c050) at nd_image.c:1103 #2 0x0000000000478cfb in PyEval_EvalFrame (f=0x717f30) at ceval.c:3558 #3 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaae4e9c70, globals=0x8d6080, locals=0x7f3c, args=0x717f30, argcount=4, kws=0x6d60e8, kwcount=0, defs=0x2aaaae4d2da0, defcount=2, closure=0x0) at ceval.c:2736 #4 0x00000000004788f7 in PyEval_EvalFrame (f=0x6d5f10) at ceval.c:3650 #5 0x0000000000479fb1 in PyEval_EvalFrame (f=0x89aae0) at ceval.c:3640 #6 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab948ea0, globals=0x8d6080, locals=0x7f3c, args=0x89aae0, argcount=2, kws=0x96cf90, kwcount=0, defs=0x2aaaab94ff68, defcount=1, closure=0x0) at ceval.c:2736 #7 0x00000000004c6099 in function_call (func=0x2aaaab95f7d0, arg=0x2aaaae4fae60, kw=0x956920) at funcobject.c:548 #8 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #9 0x00000000004772ea in PyEval_EvalFrame (f=0x7397d0) at ceval.c:3835 #10 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab948f10, globals=0x8d6080, locals=0x7f3c, args=0x7397d0, 
argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #11 0x00000000004c6099 in function_call (func=0x2aaaab95f848, arg=0x2aaaae4fae18, kw=0x0) at funcobject.c:548 #12 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #13 0x0000000000420ee0 in instancemethod_call (func=0x96c050, arg=0x2aaaae4fae18, kw=0x0) at classobject.c:2447 #14 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #15 0x00000000004777d9 in PyEval_EvalFrame (f=0x7306c0) at ceval.c:3766 #16 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab93b490, globals=0x8d6080, locals=0x7f3c, args=0x7306c0, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaab960a28, defcount=1, closure=0x0) at ceval.c:2736 #17 0x00000000004c6099 in function_call (func=0x2aaaab962d70, arg=0x2aaaae4fadd0, kw=0x0) at funcobject.c:548 #18 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #19 0x0000000000420ee0 in instancemethod_call (func=0x96c050, arg=0x2aaaae4fadd0, kw=0x0) at classobject.c:2447 #20 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #21 0x000000000044fd80 in slot_tp_call (self=0x2aaaae6b3b90, args=0x2aaaae6b6690, kwds=0x0) at typeobject.c:4536 #22 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #23 0x00000000004777d9 in PyEval_EvalFrame (f=0x7acdd0) at ceval.c:3766 #24 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab94c960, globals=0x8d6080, locals=0x7f3c, args=0x7acdd0, argcount=2, kws=0x925160, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #25 0x00000000004c6099 in function_call (func=0x2aaaab9610c8, arg=0x2aaaae4fad88, kw=0x9566e0) at funcobject.c:548 #26 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #27 0x00000000004772ea in PyEval_EvalFrame (f=0x6eaad0) at ceval.c:3835 #28 0x000000000047ad2f 
in PyEval_EvalCodeEx (co=0x2aaaab94c9d0, globals=0x8d6080, locals=0x7f3c, args=0x6eaad0, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #29 0x00000000004c6099 in function_call (func=0x2aaaab961140, arg=0x2aaaae4fad40, kw=0x0) at funcobject.c:548 #30 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #31 0x0000000000420ee0 in instancemethod_call (func=0x96c050, arg=0x2aaaae4fad40, kw=0x0) at classobject.c:2447 #32 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #33 0x000000000044fd80 in slot_tp_call (self=0x2aaaae6a50d0, args=0x2aaaae6a5150, kwds=0x0) at typeobject.c:4536 #34 0x0000000000417700 in PyObject_Call (func=0x96c050, arg=0x8d6080, kw=0x7f3c) at abstract.c:1756 #35 0x00000000004777d9 in PyEval_EvalFrame (f=0x80a8c0) at ceval.c:3766 #36 0x0000000000479fb1 in PyEval_EvalFrame (f=0x6e8f10) at ceval.c:3640 #37 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab93bf10, globals=0x8d6080, locals=0x7f3c, args=0x6e8f10, argcount=3, kws=0x6d2b80, kwcount=0, defs=0x2aaaab963728, defcount=2, closure=0x0) at ceval.c:2736 #38 0x00000000004788f7 in PyEval_EvalFrame (f=0x6d29e0) at ceval.c:3650 #39 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaadb3a3b0, globals=0x8d6080, locals=0x7f3c, args=0x6d29e0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #40 0x000000000047af72 in PyEval_EvalCode (co=0x96c050, globals=0x8d6080, locals=0x7f3c) at ceval.c:484 #41 0x00000000004a1c72 in PyRun_InteractiveOneFlags (fp=0x2aaaaab132d0, filename=0x4cbf24 "", flags=0x7fffffca155c) at pythonrun.c:1265 #42 0x00000000004a1e04 in PyRun_InteractiveLoopFlags (fp=0x2aaaab556b00, filename=0x4cbf24 "", flags=0x7fffffca155c) at pythonrun.c:695 #43 0x00000000004a2350 in PyRun_AnyFileExFlags (fp=0x2aaaab556b00, filename=0x8d6080 "\020\226", closeit=0, flags=0x7fffffca155c) at pythonrun.c:658 #44 0x0000000000410788 in Py_Main (argc=0, 
argv=0x7fffffca28de) at main.c:484 #45 0x00002aaaab34d5aa in __libc_start_main () from /lib64/tls/libc.so.6 #46 0x000000000040fdfa in _start () at start.S:113 #47 0x00007fffffca1658 in ?? () #48 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2 The only remaining incompatible pointer ones are gcc: build/src/build/src/scipy/linalg/flapackmodule.c build/src/build/src/scipy/linalg/flapackmodule.c: In function `f2py_rout_flapack_cheev': build/src/build/src/scipy/linalg/flapackmodule.c:9761: warning: passing arg 6 of pointer to function from incompatible pointer type build/src/build/src/scipy/linalg/flapackmodule.c: In function `f2py_rout_flapack_zheev': build/src/build/src/scipy/linalg/flapackmodule.c:9945: warning: passing arg 6 of pointer to function from incompatible pointer type /scr/python/bin/g77 -shared build/temp.linux-x86_64-2.4/build/src/build/src/scipy/linalg/flapackmodule.o build/temp.linux-x86_64-2.4/build/src/fortranobject.o -L/scr/python/lib64 -Lbuild/temp.linux-x86_64-2.4 -llapack -lptf77blas -lptcblas HTH, Arnd From nwagner at mecha.uni-stuttgart.de Tue Mar 21 09:05:47 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 15:05:47 +0100 Subject: [SciPy-dev] Bug in linalg.eig Message-ID: <4420083B.5060205@mecha.uni-stuttgart.de> Hi all, It turns out that linalg.eig is erroneous ! Please find attached my latest findings ... Unfortunately, I have no idea how to resolve this problem within linalg.eig. 
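[For context on the workaround that follows: a generalized eigenproblem G x = lambda H x has eigenvalues lambda_i = alpha_i / beta_i, where the LAPACK *ggev drivers return the pair (alpha, beta) and beta_i = 0 signals an infinite eigenvalue — hence the guard before dividing. A minimal sketch using present-day scipy.linalg.eig, which wraps the same driver; the 2x2 matrices are hypothetical stand-ins, since the original G.mtx/H.mtx files are not available here:]

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical 2x2 stand-ins for G.mtx / H.mtx (not available here).
G = np.array([[0.0, 1.0],
              [-2.0, -3.0]], dtype=complex)
H = np.eye(2, dtype=complex)

# eig(G, H) solves the generalized problem G x = lam * H x via LAPACK's
# *ggev driver, forming lam = alpha/beta internally.  With H = I the
# eigenvalues are the roots of lam**2 + 3*lam + 2 = 0, i.e. -2 and -1.
w, vr = eig(G, H)

# Verify the defining relation for every eigenpair.
for lam, x in zip(w, vr.T):
    assert np.allclose(G @ x, lam * (H @ x))

assert np.allclose(sorted(w.real), [-2.0, -1.0])
```

[The explicit test on beta in the script below serves the same purpose as the infinite-eigenvalue bookkeeping that a wrapper must otherwise do itself.]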
As a workaround one can use the functions
linalg.flapack.zgegv
linalg.flapack.zggev

Nils

from scipy import *
from pylab import *

G = io.mmread('G.mtx')
H = io.mmread('H.mtx')
#w = linalg.eigvals(G,H)
n = G.shape[0]
w = zeros(n,Complex)
w1 = zeros(n,Complex)
#
# http://www.netlib.org/lapack/patch/src/zggev.f
#
alpha,beta,vl,vr,work,info = linalg.flapack.zggev(G,H,compute_vl=0,compute_vr=0,lwork=8*n,overwrite_a=0,overwrite_b=0)
#
# http://www.netlib.org/lapack/patch/src/zgegv.f
#
alpha1,beta1,vl1,vr1,info1 = linalg.flapack.zgegv(G,H,compute_vl=0,compute_vr=0,lwork=2*n,overwrite_a=0,overwrite_b=0)
for i in arange(0,n):
    if beta[i] <> 0.0:
        w[i] = alpha[i]/beta[i]
        w1[i] = alpha1[i]/beta1[i]
#w0 = linalg.eigvals(G,H) # Doesn't work
subplot(211)
plot(w.real,w.imag,'b.')
subplot(212)
plot(w1.real,w1.imag,'r.')
show()

From nwagner at mecha.uni-stuttgart.de Tue Mar 21 09:58:59 2006
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Tue, 21 Mar 2006 15:58:59 +0100
Subject: [SciPy-dev] 2 errors scipy.test(1,10) on a 32bit system
Message-ID: <442014B3.5020701@mecha.uni-stuttgart.de>

On a 32bit system

>>> scipy.__version__
'0.4.9.1759'
>>> numpy.__version__
'0.9.7.2263'
>>> scipy.test(1,10)

results in

======================================================================
ERROR: check_sanity (scipy.montecarlo.tests.test_intsampler.test_intsampler)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/montecarlo/tests/test_intsampler.py", line 59, in check_sanity
    sampler = intsampler(a)
TypeError: argument 1 must be (null), not numpy.ndarray
======================================================================
ERROR: check_simple (scipy.montecarlo.tests.test_intsampler.test_intsampler)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/montecarlo/tests/test_intsampler.py", line 40, in check_simple
    sampler = intsampler(a)
TypeError: argument 1 must be (null), not numpy.ndarray
----------------------------------------------------------------------
Ran 1508 tests in 7.465s

FAILED (errors=2)

From schofield at ftw.at Tue Mar 21 12:19:16 2006
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 21 Mar 2006 18:19:16 +0100
Subject: [SciPy-dev] 2 errors scipy.test(1,10) on a 32bit system
In-Reply-To: <442014B3.5020701@mecha.uni-stuttgart.de>
References: <442014B3.5020701@mecha.uni-stuttgart.de>
Message-ID: <44203594.8020409@ftw.at>

Nils Wagner wrote:
> scipy.test(1,10) results in
>
> ======================================================================
> ERROR: check_sanity (scipy.montecarlo.tests.test_intsampler.test_intsampler)
>
What are you doing?! There is no scipy.montecarlo! It's in the sandbox (and works fine for me). It's not built by default, and even if you uncomment the line in sandbox/setup.py, it would be installed into scipy.sandbox.montecarlo instead ... Try removing any old scipy directories you have lying around from January!

-- Ed

From ashuang at gmail.com Tue Mar 21 14:00:08 2006
From: ashuang at gmail.com (Albert Huang)
Date: Tue, 21 Mar 2006 14:00:08 -0500
Subject: [SciPy-dev] lstsq segfaults for scipy 0.9.5, numpy 0.9.5 on python 2.3
Message-ID:

Hi,

I'm using numpy 0.9.5 and scipy 0.9.5 with python 2.3 on a debian (unstable) system.

scipy.linalg.lstsq segfaults when I run this program:

file: testlstsq.py
----
from numpy import *
from scipy.linalg import lstsq

a = array( ((1,1), (1.1,0.9), (0.9,1.1), (0.8,1.2)) )
b = array( (2,2,2,2) )

print lstsq(a,b)
----

stack trace:
# gdb python
GNU gdb 6.3-debian
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-linux"...(no debugging symbols found) Using host libthread_db library "/lib/tls/libthread_db.so.1". (gdb) run testlstsq.py Starting program: /usr/bin/python testlstsq.py (no debugging symbols found) (no debugging symbols found) [Thread debugging using libthread_db enabled] [New Thread -1209723200 (LWP 32526)] (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1209723200 (LWP 32526)] 0x08089aab in PyType_IsSubtype () (gdb) where #0 0x08089aab in PyType_IsSubtype () #1 0xb7cd4941 in PyUFunc_GenericReduction (self=0x819aa20, args=0xb6c1d2ac, kwds=0x0, operation=0) at ufuncobject.c:2388 #2 0x08058c4e in PyObject_Call () #3 0xb7d14d2a in PyArray_GenericReduceFunction (m1=0x828ed78, op=0x819aa20, axis=Variable "axis" is not available. 
) at arrayobject.c:2465 #4 0xb7d2151b in PyArray_All (self=0x8, axis=1) at numpy/core/src/multiarraymodule.c:617 #5 0xb6d0cde7 in array_from_pyobj (type_num=12, dims=0xbf8ee02c, rank=1, intent=12, obj=0x8113500) at build/src/fortranobject.c:523 #6 0xb6cf3012 in f2py_rout_flapack_dgelss (capi_self=0xb77627e8, capi_args=0xb6dde4cc, capi_keywds=0xb6c1ecec, f2py_func=0xb7281d40 ) at build/src/build/src/Lib/lib/lapack/flapackmodule.c:2896 #7 0xb6d0c671 in fortran_call (fp=0xb77627e8, arg=0xb6dde4cc, kw=0xb6c1ecec) at build/src/fortranobject.c:267 #8 0x08058c4e in PyObject_Call () #9 0x080b437f in PyEval_GetFuncName () #10 0x080b8417 in PyEval_EvalCodeEx () #11 0x080b6b64 in PyEval_GetFuncName () #12 0x080b8417 in PyEval_EvalCodeEx () #13 0x080b8695 in PyEval_EvalCode () #14 0x080d935c in PyRun_FileExFlags () #15 0x080d9623 in PyRun_SimpleFileExFlags () #16 0x08054fc7 in Py_Main () #17 0xb7e67ed0 in __libc_start_main () from /lib/tls/libc.so.6 #18 0x080549a1 in _start () Any suggestions? -albert From ashuang at gmail.com Tue Mar 21 14:02:55 2006 From: ashuang at gmail.com (Albert Huang) Date: Tue, 21 Mar 2006 14:02:55 -0500 Subject: [SciPy-dev] correction: using scipy 0.4.6 not 0.9.5 Message-ID: Sorry, this should have read scipy 0.4.6 On 3/21/06, Albert Huang wrote: > [SNIP] From nwagner at mecha.uni-stuttgart.de Tue Mar 21 14:08:04 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 20:08:04 +0100 Subject: [SciPy-dev] lstsq segfaults for scipy 0.9.5, numpy 0.9.5 on python 2.3 In-Reply-To: References: Message-ID: On Tue, 21 Mar 2006 14:00:08 -0500 "Albert Huang" wrote: > [SNIP] > Any suggestions? > -albert I cannot reproduce it here. (array([ 1., 1.]), 1.4849602282430439e-32, 2, array([ 2.83200503, 0.31582825])) >>> scipy.__version__ '0.4.9.1742' >>> numpy.__version__ '0.9.7.2257' Nils From ashuang at gmail.com Tue Mar 21 14:17:58 2006 From: ashuang at gmail.com (Albert Huang) Date: Tue, 21 Mar 2006 14:17:58 -0500 Subject: [SciPy-dev] lstsq segfaults for scipy 0.9.5, numpy 0.9.5 on python 2.3 In-Reply-To: References: Message-ID: Hi Nils, I just tried it on numpy 0.9.6 and scipy 0.4.8 and the problem went away. Thanks for taking a look so quickly, and sorry for the noise!
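[Editor's note: an aside on the reproduction script in this thread — every row of a sums to 2, so the system a·x = b is exactly consistent and the least-squares solution is [1, 1] with essentially zero residual, matching Nils's output. A lightly modernized version of the script, using numpy.linalg.lstsq (a substitution made here only so the snippet runs without SciPy):]

```python
# Modernized version of the testlstsq.py script from this thread.
# numpy.linalg.lstsq stands in for scipy.linalg.lstsq to keep the
# example dependency-free; the system it solves is identical.
import numpy as np

a = np.array([(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (0.8, 1.2)])
b = np.array([2.0, 2.0, 2.0, 2.0])

# rcond=None selects the modern default cutoff for small singular values
x, residuals, rank, sv = np.linalg.lstsq(a, b, rcond=None)
print(x)  # → [1. 1.]  (each row of `a` sums to 2, so the fit is exact)
```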
-albert On 3/21/06, Nils Wagner wrote: > On Tue, 21 Mar 2006 14:00:08 -0500 > "Albert Huang" wrote: > > [SNIP] > > Any suggestions? > > -albert > > I cannot reproduce it here.
> (array([ 1., 1.]), 1.4849602282430439e-32, 2, array([ > 2.83200503, 0.31582825])) > >>> scipy.__version__ > '0.4.9.1742' > >>> numpy.__version__ > '0.9.7.2257' > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at mecha.uni-stuttgart.de Wed Mar 22 03:23:59 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 22 Mar 2006 09:23:59 +0100 Subject: [SciPy-dev] 2 errors scipy.test(1,10) on a 32bit system In-Reply-To: <44203594.8020409@ftw.at> References: <442014B3.5020701@mecha.uni-stuttgart.de> <44203594.8020409@ftw.at> Message-ID: <4421099F.5000206@mecha.uni-stuttgart.de> Ed Schofield wrote: > Nils Wagner wrote: > >> scipy.test(1,10) results in >> >> ====================================================================== >> ERROR: check_sanity (scipy.montecarlo.tests.test_intsampler.test_intsampler) >> >> > What are you doing?! There is no scipy.montecarlo! It's in the sandbox > (and works fine for me). It's not built by default, and even if you > uncomment the line in sandbox/setup.py, it would be installed into > scipy.sandbox.montecarlo instead ... > > Try removing any old scipy directories you have lying around from January! > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Ed, Sorry for the noise. I didn't remove /usr/lib/python2.4/site-packages/scipy/ before. Now scipy.test(1,10) yields on a 32bit system Ran 1506 tests in 7.421s OK >>> scipy.__version__ '0.4.9.1761' On 64bit systems the test segfaults. Nils From travis at enthought.com Thu Mar 23 16:56:54 2006 From: travis at enthought.com (Travis N. 
Vaught) Date: Thu, 23 Mar 2006 15:56:54 -0600 Subject: [SciPy-dev] ANN: Python Enthought Edition Version 0.9.3 Released Message-ID: <442319A6.7040108@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.3 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 0.9.3 Release Notes: -------------------- Version 0.9.3 of Python Enthought Edition includes an update to version 1.0.3 of the Enthought Tool Suite (ETS) Package-- you can look at the release notes for this ETS version here. Other major changes include: * upgrade to VTK 5.0 * addition of docutils * addition of numarray * addition of pysvn. Also, MayaVi issues should be fixed in this release. Full Release Notes are here: http://code.enthought.com/release/changelog-enthon0.9.3.shtml About Python Enthought Edition: ------------------------------- Python 2.3.5, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com From Norbert.Nemec.list at gmx.de Thu Mar 23 18:15:57 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Fri, 24 Mar 2006 00:15:57 +0100 Subject: [SciPy-dev] Bug in linalg.eig In-Reply-To: <4420083B.5060205@mecha.uni-stuttgart.de> References: <4420083B.5060205@mecha.uni-stuttgart.de> Message-ID: <44232C2D.7060203@gmx.de> Could you please try to isolate the problem into a standalone script? Without knowing the content of the files G.mtx and H.mtx, it is a bit hard to understand what G and H are and what the problem actually is. Nils Wagner wrote: >Hi all, > >It turns out that linalg.eig is erroneous ! > >Please find attached my latest findings ... 
> >Unfortunately, I have no idea how to resolve this problem within linalg.eig. > > >As a workaround one can use the functions > >linalg.flapack.zgegv >linalg.flapack.zggev > >Nils > >from scipy import * >from pylab import * >G = io.mmread('G.mtx') >H = io.mmread('H.mtx') >#w = linalg.eigvals(G,H) >n = G.shape[0] >w = zeros(n,Complex) >w1= zeros(n,Complex) ># ># http://www.netlib.org/lapack/patch/src/zggev.f ># >alpha,beta,vl,vr,work,info = >linalg.flapack.zggev(G,H,compute_vl=0,compute_vr=0,lwork=8*n,overwrite_a=0,overwrite_b=0) ># ># http://www.netlib.org/lapack/patch/src/zgegv.f ># >alpha1,beta1,vl1,vr1,info1 = >linalg.flapack.zgegv(G,H,compute_vl=0,compute_vr=0,lwork=2*n,overwrite_a=0,overwrite_b=0) > > >for i in arange(0,n): > if beta[i] <> 0.0: > w[i] = alpha[i]/beta[i] > w1[i] = alpha1[i]/beta1[i] > >#w0 = linalg.eigvals(G,H) # Doesn't work > >subplot(211) >plot(w.real,w.imag,'b.') >subplot(212) >plot(w1.real,w1.imag,'r.') >show() > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > > > > From nwagner at iam.uni-stuttgart.de Fri Mar 24 02:30:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Mar 2006 08:30:19 +0100 Subject: [SciPy-dev] Bug in linalg.eig In-Reply-To: <44232C2D.7060203@gmx.de> References: <4420083B.5060205@mecha.uni-stuttgart.de> <44232C2D.7060203@gmx.de> Message-ID: <4423A00B.9070109@iam.uni-stuttgart.de> Norbert Nemec wrote: > Could you please try to isolate the problem into a standalone script? > Without knowing the content of the files G.mtx and H.mtx, it is a bit > hard to understand what G and H are and what the problem actually is. > > > Hi Norbert, I have submitted a bug report. http://projects.scipy.org/scipy/scipy/ticket/41 You will find a file mat.tar.gz that contains useful information. Please let me know if you can reproduce the bug. Thanks in advance Nils > Nils Wagner wrote: > > >> Hi all, >> >> It turns out that linalg.eig is erroneous ! >> >> Please find attached my latest findings ... >> >> Unfortunately, I have no idea how to resolve this problem within linalg.eig.
>> >> >> As a workaround one can use the functions >> >> linalg.flapack.zgegv >> linalg.flapack.zggev >> >> Nils >> >> > >from scipy import * > >from pylab import * > >> G = io.mmread('G.mtx') >> H = io.mmread('H.mtx') >> #w = linalg.eigvals(G,H) >> n = G.shape[0] >> w = zeros(n,Complex) >> w1= zeros(n,Complex) >> # >> # http://www.netlib.org/lapack/patch/src/zggev.f >> # >> alpha,beta,vl,vr,work,info = >> linalg.flapack.zggev(G,H,compute_vl=0,compute_vr=0,lwork=8*n,overwrite_a=0,overwrite_b=0) >> # >> # http://www.netlib.org/lapack/patch/src/zgegv.f >> # >> alpha1,beta1,vl1,vr1,info1 = >> linalg.flapack.zgegv(G,H,compute_vl=0,compute_vr=0,lwork=2*n,overwrite_a=0,overwrite_b=0) >> >> >> for i in arange(0,n): >> if beta[i] <> 0.0: >> w[i] = alpha[i]/beta[i] >> w1[i] = alpha1[i]/beta1[i] >> >> #w0 = linalg.eigvals(G,H) # Doesn't work >> >> subplot(211) >> plot(w.real,w.imag,'b.') >> subplot(212) >> plot(w1.real,w1.imag,'r.') >> show() >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-dev >> >> >> >> >> > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From chanley at stsci.edu Fri Mar 24 15:53:31 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 24 Mar 2006 15:53:31 -0500 Subject: [SciPy-dev] PyFITS 1.1 "alpha" release -- NUMPY now supported Message-ID: <44245C4B.4000209@stsci.edu> ------------------ | PyFITS Release | ------------------ Space Telescope Science Institute is pleased to announce the "alpha" release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at: http://www.stsci.edu/resources/software_hardware/pyfits/Download The NUMPY support in PyFITS is not nearly as well tested as the NUMARRAY support. We expect that you will encounter bugs. 
Please send bug reports to "help at stsci.edu". We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 1 year. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. ----------- | Version | ----------- Version 1.1a1; March 24, 2006 ------------------------------- | Major Changes since v1.0.1 | ------------------------------- * Added support for the NUMPY array package * Use of the NUMPY or NUMARRAY array package is controlled through the use of an environment variable (NUMERIX) * Access to private methods and attributes is now through the pyfits.core module * Now installed as a package instead of a single file ------------------------- | Software Requirements | ------------------------- PyFITS Version 1.1 REQUIRES: * Python 2.3 or later * NUMPY or NUMARRAY --------------------- | Installing PyFITS | --------------------- PyFITS 1.1 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing % python setup.py install to install it in Python's site-packages directory. Alternatively the command % python setup.py install --local="/destination/directory/" will install PyFITS in an arbitrary directory which should be placed on PYTHONPATH. Once numarray or numpy has been installed, then PyFITS should be available for use under Python. ----------------- | Download Site | ----------------- http://www.stsci.edu/resources/software_hardware/pyfits/Download ---------- | Usage | ---------- Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numpy' for the NUMPY version of PyFITS.
If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed the NUMERIX value is used to decide which version to import. If no NUMERIX value is set then the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import. --------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like to follow the "bleeding" edge of PyFITS development can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From pcj at linux.sez.to Sat Mar 25 04:03:56 2006 From: pcj at linux.sez.to (Paul Janzen) Date: Sat, 25 Mar 2006 01:03:56 -0800 Subject: [SciPy-dev] fitpack memory corruption Message-ID: The following fragment crashes the Python interpreter on both Windows and Linux due to memory corruption: from scipy.interpolate import splrep splrep(arange(10),arange(10),k=3,task=-1,t=[0,0,0,0,1,1,1,1]) Here is a simple patch against 1712 that at least fixes this case: --- Lib\interpolate\__fitpack.h~ 2006-03-17 04:43:36.000000000 -0800 +++ Lib\interpolate\__fitpack.h 2006-03-25 00:16:45.810340800 -0800 @@ -374,11 +374,11 @@ ap_wrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); ap_iwrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_INT); if (ap_wrk == NULL || ap_iwrk == NULL) goto fail; + memcpy(ap_wrk->data,wrk,n*sizeof(double)); + memcpy(ap_iwrk->data,iwrk,n*sizeof(int)); } memcpy(ap_t->data,t,n*sizeof(double)); memcpy(ap_c->data,c,lc*sizeof(double)); - memcpy(ap_wrk->data,wrk,n*sizeof(double)); - memcpy(ap_iwrk->data,iwrk,n*sizeof(int)); if (wa) free(wa); Py_DECREF(ap_x); Py_DECREF(ap_y); Similar scenario 
at line 215:fitpack_surfit. I still don't understand why the condition for copying into ap_{i,}wrk->data (line 373) is iopt==0. Doesn't the curfit documentation imply that you want to persist wrk/iwrk iff iopt==1? Also, it looks like the two assignments at 355 and 370 risk leaking references? ap_t=(PyArrayObject*)PyArray_ContiguousFromObject(t_py,PyArray_DOUBLE, 0, 1); ... ap_t = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); As do the second assignments to ap_wrk/ap_iwrk. -- Paul From tim.leslie at gmail.com Sat Mar 25 05:51:59 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 25 Mar 2006 21:51:59 +1100 Subject: [SciPy-dev] masked_array drops fill_value Message-ID: Hi all, In ma.py the masked_array function the fill_value of the array passed in is not used. The code currently looks like this: def masked_array (a, mask=nomask, fill_value=None): """masked_array(a, mask=nomask) = array(a, mask=mask, copy=0, fill_value=fill_value) """ return array(a, mask=mask, copy=0, fill_value=fill_value) It seems to me that using the fill_value from a (if it is a MaskedArray) would be the sane thing to do? Something like if fill_value == None and isinstance(a, MaskedArray): fill_value = a.fill_value() return array(a, mask=mask, copy=0, fill_value=fill_value) As it stands, all the ma functions such as transpose, reshape, etc lose the fill_value which seems wrong to me. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Mar 27 10:14:41 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 27 Mar 2006 17:14:41 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info Message-ID: <44280161.4030708@ntc.zcu.cz> Hi all, I have added a basic umfpack detection support to numpy.distutils.system_info() - see example site.cfg below (and replace ...). The Umfpack wrapper still resides in the sandbox, but its setup.py now uses the detection stuff. 
Now I would like to move the wrapper somewhere under scipy.sparse and I hit a problem: umfpack.py depends on scipy.sparse, since it uses the CSR/CSC matrix formats, but also scipy.sparse uses umfpack.py for solving linear equations, if present. How to get out of this circular dependency problem? Do you have any suggestions on how to organize the files? r. -- site.cfg: [umfpack] library_dirs = /UMFPACK/Lib:/AMD/Lib include_dirs = /UMFPACK/Include umfpack_libs = umfpack, amd From schofield at ftw.at Mon Mar 27 10:45:51 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 27 Mar 2006 17:45:51 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <44280161.4030708@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> Message-ID: <442808AF.6090006@ftw.at> Robert Cimrman wrote: > Hi all, > I have added a basic umfpack detection support to > numpy.distutils.system_info() - see example site.cfg below (and > replace ...). > The Umfpack wrapper still resides in the sandbox, but its setup.py now > uses the detection stuff. > > Now I would like to move the wrapper somewhere under scipy.sparse and > I hit a problem: umfpack.py depends on scipy.sparse, since it uses the > CSR/CSC matrix formats, but also scipy.sparse uses umfpack.py for > solving linear equations, if present. How to get out of this circular > dependency problem? Do you have any suggestions on how to organize the > files? Hi Robert, Great work! :) I suggest we move the solvers out of sparse.py, leaving only the data types. We could put them instead into a separate module that depends on both sparse.py and umfpack.py. We can still have them accessible under the scipy.sparse namespace by adding a line like from umfsolvers import * to sparse/__init__.py.
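[Editor's note: the "use umfpack if present" behaviour Robert describes boils down to a guarded import with a fallback path. A minimal sketch of that pattern — the `umfpack` module name is the optional dependency under discussion, and the tiny pure-Python Gaussian-elimination fallback is purely illustrative, not SciPy's actual layout:]

```python
# Optional-dependency pattern: prefer the fast C solver when installed,
# fall back to a slow portable one otherwise. Nothing here is SciPy's
# real code; the structure is what the thread is about.
try:
    import umfpack            # optional accelerated solver (usually absent)
    _HAVE_UMFPACK = True
except ImportError:
    _HAVE_UMFPACK = False

def _dense_solve(A, b):
    """Toy Gaussian elimination with partial pivoting (lists of floats)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented copy
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve(A, b):
    if _HAVE_UMFPACK:
        return umfpack.solve(A, b)   # fast path when the wrapper is present
    return _dense_solve(A, b)        # portable fallback

print(solve([[2.0, 0.0], [0.0, 4.0]], [2.0, 8.0]))  # → [1.0, 2.0]
```

Keeping the guarded import in one module (rather than sprinkling try/except through sparse.py) is what makes Ed's "separate solver module re-exported from sparse/__init__.py" suggestion work.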
-- Ed From cimrman3 at ntc.zcu.cz Mon Mar 27 11:00:32 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 27 Mar 2006 18:00:32 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442808AF.6090006@ftw.at> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> Message-ID: <44280C20.8000003@ntc.zcu.cz> Ed Schofield wrote: > Robert Cimrman wrote: > >>Hi all, >>I have added a basic umfpack detection support to >>numpy.distutils.system_info() - see example site.cfg below (and >>replace ...). >>The Umfpack wrapper still resides in the sandbox, but its setup.py now >>uses the detection stuff. >> >>Now I would like to move the wrapper somewhere under scipy.sparse and >>I hit a problem: umpfack.py depends on scipy.sparse, since it uses the >>CSR/CSC matrix formats, but also scipy.sparse uses umfpack.py for >>solving linear equations.if present. How to get out of this circular >>dependency problem? Do you have any suggestions on how to organize the >>files? > > Hi Robert, > > Great work! :) > > I suggest we move the solvers out of sparse.py, leaving only the data > types. We could put them instead into a separate module that depends on > both sparse.py and umfpack.py. We can still have them accessible under > the scipy.sparse namespace by adding a line like > from umfsolvers import * > to sparse/__init__.py. Yes, ideally I would like to see having a direct solver module and an iterative solver module, each working with both the dense and the sparse matrices. But for the moment we could just move solve() and lu_factor() somewhere else (linsolve package?). I am bad at coming with names :-) r. 
From schofield at ftw.at Tue Mar 28 12:24:34 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 28 Mar 2006 19:24:34 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <44280C20.8000003@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> Message-ID: <44297152.9000305@ftw.at> Robert Cimrman wrote: > Ed Schofield wrote: >> Robert Cimrman wrote: >> >>> Hi all, >>> I have added a basic umfpack detection support to >>> numpy.distutils.system_info() - see example site.cfg below (and >>> replace ...). >>> The Umfpack wrapper still resides in the sandbox, but its setup.py now >>> uses the detection stuff. >>> >>> Now I would like to move the wrapper somewhere under scipy.sparse and >>> I hit a problem: umpfack.py depends on scipy.sparse, since it uses the >>> CSR/CSC matrix formats, but also scipy.sparse uses umfpack.py for >>> solving linear equations.if present. How to get out of this circular >>> dependency problem? Do you have any suggestions on how to organize the >>> files? >> >> Hi Robert, >> >> Great work! :) >> >> I suggest we move the solvers out of sparse.py, leaving only the data >> types. We could put them instead into a separate module that depends on >> both sparse.py and umfpack.py. We can still have them accessible under >> the scipy.sparse namespace by adding a line like >> from umfsolvers import * >> to sparse/__init__.py. > > Yes, ideally I would like to see having a direct solver module and an > iterative solver module, each working with both the dense and the > sparse matrices. But for the moment we could just move solve() and > lu_factor() somewhere else (linsolve package?). I am bad at coming > with names :-) I'm not sure about the ideal package structure. A new linsolve package would make sense -- or we could leave the solvers as a separate module in the sparse package until we have wrappers for dense solvers too. 
Or perhaps, eventually, we'd want to consolidate the linear solvers and non-linear solvers (e.g. minpack, currently in the optimize package) under a single "solve" package? -- Ed From faltet at carabos.com Wed Mar 29 03:38:20 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed, 29 Mar 2006 10:38:20 +0200 Subject: [SciPy-dev] Deleting a recipe in the cookbook wiki Message-ID: <200603291038.22453.faltet@carabos.com> Hi, I've ended up changing the name of a recipe of mine in the SciPy cookbook wiki, i.e. from "A Pyrex Agnostic Class" to "A Numerical Agnostic Pyrex Class", which I find better. Could anyone with privileges delete the former one? Thanks, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. "Enjoy Data" "-" From cimrman3 at ntc.zcu.cz Wed Mar 29 06:03:40 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 29 Mar 2006 13:03:40 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <44297152.9000305@ftw.at> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> Message-ID: <442A698C.9000104@ntc.zcu.cz> Ed Schofield wrote: > Robert Cimrman wrote: > >>Ed Schofield wrote: >> >>>I suggest we move the solvers out of sparse.py, leaving only the data >>>types. We could put them instead into a separate module that depends on >>>both sparse.py and umfpack.py. We can still have them accessible under >>>the scipy.sparse namespace by adding a line like >>> from umfsolvers import * >>>to sparse/__init__.py. >> >>Yes, ideally I would like to see having a direct solver module and an >>iterative solver module, each working with both the dense and the >>sparse matrices. But for the moment we could just move solve() and >>lu_factor() somewhere else (linsolve package?). I am bad at coming >>with names :-) > > > I'm not sure about the ideal package structure.
A new linsolve package > would make sense -- or we could leave the solvers as a separate module > in the sparse package until we have wrappers for dense solvers too. Or > perhaps, eventually, we'd want to consolidate the linear solvers and > non-linear solvers (e.g. minpack, currently in the optimize package) > under a single "solve" package? I have just done the following: scipy/Lib/linsolve ... superlu, umfpack modules, spsolve, splu functions scipy/Lib/sparse ... the rest All tests pass for me, but I am afraid of making the changes public without any review. How do I make a new branch? Or, Ed, can I try it with your ejs branch (how?) - it is for testing the sparse stuff, right? :) r. ps: I am just watching the partial solar eclipse from my window :-) From schofield at ftw.at Wed Mar 29 07:32:56 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 29 Mar 2006 14:32:56 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A698C.9000104@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> Message-ID: <442A7E78.1030901@ftw.at> Robert Cimrman wrote: > I have just done the following: > > scipy/Lib/linsolve ... superlu, umfpack modules, spsolve, splu functions > scipy/Lib/sparse ... the rest > > All tests pass for me, but I am afraid of making the changes public > without any review. How do I make a new branch? Or, Ed, can I try it > with your ejs branch (how?) - it is for testing the sparse stuff, > right? :) You're welcome to use my branch; you'd need to re-add the changed files with 'svn add', then commit, test, then re-merge with the trunk using 'svn merge'. But I'd suggest just committing it directly to the trunk. I'd test it straight away and report any problems ;) If you want to create your own branch, use svn copy . 
http://svn.scipy.org/svn/branches/rc then switch to it with svn switch http://svn.scipy.org/svn/branches/rc This can be useful for testing big patches. You'd need to keep it in sync with the trunk manually using 'svn merge' regularly to pull in new changes. -- Ed From cimrman3 at ntc.zcu.cz Wed Mar 29 08:08:34 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 29 Mar 2006 15:08:34 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A7E78.1030901@ftw.at> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> Message-ID: <442A86D2.20902@ntc.zcu.cz> Ed Schofield wrote: > Robert Cimrman wrote: > >>I have just done the following: >> >>scipy/Lib/linsolve ... superlu, umfpack modules, spsolve, splu functions >>scipy/Lib/sparse ... the rest >> >>All tests pass for me, but I am afraid of making the changes public >>without any review. How do I make a new branch? Or, Ed, can I try it >>with your ejs branch (how?) - it is for testing the sparse stuff, >>right? :) > > You're welcome to use my branch; you'd need to re-add the changed files > with 'svn add', then commit, test, then re-merge with the trunk using > 'svn merge'. But I'd suggest just committing it directly to the trunk. > I'd test it straight away and report any problems ;) And so I was brave and committed to the trunk. :) Now let us hope nothing went wrong... > If you want to create your own branch, use > svn copy . http://svn.scipy.org/svn/branches/rc > then switch to it with > svn switch http://svn.scipy.org/svn/branches/rc > This can be useful for testing big patches. You'd need to keep it in > sync with the trunk manually using 'svn merge' regularly to pull in new > changes. Thanks for the information, happy testing, r. 
From schofield at ftw.at Wed Mar 29 09:32:07 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 29 Mar 2006 16:32:07 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A86D2.20902@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> Message-ID: <442A9A67.8050106@ftw.at> Robert Cimrman wrote: > Thanks for the information, > > happy testing, Okay, I've now figured out that the umfpack hooks need NumPy distutils >= r2286. But the get_info() call in umfpack/setup.py still isn't acknowledging that umfpack exists. It should find it from the scipy source tree, right? Do you have any hints? -- Ed From cimrman3 at ntc.zcu.cz Wed Mar 29 09:54:05 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 29 Mar 2006 16:54:05 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A9A67.8050106@ftw.at> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> Message-ID: <442A9F8D.906@ntc.zcu.cz> Ed Schofield wrote: > Okay, I've now figured out that the umfpack hooks need NumPy distutils > >= r2286. But the get_info() call in umfpack/setup.py still isn't > acknowledging that umfpack exists. It should find it from the scipy > source tree, right? Do you have any hints? Well, the umfpack sources are not in the scipy source tree - there is just my wrapper/module code there. I did it this way, because even if the license of the version 4.4 (which is known to work with the module) is (imho) acceptable for scipy, the current version (4.6) is under LGPL... You should proceed as follows: 1. get the version 4.4 at http://www.cise.ufl.edu/research/sparse/umfpack/v4.4/ 2.
try to install it according to instructions in UMFPACK/Doc/QuickStart.pdf (basically, just choose your platform at the end of UMFPACK/Make/Make.include and also edit Make.<your platform> where you can specify which BLAS to use etc.; then 'make install') 3. put the following into your numpy's site.cfg, so that it is detected by get_info(): [umfpack] library_dirs = /UMFPACK/Lib:/AMD/Lib include_dirs = /UMFPACK/Include umfpack_libs = umfpack, amd BTW, without umfpack, does the rest of scipy.sparse work for you? :) r. From a.u.r.e.l.i.a.n at gmx.net Wed Mar 29 10:04:45 2006 From: a.u.r.e.l.i.a.n at gmx.net ("Johannes Löhnert") Date: Wed, 29 Mar 2006 17:04:45 +0200 (MEST) Subject: [SciPy-dev] 1/0 Message-ID: <14457.1143644685@www048.gmx.net> Hi, In [48]: int32(1)/int32(0) Out[48]: 0 Why is the result not inf? (I looked up IEEE 854 but do not understand a word...) Johannes -- Echte DSL-Flatrate dauerhaft für 0,- Euro*! "Feel free" mit GMX DSL! http://www.gmx.net/de/go/dsl From perry at stsci.edu Wed Mar 29 10:34:27 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 29 Mar 2006 10:34:27 -0500 Subject: [SciPy-dev] 1/0 In-Reply-To: <14457.1143644685@www048.gmx.net> References: <14457.1143644685@www048.gmx.net> Message-ID: <52e21ff36d2c380e00c522253dbd4477@stsci.edu> IEEE special values only apply to floating types, not integers. If you think about it a bit, you'll understand why (what bit pattern isn't a valid integer?). And you probably mean IEEE 754, not 854. On Mar 29, 2006, at 10:04 AM, Johannes Löhnert wrote: > Hi, > > In [48]: int32(1)/int32(0) > Out[48]: 0 > > Why is the result not inf? (I looked up IEEE 854 but do not understand > a > word...) > > Johannes > > -- > Echte DSL-Flatrate dauerhaft für 0,- Euro*! > "Feel free" mit GMX DSL!
http://www.gmx.net/de/go/dsl > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From schofield at ftw.at Wed Mar 29 11:01:07 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 29 Mar 2006 18:01:07 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A9F8D.906@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> Message-ID: <442AAF43.4010404@ftw.at> Robert Cimrman wrote: > Ed Schofield wrote: >> Okay, I've now figured out that the umfpack hooks need NumPy distutils >> >= r2286. But the get_info() call in umfpack/setup.py still isn't >> acknowledging that umfpack exists. It should find it from the scipy >> source tree, right? Do you have any hints? > > Well, the umfpack sources are not in the scipy source tree - there is > just my wrapper/module code there. I did it this way, because even if > the license of the version 4.4 (which is known to work with the > module) is (imho) acceptable for scipy, the current version (4.6) is > under LGPL... Okay, that sounds reasonable. > You should proceed as follows: > 1. get the version 4.4 at > http://www.cise.ufl.edu/research/sparse/umfpack/v4.4/ > 2. try to install it according to instructions in > UMFPACK/Doc/QuickStart.pdf (basically, just choose your platform at > the end of UMFPACK/Make/Make.include and also edit Make.<your platform> > where you can specify which BLAS to use etc.; then 'make > install') > 3. put the following into your numpy's site.cfg, so that it is > detected by get_info(): > > [umfpack] > library_dirs = /UMFPACK/Lib:/AMD/Lib > include_dirs = /UMFPACK/Include > umfpack_libs = umfpack, amd Okay, I'll do this.
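[Editorial aside: the `[umfpack]` section above is ordinary INI syntax, so its shape is easy to check with the standard library parser. This is only a sketch of roughly what `numpy.distutils.system_info` does with it, in modern-Python spelling; the `/UMFPACK/...` paths are the placeholders from the message, not real locations:]

```python
from configparser import ConfigParser

# The site.cfg fragment from the message (paths are placeholders).
site_cfg = """\
[umfpack]
library_dirs = /UMFPACK/Lib:/AMD/Lib
include_dirs = /UMFPACK/Include
umfpack_libs = umfpack, amd
"""

cfg = ConfigParser()
cfg.read_string(site_cfg)

# system_info splits library_dirs on the path separator (':' on Unix)
# and the library list on commas; mimic that here.
library_dirs = cfg.get("umfpack", "library_dirs").split(":")
libs = [s.strip() for s in cfg.get("umfpack", "umfpack_libs").split(",")]
print(library_dirs)  # ['/UMFPACK/Lib', '/AMD/Lib']
print(libs)          # ['umfpack', 'amd']
```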
By the way, the libumfpack4 / libumfpack4-dev Debian/Ubuntu packages don't work out-of-the-box (or even with a site.cfg file). SciPy installs fine but dies on import of linsolve with a BLAS error: import linsolve.umfpack -> failed: /usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: undefined symbol: dtrsv_ I'd like to get this working eventually, but I'm stuck for ideas right now. I'll install UMFPACK manually as you suggest and get back to you. > BTW, without umfpack, does the rest of scipy.sparse work for you? :) Yes, it seems fine. Well done :) -- Ed From a.u.r.e.l.i.a.n at gmx.net Wed Mar 29 11:17:39 2006 From: a.u.r.e.l.i.a.n at gmx.net (aurelian) Date: Wed, 29 Mar 2006 18:17:39 +0200 Subject: [SciPy-dev] 1/0 In-Reply-To: <52e21ff36d2c380e00c522253dbd4477@stsci.edu> References: <14457.1143644685@www048.gmx.net> <52e21ff36d2c380e00c522253dbd4477@stsci.edu> Message-ID: <442AB323.6070703@gmx.net> Perry Greenfield wrote: > IEEE special values only apply to floating types, not integers. If you > think about it a bit, you'll understand why (what bit pattern isn't a > valid integer?). And you probably mean IEEE 754, not 854. Point taken, but I would find it better if a ZeroDivisionError was raised. (If you try 1/0 with Python's native integers, this happens.) 1/0 giving back 0 silently is a Bad Thing imho. Johannes From byrnes at bu.edu Wed Mar 29 18:28:31 2006 From: byrnes at bu.edu (John Byrnes) Date: Wed, 29 Mar 2006 18:28:31 -0500 Subject: [SciPy-dev] FFT with DJBFFT failing Message-ID: <20060329232831.GA25147@localhost.localdomain> Hello all, I've found that DJBFFT causes scipy to fail several tests. These failures only occur when DJBFFT is found by distutils when building. I'm running the current SVN copies of both scipy and numpy.
Regards, John Report is below: ====================================================================== FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/stats/tests/test_morestats.py", line 51, in check_normal assert_array_less(A, crit[-2:]) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 50.0%): Array 1: 0.90160265659743288 Array 2: [ 0.858 1.0209999999999999] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 85, in check_definition assert_array_almost_equal(diff(sin(x)),direct_diff(sin(x))) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ 1.0000000e+00 9.2387953e-01 7.0710678e-01 3.8268343e-01 8.8009815e-17 -3.8268343e-01 -7.0710678e-01 -9.23... Array 2: [ -0.0000000e+00 5.0000000e-01 7.7662794e-17 1.2967736e-16 1.2490009e-16 2.1612893e-16 1.4972165e-16 -9.71... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 301, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ 1.0000000e+00 9.2387953e-01 7.0710678e-01 3.8268343e-01 8.2304423e-17 -3.8268343e-01 -7.0710678e-01 -9.23... Array 2: [ -0.0000000e+00 +0.0000000e+00j 5.0000000e-01 -7.3105720e-17j 3.8831397e-17 -2.4238713e-17j 4.3225785e-17 -6.813... ====================================================================== FAIL: check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 332, in check_random_even assert_array_almost_equal(direct_hilbert(direct_ihilbert(f)),f) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0. -0.j -0.0003392+0.0003636j -0.0010602-0.0002132j -0.0005238+0.0002887j 0.0003091-0.0020875j 0.00076... Array 2: [-0.1106071 -0.497691 0.4783423 -0.337373 0.2458101 0.0210909 -0.4631084 0.192853 0.4792201 -0.3906347 0.22212... 
====================================================================== FAIL: check_tilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 311, in check_tilbert_relation assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 62.5%): Array 1: [ 1.0000000e+00 6.5328148e-01 2.1280927e-16 -2.7059805e-01 -1.0626020e-16 2.7059805e-01 1.5393080e-16 -6.53... Array 2: [ -0.0000000e+00 +0.0000000e+00j 2.5000000e-01 -3.5284004e-17j -1.1905260e-18 +4.4746813e-17j 2.5000000e-01 -9.941... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 370, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ -1.0000000e+00 -9.2387953e-01 -7.0710678e-01 -3.8268343e-01 -8.2304423e-17 3.8268343e-01 7.0710678e-01 9.23... Array 2: [ 0.0000000e+00 -0.0000000e+00j -5.0000000e-01 +7.3105720e-17j -3.8831397e-17 +2.4238713e-17j -4.3225785e-17 +6.813... 
====================================================================== FAIL: check_itilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 380, in check_itilbert_relation assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 62.5%): Array 1: [ -1.0000000e+00 -6.5328148e-01 -2.1280927e-16 2.7059805e-01 1.0626020e-16 -2.7059805e-01 -1.5393080e-16 6.53... Array 2: [ 0.0000000e+00 -0.0000000e+00j -2.5000000e-01 +3.5284004e-17j 1.1905260e-18 -4.4746813e-17j -2.5000000e-01 +9.941... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_itilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 288, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ -9.9667995e-02 -9.2081220e-02 -7.0475915e-02 -3.8141290e-02 -7.1261911e-18 3.8141290e-02 7.0475915e-02 9.20... Array 2: [ -0.0000000e+00 +0.0000000e+00j -4.9833997e-02 +7.2863005e-18j -7.6643594e-18 +4.7841238e-18j -1.2592216e-17 +1.984... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_shift) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 390, in check_definition assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.0998334 0.1968802 0.2920308 0.3843691 0.4730057 0.5570869 0.6358031 0.7083962 0.7741671 0.8324823 0.88278... Array 2: [ 1.2321788e-17 4.9916708e-02 2.4648710e-19 8.8465810e-18 5.9412517e-18 2.0994071e-17 1.4776750e-17 1.22... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 227, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ 1.0033311e+01 9.2695708e+00 7.0946223e+00 3.8395819e+00 1.1013865e-15 -3.8395819e+00 -7.0946223e+00 -9.26... Array 2: [ -0.0000000e+00 +0.0000000e+00j 5.0166556e+00 -7.3349243e-16j 1.9673887e-16 -1.2280519e-16j 1.4838281e-16 -2.338... 
====================================================================== FAIL: check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", line 240, in check_random_even assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ -0.0000000e+00 +0.0000000e+00j 1.4398428e-03 +1.8373263e-03j -8.7680356e-04 +9.9658995e-04j -7.3017115e-04 +1.287... Array 2: [ 0.5075109 -0.2380892 -0.2038641 -0.3047494 -0.2114436 0.4541402 -0.1193784 -0.0768124 -0.1561373 0.1053 0.03414... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 98, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 1.+0.j 2.+0.j 3.+0.j 4.+1.j 1.+0.j 2.+0.j 3.+0.j 4.+2.j] Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. +4.j -0.7071068-0.7071068j -4. -3.j 0.707106... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 431, in check_definition assert_array_almost_equal(fftn(x),direct_dftn(x)) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[[[ 5.9716569e+02 +0.j -4.7701973e+00 -6.248201j 7.2274468e+00 +2.123387j ..., -1.8242164e+01 +4.20633... Array 2: [[[[ 1.4378537e+02+0.j 3.8003180e-01-0.9697448j 6.8443765e+00+3.6280536j ..., -2.9207977e+00+2.7763892j... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 183, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.125+0.j 0.25 +0.j 0.375+0.j 0.5 +0.125j 0.125+0.j 0.25 +0.j 0.375+0.j 0.5 +0.25j ] Array 2: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... 
====================================================================== FAIL: check_random_complex (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 211, in check_random_complex assert_array_almost_equal (ifft(fft(x)),x) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.000438 +1.2422571e-03j 0.0081279 +1.3153027e-03j 0.0058322 +2.5472850e-03j 0.0098841 +1.5452180e-02j 0.007895... Array 2: [ 0.0280313+0.0795045j 0.5201845+0.0841794j 0.3732589+0.1630262j 0.6325855+0.9889395j 0.5052878+0.7382801j 0.12457... ====================================================================== FAIL: check_random_real (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 217, in check_random_real assert_array_almost_equal (ifft(fft(x)),x) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.4665868+0.j 0.0158205-0.0104689j -0.0024869-0.0038622j 0.0255389-0.0128942j 0.0246149-0.0121173j 0.00113... Array 2: [ 0.5673542 0.8727196 0.7445887 0.5776637 0.910135 0.2396252 0.4235496 0.0873539 0.9231279 0.0897232 0.15871... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 601, in check_definition assert_array_almost_equal(ifftn(x),direct_idftn(x)) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[[[ 4.9883390e-01 +0.0000000e+00j 7.9039717e-03 +1.7999071e-03j -2.5307390e-03 -2.3994048e-03j ..., 5.8921717... Array 2: [[[[ 1.2726888e-01 +0.0000000e+00j 8.6321110e-04 +2.0907331e-03j 2.2428627e-03 +4.2203592e-04j ..., -2.7428294... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", line 341, in check_definition assert_array_almost_equal(y,ifft(x1)) File "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 0.125+0.j 0.25 +0.375j 0.5 +0.125j 0.25 +0.375j 0.5 +0.j 0.25 -0.375j 0.5 -0.125j 0.25 -0.375j] ---------------------------------------------------------------------- Ran 1508 tests in 2.893s FAILED (failures=18) From byrnes at bu.edu Thu Mar 30 01:47:18 2006 From: byrnes at bu.edu (John Byrnes) Date: Thu, 30 Mar 2006 01:47:18 -0500 Subject: [SciPy-dev] FFT 
with DJBFFT failing In-Reply-To: <20060329232831.GA25147@localhost.localdomain> References: <20060329232831.GA25147@localhost.localdomain> Message-ID: <20060330064718.GA32331@localhost.localdomain> I suppose I should give some system details. Pentium 4 - 32 bit, Linux, Python 2.4, DJBFFT version 0.76 compiled from source. GCC 4.0 for C code, GCC 3.4.5 for Fortran. Thanks! John On Wed, Mar 29, 2006 at 06:28:31PM -0500, John Byrnes wrote: > Hello all, > > I've found that DJBFFT causes scipy to fail several tests. These > failures only occur when DJBFFT is found by distutils when building. > > I'm running the current SVN copies of both scipy and numpy. > > Regards, > John > > [SNIP: quoted test report, identical to the report above]
> > > ====================================================================== > FAIL: check_definition > (scipy.fftpack.tests.test_pseudo_diffs.test_itilbert) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", > line 288, in check_definition > assert_array_almost_equal (y,y1) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 87.5%): > Array 1: [ -9.9667995e-02 -9.2081220e-02 -7.0475915e-02 > -3.8141290e-02 > -7.1261911e-18 3.8141290e-02 7.0475915e-02 9.20... > Array 2: [ -0.0000000e+00 +0.0000000e+00j -4.9833997e-02 > +7.2863005e-18j > -7.6643594e-18 +4.7841238e-18j -1.2592216e-17 +1.984... > > > ====================================================================== > FAIL: check_definition > (scipy.fftpack.tests.test_pseudo_diffs.test_shift) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", > line 390, in check_definition > assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.0998334 0.1968802 0.2920308 0.3843691 > 0.4730057 0.5570869 > 0.6358031 0.7083962 0.7741671 0.8324823 0.88278... > Array 2: [ 1.2321788e-17 4.9916708e-02 2.4648710e-19 > 8.8465810e-18 > 5.9412517e-18 2.0994071e-17 1.4776750e-17 1.22... 
> > > ====================================================================== > FAIL: check_definition > (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", > line 227, in check_definition > assert_array_almost_equal (y,y1) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 87.5%): > Array 1: [ 1.0033311e+01 9.2695708e+00 7.0946223e+00 > 3.8395819e+00 > 1.1013865e-15 -3.8395819e+00 -7.0946223e+00 -9.26... > Array 2: [ -0.0000000e+00 +0.0000000e+00j 5.0166556e+00 > -7.3349243e-16j > 1.9673887e-16 -1.2280519e-16j 1.4838281e-16 -2.338... > > > ====================================================================== > FAIL: check_random_even > (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_pseudo_diffs.py", > line 240, in check_random_even > assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ -0.0000000e+00 +0.0000000e+00j 1.4398428e-03 > +1.8373263e-03j > -8.7680356e-04 +9.9658995e-04j -7.3017115e-04 +1.287... > Array 2: [ 0.5075109 -0.2380892 -0.2038641 -0.3047494 > -0.2114436 0.4541402 > -0.1193784 -0.0768124 -0.1561373 0.1053 0.03414... 
> > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 98, in check_definition > assert_array_almost_equal(y,y1) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 1.+0.j 2.+0.j 3.+0.j 4.+1.j 1.+0.j 2.+0.j > 3.+0.j 4.+2.j] > Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. > +4.j > -0.7071068-0.7071068j -4. -3.j 0.707106... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fftn) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 431, in check_definition > assert_array_almost_equal(fftn(x),direct_dftn(x)) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [[[[ 5.9716569e+02 +0.j -4.7701973e+00 > -6.248201j > 7.2274468e+00 +2.123387j ..., -1.8242164e+01 +4.20633... > Array 2: [[[[ 1.4378537e+02+0.j > 3.8003180e-01-0.9697448j > 6.8443765e+00+3.6280536j ..., -2.9207977e+00+2.7763892j... 
> > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 183, in check_definition > assert_array_almost_equal(y,y1) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.125+0.j 0.25 +0.j 0.375+0.j 0.5 > +0.125j 0.125+0.j > 0.25 +0.j 0.375+0.j 0.5 +0.25j ] > Array 2: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 > -0.5j > 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... > > > ====================================================================== > FAIL: check_random_complex (scipy.fftpack.tests.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 211, in check_random_complex > assert_array_almost_equal (ifft(fft(x)),x) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.000438 +1.2422571e-03j 0.0081279 +1.3153027e-03j > 0.0058322 +2.5472850e-03j 0.0098841 +1.5452180e-02j > 0.007895... > Array 2: [ 0.0280313+0.0795045j 0.5201845+0.0841794j > 0.3732589+0.1630262j > 0.6325855+0.9889395j 0.5052878+0.7382801j 0.12457... 
> > > ====================================================================== > FAIL: check_random_real (scipy.fftpack.tests.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 217, in check_random_real > assert_array_almost_equal (ifft(fft(x)),x) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.4665868+0.j 0.0158205-0.0104689j > -0.0024869-0.0038622j > 0.0255389-0.0128942j 0.0246149-0.0121173j 0.00113... > Array 2: [ 0.5673542 0.8727196 0.7445887 0.5776637 0.910135 > 0.2396252 > 0.4235496 0.0873539 0.9231279 0.0897232 0.15871... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifftn) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 601, in check_definition > assert_array_almost_equal(ifftn(x),direct_idftn(x)) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [[[[ 4.9883390e-01 +0.0000000e+00j 7.9039717e-03 > +1.7999071e-03j > -2.5307390e-03 -2.3994048e-03j ..., 5.8921717... > Array 2: [[[[ 1.2726888e-01 +0.0000000e+00j 8.6321110e-04 > +2.0907331e-03j > 2.2428627e-03 +4.2203592e-04j ..., -2.7428294... 
> > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy-0.4.9.1788-py2.4-linux-i686.egg/scipy/fftpack/tests/test_basic.py", > line 341, in check_definition > assert_array_almost_equal(y,ifft(x1)) > File > "/usr/lib/python2.4/site-packages/numpy-0.9.7.2301-py2.4-linux-i686.egg/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 > 0.4356602 -0.375 > 0.9356602] > Array 2: [ 0.125+0.j 0.25 +0.375j 0.5 +0.125j 0.25 > +0.375j 0.5 +0.j > 0.25 -0.375j 0.5 -0.125j 0.25 -0.375j] > > > ---------------------------------------------------------------------- > Ran 1508 tests in 2.893s > > FAILED (failures=18) > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- Anyone who is capable of getting themselves made President should on no account be allowed to do the job. -- Douglas Adams, "The Hitchhiker's Guide to the Galaxy" From bgoli at sun.ac.za Thu Mar 30 04:01:23 2006 From: bgoli at sun.ac.za (Brett Olivier) Date: Thu, 30 Mar 2006 11:01:23 +0200 Subject: [SciPy-dev] problem importing numpy.lib.scimath from gplt Message-ID: <200603301101.23496.bgoli@sun.ac.za> Hi I picked this up using the gplt module in recent SVN numpy/scipy versions.
When trying to import stuff from numpy.lib.scimath (although it isn't specific to gplt): +++++++++++ In [2]: from numpy.lib.scimath import * --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) AttributeError: 'module' object has no attribute 'ERR_CALL' +++++++++++ I'm not sure exactly where to look for the problem. Thanks in advance Brett -- Brett G. Olivier Triple-J Group for Molecular Cell Physiology Stellenbosch University From pearu at scipy.org Thu Mar 30 13:06:21 2006 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 30 Mar 2006 12:06:21 -0600 (CST) Subject: [SciPy-dev] problem importing numpy.lib.scimath from gplt In-Reply-To: <200603301101.23496.bgoli@sun.ac.za> References: <200603301101.23496.bgoli@sun.ac.za> Message-ID: On Thu, 30 Mar 2006, Brett Olivier wrote: > Hi > > I picked this up using the gplt module in recent SVN numpy/scipy versions. > When trying to import stuff from numpy.lib.scimath (although it isn't > specific to gplt): > > +++++++++++ > In [2]: from numpy.lib.scimath import * > > --------------------------------------------------------------------------- > exceptions.AttributeError Traceback (most recent call last) > > AttributeError: 'module' object has no attribute 'ERR_CALL' > +++++++++++ > > I'm not sure exactly where to look for the problem. The problem was that numpy.lib.scimath.__all__ contained names that do not exist in the numpy.lib.scimath namespace. This is fixed in svn. Pearu
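[Editor's note: Pearu's diagnosis — a stale `__all__` — can be reproduced in a few lines. The sketch below is illustrative, not numpy's actual code: `demo_mod` is a throwaway module built on the spot, and `ERR_CALL` is reused merely as the name of a missing attribute. It shows why `from module import *` raises exactly the AttributeError Brett saw, and a quick audit that catches such mismatches.]

```python
import sys
import types

# Build a throwaway module whose __all__ advertises a name it never
# defines. ("demo_mod" and the reuse of "ERR_CALL" are illustrative
# stand-ins; the real bug was numpy.lib.scimath.__all__ listing names
# absent from that module.)
demo_mod = types.ModuleType("demo_mod")
demo_mod.sqrt = lambda x: x ** 0.5
demo_mod.__all__ = ["sqrt", "ERR_CALL"]  # ERR_CALL is never defined
sys.modules["demo_mod"] = demo_mod

# "from demo_mod import *" walks __all__ and fetches each listed name
# with getattr(), so the stale entry surfaces as an AttributeError:
star_import_error = None
try:
    exec("from demo_mod import *")
except AttributeError as exc:
    star_import_error = exc
print("import * failed:", star_import_error)

# A one-line audit of the kind that catches such mismatches:
missing = [n for n in demo_mod.__all__ if not hasattr(demo_mod, n)]
print("in __all__ but undefined:", missing)
```

The fix committed to svn was along these lines: prune `__all__` so that every name it lists actually exists in the module.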