From michael.sorich at gmail.com Wed Mar 1 01:40:52 2006
From: michael.sorich at gmail.com (Michael Sorich)
Date: Wed, 1 Mar 2006 17:10:52 +1030
Subject: [SciPy-user] Table like array
Message-ID: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com>

Hi,

I am looking for a table-like array: something like a 'data frame' object, for those familiar with the statistical languages R and Splus. This is mainly to hold and manipulate 2D spreadsheet-like data, which tends to be of relatively small size (compared to what many people seem to use numpy for), heterogeneous, have column and row names, and often contain missing data.

A RecArray seems potentially useful, as it allows different fields to have different data types and holds the name of each field. However, it doesn't seem easy to manipulate the data. Or perhaps I am simply having difficulty finding documentation on their features, e.g.:

- adding a new column/field (and, to a lesser extent, a new row/record) to the recarray
- changing the field/column names
- making a new table by selecting a subset of fields/columns (you can select a single field/column, but not multiple)
- merging tables (concatenate seems to allow a recarray to be added as new rows, but not as new columns)

It would also be nice for the table to deal easily with masked data (I have not tried this with recarray yet), and perhaps also to give the rows/records unique ids that could be used to select them (in addition to the row/record index), in the same way that the field names can select the fields.

Can anyone comment on this issue? In particular, does code already exist for this purpose, and if not, how might one best go about developing such a table-like array? (This would need to be limited to Python programming, as my ability to program in C is very limited.)

Thanks,

michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From oliphant.travis at ieee.org Wed Mar 1 02:15:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 01 Mar 2006 00:15:37 -0700 Subject: [SciPy-user] Table like array In-Reply-To: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> Message-ID: <44054A19.6040202@ieee.org> Michael Sorich wrote: > Hi, > > I am looking for a table like array. Something like a 'data frame' > object to those familiar with the statistical languages R and Splus. > This is mainly to hold and manipulate 2D spreadsheet like data, which > tends to be of relatively small size (compared to what many people > seem to use numpy for), heterogenous, have column and row names, and > often contains missing data. You could subclass the ndarray to produce one of these fairly easily, I think. The missing data item could be handled by a mask stored along with the array (or even in the array itself). Or you could use a masked array as your core object (though I'm not sure how it handles the arbitrary (i.e. record-like) data-types yet). Alternatively, and probably the easiest way to get started, you could just create your own table-like class and use simple 1-d arrays or 1-d masked arrays for each of the columns --- This has always been a way to store record-like tables. It really depends what you want the data-frames to be able to do and what you want them to "look-like." > A RecArray seems potentially useful, as it allows different fields to > have different data types and holds the name of the field. However it > doesn't seem easy to manipulate the data. Or perhaps I am simply > having difficulty finding documentation on there features. Adding a new column/field means basically creating a new array with a new data-type and copying data over into the already-defined fields. Data-types always have a fixed number of bytes per item. 
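The copy-into-a-new-array step described here can be illustrated concretely. This is a minimal sketch written against the modern NumPy structured-array API (the field names and data are made up for illustration, not code from this thread):

```python
import numpy as np

# A small structured ("record-like") array with two fields.
a = np.array([(1, 2.0), (3, 4.0)], dtype=[('f1', 'i4'), ('f2', 'f8')])

# "Adding" a field means building a new dtype and copying the existing
# fields into a freshly allocated array -- there is no in-place resize,
# because each record always occupies a fixed number of bytes.
new_dtype = np.dtype(a.dtype.descr + [('f3', 'f8')])
b = np.zeros(a.shape, dtype=new_dtype)
for name in a.dtype.names:
    b[name] = a[name]
b['f3'] = [9.0, 9.5]
```

A table class can hide exactly this dance behind an `add_column` method so the user never sees the copy.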
What those bytes represent can be quite arbitrary but it's always fixed. So, it is always "more work" to insert a new column. You could make that seamless in your table class so the user doesn't see it though. You'll want to thoroughly understand the dtype object including it's attributes and methods. Particularly the fields attribute of the dtype object. > eg > adding a new column/field (and to a lesser extent a new row/record) to > the recarray Adding a new row or record is actually similar because once an array is created it is usually resized by creating another array and copying the old array into it in the right places. > Changing the field/column names > make a new table by selecting a subset of fields/columns. (you can > select a single field/column, but not multiple). Right. So far you can't select multiple columns. It would be possible to add this feature with a little-bit of effort if there were a strong demand for it, but it would be much easier to do it in your subclass and/or container class. How many people would like to see x['f1','f2','f5'] return a new array with a new data-type descriptor constructed from the provided fields? > It would also be nice for the table to be able to deal easily with > masked data (I have not tried this with recarray yet) and perhaps also > to be able to give the rows/records unique ids that could be used to > select the rows/records (in addition to the row/record index), in the > same way that the fieldnames can select the fields. Adding fieldnames to the "rows" is definitely something that a subclass would be needed for. I'm not sure how you would even propose to select using row names. Would you also use getitem semantics? > Can anyone comment on this issue? Particularly whether code exists for > this purpose, and if not ideas about how best to go about developing > such a Table like array (this would need to be limited to python > programing as my ability to program in c is very limited). 
I don't know of code that already exists for this, but I don't think it would be too hard to construct your own data-frame object. I would probably start with an implementation that just used standard arrays of a particular type to represent the internal columns and then handle the indexing using your own over-riding of the __getitem__ and __setitem__ special methods. This would be the easiest to get working, I think. -Travis From oliphant.travis at ieee.org Wed Mar 1 02:36:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 01 Mar 2006 00:36:17 -0700 Subject: [SciPy-user] Table like array In-Reply-To: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> Message-ID: <44054EF1.7090502@ieee.org> Michael Sorich wrote: > Hi, > > I am looking for a table like array. Something like a 'data frame' > object to those familiar with the statistical languages R and Splus. It just occurred to me you might not have heard of RPy. RPy is a Python interface to the R language. Whether you want to actually interface with R or not. It has defined something called the DataFrame class to interface with R's data-frames. You could start there and just use arrays to store the actual column data... http://rpy.sourceforge.net/rpy/doc/manual_html/DataFrame-class.html That example shows that a simple data-frame is just a dictionary keyed by column name. You could then add a key for your "row names" and use that to access data using row-names. In fact, the record data-types are also dictionary-based (look at the fields method of a data-type object). It makes me think that you could get something very much what you want using your own class that just wraps 1-d arrays (or even lists). 
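The wrap-a-dictionary idea can be sketched in a few lines of plain Python. This is a hypothetical, minimal class using lists in place of 1-d arrays; all names here are illustrative, not an existing API:

```python
class MiniFrame:
    """A toy data-frame: a dict of equal-length columns plus row names."""

    def __init__(self, columns, rownames=None):
        self.columns = dict(columns)  # column name -> list of values
        n = len(next(iter(self.columns.values())))
        self.rownames = list(rownames) if rownames else list(range(n))

    def __getitem__(self, key):
        # A column name returns the whole column; a row name returns
        # that row as a dict keyed by column name.
        if key in self.columns:
            return self.columns[key]
        row = self.rownames.index(key)
        return {name: col[row] for name, col in self.columns.items()}

    def add_column(self, name, values):
        self.columns[name] = list(values)

df = MiniFrame({'height': [1.2, 1.8], 'mass': [40, 75]},
               rownames=['alice', 'bob'])
```

Selection by column name and by row name both go through `__getitem__` here, much as record dtypes dispatch on field names via their fields dictionary.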
-Travis From nwagner at mecha.uni-stuttgart.de Wed Mar 1 02:46:03 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 08:46:03 +0100 Subject: [SciPy-user] Modified makefile for gcc 4.x Message-ID: <4405513B.7070708@mecha.uni-stuttgart.de> Hi Hanno, Please can you send me your modified ATLAS makefile. Thanks in advance Nils From nwagner at mecha.uni-stuttgart.de Wed Mar 1 04:39:14 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 10:39:14 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 Message-ID: <44056BC2.505@mecha.uni-stuttgart.de> Hi all, It seems to me that installing numpy/scipy on SuSE10.0 is not straightforward in contrast to prior versions. If I remove compat-g77 and try to rebuild numpy from scratch the ifc is used. I thought that gfortran will be used in that case. Am I missing something ? Anyway python setup.py build results in ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a when searching for -lfblas_src ld: cannot find -lfblas_src ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a when searching for -lfblas_src ld: cannot find -lfblas_src error: Command "/opt2/intel/compiler70/ia32/bin/ifc -shared build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -Lbuild/temp.linux-x86_64-2.4 -lfblas_src -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1 Nils Is there someone on the list who has successfully installed numpy/scipy with or without ATLAS using SuSE 10.0 ? http://www.novell.com/products/linuxpackages/professional/compat-g77.html From zpincus at stanford.edu Wed Mar 1 04:54:01 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 1 Mar 2006 01:54:01 -0800 Subject: [SciPy-user] scipy.stats.ttest_ind broken? Message-ID: Hi folks, I'm using scipy 0.4.6 with numpy 0.9.5, and I have noticed that the t- test in the stats library is broken. 
scipy.stats.ttest_ind([1,2,3], [4,5,6])

---------------------------------------------------------------------------
exceptions.TypeError    Traceback (most recent call last)

/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/stats/stats.py in ttest_ind(a, b, axis, printit, name1, name2, writemode)
   1461         if type(t) == ArrayType:
   1462             probs = reshape(probs,t.shape)
-> 1463         if len(probs) == 1:
   1464             probs = probs[0]
   1465

TypeError: len() of unsized object

What's happening is that betai is returning a scalar value, which has no length. This causes the len(probs) call to fail.

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

From ckkart at hoc.net Wed Mar 1 05:18:15 2006
From: ckkart at hoc.net (Christian Kristukat)
Date: Wed, 01 Mar 2006 19:18:15 +0900
Subject: [SciPy-user] gfortran, ifc, compat-g77
In-Reply-To: <44056BC2.505@mecha.uni-stuttgart.de>
References: <44056BC2.505@mecha.uni-stuttgart.de>
Message-ID: <440574E7.3080708@hoc.net>

Nils Wagner wrote:
> Hi all,
>
> It seems to me that installing numpy/scipy on SuSE10.0 is not
> straightforward in contrast
> to prior versions.
>
> If I remove compat-g77 and try to rebuild numpy from scratch the ifc is
> used.
> I thought that gfortran will be used in that case.
> Am I missing something ?
> > Anyway > > python setup.py build results in > > ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a > when searching for -lfblas_src > ld: cannot find -lfblas_src > ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a > when searching for -lfblas_src > ld: cannot find -lfblas_src > error: Command "/opt2/intel/compiler70/ia32/bin/ifc -shared > build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o > -Lbuild/temp.linux-x86_64-2.4 -lfblas_src -o > build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1 > > > Nils > > Is there someone on the list who has successfully installed numpy/scipy > with or without ATLAS using SuSE 10.0 ? Yes, me (again). Maybe this is a processor issue, but for an Intel P4, I only can repeat: if you have installed gcc4/gfortran and no other compiler, following the installation instructions for ATLAS on the wiki building scipy _is_ straightforward. You won't need any additional information, even building ATLAS with full is covered there. 
You could also try to force to use gfortran like this: python setup.py config_fc --fcompiler=g95 build Regards, Christian ps: I can provide numpy/scipy rpms for SuSE10 with ATLAS built on that processor: vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Pentium(R) 4 CPU 3.00GHz stepping : 1 cpu MHz : 3007.387 cache size : 1024 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pni monitor ds_cpl cid xtpr bogomips : 6019.80 From ckkart at hoc.net Wed Mar 1 05:21:55 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 01 Mar 2006 19:21:55 +0900 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <440574E7.3080708@hoc.net> References: <44056BC2.505@mecha.uni-stuttgart.de> <440574E7.3080708@hoc.net> Message-ID: <440575C3.1050303@hoc.net> Christian Kristukat wrote: > instructions for ATLAS on the wiki building scipy _is_ straightforward. You > won't need any additional information, even building ATLAS with full is covered Sorry, this should read: "... ATLAS with full LAPACK..." Christian From pearu at scipy.org Wed Mar 1 04:32:48 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 1 Mar 2006 03:32:48 -0600 (CST) Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <440574E7.3080708@hoc.net> References: <44056BC2.505@mecha.uni-stuttgart.de> <440574E7.3080708@hoc.net> Message-ID: On Wed, 1 Mar 2006, Christian Kristukat wrote: > You could also try to force to use gfortran like this: > > python setup.py config_fc --fcompiler=g95 build g95 is not gfortran. One should use python setup.py config_fc --fcompiler=gnu95 build for forcing gfortran. 
Pearu From nwagner at mecha.uni-stuttgart.de Wed Mar 1 05:33:30 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 11:33:30 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <440574E7.3080708@hoc.net> References: <44056BC2.505@mecha.uni-stuttgart.de> <440574E7.3080708@hoc.net> Message-ID: <4405787A.1060609@mecha.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >> Hi all, >> >> It seems to me that installing numpy/scipy on SuSE10.0 is not >> straightforward in contrast >> to prior versions. >> >> If I remove compat-g77 and try to rebuild numpy from scratch the ifc is >> used. >> I thought that gfortran will be used in that case. >> Am I missing something ? >> >> Anyway >> >> python setup.py build results in >> >> ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a >> when searching for -lfblas_src >> ld: cannot find -lfblas_src >> ld: skipping incompatible build/temp.linux-x86_64-2.4/libfblas_src.a >> when searching for -lfblas_src >> ld: cannot find -lfblas_src >> error: Command "/opt2/intel/compiler70/ia32/bin/ifc -shared >> build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o >> -Lbuild/temp.linux-x86_64-2.4 -lfblas_src -o >> build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1 >> >> >> Nils >> >> Is there someone on the list who has successfully installed numpy/scipy >> with or without ATLAS using SuSE 10.0 ? >> > > Yes, me (again). > Maybe this is a processor issue, but for an Intel P4, I only can repeat: if you > have installed gcc4/gfortran and no other compiler, following the installation > instructions for ATLAS on the wiki building scipy _is_ straightforward. You > won't need any additional information, even building ATLAS with full is covered > there. 
> > You could also try to force to use gfortran like this: > > python setup.py config_fc --fcompiler=g95 build > > Regards, Christian > > ps: I can provide numpy/scipy rpms for SuSE10 with ATLAS built on that processor: > > vendor_id : GenuineIntel > cpu family : 15 > model : 4 > model name : Intel(R) Pentium(R) 4 CPU 3.00GHz > stepping : 1 > cpu MHz : 3007.387 > cache size : 1024 KB > physical id : 0 > siblings : 2 > core id : 0 > cpu cores : 1 > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 5 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov > pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pni monitor ds_cpl cid > xtpr > bogomips : 6019.80 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Just in the case of a processor issue ... processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 47 model name : AMD Athlon(tm) 64 Processor 3200+ stepping : 2 cpu MHz : 2000.141 cache size : 512 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni lahf_lm bogomips : 4009.73 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc Nils From pebarrett at gmail.com Wed Mar 1 09:44:24 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 1 Mar 2006 09:44:24 -0500 Subject: [SciPy-user] [Numpy-discussion] Re: Table like array In-Reply-To: <44054A19.6040202@ieee.org> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> <44054A19.6040202@ieee.org> Message-ID: <40e64fa20603010644x129d3f63ka07ab0b061469fd4@mail.gmail.com> On 3/1/06, Travis Oliphant wrote: > > > How many people would like 
to see x['f1','f2','f5'] return a new array > with a new data-type descriptor constructed from the provided fields? > > +1 I'm surprised that it's not already available. -- Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at mecha.uni-stuttgart.de Wed Mar 1 11:08:35 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 17:08:35 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <440575C3.1050303@hoc.net> References: <44056BC2.505@mecha.uni-stuttgart.de> <440574E7.3080708@hoc.net> <440575C3.1050303@hoc.net> Message-ID: <4405C703.3050508@mecha.uni-stuttgart.de> Christian Kristukat wrote: > Christian Kristukat wrote: > >> instructions for ATLAS on the wiki building scipy _is_ straightforward. You >> won't need any additional information, even building ATLAS with full is covered >> > > Sorry, this should read: "... ATLAS with full LAPACK..." > > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi all, After compiling ATLAS umpteen times :'( I was able to install numpy and scipy. numpy.test(1,10) passed while scipy.test(1,10) check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_algebraic_log_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad)STOP 778 Any idea ? 
Nils From cimrman3 at ntc.zcu.cz Wed Mar 1 11:32:36 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 01 Mar 2006 17:32:36 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <664793EC-254A-4381-906F-72AD5CD1A918@ftw.at> References: <4402C496.90204@ftw.at> <4402D324.4090207@ntc.zcu.cz> <664793EC-254A-4381-906F-72AD5CD1A918@ftw.at> Message-ID: <4405CCA4.1080105@ntc.zcu.cz> >>Do you also plan to add the c-based linked-list matrix as in PySparse >>(ll_mat.c there)? This could be even faster than using the Python >>lists >>(IMHO...). > > > Well, I guess it would be nice to have, and the code's already > written, but I don't know how we'd make it derive from the spmatrix > base class, which is written in Python. Travis mentioned back in > October that this is possible but not easy. So it would require some well, we might eventually write a llmat class in Python that will 'borrow' (and appreciate, of course ;-)) relevant ll_mat functions - we do not need the ll_mat Python object. I can live without it for now, though :-) > work. I don't need the extra speed personally -- the new class seems > to be fast enough for my needs (the bottleneck for my work is now > elsewhere :) OK, I see... Can you tell me where? Just curious :-) > An update: I've changed the matrix.__mul__ function in NumPy SVN to > return NotImplemented if the right operand defines __rmul__ and isn't > a NumPy-compatible type. This seems to work fine for * now. > Functions like numpy.dot() still won't work on sparse matrices, but I > don't really have a problem with this ;) Fine with me... In the meantime, I have added a rudimentary umfpack support to the sparse module - it is used when present by 'solve' (and can be switched off). I have also fixed the umfpack module in the sandbox for complex matrices. 
(At least I hope so :)) Still, the umfpack must be installed separately, doing the classical 'python setup.py install' in its sandbox home, because I am still struggling with a proper system_info class to detect the umfpack libraries in the system. Any help/ideas would be appreciated. r. From nwagner at mecha.uni-stuttgart.de Wed Mar 1 11:46:30 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 17:46:30 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <4405C703.3050508@mecha.uni-stuttgart.de> References: <44056BC2.505@mecha.uni-stuttgart.de> <440574E7.3080708@hoc.net> <440575C3.1050303@hoc.net> <4405C703.3050508@mecha.uni-stuttgart.de> Message-ID: <4405CFE6.6050603@mecha.uni-stuttgart.de> Nils Wagner wrote: > Christian Kristukat wrote: > >> Christian Kristukat wrote: >> >> >>> instructions for ATLAS on the wiki building scipy _is_ straightforward. You >>> won't need any additional information, even building ATLAS with full is covered >>> >>> >> Sorry, this should read: "... ATLAS with full LAPACK..." >> >> Christian >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> >> > Hi all, > > After compiling ATLAS umpteen times :'( > I was able to install numpy and scipy. > numpy.test(1,10) passed while > scipy.test(1,10) > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_algebraic_log_weight > (scipy.integrate.tests.test_quadpack.test_quad) ... ok > check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok > check_cosine_weighted_infinite > (scipy.integrate.tests.test_quadpack.test_quad)STOP 778 > > Any idea ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Sorry for replying to myself. 
I have used gdb to get further information This is the output snip check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_algebraic_log_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad)STOP 778 Program exited with code 012. (gdb) bt No stack. Any suggestion ? Nils From strawman at astraw.com Wed Mar 1 12:42:26 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 01 Mar 2006 09:42:26 -0800 Subject: [SciPy-user] Table like array In-Reply-To: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> Message-ID: <4405DD02.20901@astraw.com> Dear Michael, Here's something I wrote several years ago and which maybe you can use as a starting point. It's certainly not the best code (and some of the methods could clearly just be chopped as they were written for special data-analysis cases), but it's been quite functional for me. It doesn't have explicit support for masked arrays. I would be interested in any progress you make, but I don't really have time to contribute to further development at this point. Cheers! Andrew Michael Sorich wrote: > Hi, > > I am looking for a table like array. Something like a 'data frame' > object to those familiar with the statistical languages R and Splus. > This is mainly to hold and manipulate 2D spreadsheet like data, which > tends to be of relatively small size (compared to what many people > seem to use numpy for), heterogenous, have column and row names, and > often contains missing data. A RecArray seems potentially useful, as > it allows different fields to have different data types and holds the > name of the field. However it doesn't seem easy to manipulate the > data. Or perhaps I am simply having difficulty finding documentation > on there features. 
> eg > adding a new column/field (and to a lesser extent a new row/record) to > the recarray > Changing the field/column names > make a new table by selecting a subset of fields/columns. (you can > select a single field/column, but not multiple). > merging tables (concatenate seems to allow a recarray to be added as > new rows but not columns) > It would also be nice for the table to be able to deal easily with > masked data (I have not tried this with recarray yet) and perhaps also > to be able to give the rows/records unique ids that could be used to > select the rows/records (in addition to the row/record index), in the > same way that the fieldnames can select the fields. > > Can anyone comment on this issue? Particularly whether code exists for > this purpose, and if not ideas about how best to go about developing > such a Table like array (this would need to be limited to python > programing as my ability to program in c is very limited). > > Thanks, > > michael > > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -------------- next part -------------- A non-text attachment was scrubbed... Name: data_frame.py Type: text/x-python Size: 17578 bytes Desc: not available URL: From emsellem at obs.univ-lyon1.fr Wed Mar 1 14:19:25 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 01 Mar 2006 20:19:25 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: References: Message-ID: <4405F3BD.2060402@obs.univ-lyon1.fr> Hi, I thought I answered to your email before. Yes compiling on Suse 10 was not a problem for me. Let me know. Cheers Eric > Hi all, > > It seems to me that installing numpy/scipy on SuSE10.0 is not > straightforward in contrast > to prior versions. > > If I remove compat-g77 and try to rebuild numpy from scratch the ifc is > used. 
> I thought that gfortran will be used in that case. > Am I missing something ? > > Anyway > > -- =============================================================== Observatoire de Lyon emsellem at obs.univ-lyon1.fr 9 av. Charles-Andre tel: +33 4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem =============================================================== From nwagner at mecha.uni-stuttgart.de Wed Mar 1 14:30:52 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Mar 2006 20:30:52 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <4405F3BD.2060402@obs.univ-lyon1.fr> References: <4405F3BD.2060402@obs.univ-lyon1.fr> Message-ID: On Wed, 01 Mar 2006 20:19:25 +0100 Eric Emsellem wrote: > Hi, > I thought I answered to your email before. Yes compiling >on Suse 10 was > not a problem for me. > Let me know. > > Cheers > Eric >> Hi all, >> >> It seems to me that installing numpy/scipy on SuSE10.0 >>is not >> straightforward in contrast >> to prior versions. >> >> If I remove compat-g77 and try to rebuild numpy from >>scratch the ifc is >> used. >> I thought that gfortran will be used in that case. >> Am I missing something ? >> >> Anyway >> >> > > -- > =============================================================== > Observatoire de Lyon > emsellem at obs.univ-lyon1.fr > 9 av. Charles-Andre tel: +33 4 78 >86 83 84 > 69561 Saint-Genis Laval Cedex fax: +33 4 78 >86 83 86 >France > http://www-obs.univ-lyon1.fr/eric.emsellem > =============================================================== > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Hi Eric, Finally I was able to compile ATLAS on SuSE 10.0. BTW, what is the output of cat /proc/cpuinfo ? numpy.test(1,10) works fine. Now I have some trouble with scipy.test(1,10). Can you reproduce this failure ? 
This is the output of scipy.test(1,10)

snip

check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok
check_algebraic_log_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok
check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) ... ok
check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad)STOP 778

Program exited with code 012.
(gdb) bt
No stack.

Nils

From zpincus at stanford.edu Wed Mar 1 15:03:32 2006
From: zpincus at stanford.edu (Zachary Pincus)
Date: Wed, 1 Mar 2006 12:03:32 -0800
Subject: [SciPy-user] scipy.stats.ttest_ind broken?
In-Reply-To: 
References: 
Message-ID: <8823D52D-C5D2-4593-A606-8DE5A797723C@stanford.edu>

Can anyone please verify this? The basic T-test should not be broken for 1D arrays.

import scipy.stats
scipy.stats.ttest_ind([1,2,3], [2,3,4])

TypeError: len() of unsized object

For 1d arrays, all the values that the ttest computes are scalars, but it assumes that it would be an array. A patch (though I think there could be a better patch that doesn't need a try block): stats.py, change lines 1463-1464 to:

    try:
        if len(probs) == 1:
            probs = probs[0]
    except TypeError:
        pass

Zach

On Mar 1, 2006, at 1:54 AM, Zachary Pincus wrote:

> Hi folks,
>
> I'm using scipy 0.4.6 with numpy 0.9.5, and I have noticed that the
> t-test in the stats library is broken.
>
> scipy.stats.ttest_ind([1,2,3], [4,5,6])
> ---------------------------------------------------------------------------
> exceptions.TypeError    Traceback (most recent call last)
>
> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-
> packages/scipy/stats/stats.py in ttest_ind(a, b, axis, printit,
> name1, name2, writemode)
>    1461         if type(t) == ArrayType:
>    1462             probs = reshape(probs,t.shape)
> -> 1463         if len(probs) == 1:
>    1464             probs = probs[0]
>    1465
>
> TypeError: len() of unsized object
>
> What's happening is that betai is returning a scalar value, which has
> no length. This causes the len(probs) call to fail.
> > Zach Pincus > > Program in Biomedical Informatics and Department of Biochemistry > Stanford University School of Medicine > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From schofield at ftw.at Thu Mar 2 08:45:20 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 02 Mar 2006 14:45:20 +0100 Subject: [SciPy-user] Matplotlib wiki cookbook link Message-ID: <4406F6F0.3030008@ftw.at> Hi Andrew, It seems the matplotlib site has a link to http://www.scipy.org/wikis/topical_software/MatplotlibCookbook, which doesn't exist. Could you please configure the wiki so this URL that refers to the correct page (http://www.scipy.org/Cookbook/Matplotlib)? Or is it possible for normal wiki users to do this? -- Ed From ryanlists at gmail.com Thu Mar 2 11:18:35 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 2 Mar 2006 11:18:35 -0500 Subject: [SciPy-user] block diagram or UML generator Message-ID: Does anyone know of a Python tool to automatically generate a block diagram or UML sketch of a class based on parsing existing python code? I am trying to think about how to talk about my code from a big picture perspective in my thesis and I don't think that my committee members will care a lot about the details or want to read too much actual code. Any suggestions? Thanks, Ryan From josegomez at gmx.net Thu Mar 2 11:30:50 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Thu, 2 Mar 2006 17:30:50 +0100 (MET) Subject: [SciPy-user] Compiling scipy on Cygwin Message-ID: <22585.1141317050@www011.gmx.net> Hi! I have succeeded compiling the SVN version of Numpy on Cygwin. It works, and with it, so does F2PY (using g77/Cygwin). This is very good news. However, I can't yet compile scipy on Cygwin. While I have the LAPACK and BLAS libraries installed from the Cygwin repository, these are not found by the setup program. 
Do I need to use ATLAS, or compile my own LAPACK+BLAS? Can I use Cygwin's version? Thanks! Jose -- Save up to 70% of your online costs: GMX SmartSurfer! Free download: http://www.gmx.net/de/go/smartsurfer From matthew at sel.cam.ac.uk Thu Mar 2 11:50:52 2006 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Thu, 2 Mar 2006 16:50:52 +0000 Subject: [SciPy-user] block diagram or UML generator In-Reply-To: References: Message-ID: <70DEA064-C25B-4A4C-AD90-CAED4DA2360A@sel.cam.ac.uk> On 2 Mar 2006, at 16:18, Ryan Krauss wrote: > Does anyone know of a Python tool to automatically generate a block > diagram or UML sketch of a class based on parsing existing python > code? You could probably knock something up using graphviz; there might even be such a tool extant. http://www.research.att.com/sw/tools/graphviz/ Regards, Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From aarre at pair.com Thu Mar 2 12:09:43 2006 From: aarre at pair.com (Aarre Laakso) Date: Thu, 02 Mar 2006 12:09:43 -0500 Subject: [SciPy-user] block diagram or UML generator In-Reply-To: <70DEA064-C25B-4A4C-AD90-CAED4DA2360A@sel.cam.ac.uk> References: <70DEA064-C25B-4A4C-AD90-CAED4DA2360A@sel.cam.ac.uk> Message-ID: <440726D7.2090001@pair.com> Matthew Vernon wrote: > On 2 Mar 2006, at 16:18, Ryan Krauss wrote: > >> Does anyone know of a Python tool to automatically generate a block >> diagram or UML sketch of a class based on parsing existing python >> code? > > You could probably knock something up using graphviz; there might > even be such a tool extant.
> > http://www.research.att.com/sw/tools/graphviz/ > > Regards, > > Matthew > I have a short list of tools that do this on my Wiki: http://www.laakshmi.com/aarre/wiki/index.php/UML_for_Python -- Aarre Laakso http://www.laakshmi.com/aarre/ From strawman at astraw.com Thu Mar 2 14:09:44 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 02 Mar 2006 11:09:44 -0800 Subject: [SciPy-user] Matplotlib wiki cookbook link In-Reply-To: <4406F6F0.3030008@ftw.at> References: <4406F6F0.3030008@ftw.at> Message-ID: <440742F8.2050303@astraw.com> Done. But we should really update the MPL site. Where is the link there that needs to be fixed? Ed Schofield wrote: >Hi Andrew, > >It seems the matplotlib site has a link to >http://www.scipy.org/wikis/topical_software/MatplotlibCookbook, which >doesn't exist. Could you please configure the wiki so this URL that >refers to the correct page (http://www.scipy.org/Cookbook/Matplotlib)? >Or is it possible for normal wiki users to do this? > >-- Ed > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From jdhunter at ace.bsd.uchicago.edu Thu Mar 2 14:29:27 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Thu, 02 Mar 2006 13:29:27 -0600 Subject: [SciPy-user] Matplotlib wiki cookbook link In-Reply-To: <440742F8.2050303@astraw.com> (Andrew Straw's message of "Thu, 02 Mar 2006 11:09:44 -0800") References: <4406F6F0.3030008@ftw.at> <440742F8.2050303@astraw.com> Message-ID: <87ek1klk6g.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Andrew" == Andrew Straw writes: Andrew> Done. But we should really update the MPL site. Where is Andrew> the link there that needs to be fixed? It's fixed now, as is the broken link to the user's guide. 
Thanks, JDH From schofield at ftw.at Thu Mar 2 17:15:11 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 2 Mar 2006 23:15:11 +0100 Subject: [SciPy-user] Matplotlib wiki cookbook link In-Reply-To: <87ek1klk6g.fsf@peds-pc311.bsd.uchicago.edu> References: <4406F6F0.3030008@ftw.at> <440742F8.2050303@astraw.com> <87ek1klk6g.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <1217BE8C-EF9A-40DC-9AF7-EAB5DA96ACD8@ftw.at> On 02/03/2006, at 8:29 PM, John Hunter wrote: >>>>>> "Andrew" == Andrew Straw writes: > > Andrew> Done. But we should really update the MPL site. Where is > Andrew> the link there that needs to be fixed? > > It's fixed now, as is the broken link to the user's guide. Well done, guys! -- Ed From ryanlists at gmail.com Thu Mar 2 17:32:05 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 2 Mar 2006 17:32:05 -0500 Subject: [SciPy-user] problem with odeint Message-ID: I am having a problem with odeint. My integration runs o.k. if I choose a fairly short time interval, but when I set the maximum value for my t vector higher than about 1 second, I get this message: In [178]: run sweptsine_ode.py lsoda-- at t (=r1), too much accuracy requested for precision of machine.. see tolsf (=r2) ls in above, r1 = 0.5448183650273E+00 r2 = NAN Excess accuracy requested (tolerances too small). Run with full_output = 1 to get quantitative information. Running with fulloutput produces: {'hu': array([ 0.00020682, 0.00043534, 0.00043534, ..., 0. , 0. , 0. ]), 'imxer': -1267232872, 'leniw': 23, 'lenrw': 68, 'message': 'Excess accuracy requested (tolerances too small).', 'mused': array([1, 1, 1, ..., 0, 0, 0]), 'nfe': array([19, 27, 31, ..., 0, 0, 0]), 'nje': array([0, 0, 0, ..., 0, 0, 0]), 'nqu': array([2, 3, 3, ..., 0, 0, 0]), 'nst': array([ 9, 13, 15, ..., 0, 0, 0]), 'tcur': array([ 0.00106956, 0.00235387, 0.00322454, ..., 0. , 0. , 0. 
]), 'tolsf': array([ 7.88983883e-314, 7.88983883e-314, 7.88983883e-314, ..., 0.00000000e+000, 0.00000000e+000, 0.00000000e+000]), 'tsw': array([ 0., 0., 0., ..., 0., 0., 0.])} The script is attached and should be self contained. Messing with atol and rtol doesn't seem to help, so I don't know what the message about requested accuracy means. Any help/direction in fixing this would be appreciated. Thanks, Ryan -------------- next part -------------- A non-text attachment was scrubbed... Name: sweptsine_ode.py Type: text/x-python Size: 1491 bytes Desc: not available URL: From michael.sorich at gmail.com Thu Mar 2 18:35:53 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 3 Mar 2006 10:05:53 +1030 Subject: [SciPy-user] Table like array In-Reply-To: <44054A19.6040202@ieee.org> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> <44054A19.6040202@ieee.org> Message-ID: <16761e100603021535q46f2a7b9x94a2016aad2912bb@mail.gmail.com> On 3/1/06, Travis Oliphant wrote: > > Michael Sorich wrote: > > > Hi, > > > > I am looking for a table like array. Something like a 'data frame' > > object to those familiar with the statistical languages R and Splus. > > This is mainly to hold and manipulate 2D spreadsheet like data, which > > tends to be of relatively small size (compared to what many people > > seem to use numpy for), heterogenous, have column and row names, and > > often contains missing data. > > You could subclass the ndarray to produce one of these fairly easily, I > think. The missing data item could be handled by a mask stored along > with the array (or even in the array itself). Or you could use a masked > array as your core object (though I'm not sure how it handles the > arbitrary (i.e. record-like) data-types yet). Thanks for the replies. You mention that missing data could be stored in the array itself. Can one use nan to indicate missing data? 
In some ways it seems more convenient to store this data in the array itself rather than have a second mask array. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Thu Mar 2 18:59:50 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 02 Mar 2006 16:59:50 -0700 Subject: [SciPy-user] Table like array In-Reply-To: <16761e100603021535q46f2a7b9x94a2016aad2912bb@mail.gmail.com> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> <44054A19.6040202@ieee.org> <16761e100603021535q46f2a7b9x94a2016aad2912bb@mail.gmail.com> Message-ID: <440786F6.9040300@ee.byu.edu> Michael Sorich wrote: > On 3/1/06, *Travis Oliphant* > wrote: > > Michael Sorich wrote: > > > Hi, > > > > I am looking for a table like array. Something like a 'data frame' > > object to those familiar with the statistical languages R and Splus. > > This is mainly to hold and manipulate 2D spreadsheet like data, > which > > tends to be of relatively small size (compared to what many people > > seem to use numpy for), heterogenous, have column and row names, and > > often contains missing data. > > You could subclass the ndarray to produce one of these fairly > easily, I > think. The missing data item could be handled by a mask stored along > with the array (or even in the array itself). Or you could use a > masked > array as your core object (though I'm not sure how it handles the > arbitrary (i.e. record-like) data-types yet). > > > Thanks for the replies. You mention that missing data could be stored > in the array itself. Can one use nan to indicate missing data? In some > ways it seems more convenient to store this data in the array itself > rather than have a second mask array. Yes, that's the approach I usually take for floating-point data. There are some speed concerns for large arrays because I think operations with nans can be slower. But, I have not tested that statement recently. 
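To make the trade-off concrete — nan stored in the data itself versus a separate mask — here is a small sketch with numpy; note that nan is only available for floating-point columns, which is one reason the mask approach generalizes better to record-like tables:

```python
import numpy as np

# nan as the in-band missing marker (float dtypes only):
col = np.array([1.0, np.nan, 3.0])
mean_ignoring_missing = col[~np.isnan(col)].mean()
print(mean_ignoring_missing)   # mean over the non-missing entries

# The same column as a masked array; the mask works for any dtype:
mcol = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
print(mcol.mean())             # skips the masked entry
```

Both prints give 2.0 here; the difference shows up with integer or string columns, where there is no nan to store and only the masked-array form still works.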
-Travis From ckkart at hoc.net Thu Mar 2 21:03:56 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 03 Mar 2006 11:03:56 +0900 Subject: [SciPy-user] minimizers don't work - d1mach problem Message-ID: <4407A40C.8080000@hoc.net> Hi, sorry for posting this again, but I'd really like to have the minimizers work again. Every call to any of the miminizers scipy.fmin* fails with: Adjust D1MACH by uncommenting data statements appropriate for your machine. STOP 779 I already looked at Lib/special/d1mach.f but I don't see what I should change there. Anyway, d1mach.f hasn't been changed for ages so I doubt that the problem is there. Here's my configuration: P4 Python 2.4.1 (#1, Sep 13 2005, 00:39:20) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 numpy/scipy from svn ATLAS built with gfortran Regards, Christian From robert.kern at gmail.com Thu Mar 2 21:21:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Mar 2006 20:21:31 -0600 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407A40C.8080000@hoc.net> References: <4407A40C.8080000@hoc.net> Message-ID: <4407A82B.6000100@gmail.com> Christian Kristukat wrote: > Hi, > sorry for posting this again, but I'd really like to have the minimizers work > again. Every call to any of the miminizers scipy.fmin* fails with: > > Adjust D1MACH by uncommenting data statements > appropriate for your machine. > STOP 779 > > I already looked at Lib/special/d1mach.f but I don't see what I should change > there. Anyway, d1mach.f hasn't been changed for ages so I doubt that the problem > is there. > > Here's my configuration: > > P4 > > Python 2.4.1 (#1, Sep 13 2005, 00:39:20) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > > numpy/scipy from svn > ATLAS built with gfortran Odd. For your configuration, D1MACH ought to be taking the IEEE LITTLE ENDIAN branch earlier on, and the STOP 779 statements should never be reached. 
Try adding these lines to diagnose the issue: Index: Lib/special/mach/d1mach.f =================================================================== --- Lib/special/mach/d1mach.f (revision 1630) +++ Lib/special/mach/d1mach.f (working copy) @@ -60,6 +60,9 @@ C ON FIRST CALL, IF NO DATA UNCOMMENTED, TEST MACHINE TYPES. IF (SC .NE. 987) THEN DMACH(1) = 1.D13 + write(*,*) 'DMACH(1) == ', DMACH(1) + write(*,*) 'SMALL(1) == ', SMALL(1) + write(*,*) 'SMALL(2) == ', SMALL(2) IF ( SMALL(1) .EQ. 1117925532 * .AND. SMALL(2) .EQ. -448790528) THEN * *** IEEE BIG ENDIAN *** -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ckkart at hoc.net Thu Mar 2 22:02:51 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 03 Mar 2006 12:02:51 +0900 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407A82B.6000100@gmail.com> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> Message-ID: <4407B1DB.90708@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Hi, >> sorry for posting this again, but I'd really like to have the minimizers work >> again. Every call to any of the miminizers scipy.fmin* fails with: >> >> Adjust D1MACH by uncommenting data statements >> appropriate for your machine. >> STOP 779 >> >> I already looked at Lib/special/d1mach.f but I don't see what I should change >> there. Anyway, d1mach.f hasn't been changed for ages so I doubt that the problem >> is there. >> >> Here's my configuration: >> >> P4 >> >> Python 2.4.1 (#1, Sep 13 2005, 00:39:20) >> [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 >> >> numpy/scipy from svn >> ATLAS built with gfortran > > Odd. For your configuration, D1MACH ought to be taking the IEEE LITTLE ENDIAN > branch earlier on, and the STOP 779 statements should never be reached. 
Try > adding these lines to diagnose the issue: > > Index: Lib/special/mach/d1mach.f > =================================================================== > --- Lib/special/mach/d1mach.f (revision 1630) > +++ Lib/special/mach/d1mach.f (working copy) > @@ -60,6 +60,9 @@ > C ON FIRST CALL, IF NO DATA UNCOMMENTED, TEST MACHINE TYPES. > IF (SC .NE. 987) THEN > DMACH(1) = 1.D13 > + write(*,*) 'DMACH(1) == ', DMACH(1) > + write(*,*) 'SMALL(1) == ', SMALL(1) > + write(*,*) 'SMALL(2) == ', SMALL(2) > IF ( SMALL(1) .EQ. 1117925532 > * .AND. SMALL(2) .EQ. -448790528) THEN > * *** IEEE BIG ENDIAN *** > There's no additional output, so the if condition seems to be False. There's another d1mach.f in Lib/integrate/mach. Which one is the relevant here? Btw, the minimizers partly run, maybe in case when the gradients are far from IEEE numeric limits? Christian From robert.kern at gmail.com Thu Mar 2 22:18:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Mar 2006 21:18:34 -0600 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407B1DB.90708@hoc.net> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> <4407B1DB.90708@hoc.net> Message-ID: <4407B58A.7010402@gmail.com> Christian Kristukat wrote: > Robert Kern wrote: > >>Christian Kristukat wrote: >> >>>Hi, >>>sorry for posting this again, but I'd really like to have the minimizers work >>>again. Every call to any of the miminizers scipy.fmin* fails with: >>> >>> Adjust D1MACH by uncommenting data statements >>> appropriate for your machine. >>>STOP 779 >>> >>>I already looked at Lib/special/d1mach.f but I don't see what I should change >>>there. Anyway, d1mach.f hasn't been changed for ages so I doubt that the problem >>>is there. >>> >>>Here's my configuration: >>> >>>P4 >>> >>>Python 2.4.1 (#1, Sep 13 2005, 00:39:20) >>>[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 >>> >>>numpy/scipy from svn >>>ATLAS built with gfortran >> >>Odd. 
For your configuration, D1MACH ought to be taking the IEEE LITTLE ENDIAN >>branch earlier on, and the STOP 779 statements should never be reached. Try >>adding these lines to diagnose the issue: >> >>Index: Lib/special/mach/d1mach.f >>=================================================================== >>--- Lib/special/mach/d1mach.f (revision 1630) >>+++ Lib/special/mach/d1mach.f (working copy) >>@@ -60,6 +60,9 @@ >> C ON FIRST CALL, IF NO DATA UNCOMMENTED, TEST MACHINE TYPES. >> IF (SC .NE. 987) THEN >> DMACH(1) = 1.D13 >>+ write(*,*) 'DMACH(1) == ', DMACH(1) >>+ write(*,*) 'SMALL(1) == ', SMALL(1) >>+ write(*,*) 'SMALL(2) == ', SMALL(2) >> IF ( SMALL(1) .EQ. 1117925532 >> * .AND. SMALL(2) .EQ. -448790528) THEN >> * *** IEEE BIG ENDIAN *** > > There's no additional output, so the if condition seems to be False. In that case, the STOP statements wouldn't be executed either. Add more write(*,*) statements to explore it, if you like. But: > There's > another d1mach.f in Lib/integrate/mach. Which one is the relevant here? Couldn't say. Try adding some different write(*,*) statements to it in order to find out. > Btw, the minimizers partly run, maybe in case when the gradients are far from > IEEE numeric limits? That wouldn't trigger anything in d1mach, though. What is your function? Are you using anything in scipy.special or scipy.integrate? -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From ckkart at hoc.net Thu Mar 2 22:46:01 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 03 Mar 2006 12:46:01 +0900 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407B58A.7010402@gmail.com> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> <4407B1DB.90708@hoc.net> <4407B58A.7010402@gmail.com> Message-ID: <4407BBF9.4070406@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Robert Kern wrote: >> >>> Christian Kristukat wrote: >>> >>>> Hi, >>>> sorry for posting this again, but I'd really like to have the minimizers work >>>> again. Every call to any of the miminizers scipy.fmin* fails with: >>>> >>>> Adjust D1MACH by uncommenting data statements >>>> appropriate for your machine. >>>> STOP 779 >>>> >>>> I already looked at Lib/special/d1mach.f but I don't see what I should change >>>> there. Anyway, d1mach.f hasn't been changed for ages so I doubt that the problem >>>> is there. >>>> >>>> Here's my configuration: >>>> >>>> P4 >>>> >>>> Python 2.4.1 (#1, Sep 13 2005, 00:39:20) >>>> [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 >>>> >>>> numpy/scipy from svn >>>> ATLAS built with gfortran >>> Odd. For your configuration, D1MACH ought to be taking the IEEE LITTLE ENDIAN >>> branch earlier on, and the STOP 779 statements should never be reached. Try >>> adding these lines to diagnose the issue: >>> >>> Index: Lib/special/mach/d1mach.f >>> =================================================================== >>> --- Lib/special/mach/d1mach.f (revision 1630) >>> +++ Lib/special/mach/d1mach.f (working copy) >>> @@ -60,6 +60,9 @@ >>> C ON FIRST CALL, IF NO DATA UNCOMMENTED, TEST MACHINE TYPES. >>> IF (SC .NE. 987) THEN >>> DMACH(1) = 1.D13 >>> + write(*,*) 'DMACH(1) == ', DMACH(1) >>> + write(*,*) 'SMALL(1) == ', SMALL(1) >>> + write(*,*) 'SMALL(2) == ', SMALL(2) >>> IF ( SMALL(1) .EQ. 1117925532 >>> * .AND. SMALL(2) .EQ. 
-448790528) THEN >>> * *** IEEE BIG ENDIAN *** >> There's no additional output, so the if condition seems to be False. > > In that case, the STOP statements wouldn't be executed either. Add more > write(*,*) statements to explore it, if you like. But: > >> There's >> another d1mach.f in Lib/integrate/mach. Which one is the relevant here? > > Couldn't say. Try adding some different write(*,*) statements to it in order to > find out. > >> Btw, the minimizers partly run, maybe in case when the gradients are far from >> IEEE numeric limits? > > That wouldn't trigger anything in d1mach, though. > > What is your function? Are you using anything in scipy.special or scipy.integrate? > I put the same write statements in Lib/integrate/mach/d1mach.f, and that's what I get: tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ integrate DMACH(1) == 10000000000000.0 integrate SMALL(1) == -448790528 integrate SMALL(2) == 1117925532 STOP 778 Odd that the error message disappeared and the subroutine stops at a different line number. Before, the message was: tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ Adjust D1MACH by uncommenting data statements appropriate for your machine. STOP 779 Any ideas?
Regards, Christian From robert.kern at gmail.com Thu Mar 2 23:33:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Mar 2006 22:33:13 -0600 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407BBF9.4070406@hoc.net> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> <4407B1DB.90708@hoc.net> <4407B58A.7010402@gmail.com> <4407BBF9.4070406@hoc.net> Message-ID: <4407C709.80900@gmail.com> Christian Kristukat wrote: > I put the same some write statements in Lib/integrate/mach/d1mach.f, and that's > what I get: > > tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) > tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ > integrate DMACH(1) == 10000000000000.0 > integrate SMALL(1) == -448790528 > integrate SMALL(2) == 1117925532 > STOP 778 > > Odd, that the error message disappeared and the subroutine stops at a different > line number. Ah, the glorious heisenbug. > Before, the messag was: > > tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) > tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ > > Adjust D1MACH by uncommenting data statements > appropriate for your machine. > STOP 779 > > Any ideas? Well, look at the STOP 778 statement. IF (DMACH(4) .GE. 1.0D0) STOP 778 Add a write statement to find out what DMACH(4) is. According to the documentation, it corresponds to the largest relative spacing of double-precision floating point numbers. For that matter, output all 5 entries in DMACH. It would not surprise me if gfortran is screwing things up. In my experience, it is not very stable. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From ckkart at hoc.net Thu Mar 2 23:41:52 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 03 Mar 2006 13:41:52 +0900 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407C709.80900@gmail.com> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> <4407B1DB.90708@hoc.net> <4407B58A.7010402@gmail.com> <4407BBF9.4070406@hoc.net> <4407C709.80900@gmail.com> Message-ID: <4407C910.2010902@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: > >> I put the same some write statements in Lib/integrate/mach/d1mach.f, and that's >> what I get: >> >> tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) >> tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ >> integrate DMACH(1) == 10000000000000.0 >> integrate SMALL(1) == -448790528 >> integrate SMALL(2) == 1117925532 >> STOP 778 >> >> Odd, that the error message disappeared and the subroutine stops at a different >> line number. > > Ah, the glorious heisenbug. > >> Before, the messag was: >> >> tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) >> tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ >> >> Adjust D1MACH by uncommenting data statements >> appropriate for your machine. >> STOP 779 >> >> Any ideas? > > Well, look at the STOP 778 statement. > > IF (DMACH(4) .GE. 1.0D0) STOP 778 > > Add a write statement to find out what DMACH(4) is. According to the > documentation, it corresponds to the largest relative spacing of > double-precision floating point numbers. For that matter, output all 5 entries > in DMACH. 
I wrote out DMACH(1-5) right before the STOP 778 condition: DMACH(1): 2.225073858507201E-308 DMACH(2): 1.797693134862316E+308 DMACH(3): 1.110223024625157E-016 DMACH(4): 2.220446049250313E-016 DMACH(5): 0.301029995663981 ****integrate DMACH(1): 2.225073858507201E-308 DMACH(2): 1.797693134862316E+308 DMACH(3): 1.110223024625157E-016 DMACH(4): 2.220446049250313E-016 DMACH(5): 0.301029995663981 ****integrate DMACH(1): 2.225073858507201E-308 DMACH(2): 1.797693134862316E+308 DMACH(3): 1.110223024625157E-016 DMACH(4): 2.220446049250313E-016 DMACH(5): 0.301029995663981 ****integrate DMACH(1): NaN DMACH(2): 1.856669494629715E-313 DMACH(3): 1764.25781250000 DMACH(4): 1.060997897063646E-313 DMACH(5): 1694.56250000000 ****integrate DMACH(1): NaN DMACH(2): 1.856669494629715E-313 DMACH(3): 1764.25781250000 DMACH(4): 1.060997897063646E-313 DMACH(5): 1694.56250000000 ****integrate DMACH(1): 1.50000000000000 DMACH(2): 1.731915531800882E+015 DMACH(3): 1764.25781250000 DMACH(4): 293.667305600229 DMACH(5): 1.50000000000000 STOP 778 > > It would not surprise me if gfortran is screwing things up. In my experience, it > is not very stable. I'll try g77, even though I was telling everybody that they should use gfortran on SuSE10.0.... sorry.
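As a sanity reference, the first three (healthy) blocks of DMACH values above are exactly the IEEE double-precision machine constants, which can be cross-checked from Python with numpy:

```python
import numpy as np

fi = np.finfo(np.float64)
print(fi.tiny)        # DMACH(1): smallest positive normal double, ~2.23e-308
print(fi.max)         # DMACH(2): largest finite double,           ~1.80e+308
print(fi.eps / 2)     # DMACH(3): smallest relative spacing,       ~1.11e-16
print(fi.eps)         # DMACH(4): largest relative spacing (eps),  ~2.22e-16
print(np.log10(2.0))  # DMACH(5): log10 of the radix,              ~0.30103
```

Since DMACH(4) should be about 2.2e-16, the sanity check `IF (DMACH(4) .GE. 1.0D0) STOP 778` can only trip when the constants are corrupted, as in the later NaN-filled blocks — consistent with a miscompiled d1mach rather than anything in the function being minimized.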
Regards, Christian From ckkart at hoc.net Fri Mar 3 00:14:27 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 03 Mar 2006 14:14:27 +0900 Subject: [SciPy-user] minimizers don't work - d1mach problem In-Reply-To: <4407C910.2010902@hoc.net> References: <4407A40C.8080000@hoc.net> <4407A82B.6000100@gmail.com> <4407B1DB.90708@hoc.net> <4407B58A.7010402@gmail.com> <4407BBF9.4070406@hoc.net> <4407C709.80900@gmail.com> <4407C910.2010902@hoc.net> Message-ID: <4407D0B3.4040008@hoc.net> Christian Kristukat wrote: > Robert Kern wrote: >> Christian Kristukat wrote: >> >>> I put the same some write statements in Lib/integrate/mach/d1mach.f, and that's >>> what I get: >>> >>> tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) >>> tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ >>> integrate DMACH(1) == 10000000000000.0 >>> integrate SMALL(1) == -448790528 >>> integrate SMALL(2) == 1117925532 >>> STOP 778 >>> >>> Odd, that the error message disappeared and the subroutine stops at a different >>> line number. >> Ah, the glorious heisenbug. >> >>> Before, the messag was: >>> >>> tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) >>> tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ >>> >>> Adjust D1MACH by uncommenting data statements >>> appropriate for your machine. >>> STOP 779 >>> >>> Any ideas? >> Well, look at the STOP 778 statement. >> >> IF (DMACH(4) .GE. 1.0D0) STOP 778 >> >> Add a write statement to find out what DMACH(4) is. According to the >> documentation, it corresponds to the largest relative spacing of >> double-precision floating point numbers. For that matter, output all 5 entries >> in DMACH. 
> > I wrote out DMACH(1-5) right before the STOP 778 condition: > > DMACH(1): 2.225073858507201E-308 > DMACH(2): 1.797693134862316E+308 > DMACH(3): 1.110223024625157E-016 > DMACH(4): 2.220446049250313E-016 > DMACH(5): 0.301029995663981 > ****integrate > DMACH(1): 2.225073858507201E-308 > DMACH(2): 1.797693134862316E+308 > DMACH(3): 1.110223024625157E-016 > DMACH(4): 2.220446049250313E-016 > DMACH(5): 0.301029995663981 > ****integrate > DMACH(1): 2.225073858507201E-308 > DMACH(2): 1.797693134862316E+308 > DMACH(3): 1.110223024625157E-016 > DMACH(4): 2.220446049250313E-016 > DMACH(5): 0.301029995663981 > ****integrate > DMACH(1): NaN > DMACH(2): 1.856669494629715E-313 > DMACH(3): 1764.25781250000 > DMACH(4): 1.060997897063646E-313 > DMACH(5): 1694.56250000000 > ****integrate > DMACH(1): NaN > DMACH(2): 1.856669494629715E-313 > DMACH(3): 1764.25781250000 > DMACH(4): 1.060997897063646E-313 > DMACH(5): 1694.56250000000 > ****integrate > DMACH(1): 1.50000000000000 > DMACH(2): 1.731915531800882E+015 > DMACH(3): 1764.25781250000 > DMACH(4): 293.667305600229 > DMACH(5): 1.50000000000000 > STOP 778 > >> It would not surprise me if gfortran is screwing things up. In my experience, it >> is not very stable. > > I'll try g77, even though I was telling everybody that they should use gfortan > on SuSE10.0.... sorry. > Just built everything with g77 (BLAS+LAPACK) and it works. Thanks, Robert. Regards, Christian From nwagner at mecha.uni-stuttgart.de Fri Mar 3 04:06:44 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 10:06:44 +0100 Subject: [SciPy-user] gfortran, ifc, compat-g77 In-Reply-To: <44080017.5050305@obs.univ-lyon1.fr> References: <4405F3BD.2060402@obs.univ-lyon1.fr> <44080017.5050305@obs.univ-lyon1.fr> Message-ID: <44080724.3010304@mecha.uni-stuttgart.de> Eric Emsellem wrote: > Hi, > I have compiled these on two machines: a laptop (see output of cpuinfo > below) > and a desktop both with Suse 10. 
With the procedure I sent you, I had > no problem installing all of it, except for the line to be added in > the setup of matplotlib (I hope the new version has been updated). > > And no, I cannot reproduce the failure you have with the test on > scipy. sorry... (I am not an expert there, so cannot really help) > > Eric > > processor : 0 > vendor_id : GenuineIntel > cpu family : 6 > model : 9 > model name : Intel(R) Pentium(R) M processor 1700MHz > stepping : 5 > cpu MHz : 598.146 > cache size : 1024 KB > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 2 > wp : yes > flags : fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov > pat clflush dts acpi mmx fxsr sse sse2 tm pbe est tm2 > bogomips : 1197.31 > > >> >> Hi Eric, >> >> >> Finally I was able to compile ATLAS on SuSE 10.0. >> BTW, what is the output of cat /proc/cpuinfo ? >> >> numpy.test(1,10) works fine. >> Now I have some trouble with scipy.test(1,10). >> Can you reproduce this failure ? >> >> This is the output of scipy.test(1,10) >> >> snip >> >> check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok >> check_algebraic_log_weight >> (scipy.integrate.tests.test_quadpack.test_quad) ... ok >> check_cauchypv_weight (scipy.integrate.tests.test_quadpack.test_quad) >> ... ok >> check_cosine_weighted_infinite >> (scipy.integrate.tests.test_quadpack.test_quad)STOP 778 >> >> Program exited with code 012. >> (gdb) bt >> No stack. >> >> >> Nils >> > Hi Eric, Did you use gfortran to compile lapack ? Robert mentioned that gfortran is unstable ! How did you compile ATLAS in detail ? Which version did you use 3.7.11 developer or 3.6 stable ? Which flags did you use -fPIC etc ? Also which fortran compiler did you use gfortran / g77 ? It would be very kind of you if you could expand on that. Thanks in advance. I look forward to hearing from you soon. BTW, did you try my code sparse_test.py ? 
Nils From nwagner at mecha.uni-stuttgart.de Fri Mar 3 08:06:42 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 14:06:42 +0100 Subject: [SciPy-user] fftw and scipy.show_config() on SUSE 10.0 Message-ID: <44083F62.10406@mecha.uni-stuttgart.de> Hi all, I have installed the rpm's w.r.t. to fft's. /usr/lib64/libdfftw.so.2 /usr/lib64/libdfftw.so.2.0.7 /usr/lib64/libdrfftw.so.2 /usr/lib64/libdrfftw.so.2.0.7 /usr/lib64/libsfftw.so.2 /usr/lib64/libsfftw.so.2.0.7 /usr/lib64/libsrfftw.so.2 /usr/lib64/libsrfftw.so.2.0.7 /usr/lib64/libfftw3.so.3 /usr/lib64/libfftw3.so.3.0.1 /usr/lib64/libfftw3f.so.3 /usr/lib64/libfftw3f.so.3.0.1 but scipy.show_config() cannot find them. How can I fix this problem ? Do I need symbolic links ? Is /usr/lib64 not standard ? Nils Numpy version 0.9.6.2193 Scipy version 0.4.7.1630 dfftw_info: NOT AVAILABLE fft_opt_info: NOT AVAILABLE fftw2_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE From rng7 at cornell.edu Fri Mar 3 09:50:18 2006 From: rng7 at cornell.edu (Ryan Gutenkunst) Date: Fri, 03 Mar 2006 09:50:18 -0500 Subject: [SciPy-user] problem with odeint In-Reply-To: References: Message-ID: <440857AA.7070809@cornell.edu> Ryan Krauss wrote: > I am having a problem with odeint. My integration runs o.k. if I > choose a fairly short time interval, but when I set the maximum value > for my t vector higher than about 1 second, I get this message: > > In [178]: run sweptsine_ode.py > lsoda-- at t (=r1), too much accuracy requested > for precision of machine.. see tolsf (=r2) ls > in above, r1 = 0.5448183650273E+00 r2 = NAN > Excess accuracy requested (tolerances too small). > Run with full_output = 1 to get quantitative information. > > > The script is attached and should be self contained. Messing with > atol and rtol doesn't seem to help, so I don't know what the message > about requested accuracy means. > > Any help/direction in fixing this would be appreciated. 
> > Thanks, > > Ryan Hi Ryan, When I run your code under Scipy 0.3.2, I get errors about assigning complex numbers to floats. Changing the q2 = sqrt(...) calls to q2 = sqrt(abs(...)) calls fixes these and allows me to run as long as I want. I'm guessing that the argument to the sqrt(...) is getting to zero within numerical precision, and small negative values are creeping in. Another factor may be that, in stiff mode, odeint takes large extrapolating steps then back-corrects them. A large attempted step may be taking you into negative argument regions, which then causes the code to choke before it can back-correct. Cheers, Ryan -- Ryan Gutenkunst | Cornell LASSP | "It is not the mountain | we conquer but ourselves." Clark 535 / (607)227-7914 | -- Sir Edmund Hillary AIM: JepettoRNG | http://www.physics.cornell.edu/~rgutenkunst/ From morovia at rediffmail.com Fri Mar 3 12:42:02 2006 From: morovia at rediffmail.com (morovia) Date: 3 Mar 2006 17:42:02 -0000 Subject: [SciPy-user] complex error function in scipy.special Message-ID: <20060303174202.4694.qmail@webmail50.rediffmail.com> ? Hello, I would like to know whether complex error function which can accept complex argument is implemented in scipy.special. w(z) = i/pi*int(exp(-t**2)/(z-t))dt limits: -Inf to Inf where z is complex. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at mecha.uni-stuttgart.de Fri Mar 3 13:03:16 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Mar 2006 19:03:16 +0100 Subject: [SciPy-user] complex error function in scipy.special In-Reply-To: <20060303174202.4694.qmail@webmail50.rediffmail.com> References: <20060303174202.4694.qmail@webmail50.rediffmail.com> Message-ID: On 3 Mar 2006 17:42:02 -0000 "morovia" wrote: > > Hello, > > I would like to know whether complex error function >which can accept complex argument is implemented in >scipy.special. 
> > w(z) = i/pi*int(exp(-t**2)/(z-t))dt limits: -Inf to >Inf > > where z is complex. > > Thanks, > Morovia. In [2]: ?erf Type: ufunc String Form: Namespace: Interactive Docstring: y = erf(x) y=erf(z) returns the error function of complex argument defined as as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) Nils From arnd.baecker at web.de Fri Mar 3 13:06:03 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 3 Mar 2006 19:06:03 +0100 (CET) Subject: [SciPy-user] complex error function in scipy.special In-Reply-To: <20060303174202.4694.qmail@webmail50.rediffmail.com> References: <20060303174202.4694.qmail@webmail50.rediffmail.com> Message-ID: On Fri, 3 Mar 2006, morovia wrote: > ? > Hello, > > I would like to know whether complex error function which can accept complex argument is implemented in scipy.special. > > w(z) = i/pi*int(exp(-t**2)/(z-t))dt limits: -Inf to Inf > > where z is complex. What is implemented is this one: In [1]: import scipy.special In [2]: scipy.special.erf? Type: ufunc String Form: Namespace: Interactive Docstring: y = erf(x) y=erf(z) returns the error function of complex argument defined as as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) which is related to the above w(z), see Abramowitz/Stegun, 7.1.3 and 7.1.4.: http://www.math.sfu.ca/~cbm/aands/page_297.htm However, I am not sure if using these relations is the best way. Wait - there is also In [6]: scipy.special.wofz? Type: ufunc String Form: Namespace: Interactive Docstring: y = wofz(x) y=wofz(z) returns the value of the fadeeva function for complex argument z: exp(-z**2)*erfc(-i*z) So this looks like the one you are looking for. (as usual: better compare the results for some known values). 
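Such a comparison of known values might look like the following sketch, which checks w(0) = 1 and the defining relation w(z) = exp(-z**2)*erfc(-i*z) at an arbitrary complex point (assuming a scipy.special whose erf accepts complex arguments, as its docstring above claims):

```python
import numpy as np
from scipy.special import wofz, erf

# Known value: w(0) = exp(0) * erfc(0) = 1
assert np.allclose(wofz(0.0), 1.0)

# Defining relation w(z) = exp(-z**2) * erfc(-i*z), using erfc = 1 - erf
z = 1.3 + 0.7j
expected = np.exp(-z**2) * (1.0 - erf(-1j * z))
assert np.allclose(wofz(z), expected)
print("wofz spot checks pass")
```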
Best, Arnd From oliphant at ee.byu.edu Fri Mar 3 13:37:27 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 03 Mar 2006 11:37:27 -0700 Subject: [SciPy-user] complex error function in scipy.special In-Reply-To: References: <20060303174202.4694.qmail@webmail50.rediffmail.com> Message-ID: <44088CE7.5000200@ee.byu.edu> Arnd Baecker wrote: >On Fri, 3 Mar 2006, morovia wrote: > > > >> ? >>Hello, >> >>I would like to know whether complex error function which can accept complex argument is implemented in scipy.special. >> >>w(z) = i/pi*int(exp(-t**2)/(z-t))dt limits: -Inf to Inf >> >>where z is complex. >> >> > >What is implemented is this one: > >In [1]: import scipy.special >In [2]: scipy.special.erf? >Type: ufunc >String Form: >Namespace: Interactive >Docstring: > y = erf(x) y=erf(z) returns the error function of complex argument >defined as > as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) > > These should be the same thing. The implementation uses a polynomial approximation and works for both real and complex arguments. But, checking against known results is always a good practice. There are some checks of this type for the special library in SciPy (I paid a student to do this some years ago), but I'm sure the tests could be improved on. -Travis From morovia at rediffmail.com Fri Mar 3 13:40:10 2006 From: morovia at rediffmail.com (morovia) Date: 3 Mar 2006 18:40:10 -0000 Subject: [SciPy-user] complex error function in scipy.special Message-ID: <20060303184010.32608.qmail@webmail53.rediffmail.com> Thanks for the replies. wofz(z)is the one I was looking for. On Fri, 03 Mar 2006 Arnd Baecker wrote : >On Fri, 3 Mar 2006, morovia wrote: > > > > > Hello, > > > > I would like to know whether complex error function which can accept complex argument is implemented in scipy.special. > > > > w(z) = i/pi*int(exp(-t**2)/(z-t))dt limits: -Inf to Inf > > > > where z is complex. > >What is implemented is this one: > >In [1]: import scipy.special >In [2]: scipy.special.erf? 
>Type: ufunc >String Form: >Namespace: Interactive >Docstring: > y = erf(x) y=erf(z) returns the error function of complex argument >defined as > as 2/sqrt(pi)*integral(exp(-t**2),t=0..z) > >which is related to the above w(z), see >Abramowitz/Stegun, 7.1.3 and 7.1.4.: > http://www.math.sfu.ca/~cbm/aands/page_297.htm >However, I am not sure if using these relations is the best way. > >Wait - there is also > >In [6]: scipy.special.wofz? >Type: ufunc >String Form: >Namespace: Interactive >Docstring: > y = wofz(x) y=wofz(z) returns the value of the fadeeva function for >complex argument > z: exp(-z**2)*erfc(-i*z) > >So this looks like the one you are looking for. >(as usual: better compare the results for some known values). > >Best, Arnd > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Sun Mar 5 11:23:49 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 05 Mar 2006 08:23:49 -0800 Subject: [SciPy-user] Table like array In-Reply-To: <16761e100603021535q46f2a7b9x94a2016aad2912bb@mail.gmail.com> References: <16761e100602282240y5bcf869fme9dd2f42771066c4@mail.gmail.com> <44054A19.6040202@ieee.org> <16761e100603021535q46f2a7b9x94a2016aad2912bb@mail.gmail.com> Message-ID: <440B1095.50003@astraw.com> Michael Sorich wrote: > On 3/1/06, *Travis Oliphant* > wrote: > > Michael Sorich wrote: > > > Hi, > > > > I am looking for a table like array. Something like a 'data frame' > > object to those familiar with the statistical languages R and Splus. > > This is mainly to hold and manipulate 2D spreadsheet like data, > which > > tends to be of relatively small size (compared to what many people > > seem to use numpy for), heterogenous, have column and row names, and > > often contains missing data. > > You could subclass the ndarray to produce one of these fairly > easily, I > think. The missing data item could be handled by a mask stored along > with the array (or even in the array itself). 
Or you could use a > masked > array as your core object (though I'm not sure how it handles the > arbitrary (i.e. record-like) data-types yet). > > > Thanks for the replies. You mention that missing data could be stored > in the array itself. Can one use nan to indicate missing data? In some > ways it seems more convenient to store this data in the array itself > rather than have a second mask array. > FYI, check out 'Does NumPy support nan ("not a number")?' in http://www.scipy.org/FAQ From cournape at atr.jp Sun Mar 5 20:29:56 2006 From: cournape at atr.jp (Cournapeau David) Date: Mon, 06 Mar 2006 10:29:56 +0900 Subject: [SciPy-user] scipy, blas, lapack, atlas and debian/ubuntu Message-ID: <1141608596.16688.14.camel@localhost.localdomain> Hi, I am using scipy for some time at work on my linux (x86 box), and it works fine; I am in general really happy about it as a matlab replacement. Recently, I tried to install the whole thing numpy/scipy/matplotlib on my minimac (again, linux, but on a ppc architecture), and I have some strange installation issues, which makes me wondering about the following issues on scipy installation. First, the conditions on both machines: ubuntu breezy, atlas3 (altivec or sse2 depending on the machine), numpy 0.9.5, scipy 0.4.6, matplotlib 0.87 ('stable' releases). - I am using ATLAS; to have a valid blas/lapack, I used the trick to rebuild a static LAPACK library using ATLAS. But I noticed this weekend that the liblapack.a included in the ubuntu ATLAS package is quite big (around 7 Mb), and the size of the packaged ATLAS liblapack.a and the one I am using are the same (by comparing the libraries, they have different md5, but when extracting them using ar x, the content looks like the same, ie same .o files). So I was wondering, does that mean the atlas lapack is actually a full implementation (maybe done by the debian/ubuntu packagers ?). Is there a way to be sure that a given library implements full lapack ? 
- When building numpy, is there a way to actually know which BLAS/LAPACK version is used ? For example, at work, on my x86 machine, python setup.py config gives me the following message at the end (I exported BLAS and LAPACK env variables to the location of the corresponding static versions, at /usr/lib/atlas/sse2) : """ lapack_info: Replacing _lib_names[0]=='lapack' with 'lapack' Replacing _lib_names[0]=='lapack' with 'lapack' FOUND: libraries = ['lapack'] library_dirs = ['/usr/lib/atlas/sse2'] language = f77 FOUND: libraries = ['f77blas', 'cblas', 'atlas', 'lapack'] library_dirs = ['/usr/lib/sse2', '/usr/lib/atlas/sse2'] define_macros = [('ATLAS_WITHOUT_LAPACK', None), ('ATLAS_INFO', '"\ \"3.6.0\\""')] language = f77 include_dirs = ['/usr/include'] """ I am not sure to understand the message: it seems like two lapack were found, which one is taken ? - Also, with BLAS/ LAPACK env variables, I select static libraries, but it seems like shared libraries are used afterwards: how can I know which is one is used ? I know how to do this kind of things with C programs (trakcing which libraries are loaded at runtime), but with python, it can quite hard to know which module uses which library. I hope I am not too unclear !, David From tim.leslie at gmail.com Sun Mar 5 22:52:53 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 6 Mar 2006 14:52:53 +1100 Subject: [SciPy-user] scipy.optimize.anneal with multiple parameters Message-ID: Hi all, Before I dive too deeply into the internals of this code I thought I'd check here to see what people know. I'm trying to use simulated annealing to fit a quadratic to some data (I'll be using a more complex function later, the quadratic is just to get me started). 
I'm doing the following: func = lambda p: sum(abs(dat - array([p[0]*x*x + p[1]*x + p[2] for x in range(150, 450)]))) print anneal(func,[0.001, 0.001, 0.001],full_output=1,upper=[3.0, 3.0, 3.0 ],lower=[-3.0, -3.0, -3.0],feps=1e-6,maxiter=1000,schedule='fast') timl at mercury:~/thesis/src% python fit.py Traceback (most recent call last): File "fit.py", line 17, in ? print anneal(func,[0.001, 0.001, 0.001],full_output=1,upper=[3.0, 3.0, 3.0],lower=[-3.0, -3.0, -3.0],feps=1e-6,maxiter=1000,schedule='fast') File "/usr/lib/python2.4/site-packages/scipy/optimize/anneal.py", line 215, in anneal xnew = schedule.update_guess(x0) File "/usr/lib/python2.4/site-packages/scipy/optimize/anneal.py", line 92, in update_guess xc = y*(self.upper - self.lower) TypeError: unsupported operand type(s) for -: 'list' and 'list' I stole the syntax from the __main__ section of anneal.py and modified it to use a list of size 3, but I'm not sure if this is the correct way to do multiple parameters, the docs leave plenty to the imagination So, am I doing something wrong, in which case could someone show me the light, or is anneal a bit broken, in which case I'm happy to dive in and take a stab at fixing it. Cheers, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Mar 5 23:10:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 05 Mar 2006 22:10:34 -0600 Subject: [SciPy-user] scipy.optimize.anneal with multiple parameters In-Reply-To: References: Message-ID: <440BB63A.3080100@gmail.com> Tim Leslie wrote: > Hi all, > > Before I dive too deeply into the internals of this code I thought I'd > check here to see what people know. > > I'm trying to use simulated annealing to fit a quadratic to some data > (I'll be using a more complex function later, the quadratic is just to > get me started). 
I'm doing the following: > > func = lambda p: sum(abs(dat - array([p[0]*x*x + p[1]*x + p[2] for x in > range(150, 450)]))) > print anneal(func,[0.001, 0.001, 0.001],full_output=1,upper=[3.0, 3.0, > 3.0],lower=[-3.0, -3.0, -3.0],feps=1e-6,maxiter=1000,schedule='fast') > > timl at mercury:~/thesis/src% python fit.py > Traceback (most recent call last): > File "fit.py", line 17, in ? > print anneal(func,[0.001, 0.001, 0.001],full_output=1,upper=[3.0, > 3.0, 3.0],lower=[- 3.0, -3.0, -3.0],feps=1e-6,maxiter=1000,schedule='fast') > File "/usr/lib/python2.4/site-packages/scipy/optimize/anneal.py", line > 215, in anneal > xnew = schedule.update_guess(x0) > File "/usr/lib/python2.4/site-packages/scipy/optimize/anneal.py", line > 92, in update_guess > xc = y*(self.upper - self.lower) > TypeError: unsupported operand type(s) for -: 'list' and 'list' > > I stole the syntax from the __main__ section of anneal.py and modified > it to use a list of size 3, but I'm not sure if this is the correct way > to do multiple parameters, the docs leave plenty to the imagination > > So, am I doing something wrong, in which case could someone show me the > light, or is anneal a bit broken, in which case I'm happy to dive in and > take a stab at fixing it. It looks like anneal()'s argument handling is not very robust. If you pass in arrays instead of lists, it should probably work. The way to fix it would be to call numpy.asarray() on the appropriate inputs. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
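Robert's numpy.asarray() suggestion is easy to confirm in isolation. The sketch below (variable names are illustrative, not taken from anneal.py) reproduces the TypeError with plain lists and shows the elementwise subtraction that update_guess() needs:

```python
import numpy as np

upper = [3.0, 3.0, 3.0]
lower = [-3.0, -3.0, -3.0]

# Plain lists reproduce the failure: list - list is not defined.
try:
    upper - lower
except TypeError as exc:
    print("lists:", exc)

# After numpy.asarray(), subtraction is elementwise, as the annealing
# schedule's update_guess() expects.
span = np.asarray(upper) - np.asarray(lower)
print("arrays:", span)  # arrays: [6. 6. 6.]
```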
-- Umberto Eco From tim.leslie at gmail.com Sun Mar 5 23:14:51 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 6 Mar 2006 15:14:51 +1100 Subject: [SciPy-user] scipy.optimize.anneal with multiple parameters In-Reply-To: <440BB63A.3080100@gmail.com> References: <440BB63A.3080100@gmail.com> Message-ID: On 3/6/06, Robert Kern wrote: > > Tim Leslie wrote: > > So, am I doing something wrong, in which case could someone show me the > > light, or is anneal a bit broken, in which case I'm happy to dive in and > > take a stab at fixing it. > > It looks like anneal()'s argument handling is not very robust. If you pass > in > arrays instead of lists, it should probably work. The way to fix it would > be to > call numpy.asarray() on the appropriate inputs. Thanks for the quick reply Robert. I came across that solution about 1 minute after I hit the send button. Always the way :-) Anyway, it looks like there's quite a few things in the anneal module which aren't very robust. If I do some work to fix it up, what would be the best way to submit a patch? Cheers, Tim -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Mar 5 23:21:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 05 Mar 2006 22:21:59 -0600 Subject: [SciPy-user] scipy.optimize.anneal with multiple parameters In-Reply-To: References: <440BB63A.3080100@gmail.com> Message-ID: <440BB8E7.9020607@gmail.com> Tim Leslie wrote: > Anyway, it looks like there's quite a few things in the anneal module > which aren't very robust. 
If I do some work to fix it up, what would be > the best way to submit a patch? You can submit a new bug ticket here: http://projects.scipy.org/scipy/scipy/newticket After that, you will see a page with a button that lets you attach a file. Thank you very much! -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Tony.Mannucci at jpl.nasa.gov Mon Mar 6 01:49:30 2006 From: Tony.Mannucci at jpl.nasa.gov (Tony Mannucci) Date: Sun, 5 Mar 2006 22:49:30 -0800 Subject: [SciPy-user] Data types Message-ID: Dear Scipy community, I cannot find a clear statement as to allowable data types, and how these are specified. I am trying to read in an array using read_array. One of the arguments of the function is "atype", to specify the dtype of the output array. I cannot figure out what format the tuple atype is in. For example, the following does not work: import scipy.io.array_import as ARRIN data = ARRIN.read_array('filename',columns=(0,2),atype=('d','d')) Yet, if I omit the atype argument, and I do data.dtype.char, I get 'd'. In the read_array doc, it suggests using a "typecode". I don't know what are the allowable typecodes, or the type of a typecode (string? number? etc), and cannot seem to locate this from the Numpy book. If Numpy is like Numeric, then, according to "Python In A Nutshell", the array objects contain a single type. So, I cannot use several values, I would specify only one value for atype. This seems to differ from the scipy online documentation, suggesting atype could be a tuple. I realize the functionality is evolving. Along these lines, the following works: data = ARRIN.read_array('filename',columns=(0,2),atype='d') and appears to give a double precision array (as evidenced from data.dtype.char = 'd'). However, using a few other strings for atype does not work, e.g. 
'I', or 'L', etc. Yet, these strings were found from Table 2.1 in the Numpy book. Finally, the following does not seem to work: data = ARRIN.read_array('filename',columns=(2)) so reading a single column appears to cause a problem. The following DOES work: data = ARRIN.read_array('filename',columns=(0,2)) Perhaps I have an incorrect version of the scipy/Numpy modules. I am working on OS X 10.4. Thanks! -Tony -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From oliphant.travis at ieee.org Mon Mar 6 02:44:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 06 Mar 2006 00:44:35 -0700 Subject: [SciPy-user] Data types In-Reply-To: References: Message-ID: <440BE863.5050306@ieee.org> Tony Mannucci wrote: > Dear Scipy community, > > I cannot find a clear statement as to allowable data types, and how > these are specified. I am trying to read in an array using > read_array. One of the arguments of the function is "atype", to > specify the dtype of the output array. I cannot figure out what > format the tuple atype is in. > > While SciPy builds for NumPy, it has not been "fully" adapted in that it still uses a lot of Numeric idioms for data-types. The read_array code is still using the old concept of "typecode" which is a one-character string (i.e. what dtype.char gives now). There may also still be bugs still in SciPy. > If Numpy is like Numeric, then, according to "Python In A Nutshell", > the array objects contain a single type. So, I cannot use several > values, I would specify only one value for atype. This seems to > differ from the scipy online documentation, suggesting atype could be > a tuple. I realize the functionality is evolving. 
> In this case, the functionality was there but might have become broken in the transition to NumPy (or the result of an old bug). You should be able to specify a tuple for data-type. Most SciPy functions that take data-types need to be made aware of the new concept of data type in NumPy. > Along these lines, the following works: > data = ARRIN.read_array('filename',columns=(0,2),atype='d') > > and appears to give a double precision array (as evidenced from > data.dtype.char = 'd'). However, using a few other strings for atype > does not work, e.g. 'I', or 'L', etc. What do you get as the error? > Finally, the following does not seem to work: > data = ARRIN.read_array('filename',columns=(2)) > > so reading a single column appears to cause a problem. The following DOES work: > data = ARRIN.read_array('filename',columns=(0,2)) > > It would be easiest if you opened a ticket and attached an example file. Open tickets at http://projects.scipy.org/scipy/scipy/timeline Thanks, -Travis From nwagner at mecha.uni-stuttgart.de Mon Mar 6 02:44:54 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 08:44:54 +0100 Subject: [SciPy-user] CSparse: software for upcoming book on sparse direct methods Message-ID: <440BE876.8080304@mecha.uni-stuttgart.de> This might be of interest. Nils From: Tim Davis Date: Tue, 28 Feb 2006 13:46:36 -0500 Subject: CSparse: software for upcoming book on sparse direct methods I would like to announce the release of a sparse matrix package, CSparse, that I've written for my upcoming book, "Direct Methods for Sparse Linear Systems," to appear in SIAM's Series on the Fundamentals of Algorithms sometime this year. It demonstrates a wide range of sparse matrix algorithms in as concise a code as possible (sparse LU with partial pivoting is only 165 lines, for example, not including the fill-reducing ordering). 
The algorithms are asymptotically optimal (in most cases), and have comparable performance with existing methods (with exception of the sparse LU, Cholesky, and QR factorizations). One section of the book provides an overview of a wide range of available software for sparse direct methods, with links to high-performance sparse LU, Cholesky, and QR factorization codes: http://www.cise.ufl.edu/research/sparse/codes A MATLAB interface is included (including a pretty color "spy", and an easy MATLAB interface to the UF Sparse Matrix Collection). See http://www.cise.ufl.edu/research/sparse/CSparse or http://www.cise.ufl.edu/research/sparse/CXSparse for an extended version (for complex matrices). Tim Davis University of Florida From aisaac at american.edu Mon Mar 6 03:27:17 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Mar 2006 03:27:17 -0500 Subject: [SciPy-user] CSparse: software for upcoming book on sparse direct methods In-Reply-To: <440BE876.8080304@mecha.uni-stuttgart.de> References: <440BE876.8080304@mecha.uni-stuttgart.de> Message-ID: On Mon, 06 Mar 2006, Nils Wagner apparently wrote: > This might be of interest. See > http://www.cise.ufl.edu/research/sparse/CSparse or > http://www.cise.ufl.edu/research/sparse/CXSparse License? To be useful, ask him for something like MIT or BSD. 

Cheers, Alan Isaac From d.howey at imperial.ac.uk Mon Mar 6 05:24:41 2006 From: d.howey at imperial.ac.uk (Howey, David A) Date: Mon, 6 Mar 2006 10:24:41 -0000 Subject: [SciPy-user] CFD in scipy Message-ID: <056D32E9B2D93B49B01256A88B3EB218011609B4@icex2.ic.ac.uk> Check this out as well - not python, but open source CFD called openfoam http://www.opencfd.co.uk/ Dave -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Ryan Krauss Sent: 09 February 2006 10:52 To: SciPy Users List Subject: Re: [SciPy-user] CFD in scipy I typed Python CFD into google and this is the first thing that came up: http://datamining.anu.edu.au/~ole/pypar/py4cfd.pdf On 2/9/06, nokophala at aim.com wrote: > > > Hi, > Are there any success stories on the use of Scipy to do CFD or solve > the Navier-Stokes equations, simple or complex problems? I would like > to try some simple RTD distribution calculations for pulp/slurry flows > through mixed batch/continuous reactors, and other relatively simple > cases - but have no funds to do this so I cant buy advanced software yet. > > Thanks in advance, > Noko > ________________________________ > Check Out the new free AIM(R) Mail -- 2 GB of storage and > industry-leading spam and email virus protection. 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From nwagner at mecha.uni-stuttgart.de Mon Mar 6 09:34:06 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 15:34:06 +0100 Subject: [SciPy-user] CSparse: software for upcoming book on sparse direct methods In-Reply-To: References: <440BE876.8080304@mecha.uni-stuttgart.de> Message-ID: <440C485E.8060509@mecha.uni-stuttgart.de> Alan G Isaac wrote: > On Mon, 06 Mar 2006, Nils Wagner apparently wrote: > >> This might be of interest. See >> http://www.cise.ufl.edu/research/sparse/CSparse or >> http://www.cise.ufl.edu/research/sparse/CXSparse >> > > License? > To be useful, ask him for something like MIT or BSD. > > Cheers, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Alan, >Dear Dr. Davis, >I saw your recent announcement in NA Digest, V. 06, # 09. Please can you >expand on the license issue w.r.t. all your sparse matrix packages. >I long to know whether its possible to release them under MIT or BSD. >I look forward to hearing from you. >With best regards, > Nils Wagner http://en.wikipedia.org/wiki/MIT_License http://en.wikipedia.org/wiki/BSD_license This is the reply by Tim. There is a License.txt file in each package. Many are GNU LGPL. A few modules in CHOLMOD are GNU GPL. I'm not familiar enough with the MIT and BSD licenses, so I'd rather leave them under GNU. Thanks, Tim From nwagner at mecha.uni-stuttgart.de Mon Mar 6 09:38:18 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 15:38:18 +0100 Subject: [SciPy-user] Question w.r.t. 
sparse matrices Message-ID: <440C495A.8030300@mecha.uni-stuttgart.de> Hi, Consider the sparse vector >>> x <420x1 sparse matrix of type '' with 2 stored elements (space for 2) in Compressed Sparse Row format> >>> print x (3, 0) 1.0 (30, 0) 1.0 x.getnnz() yields the number of nonzero elements >>> x.getnnz() 2 How can I obtain the pattern of x ? I mean the indices of nonzero elements. Nils From nwagner at mecha.uni-stuttgart.de Mon Mar 6 10:53:46 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 06 Mar 2006 16:53:46 +0100 Subject: [SciPy-user] Question w.r.t. sparse matrices In-Reply-To: <440C495A.8030300@mecha.uni-stuttgart.de> References: <440C495A.8030300@mecha.uni-stuttgart.de> Message-ID: <440C5B0A.2010904@mecha.uni-stuttgart.de> Nils Wagner wrote: > Hi, > > Consider the sparse vector > >>> x > <420x1 sparse matrix of type '' > with 2 stored elements (space for 2) > in Compressed Sparse Row format> > >>> print x > (3, 0) 1.0 > (30, 0) 1.0 > > x.getnnz() yields the number of nonzero elements > >>> x.getnnz() > 2 > > How can I obtain the pattern of x ? > I mean the indices of nonzero elements. > > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Sorry for replying to myself. It seems to be not implemented yet. >>> where (x <>0) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 160, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented Nils From Tony.Mannucci at jpl.nasa.gov Mon Mar 6 11:17:38 2006 From: Tony.Mannucci at jpl.nasa.gov (Tony Mannucci) Date: Mon, 6 Mar 2006 08:17:38 -0800 Subject: [SciPy-user] Data types In-Reply-To: References: Message-ID: I am now learning that my installation may be lacking. 
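As for Nils's question about the nonzero pattern: the indices of the stored entries can be read off without any comparison, via the matrix's nonzero() method. A minimal sketch reconstructing his 420x1 example (written against the current scipy.sparse constructor syntax, which may differ from the 2006 API):

```python
import numpy as np
from scipy import sparse

# Rebuild the example: a 420x1 CSR matrix with ones at rows 3 and 30.
x = sparse.csr_matrix((np.ones(2), ([3, 30], [0, 0])), shape=(420, 1))

rows, cols = x.nonzero()  # row/column indices of the stored nonzeros
print(rows)               # [ 3 30]
print(x.getnnz())         # 2
```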
Travis seemed to confirm that I was not necessarily making a completely silly mistake, so the next step would be to clean up the installation. A great deal of information can be provided according to the instructions in INSTALL.txt. It seems wrong to send that to everyone. Can that also be a ticket? -Tony > >Message: 5 >Date: Mon, 06 Mar 2006 00:44:35 -0700 >From: Travis Oliphant >Subject: Re: [SciPy-user] Data types >To: SciPy Users List >Message-ID: <440BE863.5050306 at ieee.org> >Content-Type: text/plain; charset=ISO-8859-1; format=flowed > >Tony Mannucci wrote: >> Dear Scipy community, >> >> I cannot find a clear statement as to allowable data types, and how >> these are specified. I am trying to read in an array using >> read_array. One of the arguments of the function is "atype", to >> specify the dtype of the output array. I cannot figure out what >> format the tuple atype is in. >> >> >While SciPy builds for NumPy, it has not been "fully" adapted in that it >still uses a lot of Numeric idioms for data-types. > >The read_array code is still using the old concept of "typecode" which >is a one-character string (i.e. what dtype.char gives now). > >There may also still be bugs still in SciPy. >> If Numpy is like Numeric, then, according to "Python In A Nutshell", >> the array objects contain a single type. So, I cannot use several >> values, I would specify only one value for atype. This seems to >> differ from the scipy online documentation, suggesting atype could be >> a tuple. I realize the functionality is evolving. 
>> Along these lines, the following works: >> data = ARRIN.read_array('filename',columns=(0,2),atype='d') >> >> and appears to give a double precision array (as evidenced from >> data.dtype.char = 'd'). However, using a few other strings for atype >> does not work, e.g. 'I', or 'L', etc. >What do you get as the error? > >> Finally, the following does not seem to work: >> data = ARRIN.read_array('filename',columns=(2)) >> >> so reading a single column appears to cause a problem. The >>following DOES work: >> data = ARRIN.read_array('filename',columns=(0,2)) >> >> >It would be easiest if you opened a ticket and attached an example >file. Open tickets at > >http://projects.scipy.org/scipy/scipy/timeline > >Thanks, > >-Travis > -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From Paul.Ray at nrl.navy.mil Mon Mar 6 19:13:35 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Mon, 6 Mar 2006 19:13:35 -0500 Subject: [SciPy-user] 64 bit support Message-ID: Hi, Does anyone have experience building SciPy on 64 bit linux boxes? Our machine is a dual Opteron, running something like RedHat Enterprise 4.2: xxi1 : 74>uname -a Linux xxi1.nrl.navy.mil 2.6.9-11.ELsmp #1 SMP Thu Jun 16 11:18:13 CDT 2005 x86_64 x86_64 x86_64 GNU/Linux We've been running into problems with numpy/scipy looking for libraries in /usr/lib when it should be using /usr/lib64, so that a simple install with setup.py install doesn't work. It crashes when you run and it loads those 32 bit libraries. I think all I need to do is ensure that the library search paths include /usr/lib64 and / usr/X11R6/lib64 and NOT /usr/lib or /usr/X11R6/lib. What is the easy or "right" way to do this? Is there an option to setup.py? 
Should I make a site.cfg (and where should it reside?)? Should I edit system_info.py? I assume that this configuration is becoming more common as 64-bit processors show up in more machines, so I'm surprised that setup.py doesn't already figure it out and "do the right thing". Are we doing something boneheaded? Thanks, -- Paul From ckkart at hoc.net Tue Mar 7 01:22:11 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 07 Mar 2006 15:22:11 +0900 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) Message-ID: <440D2693.10409@hoc.net> Hi, I've got some observations to share concerning the compilation of numpy/scipy with ATLAS on SuSE10.0 In some earlier posts I told everybody to use gfortran to build numpy/scipy/ATLAS. Although the compilation worked without errors and numpy/scipy seemed to work as well, Robert proved that gfortran is causing serious faults so that scipy eventually fails during execution. Originally I have chosen to use gfortran because every attempt to build numpy/scipy/ATLAS on SuSE10.0 using gcc4 and g77 failed. I will shortly describe what I tried (with gcc4/g77) 1) using SUSE10.0 blas and lapack rpms Python 2.4.1 (#1, Sep 13 2005, 00:39:20) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename >>> 2) with ATLAS Python 2.4.1 (#1, Sep 13 2005, 00:39:20) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy import linalg -> failed: /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_copy_string >>> 3) using self compiled lapack and blas works!! Can anyone figure out what the problem is and propose what to try next? 
Regards, Christian From nwagner at mecha.uni-stuttgart.de Tue Mar 7 02:09:36 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 08:09:36 +0100 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440D2693.10409@hoc.net> References: <440D2693.10409@hoc.net> Message-ID: <440D31B0.7040205@mecha.uni-stuttgart.de> Christian Kristukat wrote: > Hi, > I've got some observations to share concerning the compilation of numpy/scipy > with ATLAS on SuSE10.0 > In some earlier posts I told everybody to use gfortran to build > numpy/scipy/ATLAS. Although the compilation worked without errors and > numpy/scipy seemed to work as well, Robert prooved, that gfortran is causing > serious faults so that scipy eventually fails during execution. > Originally I have chosen to use gfortran because every attempt to build > numpy/scipy/ATLAS on SuSE10.0 using gcc4 and g77 failed. I will shortly describe > what I tried (with gcc4/g77) > > 1) using SUSE10.0 blas and lapack rpms > > Python 2.4.1 (#1, Sep 13 2005, 00:39:20) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy >>>> > import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > > 2) with ATLAS > > Python 2.4.1 (#1, Sep 13 2005, 00:39:20) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy >>>> > import linalg -> failed: > /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: > _gfortran_copy_string > > > 3) using self compiled lapack and blas works!! > > Can anyone figure out what the problem is and propose what to try next? 
> > Regards, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Christian, It is well known that the rpm's (lapack + blas) shipped with SuSE are incomplete. So I recommend removing these rpm's and compiling lapack/blas from scratch. BTW, I sent a message to SuSE two years ago and didn't receive any reply. Nils From nwagner at mecha.uni-stuttgart.de Tue Mar 7 02:31:45 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 08:31:45 +0100 Subject: [SciPy-user] Addition of sparse matrices Message-ID: <440D36E1.80607@mecha.uni-stuttgart.de> Hi Robert, I am sorry but I cannot resolve my problem w.r.t. addition of sparse matrices. I tried astype('d') and astype('D'). Please can you exemplify how to fix it. Thanks in advance Nils I think it works ok: $ python add.py 0.393709556483 0.393709570169 should be zero -1.36865185851e-08 1e-8 is the float precision... Try using doubles. r. From schofield at ftw.at Tue Mar 7 03:52:43 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 07 Mar 2006 09:52:43 +0100 Subject: [SciPy-user] Addition of sparse matrices In-Reply-To: <440D36E1.80607@mecha.uni-stuttgart.de> References: <440D36E1.80607@mecha.uni-stuttgart.de> Message-ID: <440D49DB.5000607@ftw.at> Nils Wagner wrote: > I am sorry but I cannot resolve my problem w.r.t. addition of sparse > matrices. > I tried astype('d') and astype('D'). > Please can you exemplify how to fix it. > I agree there's a bug with adding double precision csr matrices. I tried last night to find out what's wrong, but didn't find it. The double precision FORTRAN function dcscadd is being called correctly. Travis, Robert, do you have any idea what's going on? Adding a zero CSC or CSR matrix also introduces an error of the order of 1E-8. Nils, if they don't have time, could you please file a bug report? 
You can work around it for now by using complex data types and then casting back to double type. For example, this works: >>> (A_csr.astype('D') + B_csr.astype('D')).astype('d')[0,0] - C[0,0] 0.0 -- Ed From nwagner at mecha.uni-stuttgart.de Tue Mar 7 04:28:36 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 10:28:36 +0100 Subject: [SciPy-user] unsupported operand type(s) for +: 'slice' and 'int' Message-ID: <440D5244.2070802@mecha.uni-stuttgart.de> Traceback (most recent call last): File "aispd.py", line 33, in ? M[:,j] = m_j File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1308, in __setitem__ self.indptr = resize1d(self.indptr, row+2) TypeError: unsupported operand type(s) for +: 'slice' and 'int' >>> M <420x420 sparse matrix of type '' with 0 stored elements (space for 100) in Compressed Sparse Row format> >>> m_j <420x1 sparse matrix of type '' with 10 stored elements (space for 1002) in Compressed Sparse Row format> How can I resolve this problem ? for i in arange(0,420): M[i,j] = m_j[i] seems to be not very efficient. Nils From pgmdevlist at mailcan.com Tue Mar 7 04:47:19 2006 From: pgmdevlist at mailcan.com (pierregm) Date: Tue, 7 Mar 2006 04:47:19 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 8 In-Reply-To: References: Message-ID: <200603070447.19361.pgmdevlist@mailcan.com> Paul, I'm running numpy/scipy on a Gentoo/AMD64 without glitches. I assume you have some troubles with lapack/blas & atlas ? I just followed the generic installation instructions from the wiki http://www.scipy.org/Installing_SciPy/BuildingGeneral I copied the blas.a and lapack.a I got in /usr/lib, and the numpy/scipy installation went fine. Not a very elegant method, but one that does the trick. But I'm surprised you have some 32b libraries in /usr/lib. Don't you have a /usr/lib32 in parallel to /usr/lib64, with lib pointing to lib64 only ? 
I remember having the same problem with SuSE (the reason why I switched to Gentoo). What are the libraries being accessed, in fact ? Sorry for not being more helpful -- Pierre GM From schofield at ftw.at Tue Mar 7 04:52:52 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 07 Mar 2006 10:52:52 +0100 Subject: [SciPy-user] unsupported operand type(s) for +: 'slice' and 'int' In-Reply-To: <440D5244.2070802@mecha.uni-stuttgart.de> References: <440D5244.2070802@mecha.uni-stuttgart.de> Message-ID: <440D57F4.6000605@ftw.at> Nils Wagner wrote: > Traceback (most recent call last): > File "aispd.py", line 33, in ? > M[:,j] = m_j > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 1308, in __setitem__ > self.indptr = resize1d(self.indptr, row+2) > TypeError: unsupported operand type(s) for +: 'slice' and 'int' > >>> M > <420x420 sparse matrix of type '' > with 0 stored elements (space for 100) > in Compressed Sparse Row format> > >>> m_j > <420x1 sparse matrix of type '' > with 10 stored elements (space for 1002) > in Compressed Sparse Row format> > > How can I resolve this problem ? > > > > for i in arange(0,420): > M[i,j] = m_j[i] > > seems to be not very efficient. > There's only support for slicing lil_matrix and dok_matrix objects. All sparse matrix manipulations like your for loop are much more efficient with these objects. You're trying to do column-wise slicing and lil_matrix is a row-wise format, so I suggest you use dok_matrix. Only convert to CSR or CSC after you've finished modifying its elements, for multiplication, solvers, etc. 
-- Ed From nwagner at mecha.uni-stuttgart.de Tue Mar 7 05:02:14 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 11:02:14 +0100 Subject: [SciPy-user] unsupported operand type(s) for +: 'slice' and 'int' In-Reply-To: <440D57F4.6000605@ftw.at> References: <440D5244.2070802@mecha.uni-stuttgart.de> <440D57F4.6000605@ftw.at> Message-ID: <440D5A26.8000607@mecha.uni-stuttgart.de> Ed Schofield wrote: > Nils Wagner wrote: > >> Traceback (most recent call last): >> File "aispd.py", line 33, in ? >> M[:,j] = m_j >> File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line >> 1308, in __setitem__ >> self.indptr = resize1d(self.indptr, row+2) >> TypeError: unsupported operand type(s) for +: 'slice' and 'int' >> >>> M >> <420x420 sparse matrix of type '' >> with 0 stored elements (space for 100) >> in Compressed Sparse Row format> >> >>> m_j >> <420x1 sparse matrix of type '' >> with 10 stored elements (space for 1002) >> in Compressed Sparse Row format> >> >> How can I resolve this problem ? >> >> >> >> for i in arange(0,420): >> M[i,j] = m_j[i] >> >> seems to be not very efficient. >> >> > There's only support for slicing lil_matrix and dok_matrix objects. All > sparse matrix manipulations like your for loop are much more efficient > with these objects. You're trying to do column-wise slicing and > lil_matrix is a row-wise format, so I suggest you use dok_matrix. Only > convert to CSR or CSC after you've finished modifying its elements, for > multiplication, solvers, etc. > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > I followed your advice. M = dok_matrix((n,n)) Traceback (most recent call last): File "aispd.py", line 40, in ? 
M[:,j] = m_j File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1644, in __setitem__ if len(seq) != len(value): TypeError: __len__() should return an int >>> M <420x420 sparse matrix with 0 stored elements in Dictionary Of Keys format> >>> m_j <420x1 sparse matrix of type '' with 3 stored elements (space for 2002) in Compressed Sparse Row format> Am I missing something ? Nils From mfmorss at aep.com Tue Mar 7 08:07:20 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Tue, 7 Mar 2006 08:07:20 -0500 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440D31B0.7040205@mecha.uni-stuttgart.de> Message-ID: "GCC 3.4.x is the last edition of GCC to contain g77 - from GCC 3.5 onwards, use gfortran" That is a quotation from http://gcc.gnu.org/onlinedocs/gcc-3.4.1/g77/News.html. Therefore, I don't know what it means to say "with gcc4/g77." It may simply be that "g77" is a synonym for gfortran under gcc-4.0. That could explain why your shared libraries have been compiled with what appear to be gfortran-specific references. To repeat what I said here some days ago, I strongly recommend you quit using gcc-4.0. It is not difficult to build the much more reliable gcc-3.4.5 from source. Use --enable-languages=c,f77. It may be a good idea to keep it in a different directory from that which holds gcc-4. You can do that if you configure with --prefix=. Then alias "g77" to the new g77. Then build the math libraries. Mark F. Morss Principal Analyst, Market Risk American Electric Power Nils Wagner, Sent by: scipy-user-bounces at scipy.net, To: SciPy Users List, 03/07/2006 02:09 AM, Subject: Re: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more). Please respond to SciPy Users List Christian Kristukat wrote: > Hi, > I've got some observations to share concerning the compilation of numpy/scipy > with ATLAS on SuSE10.0 > In some earlier posts I told everybody to use gfortran to build > numpy/scipy/ATLAS. 
Although the compilation worked without errors and > numpy/scipy seemed to work as well, Robert prooved, that gfortran is causing > serious faults so that scipy eventually fails during execution. > Originally I have chosen to use gfortran because every attempt to build > numpy/scipy/ATLAS on SuSE10.0 using gcc4 and g77 failed. I will shortly describe > what I tried (with gcc4/g77) > > 1) using SUSE10.0 blas and lapack rpms > > Python 2.4.1 (#1, Sep 13 2005, 00:39:20) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy >>>> > import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > > 2) with ATLAS > > Python 2.4.1 (#1, Sep 13 2005, 00:39:20) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy >>>> > import linalg -> failed: > /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: > _gfortran_copy_string > > > 3) using self compiled lapack and blas works!! > > Can anyone figure out what the problem is and propose what to try next? > > Regards, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Christian, It is well known that the rpm's (lapack + blas) shipped with SuSE are incomplete. So I recommend to remove these rpm's and comile lapack/blas from scratch. BTW, I have send a message to SuSE two years ago and didn't receive any reply. 
Nils _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From mfmorss at aep.com Tue Mar 7 08:12:34 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Tue, 7 Mar 2006 08:12:34 -0500 Subject: [SciPy-user] 64 bit support In-Reply-To: Message-ID: Let me ask you this: do you have Python itself built in 64 bit? When I tried that here on my AIX server, I could not get the datetime module to build. Mark F. Morss Principal Analyst, Market Risk American Electric Power Paul Ray, Sent by: scipy-user-bounces at scipy.net, To: SciPy Users List, cc: Dan Wood, 03/06/2006 07:13 PM, Subject: [SciPy-user] 64 bit support. Please respond to SciPy Users List Hi, Does anyone have experience building SciPy on 64 bit linux boxes? Our machine is a dual Opteron, running something like RedHat Enterprise 4.2: xxi1 : 74>uname -a Linux xxi1.nrl.navy.mil 2.6.9-11.ELsmp #1 SMP Thu Jun 16 11:18:13 CDT 2005 x86_64 x86_64 x86_64 GNU/Linux We've been running into problems with numpy/scipy looking for libraries in /usr/lib when it should be using /usr/lib64, so that a simple install with setup.py install doesn't work. It crashes when you run and it loads those 32 bit libraries. I think all I need to do is ensure that the library search paths include /usr/lib64 and /usr/X11R6/lib64 and NOT /usr/lib or /usr/X11R6/lib. What is the easy or "right" way to do this? Is there an option to setup.py? Should I make a site.cfg (and where should it reside?)? Should I edit system_info.py? I assume that this configuration is becoming more common as 64-bit processors show up in more machines, so I'm surprised that setup.py doesn't already figure it out and "do the right thing". Are we doing something boneheaded? 
Thanks, -- Paul _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From nwagner at mecha.uni-stuttgart.de Tue Mar 7 08:16:50 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 14:16:50 +0100 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: References: Message-ID: <440D87C2.3050703@mecha.uni-stuttgart.de> mfmorss at aep.com wrote: > "GCC 3.4.x is the last edition of GCC to contain g77 - from GCC 3.5 > onwards, use gfortran" > > That is a quotation from http://gcc.gnu.org/onlinedocs/gcc-3.4.1 > /g77/News.html. Therefore, I don't know what it means to say "with > gcc4/g77." It may simply be that "g77" is a synonym for gfortran under > gcc-4.0. That could explain why your shared libraries have been compiled > with what appear to be gfortran-specific references. > > To repeat what I said here some days ago, I strongly recommend you quit > using gcc-4.0. It is not difficult to build the much more reliable > gcc-3.4.5 from source. Use --enable-languages=c,f77. It may be a good > idea to keep it in a different directory from that which holds gcc-4. You > can do that if you configure with --prefix=. Then alias > "g77" to the the new g77. Then build the math libraries. > > Mark F. Morss > Principal Analyst, Market Risk > American Electric Power > > > I was able to build numpy/scipy using gcc4.0.2. 
SuSE 10.0 comes with rpm -qi compat-g77 Name : compat-g77 Relocations: /usr Version : 3.3.5 Vendor: SUSE LINUX Products GmbH, Nuernberg, Germany Release : 2 Build Date: Sat 10 Sep 2005 01:47:44 AM CEST Install date: Thu 02 Mar 2006 03:32:44 PM CET Build Host: ensslin.suse.de Group : Development/Languages/Fortran Source RPM: compat-g77-3.3.5-2.src.rpm Size : 6289257 License: LGPL Signature : DSA/SHA1, Sat 10 Sep 2005 04:55:00 AM CEST, Key ID a84edae89c800aca Packager : http://www.suse.de/feedback URL : http://gcc.gnu.org/ Summary : GNU Fortran 77 Compiler Description : This is a Fortran 77 only compiler based on GCC 3.3.5. It can be used for source not yet compilable by the gcc-fortran package which contains the new gfortran compiler. So I have used g77 to compile ATLAS and numpy/scipy. It works fine for me. Essentially gfortran cannot be used !!!! Nils From schofield at ftw.at Tue Mar 7 09:24:20 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 07 Mar 2006 15:24:20 +0100 Subject: [SciPy-user] unsupported operand type(s) for +: 'slice' and 'int' In-Reply-To: <440D5A26.8000607@mecha.uni-stuttgart.de> References: <440D5244.2070802@mecha.uni-stuttgart.de> <440D57F4.6000605@ftw.at> <440D5A26.8000607@mecha.uni-stuttgart.de> Message-ID: <440D9794.8090401@ftw.at> Nils Wagner wrote: > I followed your advice. > M = dok_matrix((n,n)) > > Traceback (most recent call last): > File "aispd.py", line 40, in ? > M[:,j] = m_j > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 1644, in __setitem__ > if len(seq) != len(value): > TypeError: __len__() should return an int > >>> M > <420x420 sparse matrix with 0 stored elements in Dictionary Of Keys format> > >>> m_j > <420x1 sparse matrix of type '' > with 3 stored elements (space for 2002) > in Compressed Sparse Row format> > > Am I missing something ? > Yes, m_j needs to be a dok_matrix too. 
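[Putting Ed's advice together, an editorial sketch with hypothetical sizes and values, written against a current scipy.sparse API: keep both the matrix and the column as dok_matrix while assembling, and convert to CSR only at the end.]

```python
# Editorial sketch (not from the original thread): assemble in DOK format,
# where element-wise assignment is cheap, then convert once to CSR.
from scipy.sparse import dok_matrix

n = 6
M = dok_matrix((n, n))
m_j = dok_matrix((n, 1))  # the column to insert -- also a dok_matrix
m_j[1, 0] = 2.5
m_j[4, 0] = -1.0

j = 3
# Copy the nonzeros of m_j into column j of M.
for i in range(n):
    if m_j[i, 0] != 0:
        M[i, j] = m_j[i, 0]

# Convert to CSR only after assembly, for multiplication, solvers, etc.
M_csr = M.tocsr()
```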
-- Ed From agn at noc.soton.ac.uk Tue Mar 7 10:37:46 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Tue, 7 Mar 2006 15:37:46 +0000 Subject: [SciPy-user] 64 bit support In-Reply-To: References: Message-ID: <4CCC2816-F0F1-41B3-8297-91D293ED42E2@noc.soton.ac.uk> On 7 Mar 2006, at 00:13, Paul Ray wrote: > Hi, > > Does anyone have experience building SciPy on 64 bit linux boxes? > > Our machine is a dual Opteron, running something like RedHat > Enterprise 4.2: > xxi1 : 74>uname -a > Linux xxi1.nrl.navy.mil 2.6.9-11.ELsmp #1 SMP Thu Jun 16 11:18:13 CDT > 2005 x86_64 x86_64 x86_64 GNU/Linux > > We've been running into problems with numpy/scipy looking for > libraries in /usr/lib when it should be using /usr/lib64, so that a > simple install with setup.py install doesn't work. It crashes when > you run and it loads those 32 bit libraries. I think all I need to > do is ensure that the library search paths include /usr/lib64 and / > usr/X11R6/lib64 and NOT /usr/lib or /usr/X11R6/lib. What is the easy > or "right" way to do this? Is there an option to setup.py? Should I > make a site.cfg (and where should it reside?)? Should I edit > system_info.py? > > I assume that this configuration is becoming more common as 64-bit > processors show up in more machines, so I'm surprised that setup.py > doesn't already figure it out and "do the right thing". Are we doing > something boneheaded? I managed to get numpy/scipy/matplotlib set up on a single core (SUN) Opteron running a similar RedHat uname -a Linux nohow 2.6.9-22.0.2.EL #1 Thu Jan 5 17:03:08 EST 2006 x86_64 x86_64 x86_64 GNU/Linux (BTW - was the uname -a issued last summer, or is your clock wrong:) I'm sure that I did not do it the easy way -- i used a site.cfg -- but it worked in the end. Since I do not have root privileges on this computer, and in any case I used the acml libraries, I used a site.cfg file for numpy and scipy. 
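[An editorial aside: a minimal site.cfg of the kind described here might look like the following. Section names and paths are illustrative placeholders; the sections actually recognized depend on the numpy version, and an ACML or ATLAS setup would name its own libraries.]

```ini
; Hypothetical site.cfg -- point numpy/scipy at 64-bit libraries.
[DEFAULT]
library_dirs = /usr/local/lib64:/usr/lib64
include_dirs = /usr/local/include:/usr/include

[atlas]
library_dirs = /usr/lib64/atlas

[fftw]
libraries = fftw3
```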
I believe that the code has been changed to allow this site.cfg file to reside either in (in decreasing order of priority) 1. the directory where you are doing setup.py from 2. your home directory $HOME 3. the numpy/distutils subdirectory of the root numpy source directory when installing numpy OR (when installing scipy) the distutils subdirectory of the *installed* numpy directory. This copying of the site.cfg is now automatically done when numpy is installed. numpy and scipy only seemed to need the site.cfg file to find the acml libraries (or in your case, presumably the Atlas libraries). The standard setup.py worked fine otherwise. FFTW of course has to be set, so that $FFTW/include and $FFTW/lib hold the include files and fftw libraries. For matplotlib, I did have to modify the setupext.py file, replacing libdirs = [os.path.join(p, 'lib') for p in basedir[sys.platform] if os.path.exists(p)] by libdirs = [os.path.join(p, 'lib64') for p in basedir[sys.platform] if os.path.exists(p)] + \ [os.path.join(p, 'lib') for p in basedir[sys.platform] if os.path.exists(p)] in function add_base_flags (~line 104) HTH. George. From nwagner at mecha.uni-stuttgart.de Tue Mar 7 11:46:17 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 07 Mar 2006 17:46:17 +0100 Subject: [SciPy-user] 64 bit support In-Reply-To: <4CCC2816-F0F1-41B3-8297-91D293ED42E2@noc.soton.ac.uk> References: <4CCC2816-F0F1-41B3-8297-91D293ED42E2@noc.soton.ac.uk> Message-ID: <440DB8D9.30701@mecha.uni-stuttgart.de> George Nurser wrote: > On 7 Mar 2006, at 00:13, Paul Ray wrote: > > >> Hi, >> >> Does anyone have experience building SciPy on 64 bit linux boxes? 
>> >> Our machine is a dual Opteron, running something like RedHat >> Enterprise 4.2: >> xxi1 : 74>uname -a >> Linux xxi1.nrl.navy.mil 2.6.9-11.ELsmp #1 SMP Thu Jun 16 11:18:13 CDT >> 2005 x86_64 x86_64 x86_64 GNU/Linux >> >> We've been running into problems with numpy/scipy looking for >> libraries in /usr/lib when it should be using /usr/lib64, so that a >> simple install with setup.py install doesn't work. It crashes when >> you run and it loads those 32 bit libraries. I think all I need to >> do is ensure that the library search paths include /usr/lib64 and / >> usr/X11R6/lib64 and NOT /usr/lib or /usr/X11R6/lib. What is the easy >> or "right" way to do this? Is there an option to setup.py? Should I >> make a site.cfg (and where should it reside?)? Should I edit >> system_info.py? >> >> I assume that this configuration is becoming more common as 64-bit >> processors show up in more machines, so I'm surprised that setup.py >> doesn't already figure it out and "do the right thing". Are we doing >> something boneheaded? >> > > > I managed to get numpy/scipy/matplotlib set up on a single core (SUN) > Opteron running a similar RedHat > uname -a > Linux nohow 2.6.9-22.0.2.EL #1 Thu Jan 5 17:03:08 EST 2006 x86_64 > x86_64 x86_64 GNU/Linux > (BTW - was the uname -a issued last summer, or is your clock wrong:) > > > I'm sure that I did not do it the easy way -- i used a site.cfg -- > but it worked in the end. > > Since I do not have root privileges on this computer, and in any > case I used the acml libraries, I used a site.cfg file for numpy and > scipy. > > I believe that the code has been changed to allow this site.cfg file > to reside either in (in decreasing order of priority) > 1. the directory where you are doing setup.py from > 2. your home directory $HOME > 3. the numpy/distutils subdirectory of the root numpy source > directory when installing numpy OR (when installing scipy) > the distutils subdirectory of the *installed* numpy directory. 
> This copying of the site.cfg is now automatically done when numpy is > installed. > > numpy and scipy only seemed to need the site.cfg file to find the > acml libraries (or in your case, presumably the Atlas libraries). > The standard setup.py worked fine otherwise. > FFTW of course has to be set, so that $FFTW/include and $FFTW/lib > hold the include files and fftw libraries. > > For matplotlib, I did have to modfy the setupext.py file, replacing > libdirs = [os.path.join(p, 'lib') for p in basedir > [sys.platform] > if os.path.exists(p)] > by > libdirs = [os.path.join(p, 'lib64') for p in basedir > [sys.platform] > if os.path.exists(p)] + \ > [os.path.join(p, 'lib') for p in basedir[sys.platform] > if os.path.exists(p)] > in function add_base_flags (~line 104) > > > HTH. George. > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi George, Please can you send me your site.cfg. And how did you set FFTW ? I look forward to hearing from you. Nils From Paul.Ray at nrl.navy.mil Tue Mar 7 12:21:46 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 7 Mar 2006 12:21:46 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 8 In-Reply-To: <200603070447.19361.pgmdevlist@mailcan.com> References: <200603070447.19361.pgmdevlist@mailcan.com> Message-ID: Hi, Thanks for all the replies on my 64 bit questions. I'll compile several questions from different people into one reply to save list clutter... pierregm wrote: > But I'm surprised you have some 32b libraries in /usr/lib. Don't > you have > a /usr/lib32 in parallel to /usr/lib64, with lib pointing to lib64 > only ? I > remmbr having the same problem with SuSE (the reason why I switched to > Gentoo). What are the libraries being accessed, in fact ? My machine has a /usr/lib and a /usr/lib64. 
The blas contained in each is: > file /usr/lib64/libblas.so.3.0.3 /usr/lib64/libblas.so.3.0.3: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), stripped > file /usr/lib/libblas.so.3.0.3 /usr/lib/libblas.so.3.0.3: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), stripped mfmorss asked: > Let me ask you this: do you have Python itself built in 64 bit? > When I > tried that here on my AIX server, I could not get the datetime > module to > build. Here is the answer: > file /usr/local/bin/python2.4 /usr/local/bin/python2.4: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped George Nurser asked: > I managed to get numpy/scipy/matplotlib set up on a single core (SUN) > Opteron running a similar RedHat > uname -a > Linux nohow 2.6.9-22.0.2.EL #1 Thu Jan 5 17:03:08 EST 2006 x86_64 > x86_64 x86_64 GNU/Linux > (BTW - was the uname -a issued last summer, or is your clock wrong:) I just issued the uname -a command. Isn't the date the build date of the kernel or something like that? The link problem we are having now is the undefined symbol "srotmg_" which is a sign of an incomplete blas/lapack distribution that came with the OS, I believe. How can I tell setup.py to NOT search for external blas/lapack and just use its own internal implementation (since we don't care at all about linalg performance)? 
Also, using the profiler, it appears that fft's are still being done by the internal fft engine, instead of FFTW3, despite the fact that SciPy thinks it found FFTW: scipy.show_config() gives the following: fft_opt_info: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] atlas_blas_threads_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE fftw3_info: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] Is there any way to positively confirm that FFTW3 is actually getting called? Thanks, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From agn at noc.soton.ac.uk Tue Mar 7 12:51:57 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Tue, 7 Mar 2006 17:51:57 +0000 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 8 In-Reply-To: References: <200603070447.19361.pgmdevlist@mailcan.com> Message-ID: <7C26E4BE-1ED7-488A-BE3C-329004DB55D5@noc.soton.ac.uk> On 7 Mar 2006, at 17:21, Paul Ray wrote: > > Is there any way to positively confirm that FFTW3 is actually getting > called? > Do an ldd on ....../site-packages/scipy/fftpack/_fftpack.so to check if that's linked to your fftw3 libraries George. From Paul.Ray at nrl.navy.mil Tue Mar 7 13:46:49 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 7 Mar 2006 13:46:49 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 8 In-Reply-To: <7C26E4BE-1ED7-488A-BE3C-329004DB55D5@noc.soton.ac.uk> References: <200603070447.19361.pgmdevlist@mailcan.com> <7C26E4BE-1ED7-488A-BE3C-329004DB55D5@noc.soton.ac.uk> Message-ID: On Mar 7, 2006, at 12:51 PM, George Nurser wrote: > > On 7 Mar 2006, at 17:21, Paul Ray wrote: >> >> Is there any way to positively confirm that FFTW3 is actually getting >> called? 
>> > Do an ldd on > ....../site-packages/scipy/fftpack/_fftpack.so > > to check if that's linked to your fftw3 libraries >ldd /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so libg2c.so.0 => /usr/lib64/libg2c.so.0 (0x0000002a95743000) libm.so.6 => /lib64/tls/libm.so.6 (0x0000002a95864000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000002a959ea000) libc.so.6 => /lib64/tls/libc.so.6 (0x0000002a95af6000) /lib64/ld-linux-x86-64.so.2 (0x000000552aaaa000) Interesting, so it seems like it is NOT linking to /usr/local/lib/libfftw3*, despite scipy.show_config() saying that it is. Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From mcantor at stanford.edu Tue Mar 7 15:37:24 2006 From: mcantor at stanford.edu (mike cantor) Date: Tue, 07 Mar 2006 12:37:24 -0800 Subject: [SciPy-user] Getting an RGB value from a colormap Message-ID: <6.0.1.1.2.20060307122614.0304f4a0@mcantor.pobox.stanford.edu> Hi, I'd like to use the colormap interface in a different way than it's used in plotting images. I am plotting a series of curves (using regular old plot), each of which corresponds to the behavior of a biological circuit under a different level of "inducer", ranging from around 0-100. I want to assign the color of each line according to the inducer value, and I want to try out different colormaps to see which I like best. All I really want is a method that, for a given colormap (jet, pink, hot, hsv, etc), takes a number between 0,1 as input and returns the corresponding RGB value, which I can then use as a 'color' argument in a plotting command. (I was sort of hoping that the colormap object itself would provide such a method but I couldn't find any description of its interface). Any ideas? 
Thanks, -mike From jdhunter at ace.bsd.uchicago.edu Tue Mar 7 15:49:45 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 07 Mar 2006 14:49:45 -0600 Subject: [SciPy-user] [Matplotlib-users] Getting an RGB value from a colormap In-Reply-To: <6.0.1.1.2.20060307122614.0304f4a0@mcantor.pobox.stanford.edu> (mike cantor's message of "Tue, 07 Mar 2006 12:37:24 -0800") References: <6.0.1.1.2.20060307122614.0304f4a0@mcantor.pobox.stanford.edu> Message-ID: <87d5gyc74m.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "mike" == mike cantor writes: mike> Hi, I'd like to use the colormap interface in a different mike> way than it's used in plotting images. I am plotting a mike> series of curves (using regular old plot), each of which mike> corresponds to the behavior of a biological circuit under a mike> different level of "inducer", ranging from around 0-100. I mike> want to assign the color of each line according to the mike> inducer value, and I want to try out different colormaps to mike> see which I like best. mike> All I really want is a method that, for a given colormap mike> (jet, pink, hot, hsv, etc), takes a number between 0,1 as mike> input and returns the corresponding RGB value, which I can mike> then use a 'color' argument in a plotting command. (I was In [6]: import matplotlib.cm as cm In [7]: cm.jet(.5) Out[7]: (0.49019607843137247, 1.0, 0.47754585705249841, 1.0) mike> sort of hoping that the colormap object itself would provide mike> such a method but I couldn't find any description of it's mike> interface). The class documentation is available at http://matplotlib.sourceforge.net/classdocs.html in particular, take a look at http://matplotlib.sourceforge.net/matplotlib.colors.html#LinearSegmentedColormap "jet" in the example above, is an instance of the LinearSegmentedColormap colormap class, and the __call__ method returns the data you are interested in for scalars or sequences. 
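As a rough self-contained sketch of the mechanics (with made-up anchor colors, not matplotlib's actual segment data), a colormap call of this kind boils down to linear interpolation between anchor colors for an input in [0, 1]:

```python
# Toy model of a colormap's __call__: clip the input to [0, 1], find the
# segment it falls in, and linearly interpolate between the two anchor
# RGB colors bounding that segment.  The anchors below are hypothetical,
# vaguely jet-like choices; matplotlib's jet uses per-channel segment data.

def lerp(a, b, t):
    """Linear interpolation between scalars a and b."""
    return a + (b - a) * t

def simple_cmap(anchors):
    """Build a colormap function from equally spaced RGB anchor colors."""
    n = len(anchors) - 1
    def cmap(x):
        x = min(max(x, 0.0), 1.0)       # clip to [0, 1]
        i = min(int(x * n), n - 1)      # which segment x falls in
        t = x * n - i                   # position within that segment
        lo, hi = anchors[i], anchors[i + 1]
        return tuple(lerp(a, b, t) for a, b in zip(lo, hi))
    return cmap

# Blue -> green -> red:
cmap = simple_cmap([(0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)])
assert cmap(0.0) == (0.0, 0.0, 1.0)
assert cmap(0.5) == (0.0, 1.0, 0.0)   # midpoint lands on the middle anchor
assert cmap(1.0) == (1.0, 0.0, 0.0)
```

In practice, of course, `cm.jet(x)` as shown above does all of this (plus the alpha channel) for you; the sketch is only to show why a single call with a scalar in [0, 1] is all that is needed to color each curve by its inducer level.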
JDH From hetland at tamu.edu Tue Mar 7 16:13:28 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue, 7 Mar 2006 15:13:28 -0600 Subject: [SciPy-user] Compiling scipy with ifort on an intel Mac Message-ID: I have had a hard time getting g77 compiled on my intel Mac, but I do have a beta version of the intel fortran compiler for the Mac... So, I am trying to get scipy compiled with ifort. I get an error like that shown below. I get a similar error when trying to use f2py. I should note that numpy (other than f2py) compiles fine. I am such a hack when it comes to these things -- I'm sure I am doing something stupid. I have tried a number of things -- I have the regular Mac python, and a fresh install in /usr/local, but both fail in the same way. I have tried including/excluding some of the obvious compiler flags such as -nofor_main. It seems that the linker can't find the python libraries. Including -lpython reduces the list of undefined symbols, but _MAIN__ still remains. Can someone point me to the truth? -Rob EXAMPLE: mire:~/src/python/scipy$ python setup.py config_fc --fcompiler=intel build [...snip...] f2py options: [] adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. adding 'build/src/Lib/stats/mvn-f2pywrappers.f' to sources. 
building data_files sources running build_py copying Lib/__svn_version__.py -> build/lib.darwin-8.5.2-i386-2.4/scipy copying build/src/scipy/__config__.py -> build/lib.darwin-8.5.2-i386-2.4/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib Could not locate executable efort Could not locate executable efc customize IntelFCompiler customize IntelFCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize IntelFCompiler customize IntelFCompiler using build_ext building 'scipy.interpolate._fitpack' extension compiling C sources gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' compile options: '-I/usr/local/lib/python2.4/site-packages/numpy/core/include -I/usr/local/include/python2.4 -c' /opt/intel/fc/9.1.017/bin/ifort -shared -nofor_main build/temp.darwin-8.5.2-i386-2.4/Lib/interpolate/_fitpackmodule.o -Lbuild/temp.darwin-8.5.2-i386-2.4 -lfitpack -o build/lib.darwin-8.5.2-i386-2.4/scipy/interpolate/_fitpack.so ifort: Command line warning: ignoring unknown option '-shared' ld: Undefined symbols: _PyArg_ParseTuple _PyCObject_AsVoidPtr _PyCObject_Type _PyDict_SetItemString _PyErr_Format _PyErr_NewException _PyErr_NoMemory _PyErr_Occurred _PyExc_RuntimeError _PyImport_ImportModule _PyModule_GetDict _PyObject_GetAttrString _PyString_FromString _Py_BuildValue _Py_FatalError _Py_InitModule4 _MAIN__ ifort: Command line warning: ignoring unknown option '-shared' ld: Undefined symbols: _PyArg_ParseTuple _PyCObject_AsVoidPtr _PyCObject_Type _PyDict_SetItemString _PyErr_Format _PyErr_NewException _PyErr_NoMemory _PyErr_Occurred _PyExc_RuntimeError _PyImport_ImportModule _PyModule_GetDict _PyObject_GetAttrString _PyString_FromString _Py_BuildValue _Py_FatalError _Py_InitModule4 _MAIN__ error: Command "/opt/intel/fc/9.1.017/bin/ifort -shared -nofor_main 
build/temp.darwin-8.5.2-i386-2.4/Lib/interpolate/_fitpackmodule.o -Lbuild/temp.darwin-8.5.2-i386-2.4 -lfitpack -o build/lib.darwin-8.5.2-i386-2.4/scipy/interpolate/_fitpack.so" failed with exit status 1 mire:~/src/python/scipy$ python setup.py config_fc --fcompiler=intel build From o_medoc at yahoo.fr Tue Mar 7 17:09:26 2006 From: o_medoc at yahoo.fr (Olivier Médoc) Date: Tue, 07 Mar 2006 23:09:26 +0100 Subject: [SciPy-user] minimizers don't work - d1mach problem Message-ID: <440E0496.3050409@yahoo.fr> I found this thread in the mailing list archive. I seem to get the same error: scipy 0.4.6 built with a gcc 4.0.3 snapshot and a gfortran 4.0.3 snapshot. When I run test() I get: Adjust D1MACH by uncommenting data statements appropriate for your machine. STOP 779 I think this is an arch-specific error (AMD64?). Finally I found a workaround; I can now run the test suite at level 10 without errors: - I deleted the two d1mach.f files and replaced them with a C file found here: https://pse.cheme.cmu.edu/svn-view/ascend/code/trunk/linpack/ (the file d1mach.c) - then adjusted the setup.py files: sed "s/join('mach','\*\.f')/join('mach','*.f'),join('mach','*.c')/" -i Lib/special/setup.py sed "s/join('mach','\*\.f')/join('mach','*.f'),join('mach','*.c')/" -i Lib/integrate/setup.py Please note that I only consider this to have worked because the tests at level 10 all pass; I don't know more about it than that. Regards, Olivier. 
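For reference, d1mach is the classic SLATEC routine that does nothing but report double-precision machine constants, which is why a wrong DATA statement aborts so many routines with STOP 779. On an IEEE 754 machine such as the AMD64 box above, the values it should return can be cross-checked from Python (the index-to-constant mapping in the comments is my reading of the SLATEC convention, so treat it as an assumption):

```python
import sys

# IEEE 754 binary64 machine constants, as d1mach would report them:
#   d1mach(1): smallest positive normalized number
#   d1mach(2): largest finite number
#   d1mach(4): relative spacing, i.e. machine epsilon
print(sys.float_info.min)      # 2.2250738585072014e-308
print(sys.float_info.max)      # 1.7976931348623157e+308
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```

If the compiled-in DATA statements disagree with these values, any routine that queries d1mach for tolerances will refuse to run, which matches the failure mode above.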
From robert.kern at gmail.com Tue Mar 7 17:48:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 07 Mar 2006 16:48:31 -0600 Subject: [SciPy-user] Compiling scipy with ifort on an intel Mac In-Reply-To: References: Message-ID: <440E0DBF.4030707@gmail.com> Rob Hetland wrote: > I have had a hard time getting g77 compiled on my intel Mac, but I do > have a beta version of the intel fortran compiler for the Mac... So, > I am trying to get scipy compiled with ifort. > > I get an error like that shown below. I get a similar error when > trying to use f2py. I should note that numpy (other than f2py) > compiles fine. I am such a hack when it comes to these things -- I'm > sure I am doing something stupid. I have tried a number of things -- > I have the regular Mac python, and a fresh install in /usr/local, but > both fail in the same way. I have tried including/excluding some of > the obvious compiler flags such as -nofor_main. It seems that the > linker can't find the python libraries. Including -lpython reduces > the list of undefined symbols, but _MAIN__ still remains. > > Can someone point me to the truth? Not having an Intel Mac, I'm not sure what needs to be done. You might try adding the following flag to the linker_so command. -Wl,-bundle,-flat_namespace,-undefined,suppress You may need to check ifort's manual for the appropriate way to pass options to the underlying linker. I took this example from ibm.py. Probably, the linker_so command will also need the equivalent of "-framework Python". If ifort accepts that directly, all the better. If not, then it may need to be "escaped" like the above flag is "escaped". -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ckkart at hoc.net Tue Mar 7 20:17:41 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 08 Mar 2006 10:17:41 +0900 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440D87C2.3050703@mecha.uni-stuttgart.de> References: <440D87C2.3050703@mecha.uni-stuttgart.de> Message-ID: <440E30B5.9080208@hoc.net> Nils Wagner wrote: >> > I was able to build numpy/scipy using gcc4.0.2. > [...] > > So I have used g77 to compile ATLAS and numpy/scipy. It works fine for me. > > Essentially gfortran cannot be used !!!! > Hi Nils, can you tell me how you did that? I'm about to switch to gcc3 but I'd prefer not to depend on so much additional software. Regards, Christian From mantha at chem.unr.edu Tue Mar 7 22:46:44 2006 From: mantha at chem.unr.edu (Jordan Mantha) Date: Tue, 7 Mar 2006 19:46:44 -0800 Subject: [SciPy-user] Compiling scipy with ifort on an intel Mac In-Reply-To: References: Message-ID: <086262C8-95F8-4545-AF97-31B1B5F29D61@chem.unr.edu> On Mar 7, 2006, at 1:13 PM, Rob Hetland wrote: > > I have had a hard time getting g77 compiled on my intel Mac, but I do > have a beta version of the intel fortran compiler for the Mac... So, > I am trying to get scipy compiled with ifort. I too have an Intel iMac and have been having a similar struggle. I signed up to get the beta Intel Fortran compiler but haven't gotten it yet. I'm very interested in getting scipy and numpy to work on this computer, so if you get it to work please email the list for the rest of us Intel mac users. I've tried some of the g77/gcc builds from http://hpc.sourceforge.net/index.php and http://wiki.urbanek.info/index.cgi?TigerGCC without much success. 
-Jordan Mantha From nwagner at mecha.uni-stuttgart.de Wed Mar 8 02:16:03 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Mar 2006 08:16:03 +0100 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440E30B5.9080208@hoc.net> References: <440D87C2.3050703@mecha.uni-stuttgart.de> <440E30B5.9080208@hoc.net> Message-ID: <440E84B3.4040102@mecha.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >>> >>> >> I was able to build numpy/scipy using gcc4.0.2. >> >> > [...] > >> So I have used g77 to compile ATLAS and numpy/scipy. It works fine for me. >> >> Essentially gfortran cannot be used !!!! >> >> > > Hi Nils, > can you tell me how you did that? I'm about to switch to gcc3 but I'd prefer not > to depend on so much additional software. > > Regards, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Christian, All you need is to install compat-g77. Once you have built ATLAS using gcc 4.0.2 you should build a complete liblapack.a. http://math-atlas.sourceforge.net/errata.html#completelp Copy all *.a into /usr/local/lib/atlas Check out scipy/numpy via svn and install them via python setup.py install That's all. Cheers, Nils rpm -qi compat-g77 Name : compat-g77 Relocations: /usr Version : 3.3.5 Vendor: SUSE LINUX Products GmbH, Nuernberg, Germany Release : 2 Build Date: Sat 10 Sep 2005 01:47:44 AM CEST Install date: Thu 02 Mar 2006 03:32:44 PM CET Build Host: ensslin.suse.de Group : Development/Languages/Fortran Source RPM: compat-g77-3.3.5-2.src.rpm Size : 6289257 License: LGPL Signature : DSA/SHA1, Sat 10 Sep 2005 04:55:00 AM CEST, Key ID a84edae89c800aca Packager : http://www.suse.de/feedback URL : http://gcc.gnu.org/ Summary : GNU Fortran 77 Compiler Description : This is a Fortran 77 only compiler based on GCC 3.3.5. 
It can be used for source not yet compilable by the gcc-fortran package which contains the new gfortran compiler. Authors: -------- James Craig Burley Toon Moene Distribution: SUSE LINUX 10.0 (X86-64) From ckkart at hoc.net Wed Mar 8 04:30:02 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 08 Mar 2006 18:30:02 +0900 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440E84B3.4040102@mecha.uni-stuttgart.de> References: <440D87C2.3050703@mecha.uni-stuttgart.de> <440E30B5.9080208@hoc.net> <440E84B3.4040102@mecha.uni-stuttgart.de> Message-ID: <440EA41A.5020204@hoc.net> Nils Wagner wrote: > Christian Kristukat wrote: >> Nils Wagner wrote: >> >>>> >>>> >>> I was able to build numpy/scipy using gcc4.0.2. >>> >>> >> [...] >> >>> So I have used g77 to compile ATLAS and numpy/scipy. It works fine for me. >>> >>> Essentially gfortran cannot be used !!!! >>> >>> >> Hi Nils, >> can you tell me how you did that? I'm about to switch to gcc3 but I'd prefer not >> to depend on so much additional software. >> >> Regards, Christian >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > Hi Christian, > > All you need is to install compat-g77. Once you have build ATLAS using > gcc4.02 > you should build a complete liblapack.a. > http://math-atlas.sourceforge.net/errata.html#completelp > > Copy all *.a into /usr/local/lib/atlas > Checkout scipy/numpy via svn > and install them via > python setup.py install > That's all. But I've tried that many times, and as said before, I end up with an error when importing numpy or scipy: import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename Could you maybe send me the list of installed packages on your machine, just to compare? 
(rpm -qa) Regards, Christian From nwagner at mecha.uni-stuttgart.de Wed Mar 8 04:39:36 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Mar 2006 10:39:36 +0100 Subject: [SciPy-user] numpy/scipy+ATLAS on SuSE10.0/gcc4 (once more) In-Reply-To: <440EA41A.5020204@hoc.net> References: <440D87C2.3050703@mecha.uni-stuttgart.de> <440E30B5.9080208@hoc.net> <440E84B3.4040102@mecha.uni-stuttgart.de> <440EA41A.5020204@hoc.net> Message-ID: <440EA658.90402@mecha.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >> Christian Kristukat wrote: >> >>> Nils Wagner wrote: >>> >>> >>>>> >>>>> >>>>> >>>> I was able to build numpy/scipy using gcc4.0.2. >>>> >>>> >>>> >>> [...] >>> >>> >>>> So I have used g77 to compile ATLAS and numpy/scipy. It works fine for me. >>>> >>>> Essentially gfortran cannot be used !!!! >>>> >>>> >>>> >>> Hi Nils, >>> can you tell me how you did that? I'm about to switch to gcc3 but I'd prefer not >>> to depend on so much additional software. >>> >>> Regards, Christian >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.net >>> http://www.scipy.net/mailman/listinfo/scipy-user >>> >>> >> Hi Christian, >> >> All you need is to install compat-g77. Once you have build ATLAS using >> gcc4.02 >> you should build a complete liblapack.a. >> http://math-atlas.sourceforge.net/errata.html#completelp >> >> Copy all *.a into /usr/local/lib/atlas >> Checkout scipy/numpy via svn >> and install them via >> python setup.py install >> That's all. >> > > But I've tried that many times, and as said before, I end up with an error when > importing numpy or scipy: > > import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > Could you maybe send me the list of installed packages on your machine, just to > compare? 
(rpm -qa) > > Regards, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Christian, You cannot use the prebuilt libraries (blas/lapack)! So uninstall these RPMs. rpm -qi lapack package lapack is not installed rpm -qi blas package blas is not installed I then built blas/lapack following http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 This is my modified make.inc in /usr/local/src/LAPACK #################################################################### # LAPACK make include file. # # LAPACK, Version 3.0 # # June 30, 1999 # #################################################################### # SHELL = /bin/sh # # The machine (platform) identifier to append to the library names # PLAT = _LINUX # # Modify the FORTRAN and OPTS definitions to refer to the # compiler and desired compiler options for your machine. NOOPT # refers to the compiler options desired when NO OPTIMIZATION is # selected. Define LOADER and LOADOPTS to refer to the loader and # desired load options for your machine. # FORTRAN = g77 #OPTS = -funroll-all-loops -fno-f2c -O3 # # On 64 bit systems with GNU compiler # OPTS = -O2 -m64 -fPIC NOOPT = -m64 -fPIC DRVOPTS = $(OPTS) #NOOPT = LOADER = g77 LOADOPTS = # # The archiver and the flag(s) to use when building archive (library) # If you system has no ranlib, set RANLIB = echo. # ARCH = ar ARCHFLAGS= cr RANLIB = ranlib # # The location of the libraries to which you will link. (The # machine-specific, optimized BLAS library should be used whenever # possible.) 
# BLASLIB = ../../blas$(PLAT).a LAPACKLIB = lapack$(PLAT).a TMGLIB = tmglib$(PLAT).a EIGSRCLIB = eigsrc$(PLAT).a LINSRCLIB = linsrc$(PLAT).a Nils From gnchen at cortechs.net Wed Mar 8 12:07:38 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Wed, 8 Mar 2006 09:07:38 -0800 Subject: [SciPy-user] SciPy job ad Message-ID: <12E50158-8721-4B3C-818E-DE9BC507F446@cortechs.net> Hi all, I know this is not the right place to put an ad, but I think someone in the scipy community might be interested in this job: Neuroimaging System Engineer/Programmer CorTechs Labs, Inc. is developing MRI-based diagnostics which are targeted at the volumetric measurement, classification, and early detection of specific Central Nervous System diseases, including Alzheimer's disease, Multiple Sclerosis, and HIV. We are currently seeking candidates for a Neuroimaging System Engineer/Programmer position which will develop and support commercial software code which allows radiological and healthcare IT components to interface with our diagnostic software components. As a member of a small development team, you will work closely with world-class scientists and physicians who are participating in the National Institutes of Health/National Institute on Aging/National Institute of Infectious Diseases sponsored projects. Qualifications -3-5 years of experience in developing commercial and/or academic applications in the Python language. -Expert in building Python extension modules -Experience in embedding Python -3-5 years of experience in DICOM components and a thorough understanding of the DICOM standard -3-5 years of experience with Matlab and C/C++ -Experience with HPC (High Performance Computation) systems -Experience with LAMP (Linux, Apache, MySQL + Python/PHP) -Experience with SQL programming calls to relational database management systems -Experience with QT/PyQT and OpenGL will be a plus. 
Duties and Responsibilities -Based upon written specifications and perhaps well defined prototype components, design and build commercial interface code which allows clinicians to use our computationally derived products within workflow customary in hospitals and radiology departments. -Develop a service based extension which allows for broader use of a centrally available computational resource for medical researchers -Help scientists port current code to a Python-based scientific computation environment. Education An M.S. or higher from an accredited college in Computer Science or other related discipline, and a GPA of 3.5 or higher, is required. Cortechs Labs, Inc. is located in La Jolla Village in San Diego, CA. This is a full-time position which includes a competitive salary, subsidized healthcare benefits, vacation, and, if needed, H1B sponsorship. Qualified candidates should send or email (please do not send Word files - they will not be opened) their resumes to the attention of: Howard Pinsky, Chief Operating Officer Cortechs Labs, Inc. 1020 Prospect St., Suite 304 La Jolla, CA 92037 Or email to: pinsky at cortechs.net Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Mar 9 15:22:45 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 9 Mar 2006 20:22:45 +0000 Subject: [SciPy-user] scipy.test problem In-Reply-To: References: Message-ID: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> Hi, > I work on a dual processor Xeon box, and try to install scipy. After > various unsuccessful attempts, I now compiled BLAS, LAPACK ATLAS and > FFTW3.1 from scratch, then installed numpy-0.9.5 and then installed > scipy-0.4.6 under python 2.4.2. 
The installation went fine but after > > import scipy > scipy.test(level=1) > > I get the following error message: > > ====================================================================== > FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) > ---------------------------------------------------------------------- I get the same errors. I think we are both working on a 64 bit system. I noticed the comments in the test_fblas.py file in the linalg/tests directory: # These test really need to be checked on 64 bit architectures. # What does complex32 become on such machines? complex64 I'll bet. # If so, I think we are OK. # Check when we have a machine to check on. Is it possible that this is the source of our problem? Best, Matthew From hetland at tamu.edu Fri Mar 10 09:58:23 2006 From: hetland at tamu.edu (Robert Hetland) Date: Fri, 10 Mar 2006 08:58:23 -0600 Subject: [SciPy-user] GCC 3.3 requirement Message-ID: Apple shipped a crippled version of GCC 3.3 with the new intel Macs. (It seems like it is there -- there is a package, and you can gcc_select it -- but it does not work properly.) Many on this list have discouraged using 4.0 for building scipy. I have also found that 3.3 works on the *PPC* Mac, where 4.0 does not. However, it seems inevitable that mac users will need to compile with 4.0 as more users get new Intel based machines. I am sure that eventually someone will provide an unofficial gcc 3.3 package (like Guarav Khanna who maintains hpc.sf.net), but I don't think this is a good long-term solution. What are the issues involved? Why do we need gcc 3.3? Is there some sort of timescale for moving the code base up to GCC 4.0 (or 4.x) compliance? -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From schofield at ftw.at Fri Mar 10 10:14:54 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 10 Mar 2006 16:14:54 +0100 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: References: Message-ID: On 10/03/2006, at 3:58 PM, Robert Hetland wrote: > > Apple shipped a crippled version of GCC 3.3 with the new intel > Macs. (It seems like it is there -- there is a package, and you > can gcc_select it -- but it does not work properly.) > > Many on this list have discouraged using 4.0 for building scipy. I > have also found that 3.3 works on the *PPC* Mac, where 4.0 does > not. However, it seems inevitable that mac users will need to > compile with 4.0 as more users get new Intel based machines. I am > sure that eventually someone will provide an unofficial gcc 3.3 > package (like Guarav Khanna who maintains hpc.sf.net), but I don't > think this is a good long-term solution. I agree. > What are the issues involved? Why do we need gcc 3.3? Is there > some sort of timescale for moving the code base up to GCC 4.0 (or > 4.x) compliance? It seems the main problem is with bugs in the latest versions, rather than any fundamental incompatibility. Of course it would be great to isolate them, report them to the GCC maintainers, and get them fixed. I seem to recall that SciPy mostly works with Apple's GCC 4.0 for PPC, except for long double data types. Perhaps we could start up a Wiki page with info on what works and what doesn't with GCC 4.x on different platforms, and try to isolate or work around the problems one by one ... -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From humufr at yahoo.fr Fri Mar 10 13:53:58 2006 From: humufr at yahoo.fr (Humufr) Date: Fri, 10 Mar 2006 13:53:58 -0500 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: References: Message-ID: <4411CB46.3080804@yahoo.fr> One big problem with gcc4, especially 4.0 is that gfortran is buggy. 
A lot of improvement has been made, but compared with the commercial compilers or with g95 it is not on the same level: gfortran is still an alpha version of a true Fortran 90 compiler. Many parts of scipy use Fortran, so this is a critical issue for the moment. N. Robert Hetland wrote: > > Apple shipped a crippled version of GCC 3.3 with the new intel Macs. > (It seems like it is there -- there is a package, and you can > gcc_select it -- but it does not work properly.) > > Many on this list have discouraged using 4.0 for building scipy. I > have also found that 3.3 works on the *PPC* Mac, where 4.0 does not. > However, it seems inevitable that mac users will need to compile with > 4.0 as more users get new Intel based machines. I am sure that > eventually someone will provide an unofficial gcc 3.3 package (like > Guarav Khanna who maintains hpc.sf.net), but I don't think this is a > good long-term solution. > > What are the issues involved? Why do we need gcc 3.3? Is there some > sort of timescale for moving the code base up to GCC 4.0 (or 4.x) > compliance? > > -Rob. > > ----- > Rob Hetland, Assistant Professor > Dept of Oceanography, Texas A&M University > p: 979-458-0096, f: 979-845-6331 > e: hetland at tamu.edu, w: http://pong.tamu.edu > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From mfmorss at aep.com Fri Mar 10 15:15:59 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Fri, 10 Mar 2006 15:15:59 -0500 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: <4411CB46.3080804@yahoo.fr> Message-ID: Exactly, but even for compiling C code, gcc-4.0 is less trustworthy. The last version of gcc that was produced before gcc-4.0 was, if I'm not mistaken, gcc-3.4.5. Not gcc-3.3. I would recommend gcc-3.4.5 for building anything, not just numpy and scipy. 
I was surprised to learn here that gcc-4.0 is shipped with SuSE. Also unless I'm mistaken, there is no need to use a Fortran compiler more advanced than g77 to compile the math libs, which are written in Fortran 77. I don't really see any problem with not being able or willing to use more recent versions of gcc, since for the time being, the older version is more stable. Gcc-3.4.5 (including g77) isn't going to go away, and if you don't have it, it's not a huge problem to build it. My belief is that the compiled distributions of Linux, for example, still all use gcc-3. I notice, for example, that Gentoo does not mark gcc-4 "stable" for a build on any system (http://packages.gentoo.org/search/?sstring=gcc). And Lunar is still based on 3.4.5. Mark F. Morss Principal Analyst, Market Risk American Electric Power Humufr, sent by scipy-user-bounces at scipy.net, 03/10/2006 01:53 PM, To: SciPy Users List, Subject: Re: [SciPy-user] GCC 3.3 requirement (please respond to SciPy Users List), wrote: One big problem with gcc4, especially 4.0 is that gfortran is buggy. A lot of improvement has been done but in compariason with commercial compiler or g95, it's not the same things. gfortran is an alpha version of a true fortran 90 compilers. So many things in scipy are using fortran so this is a critical issue for the moment. N. Robert Hetland wrote: > > Apple shipped a crippled version of GCC 3.3 with the new intel Macs. > (It seems like it is there -- there is a package, and you can > gcc_select it -- but it does not work properly.) > > Many on this list have discouraged using 4.0 for building scipy. I > have also found that 3.3 works on the *PPC* Mac, where 4.0 does not. > However, it seems inevitable that mac users will need to compile with > 4.0 as more users get new Intel based machines. I am sure that > eventually someone will provide an unofficial gcc 3.3 package (like > Guarav Khanna who maintains hpc.sf.net), but I don't think this is a > good long-term solution. 
> > What are the issues involved? Why do we need gcc 3.3? Is there some > sort of timescale for moving the code base up to GCC 4.0 (or 4.x) > compliance? > > -Rob. > > ----- > Rob Hetland, Assistant Professor > Dept of Oceanography, Texas A&M University > p: 979-458-0096, f: 979-845-6331 > e: hetland at tamu.edu, w: http://pong.tamu.edu > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From guillem at torroja.dmt.upm.es Fri Mar 10 16:26:04 2006 From: guillem at torroja.dmt.upm.es (Guillem Borrell Nogueras) Date: Fri, 10 Mar 2006 22:26:04 +0100 Subject: [SciPy-user] error compiling scipy with gcc 4.1 Message-ID: <200603102226.04938.guillem@torroja.dmt.upm.es> Hi, I am trying to compile scipy svn (r1676) with gcc 4.1. One of gfortran's developers suggested that I use gcc 4.1 instead of 4.0.2 because this release is supposed to be much more stable and fast. After building numpy successfully, also from svn, the scipy compilation fails when it tries to build the dfftpack library. It seems that distutils fails to find the fortran compiler. I'm a bit perplexed about this because it has been able to build all the previous fortran subpackages... ... 
customize IntelFCompiler customize LaheyFCompiler customize PGroupFCompiler customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler customize IntelItaniumFCompiler customize Gnu95FCompiler customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler customize GnuFCompiler customize GnuFCompiler using build_clib building 'dfftpack' library compiling Fortran sources f77(f77) options: '-Wall -fno-second-underscore -fPIC -O2 -funroll-loops -fomit-frame-pointer -malign-double' compile options: '-c' f77:f77: Lib/fftpack/dfftpack/dcost.f sh: f77: command not found sh: f77: command not found error: Command "f77 -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -fomit-frame-pointer -malign-double -c -c Lib/fftpack/dfftpack/dcost.f -o build/temp.linux-i686-2.4/Lib/fftpack/dfftpack/dcost.o" failed with exit status 127 I am not especially interested in building scipy with gcc 4.1, but I thought the results might be of some interest. guillem -- Guillem Borrell Nogueras WEBSITE http://torroja.dmt.upm.es:9673/Guillem_Site/ BLOG http://torroja.dmt.upm.es:9673/Guillem_Borrell/ EMAIL guillemborrell_at_gmail.com (personal) guillem_at_torroja.dmt.upm.es (CFD Lab. ETSIA) From rclewley at cam.cornell.edu Fri Mar 10 18:46:21 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 10 Mar 2006 18:46:21 -0500 (EST) Subject: [SciPy-user] ANN: PyDSTool v0.83 - new release. Message-ID: Dear SciPy users, PyDSTool is an integrated simulation, modeling and analysis package for dynamical systems (including ODEs, DAEs, maps, and hybrid systems) and scientific data. Building on SciPy classes, the package also supports symbolic expression processing, bifurcation analysis, and enhanced arrays for "index-free" and highly contextualized scientific data manipulation. 
Model building tools are provided, which use symbolic expression and hierarchical specification classes to ease the development and analysis of complex dynamical models. This includes automated compilation of symbolic representations of models into fast numerical code using enhanced legacy Fortran and C integrators for both stiff and non-stiff systems. A full overview and extensive user documentation is provided online at http://pydstool.sourceforge.net, and in the code itself. We have made a significant update in version 0.83 of PyDSTool, including added features for bifurcation/continuation analysis, symbolic expression manipulation and symbolic differentiation, use of Points as enhanced arrays, and the addition of support for external inputs to ODEs, maps, etc. (e.g., from experimental data sources). A full list of what's new is provided in the release notes on SourceForge. Thanks for your attention, Rob Clewley, Erik Sherwood, Drew LaMar, Dept. of Mathematics and Center for Applied Mathematics, Cornell University. From hetland at tamu.edu Fri Mar 10 19:35:18 2006 From: hetland at tamu.edu (Robert Hetland) Date: Fri, 10 Mar 2006 18:35:18 -0600 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: References: Message-ID: <84DD7DFA-4F9E-4259-A0BC-6D4963DF9C91@tamu.edu> On Mar 10, 2006, at 2:15 PM, mfmorss at aep.com wrote: > I don't really see any problem with not being able or willing to > use more > recent versions of gcc, since for the time being, the older version > is more > stable. Well, the new Intel Macs don't have the older version. I suspect at some point some of the linuxes will do as well. That's my only point. > Gcc-3.4.5 (including g77) isn't going to go away, and if you don't > have it, it's not a huge problem to build it. Well, it is also not all that simple either. You need to bootstrap a compiler build -- if you only have the newest, you are coming from the wrong direction. 
I imagine that there will be binary packages for the Intel Mac in short order, but that still requires installing even *more* stuff to get scipy working right. My main point is simply: If you restrict your audience to people who can install compilers, it will be pretty small.. If I don't reply, it's because I'm on vacation for a week... -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From w.northcott at unsw.edu.au Fri Mar 10 21:04:45 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Sat, 11 Mar 2006 13:04:45 +1100 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: References: Message-ID: On 11/03/2006, at 1:14 AM, Robert Hetland wrote: > Apple shipped a crippled version of GCC 3.3 with the new intel > Macs. (It seems like it is there -- there is a package, and you can > gcc_select it -- but it does not work properly.) Apple are very clear that their gcc-3.3 does NOT support the Intel machines. It is not crippled, it is just PPC software running under Rosetta! That is why everyone is working to get packages to function with gcc-4. Bill Northcott From strawman at astraw.com Sat Mar 11 20:14:23 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 11 Mar 2006 17:14:23 -0800 Subject: [SciPy-user] ANN: PyDSTool v0.83 - new release. In-Reply-To: References: Message-ID: <441375EF.10602@astraw.com> Robert Clewley wrote: >PyDSTool is an integrated simulation, modeling and analysis package for >dynamical systems (including ODEs, DAEs, maps, and hybrid systems) and >scientific data. > > Hi, PyDSTool looks really interesting. A couple of questions, which I might suggest should be on your website's first page: 1) What is the license of PyDSTool? (I didn't download the source to find out.) 2) Is there a mailing list? 
neither searching for "mailing list" in the text of your wiki pages nor going to the SourceForge project page "List" turned up anything... And finally, since I'm asking questions, one that almost certainly doesn't need to go on your website's first page: Do you have any c2d-type function that takes a transfer function from the continuous to discrete domains? From webb.sprague at gmail.com Sat Mar 11 22:36:10 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Sat, 11 Mar 2006 19:36:10 -0800 Subject: [SciPy-user] GCC 3.3 requirement Message-ID: Hi All, Just for the record, I used GCC-3.4.5 on a gentoo box recently to build scipy, and ran into a bug wherein singular value decomposition hangs forever, so I rev'ed back to GCC-3.3.x. I just worked around it, but it might be worth looking into http://bugs.gentoo.org/show_bug.cgi?id=114885 W From rclewley at cam.cornell.edu Sun Mar 12 00:42:34 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Sun, 12 Mar 2006 00:42:34 -0500 (EST) Subject: [SciPy-user] ANN: PyDSTool v0.83 - new release. In-Reply-To: <441375EF.10602@astraw.com> References: <441375EF.10602@astraw.com> Message-ID: Thanks for your suggestions, Andrew. I've created a mailing list at sourceforge and linked to the appropriate info for it and the license (which is BSD, by the way) from the main project page of the wiki. Your question about transfer functions reminded me that I meant to include a piecewise-constant interpolation option to the "InterpTable" class, which I have now done in a slightly updated version of the code at SourceForge. If you see the test script 'interp_pcwc.py' you'll see an example. It will only perform piecewise-linear and piecewise-constant interpolation, nothing fancy like Matlab's c2d. I hope that's useful. You could discretize any PyDSTool trajectory/curve, regular python function, or fine-resolution array data in a similar way. 
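In numpy terms, piecewise-constant ("zero-order hold") interpolation of this kind can be sketched in a few lines. This is an illustrative sketch only, not PyDSTool's actual InterpTable code; the function name and data are made up for the example:

```python
import numpy as np

def pcwc_interp(xs, ys, xnew):
    """Piecewise-constant interpolation: each query point takes the y
    value of the nearest breakpoint to its left.

    Illustrative sketch only; not PyDSTool's InterpTable implementation.
    """
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    # index of the rightmost breakpoint <= each query point
    idx = np.searchsorted(xs, xnew, side='right') - 1
    # clamp queries that fall to the left of xs[0]
    idx = np.clip(idx, 0, len(xs) - 1)
    return ys[idx]

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([10.0, 20.0, 30.0, 40.0])
print(pcwc_interp(xs, ys, np.array([0.5, 1.0, 2.9])))
```

The `side='right'` choice makes each step take effect at its breakpoint, so a query exactly on a breakpoint picks up that breakpoint's value.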
Regards, Rob On Sat, 11 Mar 2006, Andrew Straw wrote: > Hi, PyDSTool looks really interesting. A couple of questions, which I > might suggest should be on your website's first page: > > 1) What is the license of PyDSTool? (I didn't download the source to > find out.) > 2) Is there a mailing list? neither searching for "mailing list" in the > text of your wiki pages nor going to the SourceForge project page "List" > turned up anything... > > And finally, since I'm asking questions, one that almost certainly > doesn't need to go on your website's first page: > > Do you have any c2d-type function that takes a transfer function from > the continuous to discrete domains? > From mfmorss at aep.com Mon Mar 13 07:51:45 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Mon, 13 Mar 2006 07:51:45 -0500 Subject: [SciPy-user] GCC 3.3 requirement In-Reply-To: <84DD7DFA-4F9E-4259-A0BC-6D4963DF9C91@tamu.edu> Message-ID: MFMorss said: Gcc-3.4.5 (including g77) isn't going to go away, and if you don't have it, it's not a huge problem to build it. Robert Hetland said: Well, it is also not all that simple either. You need to bootstrap a compiler build -- if you only have the newest, you are coming from the wrong direction. You may be right about "coming from the wrong direction," but in fact, on our AIX 5.2 machine here, I did first obtain GCC-4.0.2 in binary from UCLA's Public Domain Software Library for AIX (http://aixpdslib.seas.ucla.edu/index.html), then use it to build gcc-3.4.5. The resulting compiler appears to work just fine; using it, I've been able to build some things that I couldn't build with gcc4. Whether this outcome is unusual, I don't know. I sometimes think that everything about AIX is unusual. Mark F.
Morss Principal Analyst, Market Risk American Electric Power

Robert Hetland (via scipy-user-bounces at scipy.net) wrote on 03/10/2006 07:35 PM, Re: [SciPy-user] GCC 3.3 requirement:

On Mar 10, 2006, at 2:15 PM, mfmorss at aep.com wrote: I don't really see any problem with not being able or willing to use more recent versions of gcc, since for the time being, the older version is more stable. Well, the new Intel Macs don't have the older version. I suspect at some point some of the linuxes will do as well. That's my only point. Gcc-3.4.5 (including g77) isn't going to go away, and if you don't have it, it's not a huge problem to build it. Well, it is also not all that simple either. You need to bootstrap a compiler build -- if you only have the newest, you are coming from the wrong direction. I imagine that there will be binary packages for the Intel Mac in short order, but that still requires installing even *more* stuff to get scipy working right. My main point is simply: If you restrict your audience to people who can install compilers, it will be pretty small. If I don't reply, it's because I'm on vacation for a week... -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From jonathan.taylor at stanford.edu Mon Mar 13 15:43:22 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 13 Mar 2006 12:43:22 -0800 Subject: [SciPy-user] step function Message-ID: <4415D96A.5010207@stanford.edu> i was just wondering if there is a simple way to define a step function in scipy, like an ECDF (empirical cumulative distribution function) in statistics.... i have hacked my own version, but it is quite slow....
thanks, jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From robert.kern at gmail.com Mon Mar 13 15:50:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 13 Mar 2006 14:50:37 -0600 Subject: [SciPy-user] step function In-Reply-To: <4415D96A.5010207@stanford.edu> References: <4415D96A.5010207@stanford.edu> Message-ID: <4415DB1D.6070702@gmail.com> Jonathan Taylor wrote: > i was just wondering if there is a simple way to define a step function > in scipy, like an ECDF (empirical cumulative distribution function) in > statistics.... > > i have hacked my own version, but it is quite slow.... scipy.interpolate.interp1d has an option for nearest-neighbor interpolation which is almost what you want except that the provided data points are in the center of the steps instead of at the left or right. What does your version look like? Let's see if we can't improve upon it. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonathan.taylor at stanford.edu Mon Mar 13 17:12:54 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 13 Mar 2006 14:12:54 -0800 Subject: [SciPy-user] step function In-Reply-To: <4415DB1D.6070702@gmail.com> References: <4415D96A.5010207@stanford.edu> <4415DB1D.6070702@gmail.com> Message-ID: <4415EE66.4010008@stanford.edu> believe it or not, i just discovered "searchsorted"... my earlier hack was brutal, using "bisect".... 
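In miniature, the searchsorted approach to an ECDF looks like this. This is a hedged, illustrative sketch with made-up names, not the poster's code:

```python
import numpy as np

def ecdf(data):
    """Return an empirical CDF as a closure built on searchsorted.

    Illustrative sketch only; names are invented for this example.
    """
    xs = np.sort(np.asarray(data, dtype=float))
    n = len(xs)
    def F(t):
        # fraction of observations <= t; searchsorted does the bisection
        return np.searchsorted(xs, t, side='right') / n
    return F

F = ecdf([3.0, 1.0, 2.0, 4.0])
print(F(2.5))  # two of the four observations are <= 2.5
```

With `side='right'`, observations equal to t are counted, which gives the usual right-continuous ECDF.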
this is my current version and it seems to work fine.

class StepFunction:
    '''A basic step function: values at the ends are handled in the
    simplest way possible: everything to the left of x[0] is set to ival;
    everything to the right of x[-1] is set to y[-1].

    >>> from numpy import *
    >>> x = arange(20)
    >>> y = arange(20)
    >>> f = StepFunction(x, y)
    >>> print f(3.2)
    3
    >>> print f([[3.2,4.5],[24,-3.1]])
    [[ 3  4]
     [19  0]]
    '''

    def __init__(self, x, y, ival=0., sorted=False):

        _x = N.array(x)
        _y = N.array(y)

        if _x.shape != _y.shape:
            raise ValueError, 'in StepFunction: x and y do not have the same shape'
        if len(_x.shape) != 1:
            raise ValueError, 'in StepFunction: x and y must be 1-dimensional'

        self.x = N.array([-N.inf] + list(x))
        self.y = N.array([ival] + list(y))

        if not sorted:
            asort = N.argsort(self.x)
            self.x = N.take(self.x, asort)
            self.y = N.take(self.y, asort)
        self.n = self.x.shape[0]

    def __call__(self, time):

        tind = scipy.searchsorted(self.x, time) - 1
        _shape = tind.shape
        if tind.shape is ():
            return self.y[tind]
        else:
            tmp = N.take(self.y, tind)
            tmp.shape = _shape
            return tmp

Robert Kern wrote:
>Jonathan Taylor wrote:
>
>>i was just wondering if there is a simple way to define a step function
>>in scipy, like an ECDF (empirical cumulative distribution function) in
>>statistics....
>>
>>i have hacked my own version, but it is quite slow....
>>
>
>scipy.interpolate.interp1d has an option for nearest-neighbor interpolation
>which is almost what you want except that the provided data points are in the
>center of the steps instead of at the left or right.
>
>What does your version look like? Let's see if we can't improve upon it.
>

--
------------------------------------------------------------------------
I'm part of the Team in Training: please support our efforts for the
Leukemia and Lymphoma Society!

http://www.active.com/donate/tntsvmb/tntsvmbJTaylor

GO TEAM !!!
------------------------------------------------------------------------
Jonathan Taylor Tel: 650.723.9230
Dept. of Statistics Fax: 650.725.8977
Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305

From robert.kern at gmail.com Mon Mar 13 17:41:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 13 Mar 2006 16:41:58 -0600 Subject: [SciPy-user] step function In-Reply-To: <4415EE66.4010008@stanford.edu> References: <4415D96A.5010207@stanford.edu> <4415DB1D.6070702@gmail.com> <4415EE66.4010008@stanford.edu> Message-ID: <4415F536.7040400@gmail.com>

Jonathan Taylor wrote:
> believe it or not, i just discovered "searchsorted"... my earlier hack
> was brutal, using "bisect"....

Ouch! Yeah, that may be why it was slow. :-)

> this is my current version and it seems to work fine.

It looks good. I'll just make a few comments inline.

> class StepFunction:
>     '''A basic step function: values at the ends are handled in the
>     simplest way possible: everything to the left of x[0] is set to ival;
>     everything to the right of x[-1] is set to y[-1].
>
>     >>> from numpy import *
>     >>> x = arange(20)
>     >>> y = arange(20)
>     >>> f = StepFunction(x, y)
>     >>> print f(3.2)
>     3
>     >>> print f([[3.2,4.5],[24,-3.1]])
>     [[ 3  4]
>      [19  0]]
>     '''
>
>     def __init__(self, x, y, ival=0., sorted=False):
>
>         _x = N.array(x)
>         _y = N.array(y)

N.asarray() will be a tad more efficient.

>         if _x.shape != _y.shape:
>             raise ValueError, 'in StepFunction: x and y do not have the same shape'
>         if len(_x.shape) != 1:
>             raise ValueError, 'in StepFunction: x and y must be 1-dimensional'
>
>         self.x = N.array([-N.inf] + list(x))
>         self.y = N.array([ival] + list(y))

You can do this more efficiently by using numpy.concatenate() (or numpy.hstack() which happens to be equivalent in this particular case).
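Concretely, the padding step can be written with concatenate. A small illustration of that suggestion, assuming numpy is imported as np; the variable names here are stand-ins for the `_x`/`_y` in the class above:

```python
import numpy as np

x = np.arange(3.0)             # stand-in for the user's _x
y = np.array([5.0, 6.0, 7.0])  # stand-in for the user's _y
ival = 0.0

# pad with a sentinel directly on arrays, instead of round-tripping
# through Python lists as N.array([-N.inf] + list(x)) does
padded_x = np.concatenate(([-np.inf], x))
padded_y = np.concatenate(([ival], y))
print(padded_x, padded_y)
```

np.hstack would produce the same result here, since all the pieces are 1-d.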
    self.x = N.hstack(([-N.inf], _x))

>         if not sorted:
>             asort = N.argsort(self.x)
>             self.x = N.take(self.x, asort)
>             self.y = N.take(self.y, asort)
>         self.n = self.x.shape[0]
>
>     def __call__(self, time):
>
>         tind = scipy.searchsorted(self.x, time) - 1
>         _shape = tind.shape
>         if tind.shape is ():
>             return self.y[tind]
>         else:
>             tmp = N.take(self.y, tind)
>             tmp.shape = _shape
>             return tmp

self.y[tind] will give the correct answer regardless. In numpy, you can index into an array using another array. The shape of the returned array will have the shape of the index array. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From m.cooper at computer.org Mon Mar 13 20:45:34 2006 From: m.cooper at computer.org (Matthew Cooper) Date: Mon, 13 Mar 2006 17:45:34 -0800 Subject: [SciPy-user] maxentropy Message-ID: <43f499ca0603131745w2095ff31n591e6c7d77e9357d@mail.gmail.com> Hi, I am wondering if there is any additional documentation for the scipy.maxentropy module. I am interested in building a conditional maxent classifier using the module, but am having trouble making sense of it from the source. Any additional information you can provide is greatly appreciated. Thanks, Matt Cooper From m.cooper at computer.org Mon Mar 13 20:46:37 2006 From: m.cooper at computer.org (Matthew Cooper) Date: Mon, 13 Mar 2006 17:46:37 -0800 Subject: [SciPy-user] maxentropy Message-ID: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> Hi Ed, Thanks very much for your reply. I think it helped a lot, but I may be a bit confused about conditional versus unconditional modeling. What I'm doing is similar to text categorization. I observe some text (vector X) and want to determine if a binary label (scalar y) should be applied (y=1) or not (y=0). So, I look at this as using maxent to estimate P(y|X).
In this case, is my sample space simply {0,1} or is it the space from which X is sampled? I had thought this was conditional modelling, but I don't want to explicitly train models for every different X, rather I want to select and weight features from the whole corpus that in turn imply, for each X', some P(y|X'). I assumed it was conditional because I'm not modelling P(y',X'). I have been trying to build models using a sample space with tuples as elements like (X_n,y_n). I then am greedily building a set of feature functions using information gain to select features. It's not quite working, and I'm worried I'm not defining the sample space properly. I will try to figure out how to define the sample space as {0,1} and then redefine the features, but I'd appreciate any advice. Also, if I get this working, I'd be happy to try and help with adding some feature selection examples to your documentation. At the moment, I'm still, as you can tell, figuring out what I'm doing. Thanks, Matt From george_geller at speakeasy.net Mon Mar 13 21:12:13 2006 From: george_geller at speakeasy.net (George Geller) Date: Mon, 13 Mar 2006 18:12:13 -0800 Subject: [SciPy-user] cspline1d_eval In-Reply-To: References: Message-ID: <1142302334.17002.7.camel@localhost.localdomain> I was trying to follow the recipe at:

http://www.scipy.org/Cookbook/Interpolation

from numpy import r_, sin
from scipy.signal import cspline1d, cspline1d_eval

x = r_[0:10]
dx = x[1]-x[0]
newx = r_[-3:13:0.1]  # notice outside the original domain
y = sin(x)
cj = cspline1d(y)
newy = cspline1d_eval(cj, newx, dx=dx, x0=x[0])
from pylab import plot
plot(newx, newy, x, y, 'o')

First, with my installation the first import line has to read:

from scipy import r_, sin

Second, the cspline1d_eval function seems to no longer be part of scipy.signal:

ggeller at hayes:~$ python
Python 2.4.2 (#2, Sep 30 2005, 21:19:01)
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.

>>> from scipy.signal import cspline1d_eval
Traceback (most recent call last):
  File "", line 1, in ?
ImportError: cannot import name cspline1d_eval

Am I doing something really dumb, or is the recipe seriously obsolete? George From oliphant.travis at ieee.org Tue Mar 14 01:50:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 13 Mar 2006 23:50:57 -0700 Subject: [SciPy-user] cspline1d_eval In-Reply-To: <1142302334.17002.7.camel@localhost.localdomain> References: <1142302334.17002.7.camel@localhost.localdomain> Message-ID: <441667D1.2060006@ieee.org> George Geller wrote:

> I was trying to follow the recipe at:
>
> http://www.scipy.org/Cookbook/Interpolation
>
> from numpy import r_, sin
> from scipy.signal import cspline1d, cspline1d_eval
>
> First, with my installation the first import line has to read:
>

It seems to me that you don't have the latest scipy installed. The recipe only works with numpy and scipy (*not* with the SciPy based on Numeric). What version of scipy do you have?

> Am I doing something really dumb, or is the recipe seriously obsolete?
>

It appears that you have an older version of scipy, and the recipe only works in newer installations. -Travis From schofield at ftw.at Tue Mar 14 02:58:18 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 14 Mar 2006 08:58:18 +0100 Subject: [SciPy-user] maxentropy In-Reply-To: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> Message-ID: <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> Hi Matt, I've just read a paper on maxent text classification (by Nigam, Lafferty and McCallum, 1999) to try to understand how you'd do this. In this paper the model was p(y|x), like you were intending. So the sample space is {0,1}. They express features as functions f_i(x, y) of both the class and the text. Fine ...
but now I can understand why you don't want to explicitly train models for each x, which might not even be possible; instead you want a model with one set of features {f_i, i=1,...,m} and one set of corresponding parameters. Hmmm ... so a conditional framework is trickier than I thought... ... And I've now just re-read Malouf's (2002) paper. I think fitting a conditional model just requires proper handling of the normalization constant. So I'll thrash out some code in the next few days and get back to you :) -- Ed On 14/03/2006, at 2:46 AM, Matthew Cooper wrote: > Hi Ed, > > Thanks very much for your reply. I think it helped a lot, but I may > be a bit confused about conditional versus unconditional modeling. > > What I'm doing is similar to text categorization. I observe some text > (vector X) and want to determine if a binary label (scalar y) should > be applied (y=1) or not (y=0). So, I look at this as using maxent to > estimate P(y|X). In this case, is my sample space simply {0,1} or is > it the space from which X is sampled? I had thought this was > conditional modelling, but I don't want to explicitly train models for > every different X, rather I want to select and weight features from > the whole corpus that in turn imply, for each X', some P(y|X'). I > assumed it was conditional because I'm not modelling P(y',X'). > > I have been trying to build models using a sample space with tuples as > elements like (X_n,y_n). I then am greedily building a set of feature > functions using information gain to select features. It's not quite > working, and I'm worried I'm not defining the sample space properly. > I will try to figure out how to define the sample space as {0,1} and > then redefine the features, but I'd appreciate any advice. Also, if I > get this working, I'd be happy to try and help with adding some > feature selection examples to your documentation. At the moment, I'm > still, as you can tell, figuring out what I'm doing. 
> > Thanks, > Matt > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From oliphant.travis at ieee.org Tue Mar 14 04:05:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 14 Mar 2006 02:05:43 -0700 Subject: [SciPy-user] [ANN] NumPy 0.9.6 released Message-ID: <44168767.80803@ieee.org> This post is to announce the release of NumPy 0.9.6, which fixes some important bugs and has several speed improvements. NumPy is a multi-dimensional array package for Python that allows rapid high-level array computing with Python. It is the successor to both Numeric and Numarray. More information at http://numeric.scipy.org The release notes are attached: Best regards, NumPy Developers -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: release-notes-0.9.6 URL: From matthew.brett at gmail.com Tue Mar 14 05:43:24 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 14 Mar 2006 10:43:24 +0000 Subject: [SciPy-user] scipy.test problem In-Reply-To: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> Message-ID: <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> Hi, > > I work on a dual processor Xeon box, and try to install scipy. After > > various unsuccessful attempts, I now compiled BLAS, LAPACK ATLAS and > > FFTW3.1 from scratch, then installed numpy-0.9.5 and then installed > > scipy-0.4.6 under python 2.4.2.
The installation went fine but after > > > > import scipy > > scipy.test(level=1) > > > > I get the following error message: > > > > ====================================================================== > > FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) > > ---------------------------------------------------------------------- Sorry to press the team - but a) Has anyone got this test to pass on an Intel 64bit system? If so, would they mind forwarding the details of their setup? b) Any suggestions of where to start in debugging the problem? The following replicates the errors Hanno and I are getting: python -c 'import scipy.linalg ; scipy.linalg.test()' Thanks a lot, Matthew From wbaxter at gmail.com Tue Mar 14 06:52:20 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 14 Mar 2006 20:52:20 +0900 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <44168767.80803@ieee.org> References: <44168767.80803@ieee.org> Message-ID: Just wondering, does this one also require an update to scipy? And in general do numpy updates always require an update to scipy, too? Or is it only when the numpy C API interface changes? --bb On 3/14/06, Travis Oliphant wrote: > > This post is to announce the release of NumPy 0.9.6 which fixes some > important bugs and has several speed improvments. > > NumPy is a multi-dimensional array-package for Python that allows rapid > high-level array computing with Python. It is successor to both Numeric > and Numarray. More information at http://numeric.scipy.org > > The release notes are attached: > > Best regards, > > NumPy Developers > > > > > > > NumPy 0.9.6 is a bug-fix and optimization release with a > few new features: > > > New features (and changes): > > - bigndarray removed and support for Python2.5 ssize_t added giving > full support in Python2.5 to very-large arrays on 64-bit systems. 
> > - Strides can be set more arbitrarily from Python (and checking is done > to make sure memory won't be violated). > > - __array_finalize__ is now called for every array sub-class creation. > > - kron and repmat functions added > > - .round() method added for arrays > > - rint, square, reciprocal, and ones_like ufuncs added. > > - keyword arguments now possible for methods taking a single 'axis' > argument > > - Swig and Pyrex examples added in doc/swig and doc/pyrex > > - NumPy builds out of the box for cygwin > > - Different unit testing naming schemes are now supported. > > - memusage in numpy.distutils works for NT platforms > > - numpy.lib.math functions now take vectors > > - Most functions in oldnumeric now return input class where possible > > > Speed ups: > > - x**n for integer n significantly improved > > - array() much faster > > - .fill() method is much faster > > > Other fixes: > > - Output arrays to ufuncs works better. > > - Several ma (Masked Array) fixes. > > - umath code generation improved > > - many fixes to optimized dot function (fixes bugs in > matrix-sub-class multiply) > > - scalartype fixes > > - improvements to poly1d > > - f2py fixed to handle character arrays in common blocks > > - Scalar arithmetic improved to handle mixed-mode operation. > > - Make sure Python intYY types correspond exactly with C PyArray_INTYY > > > > > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadavh at visionsense.com Tue Mar 14 09:07:47 2006 From: nadavh at visionsense.com (Nadav Horesh) Date: Tue, 14 Mar 2006 16:07:47 +0200 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8EF52@exchange2k.envision.co.il> There is a compatibility problem, at least with the last formal release of scipy.
Should we checkout and compile scipy from svn? Nadav. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net on behalf of Bill Baxter Sent: Tue 14-Mar-06 13:52 To: numpy-discussion; SciPy Users List Cc: Subject: Re: [Numpy-discussion] [ANN] NumPy 0.9.6 released Just wondering, does this one also require an update to scipy? And in general do numpy updates always require an update to scipy, too? Or is it only when the numpy C API interface changes? --bb On 3/14/06, Travis Oliphant wrote: > > This post is to announce the release of NumPy 0.9.6 which fixes some > important bugs and has several speed improvments. > > NumPy is a multi-dimensional array-package for Python that allows rapid > high-level array computing with Python. It is successor to both Numeric > and Numarray. More information at http://numeric.scipy.org > > The release notes are attached: > > Best regards, > > NumPy Developers > > > > > > > NumPy 0.9.6 is a bug-fix and optimization release with a > few new features: > > > New features (and changes): > > - bigndarray removed and support for Python2.5 ssize_t added giving > full support in Python2.5 to very-large arrays on 64-bit systems. > > - Strides can be set more arbitrarily from Python (and checking is done > to make sure memory won't be violated). > > - __array_finalize__ is now called for every array sub-class creation. > > - kron and repmat functions added > > - .round() method added for arrays > > - rint, square, reciprocal, and ones_like ufuncs added. > > - keyword arguments now possible for methods taking a single 'axis' > argument > > - Swig and Pyrex examples added in doc/swig and doc/pyrex > > - NumPy builds out of the box for cygwin > > - Different unit testing naming schemes are now supported. 
> > - memusage in numpy.distutils works for NT platforms > > - numpy.lib.math functions now take vectors > > - Most functions in oldnumeric now return input class where possible > > > Speed ups: > > - x**n for integer n significantly improved > > - array() much faster > > - .fill() method is much faster > > > Other fixes: > > - Output arrays to ufuncs works better. > > - Several ma (Masked Array) fixes. > > - umath code generation improved > > - many fixes to optimized dot function (fixes bugs in > matrix-sub-class multiply) > > - scalartype fixes > > - improvements to poly1d > > - f2py fixed to handle character arrays in common blocks > > - Scalar arithmetic improved to handle mixed-mode operation. > > - Make sure Python intYY types correspond exactly with C PyArray_INTYY > > > > > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 From schofield at ftw.at Tue Mar 14 10:54:32 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 14 Mar 2006 16:54:32 +0100 Subject: [SciPy-user] New SciPy release? In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8EF52@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E8C8EF52@exchange2k.envision.co.il> Message-ID: <4416E738.5020304@ftw.at> Bill Baxter wrote: > Just wondering, does this one also require an update to scipy? > And in general do numpy updates always require an update to scipy, too? > Or is it only when the numpy C API interface changes? Nadav Horesh wrote: > There is a compatibility problem, at least with the last formal release of scipy. Should we check out and compile scipy from svn? > I think a recompilation of the current SciPy (0.4.6) against the new NumPy version would also be fine, but I'm not sure about this. I could make a new SciPy release this weekend if people are happy with that.
The main need is for NumPy compatibility, but we also have a new image package, new sparse matrix functionality, and quite a lot of fixes... -- Ed From philippe.fontaine at synchrotron-soleil.fr Tue Mar 14 12:03:47 2006 From: philippe.fontaine at synchrotron-soleil.fr (FONTAINE Philipe) Date: Tue, 14 Mar 2006 18:03:47 +0100 Subject: [SciPy-user] installation problems Message-ID: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr> I have installed scipy with all the needed packages. When I load it from Python, it gives me the following messages: >>> from scipy import * Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/scipy/signal/__init__.py", line 9, in ? from bsplines import * File "/usr/local/lib/python2.3/site-packages/scipy/signal/bsplines.py", line 3, in ? import scipy.special File "/usr/local/lib/python2.3/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 66, in ? from scipy.linalg import eig File "/usr/local/lib/python2.3/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.3/site-packages/scipy/linalg/basic.py", line 228, in ? import decomp File "/usr/local/lib/python2.3/site-packages/scipy/linalg/decomp.py", line 18, in ? from blas import get_blas_funcs File "/usr/local/lib/python2.3/site-packages/scipy/linalg/blas.py", line 15, in ? import fblas ImportError: /usr/local/lib/python2.3/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_ Did someone have the same problem? Does anyone know how to solve it? Many Thanks Philippe From yood at hotmail.com Tue Mar 14 12:12:04 2006 From: yood at hotmail.com (Mark W.) Date: Tue, 14 Mar 2006 09:12:04 -0800 Subject: [SciPy-user] 64 bit Address Space Limitations Message-ID: Hi. 
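The `undefined symbol: srotmg_` failure in Philippe's traceback above means the BLAS shared library that SciPy was linked against is missing that Fortran routine. A stdlib-only sketch for probing whether a library exports a given symbol (what `find_library` returns depends entirely on the machine; it may be `None`):

```python
import ctypes
import ctypes.util

def has_symbol(libpath, name):
    """Return True if `name` resolves in the shared library at `libpath`."""
    try:
        return hasattr(ctypes.CDLL(libpath), name)
    except OSError:
        return False  # library could not be loaded at all

# Locate a system BLAS, if one is installed (find_library may return None).
blas = ctypes.util.find_library("blas")
if blas is None:
    print("no system BLAS found")
else:
    # srotmg_ is the Fortran-mangled name of the level-1 BLAS routine
    # missing from the incomplete library in the traceback above.
    print(blas, "exports srotmg_:", has_symbol(blas, "srotmg_"))
```

If the probe reports `False`, rebuilding a complete BLAS/LAPACK from source, as Hanno suggests below, is the usual fix.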
We are converting our systems to a 64-bit platform to hopefully take advantage of larger address spaces for arrays and such. Can anyone tell me - or point me to documentation which tells - how much address space for an array I could hope to get? We have a memory error on the 32-bit machines when we try to load a large array and we're hoping this will get around that 2 Gig (or less) limit. Thanks for any insights you might have, Mark From klemm at phys.ethz.ch Tue Mar 14 12:12:24 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 14 Mar 2006 18:12:24 +0100 Subject: [SciPy-user] installation problems In-Reply-To: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr> References: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr> Message-ID: Philippe, I had a similar problem. When you are installing on a Red Hat distribution, it is highly likely that your BLAS or another numerical library is incomplete. That seemed to be the problem on the Red Hat distribution I have been using. Probably you then have to compile the numerical libraries yourself (that's at least what I did). HTH, Hanno On Mar 14, 2006, at 6:03 PM, FONTAINE Philipe wrote: > I have installed scipy with all the needed packages > When I load it from Python, it gives me the following messages: > >>>> from scipy import * > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.3/site- > packages/scipy/signal/__init__.py", line 9, in ? > from bsplines import * > File "/usr/local/lib/python2.3/site- > packages/scipy/signal/bsplines.py", line 3, in ? > import scipy.special > File "/usr/local/lib/python2.3/site- > packages/scipy/special/__init__.py", line 10, in ? > import orthogonal > File "/usr/local/lib/python2.3/site- > packages/scipy/special/orthogonal.py", line 66, in ? > from scipy.linalg import eig > File "/usr/local/lib/python2.3/site- > packages/scipy/linalg/__init__.py", line 8, in ? 
> from basic import * > File "/usr/local/lib/python2.3/site-packages/scipy/linalg/basic.py", > line 228, in ? > import decomp > File "/usr/local/lib/python2.3/site-packages/scipy/linalg/decomp.py", > line 18, in ? > from blas import get_blas_funcs > File "/usr/local/lib/python2.3/site-packages/scipy/linalg/blas.py", > line 15, in ? > import fblas > ImportError: /usr/local/lib/python2.3/site- > packages/scipy/linalg/fblas.so: undefined symbol: srotmg_ > > did someone have the same problem? > Anyone knows the how to solve it > > Many Thanks > > Philippe > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- Hanno Klemm klemm at itp.phys.ethz.ch ETH Zurich tel: +41-1-6332580 Institute for theoretical physics mobile: +41-79-4500428 http://www.mth.kcl.ac.uk/~klemm From oliphant.travis at ieee.org Tue Mar 14 13:32:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 14 Mar 2006 11:32:03 -0700 Subject: [SciPy-user] 64 bit Address Space Limitations In-Reply-To: References: Message-ID: <44170C23.4040807@ieee.org> Mark W. wrote: > Hi. We are converting our systems to a 64-bit platform to hopefully take > advantage of larger address spaces for arrays and such. Can anyone tell me - > or point me to documentation which tells - how much address space for an > array I could hope to get? We have a memory error on the 32-bit machines > when we try to load a large array and we're hoping this will get around that > 2 Gig (or less) limit. > This is finally possible using Python 2.5 and numpy. But, you need to use Python 2.5 which is only available as an SVN check-out and still has a few issues. Python 2.5 should be available as a release in the summer. 
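Whether a given interpreter can address arrays past the 2 GB mark that Mark asks about depends first on the pointer width of the Python binary itself; a quick stdlib check (a sketch for a modern Python, which exposes `sys.maxsize`):

```python
import struct
import sys

# Pointer width of this Python build: a 32-bit binary caps every process
# at a roughly 2-3 GB address space no matter how much RAM is installed.
bits = struct.calcsize("P") * 8
print("%d-bit Python build" % bits)

# On a 64-bit build sys.maxsize is 2**63 - 1, so object sizes and array
# indices can pass the old 2**31 (2 GiB) barrier.
print("can address past 2 GiB:", sys.maxsize > 2**31)
```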
NumPy allows creation of larger arrays even with Python 2.4 but there will be some errors in some uses of slicing, the buffer interface, and memory-mapped arrays because of inherent limitations to Python that were only recently removed. -Travis From oliphant.travis at ieee.org Tue Mar 14 13:33:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 14 Mar 2006 11:33:34 -0700 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: References: <44168767.80803@ieee.org> Message-ID: <44170C7E.6070208@ieee.org> Bill Baxter wrote: > Just wondering, does this one also require an update to scipy? > And in general do numpy updates always require an update to scipy, too? > Or is it only when the numpy C API interface changes? It's only when the C-API interface changes that this is necessary. Pre-1.0 it is likely to happen at each release. After that it is less likely to happen. -Travis From Paul.Ray at nrl.navy.mil Tue Mar 14 14:30:40 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 14 Mar 2006 14:30:40 -0500 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <44170C7E.6070208@ieee.org> References: <44168767.80803@ieee.org> <44170C7E.6070208@ieee.org> Message-ID: <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> On Mar 14, 2006, at 1:33 PM, Travis Oliphant wrote: >> Just wondering, does this one also require an update to scipy? >> And in general do numpy updates always require an update to scipy, >> too? >> Or is it only when the numpy C API interface changes? > It's only when the C-API interface changes that this is necessary. > Pre-1.0 it is likely to happen at each release. > > After that it is less likely to happen. Pre-1.0, I think it is fine for the version numbering to be a free-for-all, as it has been. There have been many times, it seems, where the latest numpy didn't work with the latest scipy and stuff like that. 
After 1.0, I would like to encourage some discipline with regards to numpy version numbering. From 1.0, the API should NOT change at all until 1.1. Releases like 1.0.1, 1.0.2, ... should be bug fixes only (and possibly small additions, like new methods that weren't there before). But if any change will break code that USES numpy like SciPy, Matplotlib, PyFITS, etc... it should require a minor version increment to 1.1+. Is that the basic plan of the numpy developers? I think that many people with codes that use Numeric or numarray are awaiting the numpy 1.0 release as a sign that it is stable and ready for prime time, before converting their codes. Cheers, -- Paul From robert.kern at gmail.com Tue Mar 14 14:40:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 14 Mar 2006 13:40:32 -0600 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> References: <44168767.80803@ieee.org> <44170C7E.6070208@ieee.org> <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> Message-ID: <44171C30.1060505@gmail.com> Paul Ray wrote: > On Mar 14, 2006, at 1:33 PM, Travis Oliphant wrote: > >>>Just wondering, does this one also require an update to scipy? >>>And in general do numpy updates always require an update to scipy, >>>too? >>>Or is it only when the numpy C API interface changes? >> >>It's only when the C-API interface changes that this is necessary. >>Pre-1.0 it is likely to happen at each release. >> >>After that it is less likely to happen. > > Pre-1.0, I think it is fine for the version numbering to be a free- > for-all, as it has been. There have been many times, it seems, where > the latest numpy didn't work with the latest scipy and stuff like that. > > After 1.0, I would like to encourage some discipline with regards to > numpy version numbering. From 1.0, the API should NOT change at all > until 1.1. Releases like 1.0.1, 1.0.2, ... 
should be bug fixes only > (and possibly small additions, like new methods that weren't there > before). But if any change will break code that USES numpy like > SciPy, Matplotlib, PyFITS, etc... it should require a minor version > increment to 1.1+. Is that the basic plan of the numpy developers? Yes. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Tue Mar 14 15:24:00 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 14 Mar 2006 12:24:00 -0800 Subject: [SciPy-user] 64 bit Address Space Limitations In-Reply-To: <44170C23.4040807@ieee.org> References: <44170C23.4040807@ieee.org> Message-ID: <44172660.4060106@astraw.com> Travis Oliphant wrote: >Mark W. wrote: > > >>Hi. We are converting our systems to a 64-bit platform to hopefully take >>advantage of larger address spaces for arrays and such. Can anyone tell me - >>or point me to documentation which tells - how much address space for an >>array I could hope to get? We have a memory error on the 32-bit machines >>when we try to load a large array and we're hoping this will get around that >>2 Gig (or less) limit. >> >> >> >This is finally possible using Python 2.5 and numpy. But, you need to >use Python 2.5 which is only available as an SVN check-out and still has >a few issues. Python 2.5 should be available as a release in the summer. > >NumPy allows creation of larger arrays even with Python 2.4 but there >will be some errors in some uses of slicing, the buffer interface, and >memory-mapped arrays because of inherent limitations to Python that were >only recently removed. > > While what Travis says is correct, even with older Pythons (such as 2.3) you can have processes with > 2GB memory, even if any individual array doesn't go into the realm Travis mentions. 
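The release discipline Paul proposes and Robert confirms amounts to a simple rule on version tuples; a minimal sketch of how a downstream package could express it (the function names here are hypothetical, not a NumPy API):

```python
def parse(version):
    """'1.0.2' -> (1, 0, 2); short versions like '1.0' pad with zeros."""
    parts = [int(p) for p in version.split(".")]
    return tuple(parts + [0] * (3 - len(parts)))

def needs_recompile(built_against, installed):
    """Under the rule discussed above, 1.0.x releases are bug-fix only,
    so extension modules need rebuilding only when (major, minor) change."""
    return parse(built_against)[:2] != parse(installed)[:2]

print(needs_recompile("1.0", "1.0.2"))   # patch bump only -> False
print(needs_recompile("1.0.2", "1.1"))   # minor bump -> True
```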
This was a major reason for me to move to a 64 bit machine. (Loading a few 1 GB arrays.) My amd64 is working quite well in full 64-bit mode with debian sarge's default python2.3 and the latest numpy, scipy, etc. and I can easily have individual processes > 2GB. Also, I've found this page helpful for information about this stuff in linux: http://www.spack.org/wiki/LinuxRamLimits Cheers! Andrew From oliphant at ee.byu.edu Tue Mar 14 15:25:43 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 14 Mar 2006 13:25:43 -0700 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> References: <44168767.80803@ieee.org> <44170C7E.6070208@ieee.org> <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> Message-ID: <441726C7.3020207@ee.byu.edu> Paul Ray wrote: >On Mar 14, 2006, at 1:33 PM, Travis Oliphant wrote: > > > >>>Just wondering, does this one also require an update to scipy? >>>And in general do numpy updates always require an update to scipy, >>>too? >>>Or is it only when the numpy C API interface changes? >>> >>> >>It's only when the C-API interface changes that this is necessary. >>Pre-1.0 it is likely to happen at each release. >> >>After that it is less likely to happen. >> >> > >Pre-1.0, I think it is fine for the version numbering to be a free- >for-all, as it has been. There have been many times, it seems, where >the latest numpy didn't work with the latest scipy and stuff like that. > > The current SVN of numpy and the current SVN of scipy usually work with each other (perhaps with only a day lag). >After 1.0, I would like to encourage some discipline with regards to >numpy version numbering. From 1.0, the API should NOT change at all >until 1.1. > This is the plan and it is why we are being slow with getting 1.0 out. But, we *need* people to try NumPy on their codes so that we can determine whether or not API additions are needed. 
Without this testing we are just guessing based on the developers' own codes, which do not cover the breadth. >I think that many people with codes that use Numeric or numarray are >awaiting the numpy 1.0 release as a sign that it is stable and ready >for prime time, before converting their codes. > > We need some of these people to convert earlier than that, though, so we can make sure that 1.0 really is ready for prime time. I think it's very close already or the version number wouldn't be so high. We are just waiting for more people to start using it and report any issues that they have before going to 1.0 (I'd also like scalar math to be implemented pre 1.0 as well in case that requires any C-API additions). Thanks for the feedback. What you described will be the basic behavior once 1.0 is released. Note that right now the only changes that usually need to be made are a re-compile. I know this can still be annoying when you have a deep code stack. -Travis From Paul.Ray at nrl.navy.mil Tue Mar 14 15:38:03 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 14 Mar 2006 15:38:03 -0500 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <441726C7.3020207@ee.byu.edu> References: <44168767.80803@ieee.org> <44170C7E.6070208@ieee.org> <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> <441726C7.3020207@ee.byu.edu> Message-ID: <64DF7D92-839F-4B49-AC08-D757E4753580@nrl.navy.mil> On Mar 14, 2006, at 3:25 PM, Travis Oliphant wrote: >> I think that many people with codes that use Numeric or numarray are >> awaiting the numpy 1.0 release as a sign that it is stable and ready >> for prime time, before converting their codes. >> > We need some of these people to convert earlier than that, though, > so we > can make sure that 1.0 really is ready for prime time. I think it's > very close already or the version number wouldn't be so high. 
We are > just waiting for more people to start using it and report any issues > that they have before going to 1.0 (I'd also like scalar math to be > implemented pre 1.0 as well in case that requires any C-API > additions). I agree. Lots of people do seem to be trying out numpy. I use it for all new code I develop, and I have converted over a bunch of older SciPy stuff that uses both the python and C APIs, including ppgplot for plotting. I also heard from the PyFITS developers that an alpha release based on numpy is coming very soon. Cheers, -- Paul From wbaxter at gmail.com Tue Mar 14 20:21:16 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 15 Mar 2006 10:21:16 +0900 Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released In-Reply-To: <441726C7.3020207@ee.byu.edu> References: <44168767.80803@ieee.org> <44170C7E.6070208@ieee.org> <51CE8530-E9F6-4AE1-A1CF-9FF6C6FEB25F@nrl.navy.mil> <441726C7.3020207@ee.byu.edu> Message-ID: On 3/15/06, Travis Oliphant wrote: > > Paul Ray wrote: > > >On Mar 14, 2006, at 1:33 PM, Travis Oliphant wrote: > > > > > > > >I think that many people with codes that use Numeric or numarray are > >awaiting the numpy 1.0 release as a sign that it is stable and ready > >for prime time, before converting their codes. > > > > > We need some of these people to convert earlier than that, though, so we > can make sure that 1.0 really is ready for prime time. I think it's > very close already or the version number wouldn't be so high. This is probably obvious and in the plans already, but if it's true that many people are waiting for 1.0 before they give it a try, it may make sense to have a "1.0-beta" series of releases, or "1.0-release candidates" leading up to the final release of 1.0. This may pull a few more folks before the 1.0 API is frozen forever. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at atr.jp Tue Mar 14 21:39:20 2006 From: cournape at atr.jp (Cournapeau David) Date: Wed, 15 Mar 2006 11:39:20 +0900 Subject: [SciPy-user] [Scipy Users] Weave documentation not on new scipy.org? Message-ID: <1142390360.4907.1.camel@localhost.localdomain> Hi, I wanted to know if there is any documentation for weave on the new scipy.org webpage? I remember having seen a tutorial some time ago on the old page, but the link for weave leads to a non-existent page on scipy.org. A search on the scipy.org website does not give any results either. Thank you, David From bgoli at sun.ac.za Wed Mar 15 01:39:48 2006 From: bgoli at sun.ac.za (Brett Olivier) Date: Wed, 15 Mar 2006 08:39:48 +0200 Subject: [SciPy-user] scipy.test problem In-Reply-To: <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> Message-ID: <200603150839.48570.bgoli@sun.ac.za> Hi, I've built Scipy/numpy on an Intel P4 based 64bit system and only get one test failure (different from yours though). Build environment: SVN scipy version: '0.4.7.1677' SVN numpy version: '0.9.6.2226' with ATLAS 3.7.11/LAPACK built with GCC 3.4.3 Perhaps you should try with the latest SVN versions of scipy/numpy and see what you get? Brett ====================================================================== FAIL: W.II.A.0. Print ROUND with only one digit. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/stats/tests/test_stats.py", line 85, in check_rounding0 assert_equal(y,i+1) File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: 1 ACTUAL: 0.0 ---------------------------------------------------------------------- Ran 1123 tests in 43.839s On Tuesday 14 March 2006 12:43, Matthew Brett wrote: > Hi, > > > > I work on a dual processor Xeon box, and try to install scipy. After > > > various unsuccessful attempts, I now compiled BLAS, LAPACK ATLAS and > > > FFTW3.1 from scratch, then installed numpy-0.9.5 and then installed > > > scipy-0.4.6 under python 2.4.2. The installation went fine but after > > > > > > import scipy > > > scipy.test(level=1) > > > > > > I get the following error message: > > > > > > ====================================================================== > > > FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) > > > ---------------------------------------------------------------------- > > Sorry to press the team - but > > a) Has anyone got this test to pass on an Intel 64bit system? If so, > would they mind forwarding the details of their setup? > b) Any suggestions of where to start in debugging the problem? > > The following replicates the errors Hanno and I are getting: > > python -c 'import scipy.linalg ; scipy.linalg.test()' > > Thanks a lot, > > Matthew > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Brett G. 
Olivier PhD Triple-J Group for Molecular Cell Physiology Stellenbosch University From nwagner at mecha.uni-stuttgart.de Wed Mar 15 02:55:37 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 15 Mar 2006 08:55:37 +0100 Subject: [SciPy-user] scipy.test problem In-Reply-To: <200603150839.48570.bgoli@sun.ac.za> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> <200603150839.48570.bgoli@sun.ac.za> Message-ID: <4417C879.6020007@mecha.uni-stuttgart.de> Brett Olivier wrote: > Hi > > I've built Scipy/numpy on an Intel P4 based 64bit system and only get one test > failure (different from yours though). > > Build environment : > SVN scipy version: '0.4.7.1677' > SVN numpy version: '0.9.6.2226' > with ATLAS 3.7.11/LAPACK built with GCC 3.4.3 > > Perhaps you should try with the latest SVN versions of scipy/numpy and see > what you get? > > Brett > > ====================================================================== > FAIL: W.II.A.0. Print ROUND with only one digit. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib64/python2.4/site-packages/scipy/stats/tests/test_stats.py", > line 85, in check_rounding0 > assert_equal(y,i+1) > File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 128, > in assert_equal > assert desired == actual, msg > AssertionError: > Items are not equal: > DESIRED: 1 > ACTUAL: 0.0 > > ---------------------------------------------------------------------- > Ran 1123 tests in 43.839s > > I've built scipy/numpy on an AMD based 64bit system. I can reproduce the error reported by Tim. 
>>> scipy.__version__ '0.4.7.1707' >>> numpy.__version__ '0.9.7.2243' ====================================================================== FAIL: check_lu (scipy.linalg.tests.test_decomp.test_lu_solve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 138, in check_lu assert_array_equal(x1,x2) File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 204, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 100.0%): Array 1: [-0.2788866064997962 3.5000435636821954 0.2273024982361812 -0.4566513297109564 2.1391760680488212 -0.132995040912662... Array 2: [ 0.1037139891362202 -0.6723715892452233 0.1226811680403352 -0.3152497111751822 -0.1347991093520258 -0.711241441748883... ---------------------------------------------------------------------- Ran 1109 tests in 1.909s FAILED (failures=1) Nils > On Tuesday 14 March 2006 12:43, Matthew Brett wrote: > >> Hi, >> >> >>>> I work on a dual processor Xeon box, and try to install scipy. After >>>> various unsuccessful attempts, I now compiled BLAS, LAPACK ATLAS and >>>> FFTW3.1 from scratch, then installed numpy-0.9.5 and then installed >>>> scipy-0.4.6 under python 2.4.2. The installation went fine but after >>>> >>>> import scipy >>>> scipy.test(level=1) >>>> >>>> I get the following error message: >>>> >>>> ====================================================================== >>>> FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) >>>> ---------------------------------------------------------------------- >>>> >> Sorry to press the team - but >> >> a) Has anyone got this test to pass on an Intel 64bit system? If so, >> would they mind forwarding the details of their setup? >> b) Any suggestions of where to start in debugging the problem? 
>> >> The following replicates the errors Hanno and I are getting: >> >> python -c 'import scipy.linalg ; scipy.linalg.test()' >> >> Thanks a lot, >> >> Matthew >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > From vinicius.lobosco at gmail.com Wed Mar 15 07:04:48 2006 From: vinicius.lobosco at gmail.com (Vinicius Lobosco) Date: Wed, 15 Mar 2006 13:04:48 +0100 Subject: [SciPy-user] MemoryError with xplt on nightly compiled scipy In-Reply-To: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr> References: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr> Message-ID: <200603151304.48347.vinicius.lobosco@gmail.com> Dear folks! I ran the Travis regression code (p 32 from the Tutorial) with perfect results on the scipy available for Ubuntu and Mandriva. However, when I decided to use the nightly compiled version I got this error: "LinearRegression2.py", line 18, in ? plot(xi, zi, 'x', xi2, yi2) File "/usr/lib/python2.4/site-packages/scipy/xplt/Mplot.py", line 588, in plot gist.plg(y,x,type=thetype,color=thecolor,marker=themarker,marks=tomark,msize=msize,width=linewidth) I've followed the linux installation using ATLAS. Any suggestion? Thanks! Best regards, Vinicius -- ------------------------- Vinicius Lobosco, PhD Process Intelligence www.nephila.se/paperplat (in construction) www.paperplat.com +46 8 612 7803 +46 73 925 8476 (cell phone) Björnnäsvägen 21 SE-113 47 Stockholm, Sweden From jelle.feringa at ezct.net Wed Mar 15 08:08:23 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Wed, 15 Mar 2006 14:08:23 +0100 Subject: [SciPy-user] scipy 0.4.6 / numpy 0.9.5 | overwriting functions Message-ID: <025901c64831$8b385500$0e01a8c0@JELLE> Dear Group, I'm having some issues running scipy 0.4.6 / numpy 0.9.5, which are hard to comprehend for me. 
Perhaps someone is able to inform me what is going wrong here. I've tried installing numpy 0.9.6, and scipy failed to load when 0.9.6 was installed. I reverted to 0.9.5 and am getting the following errors (see end of mail). Am I making a serious mistake here setting up scipy/numpy or is this the current state of affairs? Also I've been experimenting with the Delaunay module. I've been able to compile it using Mike Fletcher's setup: http://www.vrplumber.com/programming/mstoolkit/ It's really a wonderful addition to scipy and compiles perfectly out of the box. Thanks so much for the effort. Cheers, jelle No scipy-style subpackage 'scipy.test' found in c:\Python24\lib\site-packages\scipy. Ignoring. No scipy-style subpackage 'scipy.base' found in c:\Python24\lib\site-packages\scipy. Ignoring. Overwriting utils= from c:\Python24\lib\site-packages\scipy\utils\__init__.pyc (was from c:\Python24\lib\site-packages\scipy\base\utils.pyc) Overwriting info= from scipy.utils.helpmod (was from scipy.misc.helpmod) Overwriting factorial= from scipy.utils.common (was from scipy.misc.common) Overwriting factorial2= from scipy.utils.common (was from scipy.misc.common) Overwriting factorialk= from scipy.utils.common (was from scipy.misc.common) Overwriting comb= from scipy.utils.common (was from scipy.misc.common) Overwriting who= from scipy.utils.common (was from scipy.misc.common) Overwriting lena= from scipy.utils.common (was from scipy.misc.common) Overwriting central_diff_weights= from scipy.utils.common (was from scipy.misc.common) Overwriting derivative= from scipy.utils.common (was from scipy.misc.common) Overwriting pade= from scipy.utils.common (was from scipy.misc.common) Overwriting info= from scipy.misc.helpmod (was from scipy.utils.helpmod) Overwriting factorial= from scipy.misc.common (was from scipy.utils.common) Overwriting factorial2= from scipy.misc.common (was from scipy.utils.common) Overwriting factorialk= from scipy.misc.common (was from scipy.utils.common) 
Overwriting comb= from scipy.misc.common (was from scipy.utils.common) Overwriting who= from scipy.misc.common (was from scipy.utils.common) Overwriting lena= from scipy.misc.common (was from scipy.utils.common) Overwriting central_diff_weights= from scipy.misc.common (was from scipy.utils.common) Overwriting derivative= from scipy.misc.common (was from scipy.utils.common) Overwriting pade= from scipy.misc.common (was from scipy.utils.common) Overwriting test= from c:\Python24\lib\site-packages\scipy\test\__init__.pyc (was from scipy) Overwriting ScipyTest= from scipy.test.testing (was from numpy.testing.numpytest) Overwriting fft= from scipy.basic.fft_lite (was from numpy.dft.fftpack) Overwriting ifft= from scipy.basic.fft_lite (was from numpy.dft.fftpack) Overwriting rand= (was ) Overwriting randn= (was ) Overwriting random= from c:\Python24\lib\site-packages\scipy\basic\random.pyc (was from c:\Python24\lib\site-packages\numpy\random\__init__.pyc) Overwriting linalg= from c:\Python24\lib\site-packages\scipy\basic\linalg.pyc (was from c:\Python24\lib\site-packages\numpy\linalg\__init__.pyc) Overwriting disp= from scipy.base.function_base (was from numpy.lib.function_base) Overwriting all= from scipy.base.oldnumeric (was from numpy.core.oldnumeric) Overwriting atleast_2d= from scipy.base.shape_base (was from numpy.lib.shape_base) Overwriting ptp= from scipy.base.function_base (was from numpy.core.oldnumeric) Overwriting unicode_= from __builtin__ (was from __builtin__) Overwriting string= from __builtin__ (was from __builtin__) Overwriting float96= from __builtin__ (was from __builtin__) Overwriting void= from __builtin__ (was from __builtin__) Overwriting unicode0= from __builtin__ (was from __builtin__) Overwriting void0= from __builtin__ (was from __builtin__) Overwriting tri= from scipy.base.twodim_base (was from numpy.lib.twodim_base) Overwriting arrayrange= from scipy.base.multiarray (was from numpy.core.multiarray) Overwriting indices= from 
[Hundreds of near-identical lines follow, one per public name, all of the form

    Overwriting <name>= from scipy.base.<module> (was from numpy.<module>)

covering essentially every exported symbol: ufuncs (sin, cosh, add, ...), scalar
types (uint8, float64, ...), routines from multiarray, oldnumeric, function_base,
shape_base, index_tricks, polynomial, twodim_base and type_check, constants (pi,
Inf, NaN, PZERO, ...), and the large typecodes, typeDict, nbytes and cast
dictionaries. The archiver stripped the angle-bracketed object reprs, leaving
fragments such as "Overwriting cosh= (was )".]
[The Overwriting log continues through ScalarType, where the archiver truncates
the message. The headers of the next post, a reply from mfmorss at aep.com
(Mark F. Morss) in the NumPy 0.9.6 thread, were also mangled.]

>We need some of these people to convert earlier than that, though, so we
>can make sure that 1.0 really is ready for prime time. I think it's
>very close already or the version number wouldn't be so high. We are
>just waiting for more people to start using it and report any issues
>that they have before going to 1.0 (I'd also like scalar math to be
>implemented pre 1.0 as well in case that requires any C-API additions).

I can understand that, but here, as potential industrial users of Numpy,
we can't really afford the risk. We're looking at Numpy as a key piece of
a Python replacement of commercial software for critical daily production.
If we build Numpy/Scipy into our system, it has to work. We don't want to
be anyone's beta testers.

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power

[Mark's message appends the post he is replying to: Travis Oliphant's
message of 03/14/2006 03:25 PM to the SciPy Users List and
numpy-discussion, Subject: Re: [SciPy-user] [Numpy-discussion] [ANN]
NumPy 0.9.6 released:]

Paul Ray wrote:

>On Mar 14, 2006, at 1:33 PM, Travis Oliphant wrote:
>
>>>Just wondering, does this one also require an update to scipy?
>>>And in general do numpy updates always require an update to scipy,
>>>too? Or is it only when the numpy C API interface changes?
>>
>>It's only when the C-API interface changes that this is necessary.
>>Pre-1.0 it is likely to happen at each release.
>>
>>After that it is less likely to happen.
>
>Pre-1.0, I think it is fine for the version numbering to be a
>free-for-all, as it has been. There have been many times, it seems,
>where the latest numpy didn't work with the latest scipy and stuff
>like that.

The current SVN of numpy and the current SVN of scipy usually work with
each other (perhaps with only a day lag).

>After 1.0, I would like to encourage some discipline with regards to
>numpy version numbering. From 1.0, the API should NOT change at all
>until 1.1.

This is the plan, and it is why we are being slow with getting 1.0 out.
But we *need* people to try NumPy on their codes so that we can
determine whether or not API additions are needed. Without this testing
we are just guessing based on the developers' own codes, which do not
cover the full breadth of usage.

>I think that many people with codes that use Numeric or numarray are
>awaiting the numpy 1.0 release as a sign that it is stable and ready
>for prime time, before converting their codes.

We need some of these people to convert earlier than that, though, so
we can make sure that 1.0 really is ready for prime time. I think it's
very close already or the version number wouldn't be so high. We are
just waiting for more people to start using it and report any issues
they have before going to 1.0 (I'd also like scalar math to be
implemented pre-1.0 as well, in case that requires any C-API additions).

Thanks for the feedback. What you described will be the basic behavior
once 1.0 is released. Note that right now the only change that usually
needs to be made is a re-compile. I know this can still be annoying
when you have a deep code stack.
-Travis

-------------------------------------------------------
This SF.Net email is sponsored by xPML, a groundbreaking scripting
language that extends applications into web and mobile media. Attend
the live webcast and join the prime developer group breaking into this
new coding territory!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From schofield at ftw.at  Wed Mar 15 09:06:17 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed, 15 Mar 2006 15:06:17 +0100
Subject: [SciPy-user] [Numpy-discussion] [ANN] NumPy 0.9.6 released
In-Reply-To:
References:
Message-ID: <44181F59.3030001@ftw.at>

mfmorss at aep.com wrote:
>> We need some of these people to convert earlier than that, though, so we
>> can make sure that 1.0 really is ready for prime time. I think it's
>> very close already or the version number wouldn't be so high. We are
>> just waiting for more people to start using it and report any issues
>> that they have before going to 1.0 (I'd also like scalar math to be
>> implemented pre 1.0 as well in case that requires any C-API additions).
>>
> I can understand that, but here, as potential industrial users of Numpy,
> we can't really afford the risk. We're looking at Numpy as a key piece of
> a Python replacement of commercial software for critical daily production.
> If we build Numpy/Scipy into our system, it has to work. We don't want to
> be anyone's beta testers.
>
NumPy is rapidly becoming more stable, thanks to the hard work of Travis
and its other developers. But my opinion is that SciPy is not yet ready
for "critical daily production" unless you're willing to work through
bugs or missing functionality.
I assume you have compelling reasons for wanting to migrate from your
current commercial software to Python / NumPy / SciPy, in which case I
suggest you consider a support contract with a company like Enthought. I
don't have links to them myself, but the website advertises consulting
services for numerical computing on these platforms, and there will be
others on the scipy-user list who can give you more information.

-- Ed

From manouchk at gmail.com  Wed Mar 15 10:01:12 2006
From: manouchk at gmail.com (manouchk)
Date: Wed, 15 Mar 2006 12:01:12 -0300
Subject: [SciPy-user] installation problems
In-Reply-To:
References: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr>
Message-ID: <200603151201.12834.manouchk@gmail.com>

On Tuesday 14 March 2006 14:12, Hanno Klemm wrote:
> Philippe,
>
> I had a similar problem. When you are installing on a Red Hat
> distribution, it is highly likely that your BLAS or another numerical
> library is incomplete. That seemed to be the problem on the Red Hat
> distribution I have been using.
>
> Probably you then have to compile the numerical libraries yourself
> (that's at least what I did).

On Mandriva 2005LE there is a similar problem. When I run
scipy.test(level=1) it gives a lot of lines with the problem
"undefined symbol: srotmg_", like this one:

import signal ->
failed: /usr/lib/python2.4/site-packages/scipy/linalg/fblas.so: undefined
symbol: srotmg_

and then there is a warning:

WARNING: clapack module is empty

If I refer to an old mail on the scipy-user mailing list:
http://www.scipy.net/pipermail/scipy-user/2002-June/000447.html

the solution was already given there:

"Yes, srotmg_ is missing in the BLAS libraries that are included in LAPACK.
You have to download http://netlib2.cs.utk.edu/blas/blas.tgz and rebuild
BLAS to fix it."

Question 1:
only blas.tgz is needed?

Question 2:
is it better to use the cblas.tgz in order to have coherent *.c and *.f files?

Question 3:
the solution is to replace all *.f and *.c files and then build?
Question 4:
what is the license of blas.tgz and cblas.tgz?

The problem is still there in Mandriva 2005! It is a pity that the
problem was not resolved so that the distribution could ship the
complete version.

I hope this is not too far off topic, but I'd like to know whether
Mandriva is now an exception and other distributions ship the
"complete" version.

> HTH,
> Hanno
>
> On Mar 14, 2006, at 6:03 PM, FONTAINE Philipe wrote:
> > I have installed scipy with all the needed packages.
> >
> > When I load it from Python, it gives me the following messages:
> >>>> from scipy import *
> >
> > Traceback (most recent call last):
> >   File "", line 1, in ?
> >   File "/usr/local/lib/python2.3/site-packages/scipy/signal/__init__.py", line 9, in ?
> >     from bsplines import *
> >   File "/usr/local/lib/python2.3/site-packages/scipy/signal/bsplines.py", line 3, in ?
> >     import scipy.special
> >   File "/usr/local/lib/python2.3/site-packages/scipy/special/__init__.py", line 10, in ?
> >     import orthogonal
> >   File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 66, in ?
> >     from scipy.linalg import eig
> >   File "/usr/local/lib/python2.3/site-packages/scipy/linalg/__init__.py", line 8, in ?
> >     from basic import *
> >   File "/usr/local/lib/python2.3/site-packages/scipy/linalg/basic.py", line 228, in ?
> >     import decomp
> >   File "/usr/local/lib/python2.3/site-packages/scipy/linalg/decomp.py", line 18, in ?
> >     from blas import get_blas_funcs
> >   File "/usr/local/lib/python2.3/site-packages/scipy/linalg/blas.py", line 15, in ?
> >     import fblas
> > ImportError: /usr/local/lib/python2.3/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_
> >
> > Did someone have the same problem?
> > Anyone know how to solve it?
> >
> > Many thanks
> >
> > Philippe
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.net
> > http://www.scipy.net/mailman/listinfo/scipy-user

From garyt at bmb.leeds.ac.uk  Wed Mar 15 11:36:46 2006
From: garyt at bmb.leeds.ac.uk (Gary S. Thompson)
Date: Wed, 15 Mar 2006 16:36:46 +0000
Subject: [SciPy-user] non contiguous numpy arrays
Message-ID: <4418429E.3000806@bmb.leeds.ac.uk>

Hi,
    I have read the material on the numpy interface and understand most
of it (?) ;-) However, I want to interface numpy to an array which is
stored as a series of data blocks which are not contiguous in memory...
Help!

So, for example, I have a 3D matrix of 1024*512*128 floats where the
data is stored in 16x16x16 blocks within memory and read in from disk
on demand...

regards
gary

--
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK
Tel. +44-113-3433024    email: garyt at bmb.leeds.ac.uk
Fax +44-113-2331407
-------------------------------------------------------------------

From robert.kern at gmail.com  Wed Mar 15 12:08:30 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 15 Mar 2006 11:08:30 -0600
Subject: [SciPy-user] scipy 0.4.6 / numpy 0.9.5 | overwriting functions
In-Reply-To: <025901c64831$8b385500$0e01a8c0@JELLE>
References: <025901c64831$8b385500$0e01a8c0@JELLE>
Message-ID: <44184A0E.60108@gmail.com>

Jelle Feringa / EZCT Architecture & Design Research wrote:
> Dear Group,
>
> I'm having some issues running scipy 0.4.6 / numpy 0.9.5, which are hard
> for me to understand. Perhaps someone is able to tell me what is going
> wrong here. I've tried installing numpy 0.9.6, and scipy failed to load
> when 0.9.6 was installed. I reverted to 0.9.5 and am getting the
> following errors (see end of mail).
> Am I making a serious mistake here setting up scipy/numpy, or is this
> the current state of affairs?

Did you remove scipy completely and then re-install? Or did you try to
install over the existing version?

> Also I've been experimenting with the Delaunay module.
> I've been able to compile it using Mike Fletcher's setup:
> http://www.vrplumber.com/programming/mstoolkit/
> It's really a wonderful addition to scipy and compiles perfectly out of
> the box. Thanks so much for the effort.

Excellent! You're welcome.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Wed Mar 15 12:23:43 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 15 Mar 2006 11:23:43 -0600
Subject: [SciPy-user] installation problems
In-Reply-To: <200603151201.12834.manouchk@gmail.com>
References: <1142355827.25646.13.camel@rosette.synchrotron-soleil.fr>
 <200603151201.12834.manouchk@gmail.com>
Message-ID: <44184D9F.7090703@gmail.com>

manouchk wrote:
> On Tuesday 14 March 2006 14:12, Hanno Klemm wrote:
>
>> Philippe,
>>
>> I had a similar problem. When you are installing on a Red Hat
>> distribution, it is highly likely that your BLAS or another numerical
>> library is incomplete. That seemed to be the problem on the Red Hat
>> distribution I have been using.
>>
>> Probably you then have to compile the numerical libraries yourself
>> (that's at least what I did).
> > On mandriva 2005LE there is a similar problem when I run scipy.test(level=1) > > it gives a lo of line with the problem "undefined symbol: srotmg_" like this > one : > > import signal -> > failed: /usr/lib/python2.4/site-packages/scipy/linalg/fblas.so: undefined > symbol: srotmg_ > > and then there is a warning : > > WARNING: clapack module is empty > > If I refer to old mail of scipy-user mailing list : > http://www.scipy.net/pipermail/scipy-user/2002-June/000447.html > > the solution was already given there : > > "Yes, srotmg_ is missing in the BLAS libraries that are included in LAPACK. > You have to download http://netlib2.cs.utk.edu/blas/blas.tgz and rebuild > BLAS to fix it." > > Question 1: > only blas.tgz is needed? Yes. > Question 2: > is it better to use the cblas.tgz in order to have coherent *.c and *.f files? I'm not sure what this means. I would imagine that it's only worth using CBLAS if it's optimized like ATLAS. Don't bother. > Question 3: > the solution is to replace all *.f and .c files and then build? You can actually just unpack the blas.tgz somewhere and set the environment variable BLAS_SRC to that directory. The "Linear Algebra libraries" section of http://old.scipy.org/documentation/buildscipy.txt tells you how (it still applies, although it is an old document). > Question 4: > what is the license of blas.tgz and cblas.tgz? According to Debian, which is quite picky about such matters, blas.tgz at least is in the public domain. > The problem is still there in mandriva 2005! It is a pity that the problem was > not resolved so that the distribution chip the complete version. > > I hope it is not to much out of the subject but I'd like to know if mandriva > is now a exception and other distribution ship the "complete" version? Redhat seems to have that problem, too. Debian and its progeny do not. Go Ubuntu! 
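Robert's BLAS_SRC suggestion above can be sketched as a short shell session. This is illustrative only: the paths are made up, the download/unpack steps are shown commented out, and buildscipy.txt covers the rest of the procedure.

```shell
# Unpack the reference BLAS sources somewhere, then point the scipy
# build at them via the BLAS_SRC environment variable.
mkdir -p "$HOME/src/blas"
cd "$HOME/src"
# wget http://netlib2.cs.utk.edu/blas/blas.tgz   # fetch the Fortran sources
# tar xzf blas.tgz -C blas                       # unpack into $HOME/src/blas
export BLAS_SRC="$HOME/src/blas"
echo "BLAS_SRC is set to $BLAS_SRC"
```

The scipy build can then compile the reference BLAS from those sources itself, so no system-wide library needs replacing.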
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Wed Mar 15 16:37:54 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Mar 2006 14:37:54 -0700 Subject: [SciPy-user] non contiguous numpy arrays In-Reply-To: <4418429E.3000806@bmb.leeds.ac.uk> References: <4418429E.3000806@bmb.leeds.ac.uk> Message-ID: <44188932.2000309@ee.byu.edu> Gary S. Thompson wrote: >Hi > I have read the stuff on the numpy interface and understand most of >if(?) ;-) However I want to interface numpy to an array which is stored >as a series of data blocks which are not contiguous in memory... Help! > > > >so for example I hava 3d matrix of 1024*512*128 where data is stored in >blocks within memory which are 16x16x16 floats an read in from disk on >demand... > > > The memory model for numpy arrays requires that you can access the next element in each dimension by striding a "fixed-number" of bytes. In other words, to be a single ndarray, element i,j,k of the array, B, must be at start_of_data + i*B.stride[0] + j*B.stride[1] + k*B.stride[2] So, in your case, do these 16x16x16 blocks of floats all exist at arbitrary memory locations? If so, then I don't see how you can map that to a single ndarray. You could, however, write a class that wraps each block as a separate sub-array and do the indexing calculations yourself to determine which sub-block the data is stored in. But, I don't see a way to use a single ndarray to access memory layed out like that. Perhaps I still don't understand what you are doing. Are you using a memory map? -Travis From zpincus at stanford.edu Wed Mar 15 17:22:25 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 15 Mar 2006 14:22:25 -0800 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? 
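The wrapper class Travis suggests can be sketched in a few lines. The following is a hypothetical, minimal pure-Python stand-in (all names are illustrative; a real version would hold numpy sub-arrays read from disk): each 16x16x16 block lives at its own location, and indexing does the block/offset arithmetic itself.

```python
# Minimal sketch of a block-stored 3-D array: the full volume is never
# one contiguous buffer; __getitem__/__setitem__ first find the owning
# block, then the offset inside it.
class BlockArray3D:
    def __init__(self, shape, block_shape):
        self.shape = shape
        self.block_shape = block_shape
        self._blocks = {}  # block id -> block data, filled on demand

    def _locate(self, i, j, k):
        bi, bj, bk = self.block_shape
        return (i // bi, j // bj, k // bk), (i % bi, j % bj, k % bk)

    def _block(self, block_id):
        # stand-in for "read this block in from disk on demand"
        if block_id not in self._blocks:
            bi, bj, bk = self.block_shape
            self._blocks[block_id] = [[[0.0] * bk for _ in range(bj)]
                                      for _ in range(bi)]
        return self._blocks[block_id]

    def __getitem__(self, idx):
        block_id, (a, b, c) = self._locate(*idx)
        return self._block(block_id)[a][b][c]

    def __setitem__(self, idx, value):
        block_id, (a, b, c) = self._locate(*idx)
        self._block(block_id)[a][b][c] = value

vol = BlockArray3D((1024, 512, 128), (16, 16, 16))
vol[100, 200, 50] = 3.5
```

Contrast this with the single-ndarray rule quoted above: no fixed byte stride reaches element (i, j, k) here, which is exactly why a wrapper rather than an ndarray view is needed.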
Message-ID: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> Hi folks, I'm trying to estimate smooth contours with scipy's parametric splines, given a set of (x, y) points known to lie along the contour. Now, because contours are closed, special care must be taken to ensure that the interpolated value at the last point be the same as the interpolated value at the first point. I assume that the 'per' option to scipy.interpolate.splprep is designed to allow for this, as per the documentation: > per -- If non-zero, data points are considered periodic with period > x[m-1] - x[0] and a smooth periodic spline > approximation is returned. > Values of y[m-1] and w[m-1] are not used. However, I cannot get this to work on my computer. Setting 'per = true' always results in memory errors or other problems. Here is a simple example to reproduce the problem: # first make x and y points along the unit circle, from zero to just below two pi In [1]: import numpy, scipy, scipy.interpolate In [2]: twopi = numpy.arange(0, 2 * numpy.pi, 0.1) In [3]: xs = numpy.cos(twopi) In [4]: ys = numpy.sin(twopi) In [5]: tck, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = True) Warning: Setting x[0][63]=x[0][0] Warning: Setting x[1][63]=x[1][0] ...[here my machine grinds for 2-3 minutes]... Warning: The required storage space exceeds the available strorage space. Probably causes: nest to small or s is too small. (fp>s) At this point, the returned tck arrays are just all zeros. Sometimes I get other malloc errors printed to stdout and memory error exceptions, e.g.: Python(6820,0xa000ed68) malloc: *** vm_allocate(size=3716243456) failed (error code=3) Python(6820,0xa000ed68) malloc: *** error: can't allocate region Python(6820,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug During the time that my machine is grinding, python is using very little CPU -- the grind is all because python is allocating huge amounts of memory, causing the pager to go nuts. 
If I explicitly make the last value of the x and y input arrays equal to the first value (as the warnings say that the function is doing), I get the same problem: In [6]: xs[-1] = xs[0] In [7]: ys[-1] = ys[0] In [8]: tck, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = True) #same thing Any thoughts? Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine From oliphant at ee.byu.edu Wed Mar 15 17:40:07 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Mar 2006 15:40:07 -0700 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? In-Reply-To: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> References: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> Message-ID: <441897C7.5030204@ee.byu.edu> Zachary Pincus wrote: >Hi folks, > >I'm trying to estimate smooth contours with scipy's parametric >splines, given a set of (x, y) points known to lie along the contour. > >Now, because contours are closed, special care must be taken to >ensure that the interpolated value at the last point be the same as >the interpolated value at the first point. > > You've potentially uncovered a bug. For periodic interpolation, I typically use Fourier methods. I presume, your known points are not "equally-spaced" though? Hopefully your example will let us find the problem. -Travis From zpincus at stanford.edu Wed Mar 15 18:05:27 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 15 Mar 2006 15:05:27 -0800 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? In-Reply-To: <441897C7.5030204@ee.byu.edu> References: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> <441897C7.5030204@ee.byu.edu> Message-ID: On Mar 15, 2006, at 2:40 PM, Travis Oliphant wrote: > Zachary Pincus wrote: > >> Hi folks, >> >> I'm trying to estimate smooth contours with scipy's parametric >> splines, given a set of (x, y) points known to lie along the contour. 
>> >> Now, because contours are closed, special care must be taken to >> ensure that the interpolated value at the last point be the same as >> the interpolated value at the first point. > > You've potentially uncovered a bug. > > For periodic interpolation, I typically use Fourier methods. I > presume, > your known points are not "equally-spaced" though? Yes, the points are irregularly spaced. I'm actually using the interpolation in a procedure to relax the spacing so that all points *are* equally spaced. Right now, I'm working around this bug by overlapping the data on each end, and only evaluating the spline at some distance from the ends. This works OK, in case anyone else needs to work around this. > Hopefully your example will let us find the problem. Let me know if I can help. I also forgot the version information: In [49]: numpy.version.version Out[49]: '0.9.6.2208' In [50]: scipy.version.version Out[50]: '0.4.7.1660' All on OS X version 10.4.5 with python 2.4.2. Zach > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From oliphant at ee.byu.edu Wed Mar 15 18:24:24 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Mar 2006 16:24:24 -0700 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? In-Reply-To: References: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> <441897C7.5030204@ee.byu.edu> Message-ID: <4418A228.7060303@ee.byu.edu> Zachary Pincus wrote: > > >>Hopefully your example will let us find the problem. >> >> > >Let me know if I can help. I also forgot the version information: > > It looks like the problem is that the clocur function from FITPACK itself is not setting the value of 'n' like it is supposed to. I'm not sure why, perhaps some parameters are set wrong. 
-Travis From Doug.LATORNELL at mdsinc.com Wed Mar 15 18:47:05 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Wed, 15 Mar 2006 15:47:05 -0800 Subject: [SciPy-user] Bug in scipy.integrate.ode? Message-ID: <34090E25C2327C4AA5D276799005DDE001010666@SMDMX0501.mds.mdsinc.com> In [2]: numpy.__version__ Out[2]: '0.9.7.2247' In [4]: scipy.__version__ Out[4]: '0.4.7.1711' Trivial integration of a decaying exponential: from scipy import * def f(t, y): return -log(2) / 2 * y r = integrate.ode(f).set_integrator('vode') r.set_initial_value(1.0) while r.successful() and r.t <= 2: r.integrate(r.t + 0.1) print r.t, r.y Produces: Python 2.4.1 (#1, Sep 3 2005, 13:08:59) Type "copyright", "credits" or "license" for more information. IPython 0.6.15 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: ## working on region in file /tmp/python-23462ATK.py... Found integrator vode ------------------------------------------------------------------------ --- exceptions.NameError Traceback (most recent call last) /home/doug/python/ /tmp/python-23462ATK.py 6 7 r = integrate.ode(f).set_integrator('vode') ----> 8 r.set_initial_value(1.0) 9 while r.successful() and r.t <= 2: 10 r.integrate(r.t + 0.1) /usr/local/lib/python2.4/site-packages/scipy/integrate/ode.py in set_initial_value(self, y, t) 141 def set_initial_value(self,y,t=0.0): 142 """Set initial conditions y(t) = y.""" --> 143 if isscalar(y): 144 y = [y] 145 n_prev = len(self.y) NameError: global name 'isscalar' is not defined The fix appears to be to add 'isscalar' to the 'from numpy import ...' at line 94 in scipy/integrate/ode.py Doug This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. 
If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From oliphant at ee.byu.edu Wed Mar 15 18:54:38 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Mar 2006 16:54:38 -0700 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? In-Reply-To: References: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> <441897C7.5030204@ee.byu.edu> Message-ID: <4418A93E.7080909@ee.byu.edu> Zachary Pincus wrote: >Yes, the points are irregularly spaced. I'm actually using the >interpolation in a procedure to relax the spacing so that all points >*are* equally spaced. > >Right now, I'm working around this bug by overlapping the data on >each end, and not evaluating the spline only at some distance from >the ends. This works OK, in case anyone else needs to work around this. > > > >>Hopefully your example will let us find the problem. >> >> I think I found the problem. For this situation, the estimated for the number of knots needed was set to the expected value (but it could need more, which it did in your situation). Then, this error was not handled well because the value of n under these conditions was not even being set (but was being used to allocate memory). I committed two fixes to SVN: 1) The estimated number of knots was increased to the maximum needed (this alone removes the error you are seeing) To see if this helps you go to /scipy/interpolate/fitpack.py line number 198 and change nest = m/2 to nest = m+2*k This might be enough to fix the problem. The second fix handles the error condition more gracefully, hopefully, then allocating a huge array.... 
2) If the error does show up, then don't assume 'n' is valid but set it to something sane. From tim.leslie at gmail.com Wed Mar 15 20:01:47 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Thu, 16 Mar 2006 12:01:47 +1100 Subject: [SciPy-user] Bug in scipy.integrate.ode? In-Reply-To: <34090E25C2327C4AA5D276799005DDE001010666@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE001010666@SMDMX0501.mds.mdsinc.com> Message-ID: On 3/16/06, LATORNELL, Doug wrote: > > > The fix appears to be to add 'isscalar' to the 'from numpy import ...' > at line 94 in scipy/integrate/ode.py I've fixed this in svn revision 1713. Thanks for the report. Tim > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From zpincus at stanford.edu Wed Mar 15 20:41:20 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 15 Mar 2006 17:41:20 -0800 Subject: [SciPy-user] Periodic spline interpolation bug / memory error? In-Reply-To: <4418A93E.7080909@ee.byu.edu> References: <86837623-AD95-4525-8C69-F130F5F43EB2@stanford.edu> <441897C7.5030204@ee.byu.edu> <4418A93E.7080909@ee.byu.edu> Message-ID: > I think I found the problem. 
For this situation, the estimated for > the > number of knots needed was set to the expected value (but it could > need > more, which it did in your situation). Then, this error was not > handled > well because the value of n under these conditions was not even being > set (but was being used to allocate memory). > > I committed two fixes to SVN: Hello again. I just svn up'd my scipy and numpy trees to check this out. The fix does indeed allow the periodic parametric spline interpolation to work. Unfortunately, now it seems to be producing wrong results! Unless perhaps I am misunderstanding what the 'per' option is supposed to be doing. Same setup as before: import numpy, scipy, scipy.interpolate twopi = numpy.arange(0, 2 * numpy.pi, 0.1) xs = numpy.cos(twopi) ys = numpy.sin(twopi) tck, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = False) tck_per, uout = scipy.interpolate.splprep([xs, ys], u = twopi, per = True) # some warning text is printed out = scipy.interpolate.splev(twopi, tck) out_per = scipy.interpolate.splev(twopi, tck_per) Now, 'out' isn't quite right because it wasn't forced to be periodic. So instead of a circle, it looks like an alpha: import Gnuplot g = Gnuplot.Gnuplot() g.plot(numpy.transpose(out)) But, 'out_per' is even worse. It seems to be a line, or nearly so: g.plot(numpy.transpose(out_per)) I'm not sure if this is an error or what... but it seems to be somehow wrong, for sure! Zach From nwagner at mecha.uni-stuttgart.de Thu Mar 16 03:38:26 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Mar 2006 09:38:26 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack Message-ID: <44192402.4040207@mecha.uni-stuttgart.de> Is this a bug ? 
A_csc = csc_matrix((3,3),Complex128) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 521, in __init__ (M, N) = dims ValueError: need more than 1 value to unpack Nils From samrobertsmith at gmail.com Thu Mar 16 04:05:37 2006 From: samrobertsmith at gmail.com (linda.s) Date: Thu, 16 Mar 2006 01:05:37 -0800 Subject: [SciPy-user] difference between NumPy and Scipy Message-ID: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> i saw the news of NumPy release and SciPy release. Numpy is said to be the core of Scipy. But why they have different releases? for example: NumPy 0.9.6 released! (2006-03-14) See the Download and Release Notes pages. SciPy 0.4.6 released! (2006-02-16) See the Download and Release Notes pages. NumPy 0.9.5 released! (2006-02-16) See the Download and Release Notes pages. From nwagner at mecha.uni-stuttgart.de Thu Mar 16 04:41:49 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Mar 2006 10:41:49 +0100 Subject: [SciPy-user] linspace Message-ID: <441932DD.2000808@mecha.uni-stuttgart.de> Hi all, linspace is useful for one-dimensional objects. Is there something similar for surfaces ? I would like to build a uniform mesh on a cube (-R,R)^3 I mean collocation points uniformly distributed on each side of the cube. Nils From arnd.baecker at web.de Thu Mar 16 04:52:02 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 16 Mar 2006 10:52:02 +0100 (CET) Subject: [SciPy-user] linspace In-Reply-To: <441932DD.2000808@mecha.uni-stuttgart.de> References: <441932DD.2000808@mecha.uni-stuttgart.de> Message-ID: On Thu, 16 Mar 2006, Nils Wagner wrote: > Hi all, > > linspace is useful for one-dimensional objects. > Is there something similar for surfaces ? > I would like to build a uniform mesh on a cube (-R,R)^3 > I mean collocation points uniformly distributed on each side of the cube. What about numpy.mgrid? 
Arnd From nwagner at mecha.uni-stuttgart.de Thu Mar 16 05:09:27 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Mar 2006 11:09:27 +0100 Subject: [SciPy-user] linspace In-Reply-To: References: <441932DD.2000808@mecha.uni-stuttgart.de> Message-ID: <44193957.6020606@mecha.uni-stuttgart.de> Arnd Baecker wrote: > On Thu, 16 Mar 2006, Nils Wagner wrote: > > >> Hi all, >> >> linspace is useful for one-dimensional objects. >> Is there something similar for surfaces ? >> I would like to build a uniform mesh on a cube (-R,R)^3 >> I mean collocation points uniformly distributed on each side of the cube. >> > > What about numpy.mgrid? > > Arnd > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > That hits the spot, but how can I distribute N**2 points in the open domain, that is points should not lie on the edges of the cube. Please can you fill the gaps in X,Y,Z = mgrid[ ..., ..., ...] Thanks in advance Nils From nwagner at mecha.uni-stuttgart.de Thu Mar 16 05:36:23 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Mar 2006 11:36:23 +0100 Subject: [SciPy-user] AttributeError: 'numpy.ndarray' object has no attribute 'step' In-Reply-To: References: <441932DD.2000808@mecha.uni-stuttgart.de> Message-ID: <44193FA7.7090107@mecha.uni-stuttgart.de> Arnd Baecker wrote: > On Thu, 16 Mar 2006, Nils Wagner wrote: > > >> Hi all, >> >> linspace is useful for one-dimensional objects. >> Is there something similar for surfaces ? >> I would like to build a uniform mesh on a cube (-R,R)^3 >> I mean collocation points uniformly distributed on each side of the cube. >> > > What about numpy.mgrid? 
> > Arnd > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > I tried >>> X,Y = mgrid[linspace(-0.4,0.4,10),linspace(-0.4,0.4,10)] Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/numpy/lib/index_tricks.py", line 89, in __getitem__ step = key[k].step AttributeError: 'numpy.ndarray' object has no attribute 'step' Nils From schofield at ftw.at Thu Mar 16 06:40:29 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 16 Mar 2006 12:40:29 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: <44192402.4040207@mecha.uni-stuttgart.de> References: <44192402.4040207@mecha.uni-stuttgart.de> Message-ID: <44194EAD.50809@ftw.at> Nils Wagner wrote: > Is this a bug ? > > A_csc = csc_matrix((3,3),Complex128) > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 521, in __init__ > (M, N) = dims > ValueError: need more than 1 value to unpack > This works fine for me: >>> from scipy import sparse as s >>> import numpy >>> A_csc = s.csc_matrix((3,3), numpy.complex128) Try with a lowercase 'complex128'. I think the uppercase dtype names are deprecated, but also _different_ to the lowercase dtype names, so Complex128 would be equivalent to complex256 -- that is, the real and imaginary components are both 128-bit floats. Complex128, with uppercase 'C', is not even defined on my 32-bit machine... But it looks like a bug anyway. Robert C., do you have a 64-bit machine to investigate this? 
-- Ed From nwagner at mecha.uni-stuttgart.de Thu Mar 16 06:56:23 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Mar 2006 12:56:23 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: <44194EAD.50809@ftw.at> References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> Message-ID: <44195267.3000306@mecha.uni-stuttgart.de> Ed Schofield wrote: > Nils Wagner wrote: > >> Is this a bug ? >> >> A_csc = csc_matrix((3,3),Complex128) >> File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line >> 521, in __init__ >> (M, N) = dims >> ValueError: need more than 1 value to unpack >> >> > > This works fine for me: > > >>>> from scipy import sparse as s >>>> import numpy >>>> A_csc = s.csc_matrix((3,3), numpy.complex128) >>>> > > Try with a lowercase 'complex128'. I think the uppercase dtype names > are deprecated, but also _different_ to the lowercase dtype names, so > Complex128 would be equivalent to complex256 -- that is, the real and > imaginary components are both 128-bit floats. Complex128, with > uppercase 'C', is not even defined on my 32-bit machine... > > On a 32bit system A_csc = s.csc_matrix((3,3),Complex128) NameError: name 'Complex128' is not defined If I use a lowercase dtype name A_csc = s.csc_matrix((3,3),complex128) for i in arange(0,3): A_csr[i,i] = 1.0+2j A_csc[i,i] = 1.0+2j >>> A_csc <3x3 sparse matrix of type '' with 3 stored elements (space for 100) in Compressed Sparse Column format> Note that the imaginary part is missing ! >>> print A_csc (0, 0) 1.0 (1, 1) 1.0 (2, 2) 1.0 Nils > But it looks like a bug anyway. Robert C., do you have a 64-bit machine > to investigate this? > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From garyt at bmb.leeds.ac.uk Thu Mar 16 06:57:42 2006 From: garyt at bmb.leeds.ac.uk (Gary S. 
Thompson) Date: Thu, 16 Mar 2006 11:57:42 +0000 Subject: [SciPy-user] non contiguous numpy arrays In-Reply-To: References: Message-ID: <441952B6.5080708@bmb.leeds.ac.uk> >Hi > I have read the stuff on the numpy interface and understand most of >if(?) ;-) However I want to interface numpy to an array which is stored >as a series of data blocks which are not contiguous in memory... Help! > >so for example I hava 3d matrix of 1024*512*128 where data is stored in >blocks within memory which are 16x16x16 floats an read in from disk on >demand... > >regards >gary > > >>i >>> I have read the stuff on the numpy interface and understand most of >>>if(?) ;-) However I want to interface numpy to an array which is stored >>>as a series of data blocks which are not contiguous in memory... Help! >>> >>> >>> >> >> >>>so for example I hava 3d matrix of 1024*512*128 where data is stored in >>>blocks within memory which are 16x16x16 floats an read in from disk on >>>demand... >>> >>> >>> >> >> > The memory model for numpy arrays requires that you can access the > next element in each dimension by striding a "fixed-number" of bytes. > In other words, to be a single ndarray, element i,j,k of the array, B, > must be at start_of_data + i*B.stride[0] + j*B.stride[1] + > k*B.stride[2] So, in your case, do these 16x16x16 blocks of floats all > exist at arbitrary memory locations? yep afraid so > If so, then I don't see how you can map that to a single ndarray. You > could, however, write a class that wraps each block as a separate > sub-array and do the indexing calculations yourself to determine which > sub-block the data is stored in. But, I don't see a way to use a > single ndarray to access memory layed out like that. Perhaps I still > don't understand what you are doing. Are you using a memory map? -Travis no ;-) no memory maps I believe So the problem I am trying to overcome is that the program I am using looks at large data matrices (100's of megabyte). 
Therefore it uses these submatrix formats to keep memory usage down if you are only lookiung at particular regions. Mentally I was thinking of this as being a bit like a sparse matrix but with all the data present So is there sparse matrix supportfor numpy? Now this was something that wasn't clear do you access each value in the numpy array by a direct memory reference or by a function call when you want to use it... regards gary -- ------------------------------------------------------------------- Dr Gary Thompson Astbury Centre for Structural Molecular Biology, University of Leeds, Astbury Building, Leeds, LS2 9JT, West-Yorkshire, UK Tel. +44-113-3433024 email: garyt at bmb.leeds.ac.uk Fax +44-113-2331407 ------------------------------------------------------------------- From oliphant.travis at ieee.org Thu Mar 16 10:12:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Mar 2006 08:12:40 -0700 Subject: [SciPy-user] AttributeError: 'numpy.ndarray' object has no attribute 'step' In-Reply-To: <44193FA7.7090107@mecha.uni-stuttgart.de> References: <441932DD.2000808@mecha.uni-stuttgart.de> <44193FA7.7090107@mecha.uni-stuttgart.de> Message-ID: <44198068.2070703@ieee.org> Nils Wagner wrote: >>>> X,Y = mgrid[linspace(-0.4,0.4,10),linspace(-0.4,0.4,10)] >>>> > Traceback (most recent call last): > File "", line 1, in ? 
> File "/usr/lib64/python2.4/site-packages/numpy/lib/index_tricks.py", > line 89, in __getitem__ > step = key[k].step > AttributeError: 'numpy.ndarray' object has no attribute 'step' > mgrid doesn't take sequences it takes "slice notation" mgrid[-0.4:0.4:10j, -0.4:0.4:10j] should work -Travis From oliphant.travis at ieee.org Thu Mar 16 10:17:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Mar 2006 08:17:04 -0700 Subject: [SciPy-user] difference between NumPy and Scipy In-Reply-To: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> Message-ID: <44198170.1050309@ieee.org> linda.s wrote: > i saw the news of NumPy release and SciPy release. > Numpy is said to be the core of Scipy. But why they have different releases? > Because NumPy is the core of other packages as well. So, it has a different release schedule. It would be nice if NumPy and SciPy had synchronous releases, but we need more people to help with releases to make that possible. For now, SciPy releases are usually a little behind. -Travis From schofield at ftw.at Thu Mar 16 12:00:41 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 16 Mar 2006 18:00:41 +0100 Subject: [SciPy-user] difference between NumPy and Scipy In-Reply-To: <44198170.1050309@ieee.org> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> Message-ID: <441999B9.9020209@ftw.at> Travis Oliphant wrote: > linda.s wrote: > >> i saw the news of NumPy release and SciPy release. >> Numpy is said to be the core of Scipy. But why they have different releases? >> >> > Because NumPy is the core of other packages as well. So, it has a > different release schedule. > > It would be nice if NumPy and SciPy had synchronous releases, but we > need more people to help with releases to make that possible. For now, > SciPy releases are usually a little behind. 
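As a footnote to the mgrid exchange above: Nils also asked how to distribute points in the open domain, so that none lie on the edges of the cube. A common trick (an illustrative suggestion, not something stated in the thread) is to use cell centres, i.e. shift a uniform grid inward by half a cell. The one-dimensional rule, in plain Python:

```python
# N points uniformly spaced in the open interval (-R, R): divide the
# interval into N cells of width 2R/N and take each cell's midpoint,
# so no point ever falls on an edge.
def cell_centers(R, N):
    h = 2.0 * R / N  # cell width
    return [-R + h * (i + 0.5) for i in range(N)]

xs = cell_centers(0.4, 10)  # runs from -0.36 to 0.36, never hitting +/-0.4
```

With numpy the same points come out of mgrid by shifting the endpoints half a cell inward, e.g. mgrid[-R + h/2 : R - h/2 : N*1j] with h = 2*R/N, and the full mesh is the per-axis Cartesian product.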
> I could try to make a new SciPy release this weekend. Before then, though, I suggest we change the package name of nd_image. Here was my proposal from a previous thread: > I have a minor suggestion: that we rename the package from 'nd_image' > to 'image' for consistency with the other packages, like 'signal', > 'optimize' and 'sparse'. There's currently an 'image' package in the > sandbox, but that looks much less mature, and should probably be > integrated with the existing package when it actually does come into > the main tree. What do people think? At the very least we should remove the underscore, so it's more consistent with linalg and fftpack (which aren't lin_alg, fft_pack ;) -- Ed From arnd.baecker at web.de Thu Mar 16 12:02:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 16 Mar 2006 18:02:28 +0100 (CET) Subject: [SciPy-user] difference between NumPy and Scipy In-Reply-To: <441999B9.9020209@ftw.at> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> Message-ID: On Thu, 16 Mar 2006, Ed Schofield wrote: [...] > I could try to make a new SciPy release this weekend. that would be great! (for various reasons;-) > Before then, though, I suggest we change the package name of nd_image. > Here was my proposal from a previous thread: > > > I have a minor suggestion: that we rename the package from 'nd_image' > > to 'image' for consistency with the other packages, like 'signal', > > 'optimize' and 'sparse'. There's currently an 'image' package in the > > sandbox, but that looks much less mature, and should probably be > > integrated with the existing package when it actually does come into > > the main tree. > > What do people think? 
At the very least we should remove the > underscore, so it's more consistent with linalg and fftpack (which > aren't lin_alg, fft_pack ;) +1 for image - the shorter the better ;-) Arnd From oliphant.travis at ieee.org Thu Mar 16 12:08:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Mar 2006 10:08:07 -0700 Subject: [SciPy-user] difference between NumPy and Scipy In-Reply-To: <441999B9.9020209@ftw.at> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> Message-ID: <44199B77.1010109@ieee.org> Ed Schofield wrote: > I could try to make a new SciPy release this weekend. > > Before then, though, I suggest we change the package name of nd_image. > Here was my proposal from a previous thread: > > >> I have a minor suggestion: that we rename the package from 'nd_image' >> to 'image' for consistency with the other packages, like 'signal', >> 'optimize' and 'sparse'. There's currently an 'image' package in the >> sandbox, but that looks much less mature, and should probably be >> integrated with the existing package when it actually does come into >> the main tree. >> > > > What do people think? At the very least we should remove the > underscore, so it's more consistent with linalg and fftpack (which > aren't lin_alg, fft_pack ;) > Probably the right thing to do, especially as we improve it. Right now it's just a copy of nd_image from numarray (thus the name). Anybody who wants to make the necessary name changes is welcome. 
-Travis From matthew.brett at gmail.com Thu Mar 16 12:23:32 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 16 Mar 2006 17:23:32 +0000 Subject: [SciPy-user] scipy.test problem In-Reply-To: <200603150839.48570.bgoli@sun.ac.za> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> <200603150839.48570.bgoli@sun.ac.za> Message-ID: <1e2af89e0603160923h6b1d2a0epff8dd7795c849f49@mail.gmail.com> Hi, Just to update on 64 bit, scipy tests. I have now tested on two identical 64 bit linux P4 systems: 1) Fedora core 4 gcc (GCC) 4.0.2 20051125 (Red Hat 4.0.2-8) GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-47.fc4)) 2) Ubuntu gcc (GCC) 4.0.2 20050808 (prerelease) GNU Fortran (GCC) 3.4.5 20050809 Both have: nocona optimized atlas, fftw 3.1, numpy '0.9.7.2248', scipy '0.4.7.1713' System 1 generates error messages Hanno and I have reported for scipy.linalg.test() > FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) System 2 passes all scipy.linalg.test() Neither system generates an error for scipy.stats.test() - which appears to be the source of the errors reported by Nils and Brett Both systems hang indefinitely for: scipy.lib.lapack.test() I guess I'm going to conclude that At least the combination of gcc 4.0.2 and g77 3.2.3 and ATLAS is not compatible with linalg At least the combination of ATLAS, gcc 4.0.2 and either version of g77 that I have is not going to work for lib.lapack There is some other incompatibility for the setup of Nils and Brett. 
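The failing check_simple test above exercises the LAPACK-backed eigenvalue routine. A standalone sanity check in the same spirit, verifying that each eigenpair satisfies A v = lambda v, can be run against any suspect ATLAS/LAPACK build (a sketch using plain NumPy, not the actual test code):

```python
import numpy as np

# Sanity check in the spirit of the failing eig test: each eigenpair of a
# small matrix must satisfy A v = lambda v to near machine precision.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [2.0, 5.0, 6.0]])
w, v = np.linalg.eig(A)
for i in range(len(w)):
    residual = A @ v[:, i] - w[i] * v[:, i]
    assert np.allclose(residual, 0.0, atol=1e-10)
```

If a build is miscompiled, checks like this typically fail with large residuals or crash outright rather than erring subtly.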
Best, Matthew From robert.kern at gmail.com Thu Mar 16 13:02:50 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Mar 2006 12:02:50 -0600 Subject: [SciPy-user] scipy.test problem In-Reply-To: <1e2af89e0603160923h6b1d2a0epff8dd7795c849f49@mail.gmail.com> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> <200603150839.48570.bgoli@sun.ac.za> <1e2af89e0603160923h6b1d2a0epff8dd7795c849f49@mail.gmail.com> Message-ID: <4419A84A.5070406@gmail.com> Matthew Brett wrote: > Hi, > > Just to update on 64 bit, scipy tests. I have now tested on two > identical 64 bit linux P4 systems: > > 1) Fedora core 4 > gcc (GCC) 4.0.2 20051125 (Red Hat 4.0.2-8) > GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-47.fc4)) > > 2) Ubuntu > gcc (GCC) 4.0.2 20050808 (prerelease) > GNU Fortran (GCC) 3.4.5 20050809 > > Both have: nocona optimized atlas, fftw 3.1, numpy '0.9.7.2248', scipy > '0.4.7.1713' > > System 1 generates error messages Hanno and I have reported for > scipy.linalg.test() > >>FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) > > System 2 passes all scipy.linalg.test() > Neither system generates an error for scipy.stats.test() - which > appears to be the source of the errors reported by Nils and Brett > > Both systems hang indefinitely for: > > scipy.lib.lapack.test() > > I guess I'm going to conclude that > > At least the combination of gcc 4.0.2 and g77 3.2.3 and ATLAS is not > compatible with linalg > At least the combination of ATLAS, gcc 4.0.2 and either version of g77 > that I have is not going to work for lib.lapack > There is some other incompatibility for the setup of Nils and Brett. Hmm. On my AMD64 Ubuntu Breezy with the same compilers that you list above and ATLAS from the atlas3-base package, scipy.lib.lapack.test() runs perfectly. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Thu Mar 16 14:13:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 16 Mar 2006 20:13:01 +0100 Subject: [SciPy-user] nd_image rename In-Reply-To: <44199B77.1010109@ieee.org> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> Message-ID: <4419B8BD.8020808@ftw.at> Travis Oliphant wrote: > Ed Schofield wrote: > >> I could try to make a new SciPy release this weekend. >> >> Before then, though, I suggest we change the package name of nd_image. >> > Probably the right thing to do, especially as we improve it. Right now > it's just a copy of nd_image from numarray (thus the name). > > Anybody who wants to make the necessary name changes is welcome. > Done it. I've renamed the package at the Python level; the C module is still available to Python as _nd_image, with the leading underscore etc. Is the nd_image package still being maintained in numarray, or is SciPy now its official home? :) -- Ed From oliphant at ee.byu.edu Thu Mar 16 15:11:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 16 Mar 2006 13:11:26 -0700 Subject: [SciPy-user] nd_image rename In-Reply-To: <4419B8BD.8020808@ftw.at> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> Message-ID: <4419C66E.4000803@ee.byu.edu> Ed Schofield wrote: >Travis Oliphant wrote: > > >>Ed Schofield wrote: >> >> >> >>>I could try to make a new SciPy release this weekend. >>> >>>Before then, though, I suggest we change the package name of nd_image. >>> >>> >>> >>Probably the right thing to do, especially as we improve it. 
Right now >>it's just a copy of nd_image from numarray (thus the name). >> >>Anybody who wants to make the necessary name changes is welcome. >> >> >> >Done it. I've renamed the package at the Python level; the C module is >still available to Python as _nd_image, with the leading underscore etc. > >Is the nd_image package still being maintained in numarray, or is SciPy >now its official home? :) > > Peter, the original author, has said (for reasons that were not entirely clear) that he will not be supporting the numpy version (or eventually the numarray version, either --- I think he's planning on writing some other library-based solution). So, what we have in SciPy is "the official" home for the numpy version of his package. -Travis From joris at ster.kuleuven.ac.be Thu Mar 16 16:42:56 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Thu, 16 Mar 2006 22:42:56 +0100 Subject: [SciPy-user] nd_image Message-ID: <1142545376.4419dbe0b1903@webmail.ster.kuleuven.ac.be> [TO]: So, what we have in SciPy is "the official" home for the numpy [TO]: version of his package. Sorry, I am a bit lost here. So nd_image was a numarray library, and is now available via scipy where it works with numpy arrays (correct?). At the moment, the library is unmaintained though, regardless whether it would be part of scipy or numpy. Correct? If so, why exactly would it be better to put it under scipy rather than under numpy? Some people don't need scipy for their work and are happy with numpy alone. It used to be rather easy for them to use nd_image with numarray, but if I understand you correctly, they would now have to install an entire new package, which for scipy is still quite a challenge compared to numpy. 
Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From schofield at ftw.at Thu Mar 16 17:02:24 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 16 Mar 2006 23:02:24 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: <44195267.3000306@mecha.uni-stuttgart.de> References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> Message-ID: On 16/03/2006, at 12:56 PM, Nils Wagner wrote: > If I use a lowercase dtype name > A_csc = s.csc_matrix((3,3),complex128) > for i in arange(0,3): > > A_csr[i,i] = 1.0+2j > A_csc[i,i] = 1.0+2j > >>>> A_csc > <3x3 sparse matrix of type '' > with 3 stored elements (space for 100) > in Compressed Sparse Column format> > > Note that the imaginary part is missing ! > >>>> print A_csc > (0, 0) 1.0 > (1, 1) 1.0 > (2, 2) 1.0 Try again using: >>> A_csc = csc_matrix((3,3), dtype=complex128) The dtype keyword is necessary explicitly. I've closed the bug report for now. Could you please submit your user name with future bug reports on Trac? Thanks :) -- Ed From robert.kern at gmail.com Thu Mar 16 17:52:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Mar 2006 16:52:36 -0600 Subject: [SciPy-user] nd_image In-Reply-To: <1142545376.4419dbe0b1903@webmail.ster.kuleuven.ac.be> References: <1142545376.4419dbe0b1903@webmail.ster.kuleuven.ac.be> Message-ID: <4419EC34.50509@gmail.com> joris at ster.kuleuven.ac.be wrote: > [TO]: So, what we have in SciPy is "the official" home for the numpy > [TO]: version of his package. > > Sorry, I am a bit lost here. So nd_image was a numarray library, and is now > available via scipy where it works with numpy arrays (correct?). At the moment, > the library is unmaintained though, regardless whether it would be part of > scipy or numpy. Correct? If so, why exactly would it be better to put it under > scipy rather than under numpy? In my opinion, there should be even less stuff in numpy. 
If I had my 'druthers, linalg, random, and dft wouldn't be in numpy either. > Some people don't need scipy for their work and are happy with numpy alone. > It used to be rather easy for them to use nd_image with numarray, but if I > understand you correctly, they would now have to install an entire new package, > which for scipy is still quite a challenge compared to numpy. The problem is that this argument is the same for just about *any* scipy package. There needs to be a better reason to place something in numpy. Otherwise, we will end up with a numpy as bloated and difficult to install as scipy. If you want to ameliorate the problem, the way to do it is to work on scipy's packaging such that each subpackage can be built and installed separately. We're not very far from being able to do that. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Thu Mar 16 17:54:41 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 16 Mar 2006 15:54:41 -0700 Subject: [SciPy-user] nd_image In-Reply-To: <1142545376.4419dbe0b1903@webmail.ster.kuleuven.ac.be> References: <1142545376.4419dbe0b1903@webmail.ster.kuleuven.ac.be> Message-ID: <4419ECB1.6050709@ee.byu.edu> joris at ster.kuleuven.ac.be wrote: > [TO]: So, what we have in SciPy is "the official" home for the numpy > [TO]: version of his package. > >Sorry, I am a bit lost here. So nd_image was a numarray library, and is now >available via scipy where it works with numpy arrays (correct?). > nd_image was a Package that built on top of numarray and was also distributed with numarray. Exactly what goes in the base numpy backage and what goes into add-on packages is a question we will wrestle with for a while. With the existence of SciPy, I don't favor growing NumPy much beyond the (general) feature-set it has now. 
There are already people who grumble that it includes too much stuff that they don't need. Other functionality should be added by other packages. We are trying to make SciPy into a large collection of packages that don't all have to be installed, so that users can pick and choose what they need if they must. >At the moment, >the library is unmaintained though, regardless whether it would be part of >scipy or numpy. Correct? If so, why exactly would it be better to put it under >scipy rather than under numpy? > > Not directly accurate. The SciPy version is maintained by the SciPy developers (not by the original author). >Some people don't need scipy for their work and are happy with numpy alone. >It used to be rather easy for them to use nd_image with numarray, but if I >understand you correctly, they would now have to install an entire new package, > > Many SciPy packages can be installed separately and nd_image (now scipy.image) is one of them. So, it is very easy to just install the scipy.image package. Hopefully, more people will run with this concept as the need arises. And, scipy is not as hard to install as you might think. Especially if you go into the setup.py file and comment out all the packages you don't actually want... 
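For readers following the rename discussion: the package eventually settled on the name scipy.ndimage, and the numarray-era calls carry over almost unchanged. A minimal usage sketch (assuming a current SciPy install; gaussian_filter is one of the package's standard filters):

```python
import numpy as np
from scipy import ndimage  # the package discussed in this thread (numarray's nd_image)

# Smooth a point source with a Gaussian filter, a typical nd_image operation.
img = np.zeros((5, 5))
img[2, 2] = 1.0
smoothed = ndimage.gaussian_filter(img, sigma=1.0)

assert smoothed.shape == img.shape          # filtering preserves the shape
assert smoothed[2, 2] == smoothed.max()     # the peak stays at the point source
```

The same functions accept arrays of any dimensionality, which is the "nd" in the name.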
-Travis From nwagner at mecha.uni-stuttgart.de Fri Mar 17 02:27:21 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 17 Mar 2006 08:27:21 +0100 Subject: [SciPy-user] ValueError: invalid return array shape In-Reply-To: References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> Message-ID: <441A64D9.1060105@mecha.uni-stuttgart.de> Ed Schofield wrote: > On 16/03/2006, at 12:56 PM, Nils Wagner wrote: > > >> If I use a lowercase dtype name >> A_csc = s.csc_matrix((3,3),complex128) >> for i in arange(0,3): >> >> A_csr[i,i] = 1.0+2j >> A_csc[i,i] = 1.0+2j >> >> >>>>> A_csc >>>>> >> <3x3 sparse matrix of type '' >> with 3 stored elements (space for 100) >> in Compressed Sparse Column format> >> >> Note that the imaginary part is missing ! >> >> >>>>> print A_csc >>>>> >> (0, 0) 1.0 >> (1, 1) 1.0 >> (2, 2) 1.0 >> > > Try again using: > >>> A_csc = csc_matrix((3,3), dtype=complex128) > > The dtype keyword is necessary explicitly. I've closed the bug > report for now. Could you please submit your user name with future > bug reports on Trac? Thanks :) > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Ed, I have added the dtype keyword - it works but the iterative solvers get into trouble if the input is a dense matrix. For what reason ? Should I submit another bug report w.r.t. this issue ? 
cg --- Conjugate gradient (symmetric systems only) cgs --- Conjugate gradient squared qmr --- Quasi-minimal residual gmres --- Generalized minimal residual bicg --- Bi-conjugate gradient bicgstab --- Bi-conjugate gradient stabilized x1,info3 = linalg.gmres(A_csc_dense,b) # ValueError: invalid return array shape File "/usr/lib64/python2.4/site-packages/scipy/linalg/iterative.py", line 615, in gmres work[slice2] += sclr1*matvec(work[slice1]) ValueError: invalid return array shape >>> A_csc_dense matrix([[ 1.+2.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 1.+2.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 1.+2.j]]) >>> b array([ 0.99145813+0.30371645j, 0.36026628+0.88278217j, 0.65509504+0.08744001j]) Nils From haase at msg.ucsf.edu Fri Mar 17 00:57:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 16 Mar 2006 21:57:20 -0800 Subject: [SciPy-user] nd_image rename In-Reply-To: <4419C66E.4000803@ee.byu.edu> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> Message-ID: <441A4FC0.9000206@msg.ucsf.edu> Hi, I just noticed that nd_image is going to be named scipy.image. My concern is that it might remind too much of jpg,bmp,tiff,... image files... Also "image" sounds very 2d ! like 'picture' ;-) I thought that Peter Verveer's nd_image name nicely took care of both of these concerns... (As a reminder: Peter's package contains lots of generic ND filtering and some nice segmentation and morphology routines: Multi-dimensional image processing [http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html]) Did I miss the beginning of this thread ? Just my 2 cents. - Sebastian Haase Travis Oliphant wrote: > Ed Schofield wrote: > >> Travis Oliphant wrote: >> >> >>> Ed Schofield wrote: >>> >>> >>> >>>> I could try to make a new SciPy release this weekend. 
>>>> >>>> Before then, though, I suggest we change the package name of nd_image. >>>> >>>> >>>> >>> Probably the right thing to do, especially as we improve it. Right now >>> it's just a copy of nd_image from numarray (thus the name). >>> >>> Anybody who wants to make the necessary name changes is welcome. >>> >>> >>> >> Done it. I've renamed the package at the Python level; the C module is >> still available to Python as _nd_image, with the leading underscore etc. >> >> Is the nd_image package still being maintained in numarray, or is SciPy >> now its official home? :) >> >> > Peter, the original author, has said (for reasons that were not entirely > clear) that he will not be supporting the numpy version (or eventually > the numarray version, either --- I think he's planning on writing some > other library-based solution). > > So, what we have in SciPy is "the official" home for the numpy version > of his package. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From oliphant.travis at ieee.org Fri Mar 17 03:24:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 17 Mar 2006 01:24:26 -0700 Subject: [SciPy-user] Interesting link for ideas Message-ID: <441A723A.4050000@ieee.org> Here's a link to PyVox which might be an interesting source for code to be brought into SciPy. A lot of the code is "re-invent the wheel" type stuff, but there are some interesting algorithms that we may be able to borrow from to add to the ndimage library. 
http://www.med.upenn.edu/bbl/downloads/pyvox/index.shtml -Travis From oliphant.travis at ieee.org Fri Mar 17 03:26:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 17 Mar 2006 01:26:34 -0700 Subject: [SciPy-user] nd_image rename In-Reply-To: <441A4FC0.9000206@msg.ucsf.edu> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu> Message-ID: <441A72BA.70100@ieee.org> Sebastian Haase wrote: > Hi, > I just noticed that nd_image is going to be named scipy.image. > My concern is that that might remind to much of jpg,bmp,tiff,... image > files... > Also "image" sounds very 2d ! like 'picture' ;-) > True, it might give that connotation. While I'm not one who thinks much of the distinction between 2-d and N-d, there are many that do, So, perhaps we should rename (yet again --- isn't SVN nice :-) ) the library to ndimage or imagend, or something like that. -Travis From nwagner at mecha.uni-stuttgart.de Fri Mar 17 04:12:03 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 17 Mar 2006 10:12:03 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> Message-ID: <441A7D63.1080306@mecha.uni-stuttgart.de> Ed Schofield wrote: > On 16/03/2006, at 12:56 PM, Nils Wagner wrote: > > >> If I use a lowercase dtype name >> A_csc = s.csc_matrix((3,3),complex128) >> for i in arange(0,3): >> >> A_csr[i,i] = 1.0+2j >> A_csc[i,i] = 1.0+2j >> >> >>>>> A_csc >>>>> >> <3x3 sparse matrix of type '' >> with 3 stored elements (space for 100) >> in Compressed Sparse Column format> >> >> Note that the imaginary part is missing ! 
>> >> >>>>> print A_csc >>>>> >> (0, 0) 1.0 >> (1, 1) 1.0 >> (2, 2) 1.0 >> > > Try again using: > >>> A_csc = csc_matrix((3,3), dtype=complex128) > > The dtype keyword is necessary explicitly. I've closed the bug > report for now. Could you please submit your user name with future > bug reports on Trac? Thanks :) > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > BTW, dtype=complex256 aka dtype=Complex128 doesn't work in this context. Who can confirm this behaviour on a 64bit system ? A_csc = s.csc_matrix((3,3),dtype=complex256) for i in arange(0,3): A_csc[i,i] = 1.0+2j print A_csc Nils From oliphant.travis at ieee.org Fri Mar 17 04:19:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 17 Mar 2006 02:19:16 -0700 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: <441A7D63.1080306@mecha.uni-stuttgart.de> References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> <441A7D63.1080306@mecha.uni-stuttgart.de> Message-ID: <441A7F14.2070401@ieee.org> Nils Wagner wrote: >> >> > BTW, dtype=complex256 aka dtype=Complex128 doesn't work in this context. > > Who can confirm this behaviour on a 64bit system ? > > A_csc = s.csc_matrix((3,3),dtype=complex256) > > for i in arange(0,3): > > A_csc[i,i] = 1.0+2j > > print A_csc > In order to have complex256 defined, the long double on your machine would have to translate to 16-bytes (I don't think that's even true on 64-bit systems is it?). 
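For reference: on common 64-bit x86 Linux systems the 80-bit extended long double is padded to 16 bytes, so complex256 does exist there, while 32-bit builds pad it to 12 bytes and only get complex192. A quick check for any particular machine (a sketch using NumPy's generic longdouble spellings, which work regardless of which platform-specific alias is defined):

```python
import numpy as np

# The extended-precision complex type is just a pair of long doubles, and its
# platform alias (complex192, complex256, ...) encodes the total size in bits.
ld = np.dtype(np.longdouble)
cld = np.dtype(np.clongdouble)
print("long double:", ld.itemsize, "bytes ->", ld.name)
print("complex long double:", cld.itemsize, "bytes ->", cld.name)
print("complex256 available:", hasattr(np, "complex256"))

# A complex long double is always two long doubles, whatever the padding:
assert cld.itemsize == 2 * ld.itemsize
```

On a platform where long double is not 16 bytes, the name complex256 simply is not defined, which matches the NameError seen on 32-bit systems.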
-Travis From schofield at ftw.at Fri Mar 17 04:41:18 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 17 Mar 2006 10:41:18 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: <441A7F14.2070401@ieee.org> References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> <441A7D63.1080306@mecha.uni-stuttgart.de> <441A7F14.2070401@ieee.org> Message-ID: On 17/03/2006, at 10:19 AM, Travis Oliphant wrote: > Nils Wagner wrote: >>> >>> >> BTW, dtype=complex256 aka dtype=Complex128 doesn't work in this >> context. >> >> Who can confirm this behaviour on a 64bit system ? >> >> A_csc = s.csc_matrix((3,3),dtype=complex256) >> >> for i in arange(0,3): >> >> A_csc[i,i] = 1.0+2j >> >> print A_csc >> > > In order to have complex256 defined, the long double on your machine > would have to translate to 16-bytes (I don't think that's even true on > 64-bit systems is it?). I think complex256 is defined on Nils's 64-bit machine. Nils, is this true? -- Ed From nwagner at mecha.uni-stuttgart.de Fri Mar 17 04:48:17 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 17 Mar 2006 10:48:17 +0100 Subject: [SciPy-user] ValueError: need more than 1 value to unpack In-Reply-To: References: <44192402.4040207@mecha.uni-stuttgart.de> <44194EAD.50809@ftw.at> <44195267.3000306@mecha.uni-stuttgart.de> <441A7D63.1080306@mecha.uni-stuttgart.de> <441A7F14.2070401@ieee.org> Message-ID: <441A85E1.8090000@mecha.uni-stuttgart.de> Ed Schofield wrote: > On 17/03/2006, at 10:19 AM, Travis Oliphant wrote: > > >> Nils Wagner wrote: >> >>>> >>> BTW, dtype=complex256 aka dtype=Complex128 doesn't work in this >>> context. >>> >>> Who can confirm this behaviour on a 64bit system ? 
>>> >>> A_csc = s.csc_matrix((3,3),dtype=complex256) >>> >>> for i in arange(0,3): >>> >>> A_csc[i,i] = 1.0+2j >>> >>> print A_csc >>> >>> >> In order to have complex256 defined, the long double on your machine >> would have to translate to 16-bytes (I don't think that's even true on >> 64-bit systems is it?). >> > > > I think complex256 is defined on Nils's 64-bit machine. Nils, is > this true? > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Ed, I am quite new to 64bit systems. At least I get no error message like that Traceback (most recent call last): File "sparse_test.py", line 5, in ? A_csc = s.csc_matrix((3,3),dtype=complex256) NameError: name 'complex256' is not defined on a 32bit system. Nils From schofield at ftw.at Fri Mar 17 04:49:55 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 17 Mar 2006 10:49:55 +0100 Subject: [SciPy-user] nd_image rename In-Reply-To: <441A72BA.70100@ieee.org> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu> <441A72BA.70100@ieee.org> Message-ID: <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> On 17/03/2006, at 9:26 AM, Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi, >> I just noticed that nd_image is going to be named scipy.image. >> My concern is that that might remind to much of jpg,bmp,tiff,... >> image >> files... >> Also "image" sounds very 2d ! like 'picture' ;-) >> > True, it might give that connotation. While I'm not one who thinks > much > of the distinction between 2-d and N-d, there are many that do, > > So, perhaps we should rename (yet again --- isn't SVN nice :-) ) the > library to ndimage or imagend, or something like that. > I don't think an image has to be 2-d. 
And the package easily handles 2-d images as a special case, right? ;) But I don't feel particularly strongly about it. What do other people think? -- Ed From matthew.brett at gmail.com Fri Mar 17 06:32:23 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 17 Mar 2006 11:32:23 +0000 Subject: [SciPy-user] nd_image rename In-Reply-To: <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu> <441A72BA.70100@ieee.org> <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> Message-ID: <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> Hi, > >> I just noticed that nd_image is going to be named scipy.image. > >> My concern is that that might remind to much of jpg,bmp,tiff,... > >> image > >> files... > >> Also "image" sounds very 2d ! like 'picture' ;-) My vote is for ndimage - I tend to agree about 'image' being slightly confusing. Matthew From nwagner at mecha.uni-stuttgart.de Fri Mar 17 07:45:17 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 17 Mar 2006 13:45:17 +0100 Subject: [SciPy-user] Converting dense into sparse matrices is slow Message-ID: <441AAF5D.20903@mecha.uni-stuttgart.de> Hi all, AFAIK linalg.kron only works with dense matrices. It would be nice if kron can handle sparse matrices as well. The example (bao.py) takes a lot of time Kronecker product (sec): 6.28 Dense to sparse (sec): 70.09 Number of nonzero elements 16129 If one uses a dense matrix there are 16777216 entries. Anyway, is it possible to accelerate some operations (especially csr_matrix()) in bao.py ? Nils -------------- next part -------------- A non-text attachment was scrubbed... 
Name: bao.py Type: text/x-python Size: 875 bytes Desc: not available URL: From matthew.brett at gmail.com Fri Mar 17 07:50:35 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 17 Mar 2006 12:50:35 +0000 Subject: [SciPy-user] scipy.test problem In-Reply-To: <4419A84A.5070406@gmail.com> References: <1e2af89e0603091222i155e03cfhdcfa86bd4b64a64c@mail.gmail.com> <1e2af89e0603140243p11dcb7a9kc0b5d143a2041770@mail.gmail.com> <200603150839.48570.bgoli@sun.ac.za> <1e2af89e0603160923h6b1d2a0epff8dd7795c849f49@mail.gmail.com> <4419A84A.5070406@gmail.com> Message-ID: <1e2af89e0603170450u5755898cya4159892b6562ab7@mail.gmail.com> Hi, > Hmm. On my AMD64 Ubuntu Breezy with the same compilers that you list above and > ATLAS from the atlas3-base package, scipy.lib.lapack.test() runs perfectly. More Ubuntu 64 bit platform trivia. Compiling against atlas3 provided with Ubuntu allows scipy.test to run without error (including lib.lapack.test() obviously). I suppose there must be something wrong with the compilation of lapack or ATLAS then. Maybe a problem with 3.7.11. Etc etc. Dammit, what a way to spend your day. Incidentally, scipy.test(10) still dies at scipy.linalg with: Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | basic | scipy | basic 20 | 0.20 | 0.28 | 0.23 | 0.30 (secs for 2000 calls) 100 | 0.32 | 0.45 | 0.38 | 0.53 (secs for 300 calls) 500Illegal instruction This error does not occur with my home-compiled atlas libraries. AMD specific compile for Ubuntu atlas maybe? 
Best, Matthew From schofield at ftw.at Fri Mar 17 08:38:13 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 17 Mar 2006 14:38:13 +0100 Subject: [SciPy-user] maxentropy In-Reply-To: <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> Message-ID: <441ABBC5.90304@ftw.at> Hi again, Matt ... I hope you don't mind my forwarding this to the list too: > I appreciate your help with this. I'm not sure a ton of stuff will > have to be changed. Perhaps the model may need a field for observable > data, so the features can map (x,y) to a scalar. I think the > probdist/pmf method in your current model class will need to change, > maybe to a method for evaluating P(y|X) for a specific document X. > > I'm really happy to see this module in Scipy, so if there's anything I > can do to help, please let me know. Now I've been praised in public! I feel warm and fuzzy. My current thinking is to create a new class, e.g. 'conditionalmodel', that derives from the model class and overrides the necessary methods. I've written most of the code I think is necessary for this, with a couple of example scripts with comments to try to explain what's going on. It's not yet working fully, but I've now made it available anyway in my development branch. You can get it using svn checkout http://svn.scipy.org/svn/scipy/branches/ejs scipy_ejs The two example scripts are in the maxentropy/examples/ directory. Matt, would you like to take it from here? My implementation is based on a paper by Robert Malouf, "A comparison of algorithms for maximum entropy parameter estimation", 2002. He also made the source code available for his implementation, which is now at http://tadm.sourceforge.net/. I've used this for inspiration, and it probably deserves more careful study. 
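The conditional model under discussion reduces mathematically to P(y|x) = exp(w . f(x,y)) / Z(x), where Z(x) normalizes over the candidate labels. A toy sketch of just that evaluation step (the function name and array layout here are invented for illustration; this is not the scipy maxentropy API):

```python
import numpy as np

def cond_prob(w, feats):
    """P(y | x) for one context x under a conditional maxent model.

    feats has shape (n_labels, n_features); row y holds f(x, y).
    Illustrative only; not the scipy maxentropy interface.
    """
    scores = feats @ w
    scores = scores - scores.max()   # stabilize the exponentials
    p = np.exp(scores)
    return p / p.sum()               # divide by Z(x)

# Two candidate labels, three features, for some fixed document x:
feats = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
w = np.array([0.5, -0.2, 0.1])
p = cond_prob(w, feats)
assert np.isclose(p.sum(), 1.0)
```

Parameter estimation (the part Malouf's paper compares algorithms for) then amounts to choosing w so that the model's expected feature values match the empirical ones.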
-- Ed From bgoli at sun.ac.za Fri Mar 17 09:29:23 2006 From: bgoli at sun.ac.za (Brett Olivier) Date: Fri, 17 Mar 2006 16:29:23 +0200 Subject: [SciPy-user] scipy.test problem In-Reply-To: <1e2af89e0603170450u5755898cya4159892b6562ab7@mail.gmail.com> References: <4419A84A.5070406@gmail.com> <1e2af89e0603170450u5755898cya4159892b6562ab7@mail.gmail.com> Message-ID: <200603171629.24128.bgoli@sun.ac.za> Hi Matthew On Friday 17 March 2006 14:50, Matthew Brett wrote: > I suppose there must be something wrong with the compilation of lapack > or ATLAS then. Maybe a problem with 3.7.11. Etc etc. Dammit, what a > way to spend your day. Just out of interest, what compiler options are you using to build LAPACK and ATLAS? On the 64 bit P4 I use for LAPACK (in make.inc): OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC NOOPT = -m64 -fno-second-underscore -fPIC and if I remember correctly, for ATLAS 3.7.11 I added (to Make. after ATLAS configuration): -fPIC and -m64 to various compiler options. All compiled with gcc/g77 3.4.3. Brett > Incidentally, scipy.test(10) still dies at scipy.linalg with: > > Finding matrix determinant > ================================== > > | contiguous | non-contiguous > > ---------------------------------------------- > size | scipy | basic | scipy | basic > 20 | 0.20 | 0.28 | 0.23 | 0.30 (secs for 2000 calls) > 100 | 0.32 | 0.45 | 0.38 | 0.53 (secs for 300 calls) > 500Illegal instruction > > This error does not occur with my home-compiled atlas libraries. AMD > specific compile for Ubuntu atlas maybe? 
> > Best, > > Matthew > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From aisaac at american.edu Fri Mar 17 09:45:37 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 17 Mar 2006 09:45:37 -0500 Subject: [SciPy-user] nd_image rename In-Reply-To: <441A72BA.70100@ieee.org> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com><44198170.1050309@ieee.org> <441999B9.9020209@ftw.at><44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at><4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu><441A72BA.70100@ieee.org> Message-ID: On Fri, 17 Mar 2006, Travis Oliphant apparently wrote: > ndimage or imagend or image_n (imagine) Cheers, Alan Isaac From arnd.baecker at web.de Fri Mar 17 09:51:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 17 Mar 2006 15:51:28 +0100 (CET) Subject: [SciPy-user] nd_image rename In-Reply-To: References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com><44198170.1050309@ieee.org> <441999B9.9020209@ftw.at><44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at><4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu><441A72BA.70100@ieee.org> Message-ID: On Fri, 17 Mar 2006, Alan G Isaac wrote: > On Fri, 17 Mar 2006, Travis Oliphant apparently wrote: > > ndimage or imagend > > or image_n (imagine) great - what about another round of name-changing including a voting page? 
(I loved the one leading to numpy ...;-) Sorry, couldn't resist, Arnd From matthew.brett at gmail.com Fri Mar 17 10:33:53 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 17 Mar 2006 15:33:53 +0000 Subject: [SciPy-user] scipy.test problem In-Reply-To: <200603171629.24128.bgoli@sun.ac.za> References: <4419A84A.5070406@gmail.com> <1e2af89e0603170450u5755898cya4159892b6562ab7@mail.gmail.com> <200603171629.24128.bgoli@sun.ac.za> Message-ID: <1e2af89e0603170733uba9d8e0r4fb57009c4cef2f1@mail.gmail.com> Hi, > Just out of interest, what compiler options are you using to build LAPACK and > ATLAS? On the 64 bit P4 I use for LAPACK (in make.inc): > > OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC > NOOPT = -m64 -fno-second-underscore -fPIC OPTS = -funroll-all-loops -fno-f2c -O3 -m64 -fPIC NOOPT = -m64 -fPIC > and if I remember correctly, for ATLAS 3.7.11 I added (to Make. after > ATLAS configuration): > -fPIC and -m64 to various compiler options. -fomit-frame-pointer -funroll-all-loops -mfpmath=sse -m64 -fPIC with -03, -O as per the atlas defaults, and -march=nocona for gcc... I've attached my build scripts in case they're useful. Best, Matthew -------------- next part -------------- A non-text attachment was scrubbed... Name: atlas_lapack_build.tgz Type: application/x-gzip Size: 3017 bytes Desc: not available URL: From oliphant at ee.byu.edu Fri Mar 17 14:37:28 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 17 Mar 2006 12:37:28 -0700 Subject: [SciPy-user] Call for posters to gmane.comp.python.devel newsgroup Message-ID: <441B0FF8.3000606@ee.byu.edu> I'm trying to start discussions on python dev about getting a simple object into Python that at least exposes the array interface and/or has the basic C-structure of NumPy arrays. Please voice your support and comments on the newsgroup. The more people that respond, the more the python developers will see that it's not just my lonely voice asking for things to change. 
Perhaps it will help somebody with more time to get a PEP written up. I doubt we will make it into Python 2.5, unless somebody steps up in the next month, but it will help for Python 2.6 Thanks, -Travis From Fernando.Perez at colorado.edu Fri Mar 17 14:43:53 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 17 Mar 2006 12:43:53 -0700 Subject: [SciPy-user] nd_image rename In-Reply-To: <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <44198170.1050309@ieee.org> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu> <441A72BA.70100@ieee.org> <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> Message-ID: <441B1179.1040202@colorado.edu> Matthew Brett wrote: > Hi, > > >>>>I just noticed that nd_image is going to be named scipy.image. >>>>My concern is that that might remind to much of jpg,bmp,tiff,... >>>>image >>>>files... >>>>Also "image" sounds very 2d ! like 'picture' ;-) > > > My vote is for ndimage - I tend to agree about 'image' being slightly confusing. +1 on ndimage: scipy includes pilutils, which wraps the PIL whose import statement is import Image and is very oriented towards 'image as a picture' processing. I think it's a good idea to avoid confusion in the minds of possible users between the well established Image module and scipy's more flexible (but with a different focus) n-dimensional package. 
Cheers, f From haase at msg.ucsf.edu Fri Mar 17 14:36:11 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 17 Mar 2006 11:36:11 -0800 Subject: [SciPy-user] nd_image rename In-Reply-To: <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> Message-ID: <200603171136.12058.haase@msg.ucsf.edu> On Friday 17 March 2006 03:32, Matthew Brett wrote: > Hi, > > > >> I just noticed that nd_image is going to be named scipy.image. > > >> My concern is that that might remind to much of jpg,bmp,tiff,... > > >> image > > >> files... > > >> Also "image" sounds very 2d ! like 'picture' ;-) > > My vote is for ndimage - I tend to agree about 'image' being slightly > confusing. > > Matthew ndimage would probably be my vote too - nd_image was OK too... ( I don't like image_n !) -- - Sebastian Haase From Fernando.Perez at colorado.edu Fri Mar 17 14:56:32 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 17 Mar 2006 12:56:32 -0700 Subject: [SciPy-user] nd_image rename In-Reply-To: <200603171136.12058.haase@msg.ucsf.edu> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> <200603171136.12058.haase@msg.ucsf.edu> Message-ID: <441B1470.80903@colorado.edu> Sebastian Haase wrote: > On Friday 17 March 2006 03:32, Matthew Brett wrote: > >>Hi, >> >> >>>>>I just noticed that nd_image is going to be named scipy.image. >>>>>My concern is that that might remind to much of jpg,bmp,tiff,... >>>>>image >>>>>files... >>>>>Also "image" sounds very 2d ! like 'picture' ;-) >> >>My vote is for ndimage - I tend to agree about 'image' being slightly >>confusing. >> >>Matthew > > > ndimage would probably be my vote too - nd_image was OK too... 
> ( I don't like image_n !) I think in scipy we're trying to (loosely) follow the Python naming PEP, which discourages underscores in module names. In that case, ndimage would win over nd_image. Cheers, f From jonathan.taylor at stanford.edu Fri Mar 17 15:00:38 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 17 Mar 2006 12:00:38 -0800 Subject: [SciPy-user] nd_image rename In-Reply-To: <441B1470.80903@colorado.edu> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <49F86C7A-3F16-4744-9936-8B440BDD3E4B@ftw.at> <1e2af89e0603170332v69917331m57462daccacf38f1@mail.gmail.com> <200603171136.12058.haase@msg.ucsf.edu> <441B1470.80903@colorado.edu> Message-ID: <441B1566.1070206@stanford.edu> i vote for ndimage as well.... -- jonathan Fernando Perez wrote: >Sebastian Haase wrote: > > >>On Friday 17 March 2006 03:32, Matthew Brett wrote: >> >> >> >>>Hi, >>> >>> >>> >>> >>>>>>I just noticed that nd_image is going to be named scipy.image. >>>>>>My concern is that that might remind to much of jpg,bmp,tiff,... >>>>>>image >>>>>>files... >>>>>>Also "image" sounds very 2d ! like 'picture' ;-) >>>>>> >>>>>> >>>My vote is for ndimage - I tend to agree about 'image' being slightly >>>confusing. >>> >>>Matthew >>> >>> >>ndimage would probably be my vote too - nd_image was OK too... >>( I don't like image_n !) >> >> > >I think in scipy we're trying to (loosely) follow the Python naming PEP, which >discourages underscores in module names. In that case, ndimage would win over >nd_image. > >Cheers, > >f > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! 
------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From gruben at bigpond.net.au Fri Mar 17 18:46:12 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 18 Mar 2006 10:46:12 +1100 Subject: [SciPy-user] nd_image rename In-Reply-To: References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com><44198170.1050309@ieee.org> <441999B9.9020209@ftw.at><44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at><4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu><441A72BA.70100@ieee.org> Message-ID: <441B4A44.9070001@bigpond.net.au> I think, in the true spirit of python, every extension module should be renamed Bruce. Slightly more seriously, I agree it should be changed to avoid confusion with PIL. You could call it ndimage or imageproc or ndimproc. Gary R Arnd Baecker wrote: > On Fri, 17 Mar 2006, Alan G Isaac wrote: > >> On Fri, 17 Mar 2006, Travis Oliphant apparently wrote: >>> ndimage or imagend >> or image_n (imagine) > > great - what about another round of name-changing including > a voting page? (I loved the one leading to numpy ...;-) > > Sorry, couldn't resist, Arnd From m.cooper at computer.org Fri Mar 17 19:31:24 2006 From: m.cooper at computer.org (Matthew Cooper) Date: Fri, 17 Mar 2006 16:31:24 -0800 Subject: [SciPy-user] maxentropy In-Reply-To: <441ABBC5.90304@ftw.at> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> Message-ID: <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> Hi Ed, Thanks again for working on this. 
I can try and work on it a bit this weekend. I've had time to look over the two example scripts you provided. There seemed to be some difference in the two in terms of the call to the conditionalmodel fit method. In the low level example, the count parameter seemed to provide the empirical counts of the feature functions, where the features were simply (context,label) co-occurrence. In the high level example, the features are more complicated, and the counts parameter seems to have different dimensionality. I'll try and get a working high level example together next. /mc On 3/17/06, Ed Schofield wrote: > > Hi again, Matt ... I hope you don't mind my forwarding this to the list > too: > > I appreciate your help with this. I'm not sure a ton of stuff will > > have to be changed. Perhaps the model may need a field for observable > > data, so the features can map (x,y) to a scalar. I think the > > probdist/pmf method in your current model class will need to change, > > maybe to a method for evaluating P(y|X) for a specific document X. > > > > I'm really happy to see this module in Scipy, so if there's anything I > > can do to help, please let me know. > > Now I've been praised in public! I feel warm and fuzzy. > > My current thinking is to create a new class, e.g. 'conditionalmodel', > that derives from the model class and overrides the necessary methods. > I've written most of the code I think is necessary for this, with a > couple of example scripts with comments to try to explain what's going > on. It's not yet working fully, but I've now made it available anyway > in my development branch. You can get it using > > svn checkout http://svn.scipy.org/svn/scipy/branches/ejs scipy_ejs > > The two example scripts are in the maxentropy/examples/ directory. > > Matt, would you like to take it from here? My implementation is based > on a paper by Robert Malouf, "A comparison of algorithms for maximum > entropy parameter estimation", 2002. 
He also made the source code > available for his implementation, which is now at > http://tadm.sourceforge.net/. I've used this for inspiration, and it > probably deserves more careful study. > > > -- Ed > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pebarrett at gmail.com Fri Mar 17 21:54:16 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Fri, 17 Mar 2006 21:54:16 -0500 Subject: [SciPy-user] nd_image rename In-Reply-To: <441B4A44.9070001@bigpond.net.au> References: <1d987df30603160105u66e85d18n53e913de1230537d@mail.gmail.com> <441999B9.9020209@ftw.at> <44199B77.1010109@ieee.org> <4419B8BD.8020808@ftw.at> <4419C66E.4000803@ee.byu.edu> <441A4FC0.9000206@msg.ucsf.edu> <441A72BA.70100@ieee.org> <441B4A44.9070001@bigpond.net.au> Message-ID: <40e64fa20603171854jce011br3392900ed68379c5@mail.gmail.com> On 3/17/06, Gary Ruben wrote: > > I think, in the true spirit of python, every extension module should be > renamed Bruce. > > Slightly more seriously, I agree it should be changed to avoid confusion > with PIL. You could call it ndimage or imageproc or ndimproc. > In the spirit of political correctness, every other extension module should be renamed Sheila. I prefer imageproc over ndimage for what it's worth. -- Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ganesh_v at iitm.ac.in Sat Mar 18 09:17:24 2006 From: ganesh_v at iitm.ac.in (Ganesh V) Date: Sat, 18 Mar 2006 19:47:24 +0530 Subject: [SciPy-user] Complex Numbers in ODEINT and BVP's Message-ID: <20060318141717.GA6851@localhost.localdomain> Hi! I want to use complex numbers in the odeint package. Let me make it more precise. I have an eigenvalue inside the problem that I find by other means. (Related to acoustics). Now that can be complex. Is it possible in the current scipy? In my case it was a Linear ODE, and hence I split into separate ones for real and imaginary ones and solved it.
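The splitting approach described above (integrating the real and imaginary parts as a coupled real-valued system) can be sketched with scipy's odeint; the ODE dz/dt = (a + ib)z below is a hypothetical stand-in, not the poster's actual acoustics problem:

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical linear ODE dz/dt = k*z with a complex eigenvalue k = a + 1j*b.
# odeint integrates real-valued systems only, so split z = x + 1j*y into
#   dx/dt = a*x - b*y
#   dy/dt = b*x + a*y
a, b = -0.5, 2.0

def rhs(state, t):
    x, y = state
    return [a * x - b * y, b * x + a * y]

t = np.linspace(0.0, 5.0, 101)
sol = odeint(rhs, [1.0, 0.0], t)   # initial condition z(0) = 1 + 0j
z = sol[:, 0] + 1j * sol[:, 1]     # reassemble the complex solution

# Compare with the exact solution z(t) = exp((a + 1j*b) * t)
err = np.max(np.abs(z - np.exp((a + 1j * b) * t)))
print(err)   # small, on the order of the integrator tolerance
```

The same doubling trick works for any linear complex ODE, at the cost of a state vector twice as long.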
The other question is about Boundary value problem solvers. i.e. can odeint or anything else solve Boundary value problems instead of initial value problems, or do I have to resort to an iterative procedure myself instead of the solver doing it? (AFAIK matlab solves it in a similar way, assuming a derivative value at 0, and then iterating over it to meet the boundary value at the other end) Bye! -- Ganesh V Undergraduate student, Department of Aerospace Engineering, IIT Madras, India. My homepage --> http://www.ae.iitm.ac.in/~ae03b007 From schofield at ftw.at Sat Mar 18 12:34:54 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 18 Mar 2006 18:34:54 +0100 Subject: [SciPy-user] [ANN] SciPy 0.4.8 released Message-ID: <441C44BE.5040802@ftw.at> =========================== SciPy 0.4.8 Scientific tools for Python =========================== I'm pleased to announce the release of SciPy 0.4.8. This release adds support for the latest NumPy (version 0.9.6) and adds Peter Verveer's multi-dimensional image processing package. It also has enhancements for sparse matrices and maximum entropy modelling, fixes bugs in linear algebra, simulated annealing, statistics, least squares, sparse matrices, and has seen extensive code cleanups. It is available for download from http://www.scipy.org/Download as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and 32-bit) and as an executable installer for Win32. More information on SciPy is available at http://www.scipy.org/ =========================== SciPy is an Open Source library of scientific tools for Python. It contains a variety of high-level science and engineering modules, including modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, genetic algorithms, ODE solvers, special functions, and more.
From fonnesbeck at gmail.com Sat Mar 18 13:28:44 2006 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Sat, 18 Mar 2006 13:28:44 -0500 Subject: [SciPy-user] [ANN] SciPy 0.4.8 released In-Reply-To: <441C44BE.5040802@ftw.at> References: <441C44BE.5040802@ftw.at> Message-ID: <723eb6930603181028y6c130a32l9fcaf55a9a83cef6@mail.gmail.com> On 3/18/06, Ed Schofield wrote: > > It is available for download from > > http://www.scipy.org/Download > > as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and 32-bit) > and as an executable installer for Win32. > I have also added OSX binary installers in .dmg and .egg format. C. -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Sat Mar 18 17:32:57 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 18 Mar 2006 23:32:57 +0100 Subject: [SciPy-user] maxentropy In-Reply-To: <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> Message-ID: On 18/03/2006, at 1:31 AM, Matthew Cooper wrote: > > Hi Ed, > > Thanks again for working on this. I can try and work on it a bit > this weekend. I've had time to look over the two example scripts > you provided. There seemed to be some difference in the two in > terms of the call to the conditionalmodel fit method. In the low > level example, the count parameter seemed to provide the empirical > counts of the feature functions, where the features were simply > (context,label) co-occurrence. In the high level example, the > features are more complicated, and the counts parameter seems to > have different dimensionality. I'll try and get a working high > level example together next. > Hi Matt, I've now found and fixed some bugs in the conditional maxent code. 
The computation of the conditional expectations was wrong, and the p_tilde parameter was interpreted inconsistently. Both the examples work now! Fantastic! I'd be very grateful for any assistance you could give in providing more examples -- especially real examples from text classification. The two examples at the moment are too artificial and perhaps a bit confusing. Or if you have any suggestions or patches for simplifying the interface (e.g. the constructor arguments) or any other improvements (e.g. bug fixes, better docs, or a tutorial) I'd also readily merge them. Let me know how you go with it. When you're happy that it's all working, I'll merge it with the main SVN trunk. -- Ed From marjan.grah at guest.arnes.si Sun Mar 19 16:52:41 2006 From: marjan.grah at guest.arnes.si (Marjan Grah) Date: Sun, 19 Mar 2006 22:52:41 +0100 Subject: [SciPy-user] ImportError: No module named win32pdh Message-ID: <441DD2A9.2020206@guest.arnes.si> Dear all, I have some "strange" error. I installed the numpy 0.9.6 (from the file: numpy-0.9.6.win32-py2.4.exe; in the file c:\Python24\Lib\site-packages\numpy\version.py the first line is version='0.9.6') scipy version 0.4.8 (from the file: scipy-0.4.8.win32-py2.4-pentium4sse2.exe; in the file c:\Python24\Lib\site-packages\scipy\__svn_version__.py I have: version = '1738') and when I wish to import scipy I got this error: >>> from scipy import * Traceback (most recent call last): File "", line 1, in -toplevel- from scipy import * File "C:\Python24\Lib\site-packages\scipy\lib\__init__.py", line 4, in -toplevel- from numpy.testing import ScipyTest File "C:\Python24\Lib\site-packages\numpy\testing\__init__.py", line 3, in -toplevel- from numpytest import * File "C:\Python24\Lib\site-packages\numpy\testing\numpytest.py", line 18, in -toplevel- from utils import jiffies File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 69, in -toplevel- import win32pdh ImportError: No module named win32pdh For the python I have installed 
this version: >>> import sys >>> sys.version '2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)]' my operating system is Windows XP with service pack 2. The computer is not Intel Pentium but an AMD Opteron 148. Before this installation I used Numpy version 0.94 and scipy 0.4.6 and everything worked. Could somebody help me? Thank you in advance. Marjan Grah From simon.j.hook at jpl.nasa.gov Sun Mar 19 17:31:14 2006 From: simon.j.hook at jpl.nasa.gov (Simon Hook) Date: Sun, 19 Mar 2006 14:31:14 -0800 Subject: [SciPy-user] Enthon, Scipy, Numpy, 64-bit windows binaries Message-ID: <441DDBB2.2010006@jpl.nasa.gov> All, A few weeks ago there was a thread about users waiting for a particular release of Scipy/Numpy before trying their code with it. We are waiting for a new release of Enthon including Scipy/Numpy before trying our code. In addition we are planning on getting some new machines with 64-bit windows XP. My questions: Any idea when the new version of Enthon will be out? The Enthon folks said weeks rather than months about a month ago. Will there be a 64-bit version of Enthon, Numpy, Scipy as a downloadable binary for 64-bit XP? I do not have any experience running 64-bit windows - are there any issues running 32-bit programs on 64-bit windows systems? Many thanks, especially to all the people putting Numpy/Scipy/Enthon together. Simon p.s. I am happy to purchase a copy of the new Enthon and hope to provide some examples to the Scipy Wiki when we get the new Scipy/Numpy/Enthon going.
From robert.kern at gmail.com Sun Mar 19 18:45:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 19 Mar 2006 17:45:14 -0600 Subject: [SciPy-user] Enthon, Scipy, Numpy, 64-bit windows binaries In-Reply-To: <441DDBB2.2010006@jpl.nasa.gov> References: <441DDBB2.2010006@jpl.nasa.gov> Message-ID: <441DED0A.2000700@gmail.com> Simon Hook wrote: > All, > > A few weeks ago there was a thread about users waiting for a particular > release of Scipy/Numpy before trying their code with it. We are waiting > for a new release of Enthon including Scipy/Numpy before trying our > code. In addition we are planning on getting some new machines with > 64-bit windows XP. My questions: > > Any idea when the new version of Enthon will be out. The Enthon folks > said weeks rather than months about a month ago? 0.9.3 should be out sometime this week, I think. It's currently being tested. But I'm pretty sure we never promised an Enthon with numpy and scipy 0.4 in that time frame. It probably won't be until some time in April before we can transition our commercial product line to numpy. Enthon exists primarily to support the applications that we deliver to customers. > Will there be a 64-bit version of Enthon, Numpy, Scipy as a downloadable > binary for 64-bit XP? None of our customers have requested deliverables on 64-bit Windows, so there probably won't be an Enthon for that platform until that happens. I haven't heard of anyone trying to build numpy or scipy on that platform, either. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From webb.sprague at gmail.com Mon Mar 20 00:55:22 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Sun, 19 Mar 2006 21:55:22 -0800 Subject: [SciPy-user] Optimization / fitting routines Message-ID: Hi all, (I am a little bit over my head, so if I say things that sound stupid, please excuse.) (1) Could someone point me toward documentation for optimization routines in numpy? I need the routine to be very general, excepting an arbitrary Python function that accepts a test value, and a target value for that function. We can assume that the function will be monotonic. (2) Could someone point me or recommend a book that clearly describes these problems mathematically at an "advanced undergrad" level? Currently we have a hand-rolled binary search like thing, but it can narrow down on a value .2 off from the target, but because the search increment is always shrinking, the routine loops forever. Thanks for your help and your patience :) W From robert.kern at gmail.com Mon Mar 20 01:37:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 00:37:40 -0600 Subject: [SciPy-user] Optimization / fitting routines In-Reply-To: References: Message-ID: <441E4DB4.4060500@gmail.com> Webb Sprague wrote: > Hi all, > > (I am a little bit over my head, so if I say things that sound stupid, > please excuse.) > > (1) Could someone point me toward documentation for optimization > routines in numpy? I need the routine to be very general, excepting > an arbitrary Python function that accepts a test value, and a target > value for that function. We can assume that the function will be > monotonic. It seems like you have a root-finding problem. I'm not really sure; your description is vague. Do you mean that you have a function f(x) taking a scalar real number and returning a scalar real number? And you want to find the x such that f(x) equals some given number y? 
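When f is monotonic, finding the x with f(x) equal to a target y is exactly this kind of scalar root-finding problem: subtract the target and bracket the root. A minimal sketch with scipy.optimize.brentq, where f below is a made-up monotonic function standing in for the poster's:

```python
from scipy.optimize import brentq

def f(x):
    # Made-up monotonic function standing in for the real one
    return x**3 + 2.0 * x

y = 10.0                  # target value
g = lambda x: f(x) - y    # a root of g is an x with f(x) == y

# brentq needs a bracket [lo, hi] with g(lo) and g(hi) of opposite sign
x = brentq(g, 0.0, 10.0)
print(abs(f(x) - y))      # effectively zero
```

Unlike a hand-rolled bisection, brentq combines bisection with interpolation and stops at a well-defined tolerance, so it cannot loop forever on a valid bracket.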
In that case, recast your problem a little: define g(x) = (f(x) - y) and use one of the root-finding routines in scipy.optimize to find the x that makes g(x) = 0. In [5]: scipy.optimize? Type: module Base Class: String Form: Namespace: Interactive File: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy-0.4.7.1607-py2.4-mac osx-10.4-ppc.egg/scipy/optimize/__init__.py Docstring: Optimization Tools ================== ... Also a collection of general_purpose root-finding routines. fsolve -- Non-linear multi-variable equation solver. Scalar function solvers brentq -- quadratic interpolation Brent method brenth -- Brent method (modified by Harris with hyperbolic extrapolation) ridder -- Ridder's method bisect -- Bisection method newton -- Secant method or Newton's method fixed_point -- Single-variable fixed-point solver. > (2) Could someone point me or recommend a book that clearly describes > these problems mathematically at an "advanced undergrad" level? _Numerical Recipes_ does a reasonable job of describing the problems. I don't like their code, but the discussions are pretty good. You can even read it online, if you like: http://www.numerical-recipes.com/nronline_switcher.html -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ckkart at hoc.net Mon Mar 20 01:40:35 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 20 Mar 2006 15:40:35 +0900 Subject: [SciPy-user] Optimization / fitting routines In-Reply-To: References: Message-ID: <441E4E63.4000104@hoc.net> Webb Sprague wrote: > Hi all, > > (I am a little bit over my head, so if I say things that sound stupid, > please excuse.) > > (1) Could someone point me toward documentation for optimization > routines in numpy? 
I need the routine to be very general, excepting > an arbitrary Python function that accepts a test value, and a target > value for that function. We can assume that the function will be > monotonic. An overview of the optimization routines is available in the docstring: help(optimize) And each of the routines has some further documentation, e.g. help(optimize.fmin) The following code snippet finds the minimum of a parabola: from scipy import optimize def func(x): return .3*x**2-3.4*x res = optimize.fmin(func, [7]) Optimization terminated successfully. Current function value: -9.633333 Iterations: 16 Function evaluations: 32 print res [ 5.66665039] > (2) Could someone point me or recommend a book that clearly describes > these problems mathematically at an "advanced undergrad" level? I guess many of them are described in the 'Numerical Recipes'. Hope this helps, Christian From robert.kern at gmail.com Mon Mar 20 01:45:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 00:45:32 -0600 Subject: [SciPy-user] ImportError: No module named win32pdh In-Reply-To: <441DD2A9.2020206@guest.arnes.si> References: <441DD2A9.2020206@guest.arnes.si> Message-ID: <441E4F8C.1010309@gmail.com> Marjan Grah wrote: > Dear all, > > I have some "strange" error.
I installed the numpy 0.9.6 (from the file: > numpy-0.9.6.win32-py2.4.exe; in the file > c:\Python24\Lib\site-packages\numpy\version.py the first line is > version='0.9.6') > scipy version 0.4.8 (from the file: > scipy-0.4.8.win32-py2.4-pentium4sse2.exe; in the file > c:\Python24\Lib\site-packages\scipy\__svn_version__.py I have: version = > '1738') and when I wish to > import scipy I got this error: > > >>> from scipy import * > > Traceback (most recent call last): > File "", line 1, in -toplevel- > from scipy import * > File "C:\Python24\Lib\site-packages\scipy\lib\__init__.py", line 4, in > -toplevel- > from numpy.testing import ScipyTest > File "C:\Python24\Lib\site-packages\numpy\testing\__init__.py", line > 3, in -toplevel- > from numpytest import * > File "C:\Python24\Lib\site-packages\numpy\testing\numpytest.py", line > 18, in -toplevel- > from utils import jiffies > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 69, > in -toplevel- > import win32pdh > ImportError: No module named win32pdh > > For the python I have installed this version: > >>> import sys > >>> sys.version > '2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)]' > > my operatting system is Windows XP with service pack 2. The computer is > not Intel Pentium but an AMD Opteron 148. > Before this instalation I use Numpy version 0.94 and scipy 0.4.6 and > everything work. Could somebody help me? This is a known issue. A dependency on the win32all package (which provides win32pdh) snuck in. We will be removing it in the next release. In the meantime, you can either install win32all (which I do recommend; it's widely considered an essential package for Windows users): http://starship.python.net/~skippy/win32/ Or you can change numpy/testing/utils.py on line 67 or thereabouts. 
Change the line that looks like this (or something like it): if os.name=='nt' and sys.version[:3] > '2.3': to this: if False: -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From d.howey at imperial.ac.uk Mon Mar 20 04:06:09 2006 From: d.howey at imperial.ac.uk (Howey, David A) Date: Mon, 20 Mar 2006 09:06:09 -0000 Subject: [SciPy-user] Online message repository? Message-ID: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> Does anyone know if there is an online archive of past scipy-user, matplotlib and ipython list server emails? I find the list server very useful, but the volume of messages is enormous and my mailbox keeps getting overloaded. If there were an online searchable archive this would be amazing! Dave From arnd.baecker at web.de Mon Mar 20 04:28:42 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 20 Mar 2006 10:28:42 +0100 (CET) Subject: [SciPy-user] Online message repository? In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> Message-ID: Hi, On Mon, 20 Mar 2006, Howey, David A wrote: > Does anyone know if there is an online archive of past scipy-user, > matplotlib and ipython list server emails? For numpy-discussion, Scipy-User and Scipy-Dev have a look at http://www.scipy.org/Mailing_Lists ("Archives") where there is also a search option. For ipython: http://scipy.net/mailman/listinfo/ipython-user (on http://ipython.scipy.org/ you'll also find the link to the gmane news gateway). For matplotlib: http://matplotlib.sourceforge.net/ ==> "Mailing List" ==> "matplotlib-users Archives" > I find the list server very useful, but the volume of messages is > enormous and my mailbox keeps getting overloaded.
If there were an > online searchable archive this would be amazing! Yes, the volume has gone up a bit (which is a very good sign!). You could also consider subscribing in "digest" mode. Best, Arnd From yannick.dirou at axetic.com Mon Mar 20 04:37:08 2006 From: yannick.dirou at axetic.com (Yannick Dirou) Date: Mon, 20 Mar 2006 10:37:08 +0100 Subject: [SciPy-user] Filtering high frequency noise Message-ID: <441E77C4.5070702@axetic.com> Hello, I have a multirate signal (from 18 to 24 samples per day), and if I plot it I see something like "high frequency" noise (actually not that high but higher than the remaining of the signal), I thought about using a median filter but this is not good for a multirate signal, then I thought I could use scipy filters to do the job, unfortunately I know nearly nothing in filter design, and don't know how to do the job. Is there a tutorial or simple example to design a low pass filter? The signal data is made of a datetime in epoch format and the measured value. Thanks in advance, Yannick From tim.leslie at gmail.com Mon Mar 20 06:14:02 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 20 Mar 2006 22:14:02 +1100 Subject: [SciPy-user] Online message repository? In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> Message-ID: On 3/20/06, Howey, David A wrote: > > Does anyone know if there is an online archive of past scipy-user, > matplotlib and ipython list server emails? > I find the list server very useful, but the volume of messages is > enormous and my mailbox keeps getting overloaded. If there were an > online searchable archive this would be amazing! I like to get all my mail delivered to my gmail account, since it's simple to set up the filtering, has excellent searching and oodles of space which isn't on my computer. If you (or anyone else) wants a gmail invite feel free to mail me off list and I'll send you one.
Tim Dave > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Mar 20 06:39:57 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 20 Mar 2006 12:39:57 +0100 Subject: [SciPy-user] Converting dense into sparse matrices is slow In-Reply-To: <441AAF5D.20903@mecha.uni-stuttgart.de> References: <441AAF5D.20903@mecha.uni-stuttgart.de> Message-ID: <441E948D.4010904@ntc.zcu.cz> Nils Wagner wrote: > Hi all, > > AFAIK linalg.kron only works with dense matrices. > It would be nice if kron can handle sparse matrices as well. > The example (bao.py) takes a lot of time > > Kronecker product (sec): 6.28 > Dense to sparse (sec): 70.09 > Number of nonzero elements 16129 > If one uses a dense matrix there are 16777216 entries. > > Anyway, is it possible to accelerate some operations (especially > csr_matrix()) in bao.py ? > > Nils Hi Nils, the actual conversion is done by *fulltocsc() function of sparsetools, which IMHO allocates space for the whole dense matrix which is very large in your case. Maybe a two-pass approach would be faster - 1. count the actual nonzeros, 2. build the matrix. I cannot try it right now, though... r. From nwagner at mecha.uni-stuttgart.de Mon Mar 20 06:53:36 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 12:53:36 +0100 Subject: [SciPy-user] Converting dense into sparse matrices is slow In-Reply-To: <441E948D.4010904@ntc.zcu.cz> References: <441AAF5D.20903@mecha.uni-stuttgart.de> <441E948D.4010904@ntc.zcu.cz> Message-ID: <441E97C0.9050904@mecha.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Hi all, >> >> AFAIK linalg.kron only works with dense matrices. >> It would be nice if kron can handle sparse matrices as well. 
>> The example (bao.py) takes a lot of time >> >> Kronecker product (sec): 6.28 >> Dense to sparse (sec): 70.09 >> Number of nonzero elements 16129 >> If one uses a dense matrix there are 16777216 entries. >> >> Anyway, is it possible to accelerate some operations (especially >> csr_matrix()) in bao.py ? >> >> Nils >> > > Hi Nils, > > the actual conversion is done by *fulltocsc() function of sparsetools, > which IMHO allocates space for the whole dense matrix which is very > large in your case. Maybe a two-pass approach would be faster - 1. count > the actual nonzeros, 2. build the matrix. I cannot try it right now, > though... > > r. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hi Robert, Thank you for your short note. BTW, a fix for http://projects.scipy.org/scipy/scipy/ticket/40 would be awesome. Cheers, Nils From jjk3 at msstate.edu Mon Mar 20 09:16:08 2006 From: jjk3 at msstate.edu (Joel Konkle-Parker) Date: Mon, 20 Mar 2006 14:16:08 +0000 Subject: [SciPy-user] no module: win32pdh Message-ID: <1142864168.441eb928ee8a1@webmail.msstate.edu> I just installed numpy-0.9.6 and scipy-0.4.8 over an existing python-2.4.2 installation (win32), and now I'm getting the following warnings: Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy import testing -> failed: No module named win32pdh import misc -> failed: No module named win32pdh What is this win32pdh module, and where do I get it? 
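Picking up Robert Cimrman's two-pass suggestion from the dense-to-sparse thread above: the idea (first count the nonzeros so the CSR arrays can be allocated exactly, then fill them) can be sketched in plain numpy. This is only an illustration of the approach, not the sparsetools code:

```python
import numpy as np

def dense_to_csr(A):
    """Two-pass dense -> CSR conversion: pass 1 counts the nonzeros per
    row (so the index/data arrays can be sized exactly), pass 2 fills
    them.  A sketch of the idea only, not the sparsetools routine."""
    mask = A != 0
    # pass 1: count nonzeros per row and build the row-pointer array
    indptr = np.zeros(A.shape[0] + 1, dtype=np.intp)
    indptr[1:] = np.cumsum(mask.sum(axis=1))
    # pass 2: copy the nonzero values and their column indices
    data = A[mask]
    indices = np.nonzero(mask)[1]
    return data, indices, indptr

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [3.0, 4.0, 0.0]])
data, indices, indptr = dense_to_csr(A)
```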
-- Joel Konkle-Parker From aisaac at american.edu Mon Mar 20 10:13:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 20 Mar 2006 10:13:08 -0500 Subject: [SciPy-user] no module: win32pdh In-Reply-To: <1142864168.441eb928ee8a1@webmail.msstate.edu> References: <1142864168.441eb928ee8a1@webmail.msstate.edu> Message-ID: On Mon, 20 Mar 2006, Joel Konkle-Parker apparently wrote: > What is this win32pdh module, and where do I get it? http://www.python.org/windows/win32/ hth, Alan Isaac From nwagner at mecha.uni-stuttgart.de Mon Mar 20 10:20:36 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 16:20:36 +0100 Subject: [SciPy-user] generalized eig algorithm did not converge Message-ID: <441EC844.3030408@mecha.uni-stuttgart.de> Hi all, What is the reason for this message ? Traceback (most recent call last): File "gh.py", line 4, in ? w = linalg.eigvals(G,H) File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line 244, in eigvals return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line 121, in eig return _geneig(a1,b,left,right,overwrite_a,overwrite_b) File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line 72, in _geneig if info>0: raise LinAlgError,"generalized eig algorithm did not converge" scipy.linalg.basic.LinAlgError: generalized eig algorithm did not converge Matlab is able to solve this generalized eigenvalue problem without any warning. Afaik, Matlab and scipy's eig is based on LAPACK. So what is the reason for the difference ? Any idea ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: mat.tar.gz Type: application/x-tar Size: 20590 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: gh.m Type: application/m-file Size: 52 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gh.py Type: text/x-python Size: 90 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Mon Mar 20 11:12:02 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Mar 2006 17:12:02 +0100 Subject: [SciPy-user] Bug in docstring special.k0? Message-ID: <441ED452.5040004@mecha.uni-stuttgart.de> In [10]:special.k0? Type: ufunc String Form: Namespace: Interactive Docstring: y = k0(x) y=i0(x) returns the modified Bessel function of the third kind of order 0 at x. http://mathworld.wolfram.com/IModifiedBesselFunctionoftheSecondKind.html It should be y=k0(x) returns the modified Bessel function of the second kind of order 0 at x. Nils From robert.kern at gmail.com Mon Mar 20 12:22:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 11:22:13 -0600 Subject: [SciPy-user] Onlne message repository? In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011609D6@icex2.ic.ac.uk> Message-ID: <441EE4C5.8080103@gmail.com> Howey, David A wrote: > Does anyone know if there is an online archive of past scipy-user, > matplotlib and ipython list server emails? > I find the list server very useful, but the volume of messages is > enormous and my mailbox keeps getting overloaded. If there were an > online searchable archive this would be amazing! You might consider using GMane to access these lists through a newsgroup interface or through the Web. GMane also provides search functionality. 
http://dir.gmane.org/gmane.comp.python.scientific.user http://dir.gmane.org/gmane.comp.python.scientific.devel http://dir.gmane.org/gmane.comp.python.matplotlib.general http://dir.gmane.org/gmane.comp.python.matplotlib.devel http://dir.gmane.org/gmane.comp.python.ipython.user http://dir.gmane.org/gmane.comp.python.ipython.devel -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david.huard at gmail.com Mon Mar 20 14:07:35 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 20 Mar 2006 14:07:35 -0500 Subject: [SciPy-user] Fast Gauss Transform Message-ID: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> Hi, Is anyone aware of a piece of code for N-D Fast Gauss Transform (in python, C or fortran) ? The only codes I could find were for one dimensional cases (Strain's), or in C++ (Yang) but relied on the matlab mex library. I used stats.gaussian_kde but it proves too slow for large arrays (4 x 100 000) Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Mon Mar 20 22:08:14 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 20 Mar 2006 19:08:14 -0800 Subject: [SciPy-user] Build regression: return of restFP/saveFP problems on OS X? Message-ID: Hi folks, I've been building svn versions of scipy just fine for some months now, following Chris Fonnesbeck's instructions on the wiki. (Short version: use gcc3.3, get g77 from the hpc repository.) Today I updated to the latest svn scipy, and I now cannot get scipy to compile because of link errors saying that restFP and saveFP are not defined. These problems used to be endemic with scipy and OS X, but they were fixed some time ago. It looks like they have been re- introduced recently, though. 
The link errors look like this:

/usr/local/bin/g77 -undefined dynamic_lookup -bundle build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/integrate/_quadpackmodule.o -L/usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.darwin-8.5.0-Power_Macintosh-2.4 -lquadpack -llinpack_lite -lmach -lg2c -o build/lib.darwin-8.5.0-Power_Macintosh-2.4/scipy/integrate/_quadpack.so
/usr/bin/ld: build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/integrate/_quadpackmodule.o has external relocation entries in non-writable section (__TEXT,__text) for symbols: restFP saveFP

The solution is, as it has ever been, to add -lcc_dynamic to the link flags. So, were some changes made in a setup file which removed the -lcc_dynamic option? OS X users still need that, I think. There is also a secondary build problem. I tried to fix the issue by setting LDFLAGS as follows: 'setenv LDFLAGS -lcc_dynamic'. However, in this case setup.py will no longer provide the '-undefined dynamic_lookup' and '-bundle' options, which are also necessary. I think that the LDFLAGS in the environment should be *appended to* the flags that setup.py selects, and not replace them. (Others may disagree.) In any case, this is all moot since the '-lcc_dynamic' option only works if it is listed *after* the '-L/usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4' part of the g77 command line, but the LDFLAGS are put before it. Not sure if this is as it should be... but that's beside the point anyhow. Anyhow, where is the setup file I need to edit to get -lcc_dynamic back on the link line? And can this change (or something similar) work its way back into the svn version, so that scipy continues to build on OS X? Zach From robert.kern at gmail.com Mon Mar 20 22:17:49 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 21:17:49 -0600 Subject: [SciPy-user] Build regression: return of restFP/saveFP problems on OS X?
In-Reply-To: References: Message-ID: <441F705D.4060706@gmail.com> Zachary Pincus wrote: > Hi folks, > > I've been building svn versions of scipy just fine for some months > now, following Chris Fonnesbeck's instructions on the wiki. (Short > version: use gcc3.3, get g77 from the hpc repository.) > > Today I updated to the latest svn scipy, and I now cannot get scipy > to compile because of link errors saying that restFP and saveFP are > not defined. These problems used to be endemic with scipy and OS X, > but they were fixed some time ago. It looks like they have been re- > introduced recently, though. Uncomment lines 114-115 in numpy/distutils/fcompiler/gnu.py . David (Cooke), can you explain why you commented them out? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Mar 20 22:21:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 21:21:09 -0600 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> Message-ID: <441F7125.2000709@gmail.com> David Huard wrote: > Hi, > > Is anyone aware of a piece of code for N-D Fast Gauss Transform (in > python, C or fortran) ? The only codes I could find were for one > dimensional cases (Strain's), or in C++ (Yang) but relied on the matlab > mex library. I used stats.gaussian_kde but it proves too slow for large > arrays (4 x 100 000) I don't know of any, but if you find some suitably licensed code or write one yourself, I would like to put it into scipy to speed up gaussian_kde. By the way, Yang's code doesn't seem to require Matlab. It just has a Matlab wrapper around it. It should be relatively easy to wrap it for Python. 
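While such a wrapper is missing, the naive O(N*M) discrete Gauss transform is only a few lines of numpy and makes a handy correctness baseline for any fast implementation. A sketch with made-up points (this is not Yang's code):

```python
import numpy as np

def gauss_transform(sources, targets, weights, h):
    """Naive discrete Gauss transform,
    G(y_j) = sum_i w_i * exp(-||y_j - x_i||**2 / h**2),
    evaluated directly in O(N*M) time and memory."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / h ** 2) @ weights

# two weighted sources in 2-D, evaluated at the source locations themselves
x = np.array([[0.0, 0.0], [1.0, 0.0]])
w = np.array([1.0, 2.0])
g = gauss_transform(x, x, w, h=1.0)   # g[0] = 1 + 2*exp(-1), g[1] = exp(-1) + 2
```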
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Mon Mar 20 22:56:09 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 20 Mar 2006 22:56:09 -0500 Subject: [SciPy-user] Build regression: return of restFP/saveFP problems on OS X? In-Reply-To: <441F705D.4060706@gmail.com> (Robert Kern's message of "Mon, 20 Mar 2006 21:17:49 -0600") References: <441F705D.4060706@gmail.com> Message-ID: Robert Kern writes: > Zachary Pincus wrote: >> Hi folks, >> >> I've been building svn versions of scipy just fine for some months >> now, following Chris Fonnesbeck's instructions on the wiki. (Short >> version: use gcc3.3, get g77 from the hpc repository.) >> >> Today I updated to the latest svn scipy, and I now cannot get scipy >> to compile because of link errors saying that restFP and saveFP are >> not defined. These problems used to be endemic with scipy and OS X, >> but they were fixed some time ago. It looks like they have been re- >> introduced recently, though. > > Uncomment lines 114-115 in numpy/distutils/fcompiler/gnu.py . > > David (Cooke), can you explain why you commented them out? Whoops. I was testing whether I could build using gfortran 4.1. I compiled gcc 4.1 using darwinports, which doesn't support cc_dynamic. As far as I can tell, there's not actually a library by that name; I think the linker does something special when it finds it? 4.1 still didn't work, btw. It snuck in when I checked in the version-matching changes. Fixed in svn. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Mon Mar 20 23:09:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Mar 2006 22:09:24 -0600 Subject: [SciPy-user] Build regression: return of restFP/saveFP problems on OS X? In-Reply-To: References: <441F705D.4060706@gmail.com> Message-ID: <441F7C74.9010801@gmail.com> David M. Cooke wrote: > Robert Kern writes: > >>Zachary Pincus wrote: >> >>>Hi folks, >>> >>>I've been building svn versions of scipy just fine for some months >>>now, following Chris Fonnesbeck's instructions on the wiki. (Short >>>version: use gcc3.3, get g77 from the hpc repository.) >>> >>>Today I updated to the latest svn scipy, and I now cannot get scipy >>>to compile because of link errors saying that restFP and saveFP are >>>not defined. These problems used to be endemic with scipy and OS X, >>>but they were fixed some time ago. It looks like they have been re- >>>introduced recently, though. >> >>Uncomment lines 114-115 in numpy/distutils/fcompiler/gnu.py . >> >>David (Cooke), can you explain why you commented them out? > > Whoops. I was testing whether I could build using gfortran 4.1. > I compiled gcc 4.1 using darwinports, which doesn't support > cc_dynamic. As far as I can tell, there's not actually a library by > that name; I think the linker does something special when it finds it? [~]$ ls -l /usr/lib/libcc_dynamic.a lrwxr-xr-x 1 root wheel 27 Nov 7 18:08 /usr/lib/libcc_dynamic.a -> gcc/darwin/default/libgcc.a -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Mon Mar 20 23:35:09 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Mon, 20 Mar 2006 23:35:09 -0500 Subject: [SciPy-user] Build regression: return of restFP/saveFP problems on OS X? In-Reply-To: <441F7C74.9010801@gmail.com> (Robert Kern's message of "Mon, 20 Mar 2006 22:09:24 -0600") References: <441F705D.4060706@gmail.com> <441F7C74.9010801@gmail.com> Message-ID: Robert Kern writes: > David M. Cooke wrote: >> Robert Kern writes: >> >>>Zachary Pincus wrote: >>> >>>>Hi folks, >>>> >>>>I've been building svn versions of scipy just fine for some months >>>>now, following Chris Fonnesbeck's instructions on the wiki. (Short >>>>version: use gcc3.3, get g77 from the hpc repository.) >>>> >>>>Today I updated to the latest svn scipy, and I now cannot get scipy >>>>to compile because of link errors saying that restFP and saveFP are >>>>not defined. These problems used to be endemic with scipy and OS X, >>>>but they were fixed some time ago. It looks like they have been re- >>>>introduced recently, though. >>> >>>Uncomment lines 114-115 in numpy/distutils/fcompiler/gnu.py . >>> >>>David (Cooke), can you explain why you commented them out? >> >> Whoops. I was testing whether I could build using gfortran 4.1. >> I compiled gcc 4.1 using darwinports, which doesn't support >> cc_dynamic. As far as I can tell, there's not actually a library by >> that name; I think the linker does something special when it finds it? > > [~]$ ls -l /usr/lib/libcc_dynamic.a > lrwxr-xr-x 1 root wheel 27 Nov 7 18:08 /usr/lib/libcc_dynamic.a -> > gcc/darwin/default/libgcc.a Ah hah, that symlink is only made when you've got 3.3 selected by gcc_select (it's not made for gcc 4.0). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From nwagner at mecha.uni-stuttgart.de Tue Mar 21 02:52:56 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Mar 2006 08:52:56 +0100 Subject: [SciPy-user] generalized eig algorithm did not converge In-Reply-To: <441EC844.3030408@mecha.uni-stuttgart.de> References: <441EC844.3030408@mecha.uni-stuttgart.de> Message-ID: <441FB0D8.6090900@mecha.uni-stuttgart.de> Nils Wagner wrote: > Hi all, > > What is the reason for this message ? > > Traceback (most recent call last): > File "gh.py", line 4, in ? > w = linalg.eigvals(G,H) > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > 244, in eigvals > return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > 121, in eig > return _geneig(a1,b,left,right,overwrite_a,overwrite_b) > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > 72, in _geneig > if info>0: raise LinAlgError,"generalized eig algorithm did not > converge" > scipy.linalg.basic.LinAlgError: generalized eig algorithm did not converge > > Matlab is able to solve this generalized eigenvalue problem without any > warning. > > Afaik, Matlab and scipy's eig is based on LAPACK. So what is the reason > for the difference ? > > Any idea ? > > Nils > > ------------------------------------------------------------------------ > > from scipy import * > G = io.mmread('G.mtx') > H = io.mmread('H.mtx') > w = linalg.eigvals(G,H) > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user I forgot to ask if someone can reproduce this problem. Travis, Pearu, Robert K., do you have a clue why Matlab can solve this eigenproblem while scipy cannot cope with that task. 
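On the non-converging generalized eigenproblem above: one workaround sometimes tried when QZ fails is to reduce the generalized problem to a standard one. This is only valid when H is nonsingular and can be much less accurate than QZ when H is ill-conditioned; toy diagonal matrices stand in here for the G.mtx/H.mtx files from the thread:

```python
import numpy as np

# Toy stand-ins for the G.mtx / H.mtx matrices from the thread.
G = np.array([[2.0, 0.0],
              [0.0, 6.0]])
H = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Reduce the generalized problem G v = w H v to the standard problem
# (H^-1 G) v = w v.  Only valid when H is nonsingular, and potentially
# much less accurate than a QZ decomposition for ill-conditioned H.
w = np.linalg.eigvals(np.linalg.solve(H, G))   # eigenvalues 2 and 3 here
```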
Nils From lanceboyle at qwest.net Tue Mar 21 04:47:42 2006 From: lanceboyle at qwest.net (lanceboyle at qwest.net) Date: Tue, 21 Mar 2006 02:47:42 -0700 Subject: [SciPy-user] Filtering high frquency noise In-Reply-To: <441E77C4.5070702@axetic.com> References: <441E77C4.5070702@axetic.com> Message-ID: <11C42000-3040-4A34-9597-BC109F4DB385@qwest.net> Actually, I think someone did once write a filtering tutorial for SciPy but I can't remember who it was or where to find it. Anyway, you might look for some of the web sites that have interactive filter design capability to get the coefficients that you need, then plug them into SciPy's filter routine. The approach that I would take would be to (1) interpolate the data using a cubic spline and then take uniformly-spaced samples at a rate that is high enough to guarantee that there is no aliasing (keep increasing the sample rate until the spectrum doesn't change), (2) low pass filter with the uniform samples, and if necessary (3) interpolate the uniform samples to get back the values at the original sample instants. Jerry On Mar 20, 2006, at 2:37 AM, Yannick Dirou wrote: > Hello, > > I have a multirate signal (from 18 to 24 sample per day), and if i > plot > it i see something like "high frequency" noise (actually not that high > but higher than the remaining of the signal), i thought about using a > median filter but this is not good for a multirate signal, > then i though i could use scipy filters to do the job, > unfortunately i know nearly nothing in filter design, and don't > know how > to do the job. > > Is there a tutorial or simple example to design a low pass filter? > > the signal data is made of a datetime in epoch format and the measured > value. 
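Jerry's three-step recipe above (spline-interpolate to a uniform grid, low-pass filter, interpolate back) can be sketched with scipy's spline and filter routines. The signal here is synthetic, and the 4th-order Butterworth with a cutoff at 0.2x the Nyquist rate is an arbitrary illustrative choice:

```python
import numpy as np
from scipy import interpolate, signal

# Hypothetical irregularly sampled signal: ~20 samples/day over 10 days,
# a slow oscillation plus noise.  All numbers here are made up.
rng = np.random.RandomState(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))                  # irregular times (days)
x = np.sin(2 * np.pi * t / 5.0) + 0.3 * rng.randn(200)    # signal + noise

# 1. interpolate with a cubic spline and resample on a uniform grid
tck = interpolate.splrep(t, x, s=0)
tu = np.linspace(t[0], t[-1], 1000)
xu = interpolate.splev(tu, tck)

# 2. low-pass filter the uniform samples; filtfilt runs the filter
#    forward and backward, so it introduces no phase shift
b, a = signal.butter(4, 0.2)
xf = signal.filtfilt(b, a, xu)

# 3. interpolate the smoothed curve back to the original sample instants
x_smooth = np.interp(t, tu, xf)
```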
> > Thanks in advance, > > Yannick From manouchk at gmail.com Tue Mar 21 06:57:08 2006 From: manouchk at gmail.com (manouchk) Date: Tue, 21 Mar 2006 08:57:08 -0300 Subject: [SciPy-user] generalized eig algorithm did not converge In-Reply-To: <441FB0D8.6090900@mecha.uni-stuttgart.de> References: <441EC844.3030408@mecha.uni-stuttgart.de> <441FB0D8.6090900@mecha.uni-stuttgart.de> Message-ID: <200603210857.10474.manouchk@gmail.com> Well, I don't know if there is a link, but I tried to build scipy-0.4.6 on my machine; during testing it could not finish this: check_simple (scipy.linalg.tests.test_decomp.test_eig) (It just would not pass this test; maybe there is a link?) See here my old report: Hi, I'm again trying to build scipy. Now I get much further. It first takes a long time to compile the full lapack and blas stuff and then builds scipy 0.4.6. After it runs the tests, it fails "only" once: W.II.A.0. Print ROUND with only one digit. ... FAIL but a few tests later it stops at this line: check_simple (scipy.linalg.tests.test_decomp.test_eig) and "never" passes this test, or it is a "very long" test (more than 20 minutes with a 1.5 GHz Dothan processor). To build scipy with complete lapack and blas I use the approach of the old Mandrake RPM package: it builds lapack and blas and links them inside the scipy tree. I guess I'm still missing something? On Tuesday 21 March 2006 04:52, Nils Wagner wrote: > Nils Wagner wrote: > > Hi all, > > > > What is the reason for this message ? > > > > Traceback (most recent call last): > > File "gh.py", line 4, in ?
> > w = linalg.eigvals(G,H) > > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > > 244, in eigvals > > return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) > > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > > 121, in eig > > return _geneig(a1,b,left,right,overwrite_a,overwrite_b) > > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > > 72, in _geneig > > if info>0: raise LinAlgError,"generalized eig algorithm did not > > converge" > > scipy.linalg.basic.LinAlgError: generalized eig algorithm did not > > converge > > > > Matlab is able to solve this generalized eigenvalue problem without any > > warning. > > > > Afaik, Matlab and scipy's eig is based on LAPACK. So what is the reason > > for the difference ? > > > > Any idea ? > > > > Nils > > > > ------------------------------------------------------------------------ > > > > from scipy import * > > G = io.mmread('G.mtx') > > H = io.mmread('H.mtx') > > w = linalg.eigvals(G,H) > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > I forgot to ask if someone can reproduce this problem. > > Travis, Pearu, Robert K., do you have a clue why Matlab can solve this > eigenproblem while scipy cannot cope with that task. > > Nils > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From a.u.r.e.l.i.a.n at gmx.net Tue Mar 21 07:38:12 2006 From: a.u.r.e.l.i.a.n at gmx.net (=?ISO-8859-1?Q?=22Johannes_L=F6hnert=22?=) Date: Tue, 21 Mar 2006 13:38:12 +0100 (MET) Subject: [SciPy-user] Second order differencing, periodic functions Message-ID: <9766.1142944692@www004.gmx.net> Hi, I want to calculate the 2nd derivative of a periodic function. 
If I have the values given as array f_arr, I would like to do this with 2nd order differencing. For a non-periodic function I could use d2f = convolve(f_arr, array([-.25, .5, -.25]), mode=2), but with this approach f_arr is zero-padded. Is there a similar function for periodic boundary conditions for f_arr? I know about fftpack.diff, but I would rather use SOD, since diff yields "strange" results for non-contiguous functions. Best Regards, Johannes Loehnert -- Echte DSL-Flatrate dauerhaft für 0,- Euro*! "Feel free" mit GMX DSL! http://www.gmx.net/de/go/dsl From pearu at scipy.org Tue Mar 21 07:15:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 21 Mar 2006 06:15:12 -0600 (CST) Subject: [SciPy-user] Second order differencing, periodic functions In-Reply-To: <9766.1142944692@www004.gmx.net> References: <9766.1142944692@www004.gmx.net> Message-ID: On Tue, 21 Mar 2006, "Johannes Löhnert" wrote: > Hi, > > I want to calculate the 2nd derivative of a periodic function. If I have the > > values given as array f_arr, I would like to do this with 2nd order > differencing. For a non-periodic function I could use > > d2f = convolve(f_arr, array([-.25, .5, -.25]), mode=2), > > but with this approach f_arr is zero-padded. Is there a similar function for > > periodic boundary conditions for f_arr? I know about fftpack.diff, but I > would rather use SOD, since diff yields "strange" results for non-contiguous > functions. scipy.sandbox.fdfpack[*] contains a function periodic_finite_difference(x, h=2*pi/len(x), k=1, m=1) that returns the k-th derivative of a periodic sequence x of period len(x)*h using m-th order finite difference formulae [**]. The error of the derivative is O(h^(2*(m-1))) within numerical accuracy. This function is faster than fftpack.diff and returns less numerical noise for very long periodic sequences (fftpack.diff is almost unusable for calculating higher order derivatives of long periodic sequences).
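For comparison with the convolve version quoted above, the periodic second-order central difference can also be written directly with numpy.roll, which wraps around the ends instead of zero-padding. A minimal sketch:

```python
import numpy as np

def periodic_second_diff(f, h):
    """Second-order central difference of a periodic sequence; np.roll
    wraps around the ends instead of zero-padding."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / h ** 2

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
d2 = periodic_second_diff(np.sin(x), h)   # should approximate -sin(x)
err = np.max(np.abs(d2 + np.sin(x)))      # O(h**2) truncation error
```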
[*] To build fdfpack, you must add the following line config.add_subpackage('fdfpack') to the scipy/Lib/sandbox/setup.py file. [**] http://epubs.siam.org/sam-bin/dbq/article/32250 HTH, Pearu From a.u.r.e.l.i.a.n at gmx.net Tue Mar 21 09:24:10 2006 From: a.u.r.e.l.i.a.n at gmx.net (=?ISO-8859-1?Q?=22Johannes_L=F6hnert=22?=) Date: Tue, 21 Mar 2006 15:24:10 +0100 (MET) Subject: [SciPy-user] Second order differencing, periodic functions References: Message-ID: <20314.1142951050@www041.gmx.net> > scipy.sandbox.fdfpack[*] contains a function > > periodic_finite_difference(x, h=2*pi/len(x), k=1, m=1) > > that returns k-th derivative of len(x)*h periodic sequence x using > m-th order finite difference formulae [**]. The error of derivative is > O(h^(2*(m-1))) within numerical accuracy. This function is faster than > fftpack.diff, returns less numerical noise for very long periodic > sequences (fftpack.diff is almost unusable for calculating higher order > of derivatives of long periodic sequences). Thank you, I will give this a look. You added this just yesterday? You don't have any crystal balls at home, do you? *winks* Johannes -- "Feel free" mit GMX FreeMail! Monat für Monat 10 FreeSMS inklusive! http://www.gmx.net From david.huard at gmail.com Tue Mar 21 09:52:02 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 21 Mar 2006 09:52:02 -0500 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <441F7125.2000709@gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <441F7125.2000709@gmail.com> Message-ID: <91cf711d0603210652u8f93965y@mail.gmail.com> That was my first impulse; however, I came across an article comparing different fast kde algorithms, and Yang's performance seemed overrated (i.e. not faster than the naive implementation). The conclusion was that the kd-tree method (A.G Gray) was the most efficient for N-D problems by orders of magnitude. I have no clue yet what this is about but I'll gladly share the code once it is functional.
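For readers of the thread, this is what the stats.gaussian_kde baseline looks like on a toy dataset; it evaluates every (source, target) pair directly, which is exactly the O(N*M) cost that a fast Gauss transform would avoid:

```python
import numpy as np
from scipy import stats

# Toy 4-D dataset (the thread's real case is 4 x 100 000, which is what
# makes the O(N*M) evaluation below slow).
rng = np.random.RandomState(1)
data = rng.randn(4, 500)           # shape (n_dims, n_points)
kde = stats.gaussian_kde(data)     # bandwidth from Scott's rule by default
grid = rng.randn(4, 10)            # 10 evaluation points
density = kde(grid)                # one density estimate per evaluation point
```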
Cheers, David 2006/3/20, Robert Kern : > > David Huard wrote: > > Hi, > > > > Is anyone aware of a piece of code for N-D Fast Gauss Transform (in > > python, C or fortran) ? The only codes I could find were for one > > dimensional cases (Strain's), or in C++ (Yang) but relied on the matlab > > mex library. I used stats.gaussian_kde but it proves too slow for large > > arrays (4 x 100 000) > > I don't know of any, but if you find some suitably licensed code or write > one > yourself, I would like to put it into scipy to speed up gaussian_kde. > > By the way, Yang's code doesn't seem to require Matlab. It just has a > Matlab > wrapper around it. It should be relatively easy to wrap it for Python. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From travis.brady at gmail.com Tue Mar 21 11:17:39 2006 From: travis.brady at gmail.com (Travis Brady) Date: Tue, 21 Mar 2006 08:17:39 -0800 Subject: [SciPy-user] Filtering high frquency noise In-Reply-To: <11C42000-3040-4A34-9597-BC109F4DB385@qwest.net> References: <441E77C4.5070702@axetic.com> <11C42000-3040-4A34-9597-BC109F4DB385@qwest.net> Message-ID: If you'd like to try out Scipy's capabilities these emails from the matplotlib list by John Hunter are a good starting point: http://article.gmane.org/gmane.comp.python.matplotlib.general/3397/match=butterworth http://article.gmane.org/gmane.comp.python.matplotlib.general/1586/match=butterworth good luck Travis On 3/21/06, lanceboyle at qwest.net wrote: > > Actually, I think someone did once write a filtering tutorial for > SciPy but I can't remember who it was or where to find it. Anyway, > you might look for some of the web sites that have interactive filter > design capability to get the coefficients that you need, then plug > them into SciPy's filter routine. > > The approach that I would take would be to (1) interpolate the data > using a cubic spline and then take uniformly-spaced samples at a rate > that is high enough to guarantee that there is no aliasing (keep > increasing the sample rate until the spectrum doesn't change), (2) > low pass filter with the uniform samples, and if necessary (3) > interpolate the uniform samples to get back the values at the > original sample instants. 
> > Jerry > > > On Mar 20, 2006, at 2:37 AM, Yannick Dirou wrote: > > > Hello, > > > > I have a multirate signal (from 18 to 24 samples per day), and if I > > plot > > it I see something like "high frequency" noise (actually not that high > > but higher than the rest of the signal). I thought about using a > > median filter, but this is not good for a multirate signal, > > so I thought I could use SciPy filters to do the job; > > unfortunately I know nearly nothing about filter design, and don't > > know how > > to do the job. > > > > Is there a tutorial or simple example to design a low-pass filter? > > > > The signal data is made of a datetime in epoch format and the measured > > value. > > > > Thanks in advance, > > > > Yannick > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Tue Mar 21 15:40:19 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 21 Mar 2006 15:40:19 -0500 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <91cf711d0603210652u8f93965y@mail.gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <441F7125.2000709@gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> Message-ID: <91cf711d0603211240k4e435e09r@mail.gmail.com> Hi Robert, I finally followed the path of least resistance and tried to wrap Yang's package using SWIG, but it seems I'm out of my depth here. I managed to create a Python module, but when I call the FastGauss class, it complains that the class asks for a double * and that it receives a numpy.array. I guess the interface I provided is not correct, but I couldn't find any example that I could follow step by step.
I also tried to wrap the class with f2py, but Python complains: ImportError: dynamic module does not define init function (initfgt) Could this be because the initialization of the C++ class is in the header file and f2py seems to ignore it? Could you direct me toward the most efficient way to get a working Python module? SWIG? f2py? Thanks, David 2006/3/21, David Huard : > > That was my first impulse; however, I came across an article comparing > different fast KDE algorithms, and Yang's performance seemed overrated (i.e. > not faster than the naive implementation). The conclusion was that the > kd-tree method (A. G. Gray) was the most efficient for N-D problems by > orders of magnitude. I have no clue yet what this is about, but I'll gladly > share the code once it is functional. > > Cheers, > > David > > 2006/3/20, Robert Kern : > > > > David Huard wrote: > > > Hi, > > > > > > Is anyone aware of a piece of code for the N-D Fast Gauss Transform (in > > > Python, C or Fortran)? The only codes I could find were for one > > > dimensional cases (Strain's), or in C++ (Yang) but relied on the > > Matlab > > > mex library. I used stats.gaussian_kde, but it proves too slow for > > large > > > arrays (4 x 100 000). > > > > I don't know of any, but if you find some suitably licensed code or > > write one > > yourself, I would like to put it into scipy to speed up gaussian_kde. > > > > By the way, Yang's code doesn't seem to require Matlab. It just has a > > Matlab > > wrapper around it. It should be relatively easy to wrap it for Python. > > > > -- > > Robert Kern > > robert.kern at gmail.com > > > > "I have come to believe that the whole world is an enigma, a harmless > > enigma > > that is made terrible by our own mad attempt to interpret it as though > > it had > > an underlying truth."
> > -- Umberto Eco > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.cooper at computer.org Tue Mar 21 15:53:04 2006 From: m.cooper at computer.org (Matthew Cooper) Date: Tue, 21 Mar 2006 12:53:04 -0800 Subject: [SciPy-user] maxentropy In-Reply-To: References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> Message-ID: <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> Hi Ed, I am playing around with the code on some more small examples and everything has been fine. The thing that will hold me back from testing on larger datasets is the F matrix which thus far requires the space of (context,label) pairs to be enumerable. I know that internally you are using a sparse representation for this matrix. Can I initialize the model with a sparse matrix also? This also requires changes with the indices_context parameter in the examples. I see that you also have an unconditional bigmodel class that seems related, but I'm not sure what would need to be changed. For a conditional model, computing the feature expectation under the current model still requires knowledge of the training samples. So what I think would make sense is to use two sparse matrices. One matrix needs to represent the training data (our model is for q(x|w) but we still use the empirical p(w) as the prior on the context when computing the feature expectations under the model (so we don't need to consider the whole exponential space of possible contexts). This is shown in Malouf's paper in the equation for the log-likelihood (2) and the second equation in Sec 2.1). Each feature then maps the training data to the corresponding feature output. 
This requires an N vector per feature, so an N by (#features) sparse matrix could be used for F. Does this make sense? I should be able to test on some standard datasets if we can figure out how to handle the larger context spaces that come with larger text collections. Matt On 3/18/06, Ed Schofield wrote: > > > On 18/03/2006, at 1:31 AM, Matthew Cooper wrote: > > > > > Hi Ed, > > > > Thanks again for working on this. I can try and work on it a bit > > this weekend. I've had time to look over the two example scripts > > you provided. There seemed to be some difference in the two in > > terms of the call to the conditionalmodel fit method. In the low > > level example, the count parameter seemed to provide the empirical > > counts of the feature functions, where the features were simply > > (context,label) co-occurrence. In the high level example, the > > features are more complicated, and the counts parameter seems to > > have different dimensionality. I'll try and get a working high > > level example together next. > > > > Hi Matt, > > I've now found and fixed some bugs in the conditional maxent code. > The computation of the conditional expectations was wrong, and the > p_tilde parameter was interpreted inconsistently. Both the examples > work now! Fantastic! > > I'd be very grateful for any assistance you could give in providing > more examples -- especially real examples from text classification. > The two examples at the moment are too artificial and perhaps a bit > confusing. Or if you have any suggestions or patches for simplifying > the interface (e.g. the constructor arguments) or any other > improvements (e.g. bug fixes, better docs, or a tutorial) I'd also > readily merge them. > > Let me know how you go with it. When you're happy that it's all > working, I'll merge it with the main SVN trunk. > > -- Ed > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Tue Mar 21 15:56:52 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Mar 2006 14:56:52 -0600 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <91cf711d0603211240k4e435e09r@mail.gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <441F7125.2000709@gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <91cf711d0603211240k4e435e09r@mail.gmail.com> Message-ID: <44206894.20007@gmail.com> David Huard wrote: > Hi Robert, > I finally followed the path of least resistance and tried to wrap Yang's > package using Swig but it seems like I'm out of my depth here. I managed > to create a python module, but when I call the FastGauss class, it > complains that the class asks for double * and that it receives a > numpy.array. I guess the interface I provided is not correct, but I > couldn't find any example that I could follow step by step. > > I also tried to wrap the class with f2py, but python complains : > ImportError: dynamic module does not define init function (initfgt) > > Could this be because the initialization of the c++ class is in the > header file and f2py seems to ignore it? f2py probably won't deal with C++ well. Without seeing the code that you tried, I can't tell you exactly what went wrong. > Could you direct me toward the most efficient way to get a working > python module ? swig ? f2py ? SWIG is probably easiest given that you are wrapping a C++ class. Recently, Fernando Pérez contributed some SWIG typemaps that handle simple conversions between numpy arrays and C pointers. In a recent SVN checkout of numpy, look in numpy/doc/swig/ for the typemaps and an example. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From michael.sorich at gmail.com Tue Mar 21 19:09:33 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Wed, 22 Mar 2006 10:39:33 +1030 Subject: [SciPy-user] masked record arrays Message-ID: <16761e100603211609j79fe848evebf4c8fba242a93f@mail.gmail.com> Hi, I was just wondering whether there are any current plans to allow masking of record like arrays. eg something along the lines of desc = dtype({'names': ['name', 'age', 'weight'], 'formats': ['S30', 'i2', 'f']}) a = N.ma.array([('Bill',31,260.0),('Fred', 15, 145.0)], mask= [[0,1,0,],[0,0,0]], dtype=desc) Thanks, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From Fernando.Perez at colorado.edu Tue Mar 21 19:12:41 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 21 Mar 2006 17:12:41 -0700 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <44206894.20007@gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <441F7125.2000709@gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <91cf711d0603211240k4e435e09r@mail.gmail.com> <44206894.20007@gmail.com> Message-ID: <44209679.9090509@colorado.edu> Robert Kern wrote: > SWIG is probably easiest given that you are wrapping a C++ class. Recently, > Fernando P?rez contributed some SWIG typemaps that handle simple conversions > between numpy arrays and C pointers. In a recent SVN checkout of numpy, look in > numpy/doc/swig/ for the typemaps and an example. It's worth reiterating that those typemaps were meant as a /start/, they are still incomplete (nothing beyond 2d, possibly other limitations). Hopefully those who use them and end up extending them will contribute their work back (at least the parts that are pure-numpy and not specific to their own application). If that happens, we'll eventually have good coverage for 'out of the box' SWIG use of numpy. 
Cheers, f From novak at ucolick.org Tue Mar 21 20:19:35 2006 From: novak at ucolick.org (Gregory Novak) Date: Tue, 21 Mar 2006 17:19:35 -0800 Subject: [SciPy-user] Puzzling NAN semantics Message-ID: I'm confused by the semantics of nan:

def the_test(a,b):
    print a, " less than ", b, ":", a<b
    print a, " equals ", b, ":", a==b
    print a, " greater than ", b, ":", a>b

In [85]: the_test(5,6) 5 less than 6 : True 5 equals 6 : False 5 greater than 6 : False In [86]: the_test(5,5) 5 less than 5 : False 5 equals 5 : True 5 greater than 5 : False So far so good In [87]: the_test(5,nan) 5 less than nan : False 5 equals nan : True 5 greater than nan : False This doesn't seem desirable: any number is equal to nan? In [88]: the_test(nan,nan) nan less than nan : False nan equals nan : True nan greater than nan : False I believe that the IEEE standard says that nan is _not_ equal to itself In [89]: the_test(array([5,5]), array([5,nan])) [5 5] less than [ 5. nan] : [False False] [5 5] equals [ 5. nan] : [True False] [5 5] greater than [ 5. nan] : [False False] In [90]: the_test(array([nan,nan]), array([5,nan])) [nan nan] less than [ 5. nan] : [False False] [nan nan] equals [ 5. nan] : [False False] [nan nan] greater than [ 5. nan] : [False False] When nan appears inside arrays, the behavior is what I expected: nan is not equal to, greater than, or less than anything, including itself. I'm using OS X 10.4.5, Python 2.3.5, IPython 0.6.15, scipy-core 0.9.5, scipy 0.4.6, and Numeric 24.2 Should I consider this a bug?
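As a reference point for the question above: IEEE 754, which plain Python floats follow, makes NaN unordered and unequal to everything, including itself. This matches the array results in the post, not the scalar ones. A standalone check (not from the original thread):

```python
import math

nan = float("nan")

# Ordered comparisons against NaN are all False, per IEEE 754
assert not (nan < 5) and not (nan > 5) and not (nan == 5)
assert not (nan == nan)

# Inequality is the one comparison that is True, even against itself
assert nan != nan

# math.isnan is the reliable way to test for NaN
assert math.isnan(nan)
```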
Thanks, Greg From cournape at atr.jp Tue Mar 21 20:26:41 2006 From: cournape at atr.jp (Cournapeau David) Date: Wed, 22 Mar 2006 10:26:41 +0900 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <44206894.20007@gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <44206894.20007@gmail.com> Message-ID: <1142990801.30961.11.camel@localhost.localdomain> On Tue, 2006-03-21 at 14:56 -0600, Robert Kern wrote: > David Huard wrote: > > Hi Robert, > > I finally followed the path of least resistance and tried to wrap Yang's > > package using Swig but it seems like I'm out of my depth here. I managed > > to create a python module, but when I call the FastGauss class, it > > complains that the class asks for double * and that it receives a > > numpy.array. I guess the interface I provided is not correct, but I > > couldn't find any example that I could follow step by step. > > > > I also tried to wrap the class with f2py, but python complains : > > ImportError: dynamic module does not define init function (initfgt) What is meant by fast Gauss? I have a small module to compute Gaussian densities in C, meant as a speed-up for the EM algorithm for Gaussian mixture models under Matlab, which works OK (the C module is totally independent of Matlab). It doesn't use any fancy techniques, but it is quite fast (it can definitely handle 4 * 1e5 samples) and has been tested against valgrind for any potential memory misuse. It depends on LAPACK for the full-covariance-matrix cases (I need a Cholesky decomposition), but this is not a problem with scipy :) As an example, used under Matlab, computing 4 different 4-dimensional Gaussian a posteriori densities with full covariance matrices takes around 0.5 s on my PIV 3 GHz for 1e6 samples; and the Matlab overhead is not negligible (re-arranging array layout, etc.).
I can definitely see usage cases where this would be too slow, but for many practical cases, this is definitely enough (it is at least for me :) ) David From jdhunter at ace.bsd.uchicago.edu Tue Mar 21 20:26:44 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 21 Mar 2006 19:26:44 -0600 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <1142990801.30961.11.camel@localhost.localdomain> (Cournapeau David's message of "Wed, 22 Mar 2006 10:26:41 +0900") References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <44206894.20007@gmail.com> <1142990801.30961.11.camel@localhost.localdomain> Message-ID: <87ek0vs20b.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Cournapeau" == Cournapeau David writes: Cournapeau> What is meant by fast gauss ? Google is your friend http://www.google.com/search?hl=en&q=gauss+transform&btnG=Google+Search From robert.kern at gmail.com Tue Mar 21 20:32:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Mar 2006 19:32:16 -0600 Subject: [SciPy-user] Puzzling NAN semantics In-Reply-To: References: Message-ID: <4420A920.4040400@gmail.com> Gregory Novak wrote: > I'm confused by the semantics of nan: > > def the_test(a,b): > print a, " less than ", b, ":", a<b > print a, " equals ", b, ":", a==b > print a, " greater than ", b, ":", a>b > > In [85]: the_test(5,6) > 5 less than 6 : True > 5 equals 6 : False > 5 greater than 6 : False > > In [86]: the_test(5,5) > 5 less than 5 : False > 5 equals 5 : True > 5 greater than 5 : False > > So far so good > > In [87]: the_test(5,nan) > 5 less than nan : False > 5 equals nan : True > 5 greater than nan : False > > This doesn't seem desirable: any number is equal to nan? > > In [88]: the_test(nan,nan) > nan less than nan : False > nan equals nan : True > nan greater than nan : False > > I believe that the IEEE standard says that nan is _not_ equal to itself > > In [89]: the_test(array([5,5]), array([5,nan])) > [5 5] less than [ 5.
nan] : [False False] > [5 5] equals [ 5. nan] : [True False] > [5 5] greater than [ 5. nan] : [False False] > > In [90]: the_test(array([nan,nan]), array([5,nan])) > [nan nan] less than [ 5. nan] : [False False] > [nan nan] equals [ 5. nan] : [False False] > [nan nan] greater than [ 5. nan] : [False False] > > When nan appears inside arrays, the behavior is what I expected: nan > is not equal to, greater than, or less than anything, including > itself. > > I'm using OS X 10.4.5, Python 2.3.5, IPython 0.6.15, scipy-core 0.9.5, > scipy 0.4.6, and Numeric 24.2 > > Should I consider this a bug? On OS X 10.4.4: In [353]: def the_test(a, b): .....: print "%s < %s = %s" % (a, b, a < b) .....: print "%s == %s = %s" % (a, b, a == b) .....: print "%s > %s = %s" % (a, b, a > b) .....: In [357]: the_test(numpy.nan, numpy.nan) nan < nan = False nan == nan = False nan > nan = False In [361]: the_test(numpy.array([numpy.nan, numpy.nan]), numpy.array([5., numpy.nan])) [ nan nan] < [ 5. nan] = [False False] [ nan nan] == [ 5. nan] = [False False] [ nan nan] > [ 5. nan] = [False False] In [362]: the_test(5, numpy.nan) 5 < nan = False 5 == nan = False 5 > nan = False In [359]: scipy.__version__ Out[359]: '0.4.7.1607' In [360]: numpy.__version__ Out[360]: '0.9.6.2148' -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Tue Mar 21 21:08:17 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Mar 2006 21:08:17 -0500 Subject: [SciPy-user] data acquisition in Python Message-ID: Is anyone aware of data acquisition hardware that works well with Python? I would like to use Python for buffered analog-to-digital and digital-to-analog applications. I would be particularly interested in fairly inexpensive solutions. 
I found one European company that sort of claims to support Python, but I didn't know how easy it might be to wrap C++ drivers from other hardware manufacturers. Thanks for your thoughts, Ryan From strawman at astraw.com Tue Mar 21 21:21:00 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 21 Mar 2006 18:21:00 -0800 Subject: [SciPy-user] Puzzling NAN semantics In-Reply-To: References: Message-ID: <4420B48C.1070509@astraw.com> Gregory Novak wrote: >I'm confused by the semantics of nan: > > Check this: http://scipy.org/FAQ#head-fff4d6fce7528974185715153cfbc1a191dcb915 From strawman at astraw.com Tue Mar 21 21:27:42 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 21 Mar 2006 18:27:42 -0800 Subject: [SciPy-user] data acquisition in Python In-Reply-To: References: Message-ID: <4420B61E.4090305@astraw.com> Ryan Krauss wrote: >Is anyone aware of data acquisition hardware that works well with >Python? I would like to use Python for buffered analog-to-digital and >digital-to-analog applications. I would be particularly interested in >fairly inexpensive solutions. I found one European company that sort >of claims to support Python, but I didn't know how easy it might be to >wrap C++ drivers from other hardware manufacturers. > > What OS? Do you mean double buffered (for continuous ongoing sampling) or is single-shot OK? Generally, there's probably a C library to do what you want, so there's the option to wrap it. For Measurement Computing devices, see http://www.its.caltech.edu/~astraw/pyul.html . There's an 'Also of interest' section on that page. I've also got a Pyrex wrapper for Jasper Warren's Linux PMD1208FS code which I may be able to put online if there's interest... Note that none of this stuff is extensively tested, but it's a start, anyway. Comedi comes with a Python interface, but I haven't tried it.
From mark at mitre.org Tue Mar 21 22:00:15 2006 From: mark at mitre.org (Mark Heslep) Date: Tue, 21 Mar 2006 22:00:15 -0500 Subject: [SciPy-user] g77 sse2 problem back(?) with fedora 5 Message-ID: <4420BDBF.2020507@mitre.org> Scipy 4.6 & 4.8 build fails with Fedora 5 _only_ on a Pentium M; same box and scipy release built okay in FC4. It fails in some g77 work with the same assembler errors reported back in this thread http://scipy.net/pipermail/scipy-user/2003-March/001361.html. Scipy builds correctly on an SMP _Xeon_ CPU box w/ the same Fedora 5 distro. I note that the Numpy distutils recognizes the Pentium M vice others. Fix: Removing the 'sse2' flag in ..numpy/distutils/fcompiler/gnu.py clears the problem. No doubt there's a way to override compile flags in setup.py (?) but my distutils skills are thin. FYI Fedora 5 is using: gcc: > Target: i386-redhat-linux > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > --infodir=/usr/share/info --enable-shared --enable-threads=posix > --enable-checking=release --with-system-zlib --enable-__cxa_atexit > --disable-libunwind-exceptions --enable-libgcj-multifile > --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > --enable-java-awt=gtk --disable-dssi > --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre > --with-cpu=generic --host=i386-redhat-linux > Thread model: posix > gcc version 4.1.0 20060304 (Red Hat 4.1.0-3) and g77: > Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.3/specs > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > --infodir=/usr/share/info --enable-shared --enable-threads=posix > --disable-checking --with-system-zlib --enable-__cxa_atexit > --enable-languages=c,c++,f77 --disable-libgcj --host=i386-redhat-linux > Thread model: posix > gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-55.fc5) For quick reference the cpuinfo: /proc/cpuinfo from the Xeon > processor : 0 > vendor_id : GenuineIntel > cpu family : 15 > model : 3 > model name : Intel(R)
Xeon(TM) CPU 3.20GHz > stepping : 4 > cpu MHz : 3192.161 > cache size : 1024 KB > physical id : 0 > siblings : 2 > core id : 0 > cpu cores : 1 > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 3 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca > cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm > constant_tsc pni monitor ds_cpl cid xtpr > bogomips : 6390.11 and the Pentium M > processor : 0 > vendor_id : GenuineIntel > cpu family : 6 > model : 13 > model name : Intel(R) Pentium(R) M processor 2.00GHz > stepping : 6 > cpu MHz : 800.000 > cache size : 2048 KB > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 2 > wp : yes > flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov pat > clflush dts acpi mmx fxsr sse sse2 ss tm pbe est tm2 > bogomips : 1601.14 Mark From mark at mitre.org Tue Mar 21 22:20:27 2006 From: mark at mitre.org (Mark Heslep) Date: Tue, 21 Mar 2006 22:20:27 -0500 Subject: [SciPy-user] g77 sse2 problem back(?) with fedora 5 In-Reply-To: <4420BDBF.2020507@mitre.org> References: <4420BDBF.2020507@mitre.org> Message-ID: <4420C27B.2060807@mitre.org> Mark Heslep wrote: > Fix: Removing the 'sse2' flag in ..numpy/distutils/fcompiler/gnu.py > clears the problem. > Well maybe not? scipy.test(9) on the Pentium M laptop gives > Ran 1520 tests in 74.202s > FAILED (failures=5, errors=145) > > With errors like: > ====================================================================== > ERROR: Does the matrix's sum() method work? 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 42, in check_sum > assert_array_equal(self.dat.sum(), self.datsp.sum()) > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > 396, in sum > return (o0 * (self * o1)).A.squeeze() > File "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", line > 138, in __mul__ > return N.dot(self, other) > ValueError: objects are not aligned The Xeon box reports no errors, no failures. > Ran 1081 tests in 84.241s > FAILED (failures=2) Checkdot caused the failure. From ryanlists at gmail.com Tue Mar 21 22:24:36 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Mar 2006 22:24:36 -0500 Subject: [SciPy-user] data acquisition in Python In-Reply-To: <4420B61E.4090305@astraw.com> References: <4420B61E.4090305@astraw.com> Message-ID: OS is probably negotiable. Could be windows or linux. On 3/21/06, Andrew Straw wrote: > Ryan Krauss wrote: > > >Is anyone aware of data acquisition hardware that works well with > >Python? I would like to use Python for buffered analog-to-digital and > >digital-to-analog applications. I would be particularly interested in > >fairly inexpensive solutions. I found one European company that sort > >of claims to support Python, but I didn't know how easy it might be to > >wrap C++ drivers from other hardware manufacturers. > > > > > What OS? Do you mean double buffered (for continuous ongoing sampling) > or is single-shot OK? > > Generally, there's probaby a C library to do what you want, so there's > the option to wrap it. > > For Measurement Computing devices, see > http://www.its.caltech.edu/~astraw/pyul.html . There's a 'Also of > interest' section on that page. I've also got a Pyrex wrapper for Jasper > Warren's linux PMD1208FS code which I may be able to put online if > there's interest... 
Note that none of this stuff is extensively tested, > but it's a start, anyway. > > Comedi comes with a Python interface, but I haven't tried it. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From mark at mitre.org Tue Mar 21 22:26:44 2006 From: mark at mitre.org (Mark Heslep) Date: Tue, 21 Mar 2006 22:26:44 -0500 Subject: [SciPy-user] g77 sse2 problem back(?) with fedora 5 In-Reply-To: <4420C27B.2060807@mitre.org> References: <4420BDBF.2020507@mitre.org> <4420C27B.2060807@mitre.org> Message-ID: <4420C3F4.40501@mitre.org> Sorry - scratch the last note, it was Scipy 4.6 versus 4.8. 4.8 reports the boat load of errors on both Xeon and Pentium M Mark Mark Heslep wrote: > Mark Heslep wrote: > >> Fix: Removing the 'sse2' flag in ..numpy/distutils/fcompiler/gnu.py >> clears the problem. >> >> > Well maybe not? scipy.test(9) on the Pentium M laptop gives > >> Ran 1520 tests in 74.202s >> FAILED (failures=5, errors=145) >> >> >> From a.u.r.e.l.i.a.n at gmx.net Wed Mar 22 03:58:30 2006 From: a.u.r.e.l.i.a.n at gmx.net (aurelian) Date: Wed, 22 Mar 2006 09:58:30 +0100 Subject: [SciPy-user] Puzzling NAN semantics In-Reply-To: <4420A920.4040400@gmail.com> References: <4420A920.4040400@gmail.com> Message-ID: <442111B6.8000909@gmx.net> I get the same behaviour as Robert (the correct one). In [13]: sys.version Out[13]: '2.4.2 (#2, Sep 30 2005, 21:19:01) \n[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)]' In [14]: numpy.__version__ Out[14]: '0.9.5.2044' Johannes From bldrake at adaptcs.com Wed Mar 22 09:07:46 2006 From: bldrake at adaptcs.com (Barry Drake) Date: Wed, 22 Mar 2006 06:07:46 -0800 (PST) Subject: [SciPy-user] data acquisition in Python In-Reply-To: Message-ID: <20060322140746.62266.qmail@web203.biz.mail.re2.yahoo.com> Ryan, A good source for inexpensive A/D boards is http://www.rtd.com/ I've used their hardware in the past on many projects. 
They have drivers for all platforms which you can easily wrap in a .dll, .so, or .rc (Mac, I think). You can then wrap some code in Python that talks to these shared libs. Of course, the reason for the shared libs is that you can then use those for other applications. Barry Drake --- Ryan Krauss wrote: > Is anyone aware of data acquisition hardware that > works well with > Python? I would like to use Python for buffered > analog-to-digital and > digital-to-analog applications. I would be > particularly interested in > fairly inexpensive solutions. I found one European > company that sort > of claims to support Python, but I didn't know how > easy it might be to > wrap C++ drivers from other hardware manufacturers. > > Thanks for your thoughts, > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From david.huard at gmail.com Wed Mar 22 11:39:13 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 22 Mar 2006 11:39:13 -0500 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <87ek0vs20b.fsf@peds-pc311.bsd.uchicago.edu> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <44206894.20007@gmail.com> <1142990801.30961.11.camel@localhost.localdomain> <87ek0vs20b.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <91cf711d0603220839g2c974524h@mail.gmail.com> While fiddling with new code to compute kde, I noticed that gaussian_kde has an unexpected behavior for 1D arrays. For instance : x = rand(1000) g = gaussian_kde(x) g(z) returns the same value no matter what z is. The problem disappears for multidimensional data. The dot product (self.inv_cov, diff) seems to be causing the problem, doing the product only for the first element of diff and returning zeros for the rest. I guess this is not the intended behavior for dot. 
Cheers, David 2006/3/21, John Hunter : > > >>>>> "Cournapeau" == Cournapeau David writes: > > Cournapeau> What is meant by fast gauss ? > > Google is your friend > > http://www.google.com/search?hl=en&q=gauss+transform&btnG=Google+Search > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Wed Mar 22 12:09:44 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 22 Mar 2006 12:09:44 -0500 Subject: [SciPy-user] data acquisition in Python In-Reply-To: <20060322140746.62266.qmail@web203.biz.mail.re2.yahoo.com> References: <20060322140746.62266.qmail@web203.biz.mail.re2.yahoo.com> Message-ID: Thanks Barry, it looks like there are some interesting possibilities with rtd. Have you used any of their hardware from Python? Thanks, Ryan On 3/22/06, Barry Drake wrote: > Ryan, > A good source for inexpensive A/D boards is > > http://www.rtd.com/ > > I've used their hardware in the past on many projects. > They have drivers for all platforms which you can > easily wrap in a .dll, .so, or .rc (Mac, I think). > You can then wrap some code in Python that talks to > these shared libs. Of course, the reason for the > shared libs is that you can then use those for other > applications. > > Barry Drake > > --- Ryan Krauss wrote: > > > Is anyone aware of data acquisition hardware that > > works well with > > Python? I would like to use Python for buffered > > analog-to-digital and > > digital-to-analog applications. I would be > > particularly interested in > > fairly inexpensive solutions. I found one European > > company that sort > > of claims to support Python, but I didn't know how > > easy it might be to > > wrap C++ drivers from other hardware manufacturers. 
> > > > Thanks for your thoughts, > > > > Ryan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From tgrav at mac.com Wed Mar 22 12:17:24 2006 From: tgrav at mac.com (Tommy Grav) Date: Wed, 22 Mar 2006 12:17:24 -0500 Subject: [SciPy-user] Problems installing SciPy Message-ID: I am new to Python and just downloaded ActivePython 2.4.2.10 on my Mac PPC with OS X 10.4. I added the numpy package (0.9.6-py2.4) and it imports fine. But when I try to import scipy (0.4.8-py2.4) I get an error: >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/__init__.py", line 32, in ? from numpy import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/f2py/__init__.py", line 6, in ? import tempfile File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/tempfile.py", line 33, in ? from random import Random as _Random ImportError: cannot import name Random importing random works fine, so I don't understand what the problem is. Can anyone provide any clues? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Wed Mar 22 12:44:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 22 Mar 2006 10:44:35 -0700 Subject: [SciPy-user] Fast Gauss Transform In-Reply-To: <91cf711d0603220839g2c974524h@mail.gmail.com> References: <91cf711d0603201107k7b1f38c7g@mail.gmail.com> <91cf711d0603210652u8f93965y@mail.gmail.com> <44206894.20007@gmail.com> <1142990801.30961.11.camel@localhost.localdomain> <87ek0vs20b.fsf@peds-pc311.bsd.uchicago.edu> <91cf711d0603220839g2c974524h@mail.gmail.com> Message-ID: <44218D03.4080703@ieee.org> David Huard wrote: > While fiddling with new code to compute kde, I noticed that > gaussian_kde has an unexpected behavior for 1D arrays. > > For instance : > > x = rand(1000) > g = gaussian_kde(x) > g(z) > > returns the same value no matter what z is. The problem disappears for > multidimensional data. The dot product (self.inv_cov, diff) seems to > be causing the problem, doing the product only for the first element > of diff and returning zeros for the rest. I guess this is not the > intended behavior for dot. Yet another untested branch of the optimized dot code. Apparently it's been very hard to get this one right after re-writing the scalar-multiplication portion. Scalar multiplication is used in several branches but the parameters must be set just right. You were exposing a (1,1) x (1,N) error that uses BLAS scalar multiplication at the core but the number of multiplies was set to 1 rather than N. I added a test for this case in numpy and fixed the problem. Thanks for the report. -Travis From robert.kern at gmail.com Wed Mar 22 15:19:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Mar 2006 14:19:55 -0600 Subject: [SciPy-user] Problems installing SciPy In-Reply-To: References: Message-ID: <4421B16B.5060101@gmail.com> Tommy Grav wrote: > I am new to Python and just downloaded ActivePython 2.4.2.10 on my Mac > PPC with OS X 10.4. 
> I added the numpy package (0.9.6-py2.4) and it imports fine. But when I > try to import scipy (0.4.8-py2.4) > I get an error: > >>>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py", > line 32, in ? > from numpy import * > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/f2py/__init__.py", > line 6, in ? > import tempfile > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/tempfile.py", > line 33, in ? > from random import Random as _Random > ImportError: cannot import name Random > > importing random works fine, so I don't understand what the problem is. > Can anyone provide any clues? tempfile.py is looking for the standard library's random.py module. I'll bet that the directory you are in has its own random module that is getting imported instead. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tgrav at mac.com Wed Mar 22 15:25:13 2006 From: tgrav at mac.com (Tommy Grav) Date: Wed, 22 Mar 2006 15:25:13 -0500 Subject: [SciPy-user] Problems installing SciPy In-Reply-To: <4421B16B.5060101@gmail.com> References: <4421B16B.5060101@gmail.com> Message-ID: > tempfile.py is looking for the standard library's random.py module. > I'll bet > that the directory you are in has its own random module that is > getting imported > instead. Actually there is no random.py in the directory. However, the error only happens when scipy is imported in the interactive session. When importing it in a script it seems to work fine (I have not really played around using its functions, but the import does not return any errors). 
Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Mar 22 15:39:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Mar 2006 14:39:24 -0600 Subject: [SciPy-user] Problems installing SciPy In-Reply-To: References: <4421B16B.5060101@gmail.com> Message-ID: <4421B5FC.6010703@gmail.com> Tommy Grav wrote: >> tempfile.py is looking for the standard library's random.py module. >> I'll bet >> that the directory you are in has its own random module that is >> getting imported >> instead. > > Actually there is no random.py in the directory. However, the error only > happens > when scipy is imported in the interactive session. When importing it in > a script > it seems to work fine (I have not really played around using its > functions, but the > import does not return any errors). Okay, but where is it coming from since it's not the standard library's random.py? To find out: In [22]: import random In [23]: random.__file__ Out[23]: '/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/random.pyc' -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthew.brett at gmail.com Wed Mar 22 15:39:40 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 22 Mar 2006 12:39:40 -0800 Subject: [SciPy-user] Intermittent fail of scipy.optimize.test Message-ID: <1e2af89e0603221239v37ee2ae1k6f2531c5f2318ed6@mail.gmail.com> Hi, I am running into an odd problem that I have not seen before. 
On two systems, with current numpy and scipy svn, I am getting following error: import scipy.optimize scipy.optimize.test() .... ====================================================================== ERROR: line-search Newton conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 104, in check_ncg full_output=False, disp=False, retall=False) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 986, in fmin_ncg alphak, fc, gc, old_fval = line_search_BFGS(f,xk,pk,gfk,old_fval) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 565, in line_search_BFGS phi_a2 = apply(f,(xk+alpha2*pk,)+args) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 130, in function_wrapper return function(x, *args) File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 31, in func raise RuntimeError, "too many iterations in optimization routine" RuntimeError: too many iterations in optimization routine This seems to occur randomly, about 3 times in ten runs of the test. From tgrav at mac.com Wed Mar 22 15:48:34 2006 From: tgrav at mac.com (Tommy Grav) Date: Wed, 22 Mar 2006 15:48:34 -0500 Subject: [SciPy-user] Problems installing SciPy In-Reply-To: <4421B5FC.6010703@gmail.com> References: <4421B16B.5060101@gmail.com> <4421B5FC.6010703@gmail.com> Message-ID: > Okay, but where is it coming from since it's not the standard > library's > random.py? To find out: > > In [22]: import random > > In [23]: random.__file__ > Out[23]: > '/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ > random.pyc' I am now not able to reproduce the error, so I think the whole problem was something wrong in the installation that was fixed by reinstalling all the packages. Sorry for wasting the bandwidth and thanks for the help! 
Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From arserlom at gmail.com Wed Mar 22 17:19:26 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Wed, 22 Mar 2006 23:19:26 +0100 Subject: [SciPy-user] Help installing SciPy. Message-ID: Hello everybody. I'm trying to install SciPy on an Intel Pentium M Windows XP laptop. I'm using Python 2.4.2. I'm following the instructions posted at http://www.scipy.org/Installing_SciPy/Windows. Step 1 (Getting MinGW): ok. Step 2 (numpy): Downloaded Tortoise SVN: ok. Tried to compile: failure. Had to manually edit the PATH enviroment variable to add the MinGW/bin path (that should have been explained somewhere). Retry to complie: ok, numpy works. Step 3 (scipy): I'm supposed to download some ATLAS binaries from http://old.scipy.org/download/atlasbinaries/winnt/ I'm not sure which one to download, can anybody help (Intel Pentium M Windows XP)? Also, what should I do with those files, just put them in any folder I choose and set the environment variable accordingly? Armando Serrano Lombillo. From pajer at iname.com Wed Mar 22 21:32:50 2006 From: pajer at iname.com (Gary) Date: Wed, 22 Mar 2006 21:32:50 -0500 Subject: [SciPy-user] Help installing SciPy. In-Reply-To: References: Message-ID: <442208D2.5010509@iname.com> Armando Serrano Lombillo wrote: >Hello everybody. > >I'm trying to install SciPy on an Intel Pentium M Windows XP laptop. >I'm using Python 2.4.2. I'm following the instructions posted at >http://www.scipy.org/Installing_SciPy/Windows. > >Step 1 (Getting MinGW): ok. > >Step 2 (numpy): Downloaded Tortoise SVN: ok. Tried to compile: >failure. 
Had to manually edit the PATH enviroment variable to add the >MinGW/bin path (that should have been explained somewhere). > Isn't that in the MinGW docs? >Retry to >complie: ok, numpy works. > >Step 3 (scipy): I'm supposed to download some ATLAS binaries from >http://old.scipy.org/download/atlasbinaries/winnt/ I'm not sure which >one to download, can anybody help (Intel Pentium M Windows XP)? Also, >what should I do with those files, just put them in any folder I >choose and set the environment variable accordingly? > > I think the one you want is WinNT_P4SSE2.zip. (Not 100% sure, but pretty sure.) Then put it anywhere and set the environment variable. For example, I have all the ATLAS files in c:\MinGW\lib\atlas, so I set ATLAS = c:\MinGW\lib\atlas hth, gary >Armando Serrano Lombillo. > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From travis.brady at gmail.com Thu Mar 23 10:51:53 2006 From: travis.brady at gmail.com (Travis Brady) Date: Thu, 23 Mar 2006 07:51:53 -0800 Subject: [SciPy-user] 64 bit Address Space Limitations In-Reply-To: <44170C23.4040807@ieee.org> References: <44170C23.4040807@ieee.org> Message-ID: Regarding the ability to do this with 2.4, what types of slicing would run into problems? I might consider sticking with 2.4 for now if the slicing and buffer issues mentioned can be coded around. Travis On 3/14/06, Travis Oliphant wrote: > > Mark W. wrote: > > Hi. We are converting our systems to a 64-bit platform to hopefully take > > advantage of larger address spaces for arrays and such. Can anyone tell > me - > > or point me to documentation which tells - how much address space for an > > array I could hope to get? We have a memory error on the 32-bit machines > > when we try to load a large array and we're hoping this will get around > that > > 2 Gig (or less) limit. > > > This is finally possible using Python 2.5 and numpy. 
But, you need to > use Python 2.5 which is only available as an SVN check-out and still has > a few issues. Python 2.5 should be available as a release in the summer. > > NumPy allows creation of larger arrays even with Python 2.4 but there > will be some errors in some uses of slicing, the buffer interface, and > memory-mapped arrays because of inherent limitations to Python that were > only recently removed. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter at mapledesign.co.uk Thu Mar 23 11:10:14 2006 From: peter at mapledesign.co.uk (Peter Bowyer) Date: Thu, 23 Mar 2006 16:10:14 +0000 Subject: [SciPy-user] The "Performance Python with Weave" article Message-ID: <7.0.1.0.0.20060323155841.023b7ec0@mapledesign.co.uk> Hi, Referring to http://old.scipy.org/documentation/weave/weaveperformance.html (I can't find it on the new site - weave appears to have vanished), I note that there is no date on it and neither are there any details of the versions of the libraries used. Could anyone enlighten me as to how accurate it is, as I was planning to reference it in my dissertation writeup as an example of how Python code can be speeded up if required. Thanks, Peter -- Maple Design - quality web design and programming http://www.mapledesign.co.uk From bldrake at adaptcs.com Thu Mar 23 11:41:14 2006 From: bldrake at adaptcs.com (Barry Drake) Date: Thu, 23 Mar 2006 08:41:14 -0800 (PST) Subject: [SciPy-user] data acquisition in Python In-Reply-To: Message-ID: <20060323164114.91134.qmail@web201.biz.mail.re2.yahoo.com> Ryan, No, haven't used RTD with Python; I was using LabVIEW (National Instruments) at the time. However, I had to write the dlls and wrappers in a similar way one would using Python: vhll (Python, LabVIEW) <- C/C++ wrapper <- (dll, so, or rc) <- RTD drivers.
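That calling chain can be sketched with ctypes from the standard library. The snippet below loads the C math library purely as a stand-in for a vendor driver's .so/.dll (the real driver name and entry points would of course differ):

```python
import ctypes
import ctypes.util

# Load a shared library by name. A vendor driver (.so/.dll) built from
# the C/C++ wrapper layer would be loaded the same way; libm here is
# only a stand-in so the example actually runs anywhere.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts Python floats correctly
# instead of passing them as ints.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(16.0))  # 4.0
```

The same pattern (load the shared lib, declare argtypes/restype, call) applies to any C driver API exposed through a shared library.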
Regards, Barry --- Ryan Krauss wrote: > Thanks Barry, it looks like there are some > interesting possibilities > with rtd. Have you used any of their hardware from > Python? > > Thanks, > > Ryan > > On 3/22/06, Barry Drake wrote: > > Ryan, > > A good source for inexpensive A/D boards is > > > > http://www.rtd.com/ > > > > I've used their hardware in the past on many > projects. > > They have drivers for all platforms which you can > > easily wrap in a .dll, .so, or .rc (Mac, I think). > > You can then wrap some code in Python that talks > to > > these shared libs. Of course, the reason for the > > shared libs is that you can then use those for > other > > applications. > > > > Barry Drake > > > > --- Ryan Krauss wrote: > > > > > Is anyone aware of data acquisition hardware > that > > > works well with > > > Python? I would like to use Python for buffered > > > analog-to-digital and > > > digital-to-analog applications. I would be > > > particularly interested in > > > fairly inexpensive solutions. I found one > European > > > company that sort > > > of claims to support Python, but I didn't know > how > > > easy it might be to > > > wrap C++ drivers from other hardware > manufacturers. > > > > > > Thanks for your thoughts, > > > > > > Ryan > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From david.huard at gmail.com Thu Mar 23 11:51:57 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 23 Mar 2006 11:51:57 -0500 Subject: [SciPy-user] Experience with freeze and numpy someone ? 
Message-ID: <91cf711d0603230851x60a58c66o@mail.gmail.com> Hi, I'd like to freeze some code in order to run it on a 64 bit machine on which scipy is not installed. My first questions is : Is this gonna work if I freeze it on a 32 bit machine ? Second question : Using the test code #test.py from numpy import mean, rand x = rand(1000) print mean(x) and >python freeze test.py >make I get the following error message when executing ./test No scipy-style subpackage 'testing' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'core' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'lib' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'linalg' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'dft' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'random' found in n:u:m:p:y. Ignoring. No scipy-style subpackage 'f2py' found in n:u:m:p:y. Ignoring. Traceback (most recent call last): File "test.py", line 1, in ? from numpy import mean, rand File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 44, in ? __doc__ += pkgload.get_pkgdocs() File "/usr/lib/python2.4/site-packages/numpy/_import_tools.py", line 320, in get_pkgdocs retstr = self._format_titles(titles) +\ File "/usr/lib/python2.4/site-packages/numpy/_import_tools.py", line 283, in _format_titles max_length = max(lengths) ValueError: max() arg is an empty sequence Is this a problem with freeze or with numpy ? I tried to do the same with cx_Freeze and had basically the same error message. Using the latest tarball. Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From curtis at lpl.arizona.edu Thu Mar 23 12:00:44 2006 From: curtis at lpl.arizona.edu (Curtis Cooper) Date: Thu, 23 Mar 2006 10:00:44 -0700 (MST) Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 35 In-Reply-To: Message-ID: > Message: 5 > Date: Thu, 23 Mar 2006 16:10:14 +0000 > From: Peter Bowyer > Subject: [SciPy-user] The "Performance Python with Weave" article > To: "SciPy Users List" > Message-ID: <7.0.1.0.0.20060323155841.023b7ec0 at mapledesign.co.uk> > Content-Type: text/plain; charset="us-ascii"; format=flowed > > Hi, > > Referring to > http://old.scipy.org/documentation/weave/weaveperformance.html (I > can't find it on the new site - weave appears to have vanished), I > note that there is no date on it and neither are there any details of > the versions of the libraries uses. Could anyone enlighten me as to > how accurate it is, as I was planning to reference it in my > dissertation writeup as an example of how Python code can be speeded > up if required. > I use weave in my own projects. For computationally intensive loops that cannot be written with NumPy's vector operations (use that if you can), switching to C++ often gives you ~100 times performance improvement above the same algorithm written in Python. Don't do it if you don't need to, but for performance critical code that must be executed repeatedly in your calculations, weave is a great solution. Cheers, Curtis From afraser at lanl.gov Thu Mar 23 12:09:16 2006 From: afraser at lanl.gov (afraser) Date: Thu, 23 Mar 2006 10:09:16 -0700 Subject: [SciPy-user] multiplying sparse matrices in scipy-0.4.8? Message-ID: <87odzx3x6r.fsf@hmm.lanl.gov> I'm using numpy-0.9.6 and scipy-0.4.8. When I multiply sparse matrices, I often get the error: "ValueError: nzmax must not be less than nnz". The error happens when the matrices are only sort of sparse. Any advice appreciated. 
Here is a sample program and Traceback: ============================================================ import numpy, scipy, scipy.sparse, random L = 30 frac = .3 random.seed(0) # make runs repeatable A = scipy.sparse.csc_matrix((L,2)) #A = numpy.asmatrix(numpy.zeros((L,2),numpy.Float)) for i in xrange(L): for j in xrange(2): r = random.random() if r < frac: A[i,j] = r/frac B = A*A.T print B ============================================================ Traceback (most recent call last): File "", line 15, in ? File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 658, in __mul__ return self.dot(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 306, in dot result = self.matmat(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 824, in matmat return csc_matrix((c, rowc, ptrc), dims=(M, N)) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 558, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 574, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ============================================================ Andy From schofield at ftw.at Thu Mar 23 12:29:59 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 23 Mar 2006 18:29:59 +0100 Subject: [SciPy-user] maxentropy In-Reply-To: <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> Message-ID: <4422DB17.40402@ftw.at> On 21/03/2006, at 9:53 PM, Matthew Cooper wrote: > > Hi Ed, > > I am playing around with the code on some more small examples and > everything has been fine. 
The thing that will hold me back from > testing on larger datasets is the F matrix which thus far requires the > space of (context,label) pairs to be enumerable. I know that > internally you are using a sparse representation for this matrix. Can > I initialize the model with a sparse matrix also? This also requires > changes with the indices_context parameter in the examples. Hi Matt, Yes, good point. I'd conveniently forgotten about this little problem ;) It turns out scipy's sparse matrices need extending to support this. I've made some changes already (to the ejs branch); the next requirement is more flexible slicing support. I added partial slicing support (for slicing an entire row of a lil_matrix) a couple of months ago, but this isn't good enough here, although it shouldn't be too hard to extend. One upside of using slicing, rather than fancy indexing as before (which some of scipy's sparse matrix formats do already support), is that the indices_context parameter can then go away completely; we'll just expect the features indices to be ordered contiguously, which I think is perfectly reasonable here. I've checked in my latest code (into the ejs branch) in case you want to follow my progress or work on it yourself. But the conditional maxent examples no longer work, so avoid doing 'svn update' if you want to keep the working version for now... > I see that you also have an unconditional bigmodel class that seems > related, but I'm not sure what would need to be changed. Actually, the definition of 'big' here is 'requires Monte Carlo simulation' -- for example, continuous models in many dimensions or models on very large discrete spaces, such as the space of all possible sentences. I'll give some more thought to the rest of your post and get back to you in a few more days... 
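To make the row-slicing idea concrete, here is a minimal sketch against the scipy.sparse lil_matrix interface (written for the current API; the exact behaviour in the ejs branch differs):

```python
import numpy as np
from scipy.sparse import lil_matrix

# lil_matrix supports cheap row-wise assignment and row slicing, which
# is the access pattern the conditional models' feature matrices need.
F = lil_matrix((3, 4))
F[1, :] = np.arange(4)        # assign an entire row at once
row = F[1, :].toarray()[0]    # slice the row back out as a dense vector

print(row.tolist())  # [0.0, 1.0, 2.0, 3.0]
```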
-- Ed From arserlom at gmail.com Thu Mar 23 15:42:17 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Thu, 23 Mar 2006 21:42:17 +0100 Subject: [SciPy-user] Help installing SciPy. In-Reply-To: <442208D2.5010509@iname.com> References: <442208D2.5010509@iname.com> Message-ID: [...] > >Step 2 (numpy): Downloaded Tortoise SVN: ok. Tried to compile: > >failure. Had to manually edit the PATH enviroment variable to add the > >MinGW/bin path (that should have been explained somewhere). > > > Isn't that in the MinGW docs? [...] If I have to read the MinGW docs just to install SciPy, then something is going wrong. Really, the installation instruction in the SciPy page should contain everything you need to know to install SciPy. Would it be a good idea if I updated that in the installation page? [...] > I think the one you want is WinNT_P4SSE2.zip. (Not 100% sure, but > pretty sure.) > Then put it anywhere and set the environment variable. > > For example, I have all the ATLAS files in c:\MinGW\lib\atlas, so I > set ATLAS = c:\MinGW\lib\atlas [...] Ok, did it and scipy compiled (I had to restart my computer after setting the environment variable). If tested it and it works. [...] > gary [...] Thanks. Step 4 (Do science): ok :) From robert.kern at gmail.com Thu Mar 23 15:51:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 23 Mar 2006 14:51:38 -0600 Subject: [SciPy-user] Help installing SciPy. In-Reply-To: References: <442208D2.5010509@iname.com> Message-ID: <44230A5A.4050608@gmail.com> Armando Serrano Lombillo wrote: > [...] > >>>Step 2 (numpy): Downloaded Tortoise SVN: ok. Tried to compile: >>>failure. Had to manually edit the PATH enviroment variable to add the >>>MinGW/bin path (that should have been explained somewhere). >> >>Isn't that in the MinGW docs? > > [...] > > If I have to read the MinGW docs just to install SciPy, then something > is going wrong. 
Really, the installation instruction in the SciPy page > should contain everything you need to know to install SciPy. Would it > be a good idea if I updated that in the installation page? Yes. If it's information that would have been helpful to you, it will probably be helpful to others, too. But you don't have to ask permission from us; it's a Wiki. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pajer at iname.com Thu Mar 23 16:11:50 2006 From: pajer at iname.com (Gary) Date: Thu, 23 Mar 2006 16:11:50 -0500 Subject: [SciPy-user] Help installing SciPy. In-Reply-To: References: <442208D2.5010509@iname.com> Message-ID: <44230F16.2040509@iname.com> Armando Serrano Lombillo wrote: >[...] > > >>>Step 2 (numpy): Downloaded Tortoise SVN: ok. Tried to compile: >>>failure. Had to manually edit the PATH enviroment variable to add the >>>MinGW/bin path (that should have been explained somewhere). >>> >>> >>> >>Isn't that in the MinGW docs? >> >> >[...] > >If I have to read the MinGW docs just to install SciPy, then something >is going wrong. Really, the installation instruction in the SciPy page >should contain everything you need to know to install SciPy. Would it >be a good idea if I updated that in the installation page? > > By all means. I guess I had set the path long enough ago that I forgot. Since the procedure is fresh in your mind, and I'd have to figure it out again, why don't you go ahead and make the change. >[...] > > >>I think the one you want is WinNT_P4SSE2.zip. (Not 100% sure, but >>pretty sure.) >>Then put it anywhere and set the environment variable. >> >>For example, I have all the ATLAS files in c:\MinGW\lib\atlas, so I >>set ATLAS = c:\MinGW\lib\atlas >> >> >[...] > >Ok, did it and scipy compiled (I had to restart my computer after >setting the environment variable). 
I tested it and it works. > > Hmm. I never had to restart the computer. If you change the environment variable using the WinXP Control Panel facility, it should be enough to open a new command line window (maybe you tried using a command line window that was already open when you changed the environment variable). If you use the DOS-style "set ATLAS = ..." interactively, then you should be able to continue immediately in that window. Every time you recompile, you would have to execute "set ATLAS = ..." again. I do the latter. Since I don't compile very often, I avoid cluttering the environment. thanks, gary >[...] > > >>gary >> >> >[...] > >Thanks. > >Step 4 (Do science): ok :) >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From travis at enthought.com Thu Mar 23 16:56:54 2006 From: travis at enthought.com (Travis N. Vaught) Date: Thu, 23 Mar 2006 15:56:54 -0600 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.3 Released Message-ID: <442319A6.7040108@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.3 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 0.9.3 Release Notes: -------------------- Version 0.9.3 of Python Enthought Edition includes an update to version 1.0.3 of the Enthought Tool Suite (ETS) Package-- you can look at the release notes for this ETS version here. Other major changes include: * upgrade to VTK 5.0 * addition of docutils * addition of numarray * addition of pysvn. Also, MayaVi issues should be fixed in this release.
Full Release Notes are here: http://code.enthought.com/release/changelog-enthon0.9.3.shtml About Python Enthought Edition: ------------------------------- Python 2.3.5, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com From arserlom at gmail.com Thu Mar 23 17:10:22 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Thu, 23 Mar 2006 23:10:22 +0100 Subject: [SciPy-user] Help installing SciPy. In-Reply-To: <44230F16.2040509@iname.com> References: <442208D2.5010509@iname.com> <44230F16.2040509@iname.com> Message-ID: I updated the wiki to include the MinGW environment variable instructions. I also added a link to http://tortoisesvn.sourceforge.net/. Armando Serrano Lombillo
From prabhu_r at users.sf.net Fri Mar 24 01:24:25 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Fri, 24 Mar 2006 11:54:25 +0530 Subject: [SciPy-user] The "Performance Python with Weave" article In-Reply-To: <7.0.1.0.0.20060323155841.023b7ec0@mapledesign.co.uk> References: <7.0.1.0.0.20060323155841.023b7ec0@mapledesign.co.uk> Message-ID: <17443.37017.775159.526526@prpc.aero.iitb.ac.in> >>>>> "Peter" == Peter Bowyer writes: Peter> Hi, Referring to Peter> http://old.scipy.org/documentation/weave/weaveperformance.html Peter> (I can't find it on the new site - weave appears to have Peter> vanished), I note that there is no date on it and neither Peter> are there any details of the versions of the libraries Peter> uses. Could anyone enlighten me as to how accurate it is, Peter> as I was planning to reference it in my dissertation Peter> writeup as an example of how Python code can be speeded up Peter> if required. I just added the old docs into the new site. http://www.scipy.org/PerformancePython It still needs to be "upgraded" to work with numpy and the code isn't there yet. I'll try and do it one of these weekends after testing it out on numpy.
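For anyone curious in the meantime, the kernel those timings revolve around is a Jacobi relaxation sweep for the Laplace equation; the slice-based NumPy version looks roughly like this (a sketch, not the article's exact code):

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi relaxation sweep for the Laplace equation, written
    with array slicing instead of an explicit double loop over the
    interior grid points -- the vectorization the article benchmarks."""
    out = u.copy()
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return out

# Tiny demonstration grid: the single interior point averages its
# four neighbours, 0.25 * (1 + 7 + 3 + 5) = 4.0.
u = np.arange(9.0).reshape(3, 3)
print(jacobi_step(u)[1, 1])  # 4.0
```

The explicit-loop version of the same update is what weave (or plain C) replaces when the loop body itself is too irregular to vectorize.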
cheers, prabhu From nwagner at iam.uni-stuttgart.de Fri Mar 24 03:31:38 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Mar 2006 09:31:38 +0100 Subject: [SciPy-user] Handling NaN in array's Message-ID: <4423AE6A.1070201@iam.uni-stuttgart.de> >>> b=rand(2) >>> linalg.cg(a,b) (array([ nan, nan]), 1) >>> linalg.cgs(a,b) (array([ nan, nan]), 1) >>> linalg.solve(a,b) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", line 103, in solve a1, b1 = map(asarray_chkfinite,(a,b)) File "/usr/lib64/python2.4/site-packages/numpy/lib/function_base.py", line 167, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs Iterative solvers are inured to NaNs. Also >>> linalg.hessenberg(a) array([[ nan, 1. ], [ 1. , 0. ]]) >>> a array([[ nan, 1. ], [ 1. , 0. ]]) Nils From lars.bittrich at googlemail.com Fri Mar 24 09:46:32 2006 From: lars.bittrich at googlemail.com (Lars Bittrich) Date: Fri, 24 Mar 2006 15:46:32 +0100 Subject: [SciPy-user] Problems with porting code using weave Message-ID: <200603241546.32934.lars.bittrich@googlemail.com> Hi, I just tried to port my code to the new SciPy (scipy 0.4.7.1715 / numpy 0.9.7.2248). I got some problems with weave which are usually hard to track down. So I looked at the tests: scipy.test(10, 10) runs without any errors. But I was really surprised when I got a few errors running: scipy.weave.test(1) Found 16 tests for scipy.weave.slice_handler Found 0 tests for scipy.weave.c_spec Found 9 tests for scipy.weave.build_tools Found 0 tests for scipy.weave.inline_tools Found 1 tests for scipy.weave.ast_tools Found 0 tests for scipy.weave.wx_spec Found 2 tests for scipy.weave.blitz_tools building extensions here: /home/bittrich/.python23_compiled/m15 Found 1 tests for scipy.weave.ext_tools Found 3 tests for scipy.weave.standard_array_spec 
Found 26 tests for scipy.weave.catalog Found 74 tests for scipy.weave.size_check Found 0 tests for __main__ ................warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations .....F....copying /home/bittrich/.python23_compiled/linux223compiled_catalog -> /tmp/tmpIvzGBO copying /tmp/tmpIvzGBO/linux223compiled_catalog -> /home/bittrich/.python23_compiled .........copying /home/bittrich/.python23_compiled/linux223compiled_catalog -> /tmp/tmpvpgFD0 copying /tmp/tmpvpgFD0/linux223compiled_catalog -> /home/bittrich/.python23_compiled .copying /home/bittrich/.python23_compiled/linux223compiled_catalog -> /tmp/tmp2Mm4v7 copying /tmp/tmp2Mm4v7/linux223compiled_catalog -> /home/bittrich/.python23_compiled ............removing '/tmp/tmp2vO62Qcat_test' (and everything under it) Exception bsddb._db.DBNoSuchFileError: in ignored .removing '/tmp/tmpuIJkqocat_test' (and everything under it) .............................E..E........E................EEEE............. ====================================================================== ERROR: check_1d_3 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 207, in check_1d_3 
if nx.which[0] != "numarray": AttributeError: 'module' object has no attribute 'which' ====================================================================== ERROR: check_1d_6 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 214, in check_1d_6 if nx.which[0] != "numarray": AttributeError: 'module' object has no attribute 'which' ====================================================================== ERROR: through a bunch of different indexes at it for good measure. ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 265, in check_1d_random self.generic_1d('a[%s:%s:%s]' %(beg,end,step)) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 174, in generic_1d self.generic_wrap(a,expr) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 164, in generic_wrap desired = array(eval(expr).shape) File "", line 0, in ? ValueError: slice step cannot be zero ====================================================================== ERROR: through a bunch of different indexes at it for good measure. ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 289, in check_2d_random self.generic_2d(expr) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 177, in generic_2d self.generic_wrap(a,expr) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 164, in generic_wrap desired = array(eval(expr).shape) File "", line 0, in ? 
ValueError: slice step cannot be zero ====================================================================== ERROR: through a bunch of different indexes at it for good measure. ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 303, in check_3d_random self.generic_3d(expr) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 180, in generic_3d self.generic_wrap(a,expr) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 164, in generic_wrap desired = array(eval(expr).shape) File "", line 0, in ? ValueError: slice step cannot be zero ====================================================================== ERROR: check_calculated_index (scipy.weave.tests.test_size_check.test_expressions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 410, in check_calculated_index size_check.check_expr(expr,locals()) File "/usr/lib/python2.3/site-packages/weave/size_check.py", line 52, in check_expr exec(expr,values) File "", line 1, in ? NameError: name 'a' is not defined ====================================================================== ERROR: check_calculated_index2 (scipy.weave.tests.test_size_check.test_expressions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", line 415, in check_calculated_index2 size_check.check_expr(expr,locals()) File "/usr/lib/python2.3/site-packages/weave/size_check.py", line 52, in check_expr exec(expr,values) File "", line 1, in ? 
NameError: name 'a' is not defined ====================================================================== FAIL: check_type_match_array (scipy.weave.tests.test_standard_array_spec.test_array_converter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_standard_array_spec.py", line 40, in check_type_match_array assert(s.type_match(arange(4))) AssertionError ---------------------------------------------------------------------- Ran 132 tests in 0.804s FAILED (failures=1, errors=7) I wondered why there is no such test already in scipy.test(10, 10). I also tried scipy.weave.test(10, 10). That test really takes a long time and breaks with a segmentation fault. Using gdb I got: [...] None .file changed None .file changed None after and after2 should be equal in the following before, after, after2: 2 3 3 .file changed None .file changed None .file changed None .file changed None hash: 123 .file changed None ..file changed None /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7942.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7942.cpp:648: error: parse error before `!' token /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7942.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7942.cpp:648: error: parse error before `!' token Efile changed None /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7943.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7943.cpp:648: error: parse error before `!' token /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7943.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_47e62d08f4ce8af8a6c67863d921b7943.cpp:648: error: parse error before `!' token Efile changed None .file changed None .file changed None .file changed None .file changed None .file changed None .file changed None .file changed None Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 1075425408 (LWP 23435)] 0x400cde50 in clearerr () from /lib/tls/libc.so.6 (gdb) (gdb) bt #0 0x400cde50 in clearerr () from /lib/tls/libc.so.6 #1 0x08079725 in PyDict_DelItemString () #2 0x0807b5b5 in PyObject_Print () #3 0x4363f962 in compiled_func (self=0x0, args=0x0) at object.h:175 #4 0x080fde6a in PyCFunction_Call () #5 0x0805b989 in PyObject_Call () #6 0x080ab5c7 in PyEval_CallObjectWithKeywords () #7 0x080a0df6 in _PyBuiltin_Init () #8 0x080fde6a in PyCFunction_Call () #9 0x080ab834 in PyEval_CallObjectWithKeywords () #10 0x080a9bee in Py_MakePendingCalls () #11 0x080ab96d in PyEval_CallObjectWithKeywords () #12 0x080ab72c in PyEval_CallObjectWithKeywords () #13 0x080a9bee in Py_MakePendingCalls () #14 0x080aa77c in PyEval_EvalCodeEx () #15 0x080ab8e9 in PyEval_CallObjectWithKeywords () #16 0x080ab72c in PyEval_CallObjectWithKeywords () #17 0x080a9bee in Py_MakePendingCalls () #18 0x080aa77c in PyEval_EvalCodeEx () #19 0x080ab8e9 in PyEval_CallObjectWithKeywords () #20 0x080ab72c in PyEval_CallObjectWithKeywords () #21 0x080a9bee in Py_MakePendingCalls () #22 0x080aa77c in PyEval_EvalCodeEx () #23 0x080fd9b7 in PyStaticMethod_New () #24 0x0805b989 in PyObject_Call () #25 0x080623d8 in PyMethod_Fini () #26 0x0805b989 in PyObject_Call () #27 0x080aba52 in PyEval_CallObjectWithKeywords () #28 0x080ab6b9 in PyEval_CallObjectWithKeywords () #29 0x080a9bee in Py_MakePendingCalls () #30 0x080aa77c in PyEval_EvalCodeEx () #31 0x080fd9b7 in PyStaticMethod_New () #32 0x0805b989 in PyObject_Call () #33 0x080623d8 in PyMethod_Fini () #34 0x0805b989 in PyObject_Call 
() #35 0x0808e6fb in _PyObject_SlotCompare () #36 0x0805b989 in PyObject_Call () #37 0x080aba52 in PyEval_CallObjectWithKeywords () #38 0x080ab6b9 in PyEval_CallObjectWithKeywords () #39 0x080a9bee in Py_MakePendingCalls () #40 0x080aa77c in PyEval_EvalCodeEx () #41 0x080fd9b7 in PyStaticMethod_New () #42 0x0805b989 in PyObject_Call () #43 0x080623d8 in PyMethod_Fini () #44 0x0805b989 in PyObject_Call () #45 0x0808e6fb in _PyObject_SlotCompare () #46 0x0805b989 in PyObject_Call () #47 0x080aba52 in PyEval_CallObjectWithKeywords () #48 0x080ab6b9 in PyEval_CallObjectWithKeywords () #49 0x080a9bee in Py_MakePendingCalls () I am using Debian Sarge and have already installed the patch glibc package. So there are no problems with weave while using old scipy (debian stable package). Best regards, Lars From oliphant at ee.byu.edu Fri Mar 24 14:22:29 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 24 Mar 2006 12:22:29 -0700 Subject: [SciPy-user] Problems with porting code using weave In-Reply-To: <200603241546.32934.lars.bittrich@googlemail.com> References: <200603241546.32934.lars.bittrich@googlemail.com> Message-ID: <442446F5.1040507@ee.byu.edu> Lars Bittrich wrote: >Hi, > >I just tried to port my code to the new SciPy (scipy 0.4.7.1715 / numpy >0.9.7.2248). I got some problems with weave which are usually hard to track >down. So I looked at the tests: > > >scipy.test(10, 10) > >runs without any errors. 
But I was really surprised when I got a few errors >running: > >scipy.weave.test(1) > > Found 16 tests for scipy.weave.slice_handler > Found 0 tests for scipy.weave.c_spec > Found 9 tests for scipy.weave.build_tools > Found 0 tests for scipy.weave.inline_tools > Found 1 tests for scipy.weave.ast_tools > Found 0 tests for scipy.weave.wx_spec > Found 2 tests for scipy.weave.blitz_tools >building extensions here: /home/bittrich/.python23_compiled/m15 > Found 1 tests for scipy.weave.ext_tools > Found 3 tests for scipy.weave.standard_array_spec > Found 26 tests for scipy.weave.catalog > Found 74 tests for scipy.weave.size_check > Found 0 tests for __main__ >................warning: specified build_dir '_bad_path_' does not exist or is >not writable. Trying default locations >...warning: specified build_dir '..' does not exist or is not writable. Trying >default locations >..warning: specified build_dir '_bad_path_' does not exist or is not writable. >Trying default locations >...warning: specified build_dir '..' does not exist or is not writable. Trying >default locations >.....F....copying /home/bittrich/.python23_compiled/linux223compiled_catalog >-> /tmp/tmpIvzGBO >copying /tmp/tmpIvzGBO/linux223compiled_catalog >-> /home/bittrich/.python23_compiled >.........copying /home/bittrich/.python23_compiled/linux223compiled_catalog >-> /tmp/tmpvpgFD0 >copying /tmp/tmpvpgFD0/linux223compiled_catalog >-> /home/bittrich/.python23_compiled >.copying /home/bittrich/.python23_compiled/linux223compiled_catalog >-> /tmp/tmp2Mm4v7 >copying /tmp/tmp2Mm4v7/linux223compiled_catalog >-> /home/bittrich/.python23_compiled >............removing '/tmp/tmp2vO62Qcat_test' (and everything under it) >Exception bsddb._db.DBNoSuchFileError: at 0x4a279bec> in ignored >.removing '/tmp/tmpuIJkqocat_test' (and everything under it) >.............................E..E........E................EEEE............. 
>====================================================================== >ERROR: check_1d_3 >(scipy.weave.tests.test_size_check.test_dummy_array_indexing) >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/opt/cp/lib/python2.3/site-packages/scipy/weave/tests/test_size_check.py", >line 207, in check_1d_3 > if nx.which[0] != "numarray": >AttributeError: 'module' object has no attribute 'which' > > > These weave tests have apparently not been checked because they are not updated. Why don't you let us know what problems you are having with weave directly. I've fixed the tests in SVN for this particular problem now. >I wondered why there is no such test already in scipy.test(10, 10). I also >tried scipy.weave.test(10, 10). That test really takes a long time and breaks >with a segmentation fault. Using gdb I got: > > weave is tested separately. There may still be lingering issues with weave as my once-twice over of it may have missed a couple of things in specific corners. Please let us know what trouble you are having. -Travis From adiril at mynet.com Sat Mar 25 09:26:34 2006 From: adiril at mynet.com (adiril) Date: Sat, 25 Mar 2006 16:26:34 +0200 (EET) Subject: [SciPy-user] Ynt: SciPy-user Digest, Vol 31, Issue 28 Message-ID: <1726.85.98.50.102.1143296794.mynet@webmail38.mynet.com> An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sat Mar 25 10:03:44 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 25 Mar 2006 15:03:44 +0000 Subject: [SciPy-user] Segfault on ndimage.test() - the long odyssey that is 64 bit Message-ID: <1e2af89e0603250703ja6c509s7224abbb4bcb7d5d@mail.gmail.com> Hi, Sorry to keep bombarding the list with 64-bit troubles. In brief - scipy.ndimage.test() segfaults on my x86-64 P4 system, but passes on other x86-32 systems. 
After many iterations of gcc versions and compile flags, I successfully compiled scipy / ATLAS on my 64bit FC4 P4 system - in the end using gcc / g77 3.4.6. When running scipy.test(), I get a segfault. This turns out to be from scipy.ndimage.test(). I've appended a backtrace from gdb in the hope that it's useful. I have an older Redhat P4 x86-32 ES3 system for which all tests pass - so this issue seems to be specific to 64 bit. It may also be specific to the 64 bit compile of python - on an almost identical ubuntu 64 bit platform, but with a 32-bit python (as standard) - I get a different set of errors (also attached as ndimag_test_out.txt) - I assume this is expected? Has anyone had the same trouble? Is there anyone out there running a P4 / Python 64 bit / latest SVN numpy,scipy / x86-64 bit linux system that isn't getting this behavior? I've attached my ATLAS and LAPACK make includes for reference, Best, Matthew gdb output: >>> import scipy.ndimage >>> scipy.ndimage.test() Found 397 tests for scipy.ndimage Found 0 tests for __main__ ............................................................................................................................................E....E.E.................................................................................. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912496295936 (LWP 17678)] 0x00002aaaafbfb3f5 in NI_Histogram (input=0x821310, labels=0x0, min_label=-1, max_label=0, indices=0x0, n_results=1, histograms=0x822c00, min=0, max=10, nbins=210453397514) at Lib/ndimage/src/ni_measure.c:752 752 Lib/ndimage/src/ni_measure.c: No such file or directory. in Lib/ndimage/src/ni_measure.c (gdb) bt #0 0x00002aaaafbfb3f5 in NI_Histogram (input=0x821310, labels=0x0, min_label=-1, max_label=0, indices=0x0, n_results=1, histograms=0x822c00, min=0, max=10, nbins=210453397514) at Lib/ndimage/src/ni_measure.c:752 #1 0x00002aaaafbf0ee4 in Py_Histogram (obj=Variable "obj" is not available. 
) at Lib/ndimage/src/nd_image.c:1103 #2 0x00000031b8b8da11 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #3 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #4 0x00000031b8b8d6a1 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #5 0x00000031b8b8d775 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #6 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #7 0x00000031b8b49922 in PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #8 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #9 0x00000031b8b8c164 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #10 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #11 0x00000031b8b49922 in PyFunction_SetClosure () ---Type to continue, or q to quit--- from /usr/lib64/libpython2.4.so.1.0 #12 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #13 0x00000031b8b3e455 in PyMethod_Fini () from /usr/lib64/libpython2.4.so.1.0 #14 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #15 0x00000031b8b8a4a0 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #16 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #17 0x00000031b8b49922 in PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #18 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #19 0x00000031b8b3e455 in PyMethod_Fini () from /usr/lib64/libpython2.4.so.1.0 #20 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #21 0x00000031b8b6dd15 in _PyObject_SlotCompare () from /usr/lib64/libpython2.4.so.1.0 #22 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #23 0x00000031b8b8a4a0 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #24 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #25 0x00000031b8b49922 in 
PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #26 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #27 0x00000031b8b8c164 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #28 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #29 0x00000031b8b49922 in PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #30 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #31 0x00000031b8b3e455 in PyMethod_Fini () from /usr/lib64/libpython2.4.so.1.0 #32 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #33 0x00000031b8b6dd15 in _PyObject_SlotCompare () from /usr/lib64/libpython2.4.so.1.0 #34 0x00000031b8b3783b in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #35 0x00000031b8b8a4a0 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #36 0x00000031b8b8d775 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #37 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #38 0x00000031b8b8d6a1 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #39 0x00000031b8b8e238 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #40 0x00000031b8b8e471 in PyEval_EvalCode () ---Type to continue, or q to quit--- from /usr/lib64/libpython2.4.so.1.0 #41 0x00000031b8ba8801 in PyErr_Display () from /usr/lib64/libpython2.4.so.1.0 #42 0x00000031b8ba9a07 in PyRun_InteractiveOneFlags () from /usr/lib64/libpython2.4.so.1.0 #43 0x00000031b8ba9ae4 in PyRun_InteractiveLoopFlags () from /usr/lib64/libpython2.4.so.1.0 #44 0x00000031b8ba9fdf in PyRun_AnyFileExFlags () from /usr/lib64/libpython2.4.so.1.0 #45 0x00000031b8baf313 in Py_Main () from /usr/lib64/libpython2.4.so.1.0 #46 0x00000031b3e1c3cf in __libc_start_main () from /lib64/libc.so.6 #47 0x0000000000400689 in _start () #48 0x00007ffffff6c718 in ?? () #49 0x0000000000000000 in ?? 
() -------------- next part -------------- A non-text attachment was scrubbed... Name: make.inc Type: application/octet-stream Size: 1420 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Make.Linux_gcc34 Type: application/octet-stream Size: 6220 bytes Desc: not available URL: -------------- next part -------------- Found 397 tests for scipy.ndimage Found 0 tests for __main__ ............................................................................................................................................E....E.E......................................................................................................................................................................................................................................................... ====================================================================== ERROR: brute force distance transform 4 ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 3443, in test_distance_transform_bf04 return_distances = False, return_indices = True, indices = ft) File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/morphology.py", line 578, in distance_transform_bf raise RuntimeError, 'indices must of Int32 type' RuntimeError: indices must of Int32 type ====================================================================== ERROR: chamfer type distance transform 3 ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 3653, in test_distance_transform_cdt03 return_distances = False, return_indices = True, indices = ft) File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/morphology.py", line 677, in distance_transform_cdt raise RuntimeError, 
'indices must of Int32 type' RuntimeError: indices must of Int32 type ====================================================================== ERROR: euclidean distance transform 2 ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 3729, in test_distance_transform_edt02 return_distances = False,return_indices = True, indices = ft) File "/home/matthew/lib/python2.4/site-packages/scipy/ndimage/morphology.py", line 742, in distance_transform_edt raise RuntimeError, 'indices must be of Int32 type' RuntimeError: indices must be of Int32 type ---------------------------------------------------------------------- Ran 397 tests in 0.798s FAILED (errors=3) From oliphant at ee.byu.edu Sat Mar 25 21:23:14 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 25 Mar 2006 19:23:14 -0700 Subject: [SciPy-user] Segfault on ndimage.test() - the long odyssey that is 64 bit In-Reply-To: <1e2af89e0603250703ja6c509s7224abbb4bcb7d5d@mail.gmail.com> References: <1e2af89e0603250703ja6c509s7224abbb4bcb7d5d@mail.gmail.com> Message-ID: <4425FB12.6030501@ee.byu.edu> Matthew Brett wrote: >Hi, > >Sorry to keep bombarding the list with 64-bit troubles. > >In brief - scipy.ndimage.test() segfaults on my x86-64 P4 system, but >passes on other x86-32 systems. > > ndimage is not 64-bit ready at this point. I would disable it in the setup.py script until it is (or remove it from the scipy directory). -Travis From schofield at ftw.at Sun Mar 26 12:47:12 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 26 Mar 2006 19:47:12 +0200 Subject: [SciPy-user] multiplying sparse matrices in scipy-0.4.8? In-Reply-To: <87odzx3x6r.fsf@hmm.lanl.gov> References: <87odzx3x6r.fsf@hmm.lanl.gov> Message-ID: <98790998-659E-42C3-A2AD-005AB68CCA12@ftw.at> On 23/03/2006, at 6:09 PM, afraser wrote: > I'm using numpy-0.9.6 and scipy-0.4.8. 
When I multiply sparse > matrices, I often get the error: "ValueError: nzmax must not be less > than nnz". The error happens when the matrices are only sort of > sparse. Any advice appreciated. > > Here is a sample program and Traceback: > ============================================================ > import numpy, scipy, scipy.sparse, random > > L = 30 > frac = .3 > > random.seed(0) # make runs repeatable > A = scipy.sparse.csc_matrix((L,2)) > #A = numpy.asmatrix(numpy.zeros((L,2),numpy.Float)) > > for i in xrange(L): > for j in xrange(2): > r = random.random() > if r < frac: > A[i,j] = r/frac > B = A*A.T > print B > ============================================================ > Traceback (most recent call last): > File "", line 15, in ? > File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", > line 658, in __mul__ > return self.dot(other) > File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", > line 306, in dot > result = self.matmat(other) > File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", > line 824, in matmat > return csc_matrix((c, rowc, ptrc), dims=(M, N)) > File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", > line 558, in __init__ > self._check() > File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", > line 574, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > ============================================================ Yes, this is a bug. Travis, could you please take a look at this? The FORTRAN functions dcscmucsc and dcscmucsr both seem to be returning incorrect values in indptr. In fact, could you please explain the Python code in matmat () that calls these functions? I'd like to understand what the "while 1" loop is for, and in particular why we have c, rowc, ptrc, irow, kcol, ierr = func(*args) when c, rowc, etc are part of *args anyway. 
-- Ed From schofield at ftw.at Sun Mar 26 12:54:30 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 26 Mar 2006 19:54:30 +0200 Subject: [SciPy-user] maxentropy In-Reply-To: <4422DB17.40402@ftw.at> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> <4422DB17.40402@ftw.at> Message-ID: <8866A7C4-460D-4EE0-9102-54B71810EEA9@ftw.at> On 23/03/2006, at 6:29 PM, Ed Schofield wrote: > > On 21/03/2006, at 9:53 PM, Matthew Cooper wrote: > >> >> Hi Ed, >> >> I am playing around with the code on some more small examples and >> everything has been fine. The thing that will hold me back from >> testing on larger datasets is the F matrix which thus far requires >> the >> space of (context,label) pairs to be enumerable. I know that >> internally you are using a sparse representation for this matrix. >> Can >> I initialize the model with a sparse matrix also? This also requires >> changes with the indices_context parameter in the examples. > > Hi Matt, > Yes, good point. I'd conveniently forgotten about this little problem > ;) It turns out scipy's sparse matrices need extending to support > this. And done. I've committed the new sparse matrix features to the ejs branch and fixed conditional maxent models to work with them. The examples seem to work fine too. Please let me know how you go with it! -- Ed From arserlom at gmail.com Sun Mar 26 13:22:29 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Sun, 26 Mar 2006 20:22:29 +0200 Subject: [SciPy-user] Typo in SciPy code Message-ID: I found a typo in the help provided by: help(scipy.optimize) where should I report that? Armando. From schofield at ftw.at Sun Mar 26 16:03:55 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 26 Mar 2006 23:03:55 +0200 Subject: [SciPy-user] Web site down? 
Message-ID: The scipy.org site seems to be down again... -- Ed From tim.leslie at gmail.com Sun Mar 26 18:25:02 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 27 Mar 2006 10:25:02 +1100 Subject: [SciPy-user] Typo in SciPy code In-Reply-To: References: Message-ID: On 3/27/06, Armando Serrano Lombillo wrote: > > I found a typo in the help provided by: > help(scipy.optimize) > where should I report that? For trivial things like this reporting to this list should be fine. Tim > Armando. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Mon Mar 27 00:51:55 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 27 Mar 2006 07:51:55 +0200 Subject: [SciPy-user] Typo in SciPy code In-Reply-To: References: Message-ID: On 26/03/2006, at 8:22 PM, Armando Serrano Lombillo wrote: > I found a typo in the help provided by: > help(scipy.optimize) > where should I report that? I've had a look and fixed a couple of typos in SVN. Can you see any more? It's now as in http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/info.py -- Ed From lars.bittrich at googlemail.com Mon Mar 27 05:14:25 2006 From: lars.bittrich at googlemail.com (Lars Bittrich) Date: Mon, 27 Mar 2006 12:14:25 +0200 Subject: [SciPy-user] Problems with porting code using weave In-Reply-To: <442446F5.1040507@ee.byu.edu> References: <200603241546.32934.lars.bittrich@googlemail.com> <442446F5.1040507@ee.byu.edu> Message-ID: <200603271214.25123.lars.bittrich@googlemail.com> Hi again, On Friday 24 March 2006 20:22, Travis Oliphant wrote: [...] > Please let us know what trouble you are having. I have finally isolated a very strange behavior. Sorry about the delay but I was not able to answer during the weekend. 
My code example looks as follows: ------------------------------------------------------------------------------ from scipy import * from scipy.weave import inline,converters N = 10 arr = zeros(N, Float) factor = sqrt(1./pi) #factor = pi code= \ """ double tmp = 1.0; for (int i=0; i < N; i++) { arr(i) = tmp*factor; } """ inline(code, ['arr', 'N', 'factor'], type_converters = converters.blitz) print arr ------------------------------------------------------------------------------ When I run that file I get: ------------------------------------------------------------------------------ /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: ambiguous overload for 'operator*' in 'tmp * factor' /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: candidates are: operator*(double, double) /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: operator*(double, float) /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: operator*(double, int) /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: ambiguous overload for 'operator*' in 'tmp * factor' /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: candidates are: operator*(double, double) /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: operator*(double, float) /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp:715: error: operator*(double, int) Traceback (most recent call last): File "test.py", line 18, in ? 
inline(code, ['arr', 'N', 'factor'], type_converters = converters.blitz) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/inline_tools.py", line 334, in inline auto_downcast = auto_downcast, File "/opt/cp/lib/python2.3/site-packages/scipy/weave/inline_tools.py", line 442, in compile_function verbose=verbose, **kw) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/ext_tools.py", line 353, in compile verbose = verbose, **kw) File "/opt/cp/lib/python2.3/site-packages/scipy/weave/build_tools.py", line 274, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/opt/cp/lib/python2.3/site-packages/numpy/distutils/core.py", line 85, in setup return old_setup(**new_attr) File "/usr/lib/python2.3/distutils/core.py", line 166, in setup raise SystemExit, "error: " + str(msg) scipy.weave.build_tools.CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wstrict-prototypes -fPIC -I/opt/cp/lib/python2.3/site-packages/scipy/weave -I/opt/cp/lib/python2.3/site-packages/scipy/weave/scxx -I/opt/cp/lib/python2.3/site-packages/scipy/weave/blitz -I/opt/cp/lib/python2.3/site-packages/numpy/core/include -I/usr/include/python2.3 -c /home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.cpp -o /tmp/bittrich/python23_intermediate/compiler_cae5f1e251cd037a19f0234c558d6f0e/home/bittrich/.python23_compiled/sc_42e1646ad2085e8e2fc33f53a826fe5a0.o" failed with exit status 1 ------------------------------------------------------------------------------ If I uncomment 'factor = pi', it simply works fine. Even if I remove that line once again and leave the C part unchanged so that there is no recompile needed, the program just works. For now I have found a workaround: I just add a line like double tmpfactor = factor; and do not use 'factor' again in the code. But that behavior remains odd in my eyes. Maybe it has something to do with my compiler.
Maybe I can try the newest svn-version of scipy during the next few days. I will report anything similar I find. Thank you for your help. Best regards, Lars From travis at enthought.com Mon Mar 27 15:45:59 2006 From: travis at enthought.com (Travis N. Vaught) Date: Mon, 27 Mar 2006 14:45:59 -0600 Subject: [SciPy-user] Web site down? In-Reply-To: References: Message-ID: <44284F07.1050603@enthought.com> Yes, the scipy site was down. A strange electrical fire (under the street) took out the power for most of the day yesterday across several blocks here downtown. http://www.news8austin.com/content/your_news/default.asp?ArID=158154 We think everything is back up now, but let us know if there are any problems. Travis Ed Schofield wrote: > The scipy.org site seems to be down again... > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From arserlom at gmail.com Mon Mar 27 16:35:19 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Mon, 27 Mar 2006 23:35:19 +0200 Subject: [SciPy-user] Typo in SciPy code In-Reply-To: References: Message-ID: In http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/info.py where it says: min_cobyla -- Contrained Optimization BY Linear Approximation it should say [...] -- Constrained [...] This also appears when you use help(scipy.optimize.fmin_cobyla). BTW I've been testing some of the optimization routines in scipy (Nelder-Mead, Powell and COBYLA) and COBYLA was 10 times faster than the others and it showed much better precision and stability. Is this due to the implementation (fortran instead of python) or is it just that the algorithm works better for my problem?
2006/3/27, Ed Schofield : > > On 26/03/2006, at 8:22 PM, Armando Serrano Lombillo wrote: > > > I found a typo in the help provided by: > > help(scipy.optimize) > > where should I report that? > > I've had a look and fixed a couple of typos in SVN. Can you see any > more? It's now as in > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/info.py > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant at ee.byu.edu Mon Mar 27 16:48:13 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 27 Mar 2006 14:48:13 -0700 Subject: [SciPy-user] Typo in SciPy code In-Reply-To: References: Message-ID: <44285D9D.50503@ee.byu.edu> Armando Serrano Lombillo wrote: >In http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/info.py >where it says: > >min_cobyla -- Contrained Optimization BY Linear Approximation > >it should say [...] -- Constrained [...] > >This also appears when you use help(scipy.optimize.fmin_cobyla). > > >BTW I've been testing some of the optimization routines in scipy >(Nelder-Mead, Powell and COBYLA) and COBYLA was 10 times faster than >the others and it showed much better precision and stability. Is this >due to the implementation (fortran instead of python) or is it just >that the algorithm works better for my problem? > > > Try the BFGS method (fmin_bfgs) instead of Powell. The Powell implementation in SciPy needs some work to be truly representative. The BFGS method should be fairly decent (it uses a good line search). The Nelder-Mead algorithm is usually going to use more function evaluations. It's always hard to predict which optimization algorithm is going to work "best" for a particular problem (that's why there are so many of them). I suspect it's the number of function evaluations that's most important rather than whether or not the iteration is written in FORTRAN or Python.
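One way to test that hypothesis on your own problem is to count function evaluations directly, by wrapping the objective before handing it to each optimizer. A minimal sketch (the Rosenbrock objective here is only a stand-in for your own function, not anything from this thread):

```python
class CountingFunction:
    """Wrap an objective and count how many times the optimizer calls it."""

    def __init__(self, f):
        self.f = f
        self.ncalls = 0

    def __call__(self, x):
        self.ncalls += 1
        return self.f(x)


def rosenbrock(x):
    # Standard 2-d test function with its minimum at (1, 1).
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2


# Pass a fresh CountingFunction to each of fmin, fmin_bfgs, fmin_cobyla
# in place of the raw objective; comparing .ncalls afterwards shows how
# much of the speed difference is simply fewer evaluations.
counted = CountingFunction(rosenbrock)
counted([0.0, 0.0])
print(counted.ncalls)  # 1
```

If COBYLA's advantage largely disappears when normalized by `.ncalls`, the algorithm, not the Fortran, is doing the work.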
-Travis From lroubeyrie at limair.asso.fr Tue Mar 28 05:33:19 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Tue, 28 Mar 2006 12:33:19 +0200 Subject: [SciPy-user] amax, amin, mean Message-ID: <200603281233.19855.lroubeyrie@limair.asso.fr> Hello all, I'm trying to use Scipy for computing large time series, but I start with a strange thing (the latest Scipy is built on a recent linux box): lionel[~]2>from scipy import * lionel[~]3>import MA lionel[~]4>test=[1,2,3,4,nan,5,6] lionel[~]5>test=MA.masked_object(test, nan) lionel[~]6>amax(test) Sortie[6]:6.0 lionel[~]7>amin(test) Sortie[7]:5.0 lionel[~]8>mean(test) Sortie[8]:nan Hmm, amax is good, but mean and amin are not. In the docs, SciPy uses numpy.amin as amin, but: lionel[~]9>import numpy lionel[~]10>numpy.amin(test) --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) ... ValueError: object __array__ method not producing an array I'm new to the SciPy/numpy world; am I doing something wrong? Thanks for your help. -L. Roubeyrie From david.huard at gmail.com Tue Mar 28 09:41:59 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 28 Mar 2006 09:41:59 -0500 Subject: [SciPy-user] Installation problem Message-ID: <91cf711d0603280641q1f07ab8fr1c2f9137e5480ffd@mail.gmail.com> I had a fine scipy installation and then had the bad idea to change something... I compiled python from source and then tried to install back numpy, scipy and matplotlib. It didn't work out very well for matplotlib, so I tried to get back my previous installation, and it doesn't work either anymore... More precisely, when I import numpy, there is a bunch of error messages to the effect that PyUnicodeUCS2_FromUnicode in multiarray.so is an undefined symbol. Is this a problem related to gcc 4.0 ? If it is, how can I force numpy to compile with 3.4 ? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed...
URL: From schofield at ftw.at Tue Mar 28 12:42:54 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 28 Mar 2006 19:42:54 +0200 Subject: [SciPy-user] Installation problem In-Reply-To: <91cf711d0603280641q1f07ab8fr1c2f9137e5480ffd@mail.gmail.com> References: <91cf711d0603280641q1f07ab8fr1c2f9137e5480ffd@mail.gmail.com> Message-ID: <4429759E.9010305@ftw.at> David Huard wrote: > I had a fine scipy installation and then had the bad idea to change > something... > > I compiled python from source and then tried to install back numpy, > scipy and matplotlib. It didn't work out very well for matplotlib, so > I tried to get back my previous installation, and it doesn't work > either anymore... > > More precisely, when I import numpy, there is a bunch of error > messages to the effect that > PyUnicodeUCS2_FromUnicode > in mutliarray.so is an undefined symbol. > > Is this a problem related to gcc 4.0 ? No, this is a problem with whether Python is built using 2-byte or 4-byte unicode representation internally. My guess is that you've built Python from source using a different option from what your distribution provided. If so, remove the distribution-provided Python before building NumPy and SciPy, or at least move it away so distutils can't find it. See e.g. http://lists.fourthought.com/pipermail/4suite/2005-March/007036.html -- Ed From python at koepsell.de Tue Mar 28 15:52:48 2006 From: python at koepsell.de (Kilian Koepsell) Date: Tue, 28 Mar 2006 12:52:48 -0800 Subject: [SciPy-user] bug in scipy array? Message-ID: hi, i came across the following strange behavior while multiplying a scipy array: a[:,2:] gives a column of zeros if a is a 2x2 array (that is correct) but if i multiply this column by zero, it affects the element a[2,0] which is probably a bug.
>>> from scipy import * >>> a = array([[1,2],[3,4]]) >>> a array([[1, 2], [3, 4]]) >>> a[:,2:] zeros((2, 0), 'l') >>> a[:,2:] *= 0 >>> a array([[1, 2], [0, 4]]) i am using scipy 0.3.2 with python 2.4.2 and Mac OSX 10.4.5. thanks for any help/comment. kilian From robert.kern at gmail.com Tue Mar 28 16:01:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Mar 2006 15:01:16 -0600 Subject: [SciPy-user] bug in scipy array? In-Reply-To: References: Message-ID: <4429A41C.70003@gmail.com> Kilian Koepsell wrote: > hi, > > i came across the following strange behavior while multiplying a > scipy array: > a[:,2:] gives a column of zeros if a is an 2x2 array (that is correct) > but if i multiply this column by zero, it affects the element a[2,0] > which is probably a bug. > > >>> from scipy import * > >>> a = array([[1,2],[3,4]]) > >>> a > array([[1, 2], > [3, 4]]) > >>> a[:,2:] > zeros((2, 0), 'l') > >>> a[:,2:] *= 0 > >>> a > array([[1, 2], > [0, 4]]) > > i am using scipy 0.3.2 with python 2.4.2 and Mac OSX 10.4.5. This is a bug in Numeric 24.x. This bug does not appear in current versions of numpy, Numeric's replacement. In [37]: import numpy In [38]: import Numeric In [39]: a = numpy.array([[1, 2], [3, 4]]) In [40]: a[:,2:] *= 0 In [41]: a Out[41]: array([[1, 2], [3, 4]]) In [42]: a = Numeric.array([[1, 2], [3, 4]]) In [43]: a[:,2:] *= 0 In [44]: a Out[44]: array([[0, 2], [0, 4]]) -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From elcorto at gmx.net Tue Mar 28 17:57:39 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 29 Mar 2006 00:57:39 +0200 Subject: [SciPy-user] sparse test errors Message-ID: <4429BF63.2060901@gmx.net> Hi The latest svn build fails when testing sparse: In [55]: scipy.__version__ Out[55]: '0.4.9.1780' In [56]: numpy.__version__ Out[56]: '0.9.7.2289' In [57]: scipy.test(level=1) [...] ====================================================================== ERROR: check_matmat (scipy.sparse.tests.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 170, in check_matmat B = A*A.T File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 664, in __mul__ return self.dot(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 306, in dot result = self.matmat(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 830, in matmat return csc_matrix((c, rowc, ptrc), dims=(M, N)) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 557, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 573, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_matmat (scipy.sparse.tests.test_sparse.test_csr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 170, in check_matmat B = A*A.T File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 1149, in __mul__ return self.dot(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 306, in dot result = self.matmat(other) File 
"/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 1316, in matmat return csc_matrix((c, rowc, ptrc), dims=(M, N)) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 557, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 573, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_matmat (scipy.sparse.tests.test_sparse.test_dok) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 170, in check_matmat B = A*A.T File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 1803, in __mul__ return self.dot(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 306, in dot result = self.matmat(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 318, in matmat return csc.matmat(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 830, in matmat return csc_matrix((c, rowc, ptrc), dims=(M, N)) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 557, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 573, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_matmat (scipy.sparse.tests.test_sparse.test_lil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 170, in check_matmat B = A*A.T File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 199, in __mul__ return csc * other File 
"/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 664, in __mul__ return self.dot(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 306, in dot result = self.matmat(other) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 830, in matmat return csc_matrix((c, rowc, ptrc), dims=(M, N)) File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 557, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/sparse.py", line 573, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ---------------------------------------------------------------------- Ran 1508 tests in 6.736s FAILED (errors=4) [...] cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From schofield at ftw.at Tue Mar 28 18:48:51 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 29 Mar 2006 01:48:51 +0200 Subject: [SciPy-user] multiplying sparse matrices in scipy-0.4.8? In-Reply-To: <4429C30A.9040206@ieee.org> References: <87odzx3x6r.fsf@hmm.lanl.gov> <98790998-659E-42C3-A2AD-005AB68CCA12@ftw.at> <4429C30A.9040206@ieee.org> Message-ID: <33AF8F99-B4BE-4980-A376-C5F17B501469@ftw.at> On 29/03/2006, at 1:13 AM, Travis Oliphant wrote: > Ed Schofield wrote: >> >> >> Yes, this is a bug. >> >> Travis, could you please take a look at this? The FORTRAN >> functions dcscmucsc and dcscmucsr both seem to be returning >> incorrect values in indptr. In fact, could you please explain the >> Python code in matmat() that calls these functions? I'd like to >> understand what the "while 1" loop is for, and in particular why >> we have >> c, rowc, ptrc, irow, kcol, ierr = func(*args) >> when c, rowc, etc are part of *args anyway. > > The arguments are input and output arguments because that is my > understanding of f2py's preference if something is both input and > ouput rather than using an in-place argument. 
To minimize side-effects? Okay ;) > The while True: loop is so that resizing can occur if the guess on > how many non-zero elements to reserve for the output matrix is wrong. > There were some problems with the re-entry that I've now fixed. > The tests you added to SciPy now pass. Thanks, Travis! -- Ed From zpincus at stanford.edu Tue Mar 28 19:33:21 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 28 Mar 2006 16:33:21 -0800 Subject: [SciPy-user] scipy.stats.gaussian_kde broken? Message-ID: <091F15C1-F5D5-42A1-B87D-5FE6485FF9D2@stanford.edu> Hi folks, I can't seem to get scipy.stats.gaussian_kde to work properly. Here is an example. [In:] scipy.__version__ '0.4.9.1754' [In: ] numpy.__version__ '0.9.7.2262' [In:] k = scipy.stats.gaussian_kde([-2, -1, -0.5, 0, 0, 0, 0, 0.5, 1, 2]) [In:] k([-100, -2, 0, 2, 100]) array([ 0.52684053, 0.58537837, 0.52762998, 0.52684053, 0.52684053]) Clearly the above result is wrong. The 'dataset' points cluster around zero, and have no support anywhere near 100 or -100. Yet the estimated density is essentially flat across that whole range. Even more strange is the fact that when the size of the set of points to estimate the density at is larger than the size of the data, the first estimated value is different than the others. [In:] k([-2]) array([ 0.58537837]) [In:] k([-2] * 7) array([ 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837]) [In:] k([-2] * 10) array([ 0.08691171, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837, 0.58537837]) In fact, I suspect that this first estimated value is the correct value, and the rest are garbage. Any thoughts? 
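For comparison, the densities such an estimate should produce can be computed directly with a few lines of NumPy. This is an illustrative sketch with a fixed, hand-picked bandwidth, not scipy.stats.gaussian_kde's automatic (Scott's rule) choice, so the exact numbers differ; but the shape is what a working KDE must show, namely mass concentrated near 0 and essentially none at +/-100:

```python
import numpy as np

def gaussian_kde_1d(data, points, bandwidth=0.5):
    # Average a Gaussian kernel of width `bandwidth` centred on each datum.
    data = np.asarray(data, dtype=float)
    points = np.asarray(points, dtype=float)
    # Pairwise (point - datum) / h, one row per evaluation point.
    u = (points[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

dataset = [-2, -1, -0.5, 0, 0, 0, 0, 0.5, 1, 2]
density = gaussian_kde_1d(dataset, [-100, -2, 0, 2, 100])
# density peaks at 0 and vanishes at +/-100, unlike the flat output above.
```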
Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine From oliphant.travis at ieee.org Tue Mar 28 20:00:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 28 Mar 2006 18:00:45 -0700 Subject: [SciPy-user] NetCDF module that links against NumPy Message-ID: <4429DC3D.1010000@ieee.org> I've adapted Konrad Hinsen's Scientific.IO.NetCDF module to work with NumPy. For now it's available at http://sourceforge.net/project/showfiles.php?group_id=1315&package_id=185504&release_id=405511 I would like to put this module in SciPy but don't understand whether or not the License is compatible. It sounds GPL-ish. Would it be O.K. to place in the SciPy/sandbox? Regards, -Travis From zpincus at stanford.edu Tue Mar 28 20:03:54 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 28 Mar 2006 17:03:54 -0800 Subject: [SciPy-user] scipy.stats.gaussian_kde broken? In-Reply-To: <091F15C1-F5D5-42A1-B87D-5FE6485FF9D2@stanford.edu> References: <091F15C1-F5D5-42A1-B87D-5FE6485FF9D2@stanford.edu> Message-ID: <7A0270A7-7D0D-41BA-9E7C-73295A9FE145@stanford.edu> Never mind. This was due to a bug in numpy.dot() which is fixed in the newest version. Zach On Mar 28, 2006, at 4:33 PM, Zachary Pincus wrote: > Hi folks, > > I can't seem to get scipy.stats.gaussian_kde to work properly. Here > is an example. > > [In:] scipy.__version__ > '0.4.9.1754' > [In: ] numpy.__version__ > '0.9.7.2262' > [In:] k = scipy.stats.gaussian_kde([-2, -1, -0.5, 0, 0, 0, 0, 0.5, 1, > 2]) > [In:] k([-100, -2, 0, 2, 100]) > array([ 0.52684053, 0.58537837, 0.52762998, 0.52684053, > 0.52684053]) > > Clearly the above result is wrong. The 'dataset' points cluster > around zero, and have no support anywhere near 100 or -100. Yet the > estimated density is essentially flat across that whole range. 
> > Even more strange is the fact that when the size of the set of points > to estimate the density at is larger than the size of the data, the > first estimated value is different than the others. > [In:] k([-2]) > array([ 0.58537837]) > [In:] k([-2] * 7) > array([ 0.58537837, 0.58537837, 0.58537837, 0.58537837, > 0.58537837, > 0.58537837, 0.58537837]) > [In:] k([-2] * 10) > array([ 0.08691171, 0.58537837, 0.58537837, 0.58537837, > 0.58537837, > 0.58537837, 0.58537837, 0.58537837, 0.58537837, > 0.58537837]) > > In fact, I suspect that this first estimated value is the correct > value, and the rest are garbage. > > Any thoughts? > > Zach Pincus > Program in Biomedical Informatics and Department of Biochemistry > Stanford University School of Medicine > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Mar 28 20:25:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Mar 2006 19:25:59 -0600 Subject: [SciPy-user] NetCDF module that links against NumPy In-Reply-To: <4429DC3D.1010000@ieee.org> References: <4429DC3D.1010000@ieee.org> Message-ID: <4429E227.4030206@gmail.com> Travis Oliphant wrote: > I've adapted Konrad Hinsen's Scientific.IO.NetCDF module to work with > NumPy. > > For now it's available at > > http://sourceforge.net/project/showfiles.php?group_id=1315&package_id=185504&release_id=405511 > > I would like to put this module in SciPy but don't understand whether or > not the License is compatible. It sounds GPL-ish. > > Would it be O.K. to place in the SciPy/sandbox? The CeCILL license seems to be fairly similar to the LGPL in purpose and effect. It is significantly more restrictive than the BSD license of scipy. I think a good rule of thumb is, "if you don't understand whether or not the license is compatible, it isn't." I am -1 on putting this in the scipy sandbox. 
I am +1 on using this to prime the pump for a SciPy Kits package. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From malleite at gmail.com Tue Mar 28 20:49:29 2006 From: malleite at gmail.com (Marco Leite) Date: Tue, 28 Mar 2006 22:49:29 -0300 Subject: [SciPy-user] Scipy test fails on SUSE 10.0 (gcc 4.0.2) on 64 bit machines Message-ID: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> Hello, I've been trying to get scipy to work on 64bit systems, running SUSE 10.0 with gcc 4.0.2 and g77 3.3.5. The compilation goes ok, but the tests fail (see below). The same is happening for two different machines (Xeon and Athlon64).
I know the docs say gcc 3.x is recommended, but the same Suse distribution and software for 32-bit architecture works without any problems (it seems to work well even with the AMD optimized LAPACK library)... Does anybody have any suggestions? Thanks, Marco Leite ================================================ Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Overwriting fft= from scipy.fftpack.basic(was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic(was from numpy.dft.fftpack) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas ..... (removed messages) ... ....zswap:n=4 ..zswap:n=3 ..FFFF*** glibc detected *** free(): invalid next size (fast): 0x0000000000965700 *** Aborted -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Mar 28 20:53:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Mar 2006 19:53:21 -0600 Subject: [SciPy-user] Scipy test fails on SUSE 10.0 (gcc 4.0.2) on 64 bit machines In-Reply-To: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> References: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> Message-ID: <4429E891.30606@gmail.com> Marco Leite wrote: > Hello, > > I've been trying to get scipy to work on 64bit systems, running SUSE 10.0 > with gcc 4.0.2 and g77 3.3.5. The compilation goes ok, but the tests > fails (see > bellow). The same is happening for two diferent machines (Xeon and > Athlon64). > > I know the doc's say gcc 3.x is recommended, but the same Suse > distribution and software for 32 bits architecture works without any > problems (it seems to > work well even with the AMD optimized LAPACK library)... > > Does anybody have any sugestions? Please try increasing the verbosity and the debug level of the test suite. E.g.
scipy.test(10, 10) Then the test runner will print out the name of the test before running it. That way we will know which tests are failing or crashing. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From malleite at gmail.com Tue Mar 28 21:11:10 2006 From: malleite at gmail.com (Marco Leite) Date: Tue, 28 Mar 2006 23:11:10 -0300 Subject: [SciPy-user] Scipy test fails on SUSE 10.0 (gcc 4.0.2) on 64 bit machines In-Reply-To: <4429E891.30606@gmail.com> References: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> <4429E891.30606@gmail.com> Message-ID: <38c32cda0603281811j2e9e5d80t457abc8c7797b5be@mail.gmail.com> Hi, Below is the output with the increased verbosity level. All previous tests went OK. The full log is large, but I can send it if it helps. Thank you Marco. ============================================ Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test(10,10) ...(removed - all ok)... check_y_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zscal) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zscal)zscal:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zscal) ... ok check_simple (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_x_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=4 ... ok check_x_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok check_y_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=3 ...
ok check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok affine_transform 1 ... FAIL affine transform 2 ... FAIL affine transform 3 ... FAIL affine transform 4 ... FAIL affine transform 5*** glibc detected *** free(): invalid pointer: 0x0000000000c87740 *** Aborted On 3/28/06, Robert Kern wrote: > > Marco Leite wrote: > > Hello, > > > > I've been trying to get scipy to work on 64bit systems, running SUSE > 10.0 > > with gcc 4.0.2 and g77 3.3.5. The compilation goes ok, but the tests > > fails (see > > bellow). The same is happening for two diferent machines (Xeon and > > Athlon64). > > > > I know the doc's say gcc 3.x is recommended, but the same Suse > > distribution and software for 32 bits architecture works without any > > problems (it seems to > > work well even with the AMD optimized LAPACK library)... > > > > Does anybody have any sugestions? > > Please try increasing the verbosity and the debug level of the test suite. > E.g. > > scipy.test(10, 10) > > Then the test runner will print out the name of the test before running > it. That > we will know which tests are failing or crashing. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Tue Mar 28 21:23:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 28 Mar 2006 19:23:17 -0700 Subject: [SciPy-user] Scipy test fails on SUSE 10.0 (gcc 4.0.2) on 64 bit machines In-Reply-To: <38c32cda0603281811j2e9e5d80t457abc8c7797b5be@mail.gmail.com> References: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> <4429E891.30606@gmail.com> <38c32cda0603281811j2e9e5d80t457abc8c7797b5be@mail.gmail.com> Message-ID: <4429EF95.8060807@ieee.org> Marco Leite wrote: > Hi, > > Bellow is the output with the increased verbosity level. > All previous tests went ok, The full log is large but I can send it > if it helps,.. > > Thank you > > Marco. > ============================================ > Python 2.4.1 (#1, Sep 12 2005, 23:33:18) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.test(10,10) > > ...(removed - all ok)... > > check_y_stride_assert (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_y_stride_transpose (scipy.linalg.tests.test_fblas.test_zgemv) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zscal) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zscal)zscal:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zscal) ... ok > check_simple (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_x_and_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_x_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=4 > ... ok > check_x_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > check_y_bad_size (scipy.linalg.tests.test_fblas.test_zswap)zswap:n=3 > ... ok > check_y_stride (scipy.linalg.tests.test_fblas.test_zswap) ... ok > affine_transform 1 ... FAIL > affine transform 2 ... FAIL > affine transform 3 ... FAIL > affine transform 4 ... 
FAIL > affine transform 5*** glibc detected *** free(): invalid pointer: > 0x0000000000c87740 *** > Aborted > This is the ndimage package that is failing. Disable it for 64-bit systems because it is not 64-bit clean. -Travis From lroubeyrie at limair.asso.fr Wed Mar 29 01:46:09 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Wed, 29 Mar 2006 08:46:09 +0200 Subject: [SciPy-user] amax, amin, mean In-Reply-To: <200603281233.19855.lroubeyrie@limair.asso.fr> References: <200603281233.19855.lroubeyrie@limair.asso.fr> Message-ID: <200603290846.09968.lroubeyrie@limair.asso.fr> Sorry but I don't understand how to do, replacing nan by an number gives others errors: ###################################################### lionel[~]17>tv=[1,2,3,4,5,1.e-20,6] lionel[~]18>tv=MA.masked_values(tv, 1.e-20) lionel[~]19>amax(tv) --------------------------------------------------------------------------- MA.MA.MAError Traceback (most recent call last) /home/lionel/ /usr/lib/python2.4/site-packages/scipy_base/function_base.py in amax(m, axis) 179 axis = 0 180 else: --> 181 m = _asarray1d(m) 182 return maximum.reduce(m,axis) 183 /usr/lib/python2.4/site-packages/scipy_base/function_base.py in _asarray1d(arr) 152 """Ensure 1d array for one array. 153 """ --> 154 m = asarray(arr) 155 if len(m.shape)==0: 156 m = reshape(m,(1,)) /usr/lib/python2.4/site-packages/scipy_base/type_check.py in asarray(a, typecode, savespace) 23 r.savespace(savespace) 24 return r ---> 25 return multiarray.array(a,typecode,copy=0,savespace=savespace or 0) 26 27 ScalarType = [types.IntType, types.LongType, types.FloatType, types.ComplexType] /usr/lib/python2.4/site-packages/Numeric/MA/MA.py in __array__(self, t) 630 if self._mask is not None: 631 if Numeric.sometrue(Numeric.ravel(self._mask)): --> 632 raise MAError, \ 633 """Cannot automatically convert masked array to Numeric because data 634 is masked in one or more locations. 
MAError: Cannot automatically convert masked array to Numeric because data is masked in one or more locations. ###################################################### What do I have to do for having real computation on masked arrays? Thanks -L. Roubeyrie From schofield at ftw.at Wed Mar 29 02:19:50 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 29 Mar 2006 09:19:50 +0200 Subject: [SciPy-user] Scipy test fails on SUSE 10.0 (gcc 4.0.2) on 64 bit machines In-Reply-To: <4429EF95.8060807@ieee.org> References: <38c32cda0603281749w6ca02197xcca57eca87510149@mail.gmail.com> <4429E891.30606@gmail.com> <38c32cda0603281811j2e9e5d80t457abc8c7797b5be@mail.gmail.com> <4429EF95.8060807@ieee.org> Message-ID: <31587342-79F8-42AC-AD8C-F96F288E6324@ftw.at> On 29/03/2006, at 4:23 AM, Travis Oliphant wrote: > Marco Leite wrote: >> Hi, >> >> Bellow is the output with the increased verbosity level. >> All previous tests went ok, The full log is large but I can send it >> if it helps,.. >> > > This is the ndimage package that is failing. Disable it for 64-bit > systems because it is not 64-bit clean. > I've modified ndimage/__init__.py in SVN so that the package refuses to import on 64-bit systems. -- Ed From a.u.r.e.l.i.a.n at gmx.net Wed Mar 29 04:03:50 2006 From: a.u.r.e.l.i.a.n at gmx.net (=?ISO-8859-1?Q?=22Johannes_L=F6hnert=22?=) Date: Wed, 29 Mar 2006 11:03:50 +0200 (MEST) Subject: [SciPy-user] amax, amin, mean References: <200603290846.09968.lroubeyrie@limair.asso.fr> Message-ID: <18886.1143623030@www015.gmx.net> Hi, > [...] > /usr/lib/python2.4/site-packages/Numeric/MA/MA.py in __array__(self, t) it looks as if you installed the 'old' Numeric module. The current scipy (0.4.8 iirc) needs numpy, which one could call Numeric's successor. You won't need Numeric at all. Johannes -- Bis zu 70% Ihrer Onlinekosten sparen: GMX SmartSurfer! 
Kostenlos downloaden: http://www.gmx.net/de/go/smartsurfer From lroubeyrie at limair.asso.fr Wed Mar 29 05:15:58 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Wed, 29 Mar 2006 12:15:58 +0200 Subject: [SciPy-user] amax, amin, mean In-Reply-To: <18886.1143623030@www015.gmx.net> References: <200603290846.09968.lroubeyrie@limair.asso.fr> <18886.1143623030@www015.gmx.net> Message-ID: <200603291215.58830.lroubeyrie@limair.asso.fr> Hi, Indeed, I had an old Numeric package installed, and Debian provides scipy built against Numeric. Now I have: ##################################### lionel[données]21>from scipy import * lionel[données]22>from numpy import ma as MA lionel[données]23>test=MA.masked_object([1,2,3,4,nan,6], nan) lionel[données]24>print amin(test), amax(test), mean(test) 1.0 6.0 nan lionel[données]25>test=MA.masked_values([1,2,3,4,1.e-20,6], 1.e-20) lionel[données]26>print amin(test), amax(test), mean(test) 1.0 6.0 3.2 ##################################### It seems the mean function doesn't like the nan object. I know using NaN is not the best way, but in a text file it's clearest to use it. Thanks for your help. PS: As a newbie to scipy, I'm looking for a function in scipy similar to the "factor" and "tapply" functions used in R. Do they exist? On Wednesday 29 March 2006 at 11:03, Johannes Löhnert wrote: > Hi, > > > [...] > > /usr/lib/python2.4/site-packages/Numeric/MA/MA.py in __array__(self, t) > > it looks as if you installed the 'old' Numeric module. The current scipy > (0.4.8 iirc) needs numpy, which one could call Numeric's successor. You > won't need Numeric at all. 
> > Johannes -- Lionel Roubeyrie - lroubeyrie at limair.asso.fr LIMAIR http://www.limair.asso.fr From david.huard at gmail.com Wed Mar 29 16:43:03 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 29 Mar 2006 16:43:03 -0500 Subject: [SciPy-user] Freezing numpy code Message-ID: <91cf711d0603291343h77f9fd8dnbd0b878d0923e79@mail.gmail.com> I would like to know if anyone has some experience with the freeze module. I tried to freeze a basic numpy script without success and wondered if there was a known problem or if everything should be fine. Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Wed Mar 29 17:03:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 29 Mar 2006 15:03:04 -0700 Subject: [SciPy-user] Freezing numpy code In-Reply-To: <91cf711d0603291343h77f9fd8dnbd0b878d0923e79@mail.gmail.com> References: <91cf711d0603291343h77f9fd8dnbd0b878d0923e79@mail.gmail.com> Message-ID: <442B0418.4090009@ee.byu.edu> David Huard wrote: > I would like to know if anyone has some experience with the freeze > module. I tried to freeze a basic numpy script without success and > wondered if there was a known problem or if everything should be fine. > It should work. Please post the problems you had (preferably to numpy-discussion at lists.sourceforge.net) -Travis From basvandijk at home.nl Thu Mar 30 06:48:07 2006 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 30 Mar 2006 13:48:07 +0200 Subject: [SciPy-user] Peak Fitting Message-ID: <5029069.1143719287067.JavaMail.root@webmail2.groni1> Hello, I would like to integrate some data. The data has a form somewhat similar to the picture of the peak in [1]. I think a good function describing the data is: y = amp * exp( -exp (x - center/width ) - x - center/width + 1) Before I integrate I would like to fit it to get rid of the noise. 
I don't know much about linear algebra but I've read I can use a non linear least square fitting method to fit the data. Maybe I can also use others? My question is can I do this with Scipy and if so how? Greetings, Bas van Dijk [1] http://mathworld.wolfram.com/NonlinearLeastSquaresFitting.html From ckkart at hoc.net Thu Mar 30 07:43:51 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 30 Mar 2006 21:43:51 +0900 Subject: [SciPy-user] Peak Fitting In-Reply-To: <5029069.1143719287067.JavaMail.root@webmail2.groni1> References: <5029069.1143719287067.JavaMail.root@webmail2.groni1> Message-ID: <442BD287.5080001@hoc.net> basvandijk at home.nl wrote: > Hello, > > I would like to integrate some data. The data has a somewhat similar form as the > picture of the peak in [1]. I think a good function describing the data is: The peak in [1] actually is a gaussian, the formula is given below the graph: amp * exp(-(x-center)**2/(2*width**2)) > I don't know much about linear algebra but I've read I can use a non linear > least square fitting method to fit the data. Maybe I can also use others? My > question is can I do this with Scipy and if so how? with scipy you can do a leastsq fit like this assuming x and y are arrays with your data: from scipy import optimize def residuals(amp,center,width): return y-(amp * exp(-(x-center)**2/(2*width**2))) guess = [2.0, 4.5, 0.2] # the inital guess for the parameters out = optimize.leastsq(residuals, guess) solution = out[0] print solution Regards, Christian From Zhong.Huang at uth.tmc.edu Thu Mar 30 18:03:34 2006 From: Zhong.Huang at uth.tmc.edu (Huang, Zhong ) Date: Thu, 30 Mar 2006 17:03:34 -0600 Subject: [SciPy-user] problem of Installing scipy Message-ID: <7B54DE0D8F88E1418C0B804A737E5D90073C61@UTHEVS4.mail.uthouston.edu> Hi, I fail to build scipy. The the output of runing 'python setup.py install' is as following. 1. I also attached the output of running 2. python -c 'from numpy.f2py.diagnoise import run ; run()' 1. 
*****************************output of running install************************************************************ fft_opt_info: fftw3_info: /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['fftw3'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] djbfft_info: NOT AVAILABLE FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] blas_opt_info: blas_mkl_info: /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['mkl', 'vml', 'guide'] found_libs=[ warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] fo warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_info: /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['f77blas', 'cblas', 'atlas'] found_ warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:1273: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) blas_info: Replacing _lib_names[0]=='blas' with 'fblas' /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['fblas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ Replacing _lib_names[0]=='fblas' with 'fblas' FOUND: libraries = ['fblas'] library_dirs = ['/usr/local/lib'] language = f77 FOUND: libraries = ['fblas'] library_dirs = ['/usr/local/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 lapack_opt_info: lapack_mkl_info: mkl_info: NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['lapack_atlas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: numpy.distutils.system_info.atlas_info NOT AVAILABLE /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:1192: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: Replacing _lib_names[0]=='lapack' with 'flapack' /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['flapack'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ Replacing _lib_names[0]=='flapack' with 'flapack' FOUND: libraries = ['flapack'] library_dirs = ['/usr/local/lib'] language = f77 FOUND: libraries = ['flapack', 'fblas'] library_dirs = ['/usr/local/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 non-existing path in 'Lib/linsolve': 'tests' Traceback (most recent call last): File "setup.py", line 48, in ? 
setup_package() File "setup.py", line 34, in setup_package config.add_subpackage('Lib') File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 728, in add_subpackage caller_level = 2) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 711, in get_subpackage caller_level = caller_level + 1) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 660, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./Lib/setup.py", line 12, in configuration config.add_subpackage('linsolve') File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 728, in add_subpackage caller_level = 2) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 711, in get_subpackage caller_level = caller_level + 1) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 660, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/linsolve/setup.py", line 58, in configuration config.add_subpackage( 'umfpack') File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 728, in add_subpackage caller_level = 2) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 711, in get_subpackage caller_level = caller_level + 1) File "/usr/lib64/python2.4/site-packages/numpy/distutils/misc_util.py", line 646, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "Lib/linsolve/umfpack/setup.py", line 22 libraries = ['cblas'], ^ SyntaxError: invalid syntax 2. 
********************output of running python -c ' from numpy.f2py.diagonose import run; run()' ************************************ ------ os.name='posix' ------ sys.platform='linux2' ------ sys.version: 2.4.2 (#1, Nov 10 2005, 13:31:13) [GCC 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8)] ------ sys.prefix: /usr ------ sys.path=':/usr/lib/portage/pym:/home/zhuang/EMAN/lib:/home/zhuang/EMAN2/lib:/usr/lib/python24.zip:/usr/lib/python2.4:/usr/lib/python2.4/plat-linux2:/usr/lib/python2.4/lib-tk:/usr/lib64/python2.4/lib-dynload:/usr/lib64/python2.4/site-packages:/usr/lib64/python2.4/site-packages/Numeric:/usr/lib64/python2.4/site-packages/PIL:/usr/lib64/python2.4/site-packages/gtk-2.0:/usr/lib/python2.4/site-packages:/usr/lib/python2.4/site-packages/Numeric:/usr/lib/python2.4/site-packages/PIL:/usr/lib/python2.4/site-packages/gtk-2.0' ------ Found Numeric version '23.7' in /usr/lib64/python2.4/site-packages/Numeric/Numeric.pyc Found numarray version '1.3.1' in /usr/lib64/python2.4/site-packages/numarray/__init__.pyc Found new numpy version '0.9.7.2308' in /usr/lib64/python2.4/site-packages/numpy/__init__.pyc Found f2py2e version '2_2308' in /usr/lib64/python2.4/site-packages/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/usr/lib64/python2.4/site-packages/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... 
ok ------ Checking availability of supported Fortran compilers: customize CompaqFCompiler customize NoneFCompiler customize AbsoftFCompiler Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelFCompiler Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize SunFCompiler customize VastFCompiler customize GnuFCompiler customize IbmFCompiler customize Gnu95FCompiler customize IntelVisualFCompiler customize G95FCompiler customize IntelItaniumFCompiler customize PGroupFCompiler customize LaheyFCompiler customize CompaqVisualFCompiler customize MipsFCompiler customize HPUXFCompiler customize IntelItaniumVisualFCompiler customize NAGFCompiler List of available Fortran compilers: --fcompiler=gnu GNU Fortran Compiler (3.4.4) List of unavailable Fortran compilers: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=compaqv DIGITAL|Compaq Visual Fortran Compiler --fcompiler=g95 GNU Fortran 95 Compiler --fcompiler=gnu95 GNU 95 Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=none Fake Fortran compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=sun Sun|Forte Fortran 95 Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler List of unimplemented Fortran compilers: --fcompiler=f Fortran Company/NAG F Compiler For compiler details, run 'config_fc --verbose' setup command. 
------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: getNCPUs has_3dnow has_3dnowext has_mmx has_sse has_sse2 is_64bit is_AMD is_Opteron ------ From m.cooper at computer.org Thu Mar 30 20:55:59 2006 From: m.cooper at computer.org (Matthew Cooper) Date: Thu, 30 Mar 2006 17:55:59 -0800 Subject: [SciPy-user] maxentropy In-Reply-To: <8866A7C4-460D-4EE0-9102-54B71810EEA9@ftw.at> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> <4422DB17.40402@ftw.at> <8866A7C4-460D-4EE0-9102-54B71810EEA9@ftw.at> Message-ID: <43f499ca0603301755v71e48642vf90db1becbfed72b@mail.gmail.com> Ed, I apologize for not looking at this sooner. I went through the new conditionalexample_high_level.py and I still think there is a small change that needs to be made (I think it's small anyway). I think that we want F to be of size F = sparse.lil_matrix((len(f), numcorpus*numsamplespace)) where numcorpus = len(corpus). Basically, the space over which we evaluate each feature to compute expectations under the current model is (X*N), where X is the size of the samplespace (number of classes) and N is the number of labeled training observations. For a feature f_i, the expected value under the model is E_{theta}[f_i] = \frac{1}{N} \sum_{n=1}^N \sum_{x \in samplespace} P_{\theta}(x|w_n) f_i(x,w_n), so that for each function we need a lookup table that covers all pairs of x from the samplespace and w_n from the training set. The first sum is simply the empirical context distribution, which is uniform over the training set. The model distribution is only defined conditioning on contexts from the training set. This equation replaces an exponentially large space of contexts with only the N contexts from the training set. 
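[Editor's note] The (feature row) x (context, label) column layout Matthew describes maps naturally onto scipy's sparse matrices. Below is a minimal, hypothetical sketch of the indexing only; the sizes and the `col` helper are invented for illustration and are not the actual conditionalexample code:

```python
from scipy import sparse

# Hypothetical sizes: X = 3 labels in the sample space,
# N = 4 training contexts, and 5 feature functions.
numsamplespace, numcorpus, numfeatures = 3, 4, 5

# One row per feature f_i, one column per (context, label) pair,
# so model expectations run over all N*X pairs at once.
F = sparse.lil_matrix((numfeatures, numcorpus * numsamplespace))

def col(n, x):
    """Column index of the (context n, label x) pair."""
    return n * numsamplespace + x

# Record that feature 0 fires for label 2 in context 1.
F[0, col(1, 2)] = 1.0
```

With this layout, the model expectation of feature i is a dot product of row i of F against the probabilities P(x|w_n) laid out in the same (context, label) column order, divided by N.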
I don't think this alters your code, as long as the pmf and F matrices are initialized correctly. At test time, we do need to evaluate the feature functions on unseen documents, but this can be handled more easily. I have another question. I haven't installed your version of scipy outright since it was a bit of a pain to get the current stable distribution up on my machine. However, if I need to load a bunch of modules from your version to test the conditional models is there an easy way to do that? At the moment, I couldn't import sparseutils (I can't find the .py file since I probably haven't built it?). Thanks, Matt On 3/26/06, Ed Schofield wrote: > > > On 23/03/2006, at 6:29 PM, Ed Schofield wrote: > > > > > On 21/03/2006, at 9:53 PM, Matthew Cooper wrote: > > > >> > >> Hi Ed, > >> > >> I am playing around with the code on some more small examples and > >> everything has been fine. The thing that will hold me back from > >> testing on larger datasets is the F matrix which thus far requires > >> the > >> space of (context,label) pairs to be enumerable. I know that > >> internally you are using a sparse representation for this matrix. > >> Can > >> I initialize the model with a sparse matrix also? This also requires > >> changes with the indices_context parameter in the examples. > > > > Hi Matt, > > Yes, good point. I'd conveniently forgotten about this little problem > > ;) It turns out scipy's sparse matrices need extending to support > > this. > > And done. I've committed the new sparse matrix features to the ejs > branch and fixed conditional maxent models to work with them. The > examples seem to work fine too. Please let me know how you go with it! > > -- Ed > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgmdevlist at mailcan.com Thu Mar 30 21:23:06 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Thu, 30 Mar 2006 21:23:06 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 42 In-Reply-To: References: Message-ID: <200603302123.06721.pgmdevlist@mailcan.com> Hi, A couple of comments: > ##################################### > lionel[données]21>from scipy import * > lionel[données]22>from numpy import ma as MA > lionel[données]23>test=MA.masked_object([1,2,3,4,nan,6], nan) > lionel[données]24>print amin(test), amax(test), mean(test) It's a very bad idea to use nan as a masking value, as (nan == nan) is always False. The mask construction will fail, and you won't have any value actually masked. You can use `inf` instead, that seems to work. > lionel[données]26>print amin(test), amax(test), mean(test) > 1.0 6.0 3.2 Try to use methods instead of functions. It should simplify your code and make it a bit more foolproof. A good part of scipy (a bit less of numpy) is not adapted to MaskedArrays. If you run into problems, please update the MaskedArray page of the wiki. Cheers From pearu at scipy.org Thu Mar 30 23:45:21 2006 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 30 Mar 2006 22:45:21 -0600 (CST) Subject: [SciPy-user] problem of Installing scipy In-Reply-To: <7B54DE0D8F88E1418C0B804A737E5D90073C61@UTHEVS4.mail.uthouston.edu> References: <7B54DE0D8F88E1418C0B804A737E5D90073C61@UTHEVS4.mail.uthouston.edu> Message-ID: On Thu, 30 Mar 2006, Huang, Zhong wrote: > File "Lib/linsolve/umfpack/setup.py", line 22 > libraries = ['cblas'], > ^ > SyntaxError: invalid syntax This has been fixed in svn. 
Pearu From elcorto at gmx.net Fri Mar 31 01:59:28 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 31 Mar 2006 08:59:28 +0200 Subject: [SciPy-user] Peak Fitting In-Reply-To: <442BD287.5080001@hoc.net> References: <5029069.1143719287067.JavaMail.root@webmail2.groni1> <442BD287.5080001@hoc.net> Message-ID: <442CD350.9010100@gmx.net> Christian Kristukat wrote: > basvandijk at home.nl wrote: > >>Hello, >> >>I would like to integrate some data. The data has a somewhat similar form as the >>picture of the peak in [1]. I think a good function describing the data is: > > > The peak in [1] actually is a gaussian, the formula is given below the graph: > > amp * exp(-(x-center)**2/(2*width**2)) > > >>I don't know much about linear algebra but I've read I can use a non linear >>least square fitting method to fit the data. Maybe I can also use others? My >>question is can I do this with Scipy and if so how? > > > with scipy you can do a leastsq fit like this > assuming x and y are arrays with your data: > > from scipy import optimize > > def residuals(amp,center,width): > return y-(amp * exp(-(x-center)**2/(2*width**2))) > > guess = [2.0, 4.5, 0.2] # the inital guess for the parameters > > out = optimize.leastsq(residuals, guess) > > solution = out[0] > > print solution > You can also use a general purpose minimizer in optimize and calculate the residual error (least squares error) by yourself (actually that is what leastsq() does for you). Assuming you have x and y in the appropriate namespace): from scipy import optimize from numpy import dot, sqrt def err(params): amp = params[0] center = params[1] width = params[2] r = residuals(amp, center, width) return sqrt(dot(r,r)) You may also skip the sqrt(). out = optimize.fmin_powell(err, guess) -- Random number generation is the art of producing pure gibberish as quickly as possible. 
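[Editor's note] One caveat on the leastsq examples in this thread: optimize.leastsq passes the parameters to the residuals function as a single sequence, so a runnable version has to unpack that vector itself rather than take amp, center and width as three separate arguments. A minimal sketch on synthetic, noise-free data (all parameter values here are hypothetical):

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical "true" parameters for a synthetic, noise-free peak.
true = np.array([2.0, 4.5, 0.2])  # amp, center, width
x = np.linspace(3.0, 6.0, 200)

def model(params, x):
    amp, center, width = params
    return amp * np.exp(-(x - center) ** 2 / (2 * width ** 2))

y = model(true, x)

# leastsq hands the residuals function ONE parameter vector, so it
# is unpacked inside model(), unlike the three-argument residuals
# function quoted earlier in the thread.
def residuals(params, x, y):
    return y - model(params, x)

guess = [1.5, 4.3, 0.3]
fit, ok = leastsq(residuals, guess, args=(x, y))
```

With noise-free data and a reasonable starting guess, the fitted parameters come back essentially equal to the true ones; with real noisy data the call is identical, and leastsq minimizes the sum of squared residuals.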
From lroubeyrie at limair.asso.fr Fri Mar 31 02:26:12 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Fri, 31 Mar 2006 09:26:12 +0200 Subject: [SciPy-user] SciPy-user Digest, Vol 31, Issue 42 In-Reply-To: <200603302123.06721.pgmdevlist@mailcan.com> References: <200603302123.06721.pgmdevlist@mailcan.com> Message-ID: <200603310926.12869.lroubeyrie@limair.asso.fr> Hello, thanks for your comments, I won't use the nan object anymore :-P Have a good day On Friday 31 March 2006 at 04:23, Pierre GM wrote: > Hi, > > A couple of comments: > > ##################################### > > lionel[données]21>from scipy import * > > lionel[données]22>from numpy import ma as MA > > lionel[données]23>test=MA.masked_object([1,2,3,4,nan,6], nan) > > lionel[données]24>print amin(test), amax(test), mean(test) > > It's a very bad idea to use nan as a masking value, as (nan == nan) is > always False. The mask construction will fail, and you won't have any value > actually masked. You can use `inf` instead, that seems to work. > > > lionel[données]26>print amin(test), amax(test), mean(test) > > 1.0 6.0 3.2 > > Try to use methods instead of functions. It should simplify your code and > make it a bit more foolproof. > > A good part of scipy (a bit less of numpy) is not adapted to MaskedArrays. > If you run into problems, please update the MaskedArray page of the wiki. > Cheers -- Lionel Roubeyrie - lroubeyrie at limair.asso.fr LIMAIR http://www.limair.asso.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincenzo.cacciatore at gmail.com Fri Mar 31 03:21:29 2006 From: vincenzo.cacciatore at gmail.com (vincenzo cacciatore) Date: Fri, 31 Mar 2006 10:21:29 +0200 Subject: [SciPy-user] Hamming low pass fir filter Message-ID: <7b580e5d0603310021k44197db9q2a4a15be0930fd44@mail.gmail.com> Hi all, I'm a new list member. I would like to design and apply a FIR low pass filter with a Hamming window. I'm using the scipy module. 
First I create the window with the signal.firwin function and then I apply this window to my signal with the signal.lfilter function. I didn't understand where I can specify what kind of filter I'm implementing (low, high or band pass). Thanks all, Vincenzo -------------- next part -------------- An HTML attachment was scrubbed... URL: From lroubeyrie at limair.asso.fr Fri Mar 31 07:51:58 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Fri, 31 Mar 2006 14:51:58 +0200 Subject: [SciPy-user] select Message-ID: <200603311451.58332.lroubeyrie@limair.asso.fr> Hi all, I have a little question about the select function: how can I select items between two values? ####################################### lionel[~]58>t Sortie[58]: array([ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180, 198, 216, 234, 252, 270, 288, 306, 324, 342]) lionel[~]59>select( [t<90, t>300], [t,t] ) Sortie[59]: array([ 0, 18, 36, 54, 72, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 306, 324, 342]) Ok, it works, but: lionel[~]60>select( [t>90, t<300], [t,t] ) Sortie[60]: array([ 0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180, 198, 216, 234, 252, 270, 288, 306, 324, 342]) gives all items. How can I get around this problem? Thanks -- Lionel Roubeyrie - lroubeyrie at limair.asso.fr LIMAIR http://www.limair.asso.fr From pgmdevlist at mailcan.com Fri Mar 31 13:55:45 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Fri, 31 Mar 2006 13:55:45 -0500 Subject: [SciPy-user] select In-Reply-To: References: Message-ID: <200603311355.45790.pgmdevlist@mailcan.com> Lionel, select is equivalent to: if condition1: choice1 elif condition2: choice2 else: default In your second example, if condition1 is false, then condition2 is always true (not t>90 implies t<=90, which implies t<300). Same thing if you reverse the order of the conditions, of course. I find the base functions greater, equal, less... quite useful: select([greater(t,90)*less(t,300)],[t,]) is a good start. 
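[Editor's note] Pierre's combined-condition trick can also be written with numpy's `&` operator, the elementwise AND, which is equivalent to multiplying the boolean arrays. A small sketch using the same t array from this thread:

```python
import numpy as np

t = np.arange(0, 360, 18)  # the array from Lionel's session

# A single condition combining both bounds; elements outside the
# range fall through to select()'s default value of 0.
inside = np.select([(t > 90) & (t < 300)], [t])

# Boolean (fancy) indexing drops the out-of-range items entirely
# instead of filling them with zeros.
kept = t[(t > 90) & (t < 300)]
```

Boolean indexing (the `kept` line) is often the more direct answer when the zeros that select() fills in are not wanted.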
From hetland at tamu.edu Fri Mar 31 17:37:25 2006 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 31 Mar 2006 16:37:25 -0600 Subject: [SciPy-user] Floating point exception in scipy Message-ID: <60752119-1E41-4D4A-980F-DB855D52E0DA@tamu.edu> Compiled on an Intel Mac os X using gcc 4.0.1 (the only one available on intel macs) and gfortran (from hpc.sf.net). Python is MacPython Universal build 2.4.3. Numpy compiles without a hitch, and tests with no errors. SciPy also compiles without errors, but I get a floating point exception when trying to test scipy. This includes doing scipy.test (10,10). Below are the details of some attempts. I'm really not sure where to begin, as it compiles fine. Jordan Mantha has claimed success basically following the PPC build instructions with this compiler configuration, but I have not had any luck. I have also tried to exclude modules that need umfpack, but that also failed in a similar way. Any ideas? -Rob mire:~/src/python$ python Python 2.4.3 (#1, Mar 30 2006, 11:02:16) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy import linsolve.umfpack -> failed: No module named _umfpack >>> scipy.test(10,10) Floating point exception mire:~/src/python$ python Python 2.4.3 (#1, Mar 30 2006, 11:02:16) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy.special import linsolve.umfpack -> failed: No module named _umfpack Floating point exception mire:~/src/python$ python Python 2.4.3 (#1, Mar 30 2006, 11:02:16) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> from scipy import * import linsolve.umfpack -> failed: No module named _umfpack Floating point exception ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu