From oliphant.travis at ieee.org Fri Dec 1 18:25:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 01 Dec 2006 16:25:15 -0700 Subject: [SciPy-dev] "fancy index" assignment In-Reply-To: <200611301602.22063.a.u.r.e.l.i.a.n@gmx.net> References: <200611301602.22063.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <4570B9DB.6030502@ieee.org> Johannes Loehnert wrote: > Hi, > > I have a problem with fancy assignment. Even though left and right side > of the assignment have the same shape, an exception occurs. numpy is freshly > built 10 minutes ago. > This is a bug in NumPy. I've tracked it down and am trying to come up with a fix. Basically, I need to figure out the transpose tuple to use to invert a permutation that I'm using. If I do. b = a.transpose(perm) c = b.transpose(what) so that c and a have the same shape. I have a general formula for perm but I'm not quite sure how to translate that into a general formula for what. I may have to just calculate the inverse permutation in the code. -Travis.
From nwagner at iam.uni-stuttgart.de Sat Dec 2 04:38:29 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 02 Dec 2006 10:38:29 +0100 Subject: [SciPy-dev] Scipy Tickets 320-322 Message-ID: Please can someone close Scipy Tickets 320-322. I cannot reproduce them with the latest svn version. Nils From david at ar.media.kyoto-u.ac.jp Mon Dec 4 07:39:53 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Dec 2006 21:39:53 +0900 Subject: [SciPy-dev] "branching" a package inside scipy ? Message-ID: <45741719.1060406@ar.media.kyoto-u.ac.jp> Hi, It is a bit OT, but I would like to know if it is possible to "branch" a package under scipy hierarchy. Specifically, I am about to change heavily some internal of my toolbox pyem in the sandbox, which may involve some changes of scipy.clusters (I would wait for approval for committing the changes once done of course), but I would like to continue fixing some bugs and so on in the trunk. Is this possible for someone with write access to scipy SVN ? cheers, David From robert.kern at gmail.com Mon Dec 4 12:04:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Dec 2006 11:04:13 -0600 Subject: [SciPy-dev] "branching" a package inside scipy ? In-Reply-To: <45741719.1060406@ar.media.kyoto-u.ac.jp> References: <45741719.1060406@ar.media.kyoto-u.ac.jp> Message-ID: <4574550D.2040007@gmail.com> David Cournapeau wrote: > Hi, > > It is a bit OT, but I would like to know if it is possible to > "branch" a package under scipy hierarchy. Specifically, I am about to > change heavily some internal of my toolbox pyem in the sandbox, which > may involve some changes of scipy.clusters (I would wait for approval > for committing the changes once done of course), but I would like to > continue fixing some bugs and so on in the trunk. > > Is this possible for someone with write access to scipy SVN ? Certainly.
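The inverse permutation discussed in the fancy-index thread above has a compact construction in NumPy itself: the argsort of a permutation is its inverse. A minimal sketch of that property (an illustration only, not the actual fix that went into NumPy; the names `perm` and `inv` follow Travis's message):

```python
import numpy as np

# The argsort of a permutation is its inverse: perm[inv[j]] == j for all j.
a = np.arange(24).reshape(2, 3, 4)
perm = (2, 0, 1)
inv = tuple(np.argsort(perm))

b = a.transpose(perm)   # axes permuted by perm
c = b.transpose(inv)    # permuted back by the inverse

assert c.shape == a.shape
assert (c == a).all()
```

Transposing by `perm` and then by its argsort restores the original axis order, which is exactly the "general formula for what" being asked for.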
Here is a brief outline on how to do it. I recommend ignoring most of the first half, which describes the manual procedure, and using the svnmerge.py tool that I describe in the latter half. It simplifies the process immensely. http://projects.scipy.org/scipy/numpy/wiki/MakingBranches All of the examples there branch the whole trunk. The only change you have to make is to use the URL directly to the cluster/ directory: http://svn.scipy.org/svn/scipy/trunk --> http://svn.scipy.org/svn/scipy/trunk/Lib/cluster -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dd55 at cornell.edu Mon Dec 4 14:39:39 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 4 Dec 2006 14:39:39 -0500 Subject: [SciPy-dev] scipy-0.5.2 Message-ID: <200612041439.39225.dd55@cornell.edu> On October 25, a message was posted to scipy-dev suggesting Oct 30 as a target release date for scipy-0.5.2. I'm sure there are good reasons for the delay, but several people have asked on the lists about the availability of a scipy release that is compatible with numpy-1.0, and to my knowledge these inquiries have not been answered. Would someone knowledgeable post a status report concerning scipy-0.5.2 to the lists? I think the need for a compatible scipy release is causing problems for packagers who are trying to support a working numpy/scipy/matplotlib environment for scientific computing. Respectfully, Darren -- Darren S. Dale, Ph.D. Staff Scientist Cornell High Energy Synchrotron Source Cornell University 275 Wilson Lab Rt. 
366 & Pine Tree Road Ithaca, NY 14853 dd55 at cornell.edu office: (607) 255-3819 fax: (607) 255-9001 http://www.chess.cornell.edu From oliphant at ee.byu.edu Mon Dec 4 15:26:06 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 04 Dec 2006 13:26:06 -0700 Subject: [SciPy-dev] scipy-0.5.2 In-Reply-To: <200612041439.39225.dd55@cornell.edu> References: <200612041439.39225.dd55@cornell.edu> Message-ID: <4574845E.4030802@ee.byu.edu> Darren Dale wrote: >On October 25, a message was posted to scipy-dev suggesting Oct 30 as a target >release date for scipy-0.5.2. I'm sure there are good reasons for the delay, >but several people have asked on the lists about the availability of a scipy >release that is compatible with numpy-1.0, and to my knowledge these >inquiries have not been answered. Would someone knowledgeable post a status >report concerning scipy-0.5.2 to the lists? I think the need for a compatible >scipy release is causing problems for packagers who are trying to support a >working numpy/scipy/matplotlib environment for scientific computing. > > > The problem is lack of help. It would appear that I and Ed Schofield are being relied on to release SciPy and neither of us has found the time to do it in the past month. There has been a large influx in the number of bug-reports and feature requests and I don't think anybody has been able to keep up. My feeling is we should just release 0.5.2 now and deal with bug-reports later. But, usually that means at least getting the tests to pass. In short, we need a release manager for SciPy. Anybody willing to take up the task? 
-Travis From stefan at sun.ac.za Mon Dec 4 16:28:44 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 4 Dec 2006 23:28:44 +0200 Subject: [SciPy-dev] scipy-0.5.2 In-Reply-To: <4574845E.4030802@ee.byu.edu> References: <200612041439.39225.dd55@cornell.edu> <4574845E.4030802@ee.byu.edu> Message-ID: <20061204212844.GV2877@mentat.za.net> On Mon, Dec 04, 2006 at 01:26:06PM -0700, Travis Oliphant wrote: > My feeling is we should just release 0.5.2 now and deal with bug-reports > later. But, usually that means at least getting the tests to pass. > > In short, we need a release manager for SciPy. Anybody willing to take > up the task? What does this entail? Building on Windows, Linux and MacOS platforms? Making sure the unit tests run? Building with the different ATLAS-versions for the different platforms? Does this need to be one person, or can we have three, each responsible for a different architecture? Regards Stéfan From oliphant at ee.byu.edu Mon Dec 4 17:42:37 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 04 Dec 2006 15:42:37 -0700 Subject: [SciPy-dev] scipy-0.5.2 In-Reply-To: <20061204212844.GV2877@mentat.za.net> References: <200612041439.39225.dd55@cornell.edu> <4574845E.4030802@ee.byu.edu> <20061204212844.GV2877@mentat.za.net> Message-ID: <4574A45D.1000508@ee.byu.edu> Stefan van der Walt wrote: >On Mon, Dec 04, 2006 at 01:26:06PM -0700, Travis Oliphant wrote: > > >>My feeling is we should just release 0.5.2 now and deal with bug-reports >>later. But, usually that means at least getting the tests to pass. >> >>In short, we need a release manager for SciPy. Anybody willing to take >>up the task? >> >> > >What does this entail? Building on Windows, Linux and MacOS >platforms? Making sure the unit tests run? Building with the >different ATLAS-versions for the different platforms? > > Yes, and also deciding when to make a release. >Does this need to be one person, or can we have three, each >responsible for a different architecture?
> > One person needs to decide when to make a release. There should be several people who help with making binaries and fixing up tests. -Travis From david at ar.media.kyoto-u.ac.jp Mon Dec 4 22:13:25 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 05 Dec 2006 12:13:25 +0900 Subject: [SciPy-dev] "branching" a package inside scipy ? In-Reply-To: <4574550D.2040007@gmail.com> References: <45741719.1060406@ar.media.kyoto-u.ac.jp> <4574550D.2040007@gmail.com> Message-ID: <4574E3D5.1060005@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Certainly. Here is a brief outline on how to do it. I recommend ignoring most of > the first half, which describes the manual procedure, and using the svnmerge.py > tool that I describe in the latter half. It simplifies the process immensely. > > http://projects.scipy.org/scipy/numpy/wiki/MakingBranches > > All of the examples there branch the whole trunk. The only change you have to > make is to use the URL directly to the cluster/ directory: > > http://svn.scipy.org/svn/scipy/trunk > --> > http://svn.scipy.org/svn/scipy/trunk/Lib/cluster > Thanks, I am surprised I didn't find the link in the wiki (maybe the search does not take trac into account ?). Anyway, can I use anything I want for a branch name ? cheers, David From nwagner at iam.uni-stuttgart.de Tue Dec 5 03:31:39 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 05 Dec 2006 09:31:39 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 Message-ID: <45752E6B.7040903@iam.uni-stuttgart.de> Hi all, I have some trouble with UMFPACK 5.0. 
My site.cfg reads [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 include_dirs = /usr/include:/usr/local/include [amd] library_dirs = /usr/local/src/AMD/Lib include_dirs = /usr/local/src/AMD/Include, /usr/local/src/UFconfig amd_libs = amd [umfpack] library_dirs = /usr/local/src/UMFPACK/Lib include_dirs = /usr/local/src/UMFPACK/Include, /usr/local/src/UFconfig umfpack_libs = umfpack python setup.py results in building extension "scipy.linsolve.umfpack.__umfpack" sources creating build/src.linux-x86_64-2.4/scipy/linsolve creating build/src.linux-x86_64-2.4/scipy/linsolve/umfpack adding 'Lib/linsolve/umfpack/umfpack.i' to sources. creating build/src.linux-x86_64-2.4/Lib/linsolve creating build/src.linux-x86_64-2.4/Lib/linsolve/umfpack swig: Lib/linsolve/umfpack/umfpack.i swig -python -o build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c -outdir build/src.linux-x86_64-2.4/Lib/linsolve/umfpack Lib/linsolve/umfpack/umfpack.i Lib/linsolve/umfpack/umfpack.i:188: Error: Unable to find 'umfpack.h' Lib/linsolve/umfpack/umfpack.i:189: Error: Unable to find 'umfpack_solve.h' Lib/linsolve/umfpack/umfpack.i:190: Error: Unable to find 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:191: Error: Unable to find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack_col_to_triplet.h' Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_transpose.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_scale.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_report_symbolic.h' Lib/linsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_report_numeric.h' Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_report_info.h' Lib/linsolve/umfpack/umfpack.i:199: Error: Unable to find 'umfpack_report_control.h' Lib/linsolve/umfpack/umfpack.i:211: Error: Unable to find 'umfpack_symbolic.h' Lib/linsolve/umfpack/umfpack.i:212: Error: Unable to find 
'umfpack_numeric.h' Lib/linsolve/umfpack/umfpack.i:221: Error: Unable to find 'umfpack_free_symbolic.h' Lib/linsolve/umfpack/umfpack.i:222: Error: Unable to find 'umfpack_free_numeric.h' Lib/linsolve/umfpack/umfpack.i:244: Error: Unable to find 'umfpack_get_lunz.h' Lib/linsolve/umfpack/umfpack.i:268: Error: Unable to find 'umfpack_get_numeric.h' error: command 'swig' failed with exit status 1 ls -l /usr/local/src/UMFPACK/Include/ total 252 -rw-rw---- 1 root root 3712 2006-05-02 15:23 umfpack_col_to_triplet.h -rw-rw---- 1 root root 1892 2006-05-02 15:23 umfpack_defaults.h -rw-rw---- 1 root root 1690 2006-05-02 15:23 umfpack_free_numeric.h -rw-rw---- 1 root root 1708 2006-05-02 15:23 umfpack_free_symbolic.h -rw-rw---- 1 root root 6270 2006-05-02 15:23 umfpack_get_determinant.h -rw-rw---- 1 root root 3989 2006-05-02 15:23 umfpack_get_lunz.h -rw-rw---- 1 root root 8960 2006-05-02 15:24 umfpack_get_numeric.h -rw-rw---- 1 root root 13154 2006-05-02 15:24 umfpack_get_symbolic.h -rw-rw---- 1 root root 1029 2006-05-02 01:34 umfpack_global.h -rw-rw---- 1 root root 19493 2006-05-02 14:47 umfpack.h -rw-rw---- 1 root root 2585 2006-05-02 15:24 umfpack_load_numeric.h -rw-rw---- 1 root root 2612 2006-05-02 15:24 umfpack_load_symbolic.h -rw-rw---- 1 root root 23215 2006-05-02 15:24 umfpack_numeric.h -rw-rw---- 1 root root 5157 2006-05-02 15:24 umfpack_qsymbolic.h -rw-rw---- 1 root root 2197 2006-05-02 15:24 umfpack_report_control.h -rw-rw---- 1 root root 2639 2006-05-02 15:24 umfpack_report_info.h -rw-rw---- 1 root root 6997 2006-05-02 15:24 umfpack_report_matrix.h -rw-rw---- 1 root root 3612 2006-05-02 15:25 umfpack_report_numeric.h -rw-rw---- 1 root root 3520 2006-05-02 15:25 umfpack_report_perm.h -rw-rw---- 1 root root 2563 2006-05-02 15:25 umfpack_report_status.h -rw-rw---- 1 root root 3551 2006-05-02 15:25 umfpack_report_symbolic.h -rw-rw---- 1 root root 4893 2006-05-02 15:25 umfpack_report_triplet.h -rw-rw---- 1 root root 4507 2006-05-02 15:25 umfpack_report_vector.h 
-rw-rw---- 1 root root 2243 2006-05-02 15:25 umfpack_save_numeric.h -rw-rw---- 1 root root 2274 2006-05-02 15:25 umfpack_save_symbolic.h -rw-rw---- 1 root root 3161 2006-05-02 15:25 umfpack_scale.h -rw-rw---- 1 root root 10693 2006-05-02 15:25 umfpack_solve.h -rw-rw---- 1 root root 22283 2006-05-02 15:26 umfpack_symbolic.h -rw-rw---- 1 root root 2225 2006-05-01 14:50 umfpack_tictoc.h -rw-rw---- 1 root root 1719 2006-05-01 14:50 umfpack_timer.h -rw-rw---- 1 root root 7938 2006-05-02 15:26 umfpack_transpose.h -rw-rw---- 1 root root 10442 2006-05-02 15:26 umfpack_triplet_to_col.h -rw-rw---- 1 root root 5542 2006-05-02 15:26 umfpack_wsolve.h Any pointer how to fix this problem would be appreciated. Thanks in advance Nils From fullung at gmail.com Tue Dec 5 04:58:52 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 5 Dec 2006 11:58:52 +0200 Subject: [SciPy-dev] "branching" a package inside scipy ? In-Reply-To: <4574E3D5.1060005@ar.media.kyoto-u.ac.jp> References: <45741719.1060406@ar.media.kyoto-u.ac.jp> <4574550D.2040007@gmail.com> <4574E3D5.1060005@ar.media.kyoto-u.ac.jp> Message-ID: <5eec5f300612050158j54609b4cqd09efbc8722229b2@mail.gmail.com> Hello all On 12/5/06, David Cournapeau wrote: > Robert Kern wrote: > > > > Certainly. Here is a brief outline on how to do it. I recommend ignoring most of > > the first half, which describes the manual procedure, and using the svnmerge.py > > tool that I describe in the latter half. It simplifies the process immensely. > > > > http://projects.scipy.org/scipy/numpy/wiki/MakingBranches > > > > All of the examples there branch the whole trunk. The only change you have to > > make is to use the URL directly to the cluster/ directory: > > > > http://svn.scipy.org/svn/scipy/trunk > > --> > > http://svn.scipy.org/svn/scipy/trunk/Lib/cluster > > > Thanks, I am surprised I didn't find the link in the wiki (maybe the > search does not take trac into account ?). > > Anyway, can I use anything I want for a branch name ? 
Something like branches/ or branches/pyem-refactor is probably a good choice. The SCons folks also use svnmerge.py to take care of their branching and merging. You might want to check out their recipe too (seems to differ slightly from Robert's): http://scons.tigris.org/branching.html Cheers, Albert From robert.kern at gmail.com Tue Dec 5 14:08:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Dec 2006 13:08:29 -0600 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45752E6B.7040903@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> Message-ID: <4575C3AD.1030105@gmail.com> Nils Wagner wrote: > Hi all, > > I have some trouble with UMFPACK 5.0. > > My site.cfg reads > > [DEFAULT] > library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 > include_dirs = /usr/include:/usr/local/include > > [amd] > library_dirs = /usr/local/src/AMD/Lib > include_dirs = /usr/local/src/AMD/Include, /usr/local/src/UFconfig > amd_libs = amd > > [umfpack] > library_dirs = /usr/local/src/UMFPACK/Lib > include_dirs = /usr/local/src/UMFPACK/Include, /usr/local/src/UFconfig > umfpack_libs = umfpack You need to use the appropriate path separator for your platform. On UN*Xes, that's ":", not ",". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Dec 5 14:58:13 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 05 Dec 2006 20:58:13 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4575C3AD.1030105@gmail.com> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> Message-ID: On Tue, 05 Dec 2006 13:08:29 -0600 Robert Kern wrote: > Nils Wagner wrote: >> Hi all, >> >> I have some trouble with UMFPACK 5.0. 
>> >> My site.cfg reads >> >> [DEFAULT] >> library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 >> include_dirs = /usr/include:/usr/local/include >> >> [amd] >> library_dirs = /usr/local/src/AMD/Lib >> include_dirs = /usr/local/src/AMD/Include, >>/usr/local/src/UFconfig >> amd_libs = amd >> >> [umfpack] >> library_dirs = /usr/local/src/UMFPACK/Lib >> include_dirs = /usr/local/src/UMFPACK/Include, >>/usr/local/src/UFconfig >> umfpack_libs = umfpack > > You need to use the appropriate path separator for your >platform. On UN*Xes, > that's ":", not ",". > I found the example site.cfg using help (linsolve) I will try it tomorrow. Thank you Nils > > -- > Robert Kern > > "I have come to believe that the whole world is an >enigma, a harmless enigma > that is made terrible by our own mad attempt to >interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From mattknox_ca at hotmail.com Tue Dec 5 15:36:13 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 5 Dec 2006 15:36:13 -0500 Subject: [SciPy-dev] adding files to sandbox Message-ID: how does one go about getting code added to the sandbox? Several of my co-op students over the past few months have worked on a time series module off and on, and I think it is very close to being ready for some public feedback and usage. I don't claim that the code is super efficient and/or bug free, but the basic idea of the interface is in place. We're just polishing up some of the frequency conversion stuff, which I hope to have done within two weeks, then I'd like to distribute this code to the numpy/scipy community, and maybe get some feedback if possible. The functionality is modelled after FAME, but where possible I have tried to make it numpy-ish rather than FAME-ish when the two offer conflicting paradigms. 
Some of you may remember I made a post about this same topic quite a while ago, and had not delivered anything up to this point... but I really mean it this time :) So anyway, if someone could maybe give me some instructions on how to add code to the sandbox, or provide me with an email address that I could send the stuff to instead, that would be great. Thanks, - Matt Knox From robert.kern at gmail.com Tue Dec 5 16:00:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Dec 2006 15:00:37 -0600 Subject: [SciPy-dev] adding files to sandbox In-Reply-To: References: Message-ID: <4575DDF5.3060706@gmail.com> Matt Knox wrote: > how does one go about getting code added to the sandbox? Several of my co-op students over the past few months have worked on a time series module off and on, and I think it is very close to being ready for some public feedback and usage. I don't claim that the code is super efficient and/or bug free, but the basic idea of the interface is in place. We're just polishing up some of the frequency conversion stuff, which I hope to have done within two weeks, then I'd like to distribute this code to the numpy/scipy community, and maybe get some feedback if possible. > > The functionality is modelled after FAME, but where possible I have tried to make it numpy-ish rather than FAME-ish when the two offer conflicting paradigms. > > Some of you may remember I made a post about this same topic quite a while ago, and had not delivered anything up to this point... but I really mean it this time :) > > So anyway, if someone could maybe give me some instructions on how to add code to the sandbox, or provide me with an email address that I could send the stuff to instead, that would be great. I am CCing Jeff Strunk, our system administrator. Jeff, can you please give Matt write access to the scipy SVN? Matt, if you also need a tutorial specifically on the process of checking stuff into SVN, let me know. Your question was a little ambiguous. 
:-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gnchen at cortechs.net Tue Dec 5 16:27:55 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Tue, 05 Dec 2006 13:27:55 -0800 Subject: [SciPy-dev] compile scipy by using intel compiler Message-ID: <1165354075.6742.5.camel@cortechs25.cortechs.net> Hi! All, I have a dual opteron 285 with 8G ram machine. And I ran FC6 x86_64 on that. I did manage to get numpy (from svn) compiled by using icc 9.1.0.45 and mkl 9.0 (got 3 errors when I ran the test). But no such luck for scipy (from svn). Below is the error: Lib/special/cephes/mconf.h(137): remark #193: zero used for undefined preprocessing identifier #if WORDS_BIGENDIAN /* Defined in pyconfig.h */ ^ Lib/special/cephes/const.c(92): error: floating-point operation result is out of range double INFINITY = 1.0/0.0; /* 99e999; */ ^ Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ compilation aborted for Lib/special/cephes/const.c (code 2) error: Command "icc -O2 -g -fomit-frame-pointer -mcpu=pentium4 -mtune=pentium4 -march=pentium4 -msse3 -axW -Wall -fPIC -c Lib/special/cephes/const.c -o build/temp.linux-x86_64-2.4/Lib/special/cephes/const.o" failed with exit status 2 Does anyone have a solution for this?
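The two const.c errors above come from icc refusing to fold IEEE-754 expressions such as 1.0/0.0 into compile-time constants. At run time the same arithmetic behaves exactly as cephes expects, which this NumPy sketch demonstrates (an illustration of the semantics involved, not a build fix):

```python
import numpy as np

# cephes' const.c builds its constants from IEEE-754 arithmetic:
# 1.0/0.0 -> +inf, and inf - inf -> NaN. icc rejects these when it
# tries to evaluate them as compile-time constants, but the run-time
# semantics hold.
with np.errstate(divide='ignore', invalid='ignore'):
    inf = np.float64(1.0) / np.float64(0.0)
    nan = inf - inf

assert np.isinf(inf) and inf > 0
assert np.isnan(nan)
```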
BTW, the 3 errors I got from numpy are: File "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", line 25, in test_ufunclike Failed example: nx.sign(a) Expected: array([ 1., -1., 0., 0., 1., -1.]) Got: array([ 1., -1., -1., 0., 1., -1.]) ********************************************************************** File "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", line 40, in test_ufunclike Failed example: nx.sign(a, y) Expected: array([True, True, False, False, True, True], dtype=bool) Got: array([True, True, True, False, True, True], dtype=bool) ********************************************************************** File "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", line 43, in test_ufunclike Failed example: y Expected: array([True, True, False, False, True, True], dtype=bool) Got: array([True, True, True, False, True, True], dtype=bool) Are these errors serious? Or maybe I should get back to gcc? Anyone got a good speed up by using icc and mkl? -- Gen-Nan Chen, PhD -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattknox_ca at hotmail.com Tue Dec 5 16:58:10 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 5 Dec 2006 16:58:10 -0500 Subject: [SciPy-dev] adding files to sandbox Message-ID: > I am CCing Jeff Strunk, our system administrator. Jeff, can you please give Matt > write access to the scipy SVN? > > Matt, if you also need a tutorial specifically on the process of checking stuff > into SVN, let me know. Your question was a little ambiguous. Hi Robert. I will need a tutorial on how to check stuff into SVN. My source control software experience is limited to a rather simplistic tool we use here at the office. I work exclusively in the Windows world too (for better or worse).
Thanks, - Matt From robert.kern at gmail.com Tue Dec 5 17:40:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Dec 2006 16:40:45 -0600 Subject: [SciPy-dev] adding files to sandbox In-Reply-To: References: Message-ID: <4575F56D.4010802@gmail.com> Matt Knox wrote: >> I am CCing Jeff Strunk, our system administrator. Jeff, can you please give Matt >> write access to the scipy SVN? >> >> Matt, if you also need a tutorial specifically on the process of checking stuff >> into SVN, let me know. Your question was a little ambiguous. > > Hi Robert. I will need a tutorial on how to check stuff into SVN. My source control software experience is limited to a rather simplistic tool we use here at the office. I work exclusively in the Windows world too (for better or worse). Now, I'm exclusively a non-Windows person, so (for the moment), I am simply going to point you towards TortoiseSVN, a Windows GUI client for working with Subversion. The online documentation is pretty good. http://tortoisesvn.net/ http://tortoisesvn.net/doc_release You can ignore anything about setting up a repository (since you are only accessing one that already exists). So start with "Checking Out a Working Copy". The URL for the trunk of scipy is this: http://svn.scipy.org/svn/scipy/trunk If you need some more conceptual documentation on how Subversion does things or what some of the terms mean, a good (and free) book on Subversion itself if _Version Control with Subversion_. http://svnbook.red-bean.com/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From fullung at gmail.com Tue Dec 5 19:25:10 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 6 Dec 2006 02:25:10 +0200 Subject: [SciPy-dev] compile scipy by using intel compiler In-Reply-To: <1165354075.6742.5.camel@cortechs25.cortechs.net> References: <1165354075.6742.5.camel@cortechs25.cortechs.net> Message-ID: <20061206002510.GA11882@dogbert.sdsl.sun.ac.za> Hello all On Tue, 05 Dec 2006, Gennan Chen wrote: > Hi! All, > > I have a dual opteron 285 with 8G ram machine. And I ran FC6 x86_64 on > that. I did manage to get numpy (from svn) compiled by using icc > 9.1.0.45 and mkl 9.0 ( got 3 errors when I ran the est). But no such > luck for scipy (from svn). Below is the error: > > Lib/special/cephes/mconf.h(137): remark #193: zero used for undefined > preprocessing identifier > #if WORDS_BIGENDIAN /* Defined in pyconfig.h */ > ^ Looks like a missing define. > Lib/special/cephes/const.c(92): error: floating-point operation result > is out of range > double INFINITY = 1.0/0.0; /* 99e999; */ > ^ > > Lib/special/cephes/const.c(97): error: floating-point operation result > is out of range > double NAN = 1.0/0.0 - 1.0/0.0; > ^ > > Lib/special/cephes/const.c(97): error: floating-point operation result > is out of range > double NAN = 1.0/0.0 - 1.0/0.0; > ^ IIRC, Robert (or someone) fixed these issues for the Visual Studio compiler by defining something. > compilation aborted for Lib/special/cephes/const.c (code 2) > error: Command "icc -O2 -g -fomit-frame-pointer -mcpu=pentium4 > -mtune=pentium4 -march=pentium4 -msse3 -axW -Wall -fPIC -c > Lib/special/cephes/const.c -o > build/temp.linux-x86_64-2.4/Lib/special/cephes/const.o" failed with exit > status 2 > > Did anyone has a solution for this? 
> > BTW, the 3 error I got from numpy are: > File > "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", > line 25, in test_ufunclike > Failed example: > nx.sign(a) > Expected: > array([ 1., -1., 0., 0., 1., -1.]) > Got: > array([ 1., -1., -1., 0., 1., -1.]) > ********************************************************************** > File > "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", > line 40, in test_ufunclike > Failed example: > nx.sign(a, y) > Expected: > array([True, True, False, False, True, True], dtype=bool) > Got: > array([True, True, True, False, True, True], dtype=bool) > ********************************************************************** > File > "/usr/lib64/python2.4/site-packages/numpy/lib/tests/test_ufunclike.py", > line 43, in test_ufunclike > Failed example: > y > Expected: > array([True, True, False, False, True, True], dtype=bool) > Got: > array([True, True, True, False, True, True], dtype=bool) > > > Are these error serious?? Probably not. IIRC, the Intel compiler seems to think a bit differently about the sign of NaN than GCC and MSVC do. > Or maybe I should get back to gcc? Anyone got a good speed up by using > icc and mkl? I don't have any NumPy-specific benchmarks, but the Intel compiler's auto-vectorization led to a 4* speedup over MSVC on my Core 2 Duo for some of my own C code (and I guesstimate MSVC to be about as good as recent versions of GCC). The same auto-vectorization kicks in for all the NumPy ufuncs, so it's quite probable that the Intel compiler will help out. Either way, we should probably try to sort out the SciPy build. The benefits obtained from using the Intel compiler are definitely worth the effort.
Cheers, Albert From aisaac at american.edu Wed Dec 6 00:28:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 6 Dec 2006 00:28:10 -0500 Subject: [SciPy-dev] adding files to sandbox In-Reply-To: References: Message-ID: On Tue, 5 Dec 2006, Matt Knox apparently wrote: > Some of you may remember I made a post about this same > topic quite a while ago. Yes. > but I really mean it this time :) Please announce when its up. Cheers, Alan Isaac From nwagner at iam.uni-stuttgart.de Wed Dec 6 02:44:49 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 06 Dec 2006 08:44:49 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4575C3AD.1030105@gmail.com> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> Message-ID: <457674F1.8040709@iam.uni-stuttgart.de> Robert Kern wrote: > Nils Wagner wrote: > >> Hi all, >> >> I have some trouble with UMFPACK 5.0. >> >> My site.cfg reads >> >> [DEFAULT] >> library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 >> include_dirs = /usr/include:/usr/local/include >> >> [amd] >> library_dirs = /usr/local/src/AMD/Lib >> include_dirs = /usr/local/src/AMD/Include, /usr/local/src/UFconfig >> amd_libs = amd >> >> [umfpack] >> library_dirs = /usr/local/src/UMFPACK/Lib >> include_dirs = /usr/local/src/UMFPACK/Include, /usr/local/src/UFconfig >> umfpack_libs = umfpack >> > > You need to use the appropriate path separator for your platform. On UN*Xes, > that's ":", not ",". > > > Hi Robert, I have replaced the commas by colons in my site.cfg. [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 include_dirs = /usr/include:/usr/local/include [amd] library_dirs = /usr/local/src/AMD/Lib include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig amd_libs = amd [umfpack] library_dirs = /usr/local/src/UMFPACK/Lib include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig umfpack_libs = umfpack Now I get ... 
creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include -I/usr/local/src/AMD/Include -I/usr/local/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' gcc: build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c In file included from build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: /usr/local/src/UMFPACK/Include/umfpack.h:31:22: error: UFconfig.h: No such file or directory In file included from /usr/local/src/UMFPACK/Include/umfpack.h:48, from build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: /usr/local/src/UMFPACK/Include/umfpack_symbolic.h:23: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_dl_symbolic' /usr/local/src/UMFPACK/Include/umfpack_symbolic.h:47: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_zl_symbolic' In file included from /usr/local/src/UMFPACK/Include/umfpack.h:49, from build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: /usr/local/src/UMFPACK/Include/umfpack_numeric.h:22: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_dl_numeric' /usr/local/src/UMFPACK/Include/umfpack_numeric.h:44: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_zl_numeric' In file included from /usr/local/src/UMFPACK/Include/umfpack.h:50, from build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: /usr/local/src/UMFPACK/Include/umfpack_solve.h:24: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_dl_solve' /usr/local/src/UMFPACK/Include/umfpack_solve.h:50: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_zl_solve' 
In file included from /usr/local/src/UMFPACK/Include/umfpack.h:56, from build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: /usr/local/src/UMFPACK/Include/umfpack_qsymbolic.h:24: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_dl_qsymbolic' /usr/local/src/UMFPACK/Include/umfpack_qsymbolic.h:50: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'umfpack_zl_qsymbolic' In file included from /usr/local/src/UMFPACK/Include/umfpack.h:57, ... locate UFconfig.h /usr/local/src/UFconfig/UFconfig.h Nils From cimrman3 at ntc.zcu.cz Wed Dec 6 05:31:11 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 06 Dec 2006 11:31:11 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <457674F1.8040709@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> Message-ID: <45769BEF.3040107@ntc.zcu.cz> Nils Wagner wrote: > I have replaced the commas by colons in my site.cfg. > > [DEFAULT] > library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 > include_dirs = /usr/include:/usr/local/include > > [amd] > library_dirs = /usr/local/src/AMD/Lib > include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig > amd_libs = amd > > [umfpack] > library_dirs = /usr/local/src/UMFPACK/Lib > include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig > umfpack_libs = umfpack > > Now I get > ... 
> creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve > creating > build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack > compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H > -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include > -I/usr/local/src/AMD/Include > -I/usr/local/lib64/python2.4/site-packages/numpy/core/include > -I/usr/include/python2.4 -c' > gcc: build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c > In file included from > build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: > /usr/local/src/UMFPACK/Include/umfpack.h:31:22: error: UFconfig.h: No > such file or directory > ... > locate UFconfig.h > > /usr/local/src/UFconfig/UFconfig.h Is the file really there? Your slocate database may be old... r. From cimrman3 at ntc.zcu.cz Wed Dec 6 05:36:06 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 06 Dec 2006 11:36:06 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <457674F1.8040709@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> Message-ID: <45769D16.1020808@ntc.zcu.cz> Nils Wagner wrote: > I have replaced the commas by colons in my site.cfg. > > [DEFAULT] > library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 > include_dirs = /usr/include:/usr/local/include > > [amd] > library_dirs = /usr/local/src/AMD/Lib > include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig > amd_libs = amd > > [umfpack] > library_dirs = /usr/local/src/UMFPACK/Lib > include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig > umfpack_libs = umfpack > > Now I get > ... 
> creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve > creating > build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack > compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H > -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include > -I/usr/local/src/AMD/Include > -I/usr/local/lib64/python2.4/site-packages/numpy/core/include > -I/usr/include/python2.4 -c' -I/usr/local/src/UFconfig is missing here, strange. Could you try to copy or link manually UFconfig.h into e.g. /usr/local/src/UMFPACK/Include? > gcc: build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c > In file included from > build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: > /usr/local/src/UMFPACK/Include/umfpack.h:31:22: error: UFconfig.h: No > such file or directory From nwagner at iam.uni-stuttgart.de Wed Dec 6 07:48:26 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 06 Dec 2006 13:48:26 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45769D16.1020808@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> Message-ID: <4576BC1A.6080902@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> I have replaced the commas by colons in my site.cfg. >> >> [DEFAULT] >> library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 >> include_dirs = /usr/include:/usr/local/include >> >> [amd] >> library_dirs = /usr/local/src/AMD/Lib >> include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig >> amd_libs = amd >> >> [umfpack] >> library_dirs = /usr/local/src/UMFPACK/Lib >> include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig >> umfpack_libs = umfpack >> >> Now I get >> ... 
>> creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve >> creating >> build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack >> compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H >> -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include >> -I/usr/local/src/AMD/Include >> -I/usr/local/lib64/python2.4/site-packages/numpy/core/include >> -I/usr/include/python2.4 -c' >> > > -I/usr/local/src/UFconfig is missing here, strange. Could you try to > copy or link manually UFconfig.h into e.g. /usr/local/src/UMFPACK/Include? > > I have put a copy of UFconfig.h into /usr/local/src/UMFPACK/Include. locate UFconfig.h /usr/local/src/UFconfig/UFconfig.h /usr/local/src/UMFPACK/Include/UFconfig.h With that I was able to install scipy. But scipy.test(1) yields Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Warning: FAILURE importing tests for /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Found 4 tests for scipy.optimize.zeros Any idea ? 
Nils amd_info: libraries = ['amd'] library_dirs = ['/usr/local/src/AMD/Lib'] define_macros = [('SCIPY_AMD_H', None)] swig_opts = ['-I/usr/local/src/AMD/Include'] include_dirs = ['/usr/local/src/AMD/Include'] umfpack_info: libraries = ['umfpack', 'amd'] library_dirs = ['/usr/local/src/UMFPACK/Lib', '/usr/local/src/AMD/Lib'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] swig_opts = ['-I/usr/local/src/UMFPACK/Include', '-I/usr/local/src/AMD/Include'] include_dirs = ['/usr/local/src/UMFPACK/Include', '/usr/local/src/AMD/Include'] >> gcc: build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c >> In file included from >> build/src.linux-x86_64-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c:1440: >> /usr/local/src/UMFPACK/Include/umfpack.h:31:22: error: UFconfig.h: No >> such file or directory >> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From nwagner at iam.uni-stuttgart.de Wed Dec 6 08:07:47 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 06 Dec 2006 14:07:47 +0100 Subject: [SciPy-dev] scipy-0.5.2 In-Reply-To: <4574845E.4030802@ee.byu.edu> References: <200612041439.39225.dd55@cornell.edu> <4574845E.4030802@ee.byu.edu> Message-ID: <4576C0A3.1090303@iam.uni-stuttgart.de> Travis Oliphant wrote: > Darren Dale wrote: > > >> On October 25, a message was posted to scipy-dev suggesting Oct 30 as a target >> release date for scipy-0.5.2. I'm sure there are good reasons for the delay, >> but several people have asked on the lists about the availability of a scipy >> release that is compatible with numpy-1.0, and to my knowledge these >> inquiries have not been answered. Would someone knowledgeable post a status >> report concerning scipy-0.5.2 to the lists? 
I think the need for a compatible >> scipy release is causing problems for packagers who are trying to support a >> working numpy/scipy/matplotlib environment for scientific computing. >> >> >> >> > The problem is lack of help. It would appear that I and Ed Schofield > are being relied on to release SciPy and neither of us has found the > time to do it in the past month. There has been a large influx in the > number of bug-reports and feature requests and I don't think anybody has > been able to keep up. > > My feeling is we should just release 0.5.2 now and deal with bug-reports > later. But, usually that means at least getting the tests to pass. > > All tests pass with the latest svn version. I have used scipy.test(1) on 32 and 64-bit machines. Nils > In short, we need a release manager for SciPy. Anybody willing to take > up the task? > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From nwagner at iam.uni-stuttgart.de Wed Dec 6 10:45:08 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 06 Dec 2006 16:45:08 +0100 Subject: [SciPy-dev] Website down Message-ID: <4576E584.2060306@iam.uni-stuttgart.de> Hi all, I cannot access the website www.scipy.org. Nils From jstrunk at enthought.com Wed Dec 6 10:59:36 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 09:59:36 -0600 Subject: [SciPy-dev] Website down In-Reply-To: <4576E584.2060306@iam.uni-stuttgart.de> References: <4576E584.2060306@iam.uni-stuttgart.de> Message-ID: <200612060959.36996.jstrunk@enthought.com> There seems to be a problem with the SciPy moin instance getting out of control. I put up a temporary page explaining the problem. I am trying to see what is causing this problem. Sorry for the inconvenience, Jeff On Wednesday 06 December 2006 9:45 am, Nils Wagner wrote: > Hi all, > > I cannot access the website www.scipy.org. 
> > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From jstrunk at enthought.com Wed Dec 6 11:24:02 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 10:24:02 -0600 Subject: [SciPy-dev] Website down In-Reply-To: <200612060959.36996.jstrunk@enthought.com> References: <4576E584.2060306@iam.uni-stuttgart.de> <200612060959.36996.jstrunk@enthought.com> Message-ID: <200612061024.02400.jstrunk@enthought.com> I am making a static version of the moin site, and I will let y'all know when it is ready. Thanks, Jeff On Wednesday 06 December 2006 9:59 am, Jeff Strunk wrote: > There seems to be a problem with the SciPy moin instance getting out of > control. I put up a temporary page explaining the problem. > > I am trying to see what is causing this problem. > > Sorry for the inconvenience, > Jeff > > On Wednesday 06 December 2006 9:45 am, Nils Wagner wrote: > > Hi all, > > > > I cannot access the website www.scipy.org. 
> > > > Nils > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From jstrunk at enthought.com Wed Dec 6 12:01:49 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 11:01:49 -0600 Subject: [SciPy-dev] Website down In-Reply-To: <200612061024.02400.jstrunk@enthought.com> References: <4576E584.2060306@iam.uni-stuttgart.de> <200612060959.36996.jstrunk@enthought.com> <200612061024.02400.jstrunk@enthought.com> Message-ID: <200612061101.49306.jstrunk@enthought.com> I think the moin-dump command was not the right way to go, but for now you can access the text. I am working to create a rich static mirror. Thanks for your patience, Jeff On Wednesday 06 December 2006 10:24 am, Jeff Strunk wrote: > I am making a static version of the moin site, and I will let y'all know > when it is ready. > > Thanks, > Jeff > > On Wednesday 06 December 2006 9:59 am, Jeff Strunk wrote: > > There seems to be a problem with the SciPy moin instance getting out of > > control. I put up a temporary page explaining the problem. > > > > I am trying to see what is causing this problem. > > > > Sorry for the inconvenience, > > Jeff > > > > On Wednesday 06 December 2006 9:45 am, Nils Wagner wrote: > > > Hi all, > > > > > > I cannot access the website www.scipy.org. 
> > > > > > Nils > > > > > > _______________________________________________ > > > Scipy-dev mailing list > > > Scipy-dev at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From mattknox_ca at hotmail.com Wed Dec 6 14:23:32 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 6 Dec 2006 14:23:32 -0500 Subject: [SciPy-dev] sandbox vs wiki Message-ID: I noticed a post from earlier (scikits ? [was Re: Kronecker sum]) which suggests to set up a wiki page and attach the files for contributed code. When would one take that approach vs adding to the sandbox in SVN? Specifically, I'm wondering what method would be preferred for the time series module I plan to contribute shortly. - Matt From robert.kern at gmail.com Wed Dec 6 14:40:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Dec 2006 13:40:41 -0600 Subject: [SciPy-dev] sandbox vs wiki In-Reply-To: References: Message-ID: <45771CB9.4000002@gmail.com> Matt Knox wrote: > I noticed a post from earlier (scikits ? [was Re: Kronecker sum]) which suggests to set up a wiki page and attach the files for contributed code. When would one take that approach vs adding to the sandbox in SVN? Specifically, I'm wondering what method would be preferred for the time series module I plan to contribute shortly. The sandbox is preferred if it is at all possible. The reasons that it might not be possible include one not having write access to SVN, or the license of the code being inappropriate for scipy. And even then, using the wiki should be reserved for short scripts and probably for just a short time; it is not a generally good mechanism for distributing code. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jstrunk at enthought.com Wed Dec 6 16:09:29 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 15:09:29 -0600 Subject: [SciPy-dev] Website down In-Reply-To: <200612061101.49306.jstrunk@enthought.com> References: <4576E584.2060306@iam.uni-stuttgart.de> <200612061024.02400.jstrunk@enthought.com> <200612061101.49306.jstrunk@enthought.com> Message-ID: <200612061509.29833.jstrunk@enthought.com> Moin has been upgraded and is back online. Hopefully the problems we were seeing were caused by a bug in the old version. At the moment, it seems to be doing alright. I'll keep an eye on the server load. Thanks, Jeff On Wednesday 06 December 2006 11:01 am, Jeff Strunk wrote: > I think the moin-dump command was not the right way to go, but for now you > can access the text. I am working to create a rich static mirror. > > Thanks for your patience, > Jeff > > On Wednesday 06 December 2006 10:24 am, Jeff Strunk wrote: > > I am making a static version of the moin site, and I will let y'all know > > when it is ready. > > > > Thanks, > > Jeff > > > > On Wednesday 06 December 2006 9:59 am, Jeff Strunk wrote: > > > There seems to be a problem with the SciPy moin instance getting out of > > > control. I put up a temporary page explaining the problem. > > > > > > I am trying to see what is causing this problem. > > > > > > Sorry for the inconvenience, > > > Jeff > > > > > > On Wednesday 06 December 2006 9:45 am, Nils Wagner wrote: > > > > Hi all, > > > > > > > > I cannot access the website www.scipy.org. 
> > > > Nils > > > > _______________________________________________ > > > > Scipy-dev mailing list > > > > Scipy-dev at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > _______________________________________________ > > > Scipy-dev mailing list > > > Scipy-dev at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From wnbell at gmail.com Wed Dec 6 17:16:39 2006 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 6 Dec 2006 16:16:39 -0600 Subject: [SciPy-dev] sparse matrix multiplication - poor performance Message-ID: There appears to be something terribly wrong with the current implementation of sparse matrix multiplication. Consider the test below done in both MATLAB and scipy. MATLAB is [660.5872 936.8530 702.5506 650.2876] times faster for these four problems. Changing N = 100,000 puts the ratio in the neighborhood of 4000. Also, doing "B = A*A.T" takes *hundreds* of times longer than "B = A*A" or "B = A.T*A". This is more than the time required for dense matrix multiplication (try N=1000). Can anyone comment on this? Is the SPARSKIT code really that slow? Also, I made an improvement [1] to the coo_matrix.tocsr() and coo_matrix.tocsc() methods. Is there a reason it hasn't been included? I'd commit it myself, but I don't have the privileges. Who do I need to speak to about SVN commit access? FWIW I've contributed a few bug reports and some additional SWIG wrappers for UMFPACK (via Robert Cimrman). The sparse matrix functionality is especially important for my work, so I'd like to help on that front. Thanks. 
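For readers of the archive, the row-by-row CSR product that fast sparse libraries use can be sketched in pure Python. This is a toy illustration under my own naming, not SciPy's, SPARSKIT's, or MATLAB's actual code: each output row is built with a "sparse accumulator", so total work scales with the number of scalar multiplies rather than with N^2.

```python
def csr_matmul(n, Ap, Aj, Ax, Bp, Bj, Bx):
    """Toy CSR * CSR product using a sparse accumulator (SPA).

    (Ap, Aj, Ax) and (Bp, Bj, Bx) are the CSR row pointers, column
    indices and values of two n-by-n matrices; returns the CSR
    arrays of A*B. Work is proportional to the number of scalar
    multiplies performed, which is why well-implemented sparse
    products scale with nnz rather than n^2.
    """
    Cp, Cj, Cx = [0], [], []
    spa = {}  # column -> accumulated value for the current output row
    for i in range(n):
        spa.clear()
        for k in range(Ap[i], Ap[i + 1]):          # nonzeros A[i, j]
            a, j = Ax[k], Aj[k]
            for l in range(Bp[j], Bp[j + 1]):      # nonzeros B[j, :]
                spa[Bj[l]] = spa.get(Bj[l], 0.0) + a * Bx[l]
        for col in sorted(spa):                    # emit row i of C
            Cj.append(col)
            Cx.append(spa[col])
        Cp.append(len(Cj))
    return Cp, Cj, Cx

# A 3x3 identity times itself stays the identity:
Ip, Ij, Ix = [0, 1, 2, 3], [0, 1, 2], [1.0, 1.0, 1.0]
print(csr_matmul(3, Ip, Ij, Ix, Ip, Ij, Ix))
# -> ([0, 1, 2, 3], [0, 1, 2], [1.0, 1.0, 1.0])
```

The dict here stands in for the dense work array real implementations use; the point is only that multiplying the N-by-N identity by itself should cost O(N), not minutes, which frames the timing gap reported above.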
[1] http://projects.scipy.org/scipy/scipy/ticket/326 --------------------MATLAB CODE-------------------- % 10K identity N = 10*1000; A = speye(N); tic; B = A'*A; toc; tic; B = A*A; toc; % 10K tridiagonal M = repmat((1:N)',1,3); A = spdiags(M,[-1 0 1],N,N); tic; B = A'*A; toc; tic; B = A*A; toc; --------------------MATLAB OUTPUT-------------------- Elapsed time is 0.004496 seconds. Elapsed time is 0.001932 seconds. Elapsed time is 0.011017 seconds. Elapsed time is 0.010257 seconds. --------------------SCIPY CODE-------------------- from scipy import * import time N = 10*1000 A = sparse.speye(N,N) start = time.clock() B = A.T*A print time.clock() - start start = time.clock() B = A*A print time.clock() - start A = sparse.spdiags(rand(3,N),[-1,0,1],N,N) start = time.clock() B = A.T*A print time.clock() - start start = time.clock() B = A*A print time.clock() - start --------------------SCIPY OUTPUT-------------------- 2.97 1.81 7.74 6.67 --------------------------------------------------------------- -- Nathan Bell wnbell at gmail.com From guyer at nist.gov Wed Dec 6 19:41:59 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 6 Dec 2006 19:41:59 -0500 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: Message-ID: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> On Dec 6, 2006, at 5:16 PM, Nathan Bell wrote: > There appears to be something terribly wrong with the current > implementation of sparse matrix multiplication. Consider the test > below done in both MATLAB and scipy. > > MATLAB is [660.5872 936.8530 702.5506 650.2876] times faster for > these four problems. Changing N = 100,000 puts the ratio in the > neighborhood of 4000. > > Also, doing "B = A*A.T" takes *hundreds* of times longer than "B = > A*A" or "B = A.T*A". This is more than the time required for dense > matrix multiplication (try N=1000). > > Can anyone comment on this? Is the SPARSKIT code really that slow? 
I don't know the reason for it, but I can verify the results (the scipy side of it, anyway). About a year ago, I posted some benchmarking results to , the upshot of which was that scipy.sparse was much slower than PySparse. I have no idea where that wiki page is since the site moved, though. On my 1.67 GHz PowerBook, scipy gives --------------------SCIPY OUTPUT-------------------- 5.61 1.95 28.74 5.64 whereas the PySparse equivalent is comparable to your MATLAB results: -------------------PYSPARSE CODE-------------------- import spmatrix from scipy import * from Numeric import * import time N = 10*1000 A = spmatrix.ll_mat(N, N, N) ids = arange(N) A.put(ones(N), ids, ids) start = time.clock() B = spmatrix.dot(A, A) print time.clock() - start start = time.clock() B = spmatrix.matrixmultiply(A, A) print time.clock() - start A = spmatrix.ll_mat(N, N, 3*N) ids1 = array((arange(N)-1, arange(N), arange(N)+1)) ids2 = array((arange(N), arange(N), arange(N))) A.put(array(rand(3,N)).flat, ids1.flat, ids2.flat) start = time.clock() B = spmatrix.dot(A, A) print time.clock() - start start = time.clock() B = spmatrix.matrixmultiply(A, A) print time.clock() - start ------------------PYSPARSE OUTPUT------------------- 0.0 0.0 0.01 0.02 From fperez.net at gmail.com Wed Dec 6 21:39:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 6 Dec 2006 19:39:05 -0700 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: Message-ID: Hi Scipy-dev, I'm forwarding this from William Stein, the lead developer of SAGE (http://modular.math.washington.edu/sage/). William, I won't have any email access until next Monday, but hopefully others may pitch in. It's probably also worth mentioning these two pages: http://scipy.org/Numpy_Example_List_With_Doc http://www.hjcb.nl/python/Arrays.html Their contents may serve as the starting point for material for docstring examples (this has been suggested several times before, it just hasn't happened). 
Cheers, f ---------- Forwarded message ---------- From: William Stein Date: Dec 5, 2006 11:41 PM Subject: [sage-devel] numpy in SAGE, etc. To: "sage-devel at googlegroups.com" Cc: "Fernando.Perez at colorado.edu" , oliphant at ee.byu.edu, pearu at cens.ioc.ee Hello, In case you don't know, SAGE-1.5 (http://sage.math.washington.edu/sage) will include numpy by default. Inclusion of the scipy distribution might not be far off either. We will also definitely continue to include GSL in SAGE and develop its unique functionality. I want SAGE to develop into a truly viable alternative to MATLAB (in addition to everything else it is), and it's clear to me that numpy/scipy/vtk/mayavi/gsl are crucial pieces of software if there is any hope of success. Anyway, to the first point of this email. I want to try some functions in numpy, so I type numpy.[tab], then say numpy.array? and I see some _minimal_ documentation but ABSOLUTELY NO EXAMPLES. The same is true for tons of the functions in numpy (and Numeric), and even Python for that matter. Anyway, this is simply *not* up to snuff for what is needed for SAGE. For SAGE my goal is that every mathematical function in the system is illustrated by examples that the user can paste into the interpreter and have work (and moreover, they are autotested). Currently there are about 12000 lines of such input already. I think this is unrelated to the whole issue with the official numpy documentation being commercial. Given the extremely limited number of examples in Numeric, numpy, and the official Python docs, it must be a conscious design decision to *not* have lots and lots of doctests. In SAGE, often files have way more docs and doctests than actual code -- again this is a design decision. The question, then, is what to do if numpy is to be included in SAGE in a way that satisfies our design goals? 
Some options include: (1) The file numpy/add_newdocs.py in the numpy distribution defines somehow docstrings for a lot of the numpy constructors. A SAGE developer could simply add tons of examples to this file, based on playing around, and reading the numpy book to learn what is relevant to illustrate. As each numpy distribution is released, we would *merge* this file with the one in the new numpy distribution (e.g., very easily using Mercurial). (2) You might think it would be possible to change the docstrings at runtime, but I think they may be hardcoded in (many are for code defined in extension classes). OK, so I don't have many options. Thoughts? Does anybody want to help? Any person who wants to learn numpy could probably easily write these examples along the way. Instead of just learning numpy, you could more systematically learn numpy and at the same time contribute tons of useful doctests. And finally, am I just wrong -- would Travis, etc., want these docstrings with tons of examples? Travis -- since I cc'd you, maybe you can just answer. I can completely understand if you don't want tons of doctests; it's fine if your design goals are different. By the way, SAGE Days 3 is in LA at IPAM Feb 17-21, and I hope both Fernando and Travis will consider coming. Some travel funding is available. By the way, most of the remarks above also apply to Networkx -- its docs seem to have almost no examples. Actually, I don't think I know of *any* Python packages that do have much in the way of examples in the docstrings, at least nothing on the order of SAGE. ------------------------- Here's the official statement about the scipy module documentation, in the DEVELOPERS.txt file of the scipy distribution: "Currently there are * A SciPy tutorial by Travis E. Oliphant. This is maintained using LyX. The main advantage of this approach is that one can use mathematical formulas in documentation. 
* I (Pearu) have used reStructuredText formatted .txt files to document various bits of software. This is mainly because ``docutils`` might become a standard tool to document Python modules. The disadvantage is that it does not support mathematical formulas (though, we might add this feature ourself using e.g. LaTeX syntax). * Various text files with almost no formatting and mostly badly outdated. * Documentation strings of Python functions, classes, and modules. Some SciPy modules are well-documented in this sense, others are very poorly documented. Another issue is that there is no consensus on how to format documentation strings, mainly because we haven't decided which tool to use to generate, for instance, HTML pages of documentation strings." -------- So evidently the scipy people have for some reason not even decided on a format for their documentation, which is partly why they don't have it. For the record, in SAGE there is a very precise documentation format that is systematically and uniformly applied throughout the system: * We liberally use latex in the docstrings. I wrote a Python function that preparses this to make it human readable, when one types foo? * The format of each docstring is as follows: function header """ 1-2 sentences summarizing what the function does. INPUT: var1 -- type, defaults, what it is var2 -- ... OUTPUT: description of output var or vars (if tuple) EXAMPLES: a *bunch* of examples, often a whole page. NOTES: misc other notes ALGORITHM: notes about the implementation or algorithm, if applicable AUTHORS: -- name (date): notes about what was done """ The INPUT and OUTPUT blocks are typeset as a latex verbatim environment. The rest is typeset using normal latex. It's good to use the itemize environment when necessary for lists, also. Essentially all the documentation in SAGE has exactly this format. VERY often I rewrite documentation for code people send me so that it is formatted as above. 
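A hypothetical function written against the docstring layout just described might look like the following; the function name and its contents are illustrative only, not taken from SAGE:

```python
def kronecker_delta(i, j):
    """
    Return the Kronecker delta of i and j.

    INPUT:
        i -- integer
        j -- integer

    OUTPUT:
        integer -- 1 if i == j, otherwise 0

    EXAMPLES:
        >>> kronecker_delta(2, 2)
        1
        >>> kronecker_delta(2, 3)
        0

    NOTES:
        Each section mirrors the layout quoted in the message above;
        the EXAMPLES double as pasteable, autotestable doctests.

    AUTHORS:
        -- illustrative placeholder, not an actual SAGE author entry
    """
    # int() of a boolean gives exactly the 0/1 the delta requires.
    return int(i == j)

print(kronecker_delta(2, 2))  # 1
print(kronecker_delta(2, 3))  # 0
```

The EXAMPLES block is what makes this style autotestable: a doctest runner can execute the `>>>` lines and compare output, which is the workflow the message advocates.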
(So if you send me code, please format it exactly as above!!!) The SAGE reference manual is autogenerated from the source code using a script I wrote that basically extracts the documentation (mainly using Python introspection), and puts it in the same format as the standard Python documentation suite, then runs the standard Python documentation tools on it. Note, however, that the structure of the reference manual is laid out by hand (i.e., order of chapters, some heading text, etc.), which is I think very important. I suggest scipy consider a similar strategy. Hey, I just got: "Successfully installed scipy-2006-12-05 Now cleaning up tmp files." on my MacBookPro, after installing the new Intel fortran compiler. -- William --~--~---------~--~----~------------~-------~--~----~ To post to this group, send email to sage-devel at googlegroups.com To unsubscribe from this group, send email to sage-devel-unsubscribe at googlegroups.com For more options, visit this group at http://groups.google.com/group/sage-devel URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/ -~----------~----~----~----~------~----~------~--~--- From dahl.joachim at gmail.com Thu Dec 7 03:22:41 2006 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 7 Dec 2006 09:22:41 +0100 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> Message-ID: <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> Can this be because the matrices are very simple and not stored in CCS or CRS format in matlab? For matrices with very simple structures, it probably makes sense to keep track of that, e.g., to precompute the sparsity pattern of the product. What happens in your tests for matrices with random sparsity patterns? This surprised me: >> N=10000; >> A=speye(N); >> tic, B=A*A; toc Elapsed time is 0.001630 seconds. 
>> tic, B=3*A; toc Elapsed time is 2.649450 seconds. Joachim On 12/7/06, Jonathan Guyer wrote: > > > On Dec 6, 2006, at 5:16 PM, Nathan Bell wrote: > > > There appears to be something terribly wrong with the current > > implementation of sparse matrix multiplication. Consider the test > > below done in both MATLAB and scipy. > > > > MATLAB is [660.5872 936.8530 702.5506 650.2876] times faster for > > these four problems. Changing N = 100,000 puts the ratio in the > > neighborhood of 4000. > > > > Also, doing "B = A*A.T" takes *hundreds* of times longer than "B = > > A*A" or "B = A.T*A". This is more than the time required for dense > > matrix multiplication (try N=1000). > > > > Can anyone comment on this? Is the SPARSKIT code really that slow? > > I don't know the reason for it, but I can verify the results (the > scipy side of it, anyway). About a year ago, I posted some > benchmarking results to a SparseSolvers wiki page, the upshot of which was that scipy.sparse was much > slower than PySparse. I have no idea where that wiki page is since > the site moved, though.
> > > On my 1.67 GHz PowerBook, scipy gives > > --------------------SCIPY OUTPUT-------------------- > > 5.61 > 1.95 > 28.74 > 5.64 > > whereas the PySparse equivalent is comparable to your MATLAB results: > > -------------------PYSPARSE CODE-------------------- > > import spmatrix > from scipy import * > from Numeric import * > import time > N = 10*1000 > > A = spmatrix.ll_mat(N, N, N) > ids = arange(N) > A.put(ones(N), ids, ids) > start = time.clock() > B = spmatrix.dot(A, A) > print time.clock() - start > start = time.clock() > B = spmatrix.matrixmultiply(A, A) > print time.clock() - start > > > A = spmatrix.ll_mat(N, N, 3*N) > ids1 = array((arange(N)-1, arange(N), arange(N)+1)) > ids2 = array((arange(N), arange(N), arange(N))) > A.put(array(rand(3,N)).flat, ids1.flat, ids2.flat) > start = time.clock() > B = spmatrix.dot(A, A) > print time.clock() - start > start = time.clock() > B = spmatrix.matrixmultiply(A, A) > print time.clock() - start > > ------------------PYSPARSE OUTPUT------------------- > > 0.0 > 0.0 > 0.01 > 0.02 > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Dec 7 06:48:57 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Dec 2006 12:48:57 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4576BC1A.6080902@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> Message-ID: <4577FFA9.8030903@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: >> Nils Wagner wrote: >> >>> I have replaced the commas by colons in my site.cfg. 
>>> >>> [DEFAULT] >>> library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 >>> include_dirs = /usr/include:/usr/local/include >>> >>> [amd] >>> library_dirs = /usr/local/src/AMD/Lib >>> include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig >>> amd_libs = amd >>> >>> [umfpack] >>> library_dirs = /usr/local/src/UMFPACK/Lib >>> include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig >>> umfpack_libs = umfpack >>> >>> Now I get >>> ... >>> creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve >>> creating >>> build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack >>> compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H >>> -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include >>> -I/usr/local/src/AMD/Include >>> -I/usr/local/lib64/python2.4/site-packages/numpy/core/include >>> -I/usr/include/python2.4 -c' >>> >> -I/usr/local/src/UFconfig is missing here, strange. Could you try to >> copy or link manually UFconfig.h into e.g. /usr/local/src/UMFPACK/Include? >> >> > I have put a copy of UFconfig.h into > > /usr/local/src/UMFPACK/Include. > > locate UFconfig.h > > /usr/local/src/UFconfig/UFconfig.h > /usr/local/src/UMFPACK/Include/UFconfig.h > > > With that I was able to install scipy. But scipy.test(1) yields > > Found 4 tests for scipy.io.recaster > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/umfpack.pyc'> > /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) > Warning: FAILURE importing tests for from '.../linsolve/umfpack/__init__.pyc'> > /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) > Found 4 tests for scipy.optimize.zeros > > Any idea ? I am puzzled. It might be a 64-bit issue? 
Could you send me the relevant part of compilation output? r. From nwagner at iam.uni-stuttgart.de Thu Dec 7 07:00:44 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 07 Dec 2006 13:00:44 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4577FFA9.8030903@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> Message-ID: <4578026C.2070708@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Robert Cimrman wrote: >> >>> Nils Wagner wrote: >>> >>> >>>> I have replaced the commas by colons in my site.cfg. >>>> >>>> [DEFAULT] >>>> library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 >>>> include_dirs = /usr/include:/usr/local/include >>>> >>>> [amd] >>>> library_dirs = /usr/local/src/AMD/Lib >>>> include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig >>>> amd_libs = amd >>>> >>>> [umfpack] >>>> library_dirs = /usr/local/src/UMFPACK/Lib >>>> include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig >>>> umfpack_libs = umfpack >>>> >>>> Now I get >>>> ... >>>> creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve >>>> creating >>>> build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/linsolve/umfpack >>>> compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H >>>> -DATLAS_INFO="\"3.7.11\"" -I/usr/local/src/UMFPACK/Include >>>> -I/usr/local/src/AMD/Include >>>> -I/usr/local/lib64/python2.4/site-packages/numpy/core/include >>>> -I/usr/include/python2.4 -c' >>>> >>>> >>> -I/usr/local/src/UFconfig is missing here, strange. Could you try to >>> copy or link manually UFconfig.h into e.g. /usr/local/src/UMFPACK/Include? >>> >>> >>> >> I have put a copy of UFconfig.h into >> >> /usr/local/src/UMFPACK/Include. 
>> >> locate UFconfig.h >> >> /usr/local/src/UFconfig/UFconfig.h >> /usr/local/src/UMFPACK/Include/UFconfig.h >> >> >> With that I was able to install scipy. But scipy.test(1) yields >> >> Found 4 tests for scipy.io.recaster >> Warning: FAILURE importing tests for > 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/umfpack.pyc'> >> /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: >> AttributeError: 'module' object has no attribute 'umfpack' (in ?) >> Warning: FAILURE importing tests for > from '.../linsolve/umfpack/__init__.pyc'> >> /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: >> AttributeError: 'module' object has no attribute 'umfpack' (in ?) >> Found 4 tests for scipy.optimize.zeros >> >> Any idea ? >> > > I am puzzled. It might be a 64-bit issue? Could you send me the relevant > part of compilation output? > How can I redirect the output of python setup.py install into a file ? Nils > r. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From cimrman3 at ntc.zcu.cz Thu Dec 7 07:05:51 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Dec 2006 13:05:51 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4578026C.2070708@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> Message-ID: <4578039F.907@ntc.zcu.cz> Nils Wagner wrote: >> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >> part of compilation output? >> > How can I redirect the output of python setup.py install into a file ? 
python setup.py install &> out.txt (using bash shell, both standard output and standard error are redirected) or just python setup.py install > out.txt (standard output only) r. From nwagner at iam.uni-stuttgart.de Thu Dec 7 07:15:38 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 07 Dec 2006 13:15:38 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4578039F.907@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> Message-ID: <457805EA.3060405@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >>> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >>> part of compilation output? >>> >>> >> How can I redirect the output of python setup.py install into a file ? >> > > python setup.py install &> out.txt (using bash shell, both standard > output and standard error are redirected) > or just > python setup.py install > out.txt (standard output only) > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Ok I will send you out.txt off-list. Cheers Nils From cimrman3 at ntc.zcu.cz Thu Dec 7 07:41:27 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Dec 2006 13:41:27 +0100 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: Message-ID: <45780BF7.5040109@ntc.zcu.cz> Nathan Bell wrote: > There appears to be something terribly wrong with the current > implementation of sparse matrix multiplication. Consider the test > below done in both MATLAB and scipy. > > MATLAB is [660.5872 936.8530 702.5506 650.2876] times faster for
Changing N = 100,000 puts the ratio in the > neighborhood of 4000. > > Also, doing "B = A*A.T" takes *hundreds* of times longer than "B = > A*A" or "B = A.T*A". This is more than the time required for dense > matrix multiplication (try N=1000). > > Can anyone comment on this? Is the SPARSKIT code really that slow? I use sparse matrices only for solving large linear systems (-> don't need multiplication), but I would bet that, apart from sparskit speed, a suboptimal code path is taken. In any case, A_csr * B_csc (or A_csr * B_csr.T) should be very fast (in theory), since the sparsity data are in the "right" order. > Also, I made an improvement [1] to the coo_matrix.tocsr() and > coo_matrix.tocsc() methods. Is there a reason it hasn't been > included? I'd commit it myself, but I don't have the privileges. > Who do I need to speak to about SVN commit access? FWIW I've > contributed a few bug reports and some additional SWIG wrappers for > UMFPACK (via Robert Cimrman). The sparse matrix functionality is > especially important for my work, so I'd like to help on that front. Your help is very much appreciated. While debugging, could you determine which code paths are really taken in the cases csr * csc, csr.T * csc, csr * csr, ... - some speed-up might be gained on the Python level, imho. If it is not enough, a sparskit replacement should be considered. Ed Schofield (and Travis O.) might know more? > [1] http://projects.scipy.org/scipy/scipy/ticket/326 I have merged your changes, but try to ask the administrator (Jeff Strunk?) to get SVN write access. r. From gruben at bigpond.net.au Thu Dec 7 08:19:07 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 08 Dec 2006 00:19:07 +1100 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: Message-ID: <457814CB.80307@bigpond.net.au> I suspect William is right about the lack of a standard format hindering the documentation. I hope something like his format can be adopted.
I'd also specify some way of delineating examples so that they can be preprocessed and run as part of the unit tests. SAGE uses matplotlib and it's currently a de facto standard. Could it be adopted in examples somehow, with any preprocessor perhaps skipping examples which contained visual examples or modifying them to suppress plot windows appearing? Maybe a standard way to provide references to example code outside a docstring should also be defined so that examples covering multiple functions can be referenced from several places. I suspect this sort of thing would require breaking away from the standard docstring idea. As I've mentioned previously, this would also allow doing things like running the example code interactively. Another thing that William's email doesn't cover is non-function documentation; it might be worth defining a similar documentation format for numpy classes and types. Finally, should the NOTES: section be used for "see also" tags or should a SEE ALSO: section be added? Gary R. > (2) You might think it would be possible to change the docstrings > at runtime, but I think they may be hardcoded in (many are for > code defined in extension classes). > > OK, so I don't have many options. Thoughts? Does anybody > want to help? Any person who wants to learn numpy could > probably easily write these examples along the way. Instead of > just learning numpy, you could more systematically learn numpy and > at the same time contribute tons of useful doctests. > So evidently the scipy people have for some reason not even decided > on a format for their documentation, which is partly why they don't > have it. For the record, in SAGE there is a very precise documentation > format that is systematically and uniformly applied throughout the > system: > > * We liberally use latex in the docstrings. I wrote a Python function > that preparses this to make it human readable, when one types > foo? 
> > * The format of each docstring is as follows: > function header > """ > 1-2 sentences summarizing what the function does. > > INPUT: > var1 -- type, defaults, what it is > var2 -- ... > OUTPUT: > description of output var or vars (if tuple) > > EXAMPLES: > a *bunch* of examples, often a whole page. > > NOTES: > misc other notes > > ALGORITHM: > notes about the implementation or algorithm, if applicable > > AUTHORS: > -- name (date): notes about what was done > """ > > The INPUT and OUTPUT blocks are typeset as a latex verbatim > environment. The rest is typeset using normal latex. > It's good to use the itemize environment when necessary > for lists, also. From wnbell at gmail.com Thu Dec 7 08:22:58 2006 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 7 Dec 2006 07:22:58 -0600 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> Message-ID: On 12/7/06, Joachim Dahl wrote: > Can this be because the matrices are very simple and not stored in CCS or > CRS > format in matlab? For matrices with very simple structures, it probably > makes sense > to keep track of that, e.g., to precompute the sparsity pattern of the > product. > > What happens in your tests for matrices with random sparsity patterns? That's a possible concern - perhaps MATLAB treats k-diagonal matrices in a special manner? However, my experience suggests that this is not the cause - it's not that MATLAB is necessarily fast, just that scipy is unacceptably slow. Below is a test using a random symmetric matrix. Since the matrix is symmetric, it's not necessary to test CSR.T and CSC.T since those are just CSC and CSR respectively. Note that the CSC*CSR _is_ a bad code path (as Robert suggested there might be). However, even in the best of cases, Scipy is still much much slower. 
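For reference, the standard row-merge algorithm for C = A*B in CSR format (the scheme that SMMP, mentioned later in this thread, implements in Fortran) does work proportional to the number of scalar multiplications rather than to N^2. A rough pure-Python sketch of the idea, not scipy's actual code:

```python
def csr_matmat(n, Ap, Aj, Ax, Bp, Bj, Bx):
    """Compute C = A*B for n-by-n CSR matrices (Ap/Aj/Ax are the
    row-pointer, column-index, and value arrays).  A sparse
    accumulator (the `marker` array) keeps the cost proportional
    to the multiplications actually performed, not to n**2."""
    Cp, Cj, Cx = [0], [], []
    sums = [0.0] * n       # dense accumulator for the current row
    marker = [-1] * n      # marker[k] == i  <=>  column k seen in row i
    for i in range(n):
        cols = []
        for jj in range(Ap[i], Ap[i + 1]):
            j, v = Aj[jj], Ax[jj]
            for kk in range(Bp[j], Bp[j + 1]):
                k = Bj[kk]
                if marker[k] != i:
                    marker[k] = i
                    cols.append(k)
                    sums[k] = v * Bx[kk]
                else:
                    sums[k] += v * Bx[kk]
        for k in sorted(cols):
            Cj.append(k)
            Cx.append(sums[k])
        Cp.append(len(Cj))
    return Cp, Cj, Cx
```

SMMP's symbolic/numeric phases amount to essentially this accumulator scheme; an implementation that instead scans whole rows or columns per output entry is what produces the O(N^2) behavior discussed here.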
I thought the resizing might be the culprit, but that doesn't appear to be the case since CSC*CSC also resizes, but doesn't take as long as CSC*CSR. --------------------MATLAB CODE-------------------- N = 1000; A = speye(N); for i=1:N for j=1:5 A(i,floor(N*rand)+1) = 1; end end S = A+A'; nnz(A) tic; B = A*A; toc; tic; B = A'*A; toc; tic; B = A*A'; toc; --------------------MATLAB OUTPUT-------------------- ans = 5987 Elapsed time is 0.024908 seconds. Elapsed time is 0.032202 seconds. Elapsed time is 0.025415 seconds. --------------------SCIPY CODE-------------------- from scipy import * import time N = 1000 A = sparse.lil_matrix((N,N)) for i in range(N): A[i,i] = 1 for j in range(5): A[i,int(floor(N*rand()))] = 1 A = A.tocsr() S = A+A.T #make symmetric CSR = S.tocsr() CSC = S.tocsc() print "nnz S ",S.nnz start = time.clock() B = CSR*CSR print "CSR CSR = ",time.clock() - start start = time.clock() B = CSC*CSC print "CSC CSC = ",time.clock() - start start = time.clock() B = CSR*CSC print "CSR CSC = ",time.clock() - start start = time.clock() B = CSC*CSR print "CSC CSR = ",time.clock() - start --------------------SCIPY OUTPUT-------------------- nnz S 10960 CSR CSR = 0.8 Resizing... 217 381 21920 Resizing... 386 855 39076 Resizing... 630 536 63036 Resizing... 857 396 86326 Resizing... 980 171 98637 Resizing... 998 836 100593 Resizing... 999 878 100711 Resizing... 999 987 100724 CSC CSC = 0.58 CSR CSC = 0.61 Resizing... 217 381 21920 Resizing... 386 855 39076 Resizing... 630 536 63036 Resizing... 857 396 86326 Resizing... 980 171 98637 Resizing... 998 836 100593 Resizing... 999 878 100711 Resizing... 999 987 100724 CSC CSR = 84.05 -------------------- END OUTPUT-------------------- > This surprised me: > >> N=10000; > >> A=speye(N); > >> tic, B=A*A; toc > Elapsed time is 0.001630 seconds. > >> tic, B=3*A; toc > Elapsed time is 2.649450 seconds. That appears to be an aberrant result, I get: >> A = speye(10000); >> tic; B = A*A; toc; Elapsed time is 0.003558 seconds. 
>> tic; B = 3*A; toc; Elapsed time is 0.020551 seconds. >> A = speye(10000); >> tic; B = A*A; toc; Elapsed time is 0.003411 seconds. >> tic; B = 3*A; toc; Elapsed time is 0.000564 seconds. Can you reproduce your results? -- Nathan Bell wnbell at gmail.com From guyer at nist.gov Thu Dec 7 09:10:34 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Thu, 7 Dec 2006 09:10:34 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <457814CB.80307@bigpond.net.au> References: <457814CB.80307@bigpond.net.au> Message-ID: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> On Dec 7, 2006, at 8:19 AM, Gary Ruben wrote: > I suspect William is right about the lack of a standard format > hindering > the documentation. I hope something like his format can be adopted. I'm all for a standard format (and certainly all for encouraging documentation (although I'm a bit skeptical that agreeing on a format is suddenly going to open the floodgates of heretofore amorphous documentation that the SciPy community has been hoarding to themselves)). It's a shame that he felt the need to invent YADMUS (Yet Another Docstring MarkUp Specification), though. Python has too many systems for this already. > I'd > also specify some way of delineating examples so that they can be > preprocessed and run as part of the unit tests. You mean like doctest? From dahl.joachim at gmail.com Thu Dec 7 10:14:52 2006 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 7 Dec 2006 16:14:52 +0100 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> Message-ID: <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> > That appears to be an aberrant result, I get: > > >> A = speye(10000); > >> tic; B = A*A; toc; > Elapsed time is 0.003558 seconds. > >> tic; B = 3*A; toc; > Elapsed time is 0.020551 seconds. 
> >> A = speye(10000); > >> tic; B = A*A; toc; > Elapsed time is 0.003411 seconds. > >> tic; B = 3*A; toc; > Elapsed time is 0.000564 seconds. > > Can you reproduce your results? yes, this happens consistently for me with Matlab 7.2.0.294 on a Linux P4 computer. -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kamrik at gmail.com Thu Dec 7 11:40:58 2006 From: kamrik at gmail.com (Mark Koudritsky) Date: Thu, 7 Dec 2006 18:40:58 +0200 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> Message-ID: Even though I've recently suggested the following idea on this mailing list and got a rather pessimistic response, I'll try it once again. I believe it's relatively easy to create a web site with one page per docstringable entity in the sources (function, class, module ?). This page can either be freely editable by the readers in a wiki style or can have the option to add comments at the bottom (which would be integrated into the main text by an editor once in a while; MySQL's online docs work this way). The content of this page will be injected (preferably by an automatic script) as the docstring into the appropriate function before releasing a new build of the package (this last stage is the only non-trivial one). One argument against wiki-style documentation was that it tends to get messy. I believe that if the system suggests (rather than imposes) some structure, the mess could be avoided. For example the basic structure of page per entity + some standard template for the page (like the one suggested by William) would be sufficient to maintain a reasonable degree of order.
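A minimal sketch of the injection step Mark describes (a runtime flavor for illustration only; `fetch_page` is a hypothetical callable returning the wiki text for a dotted name, or None, and a real script would rewrite the source tree before a build, since extension-module docstrings generally cannot be reassigned at runtime):

```python
import inspect

def inject_docstrings(module, fetch_page):
    # Replace each public function's docstring with its wiki page text.
    # `fetch_page` is assumed to map a key like "scipy.foo.bar" to the
    # page contents, or None if no page exists (hypothetical interface).
    for name, obj in vars(module).items():
        if name.startswith('_') or not inspect.isfunction(obj):
            continue
        text = fetch_page('%s.%s' % (module.__name__, name))
        if text:
            obj.__doc__ = text
```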
From aisaac at american.edu Thu Dec 7 12:09:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 7 Dec 2006 12:09:10 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> Message-ID: On Thu, 7 Dec 2006, Mark Koudritsky apparently wrote: > I believe it's relatively easy to create a web site with one page per > docstringable entity in the sources (function, class, module ?). Sorry if this is orthogonal to your interests, but just wanted to make sure you knew about http://www.scipy.org/Numpy_Example_List_With_Doc Cheers, Alan Isaac From kamrik at gmail.com Thu Dec 7 12:55:19 2006 From: kamrik at gmail.com (Mark Koudritsky) Date: Thu, 7 Dec 2006 19:55:19 +0200 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> Message-ID: On 12/7/06, Alan G Isaac wrote: > Sorry if this is orthogonal to your interests, > but just wanted to make sure you knew about > http://www.scipy.org/Numpy_Example_List_With_Doc I had seen it only briefly before, but now that you've pointed it out, I realize that it covers most of what I talked about in my previous message, thanks. Should we maybe add a paragraph on the main documentation page instructing people that the "Numpy Example List" is the place to contribute function documentation? Many people don't know about it (I didn't). Maybe it should even be renamed to something like "Numpy functions reference"?
From wnbell at gmail.com Thu Dec 7 17:23:42 2006 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 7 Dec 2006 16:23:42 -0600 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> Message-ID: On 12/7/06, Joachim Dahl wrote: > yes, this happens consistently for me with Matlab 7.2.0.294 on a Linux P4 > computer. I suggest you ask for your money back then :) That should always be an O(N) operation, no matter how convoluted the sparse storage format. Anyway I've done some digging and I think the following are true:
  SPARSKIT is not actually used, although it is mentioned
  sparsetools is what actually computes the matrix matrix products
-- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Thu Dec 7 17:33:04 2006 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 7 Dec 2006 16:33:04 -0600 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> Message-ID: On 12/7/06, Nathan Bell wrote: On 12/7/06, Joachim Dahl wrote: > yes, this happens consistently for me with Matlab 7.2.0.294 on a Linux P4 > computer. I suggest you ask for your money back then :) That should always be an O(N) operation, no matter how convoluted the sparse storage format. Anyway I've done some digging and I think the following are true:
  SPARSKIT is not actually used, although it is mentioned
  sparsetools is what actually computes the matrix matrix products
  sparsetools does an O(N^2) (in time) operation for sparse matrix multiplies[1]
I don't know Fortran so I can't say the last for certain.
My timing results certainly suggest sub-optimal complexity. Can anyone confirm this? Should we look to alternatives such as SPARSKIT [2] or SMMP and the like[3]? SPARSKIT seems to provide a number of important utility routines that are also currently slow (e.g. sparse matrix format conversions). [1] http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sparse/sparsetools/spblas.f.src [2] http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html [3] http://www.mgnet.org/~douglas/ccd-codes.html -- Nathan Bell wnbell at gmail.com From oliphant at ee.byu.edu Thu Dec 7 17:35:33 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 07 Dec 2006 15:35:33 -0700 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> Message-ID: <45789735.309@ee.byu.edu> Nathan Bell wrote: >On 12/7/06, Nathan Bell wrote: >On 12/7/06, Joachim Dahl wrote: > > >> yes, this happens consistently for me with Matlab 7.2.0.294 on a Linux P4 >>computer. >> >> > >I suggest you ask for your money back then :) That should always be >an O(N) operation, no matter how convoluted the sparse storage format. > > >Anyway I've done some digging and I think the following are true: > SPARSKIT is not actually used, although it is mentioned > sparsetools is what actually computes the matrix matrix products > sparsetools does an O(N^2) (in time) operation for sparse matrix multiplies[1] > >I don't know Fortran so I can't say the last for certain. My timing >results certainly suggest sub-optimal complexity. > > >Can anyone confirm this? Should we look to alternatives such as >SPARSKIT [2] or SMMP and the like[3]? SPARSKIT seems to provide a >number of important utility routines that are also currently slow >(e.g. sparse matrix format conversions). 
> > The licensing for SPARSKIT is (was) not suitable for use in SciPy. I have used it in the past but couldn't because of licensing issues. -Travis From david at ar.media.kyoto-u.ac.jp Fri Dec 8 03:27:42 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Dec 2006 17:27:42 +0900 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> Message-ID: <457921FE.50708@ar.media.kyoto-u.ac.jp> >> I'd >> also specify some way of delineating examples so that they can be >> preprocessed and run as part of the unit tests. > > You mean like doctest? Ha, thank you for this information, I always wondered if this was possible. Is there a preferred way to do that for unit testing ? David From guyer at nist.gov Fri Dec 8 09:37:10 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 8 Dec 2006 09:37:10 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <457921FE.50708@ar.media.kyoto-u.ac.jp> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457921FE.50708@ar.media.kyoto-u.ac.jp> Message-ID: <7C9EAD87-8A16-4E7C-A628-233EEE452C02@nist.gov> On Dec 8, 2006, at 3:27 AM, David Cournapeau wrote: > >>> I'd >>> also specify some way of delineating examples so that they can be >>> preprocessed and run as part of the unit tests. >> >> You mean like doctest? > Ha, thank you for this information, I always wondered if this was > possible. Possible and really desirable. We've pretty much gotten rid of all our other unittests. doctests (including our full example scripts) cover all the things our old unittest suites did, and a great deal more, and they're a lot easier to write, read, and maintain. > Is there a preferred way to do that for unit testing ? I'm not sure exactly what you mean by this. 
If you mean, how do you go about integrating doctest with unittest, then the work's already done for you. doctest.DocTestSuite(module) returns a TestSuite suitable for unittest. FiPy adds a bunch of infrastructure to delay the actual imports when we test our entire codebase. I'll need to go back and see if this is still necessary, but we found that import failures in one or another non-essential part of FiPy would cause the entire test suite to fail to build, rather than just reporting test failures on that one bit. We may not need this, anymore, though, because those failed imports were biting us in other circumstances, so we've moved the imports of optional modules into the functions where they're actually invoked, instead of at the head of their modules. Regardless, we're happy to share any elements of our test harness that seem useful to SciPy. From aisaac at american.edu Fri Dec 8 10:32:00 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 8 Dec 2006 10:32:00 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <7C9EAD87-8A16-4E7C-A628-233EEE452C02@nist.gov> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457921FE.50708@ar.media.kyoto-u.ac.jp><7C9EAD87-8A16-4E7C-A628-233EEE452C02@nist.gov> Message-ID: On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: > doctests (including our full example scripts) > cover all the things our old unittest suites did, and a great deal > more, and they're a lot easier to write, read, and maintain. Would you agree with the assessment here (at the bottom): http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305292 ? Thanks, Alan Isaac From mattknox_ca at hotmail.com Fri Dec 8 11:15:39 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Fri, 8 Dec 2006 11:15:39 -0500 Subject: [SciPy-dev] timeseries module uploaded to sandbox Message-ID: I have uploaded my timeseries module to the sandbox now. 
This is by no means a polished product (hence, its location in the sandbox), but feel free to play around with it. Please take a look at the README file first before you try to do anything with the code. Feel free to criticize, suggest improvements, etc, etc. Even if you think I am way off base here in terms of how I have designed the module, perhaps this can at least serve as a starting point for a discussion on how a proper time series module should be done. The code has been written by myself and a couple of my co-op students off and on. I do not believe the C code segfaults (at least I haven't got it to do that), but there probably are memory leaks. I'm not an expert with Python's reference counting (nor were any of my co-op students), but we did the best we could. - Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From guyer at nist.gov Fri Dec 8 11:41:31 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 8 Dec 2006 11:41:31 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457921FE.50708@ar.media.kyoto-u.ac.jp><7C9EAD87-8A16-4E7C-A628-233EEE452C02@nist.gov> Message-ID: On Dec 8, 2006, at 10:32 AM, Alan G Isaac wrote: > On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: >> doctests (including our full example scripts) >> cover all the things our old unittest suites did, and a great deal >> more, and they're a lot easier to write, read, and maintain. > > Would you agree with the assessment here (at the bottom): > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305292 ? By and large, yes. I'm aware of the arguments that unit tests should be separate from code, but I'm not really persuaded by that in practice. I think it's much more likely that separate tests will be completely out of date and irrelevant (whether or not they pass).
As far as > A potential > problem with doctest is that you may have so many tests that your > docstrings would hinder rather than help understanding of your code we do have a few cases of very pedantic testing that don't serve much use as documentation. For those, we put them in hidden _test* functions at the end of the code. They don't appear in our documentation, but they do get exercised. Putting them in a separate file would be OK, too. From wnbell at gmail.com Fri Dec 8 12:43:59 2006 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 8 Dec 2006 11:43:59 -0600 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: <45789735.309@ee.byu.edu> References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> <45789735.309@ee.byu.edu> Message-ID: On 12/7/06, Travis Oliphant wrote: > The licensing for SPARSKIT is (was) not suitable for use in SciPy. I > have used it in the past but couldn't because of licensing issues. I've learned that SMMP[1] is public domain and well documented[2]. It consists of a single fortran file with no external dependencies. I successfully ran f2py on smmp.f and was able to load the module into python. Is there any reason that SMMP could/should not be used for sparse matrix multiplies? If not I can try making the necessary changes. [1] http://www.mgnet.org/~douglas/ccd-codes.html [2] http://citeseer.ist.psu.edu/445062.html -- Nathan Bell wnbell at gmail.com From mattknox_ca at hotmail.com Fri Dec 8 13:48:02 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Fri, 8 Dec 2006 13:48:02 -0500 Subject: [SciPy-dev] timeseries module uploaded to sandbox Message-ID: whoops, forgot to upload the doc and examples folders. Please check again if you looked for them earlier - Matt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gruben at bigpond.net.au Fri Dec 8 19:30:16 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 09 Dec 2006 11:30:16 +1100 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> Message-ID: <457A0398.90406@bigpond.net.au> Hi Jonathan, > I'm all for a standard format (and certainly all for encouraging > documentation (although I'm a bit skeptical that agreeing on a format > is suddenly going to open the floodgates of heretofore amorphous > documentation that the SciPy community has been hoarding to > themselves)). > > It's a shame that he felt the need to invent YADMUS (Yet Another > Docstring MarkUp Specification), though. Python has too many systems > for this already. Can you suggest something existing for the extra markup? I've never used anything fancier than plain docstrings. The other existing options seem to be (from a brief look) reST/docutils epydoc pythondoc doxygen happydoc Some (all?) of these specify their own markup. Here's a summary: I'm told by a doxygen advocate that it now has proper Python support and should be considered. If someone else has looked at these maybe they can pipe in, otherwise I may have to find out the relative advantages and disadvantages. I like the idea of supporting LaTeX markup, cross references. I like the idea of supporting matplotlib examples (opinions?). I like Mark's idea of a wiki page per function/class/module if it can be done. I also don't mind William's YADMUS. >> I'd >> also specify some way of delineating examples so that they can be >> preprocessed and run as part of the unit tests. > > You mean like doctest? The problem I see with plain doctests is that you can't easily choose to run, say, just the 3rd example out of 5 without specifying some sort of delineation between them, but this would be easy; something like: # Ex1: or Ex2 etc. 
between each doctest. Gary R. From aisaac at american.edu Fri Dec 8 20:11:15 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 8 Dec 2006 20:11:15 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <457A0398.90406@bigpond.net.au> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au> Message-ID: On Sat, 09 Dec 2006, Gary Ruben apparently wrote: > Can you suggest something existing for the extra markup? I've never used > anything fancier than plain docstrings. > The other existing options seem to be (from a brief look) > reST/docutils > epydoc > pythondoc > doxygen > happydoc epydoc has very good support for reST, which is a wonderful and powerful markup. Alan Isaac PS reST has a writer in the sandbox that supports LaTeX as a text role! (Writes to LaTeX or nicely to XHTML+MathML.) When epydoc supports that, we'll really be in good shape! From guyer at nist.gov Fri Dec 8 21:15:23 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 8 Dec 2006 21:15:23 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <457A0398.90406@bigpond.net.au> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> Message-ID: On Dec 8, 2006, at 7:30 PM, Gary Ruben wrote: > Can you suggest something existing for the extra markup? I've never > used > anything fancier than plain docstrings. > The other existing options seem to be (from a brief look) > > reST/docutils > epydoc > pythondoc > doxygen > happydoc Right. These all exist. They all have their issues and their benefits. I just don't see a need for yet another one. We happen to use the epydoc tool to process reST markup. > I like the idea of supporting LaTeX markup, cross references. Alan addresses this. > I like the idea of supporting matplotlib examples (opinions?). Supporting how?
You certainly can have matplotlib code in doctest lines and you can use reST's .. image:: directive to include output in your documentation. > I like Mark's idea of a wiki page per function/class/module if it > can be > done. > I also don't mind William's YADMUS. It's not that I mind his markup; it's that I don't think Python benefits from having at least a half-dozen of them. If the energy that went into developing reST, epydoc, pydoc, doxygen for Python, happydoc, and SAGE-YADMUS had gone into improving any one of them, I think the entire Python community would be better off. > The problem I see with plain doctests is that you can't easily > choose to > run, say, just the 3rd example out of 5 without specifying some > sort of > delineation between them, but this would be easy; something like: > # Ex1: > or Ex2 etc. between each doctest. I'm not sure exactly what you mean by this. We have full examples of different applications of FiPy. Those are each in individual files where the entire thing is just a big docstring, including doctest lines, with a little bit of code at the end to take care of loading the doctest lines and running them. Those individual scripts can be run on their own (although simply as scripts, not as doctests). We also document the usage of individual methods with specific doctests. I can't imagine why I'd want to just run one of them. From guyer at nist.gov Fri Dec 8 21:17:20 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 8 Dec 2006 21:17:20 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au> Message-ID: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> On Dec 8, 2006, at 8:11 PM, Alan G Isaac wrote: > epydoc has very good support for reST, > which is a wonderful and powerful markup. This is what we do, and we basically like it.
I've hacked epydoc 2.1 rather severely, mostly to separate markup from style (and mostly for LaTeX (I like what epydoc *does*, but I don't like how its output looks)). I keep meaning to clean up my changes and send them back to Ed Loper. Now that he's got epydoc 3 in progress, I think I'll just integrate my changes with that and then send it to him. > PS reST has writer in the sandbox that supports > LaTeX as a text role! (Writes to LaTeX or nicely > to XHTML+MathML.) Do you know if any of that actually works? The last time I looked at it, I couldn't figure out what to do with it. I think it would be great, though. From oliphant.travis at ieee.org Fri Dec 8 21:31:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 08 Dec 2006 19:31:04 -0700 Subject: [SciPy-dev] sparse matrix multiplication - poor performance In-Reply-To: References: <53EC7E92-C220-4249-96F9-4B309E3D3944@nist.gov> <47347f490612070022n1e0fc94dp503af72490d45c5c@mail.gmail.com> <47347f490612070714k632dc744i8238e9370cd839c1@mail.gmail.com> <45789735.309@ee.byu.edu> Message-ID: <457A1FE8.20004@ieee.org> Nathan Bell wrote: > On 12/7/06, Travis Oliphant wrote: > >> The licensing for SPARSKIT is (was) not suitable for use in SciPy. I >> have used it in the past but couldn't because of licensing issues. >> > > I've learned that SMMP[1] is public domain and well documented[2]. It > consists of a single fortran file with no external dependencies. I > successfully ran f2py on smmp.f and was able to load the module into > python. > > Is there any reason that SMMP could/should not be used for sparse > matrix multiplies? If not I can try making the necessary changes. > > No reason not to use it if the license is really suitable (the web-page seems to indicate that if it is used in "commercial" code then we have to contact the author first --- SciPy cannot have such encumbrances). If we can use it, then great. Which storage format does it work for? 
-Travis From aisaac at american.edu Sat Dec 9 00:31:49 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 9 Dec 2006 00:31:49 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> Message-ID: On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: > Do you know if any of that actually works? The last time > I looked at it, I couldn't figure out what to do with it. *If* it works? Yes: it works great. I use it all the time. Use it like any other writer, but use file extension .xml or FireFox won't recognize it. (And of course IE has always been completely braindead for XML, although perhaps IE 7 finally will parse XHTML+MathML? I have not tried.) *How* it works? Sorry. I did not contribute any of the Python code. If I missed your question, ask more. Cheers, Alan Isaac PS Hint: for LaTeX output, make raw-latex the default text role, and it all looks beautiful! From gruben at bigpond.net.au Sat Dec 9 04:26:27 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 09 Dec 2006 20:26:27 +1100 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> Message-ID: <457A8143.6020305@bigpond.net.au> > Supporting how? You certainly can have matplotlib code in doctest > lines and you can use reST's ..image:: declaration to include output > in your documentation. That's fine. I just raised it because I like graphical examples but I'm concerned about advocating reliance on any separate package, although matplotlib has probably gained enough acceptance that it's OK to assume it will be available on most numpy users' systems. 
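A graphical docstring example of the kind Gary describes can combine two mechanisms this thread already mentions: a doctest that exercises the plotting code cheaply, and reST's image directive for the rendered figure. A hypothetical sketch (the values and the image file name are invented for illustration):

```rst
Plot a damped oscillation:

>>> import numpy as np
>>> x = np.linspace(0, 10, 100)
>>> y = np.exp(-x / 3) * np.cos(2 * x)
>>> y.shape
(100,)

.. image:: damped_oscillation.png
   :alt: exponentially damped cosine
```

The doctest half stays quick to run (it checks shapes and values, not pixels), while the image would be regenerated offline and picked up when the docs are built.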
>> The problem I see with plain doctests is that you can't easily >> choose to run, say, just the 3rd example out of 5 without >> specifying some sort of delineation between them, but this would be >> easy; something like: # Ex1: or Ex2 etc. between each doctest. > > I'm not sure exactly what you mean by this. We have full examples of > different applications of FiPy. Those are each in individual files > where the entire thing is just a big docstring, including doctest > lines, with a little bit of code at the end to take care of loading > the doctest lines and running them. Those individual scripts can be > run on their own (although simply as scripts, not as doctests). We > also document the usage of individual methods with specific doctests. > I can't imagine why I'd want to just run one of them. I was envisaging several small examples in docstrings in the code modules with the ability to run any of them interactively. I think this is how it would be used in SAGE and in ipython. I've now downloaded FiPy to take a look. I think it's very nicely documented. My impression is that the in-code examples have limited mark-up and just a couple of simple, non-graphical examples. This allows them to be uncluttered and remain comprehensible when getting the docstring interactively. Detailed examples, although heavily marked-up, are in separate modules. I'd be happy if numpy/scipy adopted this approach of limiting and separating the mark-up. The LaTeX source is available. I'd like to know what others think about following FiPy's example. What else needs to be considered? I think that to build a skeleton reference document, the numarray document could be a starting point as its source is available, except it seems, for the LaTeX class file. I couldn't find the Numeric doc source. I'm not sure whether it's OK to just include relevant bits of these, but it looks like it's OK, provided suitable licence stuff and attribution is included. 
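Running only one of several examples, as Gary envisages above, is already possible with the doctest API even without `# Ex1:` markers: `DocTestFinder` returns `DocTest` objects whose `examples` list can be sliced before handing them to a runner. A minimal sketch:

```python
import doctest

def sample():
    """A docstring with two independent examples.

    >>> 1 + 1
    2
    >>> 2 * 3
    6
    """

# Find the DocTest attached to sample's docstring, then keep only the
# second example before running it.
test = doctest.DocTestFinder().find(sample)[0]
test.examples = test.examples[1:2]
results = doctest.DocTestRunner(verbose=False).run(test)
```

With a convention like the `# Ex1:` comments suggested earlier, the same slicing could key off markers instead of positions.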
A skeleton based on FiPy's manual and the available numarray LaTeX with a mini-spec for docstrings would be something I could have a go at. Gary R. From guyer at nist.gov Sat Dec 9 09:41:30 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sat, 9 Dec 2006 09:41:30 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> Message-ID: <96AE967D-EE53-4009-B5E7-6F1D63353CCA@nist.gov> On Dec 9, 2006, at 12:31 AM, Alan G Isaac wrote: > On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: >> Do you know if any of that actually works? The last time >> I looked at it, I couldn't figure out what to do with it. > > > *If* it works? Yes: it works great. I use it all the time. > Hint: for LaTeX output, make raw-latex the default text > role, and it all looks beautiful! Thanks, I'll have to experiment. LaTeX output is what we're interested in, although we'd probably do more with the web if we could show math. From aisaac at american.edu Sat Dec 9 10:01:15 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 9 Dec 2006 10:01:15 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <96AE967D-EE53-4009-B5E7-6F1D63353CCA@nist.gov> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><96AE967D-EE53-4009-B5E7-6F1D63353CCA@nist.gov> Message-ID: > On Dec 9, 2006, at 12:31 AM, Alan G Isaac wrote: >> Hint: for LaTeX output, make raw-latex the default text >> role, and it all looks beautiful! On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: > Thanks, I'll have to experiment. LaTeX output is what > we're interested in That should have been: make **latex-math** the default role. Like this:: .. 
default-role:: latex-math Cheers, Alan Isaac From guyer at nist.gov Sat Dec 9 10:03:55 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sat, 9 Dec 2006 10:03:55 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <457A8143.6020305@bigpond.net.au> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <457A8143.6020305@bigpond.net.au> Message-ID: <254673B1-549E-41C0-AA78-09F95C492CD3@nist.gov> On Dec 9, 2006, at 4:26 AM, Gary Ruben wrote: > I'm > concerned about advocating reliance on any separate package, although > matplotlib has probably gained enough acceptance that it's OK to > assume > it will be available on most numpy users' systems. We actually provide a set of abstraction classes for that reason, and our examples all just say things like "viewer = fipy.viewers.make (...)" to return something acceptable as long as at least one of our supported viewers is installed. Particularly after what we saw at SciPy'06, though, we're moving toward only supporting matplotlib (at least for 1D and 2D). > I was envisaging several small examples in docstrings in the code > modules with the ability to run any of them interactively. I think > this > is how it would be used in SAGE and in ipython. You can restrict doctests down to one module vs. another, but I don't know if there's any way to restrict it to only running the tests of a particular method (although I've never tried), and there's certainly no mechanism to run only some tests of a method but not others. I haven't missed that ability, but I can see how other uses might call for it. > I've now downloaded FiPy to take a look. I think it's very nicely > documented. Thank you. > My impression is that the in-code examples have limited > mark-up and just a couple of simple, non-graphical examples. 
The *NoiseVariable classes have some graphical examples, but other than that, we either haven't found the time or the need to do it. I'd be happy to hear of cases where you think graphics would be helpful in our API documentation. If anybody else wants to follow along at home, you can download our manuals from without having to get the rest of the package. fipy.pdf is the user guide, with examples. reference.pdf is the API documentation. > This allows > them to be uncluttered and remain comprehensible when getting the > docstring interactively. Detailed examples, although heavily marked- > up, > are in separate modules. We provide a mechanism, via our setup.py script, to strip the markup if people want to adapt our examples without figuring out how to write docstring/doctest markup. On the other hand, I've come to writing all of my research codes in docstring/doctest. The ability to interlace math and code makes it a lot easier for me to figure out my codes when I come back to them later and clearer to explain to other people. There's an emacs mode for writing doctest out there somewhere, and I've cobbled together some modifications of Alpha's Python mode that simplify writing doctest; I really need to finish that up and check it back in to AlphaTcl. > I'd be happy if numpy/scipy adopted this > approach of limiting and separating the mark-up. The LaTeX source is > available. Be aware that you might have trouble building our docs. We used to try to ensure that anybody could build it, but eventually decided that we were more concerned with our ability to produce manuals that looked the way we want than we were with the rare cases of anybody else trying to build it. It's not a huge deal, but you'll need a couple of LaTeX classes and you'll need to adjust your paths to find our hacked epydoc. Anyway, if you want help, write me directly and I'll try to get it working for you. 
From guyer at nist.gov Sat Dec 9 10:05:26 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sat, 9 Dec 2006 10:05:26 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><96AE967D-EE53-4009-B5E7-6F1D63353CCA@nist.gov> Message-ID: <57BCDE5B-F776-44FC-84BA-D8C554FBE571@nist.gov> On Dec 9, 2006, at 10:01 AM, Alan G Isaac wrote: > That should have been: > make **latex-math** the default role. > Like this:: > > .. default-role:: latex-math Ah, OK, thanks. From guyer at nist.gov Sat Dec 9 18:14:16 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sat, 9 Dec 2006 18:14:16 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> Message-ID: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> On Dec 9, 2006, at 12:31 AM, Alan G Isaac wrote: > On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: >> Do you know if any of that actually works? The last time >> I looked at it, I couldn't figure out what to do with it. > > > *If* it works? Yes: it works great. I see my confusion. I'd gotten the sandbox via svn along with docutils HEAD, and it doesn't work with that (it throws an error deep within docutils). I switched to docutils-0.4 and it seems fine. 
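The latex-math setup Alan describes in this thread boils down to one line of reST. A small fragment, assuming the sandbox latex-math writer with docutils 0.4 (the combination Jonathan found to work):

```rst
.. default-role:: latex-math

With the default role set, inline math needs no explicit role name:
the integral `\int_0^1 x^2\,dx = 1/3` is written in plain backquotes
and comes out as MathML in the XHTML output or as math in LaTeX output.
```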
From aisaac at american.edu Sat Dec 9 21:52:31 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 9 Dec 2006 21:52:31 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> Message-ID: On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: > I see my confusion. I'd gotten the sandbox via svn along with > docutils HEAD, and it doesn't work with that (it throws an error deep > within docutils). I switched to docutils-0.4 and it seems > fine. This problem that docutils SVN head creates for the latex-math module in the sandbox should interest the docutils developers and perhaps the latex-math developer (Jens Jørgen Mortensen), so I'll forward your comment to the docutils list. You may wish to follow-up with any useful details. Cheers, Alan Isaac From guyer at nist.gov Sun Dec 10 14:02:35 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sun, 10 Dec 2006 14:02:35 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> Message-ID: On Dec 9, 2006, at 9:52 PM, Alan G Isaac wrote: > I'll forward your comment to the > docutils list. You may wish to follow-up with any useful > details. I'll do that. From pgmdevlist at gmail.com Sun Dec 10 15:06:12 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 10 Dec 2006 15:06:12 -0500 Subject: [SciPy-dev] ANN: An alternative to numpy.core.ma Message-ID: <200612101506.12873.pgmdevlist@gmail.com> All, I just posted on the DeveloperZone of the wiki the latest version of maskedarray, an alternative to numpy.core.ma.
You can download it here: http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/maskedarray-1.00.dev0040.tar.gz The package has three modules: core (with the basic functions of numpy.core.ma), extras (which adds some functions, such as apply_along_axis, or the concatenator mr_), and testutils (which adds support for maskedarray for the tests functions). It also comes with its test suite (available in the tests subdirectory). For those of you who were not aware of it, the new MaskedArray is a subclass of ndarray, and it accepts any subclass of ndarray as data. You can use it as you would with numpy.core.ma.MaskedArray. For those of you who already tested the package, the main modifications are: - the reorganization of the initial module in core+extras. - Data are now shared by default (in other terms, the copy flag defaults to False in MaskedArray.__new__), for consistency with the rest of numpy. - An additional boolean flag has been introduced: keep_mask (with a default of True). This flag is useful when trying to mask a mask array: it tells __new__ whether to keep the initial mask (in that case, the new mask will be combined with the old mask) or not (in that case, the new mask replaces the old one). - Some functions/routines that were missing have been added (any/all...) As always, this is a work in progress. In particular, I should really check for the bottlenecks: would anybody have some pointers ? If you wanna be on the safe, optimized side, stick to numpy.core.ma. Otherwise, please try this new implementation, and don't forget to give me some feedback! PS: Technical question: how can I delete some files in the DeveloperZone wiki ? The maskedarray.py, test_maskedarray.py, test_subclasses.py are out of date, and should be replaced by the package. Thanks a lot in advance ! 
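The keep_mask flag Pierre introduces can be illustrated with a small pure-Python sketch of its semantics (this illustrates the behaviour only; it is not the maskedarray implementation):

```python
def remask(old_mask, new_mask, keep_mask=True):
    """Combine per-element masks as described above.

    keep_mask=True: the new mask is combined (or-ed) with the old one.
    keep_mask=False: the new mask replaces the old one.
    """
    if keep_mask:
        return [o or n for o, n in zip(old_mask, new_mask)]
    return list(new_mask)

# Masking an already-masked array: element 0 was masked before,
# element 2 is newly masked.
combined = remask([True, False, False], [False, False, True])
replaced = remask([True, False, False], [False, False, True], keep_mask=False)
```

In the package itself this choice is made in MaskedArray.__new__ when an already-masked array is passed in with a new mask.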
From robert.kern at gmail.com Sun Dec 10 17:52:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 10 Dec 2006 16:52:40 -0600 Subject: [SciPy-dev] ANN: An alternative to numpy.core.ma In-Reply-To: <200612101506.12873.pgmdevlist@gmail.com> References: <200612101506.12873.pgmdevlist@gmail.com> Message-ID: <457C8FB8.5080606@gmail.com> Pierre GM wrote: > PS: > Technical question: how can I delete some files in the DeveloperZone wiki ? > The maskedarray.py, test_maskedarray.py, test_subclasses.py are out of date, > and should be replaced by the package. Thanks a lot in advance ! I've done it for you. You click on the attachment link, and then at the bottom of the page is a button to delete the attachment. However, this is only true if you have been granted the appropriate permissions; I believe I gave DELETE privileges only to people with numpy SVN checkin privileges. If you would like a more appropriate place to develop and distribute your code than a wiki page, ask Jeff Strunk (Cced here) for SVN checkin privileges to scipy. You can make a sandbox package for your code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.leslie at gmail.com Sun Dec 10 18:56:27 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 11 Dec 2006 10:56:27 +1100 Subject: [SciPy-dev] cholesky decomposition for banded matrices In-Reply-To: <456357AF.1070500@stanford.edu> References: <456357AF.1070500@stanford.edu> Message-ID: On 11/22/06, Jonathan Taylor wrote: > A week or so ago, I asked about generalized eigenvalue problems for > banded matrices -- turns out all I needed was a Cholesky decomposition. > > I added support for banded cholesky decomposition and solution of banded > linear systems with Hermitian or symmetric matrices in scipy.linalg with > some tests. 
The tests are not as exhaustive as they should be.... > > Patch is attached. Hi Jonathan, This patch worked fine for me, so I've checked it in in r2385. Cheers, Tim > > -- Jonathan > > > Index: Lib/linalg/info.py > =================================================================== > --- Lib/linalg/info.py (revision 2324) > +++ Lib/linalg/info.py (working copy) > @@ -7,6 +7,7 @@ > inv --- Find the inverse of a square matrix > solve --- Solve a linear system of equations > solve_banded --- Solve a linear system of equations with a banded matrix > + solveh_banded --- Solve a linear system of equations with a Hermitian or symmetric banded matrix, returning the Cholesky decomposition as well > det --- Find the determinant of a square matrix > norm --- matrix and vector norm > lstsq --- Solve linear least-squares problem > @@ -27,6 +28,7 @@ > diagsvd --- construct matrix of singular values from output of svd > orth --- construct orthonormal basis for range of A using svd > cholesky --- Cholesky decomposition of a matrix > + cholesky_banded --- Cholesky decomposition of a banded symmetric or Hermitian matrix > cho_factor --- Cholesky decomposition for use in solving linear system > cho_solve --- Solve previously factored linear system > qr --- QR decomposition of a matrix > Index: Lib/linalg/generic_flapack.pyf > =================================================================== > --- Lib/linalg/generic_flapack.pyf (revision 2324) > +++ Lib/linalg/generic_flapack.pyf (working copy) > @@ -13,6 +13,113 @@ > python module generic_flapack > interface > > + subroutine pbtrf(lower,n,kd,ab,ldab,info) > + > + ! Compute Cholesky decomposition of banded symmetric positive definite > + ! matrix: > + ! A = U^T * U, C = U if lower = 0 > + ! A = L * L^T, C = L if lower = 1 > + ! C is triangular matrix of the corresponding Cholesky decomposition. 
> + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,ab,&ldab,&info); > + callprotoargument char*,int*,int*,*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbtrf > + > + subroutine pbtrf(lower,n,kd,ab,ldab,info) > + > + > + ! Compute Cholesky decomposition of banded symmetric positive definite > + ! matrix: > + ! A = U^H * U, C = U if lower = 0 > + ! A = L * L^H, C = L if lower = 1 > + ! C is triangular matrix of the corresponding Cholesky decomposition. > + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,ab,&ldab,&info); > + callprotoargument char*,int*,int*,*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbtrf > + > + subroutine pbsv(lower,n,kd,nrhs,ab,ldab,b,ldb,info) > + > + ! > + ! Computes the solution to a real system of linear equations > + ! A * X = B, > + ! where A is an N-by-N symmetric positive definite band matrix and X > + ! and B are N-by-NRHS matrices. > + ! > + ! The Cholesky decomposition is used to factor A as > + ! A = U**T * U, if lower=1, or > + ! A = L * L**T, if lower=0 > + ! where U is an upper triangular band matrix, and L is a lower > + ! triangular band matrix, with the same number of superdiagonals or > + ! subdiagonals as A. The factored form of A is then used to solve the > + ! system of equations A * X = B. 
From tim.leslie at gmail.com Sun Dec 10 18:56:27 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 11 Dec 2006 10:56:27 +1100 Subject: [SciPy-dev] cholesky decomposition for banded matrices In-Reply-To: <456357AF.1070500@stanford.edu> References: <456357AF.1070500@stanford.edu> Message-ID: On 11/22/06, Jonathan Taylor wrote: > A week or so ago, I asked about generalized eigenvalue problems for > banded matrices -- turns out all I needed was a Cholesky decomposition. > > I added support for banded cholesky decomposition and solution of banded > linear systems with Hermitian or symmetric matrices in scipy.linalg with > some tests. The tests are not as exhaustive as they should be.... > > Patch is attached. Hi Jonathan, This patch worked fine for me, so I've checked it in in r2385.
Cheers, Tim > > -- Jonathan > > > Index: Lib/linalg/info.py > =================================================================== > --- Lib/linalg/info.py (revision 2324) > +++ Lib/linalg/info.py (working copy) > @@ -7,6 +7,7 @@ > inv --- Find the inverse of a square matrix > solve --- Solve a linear system of equations > solve_banded --- Solve a linear system of equations with a banded matrix > + solveh_banded --- Solve a linear system of equations with a Hermitian or symmetric banded matrix, returning the Cholesky decomposition as well > det --- Find the determinant of a square matrix > norm --- matrix and vector norm > lstsq --- Solve linear least-squares problem > @@ -27,6 +28,7 @@ > diagsvd --- construct matrix of singular values from output of svd > orth --- construct orthonormal basis for range of A using svd > cholesky --- Cholesky decomposition of a matrix > + cholesky_banded --- Cholesky decomposition of a banded symmetric or Hermitian matrix > cho_factor --- Cholesky decomposition for use in solving linear system > cho_solve --- Solve previously factored linear system > qr --- QR decomposition of a matrix > Index: Lib/linalg/generic_flapack.pyf > =================================================================== > --- Lib/linalg/generic_flapack.pyf (revision 2324) > +++ Lib/linalg/generic_flapack.pyf (working copy) > @@ -13,6 +13,113 @@ > python module generic_flapack > interface > > + subroutine pbtrf(lower,n,kd,ab,ldab,info) > + > + ! Compute Cholesky decomposition of banded symmetric positive definite > + ! matrix: > + ! A = U^T * U, C = U if lower = 0 > + ! A = L * L^T, C = L if lower = 1 > + ! C is triangular matrix of the corresponding Cholesky decomposition. 
> + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,ab,&ldab,&info); > + callprotoargument char*,int*,int*,<ctype>*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbtrf > + > + subroutine pbtrf(lower,n,kd,ab,ldab,info) > + > + > + ! Compute Cholesky decomposition of banded Hermitian positive definite > + ! matrix: > + ! A = U^H * U, C = U if lower = 0 > + ! A = L * L^H, C = L if lower = 1 > + ! C is triangular matrix of the corresponding Cholesky decomposition. > + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,ab,&ldab,&info); > + callprotoargument char*,int*,int*,<ctype>*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbtrf > + > + subroutine pbsv(lower,n,kd,nrhs,ab,ldab,b,ldb,info) > + > + ! > + ! Computes the solution to a real system of linear equations > + ! A * X = B, > + ! where A is an N-by-N symmetric positive definite band matrix and X > + ! and B are N-by-NRHS matrices. > + ! > + ! The Cholesky decomposition is used to factor A as > + ! A = U**T * U, if lower=0, or > + ! A = L * L**T, if lower=1 > + ! where U is an upper triangular band matrix, and L is a lower > + ! triangular band matrix, with the same number of superdiagonals or > + ! subdiagonals as A. The factored form of A is then used to solve the > + ! system of equations A * X = B.
> + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,&nrhs,ab,&ldab,b,&ldb,&info); > + callprotoargument char*,int*,int*,int*,<ctype>*,int*,<ctype>*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer intent(hide),depend(b) :: ldb=shape(b,1) > + integer intent(hide),depend(b) :: nrhs=shape(b,0) > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(nrhs,ldb),intent(in,out,copy,out=x) :: b > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbsv > + > + subroutine pbsv(lower,n,kd,nrhs,ab,ldab,b,ldb,info) > + > + ! > + ! Computes the solution to a complex system of linear equations > + ! A * X = B, > + ! where A is an N-by-N Hermitian positive definite band matrix and X > + ! and B are N-by-NRHS matrices. > + ! > + ! The Cholesky decomposition is used to factor A as > + ! A = U**H * U, if lower=0, or > + ! A = L * L**H, if lower=1 > + ! where U is an upper triangular band matrix, and L is a lower > + ! triangular band matrix, with the same number of superdiagonals or > + ! subdiagonals as A. The factored form of A is then used to solve the > + ! system of equations A * X = B.
> + > + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,&nrhs,ab,&ldab,b,&ldb,&info); > + callprotoargument char*,int*,int*,int*,<ctype>*,int*,<ctype>*,int*,int* > + > + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) > + integer intent(hide),depend(ab) :: n=shape(ab,1) > + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 > + integer intent(hide),depend(b) :: ldb=shape(b,1) > + integer intent(hide),depend(b) :: nrhs=shape(b,0) > + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 > + > + dimension(nrhs,ldb),intent(in,out,copy,out=x) :: b > + dimension(ldab,n),intent(in,out,copy,out=c) :: ab > + integer intent(out) :: info > + > + end subroutine pbsv > + > subroutine gebal(scale,permute,n,a,m,lo,hi,pivscale,info) > ! > ! ba,lo,hi,pivscale,info = gebal(a,scale=0,permute=0,overwrite_a=0) > Index: Lib/linalg/tests/test_cholesky_banded.py > =================================================================== > --- Lib/linalg/tests/test_cholesky_banded.py (revision 0) > +++ Lib/linalg/tests/test_cholesky_banded.py (revision 0) > @@ -0,0 +1,208 @@ > +import numpy as N > +import numpy.random as R > +import scipy.linalg as L > +from numpy.testing import * > + > +from scipy.linalg import cholesky_banded, solveh_banded > + > +def _upper2lower(ub): > + """ > + Convert upper triangular banded matrix to lower banded form. > + """ > + > + lb = N.zeros(ub.shape, ub.dtype) > + nrow, ncol = ub.shape > + for i in range(ub.shape[0]): > + lb[i,0:(ncol-i)] = ub[nrow-1-i,i:ncol] > + lb[i,(ncol-i):] = ub[nrow-1-i,0:i] > + return lb > + > +def _lower2upper(lb): > + """ > + Convert lower triangular banded matrix to upper banded form.
> + """ > + > + ub = N.zeros(lb.shape, lb.dtype) > + nrow, ncol = lb.shape > + for i in range(lb.shape[0]): > + ub[nrow-1-i,i:ncol] = lb[i,0:(ncol-i)] > + ub[nrow-1-i,0:i] = lb[i,(ncol-i):] > + return ub > + > +def _triangle2unit(tb, lower=0): > + """ > + Take a banded triangular matrix and return its diagonal and the unit matrix: > + the banded triangular matrix with 1's on the diagonal. > + """ > + > + if lower: d = tb[0] > + else: d = tb[-1] > + > + if lower: return d, tb / d > + else: > + l = _upper2lower(tb) > + return d, _lower2upper(l / d) > + > + > +def band2array(a, lower=0, symmetric=False, hermitian=False): > + """ > + Take an upper or lower triangular banded matrix and return a matrix using > + LAPACK storage convention. For testing banded Cholesky decomposition, etc. > + """ > + > + n = a.shape[1] > + r = a.shape[0] > + _a = 0 > + > + if not lower: > + for j in range(r): > + _b = N.diag(a[r-1-j],k=j)[j:(n+j),j:(n+j)] > + _a += _b > + if symmetric and j > 0: _a += _b.T > + elif hermitian and j > 0: _a += _b.conjugate().T > + else: > + for j in range(r): > + _b = N.diag(a[j],k=j)[0:n,0:n] > + _a += _b > + if symmetric and j > 0: _a += _b.T > + elif hermitian and j > 0: _a += _b.conjugate().T > + _a = _a.T > + > + return _a > + > +class test_cholesky(NumpyTestCase): > + > + def setUp(self): > + self.a = N.array([N.linspace(0,0.4,5), [1]*5]) > + self.b = N.array([[1]*5, N.linspace(0.1,0.5,5)]) > + self.b[1,-1] = 0. 
> + > + > + def test_UPLO(self): > + u, _ = L.flapack.dpbtrf(self.a) > + l, _ = L.flapack.dpbtrf(self.b, lower=1) > + assert_almost_equal(band2array(u), band2array(l, lower=1).T) > + > + def test_band2array(self): > + assert_almost_equal(band2array(self.a, symmetric=True), band2array(self.b, lower=1, symmetric=True)) > + > + def test_factor1(self): > + c = band2array(L.flapack.dpbtrf(self.a)[0]).T > + assert_almost_equal(c, N.linalg.cholesky(band2array(self.a, symmetric=True))) > + > + def test_factor2(self): > + c = band2array(L.flapack.dpbtrf(self.a)[0]) > + a = band2array(self.a, symmetric=True) > + assert_almost_equal(N.dot(c.T, c), a) > + > + def test_factor3(self): > + c = band2array(L.flapack.dpbtrf(self.b, lower=1)[0], lower=1) > + b = band2array(self.b, symmetric=True, lower=1) > + assert_almost_equal(N.dot(c, c.T), b) > + > + def test_chol_banded1(self): > + c = cholesky_banded(self.a) > + a = band2array(self.a, symmetric=True) > + _c = band2array(c) > + assert_almost_equal(N.dot(_c.T, _c), a) > + > + def test_chol_banded2(self): > + c = cholesky_banded(self.b, lower=1) > + a = band2array(self.a, symmetric=True) > + _c = band2array(c, lower=1) > + assert_almost_equal(N.dot(_c, _c.T), a) > + > + def test_upper2lower(self): > + assert_almost_equal(self.b, _upper2lower(self.a)) > + assert_almost_equal(self.b, _upper2lower(_lower2upper(self.b))) > + > + def test_lower2upper(self): > + assert_almost_equal(self.a, _lower2upper(self.b)) > + assert_almost_equal(self.a, _lower2upper(_upper2lower(self.a))) > + > +class test_complex_cholesky(test_cholesky): > + > + def setUp(self): > + n = 10 > + self.c = N.array([[20]*5,[1j]*4+[0],[0.01]*3+[0]*2]) > + > + _c = band2array(self.c, hermitian=True, lower=1) > + self.b = self.c > + self.a = _lower2upper(self.b) > + > + > + def test_UPLO(self): > + u, _ = L.flapack.zpbtrf(self.a) > + l, _ = L.flapack.zpbtrf(self.b, lower=1) > + assert_almost_equal(band2array(u), band2array(l, lower=1).T) > + > + def 
test_band2array(self): > + assert_almost_equal(band2array(self.a, hermitian=True), band2array(self.b, lower=1, hermitian=True).conjugate()) > + > + def test_factor1(self): > + c = band2array(L.flapack.zpbtrf(self.a)[0]).T.conjugate() > + assert_almost_equal(c, N.linalg.cholesky(band2array(self.a, hermitian=True))) > + > + def test_factor2(self): > + c = band2array(L.flapack.zpbtrf(self.a)[0]) > + a = band2array(self.a, hermitian=True) > + assert_almost_equal(N.dot(c.conjugate().T, c), a) > + > + def test_factor3(self): > + c = band2array(L.flapack.zpbtrf(self.b, lower=1)[0], lower=1) > + b = band2array(self.b, hermitian=True, lower=1) > + assert_almost_equal(N.dot(c, c.conjugate().T), b) > + > + def test_chol_banded1(self): > + c = cholesky_banded(self.a) > + a = band2array(self.a, hermitian=True) > + _c = band2array(c) > + assert_almost_equal(N.dot(_c.conjugate().T, _c), a) > + > + def test_chol_banded2(self): > + c = cholesky_banded(self.b, lower=1) > + b = band2array(self.b, hermitian=True, lower=1) > + _c = band2array(c, lower=1) > + assert_almost_equal(N.dot(_c, _c.conjugate().T), b) > + > +class test_random_cholesky(test_cholesky): > + def setUp(self): > + self.c = R.uniform(low=-1,high=0, size=(3,10)) > + self.c[0] = -N.sum(band2array(self.c, lower=1, symmetric=True), axis=1) + 0.1 > + self.b = self.c > + self.a = _lower2upper(self.b) > + > + def test_unit(self): > + decomp = cholesky_banded(self.c, lower=1) > + d, l = _triangle2unit(decomp, lower=1) > + dd = d**2 > + _l = band2array(l, lower=1) > + assert_almost_equal(N.dot(_l, N.dot(N.diag(dd), _l.T)), > + band2array(self.c, lower=1, symmetric=True)) > + > + > +class test_sym_solver(NumpyTestCase): > + > + def test_solveh(self): > + c, x = solveh_banded(self.a, N.identity(5)) > + a = band2array(self.a, symmetric=True) > + assert_almost_equal(x, L.inv(a)) > + _c = band2array(c) > + assert_almost_equal(N.dot(_c.T, _c), a) > + > + def setUp(self): > + self.a = N.array([N.linspace(0,0.4,5), [1]*5]) > + self.b 
= N.array([[1]*5, N.linspace(0.1,0.5,5)]) > + self.rhs = N.identity(5) > + > + def test_solveU(self): > + c,x,info = L.flapack.dpbsv(self.a, self.rhs) > + assert_almost_equal(N.dot(x, band2array(self.a, symmetric=True)), N.identity(5)) > + > + def test_solveL(self): > + c,x,info = L.flapack.dpbsv(self.b, self.rhs,lower=1) > + assert_almost_equal(N.dot(x, band2array(self.a, symmetric=True)), N.identity(5)) > + assert_almost_equal(N.dot(x, band2array(self.b, symmetric=True, lower=1)), N.identity(5)) > + > +if __name__ == "__main__": > + NumpyTest().run() > Index: Lib/linalg/basic.py > =================================================================== > --- Lib/linalg/basic.py (revision 2324) > +++ Lib/linalg/basic.py (working copy) > @@ -10,7 +10,7 @@ > __all__ = ['solve','inv','det','lstsq','norm','pinv','pinv2', > 'tri','tril','triu','toeplitz','hankel','lu_solve', > 'cho_solve','solve_banded','LinAlgError','kron', > - 'all_mat'] > + 'all_mat', 'cholesky_banded', 'solveh_banded'] > > #from blas import get_blas_funcs > from flinalg import get_flinalg_funcs > @@ -170,7 +170,94 @@ > raise ValueError,\ > 'illegal value in %-th argument of internal gbsv'%(-info) > > +def solveh_banded(ab, b, overwrite_ab=0, overwrite_b=0, > + lower=0): > + """ solveh_banded(ab, b, overwrite_ab=0, overwrite_b=0) -> c, x > > + Solve a linear system of equations a * x = b for x where > + a is a banded symmetric or Hermitian positive definite > + matrix stored in lower diagonal ordered form (lower=1) > + > + a11 a22 a33 a44 a55 a66 > + a21 a32 a43 a54 a65 * > + a31 a42 a53 a64 * * > + > + or upper diagonal ordered form > + > + * * a31 a42 a53 a64 > + * a21 a32 a43 a54 a65 > + a11 a22 a33 a44 a55 a66 > + > + Inputs: > + > + ab -- An N x l > + b -- An N x nrhs matrix or N vector. > + overwrite_y - Discard data in y, where y is ab or b. > + lower - is ab in lower or upper form? 
> + > + Outputs: > + > + c: the Cholesky factorization of ab > + x: the solution to ab * x = b > + > + """ > + ab, b = map(asarray_chkfinite,(ab,b)) > + > + pbsv, = get_lapack_funcs(('pbsv',),(ab,b)) > + c,x,info = pbsv(ab,b, > + lower=lower, > + overwrite_ab=overwrite_ab, > + overwrite_b=overwrite_b) > + if info==0: > + return c, x > + if info>0: > + raise LinAlgError, "%d-th leading minor not positive definite" % info > + raise ValueError,\ > + 'illegal value in %d-th argument of internal pbsv'%(-info) > + > +def cholesky_banded(ab, overwrite_ab=0, lower=0): > + """ cholesky_banded(ab, overwrite_ab=0, lower=0) -> c > + > + Compute the Cholesky decomposition of a > + banded symmetric or Hermitian positive definite > + matrix stored in lower diagonal ordered form (lower=1) > + > + a11 a22 a33 a44 a55 a66 > + a21 a32 a43 a54 a65 * > + a31 a42 a53 a64 * * > + > + or upper diagonal ordered form > + > + * * a31 a42 a53 a64 > + * a21 a32 a43 a54 a65 > + a11 a22 a33 a44 a55 a66 > + > + Inputs: > + > + ab -- An N x l > + overwrite_ab - Discard data in ab > + lower - is ab in lower or upper form? 
> + > + Outputs: > + > + c: the Cholesky factorization of ab > + > + """ > + ab = asarray_chkfinite(ab) > + > + pbtrf, = get_lapack_funcs(('pbtrf',),(ab,)) > + c,info = pbtrf(ab, > + lower=lower, > + overwrite_ab=overwrite_ab) > + > + if info==0: > + return c > + if info>0: > + raise LinAlgError, "%d-th leading minor not positive definite" % info > + raise ValueError,\ > + 'illegal value in %d-th argument of internal pbtrf'%(-info) > + > + > # matrix inversion > def inv(a, overwrite_a=0): > """ inv(a, overwrite_a=0) -> a_inv > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From nwagner at iam.uni-stuttgart.de Mon Dec 11 02:38:34 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 11 Dec 2006 08:38:34 +0100 Subject: [SciPy-dev] ZVODE, a Double Precision Complex ODE Solver Message-ID: <457D0AFA.60203@iam.uni-stuttgart.de> This might be of interest. Nils From: Alan Hindmarsh Date: Wed, 6 Dec 2006 18:20:32 -0500 Subject: ZVODE, a Double Precision Complex ODE Solver Of the many solvers freely available for ODE initial value problems, there seem to be very few, if any, for the case of double precision complex dependent variables, even though there have been occasional user calls for this functionality. (Of course a complex system can be treated as a real system of twice the size, but direct treatment as a complex system is preferable.) I have provided such a solver: ZVODE. ZVODE is a straightforward modification of the Fortran ODE solver DVODE, for double precision real systems, making it part of a trio, SVODE/DVODE/ZVODE. ZVODE is available in two locations: (1) Netlib, in /ode (see file zvode.f), and (2) the LLNL CASC software site, www.llnl.gov/CASC/software.html (see VODE). The latter site also includes a demonstration program, zvdemo. 
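The "real system of twice the size" trick mentioned in the announcement is easy to sketch in NumPy: for dz/dt = A @ z with complex A and z, stack w = [Re z; Im z] and use the block matrix B = [[Re A, -Im A], [Im A, Re A]], so that dw/dt = B @ w reproduces the complex dynamics. The helper names below are illustrative only and are not part of ZVODE's interface:

```python
import numpy as np

def realify(A):
    """Doubled real matrix B such that B @ stack(z) == stack(A @ z)."""
    Ar, Ai = A.real, A.imag
    return np.block([[Ar, -Ai], [Ai, Ar]])

def stack(z):
    """Stack a complex vector into its doubled real form [Re z; Im z]."""
    return np.concatenate([z.real, z.imag])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# A @ z = (Ar x - Ai y) + i(Ai x + Ar y) for z = x + iy, which is exactly
# what the block matrix produces on the stacked vector:
assert np.allclose(realify(A) @ stack(z), stack(A @ z))
```

Any real-valued ODE integrator can then be applied to the doubled system, at the cost of the 2x dimension that direct complex support (as in ZVODE) avoids.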
Alan Hindmarsh Center for Applied Scientific Computing Lawrence Livermore National Laboratory From cimrman3 at ntc.zcu.cz Mon Dec 11 11:24:16 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 11 Dec 2006 17:24:16 +0100 Subject: [SciPy-dev] FE+sparse utilities Message-ID: <457D8630.8060509@ntc.zcu.cz> Hi, often, people ask here how to quickly assemble small dense matrices into a global sparse matrix in the finite element sense. I have made a small package that could help with this task (BSD license, so that it could be included in scipy, if there is interest). It works directly with CSR sparse matrices and the whole matrix structure is preallocated prior to assembling -> no conversion overhead. It is available at http://147.228.107.66/mymoin/SFE. Note that this is a very preliminary web site on my work computer, so it may be down occasionally (especially during weekends :)). I would greatly appreciate any feedback, r. From jensj at fysik.dtu.dk Tue Dec 12 10:59:07 2006 From: jensj at fysik.dtu.dk (Jens =?ISO-8859-1?Q?J=F8rgen?= Mortensen) Date: Tue, 12 Dec 2006 16:59:07 +0100 Subject: [SciPy-dev] [Docutils-users] rst2mathml In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> Message-ID: <1165939147.25986.24.camel@doppler.fysik.dtu.dk> On Sat, 2006-12-09 at 21:52 -0500, Alan G Isaac wrote: > On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: > > I see my confusion. I'd gotten the sandbox via svn along with > > docutils HEAD, and it doesn't work with that (it throws an error deep > > within docutils). I switched to docutils-0.4 and it seems > > fine. 
> > This problem that docutils SVN head creates for the > latex-math module in the sandbox should interest the > docutils developers and perhaps the latex-math developer > (Jens Jørgen Mortensen), so I'll forward your comment to the > docutils list. You may wish to follow-up with any useful > details. The problem was that I registered the latex-math directive in an old-fashioned way that did not work with new docutils versions. I fixed it in svn so that it should work with both old and new versions. I also changed the name of the rst2latex.py script to rst2latexmath.py. Jens Jørgen > Cheers, > Alan Isaac > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys - and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Docutils-users mailing list > Docutils-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/docutils-users > > Please use "Reply All" to reply to the list. From david.huard at gmail.com Tue Dec 12 12:20:08 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 12 Dec 2006 12:20:08 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> Message-ID: <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Hi all, I followed this thread with much interest and I wouldn't like it to die without some kind of consensus being reached for the Scipy documentation system. Am I correct to say that Epydoc + REST/Latex seems the way to go?
If this is the case, what's next? I'm not familiar with any of this, but it'd be great if someone knowledgeable could define a roadmap and create a couple of tickets so that people like me could contribute small steps. Cheers, David 2006/12/9, Jonathan Guyer : > > > On Dec 9, 2006, at 12:31 AM, Alan G Isaac wrote: > > > On Fri, 8 Dec 2006, Jonathan Guyer apparently wrote: > >> Do you know if any of that actually works? The last time > >> I looked at it, I couldn't figure out what to do with it. > > > > > > *If* it works? Yes: it works great. > > I see my confusion. I'd gotten the sandbox via svn along with > docutils HEAD, and it doesn't work with that (it throws an error deep > within docutils). > I switched to docutils-0.4 and it seems fine. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From ndbecker2 at gmail.com Tue Dec 12 12:38:28 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 12 Dec 2006 12:38:28 -0500 Subject: [SciPy-dev] rst2mathml References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> Message-ID: Jens Jørgen Mortensen wrote: > On Sat, 2006-12-09 at 21:52 -0500, Alan G Isaac wrote: >> On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: >> > I see my confusion. I'd gotten the sandbox via svn along with >> > docutils HEAD, and it doesn't work with that (it throws an error deep >> > within docutils). I switched to docutils-0.4 and it seems >> > fine.
>> >> This problem that docutils SVN head creates for the >> latex-math module in the sandbox should interest the >> docutils developers and perhaps the latex-math developer >> (Jens Jørgen Mortensen), so I'll forward your comment to the >> docutils list. You may wish to follow-up with any useful >> details. > > The problem was that I registered the latex-math directive in an old-fashioned way that did not work with new docutils versions. I fixed it > in svn so that it should work with both old and new versions. I also > changed the name of the rst2latex.py script to rst2latexmath.py. > I'm interested in playing with this. I grabbed the current docutils svn, but it doesn't seem to work: rst2latex.py README.txt > stuff README.txt:1: (ERROR/3) Unknown interpreted text role "latex-math". .. default-role:: latex-math I don't know anything about rest or docutils, any hints? From aisaac at american.edu Tue Dec 12 13:06:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 12 Dec 2006 13:06:05 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov><1165939147.25986.24.camel@doppler.fysik.dtu.dk> Message-ID: On Tue, 12 Dec 2006, Neal Becker apparently wrote: > rst2latex.py README.txt > stuff > README.txt:1: (ERROR/3) Unknown interpreted text role "latex-math". Did you mean to use rst2latexmath.py? (In the sandbox; recently renamed.)
Cheers, Alan Isaac From david.huard at gmail.com Tue Dec 12 13:19:06 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 12 Dec 2006 13:19:06 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: References: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> Message-ID: <91cf711d0612121019l50e53ff4h5debfac4074211b6@mail.gmail.com> I don't know anything about Rest either, but I think that the setup script fetches another version of rst2latex. If you use the version in the sandbox, rst2latexmath.py, it should work. By the way, it's pretty neat. Kudos to the devs. David 2006/12/12, Neal Becker : > > Jens J?rgen Mortensen wrote: > > > On Sat, 2006-12-09 at 21:52 -0500, Alan G Isaac wrote: > >> On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: > >> > I see my confusion. I'd gotten the sandbox via svn along with > >> > docutils HEAD, and it doesn't work with that (it throws an error deep > >> > within docutils). I switched to docutils-0.4 and it seems > >> > fine. > >> > >> This problem that docutils SVN head creates for the > >> latex-math module in the sandbox should interest the > >> docutils developers and perhaps the latex-math developer > >> (Jens J?rgen Mortensen), so I'll forward your comment to the > >> docutils list. You may wish to follow-up with any useful > >> details. > > > > The problem was that I registered the latex-math directive in an old- > > fashioned way that did not work with new docutils versions. I fixed it > > in svn so that it should work with both old and new versions. I also > > changed the name of the rst2latex.py script to rst2latexmath.py. > > > I'm interested in playing with this. I grabbed the current docutils svn, > but it doesn't seem to work: > rst2latex.py README.txt > stuff > README.txt:1: (ERROR/3) Unknown interpreted text role "latex-math". > > .. 
default-role:: latex-math > > I don't know anything about rest or docutils, any hints? > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Tue Dec 12 13:46:45 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 12 Dec 2006 13:46:45 -0500 Subject: [SciPy-dev] rst2mathml References: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> <91cf711d0612121019l50e53ff4h5debfac4074211b6@mail.gmail.com> Message-ID: David Huard wrote: > I don't know anything about Rest either, but I think that the setup script > fetches another version of rst2latex. If you use the version in the > sandbox, rst2latexmath.py, it should work. > > By the way, it's pretty neat. Kudos to the devs. > > David > > > > 2006/12/12, Neal Becker : >> >> Jens J?rgen Mortensen wrote: >> >> > On Sat, 2006-12-09 at 21:52 -0500, Alan G Isaac wrote: >> >> On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: >> >> > I see my confusion. I'd gotten the sandbox via svn along with >> >> > docutils HEAD, and it doesn't work with that (it throws an error >> >> > deep >> >> > within docutils). I switched to docutils-0.4 and it seems >> >> > fine. >> >> >> >> This problem that docutils SVN head creates for the >> >> latex-math module in the sandbox should interest the >> >> docutils developers and perhaps the latex-math developer >> >> (Jens J?rgen Mortensen), so I'll forward your comment to the >> >> docutils list. You may wish to follow-up with any useful >> >> details. >> > >> > The problem was that I registered the latex-math directive in an old- >> > fashioned way that did not work with new docutils versions. 
I fixed it >> > in svn so that it should work with both old and new versions. I also >> > changed the name of the rst2latex.py script to rst2latexmath.py. >> > >> I'm interested in playing with this. I grabbed the current docutils svn, >> but it doesn't seem to work: >> rst2latex.py README.txt > stuff >> README.txt:1: (ERROR/3) Unknown interpreted text role "latex-math". >> >> .. default-role:: latex-math >> >> I don't know anything about rest or docutils, any hints? >> OK, so the installation procedure is: 1) svn checkout svn://svn.berlios.de/docutils/trunk docutils 2) ( cd docutils/docutils; python setup.py build && sudo python setup.py install ) 3) cd sandbox/jensj/latex_math/; sudo install *.py /usr/bin I was a bit confused by the doc docutils/sandbox/jensj/latex_math/README.txt, which says "The plug-in adds..." So I'm looking around the docutils install directory for a place to install plugins. I guess that's not the correct procedure. From guyer at nist.gov Tue Dec 12 14:07:08 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Tue, 12 Dec 2006 14:07:08 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: References: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> <91cf711d0612121019l50e53ff4h5debfac4074211b6@mail.gmail.com> Message-ID: <7F5B41D9-C9F0-4B9F-AF87-3BCEEEF5F6DD@nist.gov> On Dec 12, 2006, at 1:46 PM, Neal Becker wrote: > So I'm looking around the docutils install directory for a place to > install > plugins. I guess that's not the correct procedure. Since we have Jens' ear, I'm sure he can confirm, but it does not appear that the latex-math role is actually added to docutils yet. You need to run one of the latex_math scripts to get the effect.
From david.huard at gmail.com Tue Dec 12 14:09:55 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 12 Dec 2006 14:09:55 -0500 Subject: [SciPy-dev] rst2mathml In-Reply-To: References: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> <91cf711d0612121019l50e53ff4h5debfac4074211b6@mail.gmail.com> Message-ID: <91cf711d0612121109v47e21b14m43da8a355344ee84@mail.gmail.com> Neal, maybe the easiest solution is to add two lines to the setup.py scripts: 'scripts' : ['tools/rst2html.py', 'tools/rst2s5.py', 'tools/rst2latex.py', 'tools/rst2newlatex.py', 'tools/rst2xml.py', 'tools/rst2pseudoxml.py', '../sandbox/jensj/latex_math/tools/rst2latexmath.py', '../sandbox/jensj/latex_math/tools/rst2mathml.py'],} David 2006/12/12, Neal Becker : > > David Huard wrote: > > > I don't know anything about Rest either, but I think that the setup > script > > fetches another version of rst2latex. If you use the version in the > > sandbox, rst2latexmath.py, it should work. > > > > By the way, it's pretty neat. Kudos to the devs. > > > > David > > > > > > > > 2006/12/12, Neal Becker : > >> > >> Jens J?rgen Mortensen wrote: > >> > >> > On Sat, 2006-12-09 at 21:52 -0500, Alan G Isaac wrote: > >> >> On Sat, 9 Dec 2006, Jonathan Guyer apparently wrote: > >> >> > I see my confusion. I'd gotten the sandbox via svn along with > >> >> > docutils HEAD, and it doesn't work with that (it throws an error > >> >> > deep > >> >> > within docutils). I switched to docutils-0.4 and it seems > >> >> > fine. > >> >> > >> >> This problem that docutils SVN head creates for the > >> >> latex-math module in the sandbox should interest the > >> >> docutils developers and perhaps the latex-math developer > >> >> (Jens J?rgen Mortensen), so I'll forward your comment to the > >> >> docutils list. You may wish to follow-up with any useful > >> >> details. 
> >> > > >> > The problem was that I registered the latex-math directive in an old- > >> > fashioned way that did not work with new docutils versions. I fixed > it > >> > in svn so that it should work with both old and new versions. I also > >> > changed the name of the rst2latex.py script to rst2latexmath.py. > >> > > >> I'm interested in playing with this. I grabbed the current docutils > svn, > >> but it doesn't seem to work: > >> rst2latex.py README.txt > stuff > >> README.txt:1: (ERROR/3) Unknown interpreted text role "latex-math". > >> > >> .. default-role:: latex-math > >> > >> I don't know anything about rest or docutils, any hints? > >> > > OK, so the installation procedure is: > > 1) svn checkout svn://svn.berlios.de/docutils/trunk docutils > 2) ( cd docutils/docutils; python setup.py build && sudo python setup.py > install ) > 3) cd sandbox/jensj/latex_math/; sudo install *.py /usr/bin > > I was a bit confused by the doc > docutils/sandbox/jensj/latex_math/README.txt, which says > "The plug-in adds..." > > So I'm looking around the docutils install directory for a place to > install > plugins. I guess that's not the correct procedure. > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattknox_ca at hotmail.com Tue Dec 12 15:15:43 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 12 Dec 2006 15:15:43 -0500 Subject: [SciPy-dev] time series implementation approach Message-ID: Hi everyone. I have been discussing the approach I've used for my time series module (available in the sandbox) with another reader of this mailing list, and there is one particular issue that we seem to disagree on that I would like to hear some other thoughts on, if anyone has any opinions one way or the other. 
I'm going to just outline the two proposed approaches and highlight some pros/cons of each. And I'm certainly open to hearing another completely different approach all together if you have ideas. Common to both implementations is a Date class, where each Date has a frequency (daily, monthly, business days, etc) and a value. The value represents periods since the origin, where the origin is taken to be some chosen fixed date (currently 1st period in the year 1850). Also, every time series object has a frequency, and a starting date, in both proposed implementations. == Implementation A == Sub-class masked array. This allows usage of all the currently available functions and methods for masked array and minimizes the amount of work needing to be done on actually writing any custom internals. Indexing for the timeseries object is always done relative to the start and end dates of the series. So for example, if series1.start == '1 jan 1999' (shown as a string here for clarity, but not implemented as a string), and this was a daily frequency series, then series1[0] represents Jan 1, 1999, series1[2] represents the value at Jan 3, 1999, etc... The __getitem__ and __setitem__ methods would be overwritten to additionally accept a Date object of the same frequency as the series, so you could do something like: jan25val = series1[Date(freq='d',year=1999,month=1,day=25)] Functions would be provided to take multiple series and align them appropriately so they can be added together, and so forth. The drawback of this approach (relative to the next one to be discussed) is that an index used for one series has no inherent meaning to any other series unless you explicitly aligned them ahead of time. Doing something like: foo = series1[5:25] + series2[5:25] , doesn't make any sense unless you are careful to align the two series before hand. == Implementation B == Construct a new Class (let's call it ShiftingArray) that has no inherent size. 
It stores an underlying data array that is hidden from the user, and when points outside the bounds of this underlying array are requested, the array is dynamically resized to accommodate these new bounds. Index X means the same thing for any ShiftingArray. If I add two shifting arrays, they are aligned appropriately behind the scenes with no user intervention. The TimeSeries class is then constructed as a sub-class of ShiftingArray. This makes it possible to do things like the following: startDate = Date(freq='d',year=1999,month=1,day=25) endDate = startDate + 50 mySlice = slice(int(startDate),int(endDate)) foo1 = series1[mySlice] foo2 = series2[mySlice] blah = series1 + series2 without worrying about where series1 and series2 start and end. A problem with this approach is that there is more overhead than just subclassing masked array because the dynamic shifting has a cost, and existing functions will have to be wrapped in order to act on the time series objects. The internals of the Class will be more complicated, but it takes away some micro-management from the user. ====================== So... I realize this was a bit long-winded, and for that I apologize, but if you have any thoughts on the subject, please share. Thanks, - Matt Knox -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Tue Dec 12 15:57:00 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 12 Dec 2006 15:57:00 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <200612121557.00892.pgmdevlist@gmail.com> Folks, I promised Matt I would whine about the descriptions ;) Implementation A (MaskedArrays): * Slicing remains natural: should you need the last 10 values, regardless of the date, just use [-10:], which will output the values and update the corresponding starting date. * A call to foo[2] really returns the third value, regardless of the starting point.
* Adding 2 series is only possible if the series have the same frequency and the same starting date. So something like foo = series1[5:25] + series2[5:25] still makes sense. Aligning series (viz, setting the same starting and ending points) is done with a simple function. * Fine if you're more interested in the data than the dates. Implementation B (ShiftingArrays): * Slicing is trickier: should you need the last 10 values, regardless of the date, you need to find the index of the latest valid value, and count 10 backwards. Too bad if the latest value was masked. * A call to foo[3] returns the third element of the dynamic array, which may fall before the series actually starts. * Things get really simple and fast once you've learned to think in terms of dynamic arrays, as long as you don't lose track of the starting point. * Ideal if you're more interested in the dates than the values. Note that in any case, we suppose that the series are regularly spaced, but we can work on that. The code relies strongly on mx.DateTime. Part of the dirty work is done in C (for switching from one frequency to another especially). From bsouthey at gmail.com Tue Dec 12 17:32:47 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 12 Dec 2006 16:32:47 -0600 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: Hi, I guess my real concern is how either of these implementations are going to be used and abused. How does either implementation handle unevenly spaced points (both within and between series) and when the series are in different units (such as days versus weeks)? I read from Pierre's email that 'the series are regularly spaced' but I do think you need to address it sooner than later because it may show major flaws in one implementation. With uneven spacing, you probably need a sparse structure to avoid wasting resources.
With different units, one of the series must be converted either by the user or by code (which could be rather complex to get correct). Also, what you really mean by 'blah = series1 + series2'? Do you mean concatenation as with strings, or summation as with numbers, or some sort of merging of values? >From a different angle, Implementation A looks to be easier to maintain and troubleshoot than Implementation B. Also, Implementation B appears to me to have a larger overhead than Implementation A as many things have to happen 'behind the scenes'. However, it may have the advantage of only being slow in a few places and much quicker in general. Many of the time series methods can be applied to other series than just time. Are you going to allow other types of series? Regards Bruce On 12/12/06, Matt Knox wrote: > > Hi everyone. I have been discussing the approach I've used for my time > series module (available in the sandbox) with another reader of this mailing > list, and there is one particular issue that we seem to disagree on that I > would like to hear some other thoughts on, if anyone has any opinions one > way or the other. > > I'm going to just outline the two proposed approaches and highlight some > pros/cons of each. And I'm certainly open to hearing another completely > different approach all together if you have ideas. > > Common to both implementations is a Date class, where each Date has a > frequency (daily, monthly, business days, etc) and a value. The value > represents periods since the origin, where the origin is taken to be some > chosen fixed date (currently 1st period in the year 1850). Also, every time > series object has a frequency, and a starting date, in both proposed > implementations. > > == Implementation A == > > Sub-class masked array. This allows usage of all the currently available > functions and methods for masked array and minimizes the amount of work > needing to be done on actually writing any custom internals. 
Indexing for > the timeseries object is always done relative to the start and end dates of > the series. So for example, if series1.start == '1 jan 1999' (shown as a > string here for clarity, but not implemented as a string), and this was a > daily frequency series, then series1[0] represents Jan 1, 1999, series1[2] > represents the value at Jan 3, 1999, etc... > > The __getitem__ and __setitem__ methods would be overwritten to additionally > accept a Date object of the same frequency as the series, so you could do > something like: jan25val = > series1[Date(freq='d',year=1999,month=1,day=25)] > > Functions would be provided to take multiple series and align them > appropriately so they can be added together, and so forth. > > The drawback of this approach (relative to the next one to be discussed) is > that an index used for one series has no inherent meaning to any other > series unless you explicitly aligned them ahead of time. Doing something > like: foo = series1[5:25] + series2[5:25] , doesn't make any sense unless > you are careful to align the two series before hand. > > == Implementation B == > > Construct a new Class (let's call it ShiftingArray) that has no inherent > size. It stores an underlying data array that is hidden from the user, and > when points outside the bounds of this underlying array are requested, the > array is dynamically resized to accomodate these new bounds. Index X means > the same thing for any ShiftingArray. If I add two shifting arrays, they are > aligned appropriately behind the scenes with no user intervention. The > TimeSeries class is then constructed as a sub-class of ShiftingArray. This > makes it possible to do things like the following: > > startDate = Date(freq='d',year=1999,month=1,day=25) > endDate = startDate + 50 > > mySlice = slice(int(startDate),int(endDate)) > foo1 = series1[mySlice] > foo2 = series2[mySlice] > > blah = series1 + series2 > > without worrying about where series1 and series2 start and end. 
> > A problem with this approach is that there is more overhead than just sub > classing masked array because the dynamic shifting has a cost, and existing > functions will have to be wrapped in order to act on the time series > objects. The internals of the Class will be more complicated, but it takes > away some micro-management from the user. > > ====================== > > So... I realize this was a bit long winded, and for that I apologize, but if > you have any thoughts on the subject, please share. > > Thanks, > > - Matt Knox > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From mattknox_ca at hotmail.com Tue Dec 12 18:20:54 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 12 Dec 2006 18:20:54 -0500 Subject: [SciPy-dev] time series implementation approach Message-ID: > How does either implementation handles unevenly spaced points (both > within and between series) and when the series are in different units > (such as days versus weeks)? > I read from Pierre's email that 'the series are regularly spaced' but > I do think you need to address it sooner than later because it may > show major flaws in one implementation. With uneven spacing, you > probably need a sparse structure to avoid wasting resources. > With different units, one of the series must be converted either by > the user or by code (which could be rather complex to get correct). This would be handled the same in either implementation. I have already written functions in C that convert a series of one frequency to another specified frequency. Yes, these conversion functions are somewhat complex to write, but it only needs to be done once. The method is user-specifiable (eg. if going from daily to monthly, you can specify to average the values for each month, sum them, etc...). 
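Matt's daily-to-monthly frequency conversion with a user-selectable aggregation method can be sketched in pure Python. This is a minimal illustration, not the C conversion functions he refers to (which are not shown in the thread); the `convert_daily_to_monthly` helper and the dict-of-dates representation are hypothetical stand-ins for the sandbox's TimeSeries machinery:

```python
from datetime import date
from statistics import mean

def convert_daily_to_monthly(series, how=mean):
    """Collapse a {date: value} daily series to one value per (year, month).

    `how` is the user-specifiable aggregation method (mean, sum, max, ...),
    mirroring the user-selectable conversion method described above.
    Missing days are simply absent from the dict.
    """
    buckets = {}
    for day, value in series.items():
        buckets.setdefault((day.year, day.month), []).append(value)
    return {month: how(values) for month, values in sorted(buckets.items())}

# January 2005 holds the values 1..31, February 2005 is constant 10.0
daily = {date(2005, 1, d): float(d) for d in range(1, 32)}
daily.update({date(2005, 2, d): 10.0 for d in range(1, 29)})

monthly = convert_daily_to_monthly(daily)            # averages per month
monthly_sums = convert_daily_to_monthly(daily, how=sum)
```

Passing the aggregation as a plain callable keeps the averaging/summing decision with the caller, which is the same design choice Matt describes for his C conversion functions.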
In either approach the underlying data is just a masked array, so missing values would just be masked. What the indices of the series represent is dependent on the frequency of the series. So if the series is monthly frequency, and index x represents January 2005, then index x+1 represents February 2005, etc. > Also, what you really mean by 'blah = series1 + series2'? > Do you mean concatenation as with strings, or summation as with > numbers, or some sort of merging of values? I mean element-wise addition. Eg. Suppose series1 has the following data: jan 1, 2005 = 1 jan 2, 2005 = 1 jan 3, 2005 = 1 and series 2 has the following data: jan 2, 2005 = 1 jan 3, 2005 = 1 jan 4, 2005 = 1 then blah = series1 + series2 would give: jan 2, 2005 = 2 jan 3, 2005 = 2 behind the scenes what happens is that series1 and series2 are resized so that their indices match up, and then the underlying masked arrays are just added together as normal. You can take a look at my example script in the scipy sandbox if you want a clearer idea of how the current design works. http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/timeseries/examples/ > Many of the time series methods can be applied to other series than > just time. Are you going to allow other types of series? no reason not to, but I'm mainly concerned with figuring out a good structure to the TimeSeries and Date classes right now that will provide a good foundation to build on. - Matt From pgmdevlist at gmail.com Tue Dec 12 18:40:24 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 12 Dec 2006 18:40:24 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <200612121840.24868.backtopop@gmail.com> On Tuesday 12 December 2006 18:20, Matt Knox wrote: > > How does either implementation handles unevenly spaced points (both > > within and between series) and when the series are in different units > > (such as days versus weeks)? 
> > > > I read from Pierre's email that 'the series are regularly spaced' but > > I do think you need to address it sooner than later because it may > > show major flaws in one implementation. With uneven spacing, you > > probably need a sparse structure to avoid wasting resources. > > With different units, one of the series must be converted either by > > the user or by code (which could be rather complex to get correct). The way we're going, a series is regularly spaced with a given frequency. Combination of series (for example, addition) is valid only if the series have the same frequencies. If not, some conversion should take place (we're discussing the best way to do it, that's tricky indeed), but it'll be up to the user to decide which series should be converted. About gaps in the data. Well, that's yet another point of discussion. I think that the series should be made regular, with the smallest reasonable frequency. Gaps are then masked. For example, I have to work with daily values from a station that didn't record anything for some periods up to several years at a time. In order to do even the simplest thing such as plotting the data, I had to convert the data to a daily timestep, and mask the dates for which no data was recorded. Yes, that's a waste of resources, but it's far easier than trying to figure out when I have to jump from one timestep to another. And it gives the possibility to work directly with moving averages, plots, and so forth. And I can always select a period on which I don't have missing data to play with methods that can't handle them. [thinking aloud: yeah, we could handle uneven series by storing the timestep (expressed as date.relativedelta, for example) along with the data, just like a mask is stored along the data in a masked array. But then, to combine 2 series, you would have to check for the compatibility of their timesteps as well. That's getting messy.
Nah, the easiest is really to stick with regular freqs, and provide functions to fill the gaps with ma.masked] > > Also, what you really mean by 'blah = series1 + series2'? > > Do you mean concatenation as with strings, or summation as with > > numbers, or some sort of merging of values? > > I mean element-wise addition. > > behind the scenes what happens is that series1 and series2 are resized so > that their indices match up, and then the underlying masked arrays are just > added together as normal. You can take a look at my example script in the > scipy sandbox if you want a clearer idea of how the current design works. You end up with a series that starts in 01/01/2005 and ends in 01/04/2005, but where the first and last data are masked. > > Many of the time series methods can be applied to other series than > > just time. Are you going to allow other types of series? > > no reason not to, but I'm mainly concerned with figuring out a good > structure to the TimeSeries and Date classes right now that will provide a > good foundation to build on. I agree with Matt. Here, we're interested in objects that keep temporal information along some regular information (be it amount of rain, or share value). What I think you mean by timeseries method (autocorrelation coefficients, for example) is yet another problem. And has nothing to do with dates per se (more with masked arrays...) From dalcinl at gmail.com Tue Dec 12 18:49:52 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 12 Dec 2006 20:49:52 -0300 Subject: [SciPy-dev] FE+sparse utilities In-Reply-To: <457D8630.8060509@ntc.zcu.cz> References: <457D8630.8060509@ntc.zcu.cz> Message-ID: On 12/11/06, Robert Cimrman wrote: > > I would greatly appreciate any feedback, Hi, Robert... I will take a look at your code tomorrow. I am very interested in this. I am developing a big, complete wrapper to the PETSc libraries (petsc4py) and this subject is a priority for me. I currently try to avoid doing assembly on the Python side.
However, the most efficient way I've found is to write on the C side a new (Python) type object containing a pointer to the real matrix structure, and implement the mapping protocol. This way, you can almost skip Python bytecode. Regards, -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From jensj at fysik.dtu.dk Wed Dec 13 02:29:19 2006 From: jensj at fysik.dtu.dk (Jens Jørgen Mortensen) Date: Wed, 13 Dec 2006 08:29:19 +0100 Subject: [SciPy-dev] rst2mathml In-Reply-To: <7F5B41D9-C9F0-4B9F-AF87-3BCEEEF5F6DD@nist.gov> References: <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <1165939147.25986.24.camel@doppler.fysik.dtu.dk> <91cf711d0612121019l50e53ff4h5debfac4074211b6@mail.gmail.com> <7F5B41D9-C9F0-4B9F-AF87-3BCEEEF5F6DD@nist.gov> Message-ID: <1165994960.25986.26.camel@doppler.fysik.dtu.dk> On Tue, 2006-12-12 at 14:07 -0500, Jonathan Guyer wrote: > On Dec 12, 2006, at 1:46 PM, Neal Becker wrote: > > > So I'm looking around the docutils install directory for a place to > > install > > plugins. I guess that's not the correct procedure. > > Since we have Jens' ear, I'm sure he can confirm, but it does not > appear that the latex-math role is actually added to docutils yet. Correct! You need to use the special rst2mathml.py and rst2latexmath.py scripts. > You need to run one of the latex_math scripts to get the effect.
> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From david.douard at logilab.fr Wed Dec 13 04:55:32 2006 From: david.douard at logilab.fr (David Douard) Date: Wed, 13 Dec 2006 10:55:32 +0100 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <20061213095532.GA4268@crater.logilab.fr> Hi, I have not read carefully enough the proposed implementations so I won't give my POV on this. I have coded a pure Python time series object for a project I'm working on right now. I attach the code here. The TS class I've coded is basically a non-regular TS: the time vector is kept as a simple float numpy array, values representing gmticks (mx.DateTime). TS can be linearly interpolated or step TS. Arithmetic on TS does not require the TS to share the same time support. This is just in case someone could be interested... Note that this is unfinished WIP, probably buggy code, etc. On Tue, Dec 12, 2006 at 03:15:43PM -0500, Matt Knox wrote: > Hi everyone. I have been discussing the approach I've used for my time series module (available in the sandbox) with another reader of this mailing list, and there is one particular issue that we seem to disagree on that I would like to hear some other thoughts on, if anyone has any opinions one way or the other. > I'm going to just outline the two proposed approaches and highlight some pros/cons of each. And I'm certainly open to hearing another completely different approach all together if you have ideas. > Common to both implementations is a Date class, where each Date has a frequency (daily, monthly, business days, etc) and a value. The value represents periods since the origin, where the origin is taken to be some chosen fixed date (currently 1st period in the year 1850). Also, every time series object has a frequency, and a starting date, in both proposed implementations.
> == Implementation A == > Sub-class masked array. This allows usage of all the currently available functions and methods for masked array and minimizes the amount of work needing to be done on actually writing any custom internals. Indexing for the timeseries object is always done relative to the start and end dates of the series. So for example, if series1.start == '1 jan 1999' (shown as a string here for clarity, but not implemented as a string), and this was a daily frequency series, then series1[0] represents Jan 1, 1999, series1[2] represents the value at Jan 3, 1999, etc... > The __getitem__ and __setitem__ methods would be overridden to additionally accept a Date object of the same frequency as the series, so you could do something like: jan25val = series1[Date(freq='d',year=1999,month=1,day=25)] > Functions would be provided to take multiple series and align them appropriately so they can be added together, and so forth. > The drawback of this approach (relative to the next one to be discussed) is that an index used for one series has no inherent meaning to any other series unless you explicitly aligned them ahead of time. Doing something like: foo = series1[5:25] + series2[5:25] doesn't make any sense unless you are careful to align the two series beforehand. > > == Implementation B == > Construct a new Class (let's call it ShiftingArray) that has no inherent size. It stores an underlying data array that is hidden from the user, and when points outside the bounds of this underlying array are requested, the array is dynamically resized to accommodate these new bounds. Index X means the same thing for any ShiftingArray. If I add two shifting arrays, they are aligned appropriately behind the scenes with no user intervention. The TimeSeries class is then constructed as a sub-class of ShiftingArray.
This makes it possible to do things like the following:
> startDate = Date(freq='d', year=1999, month=1, day=25)
> endDate = startDate + 50
> mySlice = slice(int(startDate), int(endDate))
> foo1 = series1[mySlice]
> foo2 = series2[mySlice]
> blah = series1 + series2
> without worrying about where series1 and series2 start and end. > A problem with this approach is that there is more overhead than just subclassing masked array because the dynamic shifting has a cost, and existing functions will have to be wrapped in order to act on the time series objects. The internals of the Class will be more complicated, but it takes away some micro-management from the user. > ====================== > So... I realize this was a bit long-winded, and for that I apologize, but if you have any thoughts on the subject, please share. > Thanks, > - Matt Knox > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev -- David Douard LOGILAB, Paris (France) Python, Zope, Plone, Debian training: http://www.logilab.fr/formations Custom software development: http://www.logilab.fr/services Scientific computing: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: timeseries.py Type: text/x-python Size: 14200 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From pgmdevlist at gmail.com Wed Dec 13 06:06:17 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 13 Dec 2006 06:06:17 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: <20061213095532.GA4268@crater.logilab.fr> References: <20061213095532.GA4268@crater.logilab.fr> Message-ID: <200612130606.17919.pgmdevlist@gmail.com> David, Thank you very much for the sources. I guess a lot of people on this list have their own implementation of timeseries... Storing a date along with the data does indeed give the possibility to deal with gapped series. However, things become quite messy when you have to combine several series: how do you deal with the gaps? Your approach (from the glance I got) assumes either a step or a linear interpolation, which should work nicely in your case, but is doubtfully applicable to other cases. In fact, you just gave me an idea. In maskedarrays, the mask can be nomask (viz, no masked data, surprise), which greatly simplifies most operations. Here, we could have a frequency of nofreq, which would indicate that the time step is not constant: one simple option is then to cast it to the smallest reasonable timestep, and set the missing values to "masked". (By reasonable, I mean average: if most of your data is roughly monthly but for a couple of daily ones, stick to monthly. Unless you really want to go daily. Dammit, we're going to have to leave this possibility open). So yeah, we may eventually have to consider varying frequencies. Which would mean that the shifting array approach will have to be reexamined. Maybe not: if freq is nofreq, then it'll panic, for sure. But if freq is set, then it remains a quite viable option. OK, now I stop thinking aloud. David, could you tell us more about what you need? What kind of data do you have to work with? Are missing dates something you have to deal with on a very regular basis?
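[Editor's note: the ShiftingArray behaviour from Matt's earlier message — indices with absolute meaning, storage that grows on demand, automatic alignment on addition — can be sketched as a toy Python class. Illustrative only; names and semantics are assumptions, not the sandbox code:]

```python
import numpy as np

class ShiftingArray:
    """Toy sketch of Implementation B: index i has absolute meaning, the
    backing array grows on demand, and addition aligns automatically.
    Out-of-range reads return a fill value (NaN, standing in for 'masked')."""

    def __init__(self, start, values, fill=np.nan):
        self.start = start                      # absolute index of values[0]
        self.fill = fill
        self.values = np.asarray(values, dtype=float)

    def _grow(self, lo, hi):
        # Resize so that absolute indices lo..hi-1 are backed by storage.
        new_lo = min(lo, self.start)
        new_hi = max(hi, self.start + len(self.values))
        fresh = np.full(new_hi - new_lo, self.fill)
        offset = self.start - new_lo
        fresh[offset:offset + len(self.values)] = self.values
        self.start, self.values = new_lo, fresh

    def __getitem__(self, i):
        if not (self.start <= i < self.start + len(self.values)):
            return self.fill
        return self.values[i - self.start]

    def __setitem__(self, i, v):
        self._grow(i, i + 1)                    # dynamic resize on assignment
        self.values[i - self.start] = v

    def __add__(self, other):
        # Align on the union of the two ranges; NaN marks non-overlap.
        lo = min(self.start, other.start)
        hi = max(self.start + len(self.values), other.start + len(other.values))
        return ShiftingArray(lo, [self[i] + other[i] for i in range(lo, hi)])

a = ShiftingArray(0, [1.0, 2.0, 3.0])   # covers absolute indices 0..2
b = ShiftingArray(1, [10.0, 20.0])      # covers absolute indices 1..2
c = a + b
print(c[1], c[2])   # -> 12.0 23.0  (c[0] is NaN: no overlap there)
```

This is where the extra overhead Matt mentions comes from: every assignment and addition may pay for a reallocation and copy that a plain masked-array subclass would not.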
From cimrman3 at ntc.zcu.cz Wed Dec 13 07:52:55 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 13 Dec 2006 13:52:55 +0100 Subject: [SciPy-dev] FE+sparse utilities In-Reply-To: References: <457D8630.8060509@ntc.zcu.cz> Message-ID: <457FF7A7.6000309@ntc.zcu.cz> Hi Lisandro, Lisandro Dalcin wrote: > On 12/11/06, Robert Cimrman wrote: >> I would greatly appreciate any feedback, > > Hi, Robert... I will take a look tomorrow at your code. I am very > interested in this. I am developing a big, complete wrapper to PETSc > libraries (petsc4py) and this subject is a priority for me. I I have always wanted to have PETSc accessible in Python so that I could try it easily, great! > currently try to avoid assembles in the Python side. However, the most > efficient way I've found is to write in the C side a new (Python) type > object containing a pointer to the real matrix structure, and > implement the mapping protocol. This way, you can almost skip Python > bytecode. Feutils is a very simple package good for: 1. given finite element connectivity of degrees of freedom (= mesh connectivity, if 1 DOF per node), prepare a global CSR matrix with nonzero pattern exactly corresponding to that connectivity. Thus, when assembling, no reallocations, data shifts etc. are needed; 2. given a bunch of element contributions, assemble them into the global matrix. The actual matrix allocation as well as element contribution assembling are done in C (via swig), too. But I use standard numpy (array) and scipy (CSR sparse matrix) data types to store the data. The key point is to compute in one C call many (at least 1000) local element contributions, and then assemble all of them also in one function call - that way the time spent in Python is minimized. The C code assumes 32-bit integers for indices and 64-bit floats for values. I can make a 64-bit version, too, if there is a need. I will add this e-mail's content to the web page (http://ui505p06-mbs.ntc.zcu.cz/mymoin/SFE).
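[Editor's note: step 1 above — precomputing the CSR nonzero pattern from the element connectivity — can be approximated with standard scipy.sparse calls. A rough pure-Python sketch of the idea (feutils does the allocation in C), using a made-up 1-D mesh:]

```python
import numpy as np
import scipy.sparse as sp

# Connectivity: one row per element, listing its DOF numbers
# (3 linear elements on a chain of 5 DOFs, 1 DOF per node).
conn = np.array([[0, 1, 2],
                 [1, 2, 3],
                 [2, 3, 4]])
n_ep = conn.shape[1]   # DOFs per element

# Every pair of DOFs within an element couples -> one potential nonzero.
rows = np.repeat(conn, n_ep, axis=1).ravel()
cols = np.tile(conn, (1, n_ep)).ravel()

# Build the pattern once with explicit zero entries; COO -> CSR merges
# duplicate (row, col) pairs, so later assembly never reallocates.
pattern = sp.coo_matrix((np.zeros(rows.size), (rows, cols)),
                        shape=(5, 5)).tocsr()
print(pattern.nnz)   # -> 19 distinct couplings for this mesh
```

With the pattern fixed, assembling a batch of element matrices is then just adding values into already-allocated slots, which is what makes the single-C-call batching pay off.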
Thanks for your interest, r. From ndbecker2 at gmail.com Wed Dec 13 09:08:47 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 13 Dec 2006 09:08:47 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: David Huard wrote: > Hi all, > > I followed this thread with much interest and I wouldn't like it to die > without some kind of consensus being reached for the Scipy documentation > system. Am I correct to say that Epydoc + REST/Latex seems the way to go? > > If this is the case, what's next? I'm not familiar with any of this, but > it'd be great if someone knowledgeable could define a roadmap and create a > couple of tickets so that people like me could contribute small steps. > I'm curious about this also. Is the idea that epydoc can be hacked to work with a hacked docutils? So, the plan is to incorporate docutils modifications (so that rst2latex would recognize latex-math role), and then to somehow hook this into epydoc? (And, neither of these is yet available for testing, right?) From david.huard at gmail.com Wed Dec 13 09:37:15 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 13 Dec 2006 09:37:15 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: <200612130606.17919.pgmdevlist@gmail.com> References: <20061213095532.GA4268@crater.logilab.fr> <200612130606.17919.pgmdevlist@gmail.com> Message-ID: <91cf711d0612130637g13a99c11sf63f83098168252f@mail.gmail.com> For the record: I use time series to 1. Convert daily data (precip, ET, discharges) to aggregated monthly data. 2.
Extract the longest continuous series from a gappy series (sometimes there are dates but missing values (-99 for instance), and sometimes the dates themselves are missing). Wish list: To be able to apply a method on logical chunks of data. For instance, from a daily series, compute the monthly mean and variance. Or is that already implemented? David Huard 2006/12/13, Pierre GM : > > David, > Thank you very much for the sources. > I guess a lot of people on this list have their own implementation of > timeseries... > > Storing a date along with the data does indeed give the possibility to deal with gapped > series. However, things become quite messy when you have to combine > several > series: how do you deal with the gaps? Your approach (from the glance I > got) > assumes either a step or a linear interpolation, which should work nicely > in > your case, but is doubtfully applicable to other cases. > > In fact, you just gave me an idea. In maskedarrays, the mask can be nomask > (viz, no masked data, surprise), which greatly simplifies most operations. > Here, we could have a frequency of nofreq, which would indicate that the > time > step is not constant: one simple option is then to cast it to the smallest > reasonable timestep, and set the missing values to "masked". (By > reasonable, > I mean average: if most of your data is roughly monthly but for a couple > of > daily ones, stick to monthly. Unless you really want to go daily. Dammit, > we're > going to have to leave this possibility open). > > So yeah, we may eventually have to consider varying frequencies. Which > would > mean that the shifting array approach will have to be reexamined. Maybe > not: > if freq is nofreq, then it'll panic, for sure. But if freq is set, then it > remains a quite viable option. > > OK, now I stop thinking aloud. David, could you tell us more about what > you > need? What kind of data do you have to work with? Are missing dates > something you have to deal with on a very regular basis?
> _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Dec 13 09:44:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 13 Dec 2006 09:44:04 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov><91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: On Wed, 13 Dec 2006, Neal Becker apparently wrote: > I'm curious about this also. Is the idea that epydoc can be hacked to work > with a hacked docutils? So, the plan is to incorporate docutils > modifications (so that rst2latex would recognize latex-math role), and then > to somehow hook this into epydoc? (And, neither of these is yet available > for testing, right?) Do not forget rest2mathml, which produces XHTML+MathML. FireFox handles this great. (I haven't tried IE 7: does it yet handle XML?) If someone can patch (already wonderful) epydoc to allow this, that would allow truly lovely online documentation of technical code. (In the meantime, I imagine you could pass the LaTeX raw: the people who care about the math will often be able to read the LaTeX.) Cheers, Alan Isaac From guyer at nist.gov Wed Dec 13 09:42:00 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 13 Dec 2006 09:42:00 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. 
In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: On Dec 13, 2006, at 9:08 AM, Neal Becker wrote: > I'm curious about this also. Is the idea that epydoc can be hacked > to work > with a hacked docutils? So, the plan is to incorporate docutils > modifications (so that rst2latex would recognize latex-math role), > and then > to somehow hook this into epydoc? (And, neither of these is yet > available > for testing, right?) Unhacked epydoc works with unhacked docutils right now. My hacks to epydoc are because I don't like how it looks on the page and too much of the LaTeX output is hard-coded. I refactored it so that you can supply your own .sty file to get the output to look the way you want (and I wrote a style to mimic the output that epydoc already had for people who like the way it was (presumably Ed Loper is one of those)). We build our manuals (and webpage) via a "build_docs" command to our setup.py script. This command invokes the eypdoc command-line tool, which in turn imports docutils. I'm not sure what would be needed to get epydoc to see the latex-math role. epydoc appeared to have stagnated for awhile, but it's been active lately, with the last alpha release in August and svn checkins as recently as ten days ago, so I think this is probably a good time to put our two cents in for what we'd like to see. Certainly it's a good time for me to bring my changes up to date and pass them back. From ndbecker2 at gmail.com Wed Dec 13 10:29:55 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 13 Dec 2006 10:29:55 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. 
References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: Jonathan Guyer wrote: > > On Dec 13, 2006, at 9:08 AM, Neal Becker wrote: > >> I'm curious about this also. Is the idea that epydoc can be hacked >> to work >> with a hacked docutils? So, the plan is to incorporate docutils >> modifications (so that rst2latex would recognize latex-math role), >> and then >> to somehow hook this into epydoc? (And, neither of these is yet >> available >> for testing, right?) > > Unhacked epydoc works with unhacked docutils right now. ... Sorry, I'm not sure what you mean here. I am looking at epydoc3.0alpha3 source. I see that epydoc can _parse_ rst. Is that what you mean by working with docutils? I also see that epydoc can write (some kind of) latex. Is that the idea? From mattknox_ca at hotmail.com Wed Dec 13 10:31:07 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 13 Dec 2006 10:31:07 -0500 Subject: [SciPy-dev] time series implementation approach Message-ID: Hi David, computing monthly averages from a daily series is already implemented. 
Here is the code that would accomplish that with the current implementation:
========================
import timeseries as ts
import numpy as np

startDate = ts.Date(freq='DAILY', year=1999, month=5, day=15)

# create a series with random data (note that the values could also be a masked array)
myDailyData = ts.TimeSeries(np.random.uniform(-100,100,600), freq='DAILY', observed='AVERAGED', startIndex=startDate)

# compute monthly mean (observed argument is optional, and will use the observed attribute of the time series if not specified)
monthlyAverageSeries = myDailyData.convert(freq='MONTHLY', observed='AVERAGED')

print monthlyAverageSeries
========================
Computing monthly variance cannot be done easily at the moment. The frequency conversion code is implemented in C and the method of conversion is applied at the same time in one step (averaged in this example). Pierre made a good suggestion that I am going to look at doing shortly, which is to just group the data in C and return a 2 dimensional array to python, and then in python apply a user-specified function along the axis. It will be slower than doing the whole thing in C, but I think the flexibility is worth the trade-off. This would make it very easy to compute monthly variance if that is what you were after. As far as irregularly spaced data goes... I think Pierre's suggestion of a "nofreq" frequency would work pretty well. It should be fairly easy to support that special case, and also support converting it to a more rigid frequency. I'm still wavering on the ShiftingArray vs sub-classing MaskedArray approach as the backbone of the TimeSeries object. At this moment I think I am leaning more towards sub-classing MaskedArray (bet you didn't expect that, Pierre!), but who knows what I will feel like this afternoon.
- Matt From guyer at nist.gov Wed Dec 13 11:02:34 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 13 Dec 2006 11:02:34 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: <2FFBCC54-9C58-4EC6-8D38-C15FDB768511@nist.gov> On Dec 13, 2006, at 10:29 AM, Neal Becker wrote: > Sorry, I'm not sure what you mean here. I am looking at > epydoc3.0alpha3 > source. I have no experience with epydoc 3, other than knowing it exists. It's on my list. > I see that epydoc can _parse_ rst. Is that what you mean by > working with docutils? Yes. What did you mean? > I also see that epydoc can write (some kind of) > latex. Is that the idea? Yes. Feed Python code (with reST docstrings) through epydoc and you get LaTeX or HTML out. For straight reST files, like READMEs, we just invoke docutils directly. I don't think epydoc is of any use there. From pgmdevlist at gmail.com Wed Dec 13 11:21:19 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 13 Dec 2006 11:21:19 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <200612131121.19938.pgmdevlist@gmail.com> On Wednesday 13 December 2006 10:31, Matt Knox wrote: > Hi David, David, We're working on the same kind of data. I already have routines for my TimeSeries to group daily data to season (a season being defined as a group of months such as DJF, MAM...), and compute whatever stats on those groups. The bottleneck is the grouping of the data, done entirely in Python as I speak no C whatsoever. It works pretty well, but it's slow. And that's somehow an easy case. Matt gave me some far more gruesome examples of manipulation...
> Computing monthly variance cannot be done easily at the moment. The > frequency conversion code is implemented in C and the method of conversion > is applied at the same time in one step (averaged in this example). Pierre > made a good suggestion that I am going to look at doing shortly, which is > to just group the data in C and return a 2 dimensional array to python, Well, ultimately the idea would be to group a 2D array (where each column represents a 1D series) in C, and get a 3D array. > I'm still wavering on the ShiftingArray vs sub-classing MaskedArray > approach as the backbone of the TimeSeries object. At this moment I think I > am leaning more towards sub-classing MaskedArray (bet you didn't expect > that, Pierre!), but who knows what I will feel like this afternoon. Oh, you'll probably still be leaning towards MA (I'll make sure of that). And we can still have the ShiftingTimeSeries approach in parallel. From ndbecker2 at gmail.com Wed Dec 13 11:21:17 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 13 Dec 2006 11:21:17 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <2FFBCC54-9C58-4EC6-8D38-C15FDB768511@nist.gov> Message-ID: Jonathan Guyer wrote: > > On Dec 13, 2006, at 10:29 AM, Neal Becker wrote: > >> Sorry, I'm not sure what you mean here. I am looking at >> epydoc3.0alpha3 >> source. > > I have no experience with epydoc 3, other than knowing it exists. > It's on my list. > >> I see that epydoc can _parse_ rst. Is that what you mean by >> working with docutils? > > Yes. What did you mean? > >> I also see that epydoc can write (some kind of) >> latex. Is that the idea? > > Yes.
> > Feed Python code (with reST docstrings) through epydoc and you get > LaTeX or HTML out. For straight reST files, like READMEs, we just > invoke docutils directly. I don't think epydoc is of any use there. I'm looking to markup my python code, including latex so I can nicely include math. I see I could use epydoc, and that epydoc can read rst, but does this allow me to include latex? If so, how? And what backends would then be used to generate either pdf or (x)html? From guyer at nist.gov Wed Dec 13 11:52:32 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 13 Dec 2006 11:52:32 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <2FFBCC54-9C58-4EC6-8D38-C15FDB768511@nist.gov> Message-ID: <24147904-1277-4349-B805-C164E66B579E@nist.gov> On Dec 13, 2006, at 11:21 AM, Neal Becker wrote: > I'm looking to markup my python code, including latex so I can nicely > include math. I see I could use epydoc, and that epydoc can read > rst, but > does this allow me to include latex? If so, how? And what > backends would > then be used to generate either pdf or (x)html? Right now, you use .. raw:: latex directives. Kind of clumsy for inline expressions, but not too bad for display equations. For long stretches of inline math in text, we just do ".. raw:: latex" and write everything in LaTeX. Not ideal, but works OK. I'm certainly looking forward to cleaning everything up with latex-math. Once you have that, invoking epydoc --latex mypython.py generates a directory of .tex files and epydoc --html mypython.py generates a directory of .html files. Epydoc has a --pdf option, but it goes via .dvi --> .ps --> .pdf, which is kind of silly in this day and age. 
epydoc generates a master api.tex file, but we ignore and/or manipulate that and use our own that we feed straight through pdflatex. From david.huard at gmail.com Wed Dec 13 13:11:30 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 13 Dec 2006 13:11:30 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <91cf711d0612131011u3fccb6ebvb75f4bad47c923d0@mail.gmail.com> Hi Matt, I don't know how difficult that would be to implement but something like this would be very useful: monthlyAverageSeries = myDailyData.convert(freq='MONTHLY', apply='var') where apply can be any array method. In my case, where I have to compute aggregated data, monthly sums of daily values, i'd just do monthlySeries = myDailyData.convert(freq='MONTHLY', apply='sum') et voila ! And it would spare you the job to code those functions. A lazy implementation could be to make a callback to python, but I'm pretty sure Travis has something in the API that would allow this directly from the C code (PyUFunc_O_O_method ?) Cheers, David monthlyAverageSeries = myDaiyData.convert > (freq='MONTHLY',observed='AVERAGED') > > print monthlyAverageSeries > ======================== > > Computing monthly variance can not be done easily at the moment. The > frequency conversion code is implemented in C and the method of conversion > is applied at the same time in one step (averaged in this example). Pierre > made a good suggestion that I am going to look at doing shortly, which is to > just group the data in C and return a 2 dimensional array to python, and > then in python apply a user specified function along the axis. It will be > slower than doing the whole thing in C, but I think the flexibility is worth > the trade off. This would make it very easy to compute monthly variance if > that is what you were after. > > As far as irregularily spaced data goes... I think Pierre's suggestion of > a "nofreq" frequency would work pretty well. 
It should be fairly easy to > support that special case, and also support converting it to a more rigid > frequency. > > I'm still wavering on the ShiftingArray vs sub-classing MaskedArray > approach as the backbone of the TimeSeries object. At this moment I think I > am leaning more towards sub-classing MaskedArray (bet you didn't expect > that, Pierre!), but who knows what I will feel like this afternoon. > > - Matt > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Dec 13 13:29:46 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 13 Dec 2006 13:29:46 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: <91cf711d0612131011u3fccb6ebvb75f4bad47c923d0@mail.gmail.com> References: <91cf711d0612131011u3fccb6ebvb75f4bad47c923d0@mail.gmail.com> Message-ID: <200612131329.47300.pgmdevlist@gmail.com> > implementation could be to make a callback to python, but I'm pretty sure > Travis has something in the API that would allow this directly from the C > code (PyUFunc_O_O_method ?) David, I think it'd be easier if the C grouping function sends back an array that you would have to process, for several reasons: - you have full control over the postprocessing - you don't have to group your data each time you need a new postprocessing - masked data are very easy to deal with in python, and I can only assume that they're more difficult to deal with in C. Suppose you have your daily series that you resampled to monthly: with this approach, you just group data once, and then can compute your mean and variance on the groups. You need a median? Well, your data's already grouped. There were gaps in your data? Well, you have masked values.
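[Editor's note: the grouping idea Pierre describes can be sketched with plain numpy.ma — pack each month's daily values into one row of a masked 2-D array, padding short months with masked entries, then apply any reduction along axis 1. A toy sketch with made-up data, not the timeseries module's actual API:]

```python
import numpy as np
import numpy.ma as ma

# Fake daily series spanning three "months" of unequal length.
month_lengths = [31, 28, 31]
daily = np.arange(sum(month_lengths), dtype=float)

# Group once: one row per month, short rows padded with masked cells.
grouped = ma.masked_all((len(month_lengths), max(month_lengths)))
pos = 0
for row, n in enumerate(month_lengths):
    grouped[row, :n] = daily[pos:pos + n]
    pos += n

# Any reduction now works along axis 1, masked padding ignored:
monthly_mean = grouped.mean(axis=1)   # the 'observed=AVERAGED' case
monthly_var = grouped.var(axis=1)     # the variance David asked about
monthly_sum = grouped.sum(axis=1)     # the apply='sum' case
print(monthly_mean)
```

Gaps in the original data would simply stay masked inside `grouped`, so the same reductions keep working, which is Pierre's point about masked data being easy to handle on the Python side.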
Matt pointed quite justly that this approach doesn't work for some pathological cases, but that shouldn't be that common... From pgmdevlist at gmail.com Wed Dec 13 13:34:25 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 13 Dec 2006 13:34:25 -0500 Subject: [SciPy-dev] time series implementation approach In-Reply-To: <91cf711d0612131011u3fccb6ebvb75f4bad47c923d0@mail.gmail.com> References: <91cf711d0612131011u3fccb6ebvb75f4bad47c923d0@mail.gmail.com> Message-ID: <200612131334.26090.pgmdevlist@gmail.com> (note to self: don't send an email when it's not finished) > where apply can be any array method. In my case, where I have to compute > aggregated data, monthly sums of daily values, I'd just do > > monthlySeries = myDailyData.convert(freq='MONTHLY', apply='sum') > et voilà ! As a follow-up: David, you should really check what Matt did. In addition to "freq", his time series have an attribute "observed" that dictates how to postprocess the data. Your "apply" flag is an idea we were toying with, and is not incompatible with the approach I just described. Should you need the grouped data, you'd just use "apply=None". From dalcinl at gmail.com Wed Dec 13 14:02:19 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 13 Dec 2006 16:02:19 -0300 Subject: [SciPy-dev] FE+sparse utilities In-Reply-To: <457FF7A7.6000309@ntc.zcu.cz> References: <457D8630.8060509@ntc.zcu.cz> <457FF7A7.6000309@ntc.zcu.cz> Message-ID: On 12/13/06, Robert Cimrman wrote: > I have always wanted to have PETSc accessible in Python so that I could > try it easily, great! You can find it at PyPI. I routinely use it to solve NS equations with monolithic FEM formulation using millions of elements (in a cluster, of course). The element matrix computation and global matrix assembly are both on the C side. But the Python side is great to manage the linear/nonlinear (KSP/SNES) solution.
And let me say that PETSc is really a great, great library, especially if you want to go to multiprocessor architectures. For single-proc machines, you don't even need MPI. > Feutils is a very simple package good for: > 1. given finite element connectivity of degrees of freedom (= mesh > connectivity, if 1 DOF per node), prepare a global CSR matrix with > nonzero pattern exactly corresponding to that connectivity. Thus, when > assembling, no reallocations, data shifts etc. are needed; > 2. given a bunch of element contributions, assemble them into the global > matrix. So I understand you mean this bunch should be a contiguous n-dim array. What does (3, 1, 4, 4) stand for in the line below? mtxInEls = nm.ones( (3, 1, 4, 4), dtype = nm.float64 ) > The actual matrix allocation as well as element contribution assembling > are done in C (via swig), too. But I use standard numpy (array) and > scipy (CSR sparse matrix) data types to store the data. The key point is > to compute in one C call many (at least 1000) local element > contributions, and then assemble all of them also in one function call > that way the time spent in Python is minimized. Ok! So you work with scipy CSR matrices. I fully understand your approach now. I think I will follow your approach in my own project for a next release. But I still think many users will feel more comfortable being able to do something like A[rows, cols] = values where rows, cols, values should be interpreted as stacked arrays with the right sizes. But then there should be a way to specify if you want to just put or instead add values in the global matrix structure.
Regards, -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From cimrman3 at ntc.zcu.cz Thu Dec 14 04:58:44 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 10:58:44 +0100 Subject: [SciPy-dev] FE+sparse utilities In-Reply-To: References: <457D8630.8060509@ntc.zcu.cz> <457FF7A7.6000309@ntc.zcu.cz> Message-ID: <45812054.3070408@ntc.zcu.cz> Lisandro Dalcin wrote: > On 12/13/06, Robert Cimrman wrote: >> I have always wanted to have PETSc accessible in Python so that I could >> try it easily, great! > > You can find it at PyPI. I routinely use it to solve NS equations with > monolithic FEM formulation using millions of elements (in a cluster, of > course). The element matrix computation and global matrix assembly > are both on the C side. But the Python side is great to manage the > linear/nonlinear (KSP/SNES) solution. I do the same thing - logic in Python, real computations (not covered by numpy/scipy already) in C. > And let me say that PETSc is really a great, great library, especially > if you want to go to multiprocessor architectures. For single-proc > machines, you don't even need MPI. Yeah, it is great. I have looked at it several times already in the past, but never really coded anything with it, though :( >> Feutils is a very simple package good for: >> 1. given finite element connectivity of degrees of freedom (= mesh >> connectivity, if 1 DOF per node), prepare a global CSR matrix with >> nonzero pattern exactly corresponding to that connectivity. Thus, when >> assembling, no reallocations, data shifts etc. are needed; >> 2. given a bunch of element contributions, assemble them into the global >> matrix.
> > So I understand you mean this bunch should be a contiguous n-dim > array. What does (3, 1, 4, 4) stand for in the line below? > > mtxInEls = nm.ones( (3, 1, 4, 4), dtype = nm.float64 ) If you look at the next line 'assembleMatrix( mtx, mtxInEls, [0, 10, 50], conn )', you can see that the third argument is a list of three numbers, saying that mtxInEls contains the element contributions of the three elements conn[[0,10,50],:]: this is the '3' in the shape tuple. Next is the number of quadrature points, but here you assemble a local matrix already integrated over the element, so it equals '1'. The last two numbers are the actual dimensions of the local matrix (4x4). >> The actual matrix allocation as well as element contribution assembling >> are done in C (via swig), too. But I use standard numpy (array) and >> scipy (CSR sparse matrix) data types to store the data. The key point is >> to compute in one C call many (at least 1000) local element >> contributions, and then assemble all of them also in one function call; >> that way the time spent in Python is minimized. > > Ok! So you work with scipy CSR matrices. I fully understand your > approach now. I think I will follow your approach in my own project > for a next release. > > But I still think many users will feel more comfortable being able > to do something like > > A[rows, cols] = values > > where rows, cols, values should be interpreted as stacked arrays with > the right sizes. But then there should be a way to specify if you > want to just put or instead add values in the global matrix structure. A[rows, cols] = values should put values, while A[rows, cols] += values should add them? I admit that my approach is a bit more low-level, but in the full code this is not visible to the user, who just defines the terms of the equations and the rest is automatic (look at the simple example in poisson.pdf to get a feel). And one could wrap it so that it is used under the covers in expressions like yours. :) Cheers, r.
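The (nEl, nQP, nRow, nCol) layout described above can be mimicked with a plain NumPy loop. This is a sketch only: the connectivity array, the dense global matrix, and all names here are hypothetical stand-ins for feutils' preallocated-CSR assembly in C:

```python
import numpy as np

n_dof = 20
# hypothetical connectivity: 51 elements, 4 DOFs each (1 DOF per node)
conn = np.arange(51 * 4).reshape(51, 4) % n_dof

# (nEl, nQP, nRow, nCol): 3 elements, 1 quadrature point, 4x4 local matrices
mtx_in_els = np.ones((3, 1, 4, 4), dtype=np.float64)
elements = [0, 10, 50]                  # which rows of conn are assembled

glob = np.zeros((n_dof, n_dof))         # dense stand-in for the CSR matrix
for ie, el in enumerate(elements):
    dofs = conn[el]
    local = mtx_in_els[ie].sum(axis=0)  # sum over quadrature points -> 4x4
    glob[np.ix_(dofs, dofs)] += local   # scatter-add into the global matrix
```

The real code does this scatter-add for all elements in one C call against the preallocated CSR nonzero pattern; the loop above only illustrates the indexing implied by the shape tuple.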
From cimrman3 at ntc.zcu.cz Thu Dec 14 07:19:40 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 13:19:40 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <457805EA.3060405@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> Message-ID: <4581415C.6010001@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: >> Nils Wagner wrote: >> >>>> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >>>> part of compilation output? >>>> >>>> >>> How can I redirect the output of python setup.py install into a file ? >>> >> python setup.py install &> out.txt (using bash shell, both standard >> output and standard error are redirected) >> or just >> python setup.py install > out.txt (standard output only) >> r. >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > Ok I will send you out.txt off-list. Nils, I have tried umfpack on 64 bits, fresh scipy install. To get it compiled, just copy UFconfig.h into the directory where you have umfpack.h (..UFsparse/UMFPACK/Include). Then everything works like a charm. scipy.test(10,10) ... Ran 1616 tests in 78.011s OK Out[2]: I am clueless why only the first path specified in site.cfg is taken into account (I have tried both ',' and ':' as separators). Does anybody here have an idea? example: include_dirs = ..../UFsparse/UMFPACK/Include:..../UFsparse/UFconfig - the second path is ignored r.
From nwagner at iam.uni-stuttgart.de Thu Dec 14 08:03:54 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 14:03:54 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4581415C.6010001@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> Message-ID: <45814BBA.2040508@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Robert Cimrman wrote: >> >>> Nils Wagner wrote: >>> >>> >>>>> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >>>>> part of compilation output? >>>>> >>>>> >>>>> >>>> How can I redirect the output of python setup.py install into a file ? >>>> >>>> >>> python setup.py install &> out.txt (using bash shell, both standard >>> output and atandard error are redirected) >>> or just >>> python setup.py install > out.txt (standard output only) >>> r. >>> >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> Ok I will send you out.txt off-list. >> > > Nils, > > I have tried umfpack on 64 bits, fresh scipy install. > To get it compiled, just copy UFconfig.h into the directory where you > have umfpack.h (..UFsparse/UMFPACK/Include). Then everything works like > a charm. > > scipy.test(10,10) > ... > Ran 1616 tests in 78.011s > OK > Out[2]: > > I am clueless why only the first path specified in site.cfg is taken > into account (I have tried both ',' and ':' as separators). Does anybody > here have an idea? > > example: > include_dirs = ..../UFsparse/UMFPACK/Include:..../UFsparse/UFconfig > - the second path is ignored > > r. 
Robert, I have tried that before, but it doesn't work. We have discussed that off-list. Are you using version 5.0 or the latest 5.0.2? Nils This is my site.cfg file for UMFPACKv5.0 [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/usr/lib64 include_dirs = /usr/include:/usr/local/include [amd] library_dirs = /usr/local/src/AMD/Lib include_dirs = /usr/local/src/AMD/Include:/usr/local/src/UFconfig amd_libs = amd [umfpack] library_dirs = /usr/local/src/UMFPACK/Lib include_dirs = /usr/local/src/UMFPACK/Include:/usr/local/src/UFconfig umfpack_libs = umfpack ls -l /usr/local/src/UMFPACK/Include/ total 256 -rw-r--r-- 1 root root 4023 2006-12-06 12:54 UFconfig.h -rw-rw---- 1 root root 3712 2006-05-02 15:23 umfpack_col_to_triplet.h -rw-rw---- 1 root root 1892 2006-05-02 15:23 umfpack_defaults.h -rw-rw---- 1 root root 1690 2006-05-02 15:23 umfpack_free_numeric.h -rw-rw---- 1 root root 1708 2006-05-02 15:23 umfpack_free_symbolic.h -rw-rw---- 1 root root 6270 2006-05-02 15:23 umfpack_get_determinant.h -rw-rw---- 1 root root 3989 2006-05-02 15:23 umfpack_get_lunz.h -rw-rw---- 1 root root 8960 2006-05-02 15:24 umfpack_get_numeric.h -rw-rw---- 1 root root 13154 2006-05-02 15:24 umfpack_get_symbolic.h -rw-rw---- 1 root root 1029 2006-05-02 01:34 umfpack_global.h -rw-rw---- 1 root root 19493 2006-05-02 14:47 umfpack.h -rw-rw---- 1 root root 2585 2006-05-02 15:24 umfpack_load_numeric.h -rw-rw---- 1 root root 2612 2006-05-02 15:24 umfpack_load_symbolic.h -rw-rw---- 1 root root 23215 2006-05-02 15:24 umfpack_numeric.h -rw-rw---- 1 root root 5157 2006-05-02 15:24 umfpack_qsymbolic.h -rw-rw---- 1 root root 2197 2006-05-02 15:24 umfpack_report_control.h -rw-rw---- 1 root root 2639 2006-05-02 15:24 umfpack_report_info.h -rw-rw---- 1 root root 6997 2006-05-02 15:24 umfpack_report_matrix.h -rw-rw---- 1 root root 3612 2006-05-02 15:25
umfpack_report_numeric.h -rw-rw---- 1 root root 3520 2006-05-02 15:25 umfpack_report_perm.h -rw-rw---- 1 root root 2563 2006-05-02 15:25 umfpack_report_status.h -rw-rw---- 1 root root 3551 2006-05-02 15:25 umfpack_report_symbolic.h -rw-rw---- 1 root root 4893 2006-05-02 15:25 umfpack_report_triplet.h -rw-rw---- 1 root root 4507 2006-05-02 15:25 umfpack_report_vector.h -rw-rw---- 1 root root 2243 2006-05-02 15:25 umfpack_save_numeric.h -rw-rw---- 1 root root 2274 2006-05-02 15:25 umfpack_save_symbolic.h -rw-rw---- 1 root root 3161 2006-05-02 15:25 umfpack_scale.h -rw-rw---- 1 root root 10693 2006-05-02 15:25 umfpack_solve.h -rw-rw---- 1 root root 22283 2006-05-02 15:26 umfpack_symbolic.h -rw-rw---- 1 root root 2225 2006-05-01 14:50 umfpack_tictoc.h -rw-rw---- 1 root root 1719 2006-05-01 14:50 umfpack_timer.h -rw-rw---- 1 root root 7938 2006-05-02 15:26 umfpack_transpose.h -rw-rw---- 1 root root 10442 2006-05-02 15:26 umfpack_triplet_to_col.h -rw-rw---- 1 root root 5542 2006-05-02 15:26 umfpack_wsolve.h From cimrman3 at ntc.zcu.cz Thu Dec 14 08:33:49 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 14:33:49 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45814BBA.2040508@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> Message-ID: <458152BD.7000904@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: >> Nils Wagner wrote: >> >>> Robert Cimrman wrote: >>> >>>> Nils Wagner wrote: >>>> >>>> >>>>>> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >>>>>> part of compilation output? 
>>>>>> >>>>>> >>>>>> >>>>> How can I redirect the output of python setup.py install into a file ? >>>>> >>>>> >>>> python setup.py install &> out.txt (using bash shell, both standard >>>> output and atandard error are redirected) >>>> or just >>>> python setup.py install > out.txt (standard output only) >>>> r. >>>> >>>> _______________________________________________ >>>> Scipy-dev mailing list >>>> Scipy-dev at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> Ok I will send you out.txt off-list. >>> >> Nils, >> >> I have tried umfpack on 64 bits, fresh scipy install. >> To get it compiled, just copy UFconfig.h into the directory where you >> have umfpack.h (..UFsparse/UMFPACK/Include). Then everything works like >> a charm. >> >> scipy.test(10,10) >> ... >> Ran 1616 tests in 78.011s >> OK >> Out[2]: >> >> I am clueless why only the first path specified in site.cfg is taken >> into account (I have tried both ',' and ':' as separators). Does anybody >> here have an idea? >> >> example: >> include_dirs = ..../UFsparse/UMFPACK/Include:..../UFsparse/UFconfig >> - the second path is ignored >> >> r. >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > Robert, > > I have tried that before, but it doesn't work. We have discussed that > off-list. > Are you using version 5.0 or the latest 5.0.2 ? > > Nils I have just installed 5.0.2 (as part of SuiteSparse package) and the same trick did help. Ran 1616 tests in 84.285s OK Out[4]: import scipy.linsolve as ls In [5]: ls.umfpack.UMFPACK_VERSION Out[5]: 'UMFPACK V5.0.2 (Dec 2, 2006)' Did you rebuild numpy whenever you changed your site.cfg? That way scipy could see your changes (it looks into .../usr/lib/python2.4/site-packages/numpy/distutils/site.cfg), not into the numpy installation directory, IMHO.) 
From nwagner at iam.uni-stuttgart.de Thu Dec 14 08:44:06 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 14:44:06 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <458152BD.7000904@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> Message-ID: <45815526.2040806@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Robert Cimrman wrote: >> >>> Nils Wagner wrote: >>> >>> >>>> Robert Cimrman wrote: >>>> >>>> >>>>> Nils Wagner wrote: >>>>> >>>>> >>>>> >>>>>>> I am puzzled. It might be a 64-bit issue? Could you send me the relevant >>>>>>> part of compilation output? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> How can I redirect the output of python setup.py install into a file ? >>>>>> >>>>>> >>>>>> >>>>> python setup.py install &> out.txt (using bash shell, both standard >>>>> output and atandard error are redirected) >>>>> or just >>>>> python setup.py install > out.txt (standard output only) >>>>> r. >>>>> >>>>> _______________________________________________ >>>>> Scipy-dev mailing list >>>>> Scipy-dev at scipy.org >>>>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>>>> >>>>> >>>>> >>>> Ok I will send you out.txt off-list. >>>> >>>> >>> Nils, >>> >>> I have tried umfpack on 64 bits, fresh scipy install. >>> To get it compiled, just copy UFconfig.h into the directory where you >>> have umfpack.h (..UFsparse/UMFPACK/Include). Then everything works like >>> a charm. >>> >>> scipy.test(10,10) >>> ... 
>>> Ran 1616 tests in 78.011s >>> OK >>> Out[2]: >>> >>> I am clueless why only the first path specified in site.cfg is taken >>> into account (I have tried both ',' and ':' as separators). Does anybody >>> here have an idea? >>> >>> example: >>> include_dirs = ..../UFsparse/UMFPACK/Include:..../UFsparse/UFconfig >>> - the second path is ignored >>> >>> r. >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> Robert, >> >> I have tried that before, but it doesn't work. We have discussed that >> off-list. >> Are you using version 5.0 or the latest 5.0.2 ? >> >> Nils >> > > I have just installed 5.0.2 (as part of SuiteSparse package) and the > same trick did help. > > > Ran 1616 tests in 84.285s > > OK > Out[4]: > > import scipy.linsolve as ls > In [5]: ls.umfpack.UMFPACK_VERSION > Out[5]: 'UMFPACK V5.0.2 (Dec 2, 2006)' > > Did you rebuild numpy whenever you changed your site.cfg? That way scipy > could see your changes (it looks into > .../usr/lib/python2.4/site-packages/numpy/distutils/site.cfg), not into > the numpy installation directory, IMHO.) > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > My site.cfg is in the scipy directory not in the numpy directory. Is that wrong ? 
Nils From cimrman3 at ntc.zcu.cz Thu Dec 14 08:47:49 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 14:47:49 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45815526.2040806@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> <45815526.2040806@iam.uni-stuttgart.de> Message-ID: <45815605.90208@ntc.zcu.cz> Nils Wagner wrote: >> > My site.cfg is in the scipy directory not in the numpy directory. Is > that wrong ? Not sure. I use the numpy directory, because that way it is used for both numpy and scipy (and that's sure :). r. From nwagner at iam.uni-stuttgart.de Thu Dec 14 08:50:22 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 14:50:22 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45815605.90208@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> <45815526.2040806@iam.uni-stuttgart.de> <45815605.90208@ntc.zcu.cz> Message-ID: <4581569E.4060700@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >>> >>> >> My site.cfg is in the scipy directory not in the numpy directory. Is >> that wrong ? >> > > Not sure. 
I use the numpy directory, because that way it is used for > both numpy and scipy (and that's sure :). > > r. > Ok I will try that asap. Cheers Nils From dd55 at cornell.edu Thu Dec 14 09:21:24 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 14 Dec 2006 09:21:24 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> References: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: <200612140921.24638.dd55@cornell.edu> On Tuesday 12 December 2006 12:20, David Huard wrote: > Hi all, > > I followed this thread with much interest and I wouldn't like it to die > without some kind of consensus being reached for the Scipy documentation > system. Am I correct to say that Epydoc + REST/Latex seems the way to go? > > If this is the case, what's next? I'm not familiar with any of this, but > it'd be great if someone knowledgeable could define a roadmap and create a > couple of tickets so that people like me could contribute small steps. I have also been very interested in this discussion. Once the issue is settled, would someone please write a wiki page outlining the preferred format for scipy/numpy documentation, along with a short explanation of how to do the markup? Maybe every function could get a documentation template, which would make it easier for less experienced SciPy users (such as myself) to contribute to the community.
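As one concrete possibility for such a template, here is an epydoc-flavoured ReST docstring on a small example function; the field names and layout below are only a suggestion, not an agreed SciPy convention:

```python
import numpy as np

def trapz_weights(n, dx=1.0):
    """Return composite trapezoidal quadrature weights.

    :Parameters:
      n : int
        Number of sample points (must be >= 2).
      dx : float
        Spacing between samples.

    :Returns:
      w : ndarray
        Weights such that ``dot(w, y)`` approximates the integral of y.

    :Examples:
      >>> trapz_weights(3, dx=0.5).tolist()
      [0.25, 0.5, 0.25]
    """
    w = np.full(n, dx, dtype=float)
    w[0] = w[-1] = 0.5 * dx
    return w
```

A fixed set of consumed fields (Parameters, Returns, Examples, and perhaps See Also and Notes) would make such templates easy to copy into new functions and easy for a tool to check.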
Darren From nwagner at iam.uni-stuttgart.de Thu Dec 14 10:13:31 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 16:13:31 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <4581569E.4060700@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> <45815526.2040806@iam.uni-stuttgart.de> <45815605.90208@ntc.zcu.cz> <4581569E.4060700@iam.uni-stuttgart.de> Message-ID: <45816A1B.8010902@iam.uni-stuttgart.de> Nils Wagner wrote: > Robert Cimrman wrote: > >> Nils Wagner wrote: >> >> >>>> >>>> >>>> >>> My site.cfg is in the scipy directory not in the numpy directory. Is >>> that wrong ? >>> >>> >> Not sure. I use the numpy directory, because that way it is used for >> both numpy and scipy (and that's sure :). >> >> r. >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> >> > Ok I will try that asap. > > Cheers > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > I have moved site.cfg into the numpy directory and rebuild numpy/scipy from scratch, but my old problem persists. scipy.test(1) yields Warning: FAILURE importing tests for /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) 
Warning: FAILURE importing tests for /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Any pointer would be appreciated. Nils From cimrman3 at ntc.zcu.cz Thu Dec 14 12:04:01 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 18:04:01 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45816A1B.8010902@iam.uni-stuttgart.de> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> <45815526.2040806@iam.uni-stuttgart.de> <45815605.90208@ntc.zcu.cz> <4581569E.4060700@iam.uni-stuttgart.de> <45816A1B.8010902@iam.uni-stuttgart.de> Message-ID: <45818401.5080309@ntc.zcu.cz> Nils Wagner wrote: > Nils Wagner wrote: >> Robert Cimrman wrote: >>> Nils Wagner wrote: >>>> My site.cfg is in the scipy directory not in the numpy directory. Is >>>> that wrong ? >>>> >>>> >>> Not sure. I use the numpy directory, because that way it is used for >>> both numpy and scipy (and that's sure :). >>> >>> r. >>> >> Ok I will try that asap. >> >> Cheers >> Nils > > I have moved site.cfg into the numpy directory and rebuild numpy/scipy > from scratch, but > my old problem persists. > > scipy.test(1) yields > > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/umfpack.pyc'> > /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) 
> Warning: FAILURE importing tests for from '.../linsolve/umfpack/__init__.pyc'> > /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) > > Any pointer would be appreciated. So scipy builds and installs ok, yet the umfpack module is not found, right? This is the contents of the relevant installation directories in my case: ls /home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/ _csuperlu.so* __init__.py setup.py _superlu.pyc _dsuperlu.so* __init__.pyc setup.pyc tests/ info.py linsolve.py _ssuperlu.so* umfpack/ info.pyc linsolve.pyc _superlu.py _zsuperlu.so* ls /home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/ info.py __init__.py setup.py tests/ _umfpack.py _umfpack.pyc info.pyc __init__.pyc setup.pyc umfpack.py umfpack.pyc __umfpack.so* ls /home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/ test_umfpack.py test_umfpack.pyc try_umfpack.py What do you have in there? r. 
From nwagner at iam.uni-stuttgart.de Thu Dec 14 12:40:11 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 18:40:11 +0100 Subject: [SciPy-dev] Trouble with UMFPACK 5.0 In-Reply-To: <45818401.5080309@ntc.zcu.cz> References: <45752E6B.7040903@iam.uni-stuttgart.de> <4575C3AD.1030105@gmail.com> <457674F1.8040709@iam.uni-stuttgart.de> <45769D16.1020808@ntc.zcu.cz> <4576BC1A.6080902@iam.uni-stuttgart.de> <4577FFA9.8030903@ntc.zcu.cz> <4578026C.2070708@iam.uni-stuttgart.de> <4578039F.907@ntc.zcu.cz> <457805EA.3060405@iam.uni-stuttgart.de> <4581415C.6010001@ntc.zcu.cz> <45814BBA.2040508@iam.uni-stuttgart.de> <458152BD.7000904@ntc.zcu.cz> <45815526.2040806@iam.uni-stuttgart.de> <45815605.90208@ntc.zcu.cz> <4581569E.4060700@iam.uni-stuttgart.de> <45816A1B.8010902@iam.uni-stuttgart.de> <45818401.5080309@ntc.zcu.cz> Message-ID: On Thu, 14 Dec 2006 18:04:01 +0100 Robert Cimrman wrote: > Nils Wagner wrote: >> Nils Wagner wrote: >>> Robert Cimrman wrote: >>>> Nils Wagner wrote: >>>>> My site.cfg is in the scipy directory not in the numpy >>>>>directory. Is >>>>> that wrong ? >>>>> >>>>> >>>> Not sure. I use the numpy directory, because that way it >>>>is used for >>>> both numpy and scipy (and that's sure :). >>>> >>>> r. >>>> >>> Ok I will try that asap. >>> >>> Cheers >>> Nils >> >> I have moved site.cfg into the numpy directory and >>rebuild numpy/scipy >> from scratch, but >> my old problem persists. >> >> scipy.test(1) yields >> >> Warning: FAILURE importing tests for > 'scipy.linsolve.umfpack.umfpack' from >>'...y/linsolve/umfpack/umfpack.pyc'> >> /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: >> AttributeError: 'module' object has no attribute >>'umfpack' (in ?) 
>> Warning: FAILURE importing tests for >'scipy.linsolve.umfpack' >> from '.../linsolve/umfpack/__init__.pyc'> >> /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: >> AttributeError: 'module' object has no attribute >>'umfpack' (in ?) >> >> Any pointer would be appreciated. > > So scipy builds and installs ok, yet the umfpack module >is not found, > right? > > This is the contents of the relevant installation >directories in my case: > > ls >/home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/ > _csuperlu.so* __init__.py setup.py _superlu.pyc > _dsuperlu.so* __init__.pyc setup.pyc tests/ > info.py linsolve.py _ssuperlu.so* umfpack/ > info.pyc linsolve.pyc _superlu.py > _zsuperlu.so* > > ls > /home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/ > info.py __init__.py setup.py tests/ > _umfpack.py _umfpack.pyc > info.pyc __init__.pyc setup.pyc umfpack.py > umfpack.pyc __umfpack.so* > > ls > /home/share/software/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/ > test_umfpack.py test_umfpack.pyc try_umfpack.py > > What do you have in there? > > r. 
> _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev /usr/local/lib64/python2.4/site-packages/scipy/linsolve _csuperlu.so info.pyc linsolve.py setup.pyc _superlu.pyc _dsuperlu.so __init__.py linsolve.pyc _ssuperlu.so umfpack info.py __init__.pyc setup.py _superlu.py _zsuperlu.so /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack info.py __init__.py setup.py tests umfpack.py umfpack.pyc info.pyc __init__.pyc setup.pyc _umfpack.py _umfpack.pyc __umfpack.so /usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests test_umfpack.py try_umfpack.py Nils From loredo at astro.cornell.edu Thu Dec 14 14:34:04 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Thu, 14 Dec 2006 14:34:04 -0500 Subject: [SciPy-dev] test import and check_integer errors for 0.5.2 on FC3/Py2.4 In-Reply-To: References: Message-ID: <1166124844.4581a72c1c110@astrosun2.astro.cornell.edu> Hi folks, I just installed the brand-spanking-new scipy-0.5.2 (thanks, packagers!) with numpy-1.0.1/Python-2.4.4 on Fedora Core 3. scipy.test(1) goes through 648 tests (I think there should be more), producing four import failures and a test ERROR, shown below. If there's something I should do on my end to fix these, please let me know! -Tom Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?) Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?) 
Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/local/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/local/lib/python2.4/site-packages/scipy/stats/stats.py", line 190, in ? import scipy.special as special File "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 65, in ? from scipy.linalg import eig File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 23, in ? from scipy.linalg import calc_lwork ImportError: cannot import name calc_lwork FAILED (errors=1) ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From clovisgo at gmail.com Fri Dec 15 05:57:29 2006 From: clovisgo at gmail.com (Clovis Goldemberg) Date: Fri, 15 Dec 2006 07:57:29 -0300 Subject: [SciPy-dev] Installation problems of Scipy 0.5.2 under Windows/AMD Message-ID: <6f239f130612150257g308c77bat288a48b29f693531@mail.gmail.com> I tried the scipy-0.5.2.win32-py2.5.exe installer on 4 different Windows XP machines. Two were Intel (Celeron) boxes and scipy.test() runs fine. Two were AMD (XP) boxes and scipy.test() fails. On all of these machines I have numpy 1.0.1, and numpy.test() works fine on all of them. I tried to investigate the failure on the AMD machines but was unable to reach a final conclusion.
Since I am a scipy newbie I may be doing something stupid... When I run scipy.test() the command window crashes Python. The message is "python.exe has encountered a problem and needs to close. We are sorry for the inconvenience". I first tried to run scipy.test(verbosity=10) in order to receive more information. Tests run successfully until "check_algebraic_log_weight (scipy.integrate.tests.test_quadpack.test_quad)" crashes Python. In order to debug, I commented out all the tests that crashed Python. These tests are: scipy\fftpack\tests\test_basic.py scipy\fftpack\tests\test_pseudo_diffs.py scipy\integrate\tests\test_integrate.py scipy\integrate\tests\test_quad.py scipy\interpolate\tests\test_fitpack.py scipy\interpolate\tests\test_interpolate.py scipy\linalg\tests\test_basic.py scipy\linalg\tests\test_decomp.py scipy\linalg\tests\test_matfuncs.py scipy\optimize\tests\test_cobyla.py scipy\optimize\tests\test_optimize.py scipy\special\tests\test_basic.py scipy\optimize\stats\test_distributions.py scipy\optimize\stats\test_morestats.py When these files were renamed to something else, all other scipy tests ran fine on the AMD boxes. I tried to recompile scipy using the pre-built AMD ATLAS binaries. The scipy recompilation was successful but the tests still fail. Any suggestion? clovisgo University of Sao Paulo/Brazil From charlesr.harris at gmail.com Fri Dec 15 13:24:28 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 15 Dec 2006 11:24:28 -0700 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc.
In-Reply-To: <200612140921.24638.dd55@cornell.edu> References: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <200612140921.24638.dd55@cornell.edu> Message-ID: On 12/14/06, Darren Dale wrote: > On Tuesday 12 December 2006 12:20, David Huard wrote: > > Hi all, > > > > I followed this thread with much interest and I wouldn't like it to die > > without some kind of consensus being reached for the Scipy documentation > > system. Am I correct to say that Epydoc + REST/Latex seems the way to go? > > > > If this is the case, what's next? I'm not familiar with any of this, but > > it'd be great if someone knowledgeable could define a roadmap and create a > > couple of tickets so that people like me could contribute small steps. > > I have also been very interested in this discussion. Once the issue is > settled, would someone please write a wiki page outlining the preferred > format for scipy/numpy documentation, along with a short explanation of > how to do the markup? Numpy contains a good deal of C code that needs documentation; will Epydoc do the job? It might make sense to use different systems for the Python and C code, say Epydoc for the former and doxygen for the latter. Also, the docstrings for the Python-visible functions in numpy are separated from the code and probably need to be parsed. Some code that writes phony functions with the appropriate comments might do the trick, although some provision would probably be needed to pull out the function signature. Chuck From mattknox_ca at hotmail.com Fri Dec 15 16:46:04 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Fri, 15 Dec 2006 16:46:04 -0500 Subject: [SciPy-dev] time series implementation approach Message-ID: I have just completed a fairly major rework of a lot of the guts of the timeseries module (particularly with respect to frequency conversion).
Take a look at the examples script for a few ideas. But one major addition worth mentioning is that you can now do things like: from numpy import ma # assume I have defined a daily time series called myDailyData monthlyAverageSeries = myDailyData.convert(freq='MONTHLY', func=ma.average) David, you mentioned wanting to compute monthly variance... that should be fairly easy to do now, except there is no masked array version of the var function, so currently you would have to make your own version of var that works on masked arrays and then pass that in. Also, the origin is no longer based on the year 1850, it is based on the same 0 date mx.DateTime uses (0 AD). Secondly frequency is still not supported very well, so if you need to work with intraday data, the module probably won't work very well right now. - Matt From gruben at bigpond.net.au Sat Dec 16 02:25:49 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 16 Dec 2006 18:25:49 +1100 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> Message-ID: <45839F7D.1070400@bigpond.net.au> In case I gave the impression I was planning to do something on this, I still am. It's just that I've been very busy. I am still planning to try to build the FiPy docs and then spin a skeleton document based on it for numpy, perhaps with content from the numarray docs, and another for scipy during the upcoming Christmas break. Each of these docs would have one appendix which contains a reference built from the individual method/function docstrings. Each would also have another appendix of long full-module examples with LaTeX and plotting results. 
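The epydoc + reST direction being weighed above can be made concrete with a short sketch. The field names below (`:Parameters:`, `:Returns:`, `:Examples:`) are one possible rendering of the proposals in this thread, not a format the list had agreed on:

```python
def trapz_area(y, x):
    """Integrate sampled values with the trapezoidal rule.

    :Parameters:
        y : sequence of floats
            Sampled function values.
        x : sequence of floats
            Sample positions; same length as ``y``.

    :Returns:
        area : float
            Trapezoidal estimate of the integral of ``y`` over ``x``.

    :Examples:
        >>> trapz_area([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
        2.0
    """
    total = 0.0
    for i in range(len(y) - 1):
        # Area of each trapezoid: mean height times base width.
        total += 0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
    return total
```

A docstring this size keeps interactive-help popups readable while still giving epydoc enough structure to render a reference page.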
I was also going to try to write a docstring documentation spec as part of this. This is quite a bit of work for me as I expect I need a few days' preparation to get a Linux distro up to date with a build system. I thought the consensus was headed toward epydoc + ReST with LaTeX markup, so I was planning to adopt FiPy's nice model for using this. I still hadn't ruled out doxygen and I was planning on looking at it too to see if it has advantages. My guess is that either would be fine, although people here might be more comfortable with epydoc. David, if this sounds like a good plan to you, perhaps you can move ahead with this, as it'll be a few days before I can start. Gary R. David Huard wrote: > Hi all, > > I followed this thread with much interest and I wouldn't like it to die > without some kind of consensus being reached for the Scipy documentation > system. Am I correct to say that Epydoc + REST/Latex seems the way to go ? > > If this is the case, what's next ? I'm not familiar with any of this, > but it'd be great if someone knowledgeable could define a roadmap and > create a couple of tickets so that people like me could contribute small > steps. > > Cheers, > > David From otto at tronarp.se Sat Dec 16 07:16:36 2006 From: otto at tronarp.se (Otto Tronarp) Date: Sat, 16 Dec 2006 13:16:36 +0100 Subject: [SciPy-dev] time series implementation approach In-Reply-To: References: Message-ID: <20061216131636.91p59gojggs04sc0@mathcore.kicks-ass.org> Quoting Matt Knox : > Also, the origin is no longer based on the year 1850, it is based on > the same 0 date mx.DateTime uses (0 AD). Secondly frequency is > still not supported very well, so if you need to work with intraday > data, the module probably won't work very well right now. I haven't seen anyone raise this issue so I will. Is it really necessary to depend on mx.DateTime, couldn't you make do with the datetime module?
I know that there is more functionality available in mx.DateTime, but it's always nice to stick with the standard library and avoid a dependency on yet-another-library. Just a thought.... Otto From mattknox_ca at hotmail.com Sat Dec 16 10:34:40 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sat, 16 Dec 2006 10:34:40 -0500 Subject: [SciPy-dev] time series implementation approach Message-ID: > I haven't seen anyone raise this issue so I will. Is it really > necessary to depend on mx.DateTime, couldn't you make do with the > datetime module? I know that there is more functionality available > in mx.DateTime, but it's always nice to stick with the standard > library and avoid a dependency on yet-another-library. > > Just a thought.... > > Otto If someone showed me a way to re-write the code using the standard datetime module, I would definitely be open to that. I think it would be pretty tough though. Particularly with the C code, the mx datetime module provides a nice C API which the standard datetime module does not appear to match. So I suspect the code would become a fair bit more complicated because a lot of the functionality that comes with the mx datetime module would have to be rebuilt from scratch. If someone feels motivated to do that though, I'd love to see it. - Matt From adamadamadamamiadam at gmail.com Sat Dec 16 18:19:40 2006 From: adamadamadamamiadam at gmail.com (CakeProphet) Date: Sat, 16 Dec 2006 23:19:40 +0000 (UTC) Subject: [SciPy-dev] Period table? Message-ID: I think it would be fairly useful if scipy contained a module with the entire periodic table as a Python data type of some sort with various information (atomic number, mass, etc) of all the elements. This sounds cool to anybody else?
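The proposal above can be prototyped in very little code. The sketch below — element records addressable by symbol, name, or atomic number — uses hypothetical class and field names and hard-codes two sample entries; it is not an existing SciPy API:

```python
class Element:
    """A single chemical element record; extra data goes in **kwargs."""
    def __init__(self, number, symbol, name, mass, **kwargs):
        self.number, self.symbol, self.name, self.mass = number, symbol, name, mass
        self.__dict__.update(kwargs)

class PeriodicTable:
    """Dict-like lookup by symbol, name, or atomic number; iterable like a list."""
    def __init__(self, elements):
        self._elements = list(elements)
        self._by_key = {}
        for e in self._elements:
            # Every unique identifier maps to the same Element object.
            for key in (e.number, e.symbol, e.name):
                self._by_key[key] = e

    def __getitem__(self, key):
        return self._by_key[key]

    def __iter__(self):
        return iter(self._elements)

# Two sample entries only; real data would come from a reference database.
table = PeriodicTable([
    Element(1, 'H', 'hydrogen', 1.008),
    Element(2, 'He', 'helium', 4.0026),
])
assert table['H'] is table[1] is table['hydrogen']
```

Keying the same record under several identifiers costs a few extra dict entries but makes every lookup O(1), which fits a dataset that essentially never changes.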
From loredo at astro.cornell.edu Sat Dec 16 20:40:20 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Sat, 16 Dec 2006 20:40:20 -0500 Subject: [SciPy-dev] Test failures for 0.5.2 on OS X Message-ID: <1166319620.4584a004a6109@astrosun2.astro.cornell.edu> Hi folks- I just installed numpy-1.0.1 and scipy-0.5.2 on a PPC G4 running Universal Python 2.4.4 on OS 10.4.8. numpy passes all tests, but scipy.test() reports two warnings and three failed tests, copied below. I do not know in what situation the test failures should be of concern, but the check_dot failures look pretty basic. The warnings suggest two sets of tests were ignored. -Tom Ran 1596 tests in 33.399s FAILED (failures=3) Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/ test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/ test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) 
====================================================================== FAIL: test_smallest_same_kind (scipy.io.tests.test_recaster.test_recaster) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_recaster.py", line 61, in test_smallest_same_kind assert C is not None, 'Got unexpected None from %s' % T AssertionError: Got unexpected None from ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/ test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988617897033691+5.4938433182077039e-37j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988617897033691+5.4938289689114293e-37j) DESIRED: (-9+2j) 
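The DESIRED value in these failing assertions is easy to confirm by hand: the test exercises the unconjugated complex dot product, and 3j*2 + (-4)*3 + (3-4j)*1 = -12 + 6j + 3 - 4j = -9 + 2j. A quick check of that arithmetic in plain Python, independent of the BLAS routine under test:

```python
# Recompute the expected value from the failing check_dot assertion.
x = [3j, -4, 3 - 4j]
y = [2, 3, 1]
expected = sum(a * b for a, b in zip(x, y))  # unconjugated dot product
assert expected == -9 + 2j
```

So the expected value is correct, and the garbage ACTUAL results (about -2 plus a denormal imaginary part) point at the compiled dot routine rather than at the test itself.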
------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Sat Dec 16 20:54:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 16 Dec 2006 19:54:12 -0600 Subject: [SciPy-dev] Test failures for 0.5.2 on OS X In-Reply-To: <1166319620.4584a004a6109@astrosun2.astro.cornell.edu> References: <1166319620.4584a004a6109@astrosun2.astro.cornell.edu> Message-ID: <4584A344.3070405@gmail.com> Tom Loredo wrote: > Hi folks- > > I just installed numpy-1.0.1 and scipy-0.5.2 on a PPC G4 running Universal > Python 2.4.4 on OS 10.4.8. numpy passes all tests, but scipy.test() reports > two warnings and three failed tests, copied below. I do not know in what > situation the test failures should be of concern, but the check_dot failures > look pretty basic. The warnings suggest two sets of tests were ignored. The two warnings simply come from the fact that you did not build the UMFPACK bindings. Thus, the tests for them are skipped. The recaster features are relatively new. It seems that long doubles were not accounted for. Please open up a ticket and assign it to the author, Matthew Brett. The check_dot failures are more or less known. I think they may stem from gfortran bugs as I think I recall non-Mac platforms with gfortran also giving the same errors, but I could be wrong. If someone using Linux and gfortran could verify (or repudiate), I would appreciate it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.leslie at gmail.com Sat Dec 16 23:13:40 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sun, 17 Dec 2006 15:13:40 +1100 Subject: [SciPy-dev] Period table? 
In-Reply-To: References: Message-ID: On 12/17/06, CakeProphet wrote: > I think it would be fairly useful if scipy contained a module with the entire > periodic table as a Python data type of some sort with various information (atomic > number, mass, etc) of all the elements. > > > This sounds cool to anybody else? This sounds pretty cool to me. I think if you were to submit the code someone would be able to include it in the sandbox for testing and review. Cheers, Tim > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From jeff at taupro.com Sun Dec 17 02:08:47 2006 From: jeff at taupro.com (Jeff Rush) Date: Sun, 17 Dec 2006 01:08:47 -0600 Subject: [SciPy-dev] Period table? In-Reply-To: References: Message-ID: <4584ECFF.90600@taupro.com> CakeProphet wrote: > I think it would be fairly useful if scipy contained a module with the entire > periodic table as a Python data type of some sort with various information (atomic > number, mass, etc) of all the elements. I've thought about that for years, never taking the time to make it happen. The trick is not just to list the elements in a Python dictionary, with physical attributes, but to provide various filter and search operations -- list all metals, noble elements, given a weight range return the matching set of elements, handle isotope mapping back to their base elements, etc. Designed and documented well, it would be a neat component. -Jeff From val at vtek.com Sun Dec 17 18:15:38 2006 From: val at vtek.com (val) Date: Sun, 17 Dec 2006 18:15:38 -0500 Subject: [SciPy-dev] Period table? References: <4584ECFF.90600@taupro.com> Message-ID: <0c9101c72231$43977610$6400a8c0@D380> Jeff, this indeed sounds cool. Kind of a *naturally organized* data framework for atomic, spectral, and other data. Kind of atomic google?
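The filter and search operations Jeff lists are essentially predicates over a collection of element records. A minimal sketch, assuming elements are plain dicts with hypothetical `symbol`, `mass`, and `category` fields (sample values only, not a real database):

```python
# A tiny illustrative dataset; a real table would hold all the elements.
elements = [
    {'symbol': 'H',  'mass': 1.008,  'category': 'nonmetal'},
    {'symbol': 'He', 'mass': 4.0026, 'category': 'noble gas'},
    {'symbol': 'Li', 'mass': 6.94,   'category': 'metal'},
    {'symbol': 'Fe', 'mass': 55.845, 'category': 'metal'},
]

def where(elements, predicate):
    """Return the elements matching an arbitrary predicate function."""
    return [e for e in elements if predicate(e)]

# "List all metals" and "given a weight range, return the matching set":
metals = where(elements, lambda e: e['category'] == 'metal')
light = where(elements, lambda e: 1.0 <= e['mass'] <= 10.0)
```

Taking an arbitrary predicate keeps the interface open-ended: any new query (valence electrons, state at a temperature, isotopes) is just another lambda rather than another method.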
Going on with your impressive list, does it make sense to integrate into the search engine a computational engine to *compute missing data*? E.g., atomic/spectral physics engine to compute intensities for spectral transitions using the run-time computed wavefunctions, and/or the neural network engine/estimator (based on available data). cheers, val ----- Original Message ----- From: "Jeff Rush" To: "SciPy Developers List" Sent: Sunday, December 17, 2006 2:08 AM Subject: Re: [SciPy-dev] Period table? > CakeProphet wrote: >> I think it would be fairly useful if scipy contained a module with the >> entire >> periodic table as a Python data type of some sort with various information >> (atomic >> number, mass, etc) of all the elements. > > I've thought about that for years, never taking the time to make it > happen. > The trick is not just to list the elements in a Python dictionary, with > physical attributes, but to provide various filter and search > operations -- > list all metals, noble elements, given a weight range return the matching > set > of elements, handle isotope mapping back to their base elements, etc. > Designed and documented well, it would be a neat component. > > -Jeff > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From adamadamadamamiadam at gmail.com Sun Dec 17 18:20:13 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Sun, 17 Dec 2006 18:20:13 -0500 Subject: [SciPy-dev] Period table? Message-ID: <453f59300612171520s56793320sd576a2140b33db84@mail.gmail.com> Well.. I can definitely sketch up a bit of code to bring the idea to life, but I'll need some help with the specifics. How would you quickly find the actual information to place in the table? I'm a bit too lazy to look up each element and manually hard-code in the data myself. -- "What's money?
A man is a success if he gets up in the morning and goes to bed at night and in between does what he wants to do." ~ Bob Dylan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.leslie at gmail.com Sun Dec 17 18:33:02 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 18 Dec 2006 10:33:02 +1100 Subject: [SciPy-dev] Period table? In-Reply-To: <453f59300612171520s56793320sd576a2140b33db84@mail.gmail.com> References: <453f59300612171520s56793320sd576a2140b33db84@mail.gmail.com> Message-ID: On 12/18/06, Adam Curtis wrote: > Well.. I can definetely sketch up a bit of code to bring the idea to life, > but I'll need some help with the specifics. > > How would you quickly find the actual information to place in the table? I'm > a bit too lazy to look up each element and manually hard-code in the data > myself. I was thinking about this last night and I think the best way to handle it would be to work out the data structures and interfaces first and worry about importing actual data later. I'm sure that once we know the format we need the data in we can write a script to import it all. Either that or find an undergraduate student who wants to help out on a project... Tim > > -- > "What's money? A man is a success if he gets up in the morning and goes to > bed at night and in between does what he wants to do." ~ Bob Dylan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From dd55 at cornell.edu Sun Dec 17 18:36:09 2006 From: dd55 at cornell.edu (Darren Dale) Date: Sun, 17 Dec 2006 18:36:09 -0500 Subject: [SciPy-dev] Period table? In-Reply-To: <453f59300612171520s56793320sd576a2140b33db84@mail.gmail.com> References: <453f59300612171520s56793320sd576a2140b33db84@mail.gmail.com> Message-ID: <200612171836.09291.dd55@cornell.edu> On Sunday 17 December 2006 6:20 pm, Adam Curtis wrote: > Well.. 
I can definetely sketch up a bit of code to bring the idea to life, > but I'll need some help with the specifics. > > How would you quickly find the actual information to place in the table? > I'm a bit too lazy to look up each element and manually hard-code in the > data myself. There might be something useful here: http://physics.nist.gov/PhysRefData/contents.html Here is some very useful information concerning X-ray spectroscopy calculations: http://www.cstl.nist.gov/acd/839.01/xrfdownload.html I have a set of python tools for loading the database assembled by W.T. Elam, but I haven't written the code for X-ray fluorescence calculations yet. Darren -- Darren S. Dale, Ph.D. dd55 at cornell.edu From adamadamadamamiadam at gmail.com Sun Dec 17 21:15:11 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Sun, 17 Dec 2006 21:15:11 -0500 Subject: [SciPy-dev] Period table? Message-ID: <453f59300612171815q6a722547h443440734d696bda@mail.gmail.com> Well I've sketched up some starting source code for it, to get a feel of how to organize everything. Basically there will be a "periodic" class and an "element" class. Element objects will consist of various data like atomic number, atomic mass, name, symbol, melting point, freezing point, critical point, density, etc... as well as various methods to calculate things like the state of the element at a given temperature. The periodic class will act as the organized framework for the whole thing. It will contain several dictionaries mapping unique qualities of each element to their element objects, so that you can quickly do element lookups. I think using dictionaries instead of one big sequence works better in this case, because it allows for faster lookups to the fairly non-dynamic and unchanging collection of data. On the user end, the periodic table would work much like a combination between an array and a dictionary. 
You can use symbol, name, or atomic number as dictionary keys to look up the desired element, and the whole table of elements can be iterated over like a list. Methods will be used to return lists of elements by group, period, series, number of valence electrons, state at a given room temperature, density, etc... Default temperature measurements will be in Celsius, although there will be unit conversions supported (I'm assuming scipy has a unit conversion package I could borrow from) I'm not much of a chemistry buff, so I'll need some help filling in the actual information, but I can work on the organizational structure in the meantime. -------------- next part -------------- An HTML attachment was scrubbed... URL: From val at vtek.com Sun Dec 17 23:07:30 2006 From: val at vtek.com (val) Date: Sun, 17 Dec 2006 23:07:30 -0500 Subject: [SciPy-dev] Period table? References: <453f59300612171815q6a722547h443440734d696bda@mail.gmail.com> Message-ID: <0e7b01c7225a$0c9a2c60$6400a8c0@D380> Sounds very reasonable to me, Adam. My understanding is that any metadata can be added to an element, right? And the search can also be done with the (new) data as a search string (if one is interested in interpretation of experimental data). I'd be happy to help with "chemistry". cheers, val ----- Original Message ----- From: Adam Curtis To: scipy-dev at scipy.org Sent: Sunday, December 17, 2006 9:15 PM Subject: Re: [SciPy-dev] Period table? Well I've sketched up some starting source code for it, to get a feel of how to organize everything. Basically there will be a "periodic" class and an "element" class. Element objects will consist of various data like atomic number, atomic mass, name, symbol, melting point, freezing point, critical point, density, etc... as well as various methods to calculate things like the state of the element at a given temperature. The periodic class will act as the organized framework for the whole thing. 
It will contain several dictionaries mapping unique qualities of each element to their element objects, so that you can quickly do element lookups. I think using dictionaries instead of one big sequence works better in this case, because it allows for faster lookups to the fairly non-dynamic and unchanging collection of data. On the user end, the periodic table would work much like a combination between an array and a dictionary. You can use symbol, name, or atomic number as dictionary keys to look up the desired element, and the whole table of elements can be iterated over like a list. Methods will be used to return lists of elements by group, period, series, number of valence electrons, state at a given room temperature, density, etc... Default temperature measurements will be in Celsius, although there will be unit conversions supported (I'm assuming scipy has a unit conversion package I could borrow from) I'm not much of a chemistry buff, so I'll need some help filling in the actual information, but I can work on the organizational structure in the meantime. ------------------------------------------------------------------------------ _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Mon Dec 18 12:03:31 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 18 Dec 2006 12:03:31 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. 
In-Reply-To: <45839F7D.1070400@bigpond.net.au> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> Message-ID: <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> Gary, I got epydoc running on plain docstrings and the latex-math role from Jens running. However, I haven't looked at how to use both together. In any case, it's not yet clear to me how the information should be organized. The first post on this thread suggested: """ 1-2 sentences summarizing what the function does. INPUT: var1 -- type, defaults, what it is var2 -- ... OUTPUT: description of output var or vars (if tuple) EXAMPLES: a *bunch* of examples, often a whole page. NOTES: misc other notes ALGORITHM: notes about the implementation or algorithm, if applicable AUTHORS: -- name (date): notes about what was done """ But I think this is maybe too heavy for docstrings. Anyone using IDLE will see large yellow docs pop up. I think docstrings should limit themselves to: """ 1-2 sentences summarizing what the function does. INPUT: var1 -- type, defaults, what it is var2 -- ... OUTPUT: description of output var or vars (if tuple) SEE ALSO: """ The question now is: where to put all the other stuff, namely the examples, notes, algorithm and authorship. Unless I'm mistaken, epydoc has support for documentation outside of the docstring, so we could use that. Another idea is to assign "documentation attributes" to functions. For instance: def func(a,b,c): """definition :Input: ... :Output: ... :See also: ... """ func.__examples__=[ """ >>> func(1,2,3) [4,5,6] ... """, """...""", ...] func.__author__='Bobby' func.__notes__=' Those attributes could be accessed by an example function def example(function, i=None): """Run example i from function.
If i is None, run all examples. """ and eventually an ipython magic command (func?!). The other solution is to put the examples in completely different files, or fetch them from a wiki page (but I don't think this is implemented yet in epydoc). I think we should dig more into what epydoc offers before rushing into this. Then we could propose a couple of different layouts, and once everyone agrees on which is best, go ahead and apply it at large. I started looking at FiPy's documentation and their setup is impressive. I'll look more deeply into it. David 2006/12/16, Gary Ruben : > > In case I gave the impression I was planning to do something on this, I > still am. It's just that I've been very busy. I am still planning to try > to build the FiPy docs and then spin a skeleton document based on it for > numpy, perhaps with content from the numarray docs, and another for > scipy during the upcoming Christmas break. > > Each of these docs would have one appendix which contains a reference > built from the individual method/function docstrings. Each would also > have another appendix of long full-module examples with LaTeX and > plotting results. I was also going to try to write a docstring > documentation spec as part of this. > > This is quite a bit of work for me as I expect I need a few days > preparation to get a Linux distro up to date with a build system. I > thought the consensus was headed toward epydoc + ReST with LaTeX markup, > so I was planning to adopt FiPy's nice model for using this. I still > hadn't ruled out doxygen and I was planning on looking at it too to see > if it has advantages. My guess is that either would be fine, although > people here might be more comfortable with epydoc. > > David, if this sounds like a good plan to you, perhaps you can move > ahead with this, as it'll be a few days before I can start. > > Gary R.
> > David Huard wrote: > > Hi all, > > > > I followed this thread with much interest and I wouldn't like it to die > > without some kind of concensus being reached for the Scipy documention > > system. Am I correct to say that Epydoc + REST/Latex seems the way to go > ? > > > > If this is the case, what's next ? I'm not familiar with any of this, > > but I'd be great if someone knowledgeable could define a roadmap and > > create a couple of tickets so that people like me could contribute small > > steps. > > > > Cheers, > > > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon Dec 18 13:14:22 2006 From: aisaac at american.edu (Alan Isaac) Date: Mon, 18 Dec 2006 13:14:22 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> References: <457814CB.80307@bigpond.net.au><2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov><457A0398.90406@bigpond.net.au><406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov><91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com><45839F7D.1070400@bigpond.net.au><91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> Message-ID: On Mon, 18 Dec 2006, David Huard wrote: > I got epydoc running on plain docstrings and the > latex-math role from Jens running. However, I havent't > looked at how to use both together. I'm in the same position. But I guess this should be simple for anyone familiar with docutils, so I'll copy this to docutils-users. 
Cheers, Alan Isaac From loredo at astro.cornell.edu Mon Dec 18 15:32:06 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Mon, 18 Dec 2006 15:32:06 -0500 Subject: [SciPy-dev] Verify check_dot failure for linux/gfortran Message-ID: <1166473926.4586fac62e46a@astrosun2.astro.cornell.edu> Hi folks, I'm copying this part of a thread on 0.5.2 failures in OS X. In that thread Robert Kern asks for verification of a test failure on linux/gfortran; I thought it would have a better chance of being noticed in a new thread. -Tom OS X failures for scipy-0.5.2 built with OS 10.4.8, Py-2.4.4, numpy-1.0.1, gcc-4 and gfortran (from hpc.sourceforge.net): ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/ test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988617897033691+5.4938433182077039e-37j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - 
actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988617897033691+5.4938289689114293e-37j) DESIRED: (-9+2j) Robert Kern wrote: The check_dot failures are more or less known. I think they may stem from gfortran bugs as I think I recall non-Mac platforms with gfortran also giving the same errors, but I could be wrong. If someone using Linux and gfortran could verify (or repudiate), I would appreciate it. ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From adamadamadamamiadam at gmail.com Mon Dec 18 17:14:35 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Mon, 18 Dec 2006 17:14:35 -0500 Subject: [SciPy-dev] Period table? Message-ID: <453f59300612181414k76f16fbdj36e7441039ce1baa@mail.gmail.com> Yeah, the element class will accept any number of **kwargs, but the basics are included as positional arguments as well. How much information do we need to be given in order to automatically calculate the rest of the data? I know with the atomic number you can find the electron configuration, which can be used to find the period, group, and valence electron number. What do you think would be the minimal amount of data required to be inputted? -- "What's money? A man is a success if he gets up in the morning and goes to bed at night and in between does what he wants to do." ~ Bob Dylan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gruben at bigpond.net.au Mon Dec 18 17:15:34 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 19 Dec 2006 09:15:34 +1100 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. 
In-Reply-To: <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> Message-ID: <45871306.7080709@bigpond.net.au> Thanks for moving on this David, I'm still getting a machine set up to work on this, but I'm closer now. I like your doc attributes idea as it provides a very obvious way to access the extra docs via ipython, helper functions etc. Depending on how epydoc supports docs outside of the docstring, that may also be a good solution. I think we should avoid small examples in their own modules and reserve external modules for large, detailed examples a'la FiPy. If the 'dynamically accessing a wiki for docs' idea were implemented, it would have to be as an adjunct to completely self-contained documentation, so I'd keep it as an idea, but forget about implementing it now. I also think that docstrings should also have at least one minimal, typical-case call example. There's a compromise between bloat and help and I always find examples an enormous help. Gary R. David Huard wrote: > Gary, > > I got epydoc running on plain docstrings and the latex-math role from > Jens running. However, I havent't looked at how to use both together. > > In any case, its not yet clear to me how the information should be > organized. The first post on this thread suggested: > > """ > 1-2 sentences summarizing what the function does. > > INPUT: > var1 -- type, defaults, what it is > var2 -- ... > OUTPUT: > description of output var or vars (if tuple) > > EXAMPLES: > a *bunch* of examples, often a whole page. 
> > NOTES: > misc other notes > > ALGORITHM: > notes about the implementation or algorithm, if applicable > > AUTHORS: > -- name (date): notes about what was done > """ > > But I think this is maybe too heavy for docstrings. Anyone using IDLE > will see large yellow docs pop up. I think docstrings should limit > themselves to: > > """ > 1-2 sentences summarizing what the function does. > > INPUT: > var1 -- type, defaults, what it is > var2 -- ... > OUTPUT: > description of output var or vars (if tuple) > > SEE ALSO: > """ > > The question now is: where to put all the other stuff, namely the > examples, notes, algorithm and authorship. > Unless I'm mistaken, epydoc has support for documentation outside of the > docstring, so we could use that. Another idea is to assign > "documentation attributes" to functions. For instance: > > def func(a,b,c): > """definition > :Input: ... > :Output: ... > :See also: ... > """ > func.__examples__=[ > """ > >>> func(1,2,3) > [4,5,6] > ... > """, > """...""", ...] > func.__author__='Bobby' > func.__notes__=' > Those attributes could be accessed by an example function > def example(function, i=None): > """Run example i from function. > If i is None, run all examples. > """ > and eventually an ipython magic command (func?!). > > The other solution is to put the examples in completely different files, > or fetch them from a wiki page (but I don't think this is implemented > yet in epydoc). > > I think we should dig more into what epydoc offers before rushing into > this. Then we could propose a couple of different layouts, and once > everyone agrees on which is best, go ahead and apply it at large. > > I started looking at FiPy's documentation and their setup is impressive. > I'll look more deeply into it. > > David From adamadamadamamiadam at gmail.com Mon Dec 18 17:55:22 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Mon, 18 Dec 2006 22:55:22 +0000 (UTC) Subject: [SciPy-dev] Period table?
References: Message-ID: How exactly do I submit the source code I currently have written? (sorry, a bit unfamiliar with how this works) From tim.leslie at gmail.com Mon Dec 18 18:17:37 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Tue, 19 Dec 2006 10:17:37 +1100 Subject: [SciPy-dev] Period table? In-Reply-To: References: Message-ID: On 12/19/06, Adam Curtis wrote: > > How exactly do I submit the source code I currently have written? (sorry, a bit > unfamiliar with how this works) > For the time being you can simply send an attachment to the mailing list. I'm sure you'd be able to get svn write access to work on this. I believe Jeff Strunk is the man to speak to to organise this. Cheers, Tim > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Mon Dec 18 18:32:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Dec 2006 17:32:04 -0600 Subject: [SciPy-dev] Period table? In-Reply-To: References: Message-ID: <458724F4.2080108@gmail.com> Adam Curtis wrote: > How exactly do I submit the source code I currently have written? (sorry, a bit > unfamiliar with how this works) Register for the www.scipy.org wiki and make a page describing your code. Add it to the page as an attachment. When we've seen the code and hashed out the design a little bit, then we'll talk about putting it in the Subversion repository. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From val at vtek.com Mon Dec 18 19:07:23 2006 From: val at vtek.com (val) Date: Mon, 18 Dec 2006 19:07:23 -0500 Subject: [SciPy-dev] Period table? 
References: <453f59300612181414k76f16fbdj36e7441039ce1baa@mail.gmail.com> Message-ID: <107a01c72301$aa43fbb0$6400a8c0@D380> To me, the minimal amount of data to calculate a new data is an open-ended issue and a matter of quality of the calculated result. The docstring may include the description of how a new data is computed. New (comp) methods can be added/integrated into the Element class step-by-step. E.g., widely used interpolation techniques can be helpful for ranges not available (directly measured). BTW, i heard the Traits module (from Enthought.com) can be helpful in the situations when one needs handle structured physical data (metadata, support and check for dimernsionalities, relations to other data, check of ranges, etc). Can expert people elaborate and point to examples? cheers, val ----- Original Message ----- From: Adam Curtis To: scipy-dev Sent: Monday, December 18, 2006 5:14 PM Subject: Re: [SciPy-dev] Period table? Yeah, the element class will accept any number of **kwargs, but the basics are included as positional arguments as well. How much information do we need to be given in order to automatically calculate the rest of the data? I know with the atomic number you can find the electron configuration, which can be used to find the period, group, and valence electron number. What do you think would be the minimal amount of data required to be inputted? -- "What's money? A man is a success if he gets up in the morning and goes to bed at night and in between does what he wants to do." ~ Bob Dylan _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-dev From adamadamadamamiadam at gmail.com Tue Dec 19 11:27:01 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Tue, 19 Dec 2006 16:27:01 +0000 (UTC) Subject: [SciPy-dev] Period table? 
References: <458724F4.2080108@gmail.com> Message-ID: Robert Kern gmail.com> writes: > > Adam Curtis wrote: > > How exactly do I submit the source code I currently have written? (sorry, a bit > > unfamiliar with how this works) > > Register for the www.scipy.org wiki and make a page describing your code. Add it > to the page as an attachment. When we've seen the code and hashed out the design > a little bit, then we'll talk about putting it in the Subversion repository. > Yeah, you'll probably be doing a lot of review. I'm not exactly known for my legibility. From adamadamadamamiadam at gmail.com Tue Dec 19 11:30:57 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Tue, 19 Dec 2006 16:30:57 +0000 (UTC) Subject: [SciPy-dev] Temperature (and other unit) conversions Message-ID: I think an interesting way to approach unit conversions would be to make actual unit classes (for example: a Temperature class), which would inherit from float and would keep track of what unit was being used so that it could seamlessly convert units together during arithmetic operations of differing units. The idea could be extended to other areas as well. Thoughts? From alexandre.fayolle at logilab.fr Tue Dec 19 11:46:26 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Tue, 19 Dec 2006 17:46:26 +0100 Subject: [SciPy-dev] Temperature (and other unit) conversions In-Reply-To: References: Message-ID: <20061219164626.GI5637@crater.logilab.fr> On Tue, Dec 19, 2006 at 04:30:57PM +0000, Adam Curtis wrote: > I think an interesting way to approach unit conversions would be to make > actual unit classes (for example: a Temperature class), which would inherit > from float and would keep track of what unit was being used so that it could > seamlessly convert units together during arithmetic operations of differing > units. There's something for unit management in scientific-python, by Konrad Hinsen.
-- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations D?veloppement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science Reprise et maintenance de sites CPS: http://www.migration-cms.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From adamadamadamamiadam at gmail.com Tue Dec 19 11:48:35 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Tue, 19 Dec 2006 16:48:35 +0000 (UTC) Subject: [SciPy-dev] Temperature (and other unit) conversions References: Message-ID: Adam Curtis gmail.com> writes: > > I think an interesting way too approach unit conversions would be to make > actual unit classes (for example: a Temperature class), which would inherit > from float and would keep track of what unit was being used so that it could > seemlessly convert units together during arithmetic operations of differing > unit. > > The idea could be extended to other areas as well. Thoughts? > Example of how it might work: x = Temperature(20, "c") #Room temperature print x y = x.converto("f") #Room temperature in Farenheit print y print y == x print Temperature(30, "c") + y #Addition of measurements with different units Output: 20 (c) xx (f) #heh, don't know the exact conversion 50 (c) #First measurement in an operation would be used for the return value From adamadamadamamiadam at gmail.com Tue Dec 19 11:49:59 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Tue, 19 Dec 2006 16:49:59 +0000 (UTC) Subject: [SciPy-dev] Temperature (and other unit) conversions References: <20061219164626.GI5637@crater.logilab.fr> Message-ID: Alexandre Fayolle logilab.fr> writes: > There's something for unit management in scientific-python, by Konrad > Hinsen. > Ah. Alright then. 
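Adam's Temperature idea from this thread can be made concrete. The sketch below is hypothetical — the class name, the unit codes, and the `convert_to` method are illustrative choices, not an existing SciPy API — but it shows the float-subclass approach working, including the unit-aware equality his example relies on:

```python
class Temperature(float):
    """Hypothetical sketch: a float that remembers its unit."""
    # (scale, offset) relative to Celsius: x = celsius * scale + offset
    _units = {"c": (1.0, 0.0), "f": (1.8, 32.0), "k": (1.0, 273.15)}

    def __new__(cls, value, unit="c"):
        obj = float.__new__(cls, value)
        obj.unit = unit.lower()
        return obj

    def convert_to(self, unit):
        # go through Celsius as the common intermediate unit
        scale, offset = self._units[self.unit]
        celsius = (float(self) - offset) / scale
        scale, offset = self._units[unit.lower()]
        return Temperature(celsius * scale + offset, unit)

    def __eq__(self, other):
        # compare in a common unit, as Adam's `y == x` example expects
        if isinstance(other, Temperature):
            return float(self.convert_to("c")) == float(other.convert_to("c"))
        return float.__eq__(self, other)

x = Temperature(20, "c")
y = x.convert_to("f")
print(float(y))   # 68.0
print(y == x)     # True
```

A fuller version would also make arithmetic return `Temperature` results in the left operand's unit, which is the part the thread leaves open.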
From adamadamadamamiadam at gmail.com Tue Dec 19 14:04:09 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Tue, 19 Dec 2006 19:04:09 +0000 (UTC) Subject: [SciPy-dev] Period table? References: <453f59300612181414k76f16fbdj36e7441039ce1baa@mail.gmail.com> <107a01c72301$aa43fbb0$6400a8c0@D380> Message-ID: val vtek.com> writes: > To me, the minimal amount of data to calculate a new data is > an open-ended issue and a matter of quality of the calculated result. > The docstring may include the description of how a new data is computed. > New (comp) methods can be added/integrated into the Element class > step-by-step. E.g., widely used interpolation techniques > can be helpful for ranges not available (directly measured). Yeah, I agree. While it may be convenient to calculate and interpolate all of the data from a few primitive base values, it may not always come out accurate, and there are many special quirks within the periodic table itself despite its regularity. Accuracy should be the top priority, followed by power and ease of use. From david.huard at gmail.com Tue Dec 19 15:34:18 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 19 Dec 2006 15:34:18 -0500 Subject: [SciPy-dev] Fwd: [sage-devel] numpy in SAGE, etc. References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> Message-ID: Le Mon, 18 Dec 2006 13:14:22 -0500, Alan Isaac a écrit : > On Mon, 18 Dec 2006, David Huard wrote: >> I got epydoc running on plain docstrings and the >> latex-math role from Jens running. However, I haven't >> looked at how to use both together. > > I'm in the same position.
But > I guess this should be simple for anyone familiar with > docutils, so I'll copy this to docutils-users. > > Cheers, > Alan Isaac If I understand correctly, I'd have to add the latex-math role to rst/roles.py and the latex-math directive to the rst/directives directory and register it in the __init__. However, the directives defined in Jens' sandbox are writer specific, so I'm a bit lost. A little bit of context: The SciPy and NumPy folks are looking at the various documentation systems out there to build the API documentation and tutorials. Up to now, the combination epydoc+reST seems to most powerful. However, Latex formulas are a must for those packages and the raw role is a bit low-level for our needs, hence the interest in including the latex-math role and directive in the trunk so that epydoc can run smoothly using it. Thanks a lot, David From jensj at fysik.dtu.dk Wed Dec 20 08:10:09 2006 From: jensj at fysik.dtu.dk (Jens =?ISO-8859-1?Q?J=F8rgen?= Mortensen) Date: Wed, 20 Dec 2006 14:10:09 +0100 Subject: [SciPy-dev] [Docutils-users] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: References: <457814CB.80307@bigpond.net.au> <2BE1BE79-F32A-43AC-851E-440942DC0720@nist.gov> <457A0398.90406@bigpond.net.au> <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> Message-ID: <1166620209.29049.8.camel@doppler.fysik.dtu.dk> On Tue, 2006-12-19 at 15:34 -0500, David Huard wrote: > If I understand correctly, I'd have to add the latex-math > role to rst/roles.py and the latex-math directive to the > rst/directives directory and register it in the __init__. However, the > directives defined in Jens' sandbox are writer specific, so I'm a bit lost. You will have to merge the code from rst2mathml.py and rst2latexmath.py. 
I would start from the code in rst2mathml.py, where the node class (latex_math) will have to be changed to something like this: class latex_math(nodes.Element): tagname = '#latex-math' def __init__(self, rawsource, mathml_tree, latex): nodes.Element.__init__(self, rawsource) self.mathml_tree = mathml_tree self.latex = latex Also the latex_math_role function and the latex_math_directive class will have to be modified a bit to use the new node class. And then the visit/depart methods should be added to the writers. Hope that helps, Jens J?rgen > A little bit of context: > The SciPy and NumPy folks are looking at the > various documentation systems out there to build the API documentation > and tutorials. Up to now, the combination epydoc+reST seems to most > powerful. However, Latex formulas are a must for those packages and the > raw role is a bit low-level for our needs, hence the interest in > including the latex-math role and directive in the trunk so that epydoc > can run smoothly using it. From nwagner at iam.uni-stuttgart.de Wed Dec 20 08:21:52 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 20 Dec 2006 14:21:52 +0100 Subject: [SciPy-dev] scipy.io.mmwrite() of sparse matrices Message-ID: <458938F0.3000406@iam.uni-stuttgart.de> Hi all, It's currently not possible to store sparse matrices using scipy.io.mmwrite(). Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/io/mmio.py", line 269, in mmwrite typecode = a.gettypecode() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 237, in __getattr__ raise AttributeError, attr + " not found" AttributeError: gettypecode not found http://projects.scipy.org/scipy/scipy/ticket/317 Please can someone fix this problem in the near future ? Thanks in advance. 
Nils >>> scipy.__version__ '0.5.3.dev2439' From david.huard at gmail.com Wed Dec 20 11:53:09 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 20 Dec 2006 11:53:09 -0500 Subject: [SciPy-dev] [Docutils-users] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <1166620209.29049.8.camel@doppler.fysik.dtu.dk> References: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> <1166620209.29049.8.camel@doppler.fysik.dtu.dk> Message-ID: <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> Thanks Jens, For those who would like to try it out, here is the patch made on a recent svn checkout of docutils. The patch adds the latex-math role and directive to docutils. Using it, you should be able to run epydoc and generate pdf output from the docstrings of any python module (correctly formated, that is). The html output doesn't work very well for me, but that may be due to my browser not supporting MathML fonts. The next step would be to add support for the :input:, :ouput:, :example: and :see also: tags. It shouldn't be difficult, but I'll leave that to someone else : ) To see it in action, try epydoc --pdf epytest.py Cheers, David 2006/12/20, Jens J?rgen Mortensen : > > On Tue, 2006-12-19 at 15:34 -0500, David Huard wrote: > > If I understand correctly, I'd have to add the latex-math > > role to rst/roles.py and the latex-math directive to the > > rst/directives directory and register it in the __init__. However, the > > directives defined in Jens' sandbox are writer specific, so I'm a bit > lost. > > You will have to merge the code from rst2mathml.py and rst2latexmath.py. 
> I would start from the code in rst2mathml.py, where the node class > (latex_math) will have to be changed to something like this: > > class latex_math(nodes.Element): > tagname = '#latex-math' > def __init__(self, rawsource, mathml_tree, latex): > nodes.Element.__init__(self, rawsource) > self.mathml_tree = mathml_tree > self.latex = latex > > Also the latex_math_role function and the latex_math_directive class > will have to be modified a bit to use the new node class. > > And then the visit/depart methods should be added to the writers. > > Hope that helps, > Jens J?rgen > > > A little bit of context: > > The SciPy and NumPy folks are looking at the > > various documentation systems out there to build the API documentation > > and tutorials. Up to now, the combination epydoc+reST seems to most > > powerful. However, Latex formulas are a must for those packages and the > > raw role is a bit low-level for our needs, hence the interest in > > including the latex-math role and directive in the trunk so that epydoc > > can run smoothly using it. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: epytest.py Type: text/x-python Size: 332 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: latex-math.patch Type: text/x-patch Size: 6821 bytes Desc: not available URL: From mattknox_ca at hotmail.com Wed Dec 20 12:33:58 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 20 Dec 2006 12:33:58 -0500 Subject: [SciPy-dev] TimeSeries class now sub-class of MaskedArray Message-ID: I changed the basic structure of the TimeSeries class in the timeseries module in the sandbox. Previously it was a subclass of a new ShiftingArray class (code for that still available in the sandbox), but now it is just a subclass of MaskedArray. 
See my earlier post "time series implementation approach" for a description of the two approaches. Both approaches have their pros/cons, but the ShiftingArray approach (no defined start and end points to the TimeSeries with dynamic expansion to accommodate indices given to it) was much more complicated to implement, and potentially "un-natural" for people used to working with standard arrays. If anyone has had a chance to look at the module at all, I would be very interested to hear some initial thoughts and opinions (positive or negative). In particular, I'd like to know if people feel that this module is on the right track as a starting point for a timeseries module, or way off base and needs to be radically rethought. - Matt From aisaac at american.edu Wed Dec 20 13:22:22 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 20 Dec 2006 13:22:22 -0500 Subject: [SciPy-dev] math in scipy docs In-Reply-To: <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> References: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov><7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov><91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com><45839F7D.1070400@bigpond.net.au><91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com><1166620209.29049.8.camel@doppler.fysik.dtu.dk><91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> Message-ID: On Wed, 20 Dec 2006, David Huard apparently wrote: > The html output doesn't work very well for me, but that > may be due to my browser not supporting MathML fonts. It is XHTML+MathML. So you need to: 1. give the document a .xml extension (or otherwise ensure that it is served as XML) 2.
Use FireFox, since IE support for XML is nil (through version 6, anyway) Cheers, Alan Isaac From david.huard at gmail.com Wed Dec 20 14:23:05 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 20 Dec 2006 14:23:05 -0500 Subject: [SciPy-dev] TimeSeries class now sub-class of MaskedArray In-Reply-To: References: Message-ID: <91cf711d0612201123p4c946c3cgaccfb0f724bb1120@mail.gmail.com> Hi Matt, The init script doesn't build cseries.c David 2006/12/20, Matt Knox : > > > I changed the basic structure of the TimeSeries class in the timeseries > module in the sandbox. Previously it was a subclass of a new ShiftingArray > class (code for that still available in the sandbox), but now it is just a > subclass of MaskedArray. > > See my earlier post "time series implementation approach" for a > description of the two approaches. Both approaches have their pros/cons, but > the ShiftingArray approach (no defined start and end points to the > TimeSeries with dynamic expansion to accomodate indices given to it) was > much more complicated to implement, and potentially "un-natural" for people > used to working with standard arrays. > > If anyone has had a chance to look at the module at all, I would be very > interested to hear some initials thoughts and opinions (positive or > negative). In particular, I'd like to know if people feel that this module > is on the right track as a starting point for a timeseries module, or way > off base and needs to be radically rethought. > > - Matt > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mattknox_ca at hotmail.com Wed Dec 20 14:36:16 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 20 Dec 2006 14:36:16 -0500 Subject: [SciPy-dev] TimeSeries class now sub-class of MaskedArray Message-ID: > Matt, > > The init script doesn't build cseries.c > > David I'm not sure how I would do that really. I just have a manual script I run with hard coded paths on windows for compiling. In particular, since it uses files from the mx DateTime module during compilation, I'm not sure how I would do that in a generic way without hard coding the paths. So if someone has a generic way of doing this that they can share, that would be great. This is what I use on windows to do it: import sys, os os.chdir("Q:\\work\\matt\\python\\timeseries\\") from distutils.core import setup, Extension setup(name="cseries", version="1.0", ext_modules=[Extension("cseries", sources=["Q:\\work\\matt\\python\\timeseries\\cseries.c"], include_dirs=["C:\\Python24\\Lib\\site-packages\\numpy\\core\\include\\numpy","C:\\Python24\\Lib\\site-packages\\mx\\DateTime\\mxDateTime"] ) ] ) os.chdir("Q:\\work\\matt\\python\\timeseries\\build_utils") # to build the cseries extension, run: # python cseries-setup.py build_ext -i - Matt From v-nijs at kellogg.northwestern.edu Wed Dec 20 17:00:38 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Wed, 20 Dec 2006 16:00:38 -0600 Subject: [SciPy-dev] Tutorial Message-ID: I am a new user of Scipy but have used Ox (www.doornik.com) and Gauss for about 10 years. As I am learning about Scipy I noticed that some basic things like how to read/write data are missing from the tutorial. I'd love to add to the tutorial as I learn myself but I get the 'You are not allowed to edit this page.' warning when I press edit in the wiki. Is there a way to gain access to this page or should I send suggestions/code snippets to the dev-list (seems inefficient)? 
Thanks, Vincent From robert.kern at gmail.com Wed Dec 20 17:27:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 20 Dec 2006 16:27:53 -0600 Subject: [SciPy-dev] Tutorial In-Reply-To: References: Message-ID: <4589B8E9.9080208@gmail.com> Vincent Nijs wrote: > I am a new user of Scipy but have used Ox (www.doornik.com) and Gauss for > about 10 years. As I am learning about Scipy I noticed that some basic > things like how to read/write data are missing from the tutorial. > > I'd love to add to the tutorial as I learn myself but I get the 'You are not > allowed to edit this page.' warning when I press edit in the wiki. You must create an account first. http://www.scipy.org/UserPreferences -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Wed Dec 20 18:14:22 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 20 Dec 2006 18:14:22 -0500 Subject: [SciPy-dev] TimeSeries class now sub-class of MaskedArray In-Reply-To: References: Message-ID: <200612201814.22635.pgmdevlist@gmail.com> On Wednesday 20 December 2006 14:36, Matt Knox wrote: > > Matt, > > > > The init script doesn't build cseries.c > > > > David > > I'm not sure how I would do that really. I just have a manual script I run > with hard coded paths on windows for compiling. I had to write a small setup.py to get cseries.c to compile. I hope Matt won't mind my having posted it on the svn server. Note that you may wanna check that the `cseries.c` in `src` is the same as the `cseries.c` in root... (I thought it'd be better to put the c code in its own folder. Not that it matters much...) From adamadamadamamiadam at gmail.com Thu Dec 21 15:42:41 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Thu, 21 Dec 2006 20:42:41 +0000 (UTC) Subject: [SciPy-dev] Period table? 
References: <458724F4.2080108@gmail.com> Message-ID: Robert Kern gmail.com> writes: > Register for the www.scipy.org wiki and make a page describing your code. > Add it to the page as an attachment. When we've seen the code and hashed > out the design a little bit, then we'll talk about putting it in the > Subversion repository. Alright... the page is up on the wiki. Link: http://projects.scipy.org/scipy/scipy/wiki/PeriodicTable From tim.leslie at gmail.com Thu Dec 21 18:05:32 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 22 Dec 2006 10:05:32 +1100 Subject: [SciPy-dev] Period table? In-Reply-To: References: <458724F4.2080108@gmail.com> Message-ID: On 12/22/06, Adam Curtis wrote: > Robert Kern gmail.com> writes: > > Register for the www.scipy.org wiki and make a page describing your code. > > Add it to the page as an attachment. When we've seen the code and hashed > > out the design a little bit, then we'll talk about putting it in the > > Subversion repository. > > Alright... the page is up on the wiki. > > Link: http://projects.scipy.org/scipy/scipy/wiki/PeriodicTable > Thanks for putting that up Adam. It all looks pretty good so far, but the following caught my eye: * All temperatures should be in Celsius. Conversions will be performed by methods (or a package in scipy, if scipy has a unit conversion system) Would it not be better to work in units of Kelvin? In my experience Kelvin is used more often than Celcius in scientific work. What do others think? Cheers, Tim > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From dd55 at cornell.edu Thu Dec 21 18:20:34 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 21 Dec 2006 18:20:34 -0500 Subject: [SciPy-dev] Period table? 
In-Reply-To: References: Message-ID: <200612211820.34359.dd55@cornell.edu> On Thursday 21 December 2006 6:05 pm, Tim Leslie wrote: > On 12/22/06, Adam Curtis wrote: > > Robert Kern gmail.com> writes: > > > Register for the www.scipy.org wiki and make a page describing your > > > code. Add it to the page as an attachment. When we've seen the code and > > > hashed out the design a little bit, then we'll talk about putting it in > > > the Subversion repository. > > > > Alright... the page is up on the wiki. > > > > Link: http://projects.scipy.org/scipy/scipy/wiki/PeriodicTable > > Thanks for putting that up Adam. It all looks pretty good so far, but > the following caught my eye: > > * All temperatures should be in Celsius. Conversions will be performed > by methods (or a package in scipy, if scipy has a unit conversion > system) > > Would it not be better to work in units of Kelvin? In my experience > Kelvin is used more often than Celcius in scientific work. What do > others think? I agree, Kelvin would be the most natural choice. Plenty of thermodynamics calculations demand temperature be in units of Kelvin. Darren From adamadamadamamiadam at gmail.com Thu Dec 21 22:26:41 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Fri, 22 Dec 2006 03:26:41 +0000 (UTC) Subject: [SciPy-dev] Period table? References: <200612211820.34359.dd55@cornell.edu> Message-ID: Kelvin it is then. I'm not very familiar on scientific standards. I guess it -does- make more sense though... Changing that now. From gruben at bigpond.net.au Fri Dec 22 02:34:22 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 22 Dec 2006 18:34:22 +1100 Subject: [SciPy-dev] [Docutils-users] Fwd: [sage-devel] numpy in SAGE, etc. 
In-Reply-To: <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> References: <406544C4-1AB6-47D8-82A9-435B4F19C6A2@nist.gov> <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> <1166620209.29049.8.camel@doppler.fysik.dtu.dk> <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> Message-ID: <458B8A7E.9090003@bigpond.net.au> Hi David, I applied your patch, but I've missed something because I get an error when it tries to import parse_latex_math latex_math from latexparser. i.e. latexparser is nowhere to be found. Is this in some other file which Jens has or do I need to rename something? I wasn't able to work this out from reading others' posts to this thread. thanks, Gary David Huard wrote: > Thanks Jens, > > For those who would like to try it out, here is the patch made on a > recent svn checkout of docutils. > The patch adds the latex-math role and directive to docutils. Using it, > you should be able to run epydoc and > generate pdf output from the docstrings of any python module (correctly > formated, that is). The html output doesn't work very well for me, but > that may be due to my browser not supporting MathML fonts. > > The next step would be to add support for the :input:, :ouput:, > :example: and :see also: tags. It shouldn't be difficult, but I'll leave > that to someone else : ) > > To see it in action, try > > epydoc --pdf epytest.py From david.huard at gmail.com Fri Dec 22 11:43:16 2006 From: david.huard at gmail.com (David Huard) Date: Fri, 22 Dec 2006 11:43:16 -0500 Subject: [SciPy-dev] [Docutils-users] Fwd: [sage-devel] numpy in SAGE, etc. 
In-Reply-To: <458B8A7E.9090003@bigpond.net.au> References: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> <1166620209.29049.8.camel@doppler.fysik.dtu.dk> <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> <458B8A7E.9090003@bigpond.net.au> Message-ID: <91cf711d0612220843g181954a4l535d1398328da7f4@mail.gmail.com> Hi Gary, No, it's my mistake. The svn diff didn't catch one file because it wasn't versioned. I've attached a new patch including it. Sorry I missed that. David 2006/12/22, Gary Ruben : > > Hi David, > I applied your patch, but I've missed something because I get an error > when it tries to import parse_latex_math latex_math from latexparser. > i.e. latexparser is nowhere to be found. Is this in some other file > which Jens has or do I need to rename something? I wasn't able to work > this out from reading others' posts to this thread. > thanks, > Gary > > David Huard wrote: > > Thanks Jens, > > > > For those who would like to try it out, here is the patch made on a > > recent svn checkout of docutils. > > The patch adds the latex-math role and directive to docutils. Using it, > > you should be able to run epydoc and > > generate pdf output from the docstrings of any python module (correctly > > formated, that is). The html output doesn't work very well for me, but > > that may be due to my browser not supporting MathML fonts. > > > > The next step would be to add support for the :input:, :ouput:, > > :example: and :see also: tags. 
It shouldn't be difficult, but I'll leave > > that to someone else : ) > > > > To see it in action, try > > > > epydoc --pdf epytest.py > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: latexmath.patch Type: text/x-patch Size: 23468 bytes Desc: not available URL: From gruben at bigpond.net.au Fri Dec 22 20:05:42 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 23 Dec 2006 12:05:42 +1100 Subject: [SciPy-dev] [Docutils-users] Fwd: [sage-devel] numpy in SAGE, etc. In-Reply-To: <91cf711d0612220843g181954a4l535d1398328da7f4@mail.gmail.com> References: <7359BA99-5730-4839-896C-E7FD7FCC9DEF@nist.gov> <91cf711d0612120920x49101fcag696f68e1fa786432@mail.gmail.com> <45839F7D.1070400@bigpond.net.au> <91cf711d0612180903h7d1f6cbfpe105ec5f645fafde@mail.gmail.com> <1166620209.29049.8.camel@doppler.fysik.dtu.dk> <91cf711d0612200853m5663ee62q58762be395350ecd@mail.gmail.com> <458B8A7E.9090003@bigpond.net.au> <91cf711d0612220843g181954a4l535d1398328da7f4@mail.gmail.com> Message-ID: <458C80E6.7070106@bigpond.net.au> Thanks David, It works now. Actually, epydoc did break at the LaTeX point on my system because it wants fancyheadings.sty, which is a deprecated LaTeX package not package-managed by MiKTeX - I just replaced it in the .tex output with fancyhdr.sty, because I couldn't be bothered installing fancyheadings, and it worked. This is a problem in epydoc and not a problem with your patch. We probably don't need to use the epydoc preamble anyway. thanks, Gary David Huard wrote: > Hi Gary, > No, it's my mistake. The svn diff didn't catch one file because it > wasn't versioned. > I've attached a new patch including it. > > Sorry I missed that.
> > David From millman at berkeley.edu Sat Dec 23 08:04:46 2006 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 23 Dec 2006 05:04:46 -0800 Subject: [SciPy-dev] ScipyTest vs. NumpyTest Message-ID: Hey everyone, I have a couple of fairly trivial changes that I would like to make, but I wanted to run them by everyone before going ahead. I was looking at the testing system and it seems a little confusing at this point. I would like to clean this up and make the testing more consistent. First, I wasn't easily able to find documentation. The DEVELOPERS.txt file states that every SciPy module should contain: 75 + a directory ``tests/`` that contains files ``test_<name>.py`` 76 corresponding to modules ``xxx/{<name>.py,<name>.so,<name>/}``. See below for 77 more details. But there is nothing more below. The old Plone SciPy site had a development/testguide.html page, but I couldn't find a corresponding page on the new sites. I migrated the information from the old Plone site: http://projects.scipy.org/scipy/scipy/wiki/TestingGuidelines I need to finish the migration to reflect the current state of affairs, which I will do as I find time. Ultimately, more documentation about the testing is needed in the code. Second, it seems that after ScipyTest was changed to NumpyTest in changeset 2029 there is some confusion about whether to use the new name or not: http://projects.scipy.org/scipy/numpy/changeset/2029 Currently the scipy codebase uses both. I think that for consistency we should only use NumpyTest. I am happy to make the changes, but I wanted to make sure that everyone agreed that this is reasonable. I also think that the NumpyTest docstring should be updated: http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/testing/info.py If no one objects, I will file a ticket with numpy.
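The tests/ convention quoted from DEVELOPERS.txt can be sketched concretely. The module, function, and class names below are invented for illustration, and plain unittest stands in for the ScipyTest/NumpyTest machinery under discussion:

```python
# tests/test_ramp.py -- hypothetical companion test file for a module xxx/ramp.py
# (names are made up; real scipy tests of this era hooked into NumpyTest instead)
import unittest

def ramp(n):
    # stand-in for a function that would be imported from xxx/ramp.py
    return [i * 0.5 for i in range(n)]

class TestRamp(unittest.TestCase):
    def test_length(self):
        self.assertEqual(len(ramp(4)), 4)

    def test_values(self):
        self.assertEqual(ramp(3), [0.0, 0.5, 1.0])

if __name__ == "__main__":
    unittest.main()
```

A test collector can then discover the file by its ``test_`` prefix, which is the point of the naming convention.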
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From adamadamadamamiadam at gmail.com Mon Dec 25 15:46:27 2006 From: adamadamadamamiadam at gmail.com (Adam Curtis) Date: Mon, 25 Dec 2006 20:46:27 +0000 (UTC) Subject: [SciPy-dev] Astronomy stuff Message-ID: So I just remembered this "moonphase" package I have lying around my library that was given to me by an astronomer guy. Think we could possibly use some of his code (with his permission) and make some changes to it so that it fits into the library? From tim.leslie at gmail.com Tue Dec 26 20:47:50 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Wed, 27 Dec 2006 12:47:50 +1100 Subject: [SciPy-dev] Time to require python 2.4? Message-ID: Hi All, I'd like to start a discussion to see whether it's time for scipy to require Python 2.4. 2.4 has been out for over two years now (released in Nov 2004) and there is at least one outstanding patch for scipy which requires 2.4[1]. AFAIK there are no major distributions for which 2.3 is the default version. What do other people think? Are there any plans to make 2.4 a requirement in the near future? Are there many people who still use 2.3? Is this something that might be considered for the next release? Cheers, Tim [1] http://projects.scipy.org/scipy/scipy/ticket/196 From robert.kern at gmail.com Tue Dec 26 21:17:08 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Dec 2006 21:17:08 -0500 Subject: [SciPy-dev] Time to require python 2.4? In-Reply-To: References: Message-ID: <4591D7A4.90605@gmail.com> Tim Leslie wrote: > Hi All, > > I'd like to start a discussion to see whether it's time for scipy to > require Python 2.4. 2.4 has been out for over two years now (released > in Nov 2004) and there is at least one outstanding patch for scipy > which requires 2.4[1]. AFAIK there are no major distributions for > which 2.3 is the default version. 
Debian stable and OS X are still major distributions, in my mind. > What do other people think? Are there any plans to make 2.4 a > requirement in the near future? Are there many people who still use > 2.3? Is this something that might be considered for the next release? Yes, I think, many people still use Python 2.3. As for the patch, there's nothing intrinsic to Python 2.4 that it requires. It simply takes advantage of Python 2.4 syntax sugar; it does not have to do so. IMO, widely-used libraries should continue to support Python distributions until there is a really compelling new feature that cannot be implemented in the old versions; we should not drop support for old versions just because they are old. Most of the "missing features" that I encountered while targeting Python 2.3 were set(), generator expressions, and decorators. All of them are very easily worked around in 2.3. So my question to you is, specifically what features are only in 2.4 that you think we need to use to support new features in scipy? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.leslie at gmail.com Tue Dec 26 21:56:11 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Wed, 27 Dec 2006 13:56:11 +1100 Subject: [SciPy-dev] Time to require python 2.4? In-Reply-To: <4591D7A4.90605@gmail.com> References: <4591D7A4.90605@gmail.com> Message-ID: On 12/27/06, Robert Kern wrote: > Tim Leslie wrote: > > Hi All, > > > > I'd like to start a discussion to see whether it's time for scipy to > > require Python 2.4. 2.4 has been out for over two years now (released > > in Nov 2004) and there is at least one outstanding patch for scipy > > which requires 2.4[1]. AFAIK there are no major distributions for > > which 2.3 is the default version. > > Debian stable and OS X are still major distributions, in my mind.
> My bad, I probably should have said "easily available" rather than "default version". > > What do other people think? Are there any plans to make 2.4 a > > requirement in the near future? Are there many people who still use > > 2.3? Is this something that might be considered for the next release? > > Yes, I think, many people still use Python 2.3. As for the patch, there's > nothing intrinsic to Python 2.4 that it requires. It simply takes advantage of > Python 2.4 syntax sugar; it does not have to do so. > This is true. > IMO, widely-used libraries should continue to support Python distributions until > there is a really compelling new feature that cannot be implemented in the old > versions; we should not drop support for old versions just because they > are old. Most of the "missing features" that I encountered while targeting > Python 2.3 were set(), generator expressions, and decorators. All of them are > very easily worked around in 2.3. > > So my question to you is, specifically what features are only in 2.4 that you > think we need to use to support new features in scipy? > There are none in particular, I just wanted to test the water and see what the status on this issue was. The ticket posted was merely the catalyst for enquiry. If the benefits of the new features aren't considered enough to warrant a change then I'm happy for us to stick with 2.3. I'd like to hear where other people stand on this issue and perhaps see what criteria would be sufficient to require an update in the required python version. Thanks for your input on this, Robert. Cheers, Tim > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth."
> -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From akinoame1 at gmail.com Wed Dec 27 02:30:52 2006 From: akinoame1 at gmail.com (Denis Simakov) Date: Wed, 27 Dec 2006 09:30:52 +0200 Subject: [SciPy-dev] Time to require python 2.4? In-Reply-To: References: Message-ID: <73eb51090612262330qfe06f3bwa2d343ef59007f1f@mail.gmail.com> Our institute's servers all have Python 2.3 installed. At the moment it is not realistic to persuade admins to upgrade (and an upgrade is indeed difficult as you need to bother a lot of users who have processes running). So personally I would very much like to see SciPy continuing to work on 2.3. Denis On 12/27/06, Tim Leslie wrote: > Hi All, > > I'd like to start a discussion to see whether it's time for scipy to > require Python 2.4. 2.4 has been out for over two years now (released > in Nov 2004) and there is at least one outstanding patch for scipy > which requires 2.4[1]. AFAIK there are no major distributions for > which 2.3 is the default version. > > What do other people think? Are there any plans to make 2.4 a > requirement in the near future? Are there many people who still use > 2.3? Is this something that might be considered for the next release? > > Cheers, > > Tim > > [1] http://projects.scipy.org/scipy/scipy/ticket/196 From wnbell at gmail.com Fri Dec 29 18:33:03 2006 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 29 Dec 2006 17:33:03 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! Message-ID: I've rewritten sparse/sparsetools to speed up some of the slower bits in the current code. The previous implementations of CSR/CSC matrix addition, multiplication, and conversion were all very slow, often orders of magnitude slower than MATLAB or PySparse.
In the new implementation:

- CSR<->CSC conversion is cheap (linear time)
- sparse addition is fast (linear time)
- sparse multiplication is fast (noticeably faster than MATLAB in my testing)
- CSR column indices do not need to be sorted (same for CSC row indices)
- the results of all the above operations contain no explicit zeros
- COO->CSR and COO->CSC conversion is linear time and sums duplicate entries

This last point is useful for constructing stiffness/mass matrices. Matrix multiplication is essentially optimal (modulo Strassen-like algorithms) and is based on the SMMP algorithm I mentioned a while ago. I implemented the functions in C++ with SWIG bindings. The C++ consists of a single header file with templated functions for each operation. SWIG makes it easy to instantiate templates for each data type (float, double, long double, npy_cfloat, npy_cdouble, npy_clongdouble). For the complex types I had some trouble at first, as they do not support arithmetic operations out of the box; that is, A+B works when A and B are floats or doubles, but not npy_cfloats or npy_cdoubles. To remedy this I wrote C++ operators for the complex types (in another header file) to implement the necessary operations. I used the SWIG example in the NumPy codebase (with some alterations), so strided arrays should be handled correctly. Also, when two different types are used together (e.g. a CSR matrix of type float64 is added to a CSR matrix of type float32) SWIG overloading takes care of the conversion. It will determine that float32->float64 is a safe cast and use the float64 version of the function. Some care is needed to ensure that the most economical function is chosen, but I think this can be accomplished just by ordering the functions appropriately (e.g. float32 before float64). Currently the indices are assumed to be C ints. If we wanted both 32-bit and 64-bit ints it would be possible to template the index type as well.
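The linear-time CSR<->CSC conversion claimed in the list above is a counting sort over column indices. The sketch below illustrates the general technique in plain Python for brevity; it is not the actual sparsetools code (which is templated C++), and the function name and signature are invented:

```python
def csr_to_csc(n_row, n_col, Ap, Aj, Ax):
    """Convert CSR arrays (Ap, Aj, Ax) to CSC arrays (Bp, Bi, Bx)
    in O(nnz + n_row + n_col) time."""
    nnz = Ap[n_row]
    Bp = [0] * (n_col + 1)
    Bi = [0] * nnz
    Bx = [0] * nnz
    # count the number of entries in each column
    for j in Aj:
        Bp[j + 1] += 1
    # prefix sum turns the counts into column pointers
    for col in range(n_col):
        Bp[col + 1] += Bp[col]
    # scatter each CSR entry into its column, tracking the next free slot
    next_slot = Bp[:n_col]
    for row in range(n_row):
        for k in range(Ap[row], Ap[row + 1]):
            dest = next_slot[Aj[k]]
            next_slot[Aj[k]] += 1
            Bi[dest] = row    # row index of this entry
            Bx[dest] = Ax[k]  # value
    return Bp, Bi, Bx
```

Swapping the roles of rows and columns runs the same code as CSC->CSR, which is why the conversion is cheap in both directions; a similar counting pass underlies the linear-time COO->CSR conversion.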
One challenge in writing sparse matrix routines is that the size of the output is not known in advance. For example, multiplying two sparse matrices together leads to an unpredictable number of nonzeros in the result. To get around this I used STL vectors as SWIG ARGOUTs. This produces a nice pythonic interface for most of the functions. For functions operating on dense 2D arrays I chose to use an INPLACE array, since the size is known in advance. The code is available here: http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse.tgz The sparse directory in the archive is intended to replace the sparse directory in scipy/Lib/. I've successfully installed the above code with the standard scipy setup.py and all the unittests pass. A simple benchmark for sparse matrix multiplication in SciPy and MATLAB is available here: http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse_times.m http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse_times.py On my laptop SciPy takes about 0.33 seconds to MATLAB's 0.55, so 40% faster. The code was all written by myself, aside from the bits pilfered from the NumPy SWIG examples and the SciPy UMFPACK SWIG wrapper, so there shouldn't be any license issues. I welcome any comments, questions, and criticism you have. If you know a better or more robust way to interface the C++ with SciPy, please speak up! Also, who should I contact regarding SVN commit privileges? -- Nathan Bell wnbell at gmail.com From robert.kern at gmail.com Fri Dec 29 19:03:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 29 Dec 2006 19:03:17 -0500 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: Message-ID: <4595ACC5.5060806@gmail.com> Nathan Bell wrote: > Also, who should I contact regarding SVN commit privileges? Me. You might have to wait until Tuesday, due to the holidays, though.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From v-nijs at kellogg.northwestern.edu Sat Dec 30 15:45:19 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sat, 30 Dec 2006 14:45:19 -0600 Subject: [SciPy-dev] Scipy cookbook++ ? In-Reply-To: <4595ACC5.5060806@gmail.com> Message-ID: The cookbook on Scipy.org is really great. I learned a lot from the simple examples shown there. In its current form it is a great alternative to a tutorial. I wonder, however, if it will remain easy to use if people add a lot of information to these pages. Perhaps scipy could use something like the tips and scripts posting set-up on the vim site (see links below). On this site tips/scripts get posted/updated to a database. There is an interface to search and browse for scripts. You can also list tips/scripts by number of views/downloads or ratings. http://www.vim.org/tips/index.php http://www.vim.org/scripts/index.php Vincent