From tim.leslie at gmail.com Wed Jan 3 02:06:40 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Wed, 3 Jan 2007 18:06:40 +1100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061013164030.GB16246@mentat.za.net> References: <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> <20061013003856.GN32224@mentat.za.net> <3a1077e70610130742s54d73815ra22a39392b4ce827@mail.gmail.com> <20061013164030.GB16246@mentat.za.net> Message-ID: Hi All, I've been working through a bunch of tickets and noticed this one has stagnated. Can someone comment on the status of this? http://projects.scipy.org/scipy/scipy/ticket/286 The second patch to this ticket has not been applied. Is there a reason not to? I noticed the other thread on this topic suggested that a complete solution might require moving delaunay out of the sandbox. Is this still the case? Cheers, Tim On 10/14/06, Stefan van der Walt wrote: > Hi John > > On Fri, Oct 13, 2006 at 03:42:13PM +0100, John Travers wrote: > > Attached is a very simple script (will make it a test later) which > > uses the test data from netlib->dierckx to check the use of regrid and > > surfit. The plot attached shows the results. There could be an error > > in my dealing with meshgrid (I hate the way it swaps axes), but I > > think it is right. The surfit part issues a warning on interpolation. > > If you increase s to say 20 (0 is needed for interpolation) then the > > warning goes, but then you are smoothing (as can be seen from the > > corresponding output plot - which was down sampled for file size). 
> > I think matplotlib may be throwing a spanner in the wheels here by > using its own interpolation. If you use > > imshow(x,interpolation='nearest') > > you get a plot like > > http://mentat.za.net/results/interpolate.png > > (I changed to a more densely sampled grid). > > As for the meshgrid behaviour, take a look at numpy's mgrid, which > doesn't do the argument swapping. > > > I'll make this into a test at some point and also improve the interp2d > > interface to be more flexible to input array layout - but this will > > have to be next week due to time constraints (I need to work...) > > I'm sorry, I still haven't had time to merge the patch -- will > hopefully be able to do that over the weekend at least. > > > Hope this demonstrates my point (and that you don't find an obvious > > error in my code...) > > Thanks for the demo! Pretty pictures always make for a convincing > argument. > > Cheers > Stéfan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From jtravs at gmail.com Wed Jan 3 04:00:58 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 3 Jan 2007 09:00:58 +0000 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: References: <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> <20061013003856.GN32224@mentat.za.net> <3a1077e70610130742s54d73815ra22a39392b4ce827@mail.gmail.com> <20061013164030.GB16246@mentat.za.net> Message-ID: <3a1077e70701030100g76140645h30e9a6534ac8c5d0@mail.gmail.com> On 03/01/07, Tim Leslie wrote: > Hi All, I've been working through a bunch of tickets and noticed this > one has stagnated. Can someone comment on the status of this? 
> > http://projects.scipy.org/scipy/scipy/ticket/286 > > The second patch to this ticket has not been applied. Is there a > reason not to? I noticed the other thread on this topic suggested that > a complete solution might require moving delaunay out of the sandbox. > Is this still the case? > > Cheers, > > Tim Hi Tim, everybody, I am (slowly) trying to work on improving the interp2d function as well as spline fitting in general. The first patch attached to the ticket you mention is applied and provides the RectBivariateSpline class for fitting a spline to rectangular gridded data. The second patch changed interp2d to use this class rather than the fitpack surfit function (which absolutely should not be used for interpolation). This was not applied as it was decided that interp2d really must support non-uniform grid data. The only solution to this problem (surfit still needs replacing) proposed to date is to use Robert Kern's delaunay package from the sandbox. In order to achieve this I proposed that the spline *fitting* code be taken out of scipy.interpolate and put into scipy.spline. And that scipy.interpolate should import methods from both the spline and delaunay package to provide interp1d, interp2d, interpnd only. My first step towards this was to create a spline package in the sandbox. In doing so I got sidetracked into cleaning it up (moving to purely f2py etc.) And then I got seriously sidetracked at work (PhD thesis work), so my timing wasn't great. TODO: 1. finish the sandbox.spline package (I could skip cleaning it up for now, I'm about half way through on my machine (uncommitted) - do people think moving to a clean f2py interface is really worth it?) 2. create a sandbox.interpolate package that contains the new interp functionality based on spline and delaunay only 3. after testing, move all three packages out of the sandbox What do people think about all of this? It will lead to quite large API changes (but on a high level, i.e. 
scipy.interpolate.RectBivariateSpline will become scipy.spline.RectBivariateSpline) Cheers, John From oliphant at ee.byu.edu Wed Jan 3 17:09:57 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 03 Jan 2007 15:09:57 -0700 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: Message-ID: <459C29B5.40706@ee.byu.edu> Nathan Bell wrote: >I've rewritten sparse/sparsetools to speed up some of the slower bits >in current code. The previous implementations of CSR/CSC matrix >addition, multiplication, and conversion were all very slow, often >orders of magnitude slower than MATLAB or PySparse. > > > Very nice. Thank you for your help on these algorithms. >In the new implementation: >- CSR<->CSC conversion is cheap (linear time) >- sparse addition is fast (linear time) >- sparse multiplication is fast (noticeably faster than MATLAB in my testing) >- CSR column indices do not need to be sorted (same for CSC row indices) >- the results of all the above operations contain no explicit zeros >- COO->CSR and COO->CSC is linear time and sums duplicate entries > > >This last point is useful for constructing stiffness/mass matrices. >Matrix multiplication is essentially optimal (modulo Strassen-like >algorithms) and is based on the SMMP algorithm I mentioned a while >ago. > > Excellent. Thank you. >I implemented the functions in C++ with SWIG bindings. > This isn't my most favorite way to include things in SciPy, but it can work. SciPy is all about whoever contributes and *their* favorite way. >The code is available here: >http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse.tgz > >The sparse directory in the archive is intended to replace the sparse >directory in scipy/Lib/ >I've successfully installed the above code with the standard scipy >setup.py and all the unittests pass. > > This sounds good. 
>A simple benchmark for sparse matrix multiplication in SciPy and MATLAB >is available here: >http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse_times.m >http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse_times.py > >On my laptop SciPy takes about 0.33 seconds to MATLAB's 0.55, so 40% faster. > >The code was all written by myself, aside from the bits pilfered from >the NumPy SWIG examples and SciPy UMFPACK SWIG wrapper, so there >shouldn't be any license issues. > >I welcome any comments, questions, and criticism you have. If you >know a better or more robust way to interface the C++ with Scipy >please speak up! > > The only other ways I know of are weave, hand-written wrappers, boost-python (which seems like a totally different paradigm), and C wrappers which are then called using ctypes. We might think about using weave, but swig is also fine because you don't have to have swig to get it to work. How are you coming on getting access to the SVN repository? I might also be able to give you access. I looked over the code and could not find where the multiplication was actually being done (I got lost in the interface). Does the tar file really contain all of your code? Sparse matrices do need work and this has the potential to really help things. -Travis > >Also, who should I contact regarding SVN commit privileges? > > > From wnbell at gmail.com Wed Jan 3 18:51:08 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 3 Jan 2007 17:51:08 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <459C29B5.40706@ee.byu.edu> References: <459C29B5.40706@ee.byu.edu> Message-ID: On 1/3/07, Travis Oliphant wrote: > How are you coming on getting access to the SVN repository? I might > also be able to give you access. I looked over the code and could not > find where the multiplication was actually being done (I got lost in the > interface). Does the tar file really contain all of your code? I now have access to SVN (thanks Jeff and Robert). 
All of the algorithms are in sparse/sparsetools/sparsetools.h of the archive. csrmucsr() is the function that handles matrix multiplication. Also on the web here: http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse/sparsetools/sparsetools.h Unless someone has an objection, I plan to commit the changes in a week or so (once I've written a more comprehensive set of unittests). -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Thu Jan 4 06:55:02 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 04 Jan 2007 12:55:02 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: Message-ID: <459CEB16.80308@ntc.zcu.cz> Nathan Bell wrote: > I've rewritten sparse/sparsetools to speed up some of the slower bits > in current code. The previous implementation of CSR/CSC matrix > addition, multiplication, and conversions were all very slow, often > orders of magnitude slower than MATLAB or PySparse. > > In the new implementation: > - CSR<->CSC conversion is cheap (linear time) > - sparse addition is fast (linear time) > - sparse multiplication is fast (noticeably faster than MATLAB in my testing) > - CSR column indices do not need to be sorted (same for CSC row indices) > - the results of all the above operations contain no explicit zeros > - COO->CSR and COO->CSC is linear time and sums duplicate entries > > > This last point is useful for constructing stiffness/mass matrices. > Matrix multiplication is essentially optimal (modulo Strassen-like > algorithms) and is based on the SMMP algorithm I mentioned a while > ago. Hi Nathan, I have not had time to look at your code yet (just returned from holidays) but the description looks great! One umfpack-related note - the solver requires the CSC/CSR column/row indices sorted in ascending order. Does your implementation contain a function to ensure_sorted_indices()? r. 
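Nathan's reply below explains why no dedicated sorting routine existed at this point: a single CSR->CSC pass is linear time and sorts the indices as a side effect, so the round trip CSR->CSC->CSR yields a CSR matrix with sorted column indices. A minimal NumPy sketch of that counting-sort conversion (illustrative only, not the actual sparsetools implementation; the function name is made up):

```python
import numpy as np

def csr_to_csc(n_row, n_col, Ap, Aj, Ax):
    """Linear-time CSR -> CSC conversion (a counting-sort pass).

    Side effect: the row indices within each output column come out
    sorted, because the CSR rows are visited in ascending order.
    """
    nnz = Ap[-1]
    Bp = np.zeros(n_col + 1, dtype=int)
    for j in Aj[:nnz]:                  # count entries per column
        Bp[j + 1] += 1
    Bp = np.cumsum(Bp)                  # column pointers
    Bi = np.empty(nnz, dtype=int)
    Bx = np.empty(nnz, dtype=float)
    nxt = Bp[:-1].copy()                # next free slot in each column
    for i in range(n_row):              # scatter rows in order
        for k in range(Ap[i], Ap[i + 1]):
            j = Aj[k]
            Bi[nxt[j]] = i
            Bx[nxt[j]] = Ax[k]
            nxt[j] += 1
    return Bp, Bi, Bx

# A 2x3 CSR matrix whose row 0 stores its columns out of order: [2, 0]
Ap = np.array([0, 2, 3]); Aj = np.array([2, 0, 1]); Ax = np.array([5.0, 1.0, 2.0])
Bp, Bi, Bx = csr_to_csc(2, 3, Ap, Aj, Ax)   # CSR -> CSC
# CSC of A is CSR of A^T, so the same routine converts back again:
Cp, Cj, Cx = csr_to_csc(3, 2, Bp, Bi, Bx)   # CSC -> CSR
# Cj is now sorted within each row: row 0 holds columns [0, 2]
```

Because each pass is a stable counting sort, the double conversion answers Robert's question at the cost of two linear passes and two temporary copies.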
From wnbell at gmail.com Thu Jan 4 19:05:15 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 4 Jan 2007 18:05:15 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <459CEB16.80308@ntc.zcu.cz> References: <459CEB16.80308@ntc.zcu.cz> Message-ID: On 1/4/07, Robert Cimrman wrote: > Hi Nathan, I have not had time to look at your code yet (just returned > from holidays) but the description looks great! > > One umfpack-related note - the solver requires the CSC/CSR column/row > indices sorted in ascending order. Does your implementation contain a > function to ensure_sorted_indices()? Not currently, but it would be trivial to add. Converting from CSR<->CSC (a linear time operation) has the side effect of sorting the column/row indices. Two conversions (CSR->CSC->CSR) would produce the desired result in the original format. I will add ensure_sorted_indices() to csr_matrix and csc_matrix. -- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Fri Jan 5 03:51:52 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 05 Jan 2007 09:51:52 +0100 Subject: [SciPy-dev] undefined symbol: atl_f77wrap_izamax__ Message-ID: Hi, My recent findings concerning the installation problems on openSUSE 10.2 are listed below. I have added two symbolic links (Thanks to Andreas Hanke) ln -s /lib64/libgcc_s.so.1 /usr/lib64/libgcc_s.so ln -s /lib/libgcc_s.so.1 /usr/lib/libgcc_s.so I was able to install numpy. 
But if I import numpy I get >>> import numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib64/python2.5/site-packages/numpy/__init__.py", line 40, in <module> import linalg File "/usr/local/lib64/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in <module> from linalg import * File "/usr/local/lib64/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in <module> from numpy.linalg import lapack_lite ImportError: /usr/local/lib64/python2.5/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: atl_f77wrap_izamax__ I have installed ATLAS 3.7.11 from scratch. Any pointer is appreciated. Thanks in advance. Nils From nwagner at iam.uni-stuttgart.de Fri Jan 5 04:41:36 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 05 Jan 2007 10:41:36 +0100 Subject: [SciPy-dev] *** glibc detected *** /usr/bin/python: double free or corruption (!prev): 0x0000000002263320 *** Message-ID: Hi, Oops, I forgot to remove the build directory in numpy. Sorry for the noise! Finally I was able to install numpy and scipy! Now scipy.test() freezes with Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. .................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ....................................../usr/local/lib64/python2.5/site-packages/scipy/interpolate/fitpack2.py:457: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...........*** glibc detected *** /usr/bin/python: double free or corruption (!prev): 0x0000000002263320 *** Can someone reproduce this? Should I file a bug report? 
Nils From robert.kern at gmail.com Fri Jan 5 04:48:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 05 Jan 2007 03:48:55 -0600 Subject: [SciPy-dev] *** glibc detected *** /usr/bin/python: double free or corruption (!prev): 0x0000000002263320 *** In-Reply-To: References: Message-ID: <459E1F07.3060105@gmail.com> Nils Wagner wrote: > Can someone reproduce this ? Not on my MacBook, no. > Should I file a bug report ? Yes, when you locate the function that is giving that error. Use scipy.test(1,10) . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Fri Jan 5 05:44:10 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 05 Jan 2007 11:44:10 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> Message-ID: <459E2BFA.80505@ntc.zcu.cz> Nathan Bell wrote: > On 1/4/07, Robert Cimrman wrote: >> Hi Nathan, I have not had time to look at your code yet (just returned >> from holidays) but the description looks great! >> >> One umfpack-related note - the solver requires the CSC/CSR column/row >> indices sorted in ascending order. Does your implementation contain a >> function to ensure_sorted_indices()? > > Not currently, but it would be trivial to add. Converting from > CSR<->CSC (a linear time operation) has the side effect of sorting the > column/row indices. Two conversions (CSR->CSC->CSR) would produce the > desired result in the original format. > > I will add ensure_sorted_indices() to csr_matrix and csc_matrix. Good! Going CSR->CSC->CSR would be too inefficient, IMHO. Maybe we could add a flag to the CSR/CSC matrix class saying that it has sorted indices to make the check trivial. It should be added as keyword argument to the matrix constructor, too. r. 
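Earlier in the thread Nathan notes that the new COO->CSR conversion sums duplicate entries, which is exactly what stiffness/mass matrix assembly needs: overlapping element contributions to the same (row, col) slot are accumulated. A rough NumPy sketch of the idea (a hypothetical helper, sort-based for brevity rather than the linear-time pass the C++ code uses):

```python
import numpy as np

def coo_to_csr(n_row, rows, cols, vals):
    """Build CSR arrays from COO triplets, summing duplicate entries."""
    order = np.lexsort((cols, rows))        # sort by (row, col)
    r, c, v = rows[order], cols[order], vals[order]
    # flag the first entry of each distinct (row, col) pair
    first = np.ones(len(r), dtype=bool)
    first[1:] = (r[1:] != r[:-1]) | (c[1:] != c[:-1])
    group = np.cumsum(first) - 1            # group id per entry
    data = np.zeros(first.sum())
    np.add.at(data, group, v)               # sum the duplicates
    colind = c[first]
    indptr = np.zeros(n_row + 1, dtype=int)
    np.add.at(indptr, r[first] + 1, 1)      # entries per row
    indptr = np.cumsum(indptr)
    return indptr, colind, data

# Two overlapping "element" contributions hit entry (1, 1) twice
rows = np.array([0, 1, 1, 1]); cols = np.array([0, 1, 2, 1])
vals = np.array([1.0, 3.0, 4.0, 5.0])
indptr, colind, data = coo_to_csr(2, rows, cols, vals)
# indptr = [0, 1, 3], colind = [0, 1, 2], data = [1.0, 8.0, 4.0]
```

A useful by-product of sorting by (row, col) is that the resulting CSR column indices are already sorted within each row.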
From wnbell at gmail.com Fri Jan 5 09:02:49 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 5 Jan 2007 08:02:49 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <459E2BFA.80505@ntc.zcu.cz> References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/5/07, Robert Cimrman wrote: > Good! Going CSR->CSC->CSR would be too inefficient, IMHO. Maybe we could > add a flag to the CSR/CSC matrix class saying that it has sorted indices > to make the check trivial. It should be added as keyword argument to the > matrix constructor, too. Well, while that's certainly possible I'd recommend not doing it. First, it's possible for the flag to become invalid if the user manipulates the data directly. This leads to hard to find and hard to reproduce errors. Second, the conversion is an extremely fast linear-time operation. In my testing, one conversion is only twice as expensive as a copy()[1]. In the case of UMFPACK, a couple conversions will be insignificant to the time required by the LU factorization. In short, I think it's fair to assume that any call to ensure_sorted_indices() will be followed by some non-trivial algorithm that dominates the overall cost. I plan to commit my code to SVN later today. Once you're up and running with the new sparsetools the relative costs of the conversions/operations will be more clear. [1] for a 1Mx1M matrix with 5M nnz, the conversion takes 1.02 seconds, while a copy alone takes 0.53 seconds -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Fri Jan 5 09:44:33 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 05 Jan 2007 15:44:33 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: <459E6451.6040600@ntc.zcu.cz> Nathan Bell wrote: > On 1/5/07, Robert Cimrman wrote: >> Good! Going CSR->CSC->CSR would be too inefficient, IMHO. 
Maybe we could >> add a flag to the CSR/CSC matrix class saying that it has sorted indices >> to make the check trivial. It should be added as keyword argument to the >> matrix constructor, too. > > Well, while that's certainly possible I'd recommend not doing it. > First, it's possible for the flag to become invalid if the user > manipulates the data directly. This leads to hard to find and hard to > reproduce errors. Second, the conversion is an extremely fast > linear-time operation. In my testing, one conversion is only twice as > expensive as a copy()[1]. In the case of UMFPACK, a couple > conversions will be insignificant to the time required by the LU > factorization. In short, I think it's fair to assume that any call to > ensure_sorted_indices() will be followed by some non-trivial algorithm > that dominates the overall cost. ok, I agree that the flag is not a good idea. > I plan to commit my code to SVN later today. Once you're up and > running with the new sparsetools the relative costs of the > conversions/operations will be more clear. > > [1] for a 1Mx1M matrix with 5M nnz, the conversion takes 1.02 seconds, while > a copy alone takes 0.53 seconds Here by 'conversion' you mean CSR->CSC->CSR and ensure_sorted_indices() will be based on that? If yes, and assuming that a temporary copy is made (am I right?) what about ensure_sorted_indices() working inplace (just (quick,arg)sorting row by row)? I would like both mtx2 = mtx.ensure_sorted_indices() mtx.ensure_sorted_indices( inplace = True ) (returning None?) Well, I must try it first to be able to have relevant comments, but my computer is undergoing a software upgrade now and various stuff does not work. So sorry for the possibly unclear/stupid questions. :) anyway thanks for your work on the sparse module! r. 
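Robert's suggestion of an in-place ensure_sorted_indices(), i.e. argsorting each row's slice separately, can be sketched like this (hypothetical function name and signature, not the API that was eventually committed):

```python
import numpy as np

def ensure_sorted_indices_inplace(indptr, indices, data):
    """Sort each CSR row's column indices (and matching values) in place."""
    for i in range(len(indptr) - 1):
        lo, hi = indptr[i], indptr[i + 1]
        order = np.argsort(indices[lo:hi])      # per-row argsort
        indices[lo:hi] = indices[lo:hi][order]  # reorder the indices...
        data[lo:hi] = data[lo:hi][order]        # ...and the values to match

indptr = np.array([0, 2, 3])
indices = np.array([2, 0, 1])       # row 0 is unsorted: [2, 0]
data = np.array([5.0, 1.0, 2.0])
ensure_sorted_indices_inplace(indptr, indices, data)
# indices -> [0, 2, 1], data -> [1.0, 5.0, 2.0]
```

This avoids the two temporary copies of the CSR->CSC->CSR round trip, at the cost of an O(k log k) sort per row instead of a purely linear pass.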
From oliphant at ee.byu.edu Fri Jan 5 10:50:46 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 05 Jan 2007 08:50:46 -0700 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459C29B5.40706@ee.byu.edu> Message-ID: <459E73D6.3030801@ee.byu.edu> Nathan Bell wrote: >On 1/3/07, Travis Oliphant wrote: > > >>How are you coming on getting access to the SVN repository? I might >>also be able to give you access. I looked over the code and could not >>find where the multiplication was actually being done (I got lost in the >>interface). Does the tar file really contain all of your code? >> >> > >I now have access to SVN (thanks Jeff and Robert). > >All of the algorithms are in sparse/sparsetools/sparsetools.h of the >archive. csrmucsr() is the function that handles matrix >multiplication. > >Also on the web here: >http://graphics.cs.uiuc.edu/~wnbell/sparse/sparse/sparsetools/sparsetools.h > >Unless someone has an objection, I plan to commit the changes in a >week or so (once I've written a more comprehensive set of unittests). > > No objection here. -Travis From wnbell at gmail.com Sat Jan 6 01:05:19 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 00:05:19 -0600 Subject: [SciPy-dev] scipy.io.mmwrite() of sparse matrices In-Reply-To: <458938F0.3000406@iam.uni-stuttgart.de> References: <458938F0.3000406@iam.uni-stuttgart.de> Message-ID: On 12/20/06, Nils Wagner wrote: > Hi all, > > It's currently not possible to store sparse matrices using > scipy.io.mmwrite(). > http://projects.scipy.org/scipy/scipy/ticket/317 > > Please can someone fix this problem in the near future ? Fixed. How do I close tickets in Trac? Do I lack the permission? 
-- Nathan Bell wnbell at gmail.com From tim.leslie at gmail.com Sat Jan 6 01:11:38 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 17:11:38 +1100 Subject: [SciPy-dev] *** glibc detected *** /usr/bin/python: double free or corruption (!prev): 0x0000000002263320 *** In-Reply-To: References: Message-ID: Hi Nils, This was my bad. I've fixed it up in the latest svn. Ran 1596 tests in 2.765s OK Out[6]: In [7]: scipy.__version__ Out[7]: '0.5.3.dev2495' Sorry for the breakage of svn. Tim On 1/5/07, Nils Wagner wrote: > Hi, > > Oops, I forgot to remove the build directory in numpy. > Sorry for the noise ! > Finally I was able to install numpy and scipy ! > > Now > > scipy.test() freezes with > > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > .................Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > ....................................../usr/local/lib64/python2.5/site-packages/scipy/interpolate/fitpack2.py:457: > UserWarning: > The coefficients of the spline returned have been computed > as the > minimal norm least-squares solution of a (numerically) > rank deficient > system (deficiency=7). If deficiency is large, the results > may be > inaccurate. Deficiency may strongly depend on the value of > eps. > warnings.warn(message) > ...........*** glibc detected *** /usr/bin/python: double > free or corruption (!prev): 0x0000000002263320 *** > > Can someone reproduce this ? > > Should I file a bug report ? 
> > Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Sat Jan 6 01:39:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 06 Jan 2007 00:39:48 -0600 Subject: [SciPy-dev] scipy.io.mmwrite() of sparse matrices In-Reply-To: References: <458938F0.3000406@iam.uni-stuttgart.de> Message-ID: <459F4434.9050308@gmail.com> Nathan Bell wrote: > How do I close tickets in Trac? Do I lack the permission? You probably do. You should have been given the "developer" role in Trac when you were given your account, but apparently, you have not. I can't check or fix that myself right now, so I'll poke our sysadmin. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.leslie at gmail.com Sat Jan 6 01:44:49 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 17:44:49 +1100 Subject: [SciPy-dev] scipy.io.mmwrite() of sparse matrices In-Reply-To: References: <458938F0.3000406@iam.uni-stuttgart.de> Message-ID: On 1/6/07, Nathan Bell wrote: > On 12/20/06, Nils Wagner wrote: > > Hi all, > > > > It's currently not possible to store sparse matrices using > > scipy.io.mmwrite(). > > > http://projects.scipy.org/scipy/scipy/ticket/317 > > > > Please can someone fix this problem in the near future ? > > Fixed. > > > How do I close tickets in Trac? Do I lack the permission? > Your fix also takes care of ticket #278. http://projects.scipy.org/scipy/scipy/ticket/278 I've closed this one, but I'll leave #317 for you to close once your permissions get sorted out. 
Cheers, Tim > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From tim.leslie at gmail.com Sat Jan 6 02:17:14 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 18:17:14 +1100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Nathan Bell wrote: > I plan to commit my code to SVN later today. Once you're up and > running with the new sparsetools the relative costs of the > conversions/operations will be more clear. > Hi Nathan, I've grabbed your code from svn (thanks for putting it in!) but I'm having a bit of a problem with the tests. Running scipy.test() results in 83 test failures, for example: ====================================================================== ERROR: check_transpose (scipy.sparse.tests.test_sparse.test_lil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 202, in check_transpose assert_array_equal(a.todense(), b) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 367, in todense return asmatrix(self.toarray()) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 959, in toarray return self.tocsr().toarray() File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 955, in tocsr indptr,colind,data = sparsetools.csctocsr(self.shape[0],self.shape[1],self.indptr,self.rowind,self.data) AttributeError: 'module' object has no attribute 'csctocsr' A lot of them have this (or similar) AttributeErrors. Looking at dir(scipy.sparse.sparsetools) shows a different set of functions to what appear in sparsetools.py. 
>>> dir(scipy.sparse.sparsetools) ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', 'ccscadd', 'ccscextract', 'ccscgetel', 'ccscmucsc', 'ccscmucsr', 'ccscmul', 'ccscmux', 'ccscsetel', 'ccsctocoo', 'ccsctofull', 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', 'ctransp', 'dcootocsc', 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', 'dcscmucsr', 'dcscmul', 'dcscmux', 'dcscsetel', 'dcsctocoo', 'dcsctofull', 'dcsrmucsc', 'dcsrmux', 'ddiatocsc', 'dfulltocsc', 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', 'scscmucsc', 'scscmucsr', 'scscmul', 'scscmux', 'scscsetel', 'scsctocoo', 'scsctofull', 'scsrmucsc', 'scsrmux', 'sdiatocsc', 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', 'zcscgetel', 'zcscmucsc', 'zcscmucsr', 'zcscmul', 'zcscmux', 'zcscsetel', 'zcsctocoo', 'zcsctofull', 'zcsrmucsc', 'zcsrmux', 'zdiatocsc', 'zfulltocsc', 'ztransp'] Do you have this same problem, or is this something I might have done wrong somewhere along the line? Cheers, Tim > [1] for a 1Mx1M matrix with 5M nnz, the conversion takes 1.02 seconds, while > a copy alone takes 0.53 seconds > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From wnbell at gmail.com Sat Jan 6 02:36:00 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 01:36:00 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Tim Leslie wrote: > > Hi Nathan, > > I've grabbed your code from svn (thanks for putting it in!) but I'm > having a bit of a problem with the tests. 
Running scipy.tests() > results in 83 test failures, for example: > >>> dir(scipy.sparse.sparsetools) > ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', > 'ccscadd', 'ccscextract', 'ccscgetel', 'ccscmucsc', 'ccscmucsr', > 'ccscmul', 'ccscmux', 'ccscsetel', 'ccsctocoo', 'ccsctofull', > 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', 'ctransp', > 'dcootocsc', 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', > 'dcscmucsr', 'dcscmul', 'dcscmux', 'dcscsetel', 'dcsctocoo', > 'dcsctofull', 'dcsrmucsc', 'dcsrmux', 'ddiatocsc', 'dfulltocsc', > 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', > 'scscmucsc', 'scscmucsr', 'scscmul', 'scscmux', 'scscsetel', > 'scsctocoo', 'scsctofull', 'scsrmucsc', 'scsrmux', 'sdiatocsc', > 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', > 'zcscgetel', 'zcscmucsc', 'zcscmucsr', 'zcscmul', 'zcscmux', > 'zcscsetel', 'zcsctocoo', 'zcsctofull', 'zcsrmucsc', 'zcsrmux', > 'zdiatocsc', 'zfulltocsc', 'ztransp'] > > Do you have this same problem, or this something I might have done > wrong somewhere along the line. > Those appear to be from the previous fortran implementation. Can you try: rm -rf scipy/build rm -rf scipy/Lib/sparse svn update and see if that remedies the problem? I suspect you just have a stray _sparsetools.so from before. -- Nathan Bell wnbell at gmail.com From tim.leslie at gmail.com Sat Jan 6 03:10:55 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 19:10:55 +1100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Nathan Bell wrote: > On 1/6/07, Tim Leslie wrote:> > > Hi Nathan, > > > > I've grabbed your code from svn (thanks for putting it in!) but I'm > > having a bit of a problem with the tests. 
Running scipy.tests() > > results in 83 test failures, for example: > > > >>> dir(scipy.sparse.sparsetools) > > ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', > > 'ccscadd', 'ccscextract', 'ccscgetel', 'ccscmucsc', 'ccscmucsr', > > 'ccscmul', 'ccscmux', 'ccscsetel', 'ccsctocoo', 'ccsctofull', > > 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', 'ctransp', > > 'dcootocsc', 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', > > 'dcscmucsr', 'dcscmul', 'dcscmux', 'dcscsetel', 'dcsctocoo', > > 'dcsctofull', 'dcsrmucsc', 'dcsrmux', 'ddiatocsc', 'dfulltocsc', > > 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', > > 'scscmucsc', 'scscmucsr', 'scscmul', 'scscmux', 'scscsetel', > > 'scsctocoo', 'scsctofull', 'scsrmucsc', 'scsrmux', 'sdiatocsc', > > 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', > > 'zcscgetel', 'zcscmucsc', 'zcscmucsr', 'zcscmul', 'zcscmux', > > 'zcscsetel', 'zcsctocoo', 'zcsctofull', 'zcsrmucsc', 'zcsrmux', > > 'zdiatocsc', 'zfulltocsc', 'ztransp'] > > > > Do you have this same problem, or this something I might have done > > wrong somewhere along the line. > > > > Those appear to be from the previous fortran implementation. Can you try: > > rm -rf scipy/build > rm -rf scipy/Lib/sparse > svn update > > and see if that remedies the problem? I suspect you just have a stray > _sparsetools.so from before. That solved the problem, thanks! 
I now have 4 remaining test failures of the form: ====================================================================== ERROR: Test for new slice functionality (EJS) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 283, in check_get_horiz_slice assert_array_equal(B[1,:], A[1,:].todense()) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 364, in todense return asmatrix(self.toarray()) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 1460, in toarray sparsetools.csrtodense(self.shape[0],self.shape[1],self.indptr,self.colind,self.data,data) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparsetools.py", line 380, in csrtodense return _sparsetools.csrtodense(*args) NotImplementedError: No matching function for overloaded 'csrtodense' Any ideas? Cheers, Tim > > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From wnbell at gmail.com Sat Jan 6 03:26:11 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 02:26:11 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Tim Leslie wrote: > That solved the problem, thanks! 
I now have 4 remaining test failures > of the form: > > ====================================================================== > ERROR: Test for new slice functionality (EJS) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 283, in check_get_horiz_slice > assert_array_equal(B[1,:], A[1,:].todense()) > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > 364, in todense > return asmatrix(self.toarray()) > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > 1460, in toarray > sparsetools.csrtodense(self.shape[0],self.shape[1],self.indptr,self.colind,self.data,data) > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparsetools.py", > line 380, in csrtodense > return _sparsetools.csrtodense(*args) > NotImplementedError: No matching function for overloaded 'csrtodense' > > > Any ideas? I've made some changes to sparse.py in the last hour, so first update to version 2499. On my system, all unittests pass with 2499. The error is caused when SWIG cannot find a version of the (overloaded) function with the correct types. This usually means that one of the arguments is of the wrong type. For example I recently fixed some bugs caused by self.shape being a float tuple instead of an int tuple. Update to 2499 and let me know if the problem persists. If so, see if you can determine the types of the arguments. Thanks for the quick feedback! -- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Sat Jan 6 03:29:12 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 02:29:12 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! 
In-Reply-To: <459E6451.6040600@ntc.zcu.cz> References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> Message-ID: On 1/5/07, Robert Cimrman wrote: > Here by 'conversion' you mean CSR->CSC->CSR and ensure_sorted_indices() > will be based on that? If yes, and assuming that a temporary copy is > made (am I right?) what about ensure_sorted_indices() working inplace > (just (quick,arg)sorting row by row)? > I would like both > mtx2 = mtx.ensure_sorted_indices() > mtx.ensure_sorted_indices( inplace = True ) (returning None?) I wrote an implementation of ensure_sorted_indices() with an inplace option and updated umfpack.py accordingly (please double check this). For csr_matrix I have: def ensure_sorted_indices(self,inplace=False): """Return a copy of this matrix where the column indices are sorted """ if inplace: temp = self.tocsc().tocsr() self.colind = temp.colind self.indptr = temp.indptr self.data = temp.data else: return self.tocsc().tocsr() Of course this does not actually perform an inplace sort :) However it is sufficient to make your umfpack.py work as expected. Give this implementation a try and let me know if it's still too slow. If it is, then we can think about how to perform a true inplace sort. I suspect argsorting row by row will be significantly slower than this method (due to the python overhead). -- Nathan Bell wnbell at gmail.com From tim.leslie at gmail.com Sat Jan 6 03:33:54 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 19:33:54 +1100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Nathan Bell wrote: > On 1/6/07, Tim Leslie wrote: > > That solved the problem, thanks! 
I now have 4 remaining test failures > > of the form: > > > > ====================================================================== > > ERROR: Test for new slice functionality (EJS) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > > line 283, in check_get_horiz_slice > > assert_array_equal(B[1,:], A[1,:].todense()) > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > 364, in todense > > return asmatrix(self.toarray()) > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > 1460, in toarray > > sparsetools.csrtodense(self.shape[0],self.shape[1],self.indptr,self.colind,self.data,data) > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparsetools.py", > > line 380, in csrtodense > > return _sparsetools.csrtodense(*args) > > NotImplementedError: No matching function for overloaded 'csrtodense' > > > > > > Any ideas? > > > I've made some changes to sparse.py in the last hour, so first update > to version 2499. On my system, all unittests pass with 2499. > > The error is caused when SWIG cannot find a version of the > (overloaded) function with the correct types. This usually means that > one of the arguments is of the wrong type. For example I recently > fixed some bugs caused by self.shape being a float tuple instead of an > int tuple. > > Update to 2499 and let me know if the problem persists. If so, see if > you can determine the types of the arguments. > > Thanks for the quick feedback! Those errors are with 2499. I'll do some more debugging and see if I can find what's going on. 
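For anyone else chasing this, a minimal numpy sketch of the kind of mismatch being discussed (the toy indptr values are hypothetical, not taken from the failing tests): on a 64-bit platform numpy's default integer dtype need not match the C `int` the SWIG wrappers were compiled against, so SWIG's overload resolution finds no matching `csrtodense` signature.

```python
import numpy as np

# Hypothetical index array for a tiny CSR matrix.
indptr_default = np.array([0, 2, 3])              # platform default integer dtype
indptr_intc = np.array([0, 2, 3], dtype=np.intc)  # matches a C 'int' exactly

# On most 64-bit Unix systems the default is int64, while np.intc stays
# the size of a C int (4 bytes on common platforms), so a wrapper
# overloaded on 'int*' arguments only accepts the second array.
print(indptr_default.dtype, indptr_intc.dtype)
```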
Cheers, Tim > > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From tim.leslie at gmail.com Sat Jan 6 04:39:06 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 20:39:06 +1100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Tim Leslie wrote: > On 1/6/07, Nathan Bell wrote: > > On 1/6/07, Tim Leslie wrote: > > > That solved the problem, thanks! I now have 4 remaining test failures > > > of the form: > > > > > > ====================================================================== > > > ERROR: Test for new slice functionality (EJS) > > > ---------------------------------------------------------------------- > > > Traceback (most recent call last): > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > > > line 283, in check_get_horiz_slice > > > assert_array_equal(B[1,:], A[1,:].todense()) > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > > 364, in todense > > > return asmatrix(self.toarray()) > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > > 1460, in toarray > > > sparsetools.csrtodense(self.shape[0],self.shape[1],self.indptr,self.colind,self.data,data) > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparsetools.py", > > > line 380, in csrtodense > > > return _sparsetools.csrtodense(*args) > > > NotImplementedError: No matching function for overloaded 'csrtodense' > > > > > > > > > Any ideas? > > > > > > I've made some changes to sparse.py in the last hour, so first update > > to version 2499. On my system, all unittests pass with 2499. > > > > The error is caused when SWIG cannot find a version of the > > (overloaded) function with the correct types. 
This usually means that > > one of the arguments is of the wrong type. For example I recently > > fixed some bugs caused by self.shape being a float tuple instead of an > > int tuple. > > > > Update to 2499 and let me know if the problem persists. If so, see if > > you can determine the types of the arguments. > > > > Thanks for the quick feedback! > > Those errors are with 2499. I'll do some more debugging and see if I > can find what's going on. OK, it looks like it's a 64 bit problem. The tests pass on my 32 bit machine, it's only the 64 bit one that's a problem. It seems that some parameters are of dtype int64 when they need to be int32. I'll keep debugging this and keep you informed. Tim > > Cheers, > > Tim > > > > > -- > > Nathan Bell wnbell at gmail.com > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From tim.leslie at gmail.com Sat Jan 6 05:01:56 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 6 Jan 2007 21:01:56 +1100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Tim Leslie wrote: > On 1/6/07, Tim Leslie wrote: > > On 1/6/07, Nathan Bell wrote: > > > On 1/6/07, Tim Leslie wrote: > > > > That solved the problem, thanks! 
I now have 4 remaining test failures > > > > of the form: > > > > > > > > ====================================================================== > > > > ERROR: Test for new slice functionality (EJS) > > > > ---------------------------------------------------------------------- > > > > Traceback (most recent call last): > > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > > > > line 283, in check_get_horiz_slice > > > > assert_array_equal(B[1,:], A[1,:].todense()) > > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > > > 364, in todense > > > > return asmatrix(self.toarray()) > > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line > > > > 1460, in toarray > > > > sparsetools.csrtodense(self.shape[0],self.shape[1],self.indptr,self.colind,self.data,data) > > > > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparsetools.py", > > > > line 380, in csrtodense > > > > return _sparsetools.csrtodense(*args) > > > > NotImplementedError: No matching function for overloaded 'csrtodense' > > > > > > > > > > > > Any ideas? > > > > > > > > > I've made some changes to sparse.py in the last hour, so first update > > > to version 2499. On my system, all unittests pass with 2499. > > > > > > The error is caused when SWIG cannot find a version of the > > > (overloaded) function with the correct types. This usually means that > > > one of the arguments is of the wrong type. For example I recently > > > fixed some bugs caused by self.shape being a float tuple instead of an > > > int tuple. > > > > > > Update to 2499 and let me know if the problem persists. If so, see if > > > you can determine the types of the arguments. > > > > > > Thanks for the quick feedback! > > > > Those errors are with 2499. I'll do some more debugging and see if I > > can find what's going on. > > OK, it looks like it's a 64 bit problem. The tests pass on my 32 bit > machine, it's only the 64 bit one that's a problem. 
It seems that some > parameters are of dtype int64 when they need to be int32. I'll keep > debugging this and keep you informed. > Problem solved. There were some arrays which were created without dtype=numpy.intc. These have been fixed in revision 2500. Cheers, Tim > Tim > > > > Cheers, > > Tim > > > > > > -- > > > Nathan Bell wnbell at gmail.com > > > _______________________________________________ > > > Scipy-dev mailing list > > > Scipy-dev at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > From wnbell at gmail.com Sat Jan 6 05:10:50 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 04:10:50 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459E2BFA.80505@ntc.zcu.cz> Message-ID: On 1/6/07, Tim Leslie wrote: > Problem solved. There were some arrays which were created without > dtype=numpy.intc. These have been fixed in revision 2500. Great! Thanks for following up on that bug. In the future I plan to template the index type in sparsetools to provide support for both int32 and int64. Until then we'll need to stick with numpy.intc. I don't expect many people will have SciPy matrices with > 2^31 rows anytime soon, but it's good to be future-proof :) Does numpy support arrays with dimension > 2^31? -- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Sat Jan 6 06:43:14 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 06 Jan 2007 12:43:14 +0100 Subject: [SciPy-dev] sparsetools_wrap.cxx Message-ID: Hi, I cannot install the latest scipy (r2500) due to g++: Lib/sparse/sparsetools/sparsetools_wrap.cxx Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »int SWIG_Python_ConvertFunctionPtr(PyObject*, void**, swig_type_info*)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:2000: Fehler: ungültige Umwandlung von »const char*« in »char*«
Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »int require_size(PyArrayObject*, int*, int)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:2677: Warnung: format »%d« erwartet Typ »int«, aber Argument 3 hat Typ »long int« Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:21614: Fehler: ungültige Umwandlung von »const char*« in »char*« Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »int SWIG_Python_ConvertFunctionPtr(PyObject*, void**, swig_type_info*)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:2000: Fehler: ungültige Umwandlung von »const char*« in »char*« Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »int require_size(PyArrayObject*, int*, int)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:2677: Warnung: format »%d« erwartet Typ »int«, aber Argument 3 hat Typ »long int« Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function »void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)«: Lib/sparse/sparsetools/sparsetools_wrap.cxx:21614: Fehler: ungültige Umwandlung von »const char*« in »char*«
error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -I/usr/local/lib64/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c Lib/sparse/sparsetools/sparsetools_wrap.cxx -o build/temp.linux-x86_64-2.5/Lib/sparse/sparsetools/sparsetools_wrap.o" failed with exit status 1 Nils From wnbell at gmail.com Sat Jan 6 07:23:59 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 6 Jan 2007 06:23:59 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: Message-ID: On 1/6/07, Nils Wagner wrote: > Hi, > > I cannot install the latest scipy (r2500) due to > > g++: Lib/sparse/sparsetools/sparsetools_wrap.cxx > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > »int SWIG_Python_ConvertFunctionPtr(PyObject*, void**, > swig_type_info*)«: > Lib/sparse/sparsetools/sparsetools_wrap.cxx:2000: Fehler: > ungültige Umwandlung von »const char*« in »char*« > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > »int require_size(PyArrayObject*, int*, int)«: > Lib/sparse/sparsetools/sparsetools_wrap.cxx:2677: Warnung: > format »%d« erwartet Typ »int«, aber Argument 3 hat Typ > »long int« This appears to be a problem with require_size() in numpy.i http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sparse/sparsetools/numpy.i The sparsetools numpy.i was taken from the file of the same name in the numpy/doc/swig http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/doc/swig/numpy.i The problem seems to be that require_size() assumes npy_intp to be a C int, but on other platforms it may be a long int. I think what we really want here is NPY_INTP_FMT which is #defined correctly. http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/include/numpy/ndarrayobject.h#L942 I'm not completely comfortable with the magic in numpy.i, so I'll wait for others to weigh in before making any changes.
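As a quick Python-side sanity check of the size mismatch involved (numpy.intp is the Python-visible counterpart of npy_intp; this snippet is only an illustration, not the numpy.i fix):

```python
import ctypes
import numpy as np

# npy_intp is defined to be wide enough to hold a pointer, so on 64-bit
# platforms it is 8 bytes while a C 'int' is typically 4 -- which is why
# printing an npy_intp value with a plain "%d" draws the warning above.
print(np.dtype(np.intp).itemsize,
      ctypes.sizeof(ctypes.c_void_p),
      np.dtype(np.intc).itemsize)
```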
-- Nathan Bell wnbell at gmail.com From oliphant at ee.byu.edu Sat Jan 6 22:41:10 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 06 Jan 2007 20:41:10 -0700 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459E2BFA.80505@ntc.zcu.cz> Message-ID: <45A06BD6.9060303@ee.byu.edu> Nathan Bell wrote: > On 1/6/07, Tim Leslie wrote: > >> Problem solved. There were some arrays which were created without >> dtype=numpy.intc. These have been fixed in revision 2500. >> > > Great! Thanks for following up on that bug. > > In the future I plan to template the index type in sparsetools to > provide support for both int32 and int64. Until then we'll need to > stick with numpy.intc. > > I don't expect many people will have SciPy matrices with > 2^31 rows > anytime soon, but it's good to be future-proof :) > > Does numpy support arrays with dimension > 2^31? > > Yes, but only on 64-bit machines. The C type of the dimension is an integer large enough to hold a pointer on the platform (npy_intp). -Travis From nwagner at iam.uni-stuttgart.de Sun Jan 7 04:34:28 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 07 Jan 2007 10:34:28 +0100 Subject: [SciPy-dev] www.scipy.org is down Message-ID: From v-nijs at kellogg.northwestern.edu Sun Jan 7 13:21:21 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sun, 07 Jan 2007 12:21:21 -0600 Subject: [SciPy-dev] www.scipy.org is down In-Reply-To: Message-ID: It is already back up.
Vincent On 1/7/07 3:34 AM, "Nils Wagner" wrote: > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From wnbell at gmail.com Mon Jan 8 00:29:11 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 7 Jan 2007 23:29:11 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: Message-ID: I've made the necessary changes to numpy.i and regenerated sparsetools_wrap.cxx in ver 2502. Please let us know if that clears things up. -- Nathan Bell wnbell at gmail.com From robert.kern at gmail.com Mon Jan 8 00:34:09 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 07 Jan 2007 23:34:09 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: Message-ID: <45A1D7D1.9020709@gmail.com> Nathan Bell wrote: > I've made the necessary changes to numpy.i and regenerated > sparsetools_wrap.cxx in ver 2502. Please let us know if that clears > things up. Please use SWIG CVS to generate those files. Most of the errors that Nils reported are from non-const-preserving assignments in the SWIG runtime functions. g++ 4 is stricter about those than g++ 3. SWIG CVS corrects those problems. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Mon Jan 8 04:56:35 2007 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 8 Jan 2007 03:56:35 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: <45A1D7D1.9020709@gmail.com> References: <45A1D7D1.9020709@gmail.com> Message-ID: On 1/7/07, Robert Kern wrote: > Please use SWIG CVS to generate those files. Most of the errors that Nils > reported are from non-const-preserving assignments in the SWIG runtime > functions. g++ 4 is stricter about those than g++ 3. SWIG CVS corrects those > problems. 
Hrm, I've had no problems with mine[1], even g++ -Wall runs silently on sparsetools_wrap.cxx. Anyway, thanks for the heads-up. I've regenerated the files w/ SWIG 1.3.32 (current SVN version). [1] g++ (GCC) 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5) -- Nathan Bell wnbell at gmail.com From robert.kern at gmail.com Mon Jan 8 05:48:09 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 08 Jan 2007 04:48:09 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: <45A1D7D1.9020709@gmail.com> Message-ID: <45A22169.3050303@gmail.com> Nathan Bell wrote: > On 1/7/07, Robert Kern wrote: >> Please use SWIG CVS to generate those files. Most of the errors that Nils >> reported are from non-const-preserving assignments in the SWIG runtime >> functions. g++ 4 is stricter about those than g++ 3. SWIG CVS corrects those >> problems. > > Hrm, I've had no problems with mine[1], even g++ -Wall runs silently > on sparsetools_wrap.cxx. > > Anyway, thanks for the heads-up. I've regenerated the files w/ SWIG > 1.3.32 (current SVN version). > > [1] g++ (GCC) 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5) Maybe they backed off after 4.0. i686-apple-darwin8-g++-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5367) Anyways, thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Mon Jan 8 10:23:52 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 08 Jan 2007 16:23:52 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> Message-ID: <45A26208.2050801@ntc.zcu.cz> Nathan Bell wrote: > On 1/5/07, Robert Cimrman wrote: > >> Here by 'conversion' you mean CSR->CSC->CSR and ensure_sorted_indices() >> will be based on that? 
If yes, and assuming that a temporary copy is >> made (am I right?) what about ensure_sorted_indices() working inplace >> (just (quick,arg)sorting row by row)? >> I would like both >> mtx2 = mtx.ensure_sorted_indices() >> mtx.ensure_sorted_indices( inplace = True ) (returning None?) > > > I wrote an implementation of ensure_sorted_indices() with an inplace > option and updated umfpack.py accordingly (please double check this). > > For csr_matrix I have: > > def ensure_sorted_indices(self,inplace=False): > """Return a copy of this matrix where the column indices are sorted > """ > if inplace: > temp = self.tocsc().tocsr() > self.colind = temp.colind > self.indptr = temp.indptr > self.data = temp.data > else: > return self.tocsc().tocsr() > > > Of course this does not actually perform an inplace sort :) However > it is sufficient to make your umfpack.py work as expected. OK, will check it ASAP (read: tomorrow :)). > Give this implementation a try and let me know if it's still too slow. > If it is, then we can think about how to perform a true inplace sort. > I suspect argsorting row by row will be significantly slower than > this method (due to the python overhead). Well, I have some code related to sorting the sparse matrix indices already in my feutils module (in C), so all I have to do is to implement it into your templated C++ code, so that it works for all possible matrix data dtypes. r. From millman at berkeley.edu Mon Jan 8 16:31:20 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 8 Jan 2007 13:31:20 -0800 Subject: [SciPy-dev] ScipyTest vs. NumpyTest In-Reply-To: References: Message-ID: Hello again, I sent an email right before Christmas proposing to better document and clean up some of the testing code in scipy. I have created a ticket with my proposal: http://projects.scipy.org/scipy/scipy/ticket/342 I think it is pretty straightforward, so unless I hear some objections I will plan to go ahead and start making the changes after a few days. 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cimrman3 at ntc.zcu.cz Tue Jan 9 09:04:55 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 09 Jan 2007 15:04:55 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <45A26208.2050801@ntc.zcu.cz> References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> <45A26208.2050801@ntc.zcu.cz> Message-ID: <45A3A107.9070205@ntc.zcu.cz> Robert Cimrman wrote: > Nathan Bell wrote: >> Give this implementation a try and let me know if it's still too slow. >> If it is, then we can think about how to perform a true inplace sort. >> I suspect argsorting row by row will be significantly slower than >> this method (due to the python overhead). > > Well, I have some code related to sorting the sparse matrix indices > already in my feutils module (in C), so all I have to do is to implement > it into your templated C++ code, so that it works for all possible > matrix data dtypes. Hi Nathan, I have just committed the C++ version of ensure_sorted_indices(). So far it is for the inplace branch only, but it can be easily extended. I have also written a small test case. Benchmarks were not done yet, though... You have updated the files under my hands, so I had to quickly merge the changes -> I hope no bugs were introduced :) r. From wnbell at gmail.com Tue Jan 9 09:39:37 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 9 Jan 2007 08:39:37 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <45A3A107.9070205@ntc.zcu.cz> References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> <45A26208.2050801@ntc.zcu.cz> <45A3A107.9070205@ntc.zcu.cz> Message-ID: On 1/9/07, Robert Cimrman wrote: > I have just committed the C++ version of ensure_sorted_indices().
> So far it is for the inplace branch only, but it can be easily extended. > I have also written a small test case. Benchmarks were not done yet, > though... > > You have updated the files under my hands, so I had to quickly merge the > changes -> I hope no bugs were introduced :) I just compiled and tested it myself. On my system it takes only 1/4th the time of the old method for a 1Mx1M matrix with ~5 nonzeros per row (2D Poisson problem w/ 5 point stencil). So it's safe to say it's faster :) There was a compiler warning about an "unused variable i", but everything else looks fine. Thanks for your contribution! -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Tue Jan 9 10:40:07 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 09 Jan 2007 16:40:07 +0100 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: References: <459CEB16.80308@ntc.zcu.cz> <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> <45A26208.2050801@ntc.zcu.cz> <45A3A107.9070205@ntc.zcu.cz> Message-ID: <45A3B757.6090607@ntc.zcu.cz> Nathan Bell wrote: > I just compiled and tested it myself. On my system it takes only > 1/4th the time of the old method for a 1Mx1M matrix with ~5 nonzeros > per row (2D Poisson problem w/ 5 point stencil). So it's safe to say > it's faster :) It's not the best implementation, either - the reorderings are done via a temporary buffer row by row (array[i] = array_copy[permutation[i]]). A truly in-place (cyclical) algorithm could be faster (though another array marking already permuted items would be needed, imho), but I have not had time to look into it now (I have implemented it myself several times in the past). > There was a compiler warning about an "unused variable i", but > everything else looks fine. Thanks for your contribution!
My matrices tend to be much denser (tens to hundreds nonzeros per row), but the python FE code is not working right now (new gcc + swig issues after an upgrade) so let's call it a success for now... :) r. From jeremit0 at gmail.com Tue Jan 9 11:40:25 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Tue, 9 Jan 2007 11:40:25 -0500 Subject: [SciPy-dev] sparsetools_wrap.cxx Message-ID: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> I just checked out the latest version of scipy (revision 2512). When compiling I get the following errors: g++: Lib/sparse/sparsetools/sparsetools_wrap.cxx Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'int SWIG_Python_ConvertPtr(PyObject*, void**, swig_type_info*, int)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'int SWIG_Python_ConvertPtr(PyObject*, void**, swig_type_info*, int)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: error: invalid conversion from 'const char*' to 'char*' lipo: can't figure out the architecture type of: /var/tmp//ccr6FDtl.out Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'int SWIG_Python_ConvertPtr(PyObject*, void**, swig_type_info*, int)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'int 
SWIG_Python_ConvertPtr(PyObject*, void**, swig_type_info*, int)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: error: invalid conversion from 'const char*' to 'char*' Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)': Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: error: invalid conversion from 'const char*' to 'char*' How can I fix this so I can compile/install scipy? I am using Mac OSX Intel. Thanks, Jeremy From nwagner at iam.uni-stuttgart.de Tue Jan 9 11:52:28 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 09 Jan 2007 17:52:28 +0100 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> References: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> Message-ID: On Tue, 9 Jan 2007 11:40:25 -0500 "Jeremy Conlin" wrote: > I just checked out the latest version of scipy (revision > 2512).
When > compiling I get the following errors: > > g++: Lib/sparse/sparsetools/sparsetools_wrap.cxx > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'int > SWIG_Python_ConvertPtr(PyObject*, void**, > swig_type_info*, int)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: > invalid > conversion from 'const char*' to 'char*' > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'int > SWIG_Python_ConvertPtr(PyObject*, void**, > swig_type_info*, int)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: > invalid > conversion from 'const char*' to 'char*' > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'void > SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, > swig_type_info**, swig_type_info**)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: > error: invalid > conversion from 'const char*' to 'char*' > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'void > SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, > swig_type_info**, swig_type_info**)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: > error: invalid > conversion from 'const char*' to 'char*' > lipo: can't figure out the architecture type of: > /var/tmp//ccr6FDtl.out > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'int > SWIG_Python_ConvertPtr(PyObject*, void**, > swig_type_info*, int)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: > invalid > conversion from 'const char*' to 'char*' > Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function > 'int > SWIG_Python_ConvertPtr(PyObject*, void**, > swig_type_info*, int)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:1209: error: > invalid > conversion from 'const char*' to 'char*' >
Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function >'void > SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, > swig_type_info**, swig_type_info**)': > Lib/sparse/sparsetools/sparsetools_wrap.cxx:19802: >error: invalid > conversion from 'const char*' to 'char*' > > > How can I fix this so I can compile/install scipy? I am >using Mac OSX Intel. > > Thanks, > Jeremy > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev AFAIK you should remove the following directories first rm -rf build scipy/Lib/sparse (don't forget svn update afterwards) site-packages/scipy HTH Nils From wnbell at gmail.com Tue Jan 9 11:53:33 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 9 Jan 2007 10:53:33 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> References: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> Message-ID: On 1/9/07, Jeremy Conlin wrote: > I just checked out the latest version of scipy (revision 2512). When > compiling I get the following errors: Try 2513, it should fix your problem. -- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Tue Jan 9 11:54:17 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 9 Jan 2007 10:54:17 -0600 Subject: [SciPy-dev] sparsetools - new and (hopefully) improved! In-Reply-To: <45A3B757.6090607@ntc.zcu.cz> References: <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz> <45A26208.2050801@ntc.zcu.cz> <45A3A107.9070205@ntc.zcu.cz> <45A3B757.6090607@ntc.zcu.cz> Message-ID: On 1/9/07, Robert Cimrman wrote: > My matrices tend to be much denser (tens to hundreds nonzeros per row), > but the python FE code is not working right now (new gcc + swig issues > after an upgrade) so let's call it a success for now... 
:) Well, for an NxM matrix with R nonzeros per row, the old method is O( N*R + N + M) while row-by-row sorting is O(N * R log R + M) so asymptotically the old method wins (for sufficiently large R). However, even for R=1000 my testing shows the latter to still be noticeably faster (see below). For the common case (R < 100), the row by row sort is ~5-8x faster. While these tests aren't comprehensive, I expect the row by row sort to maintain an advantage in all (practical) cases of interest. from scipy import * N = 10000 #R = 1000 A = sparse.spdiags(rand(1000,N),arange(1000)-500,N,N) In [5]: time B = A.tocsr().tocsc() CPU times: user 2.12 s, sys: 0.84 s, total: 2.96 s Wall time: 2.98 In [6]: time A.ensure_sorted_indices(True) CPU times: user 0.89 s, sys: 0.00 s, total: 0.89 s Wall time: 0.89 #R = 50 A = sparse.spdiags(rand(50,N),arange(50)-25,N,N) In [30]: time B = A.tocsr().tocsc() CPU times: user 0.11 s, sys: 0.06 s, total: 0.17 s Wall time: 0.18 In [31]: time A.ensure_sorted_indices(True) CPU times: user 0.03 s, sys: 0.00 s, total: 0.03 s Wall time: 0.03 -- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Tue Jan 9 12:00:50 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 9 Jan 2007 11:00:50 -0600 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> Message-ID: On 1/9/07, Nathan Bell wrote: > On 1/9/07, Jeremy Conlin wrote: > > I just checked out the latest version of scipy (revision 2512). When > > compiling I get the following errors: > > Try 2513, it should fix your problem. I should have elaborated. The problem is caused by using an older version of SWIG to generate the wrappers. As Robert Kern pointed out in a previous thread: """ Please use SWIG CVS to generate those files. Most of the errors that Nils reported are from non-const-preserving assignments in the SWIG runtime functions. g++ 4 is stricter about those than g++ 3. SWIG CVS corrects those problems. 
""" It appears that Apple's g++4 is more sensitive to this issue than other those in other distributions. When generating the SWIG wrappers for sparsetools, be sure to use the most recent version of SWIG (i.e. the one currently in svn). -- Nathan Bell wnbell at gmail.com From jeremit0 at gmail.com Tue Jan 9 13:21:13 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Tue, 9 Jan 2007 13:21:13 -0500 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: References: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> Message-ID: <3db594f70701091021w4d0a45b3w3bfed626734ea935@mail.gmail.com> On 1/9/07, Nathan Bell wrote: > On 1/9/07, Nathan Bell wrote: > > On 1/9/07, Jeremy Conlin wrote: > > > I just checked out the latest version of scipy (revision 2512). When > > > compiling I get the following errors: > > > > Try 2513, it should fix your problem. > > I should have elaborated. The problem is caused by using an older > version of SWIG to generate the wrappers. As Robert Kern pointed out > in a previous thread: > > """ > Please use SWIG CVS to generate those files. Most of the errors that Nils > reported are from non-const-preserving assignments in the SWIG runtime > functions. g++ 4 is stricter about those than g++ 3. SWIG CVS corrects those > problems. > """ > > It appears that Apple's g++4 is more sensitive to this issue than > other those in other distributions. > > > When generating the SWIG wrappers for sparsetools, be sure to use the > most recent version of SWIG (i.e. the one currently in svn). > > Thanks for the quick update. Now it compiles and installs. 
Jeremy From nwagner at iam.uni-stuttgart.de Tue Jan 9 13:31:41 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 09 Jan 2007 19:31:41 +0100 Subject: [SciPy-dev] swig/python detected a memory leak of type 'void *', no destructor found Message-ID: Hi, Running scipy.test(1) using r2513 results in in (0, 7) 0.0 (0, 2) 1.0 (0, 1) 2.0 (1, 5) 3.0 (1, 4) 4.0 out (0, 1) 2.0 (0, 2) 1.0 (0, 7) 0.0 (1, 4) 4.0 (1, 5) 3.0 ............Use minimum degree ordering on A'+A. ......................Use minimum degree ordering on A'+A. ....................................................................................................................................Residual: 1.05006987327e-07 .......................swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. 
swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. .swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. .Use minimum degree ordering on A'+A. .swig/python detected a memory leak of type 'void *', no destructor found. swig/python detected a memory leak of type 'void *', no destructor found. .Use minimum degree ordering on A'+A. .....................................................................................................................................................................................................................................................................................................................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 1.61487962772e-08 ...Result may be inaccurate, approximate err = 1.25504040833e-10 ................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ................................ 
---------------------------------------------------------------------- Ran 1610 tests in 7.443s OK Nils From robert.kern at gmail.com Tue Jan 9 14:20:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Jan 2007 13:20:15 -0600 Subject: [SciPy-dev] swig/python detected a memory leak of type 'void *', no destructor found In-Reply-To: References: Message-ID: <45A3EAEF.5020606@gmail.com> Nils Wagner wrote: > Hi, > > Running scipy.test(1) using r2513 results in You know the drill. Try to find which test is failing, please, using scipy.test(1, 10) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Jan 9 14:28:18 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 09 Jan 2007 20:28:18 +0100 Subject: [SciPy-dev] swig/python detected a memory leak of type 'void *', no destructor found In-Reply-To: <45A3EAEF.5020606@gmail.com> References: <45A3EAEF.5020606@gmail.com> Message-ID: On Tue, 09 Jan 2007 13:20:15 -0600 Robert Kern wrote: > Nils Wagner wrote: >> Hi, >> >> Running scipy.test(1) using r2513 results in > > You know the drill. Try to find which test is failing, >please, using > > scipy.test(1, 10) > > -- > Robert Kern > > "I have come to believe that the whole world is an >enigma, a harmless enigma > that is made terrible by our own mad attempt to >interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev O.K. here is the critical part of the output check_djbfft (scipy.fftpack.tests.test_basic.test_rfft) ... ok Getting factors of complex matrixswig/python detected a memory leak of type 'void *', no destructor found. 
swig/python detected a memory leak of type 'void *', no destructor found.
... ok
Getting factors of real matrixswig/python detected a memory leak of type 'void *', no destructor found.
swig/python detected a memory leak of type 'void *', no destructor found.
... ok
Solve with UMFPACK: double precision complexswig/python detected a memory leak of type 'void *', no destructor found.
swig/python detected a memory leak of type 'void *', no destructor found.
... ok
Solve: single precision complexUse minimum degree ordering on A'+A.
... ok
Solve with UMFPACK: double precisionswig/python detected a memory leak of type 'void *', no destructor found.
swig/python detected a memory leak of type 'void *', no destructor found.
... ok

I am using UMFPACKv4.4 (SuiteSparse UMFPACK 5.02 doesn't work for me)

>>> scipy.show_config()
amd_info:
    libraries = ['amd']
    library_dirs = ['/home/nwagner/src/UMFPACKv4.4/AMD/Lib']
    define_macros = [('SCIPY_AMD_H', None)]
    swig_opts = ['-I/home/nwagner/src/UMFPACKv4.4/AMD/Include']
    include_dirs = ['/home/nwagner/src/UMFPACKv4.4/AMD/Include']

umfpack_info:
    libraries = ['umfpack', 'amd']
    library_dirs = ['/home/nwagner/src/UMFPACKv4.4/UMFPACK/Lib', '/home/nwagner/src/UMFPACKv4.4/AMD/Lib']
    define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)]
    swig_opts = ['-I/home/nwagner/src/UMFPACKv4.4/UMFPACK/Include', '-I/home/nwagner/src/UMFPACKv4.4/AMD/Include']
    include_dirs = ['/home/nwagner/src/UMFPACKv4.4/UMFPACK/Include', '/home/nwagner/src/UMFPACKv4.4/AMD/Include']

Should I use valgrind to get further information? If yes, how do I use
valgrind in that case?

Nils

From jtravs at gmail.com  Tue Jan  9 17:57:31 2007
From: jtravs at gmail.com (John Travers)
Date: Tue, 9 Jan 2007 22:57:31 +0000
Subject: [SciPy-dev] fitpack tests
Message-ID: <3a1077e70701091457g44e240d3k7053751b632777b9@mail.gmail.com>

I'm in the process of sorting out the tests for fitpack as part of my
ongoing attempt to split out a relatively clean spline module - see, I
am slowly working on it :-).

At the moment only fitpack2.py has a unit test suite. fitpack.py has a
number of test functions at the bottom (if it is run as a script). I'm
planning on moving these into a unit test suite (if it makes sense -
I'm not sure it does at the moment), and then adding extra tests from
the original Fortran library. I'll then use these tests to check that
my new code doesn't break anything...

BUT: At the moment tests 1 and 2 don't pass on the original code (in
addition, I had to remove a keyword from the splrep function calls to
get them to run at all; the attached patch does this - which I'll
commit to svn soon).
Tests 3-5 run, but I still need to verify the output.

Tests 1 and 2 fail with ier=10 returned from fitpack, which indicates
invalid input data. I'm wondering if anybody with the historical
knowledge/wisdom of this module knows a) if I'm doing something
stupid, or b) if not, what may be wrong?

Thanks for any help,
John

From cimrman3 at ntc.zcu.cz  Wed Jan 10 04:46:54 2007
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 10 Jan 2007 10:46:54 +0100
Subject: [SciPy-dev] sparsetools - new and (hopefully) improved!
In-Reply-To: 
References: <459E2BFA.80505@ntc.zcu.cz> <459E6451.6040600@ntc.zcu.cz>
	<45A26208.2050801@ntc.zcu.cz> <45A3A107.9070205@ntc.zcu.cz>
	<45A3B757.6090607@ntc.zcu.cz>
Message-ID: <45A4B60E.1070208@ntc.zcu.cz>

Nathan Bell wrote:
> Well, for an NxM matrix with R nonzeros per row, the old method is
> O( N*R + N + M)
> while row-by-row sorting is
> O(N * R log R + M)
> so asymptotically the old method wins (for sufficiently large R).
> However, even for R=1000 my testing shows the latter to still be
> noticeably faster (see below). For the common case (R < 100), the row
> by row sort is ~5-8x faster. While these tests aren't comprehensive,
> I expect the row by row sort to maintain an advantage in all
> (practical) cases of interest.

So it's ok. I have just looked at
http://en.wikipedia.org/wiki/Category:Sort_algorithms, there are some
non-comparison sort algorithms with just O(R) complexity, usually
needing a temp array of the size of the range of values (which is
already there); moreover, the values to be sorted never repeat -> we
might be able to do even better. But I am happy for now. The main
advantage is the halved memory usage -> we can see it on the 'sys'
part of the times below.
> from scipy import * > N = 10000 > > #R = 1000 > A = sparse.spdiags(rand(1000,N),arange(1000)-500,N,N) > > In [5]: time B = A.tocsr().tocsc() > CPU times: user 2.12 s, sys: 0.84 s, total: 2.96 s > Wall time: 2.98 > In [6]: time A.ensure_sorted_indices(True) > CPU times: user 0.89 s, sys: 0.00 s, total: 0.89 s > Wall time: 0.89 From cimrman3 at ntc.zcu.cz Wed Jan 10 04:49:26 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 10:49:26 +0100 Subject: [SciPy-dev] sparsetools_wrap.cxx In-Reply-To: <3db594f70701091021w4d0a45b3w3bfed626734ea935@mail.gmail.com> References: <3db594f70701090840w31fbc996k1432ba7c97e57549@mail.gmail.com> <3db594f70701091021w4d0a45b3w3bfed626734ea935@mail.gmail.com> Message-ID: <45A4B6A6.5090002@ntc.zcu.cz> Jeremy Conlin wrote: > On 1/9/07, Nathan Bell wrote: >> On 1/9/07, Nathan Bell wrote: >>> On 1/9/07, Jeremy Conlin wrote: >>>> I just checked out the latest version of scipy (revision 2512). When >>>> compiling I get the following errors: >>> Try 2513, it should fix your problem. >> When generating the SWIG wrappers for sparsetools, be sure to use the >> most recent version of SWIG (i.e. the one currently in svn). >> >> > Thanks for the quick update. Now it compiles and installs. And sorry, it was my fault! r. From jeff at taupro.com Wed Jan 10 06:08:49 2007 From: jeff at taupro.com (Jeff Rush) Date: Wed, 10 Jan 2007 05:08:49 -0600 Subject: [SciPy-dev] Reminder: Early Bird Registration for PyCon Ending Soon Message-ID: <45A4C941.7080903@taupro.com> Greetings. As the co-chair for the upcoming Python conference, being held in Dallas (Addison) Texas, I want to remind folk to register before early bird registration prices end. The event is the fifth international Python Conference, being held Feb 23-25, 2007 at the Marriott-Quorum in Addison, with early-bird registration ending **Jan 15**. 
The conference draws approximately 400-500 attendees from diverse backgrounds such as scientists from national and medical labs, college/K-12 educators, web engineers and the myriad of IT developers and programming hobbyists. Those new to the Python language are welcome, and we're offering a half-day "Python 101" tutorial on the day before the conference, Thursday Feb 22 to help you get up to speed and better enjoy the rest of the conference. Some of the really cool talks are: - Topographica: Python used for Computational Neuroscience - Python and wxPython for Experimental Economics - Interactive Parallel and Distributed Computing with IPython - Understanding and Using NumPy - IPython: getting the most out of working interactively in Python - Accessing and serving scientific datasets with Python - Galaxy: A Python based web framework for comparative genomics - PyDX: mathematics is code - Visual Python in a Computational Physics Course - Sony Pictures Imageworks Being run by the Python community as a non-profit event, the conference strives to be inexpensive, with registration being only $260 (or $195 if you register prior to Jan 15th), with a further discount for students. On the day before the conference we are running a full day of classroom tutorials (extra charge per class) and then after the conference is a free four-days of sprints, which are informal gatherings of programmers to work together in coding on various projects. Sprints are excellent opportunities to do agile pair-programming side-by-side with experienced programmers and make new friends. Other activities are lightning talks, which are 5-minute presentations to show off a cool technology or spread the word about a project, open space talks, which are spontaneous gatherings around a topic and, new this year, a Python Lab where experienced and novice programmers will work together to solve challenging problems and then present their solutions. 
The conference is also running four keynote talks by leaders in the programming field, with a special focus on education this year: "The Power of Dangerous Ideas: Python and One Laptop per Child" by Ivan Krstic, senior member of the One Laptop per Child project "Premise: eLearning does not Belong in Public Schools" by Adele Goldberg, of SmallTalk fame "Python 3000" by Guido van Rossum, creator of Python "The Importance of Programming Literacy" by Robert M. "r0ml" Lefkowitz, a frequent speaker at O'Reilly conferences I believe you will find the conference educational and enjoyable. More information about the conference along with the full schedule of presentations with abstracts, is available online: http://us.pycon.org/ Thanks for any help you can give in spreading the word, Jeff Rush Co-Chair PyCon 2007 From cimrman3 at ntc.zcu.cz Wed Jan 10 07:24:54 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 13:24:54 +0100 Subject: [SciPy-dev] new sparsetools + umfpack Message-ID: <45A4DB16.3070102@ntc.zcu.cz> New sparsetools implementation does not require CSR/CSC column/row indices to be sorted in ascending order, while UMFPACK does. Currently, every call to umfpack module (e.g. via linsolve.solve) implicitly calls ensure_sorted_indices(inplace=True) method of CSR/CSC matrix, which is rather fast, but in many cases is not necessary and the overhead can become significant when solving lots of linear systems (e.g. FE-discretized evolutionary nonlinear PDEs) - in this case, only one call to ensure_sorted_indices() is necessary, then only values change, not the structure. Therefore, if there are no objections, I would like to remove the implicit call to ensure_sorted_indices() from umfpack and require user to call it explicitly (only if necessary) - an exception will be raised by umfpack if the matrix is not in correct form, so it cannot lead to hidden bugs. I will document this in linsolve, of course. r. 
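For readers following the sorted-indices discussion in this thread: the CSR invariant that UMFPACK requires can be sketched in a few lines of plain Python. This is an illustrative toy, not SciPy's compiled sparsetools implementation; the helper names `has_sorted_indices` and `sort_row_indices` are invented for the example.

```python
# Toy model of the CSR invariant under discussion: within each row r, the
# column indices stored between indptr[r] and indptr[r+1] must ascend.
# Hypothetical helpers -- not SciPy's actual ensure_sorted_indices().

def has_sorted_indices(indptr, indices):
    """Return True if every row's column indices are in ascending order."""
    for row in range(len(indptr) - 1):
        start, end = indptr[row], indptr[row + 1]
        for k in range(start, end - 1):
            if indices[k] > indices[k + 1]:
                return False
    return True

def sort_row_indices(indptr, indices, data):
    """Sort each row's (column, value) pairs by column index, in place.

    This is the O(N * R log R) row-by-row approach from the thread,
    applied independently to each row's slice of the arrays.
    """
    for row in range(len(indptr) - 1):
        start, end = indptr[row], indptr[row + 1]
        order = sorted(range(start, end), key=lambda k: indices[k])
        indices[start:end] = [indices[k] for k in order]
        data[start:end] = [data[k] for k in order]

# A 2x4 matrix in CSR form whose first row has unsorted columns.
indptr = [0, 3, 5]
indices = [2, 0, 1, 3, 1]
data = [1.0, 2.0, 3.0, 4.0, 5.0]

sort_row_indices(indptr, indices, data)
# indices is now [0, 1, 2, 1, 3], data is [2.0, 3.0, 1.0, 5.0, 4.0]
```

In SciPy itself this pass runs in compiled code; Robert's proposal above is only about *when* it runs, not how.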
From oliphant at ee.byu.edu Wed Jan 10 07:48:34 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 05:48:34 -0700 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4DB16.3070102@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> Message-ID: <45A4E0A2.9020908@ee.byu.edu> Robert Cimrman wrote: > New sparsetools implementation does not require CSR/CSC column/row > indices to be sorted in ascending order, while UMFPACK does. Currently, > every call to umfpack module (e.g. via linsolve.solve) implicitly calls > ensure_sorted_indices(inplace=True) method of CSR/CSC matrix, which is > rather fast, but in many cases is not necessary and the overhead can > become significant when solving lots of linear systems (e.g. > FE-discretized evolutionary nonlinear PDEs) - in this case, only one > call to ensure_sorted_indices() is necessary, then only values change, > not the structure. > > Therefore, if there are no objections, I would like to remove the > implicit call to ensure_sorted_indices() from umfpack and require user > to call it explicitly (only if necessary) - an exception will be raised > by umfpack if the matrix is not in correct form, so it cannot lead to > hidden bugs. I will document this in linsolve, of course. > Perhaps it would be useful to put a flag on sparse matrices (sorted_indices) or something like that which is true if the indices are sorted and updated when operations that could change it are encountered. We place flags on nd arrays for exactly this purpose (certain algorithms require properties of the arrays which are not always guaranteed but which we don't want to force nor explicitly check all the time). 
-Travis From nwagner at iam.uni-stuttgart.de Wed Jan 10 07:46:33 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 10 Jan 2007 13:46:33 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4DB16.3070102@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> Message-ID: <45A4E029.7030002@iam.uni-stuttgart.de> Robert Cimrman wrote: > New sparsetools implementation does not require CSR/CSC column/row > indices to be sorted in ascending order, while UMFPACK does. Currently, > every call to umfpack module (e.g. via linsolve.solve) implicitly calls > ensure_sorted_indices(inplace=True) method of CSR/CSC matrix, which is > rather fast, but in many cases is not necessary and the overhead can > become significant when solving lots of linear systems (e.g. > FE-discretized evolutionary nonlinear PDEs) - in this case, only one > call to ensure_sorted_indices() is necessary, then only values change, > not the structure. > > Therefore, if there are no objections, I would like to remove the > implicit call to ensure_sorted_indices() from umfpack and require user > to call it explicitly (only if necessary) - an exception will be raised > by umfpack if the matrix is not in correct form, so it cannot lead to > hidden bugs. I will document this in linsolve, of course. > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Just curious. Is the umfpack directory in the sandbox obsolete ? Nils From cimrman3 at ntc.zcu.cz Wed Jan 10 08:12:02 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 14:12:02 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4E029.7030002@iam.uni-stuttgart.de> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4E029.7030002@iam.uni-stuttgart.de> Message-ID: <45A4E622.50401@ntc.zcu.cz> Nils Wagner wrote: > Just curious. Is the umfpack directory in the sandbox obsolete ? 
Yes, I have just removed it. Because of my SVN-clumsiness, I have also committed my sandbox/setup.py and sandbox/xplt/setup.py with some local changes. I am trying to revert it, but cannot connect to SVN right now. r. From cimrman3 at ntc.zcu.cz Wed Jan 10 08:16:25 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 14:16:25 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4E622.50401@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4E029.7030002@iam.uni-stuttgart.de> <45A4E622.50401@ntc.zcu.cz> Message-ID: <45A4E729.7010606@ntc.zcu.cz> Robert Cimrman wrote: > Nils Wagner wrote: >> Just curious. Is the umfpack directory in the sandbox obsolete ? > > Yes, I have just removed it. Because of my SVN-clumsiness, I have also > committed my sandbox/setup.py and sandbox/xplt/setup.py with some local > changes. I am trying to revert it, but cannot connect to SVN right now. done. From wnbell at gmail.com Wed Jan 10 08:35:13 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 10 Jan 2007 07:35:13 -0600 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4DB16.3070102@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> Message-ID: On 1/10/07, Robert Cimrman wrote: > New sparsetools implementation does not require CSR/CSC column/row > indices to be sorted in ascending order, while UMFPACK does. Currently, > every call to umfpack module (e.g. via linsolve.solve) implicitly calls > ensure_sorted_indices(inplace=True) method of CSR/CSC matrix, which is > rather fast, but in many cases is not necessary and the overhead can > become significant when solving lots of linear systems (e.g. > FE-discretized evolutionary nonlinear PDEs) - in this case, only one > call to ensure_sorted_indices() is necessary, then only values change, > not the structure. I remain unconvinced that this is a real performance problem. 
Given the speed of ensure_sorted_indices() (it's even faster than copy()), I can't see how repeated calls would increase the total run time by more than a few %. Do you have evidence that this is a bottleneck? > Therefore, if there are no objections, I would like to remove the > implicit call to ensure_sorted_indices() from umfpack and require user > to call it explicitly (only if necessary) - an exception will be raised > by umfpack if the matrix is not in correct form, so it cannot lead to > hidden bugs. I will document this in linsolve, of course. Personally, I think it's best to hide these details from the user as much as possible. If the performance penalty is more than 10%, then perhaps it should be changed. Otherwise I think simplicity wins. Keep in mind that many SciPy users have little interest in learning the underlying implementation of things like sparse matrix formats. Informing them to use lil_matrix for construction but csr_matrix for arithmetic is fine (they can just take your word for it), but requiring that they ensure_sorted_indices() before calling umfpack is a bit too much IMO. -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Wed Jan 10 08:20:57 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 14:20:57 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4E0A2.9020908@ee.byu.edu> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4E0A2.9020908@ee.byu.edu> Message-ID: <45A4E839.7050205@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: >> New sparsetools implementation does not require CSR/CSC column/row >> indices to be sorted in ascending order, while UMFPACK does. Currently, >> every call to umfpack module (e.g. via linsolve.solve) implicitly calls >> ensure_sorted_indices(inplace=True) method of CSR/CSC matrix, which is >> rather fast, but in many cases is not necessary and the overhead can >> become significant when solving lots of linear systems (e.g.
>> FE-discretized evolutionary nonlinear PDEs) - in this case, only one >> call to ensure_sorted_indices() is necessary, then only values change, >> not the structure. >> >> Therefore, if there are no objections, I would like to remove the >> implicit call to ensure_sorted_indices() from umfpack and require user >> to call it explicitly (only if necessary) - an exception will be raised >> by umfpack if the matrix is not in correct form, so it cannot lead to >> hidden bugs. I will document this in linsolve, of course. >> > > Perhaps it would be useful to put a flag on sparse matrices > (sorted_indices) or something like that which is true if the indices are > sorted and updated when operations that could change it are encountered. > > We place flags on nd arrays for exactly this purpose (certain algorithms > require properties of the arrays which are not always guaranteed but > which we don't want to force nor explicitly check all the time). I have proposed this to Nathan (see previous discussion), but then agreed that such a flag could become easily invalid by direct user manipulation of the sparse matrix data - sparse matrices differ in this aspect from nd arrays, since people often use (and must use) the internal representation to get speed. r. From wnbell at gmail.com Wed Jan 10 08:48:24 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 10 Jan 2007 07:48:24 -0600 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4E0A2.9020908@ee.byu.edu> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4E0A2.9020908@ee.byu.edu> Message-ID: On 1/10/07, Travis Oliphant wrote: > Perhaps it would be useful to put a flag on sparse matrices > (sorted_indices) or something like that which is true if the indices are > sorted and updated when operations that could change it are encountered. In the case of sparse matrices, how would you prevent the user from manipulating the arrays directly? 
I can imagine a user writing a function that operates directly on the underlying arrays (indptr,colind/rowind,data) without invalidating the flag. -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Wed Jan 10 09:11:24 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 15:11:24 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: References: <45A4DB16.3070102@ntc.zcu.cz> Message-ID: <45A4F40C.3080500@ntc.zcu.cz> Nathan Bell wrote: > On 1/10/07, Robert Cimrman wrote: > I remain unconvinced that this is a real performance problem. Given > the speed of ensure_sorted_indices(), (it's even faster than copy()) I > can't see how repeated calls would increase the total run time by more > than a few %. Do you have evidence that this is a bottleneck? I have got my code running, so: with <29339x29339 sparse matrix of type '' with 927955 stored elements (space for 927955) in Compressed Sparse Row format> note that my matrix has sorted indices a priori, so the time spent in ensure_sorted_indices() is the minimum possible:

sfe: Stokes...
nls: iter: 0, out-of-balance: 1.812362e-02 (rel: 1.000000e+00)
->>>>>>>>>> 0.24667596817
rezidual: 0.05 [s] ... rezidual assembling
solve: 3.29 [s] ... umfpack solution
matrix: 0.31 [s] ... matrix assembling
nls: iter: 1, out-of-balance: 1.036336e-16 (rel: 5.718151e-15)
sfe: Navier-Stokes...
nls: iter: 0, out-of-balance: 1.818563e-04 (rel: 1.000000e+00)
->>>>>>>>>> 0.249101161957
rezidual: 0.08 [s]
solve: 3.27 [s]
matrix: 0.76 [s]
nls: iter: 1, out-of-balance: 4.214136e-07 (rel: 2.317289e-03)
->>>>>>>>>> 0.244355916977
rezidual: 0.10 [s]
solve: 3.29 [s]
matrix: 0.78 [s]
nls: iter: 2, out-of-balance: 2.091538e-09 (rel: 1.150105e-05)

The ensure_sorted_indices() call is denoted by '->>>>>>>>>>', so it is definitely more than 10%. Note that I have a modified version of umfpack.py, which calls ensure_sorted_indices() in symbolic() only (the call in numeric() is not needed).
> Personally, I think it's best to hide these details from the user as > much as possible. If the performance penalty is more than 10%, then > perhaps it should be changed. Otherwise I think simplicity wins. > > Keep in mind that many SciPy users have little interest in learning > the underlying implementation of things like sparse matrix formats. > Informing them to use lil_matrix for construction but csr_matrix for > arithmetic is fine (they can just take your word for it), but > requiring that they ensure_sorted_indices() before calling umfpack is > a bit too much IMO. I agree generally, but on the other hand installing umfpack itself is not trivial either - such people usually know what they are doing. The default solver code path would not be touched, of course... r. From matthew.brett at gmail.com Wed Jan 10 09:15:23 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 10 Jan 2007 14:15:23 +0000 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4F40C.3080500@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4F40C.3080500@ntc.zcu.cz> Message-ID: <1e2af89e0701100615y4ed5a089l5a34a9e31cbb76de@mail.gmail.com> On 1/10/07, Robert Cimrman wrote: > Nathan Bell wrote: > > On 1/10/07, Robert Cimrman wrote: > > I remain unconvinced that this is a real performance problem. Given > > the speed of ensure_sorted_indices(), (it's even faster than copy()) I > > can't see how repeated calls would increase the total run time by more > > than a few %. Do you have evidence that this is a bottleneck? Maybe different interfaces for sorted and (possibly) unsorted? Or is this not practical? 
Matthew From cimrman3 at ntc.zcu.cz Wed Jan 10 09:23:20 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 15:23:20 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <1e2af89e0701100615y4ed5a089l5a34a9e31cbb76de@mail.gmail.com> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4F40C.3080500@ntc.zcu.cz> <1e2af89e0701100615y4ed5a089l5a34a9e31cbb76de@mail.gmail.com> Message-ID: <45A4F6D8.1010207@ntc.zcu.cz> Matthew Brett wrote: > On 1/10/07, Robert Cimrman wrote: >> Nathan Bell wrote: >>> On 1/10/07, Robert Cimrman wrote: >>> I remain unconvinced that this is a real performance problem. Given >>> the speed of ensure_sorted_indices(), (it's even faster than copy()) I >>> can't see how repeated calls would increase the total run time by more >>> than a few %. Do you have evidence that this is a bottleneck? > > Maybe different interfaces for sorted and (possibly) unsorted? Or is > this not practical? This could be done via linsolve.use_solver() easily, good idea. r. From wnbell at gmail.com Wed Jan 10 09:31:52 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 10 Jan 2007 08:31:52 -0600 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4F40C.3080500@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4F40C.3080500@ntc.zcu.cz> Message-ID: On 1/10/07, Robert Cimrman wrote: > ->>>>>>>>>> 0.249101161957 > rezidual: 0.08 [s] > solve: 3.27 [s] > matrix: 0.76 [s] > nls: iter: 1, out-of-balance: 4.214136e-07 (rel: 2.317289e-03) > ->>>>>>>>>> 0.244355916977 > rezidual: 0.10 [s] > solve: 3.29 [s] > matrix: 0.78 [s] > nls: iter: 2, out-of-balance: 2.091538e-09 (rel: 1.150105e-05) > > ensure_sorted_indices() call is denoted by '->>>>>>>>>>' so it is > definitely more than 10%. Note that I have a modified version of > umfpack.py, which calls ensure_sorted_indices() in symbolic() only (call > in numeric() is not needed). > I may be misreading this, but isn't it 0.25/3.27 ~= 7.6%? 
> I agree generally, but on the other hand installing umfpack itself is > not trivial either - such people usually know what they are doing. > The default solver code path would not be touched, of course... That's a fair argument. I have one more suggestion though. Suppose we implemented a function check_sorted_indices() in sparsetools. This (trivial) function should be a few times faster than ensure_sorted_indices(), and perhaps it would reduce the overhead to something on the order of 1-3%. This test could even be used within ensure_sorted_indices() to possibly avoid the first sort, e.g.

void ensure_sorted_indices(){
    if(check_sorted_indices())
        return;
    else
        //do normal sort
}

Re: Matthew Brett > Maybe different interfaces for sorted and (possibly) unsorted? Or is > this not practical? That would be a reasonable place - anyone who called it with non-standard arguments would take on the responsibility of sorting themselves. Anyone who just wanted to solve a few systems could safely ignore sorting issues. -- Nathan Bell wnbell at gmail.com From oliphant at ee.byu.edu Wed Jan 10 09:43:28 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 07:43:28 -0700 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy Message-ID: <45A4FB90.9000809@ee.byu.edu> There was a lively discussion on the SciPy List before Christmas regarding establishing a standard for documentation strings for NumPy / SciPy. I am very interested in establishing such a standard. A hearty thanks goes to William Stein for encouraging the discussion. I hope it is very clear that the developers of NumPy / SciPy are quite interested in good documentation strings but recognize that producing them can be fairly tedious and un-interesting work. This is the best explanation I can come up with for the relative paucity of documentation rather than some underlying agenda *not* to produce them.
I suspect a standard has not been established largely because of all the discussions taking place within the documentation communities of epydoc, docutils, etc. and a relative unclarity on what to do about Math in docstrings. I'd like to get something done within the next few days (like placing the standard on a wiki and/or placing a HOWTO_DOCUMENT file in the distribution of NumPy). My preference is to use our own basic format for documentation with something that will translate the result into something that the epydoc package can process (like epytext or reStructuredText). The reason I'd like to use our own simple syntax is that I'm not fully happy with either epytext or reStructuredText. In general, I don't like a lot of line-noise and "formatting" extras. Unfortunately both epytext and reStructuredText seem to have their fair share of such things. Robert wrote some great documentation for a few functions (apparently following a reStructuredText format). While I liked that he did this, it showed me that I don't very much like all the line-noise needed for structured text. I've looked through a large number of documentation strings that I've written over the years and believe that the following format suffices. I would like all documentation to follow this format. This format attempts to be a combination of epytext and restructured text with additions for latex-math. The purpose is to make a docstring readable but also allowing for some structured text directives. At some point we will have a sub-routine that will translate docstrings in this format to pure epytext or pure restructured text.

"""
one-line summary not using variable names or the function name

A few sentences giving an extended description.

Inputs:
  var1 -- Explanation
  variable2 -- Explanation

Outputs: named, list, of, outputs
  named -- explanation
  list -- more detail
  of -- blah, blah.
  outputs -- even more blah

Additional Inputs:
  kwdarg1 -- A little-used input not always needed.
  kwdarg2 -- Some keyword arguments can and should be given in Inputs
    Section. This is just for "little-used" inputs.

Algorithm:
  Notes about the implementation algorithm (if needed).

  This can have multiple paragraphs as can all sections.

Notes:
  Additional notes if needed

Authors:
  name (date): notes about what was done
  name (date): major documentation updates can be included here also.

See also:
  * func1 -- any notes about the relationship
  * func2 --
  * func3 --
  (or this can be a comma separated list)
  func1, func2, func3

  (For NumPy functions, these do not need to have numpy. namespace in front of them)
  (For SciPy they don't need the scipy. namespace in front of them).
  (Use np and sp for abbreviations to numpy and scipy if you need to reference the other package).

Examples:
  examples in doctest format

Comments:
  This section should include anything that should not be displayed in a help
  or other hard-copy output. Such things as substitution-directed directives
  should go here.
"""

Additional Information:

For paragraphs, indentation is significant and indicates indentation in the output. New paragraphs are marked with a blank line.

Text-emphasis:

Use *italics*, **bold**, and `courier` if needed in any explanations (but not for variable names and doctest code or multi-line code)

Math:

Use \[...\] or $...$ for math in latex format (remember to use the raw-format for your text string in such cases). Place it in a new-paragraph for displaystyle or in-line for inline style.

References:

Use L{reference-link} for any code links (except in the see-also section). The reference-link should contain the full path-name (unless the function is in the same name-space as this one is). Use http:// for any URLs.

Lists:

  * item1
    - subitem
      + subsubitem
  * item2
  * item3

or

  1. item1
     a. subitem
        i. subsubitem1
        ii. subsubitem2
  2. item2
  3. item3

for lists.

Definitions:

  description
    This is my description for any definitions needed.
Additional Code-blocks:

  {{{
  for multi-line code-blocks that are not examples to be run
  but should be formatted as code.
  }}}

Tables:

Tables should be constructed as either:

+------------------------+------------+----------+
| Header row, column 1   | Header 2   | Header 3 |
+========================+============+==========+
| body row 1, column 1   | column 2   | column 3 |
+------------------------+------------+----------+
| body row 2             | Cells may span        |
+------------------------+-----------------------+

or

|| Header row, column 1 || Header 2 || Header 3 ||
-------------------------------------------------------
|| body row, column 1 || column 2 || column 3 ||
|| body row 2 |||| Cells may span columns ||

Footnotes:

[1] or [CITATION3] for Footnotes which are placed at the bottom of the docstring as

[1] Footnote
[CITATION3] Additional note.

Substitution:

Use |somename|{optional text} with (the next line is placed at the bottom of the docstring in the Comments: section)

.. |somename| image::myfile.png

or

.. |somename| somedirective:: {optional text}

for placing restructured text directives in the main text. Please address comments to this proposal, very soon. I'd like to finalize it within a few days. -Travis From cimrman3 at ntc.zcu.cz Wed Jan 10 09:39:43 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 15:39:43 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: References: <45A4DB16.3070102@ntc.zcu.cz> <45A4F40C.3080500@ntc.zcu.cz> Message-ID: <45A4FAAF.5040804@ntc.zcu.cz> Nathan Bell wrote: > I may be misreading this, but isn't it 0.25/3.27 ~= 7.6%? Yeah, that's what happens when one just looks and does not actually count... > Re: Matthew Brett >> Maybe different interfaces for sorted and (possibly) unsorted? Or is >> this not practical? > > That would be a reasonable place - anyone who called it with > non-standard arguments would take on the responsibility of sorting > themselves.
Anyone who just wanted to solve a few systems could > safely ignore sorting issues. IMHO everyone will be happy with this (see my answer to Matthew). I will implement it, if you and others agree. r. From wnbell at gmail.com Wed Jan 10 09:44:43 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 10 Jan 2007 08:44:43 -0600 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A4FAAF.5040804@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> <45A4F40C.3080500@ntc.zcu.cz> <45A4FAAF.5040804@ntc.zcu.cz> Message-ID: On 1/10/07, Robert Cimrman wrote: > Nathan Bell wrote: > > I may be misreading this, but isn't it 0.25/3.27 ~= 7.6%? > > Yeah, I that's when one just looks and does not actually count... Meh, it happens :) > IMHO everyone will be happy with this (see my answer to Matthew). I will > implement it, if you and others agree. Sounds good to me. -- Nathan Bell wnbell at gmail.com From oliphant at ee.byu.edu Wed Jan 10 10:32:39 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 08:32:39 -0700 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: References: <45A4DB16.3070102@ntc.zcu.cz> Message-ID: <45A50717.8080601@ee.byu.edu> Nathan Bell wrote: >On 1/10/07, Robert Cimrman wrote: > > > > >>Therefore, if there are no objections, I would like to remove the >>implicit call to ensure_sorted_indices() from umfpack and require user >>to call it explicitly (only if necessary) - an exception will be raised >>by umfpack if the matrix is not in correct form, so it cannot lead to >>hidden bugs. I will document this in linsolve, of course. >> >> > >Personally, I think it's best to hide these details from the user as >much as possible. If the performance penalty is more than 10%, then >perhaps it should be changed. Otherwise I think simplicity wins. > > This is my feeling as well. I get a little uncomfortable when the average UMFPACK user has to be aware of calling ensure_sorted_indices. That seems a little too much for me, as well. 
Having a flag on the object just means that anybody who manipulates the data directly would need to call the update-flag routine or risk later problems. This is the case with NumPy. If you manipulate the strides but don't update the flags (in C), then you can break things. This should not be onerous for somebody who is manipulating the underlying format directly, because they must understand these issues to play with things directly. I think it is better to force power users to understand that they must call the "._update_sort_flag" method if they manipulate the entries rather than regular users having to understand that they must call "ensure_sorted_indices" before calling a simple solve routine. My $.02 -Travis From cimrman3 at ntc.zcu.cz Wed Jan 10 10:50:57 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 10 Jan 2007 16:50:57 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A50717.8080601@ee.byu.edu> References: <45A4DB16.3070102@ntc.zcu.cz> <45A50717.8080601@ee.byu.edu> Message-ID: <45A50B61.3080104@ntc.zcu.cz> I have updated linsolve.use_solver():

"""
Valid keyword arguments with defaults (other ignored):
  useUmfpack = True
  assumeSortedIndices = False

The default sparse solver is umfpack when available. This can be
changed by passing useUmfpack = False, which then causes the always
present SuperLU based solver to be used.

Umfpack requires a CSR/CSC matrix to have sorted column/row indices.
If sure that the matrix fulfills this, pass assumeSortedIndices = True
to gain some speed.
"""

It was the easiest solution to allow 1) regular users not to bother about matrix indices and 2) power users to gain speed. The definition of use_solver is now use_solver( **kwargs ) (previous one was too clumsy). The same can be achieved with the new umfpack.configure() function for those using umfpack directly. cheers, r.
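The check-before-sort pattern discussed in this thread can be sketched in a few lines of pure Python. This is a toy illustration only: the ToyCSR class and its attribute layout are assumptions made up for the example, not the actual scipy sparsetools implementation; only the method names echo the thread.

```python
# Toy CSR-like container illustrating the check/ensure/flag pattern.
# Hypothetical example code -- not the scipy sparsetools implementation.

class ToyCSR:
    def __init__(self, indptr, indices, data):
        self.indptr = indptr      # row start/end offsets, len = nrows + 1
        self.indices = indices    # column indices, possibly unsorted per row
        self.data = data          # one stored value per entry
        self._sorted = None       # cached flag; None means "unknown"

    def check_sorted_indices(self):
        # O(nnz) scan -- cheaper than sorting; the result is cached.
        if self._sorted is None:
            self._sorted = all(
                all(self.indices[i] <= self.indices[i + 1]
                    for i in range(start, end - 1))
                for start, end in zip(self.indptr, self.indptr[1:]))
        return self._sorted

    def ensure_sorted_indices(self):
        if self.check_sorted_indices():
            return                # already sorted: skip the sort entirely
        for start, end in zip(self.indptr, self.indptr[1:]):
            order = sorted(range(start, end), key=self.indices.__getitem__)
            self.indices[start:end] = [self.indices[i] for i in order]
            self.data[start:end] = [self.data[i] for i in order]
        self._sorted = True
        # NB: mutating .indices directly afterwards would leave this flag
        # stale -- exactly the objection raised against an automatic flag.

m = ToyCSR([0, 2, 4], [1, 0, 3, 2], [10.0, 20.0, 30.0, 40.0])
m.ensure_sorted_indices()
print(m.indices, m.data)  # -> [0, 1, 2, 3] [20.0, 10.0, 40.0, 30.0]
```

The cached flag makes repeated solves cheap when the structure does not change, while direct manipulation of the arrays silently invalidates it — which is the trade-off debated above.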
From nwagner at iam.uni-stuttgart.de Wed Jan 10 10:54:18 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 10 Jan 2007 16:54:18 +0100 Subject: [SciPy-dev] new sparsetools + umfpack In-Reply-To: <45A50B61.3080104@ntc.zcu.cz> References: <45A4DB16.3070102@ntc.zcu.cz> <45A50717.8080601@ee.byu.edu> <45A50B61.3080104@ntc.zcu.cz> Message-ID: <45A50C2A.1000706@iam.uni-stuttgart.de> Robert Cimrman wrote: > I have updated linsolve.use_solver(): > > """ > Valid keyword arguments with defaults (other ignored): > useUmfpack = True > assumeSortedIndices = False > > The default sparse solver is umfpack when available. This can be > changed by passing useUmfpack = False, which then causes the always > present SuperLU based solver to be used. > > There is a pending bug concerning SuperLU. How about that ? http://projects.scipy.org/scipy/scipy/ticket/311 Nils > Umfpack requires a CSR/CSC matrix to have sorted column/row indices. > If sure that the matrix fulfills this, pass assumeSortedIndices = True > to gain some speed. > """ > > It was the easiest solution to allow 1) regular users not to bother > about matrix indices 2) power users to gain speed. > > The definition of use_solver is now use_solver( **kwargs ) (previous one > was too clumsy). > > The same can be achieved with new umfpack.configure() function for those > using umfpack directly. > > cheers, > r. 
> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Wed Jan 10 13:45:47 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 10 Jan 2007 11:45:47 -0700 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: On 1/10/07, Travis Oliphant wrote: > > > There was a lively discussion on the SciPy List before Christmas > regarding establishing a standard for documentation strings for NumPy / > SciPy. > > I am very interested in establishing such a standard. A hearty thanks > goes to William Stein for encouraging the discussion. I hope it is > very clear that the developers of NumPy / SciPy are quite interested in > good documentation strings but recognize that producing them can be > fairly tedious and un-interesting work. This is the best explanation I > can come up with for the relative paucity of documentation rather than > some underlying agenda *not* to produce them. I suspect a standard has > not been established largely because of all the discussions taking place > within the documentation communities of epydoc, docutils, etc. and a > relative unclarity on what to do about Math in docstrings. > > I'd like to get something done within the next few days (like placing > the standard on a wiki and/or placing a HOWTO_DOCUMENT file in the > distribution of NumPy). > > My preference is to use our own basic format for documentation with > something that will translate the result into something that the epydoc > package can process (like epytext or reStructuredText). The reason, I'd > like to use our own simple syntax, is that I'm not fully happy with > either epytext or reStructuredText. In general, I don't like a lot of > line-noise and "formatting" extras. 
Unfortuntately both epytext and > reStructuredText seem to have their fair share of such things. > > Robert wrote some great documentation for a few functions (apparently > following a reStructuredText format). While I liked that he did this, it > showed me that I don't very much like all the line-noise needed for > structured text. > > I've looked through a large number of documentation strings that I've > written over the years and believe that the following format suffices. > I would like all documentation to follow this format. > > This format attempts to be a combination of epytext and restructured > text with additions for latex-math. The purpose is to make a docstring > readable but also allowing for some structured text directives. At some > point we will have a sub-routine that will translate docstrings in this > format to pure epytext or pure restructured text. > > """ > one-line summary not using variable names or the function name > > A few sentences giving an extended description. > > Inputs: > var1 -- Explanation > variable2 -- Explanation > > Outputs: named, list, of, outputs > named -- explanation > list -- more detail > of -- blah, blah. > outputs -- even more blah > > Additional Inputs: > kwdarg1 -- A little-used input not always needed. > kwdarg2 -- Some keyword arguments can and should be given in Inputs > Section. This is just for "little-used" inputs. I've been using

  Required Arguments
    arg1 -- blah, blah

  Keyword Arguments
    kw1 -- blah, blah

  Return
    ret -- blah

Not all arguments are inputs, some are for outputs, so the word input is a bit confusing. I note that Robert has been using the word parameters instead without distinguishing keyword arguments. Anyway, if we are going to have a separate entry for keyword arguments, I would rather put it right after the required arguments. I've also been single quoting variable names in the text but am not happy about that. Should we have some way to mark variable names?
Algorithm: > Notes about the implemenation algorithm (if needed). > > This can have multiple paragraphs as can all sections. > > Notes: > Additional notes if needed > > Authors: > name (date): notes about what was done > name (date): major documentation updates can be included here also. > > See also: > * func1 -- any notes about the relationship > * func2 -- > * func3 -- > (or this can be a comma separated list) > func1, func2, func3 > > (For NumPy functions, these do not need to have numpy. namespace in > front of them) > (For SciPy they don't need the scipy. namespace in front of them). > (Use np and sp for abbreviations to numpy and scipy if you need to > reference > the other package). > > Examples: > examples in doctest format > > Comments: > This section should include anything that should not be displayed in > a help > or other hard-copy output. Such things as substitution-directed > directives > should go here. > """ > > Additional Information: > > For paragraphs, indentation is significant and indicates indentation in > the output. New paragraphs are marked with blank line. > > Text-emphasis: > > Use *italics*, **bold**, and `courier` if needed in any explanations > (but not for variable names and doctest code or multi-line code) > > Math: > > Use \[...\] or $...$ for math in latex format (remember to use the > raw-format for your text string in such cases). Place it in a > new-paragraph for displaystyle or in-line for inline style. Why is a raw format string required? The $...$ should work inline, and for out of line one could use $[ $] or maybe $$...$$. If the whole is going to be translated anyway, we are free to choose our own markup. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Wed Jan 10 13:54:06 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 11:54:06 -0700 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45A5364E.7020300@ee.byu.edu> > > Why is a raw format string required? The $...$ should work inline, and > for out of line one could use $[ $] or maybe $$...$$. If the whole is > going to be translated anyway, we are free to choose our own markup. You are right, it's not required unless you are using things like \alpha between the $ $ (which I do frequently enough that I just make it a habit of using raw strings when there is latex). -Travis From oliphant at ee.byu.edu Wed Jan 10 13:59:37 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 11:59:37 -0700 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45A53799.4040708@ee.byu.edu> > > I've been using > > Required Arguments > arg1 -- blah, blah > > Keyword Arguments > kw1 -- blah, blah > > Return > ret -- blah > > > Not all arguments are inputs, some are for outputs, so the word input > is a bit confusing. I note that Robert has been using word parameters > instead without distinquishing keyword arguments. Anyway, if we are > going to have a separate entry for keyword arguments, I would rather > put it right after the required arguments. I've also been single > quoting variable names in the text but am not happy about that. Should > we have someway to mark variable names? True, not all arguments are for inputs. So, put them in the output section. That's why they are called inputs and outputs. It's also why I want to label the outputs section with the name you are giving to what actually is returned by the function (rather than through a keyword argument which would be in the output section but labeled differently).
If the function returns a tuple, then this should be noted using a comma-separated list of names that are then explained. I personally dislike marking variable names and don't like to read documentation with variable names that are cluttered with marks. That's one of the things that bothers me the most about reST. From tim.leslie at gmail.com Thu Jan 11 00:15:37 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Thu, 11 Jan 2007 16:15:37 +1100 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices Message-ID: Hi All, I've been cleaning up some of the sparse.py code, and I noticed that there's a lot of duplicated code between csr_matrix and csc_matrix. I was able to resolve quite a lot of this into a single parent class _cs_matrix which I checked in yesterday. I've made some further changes, but they involve possible changes to the interface, so I'd like to get some opinions before committing them. The rowind/colind attributes in the respective classes play equivalent roles. By changing their names to be the same (say 'indices'), the two classes could be further reconciled into a single base class. The problem is that changing these names would break the current interface. This could be un-broken by using __getattr__/__setattr__ to trap all calls to rowind/colind and pass them on to 'indices'. So, the question is should we a) make no change, b) make the change and change the interface or c) make the change but keep the old interface. I'm personally in favour of c), but I'd like to hear what other people have to say. Cheers, Tim From wnbell at gmail.com Thu Jan 11 01:02:42 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 11 Jan 2007 00:02:42 -0600 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: References: Message-ID: On 1/10/07, Tim Leslie wrote: > The problem is that changing these names would break the current > interface.
This could be un-broken by using __getattr__/__setattr__ to > trap all calls to rowind/colind and pass them on to 'indices'. > > So, the question is should we a) make no change, b) make the change > and change the interface or c) make the change but keep the old > interface. I'm personally in favour or c), but I'd like to hear what > other people have to say. Option C is fine with me. Should a deprecation warning be printed if rowind/colind is used? Also, in your current code you have things like:

487 def __add__(self, other, self_ind, other_ind, fn, cls):

493     elif isspmatrix(other):
494         other = other.tocsc()
495         if (other.shape != self.shape):
496             raise ValueError, "inconsistent shapes"
497         if other_ind:
498             other = other.tocsc()
499             other_ind = other.rowind
500         else:
501             other = other.tocsr()
502             other_ind = other.colind

With the change to .indices, a somewhat better/more efficient approach would be:

487 def __add__(self, other, self_ind, other_ind, fn, cls):

493     elif isspmatrix(other):
494         other = cls(other)
495         if (other.shape != self.shape):
496             raise ValueError, "inconsistent shapes"

The constructor of cls should do the proper conversion (if necessary) for you. With this, I believe self_ind and other_ind become unnecessary. Thanks for refactoring those classes, maintaining them separately was a bit tedious :) -- Nathan Bell wnbell at gmail.com From tim.leslie at gmail.com Thu Jan 11 01:06:14 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Thu, 11 Jan 2007 17:06:14 +1100 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: References: Message-ID: On 1/11/07, Nathan Bell wrote: > On 1/10/07, Tim Leslie wrote: > > The problem is that changing these names would break the current > > interface. This could be un-broken by using __getattr__/__setattr__ to > trap all calls to rowind/colind and pass them on to 'indices'.
> > > > So, the question is should we a) make no change, b) make the change > > and change the interface or c) make the change but keep the old > > interface. I'm personally in favour or c), but I'd like to hear what > > other people have to say. > > Option C is fine with me. Should deprecation warning be printed if > rowind/colind is used? > That's definitely a possible option. What do other people think? > > > Also, in your current code you have things like: > 487 def __add__(self, other, self_ind, other_ind, fn, cls): > > 493 elif isspmatrix(other): > 494 other = other.tocsc() > 495 if (other.shape != self.shape): > 496 raise ValueError, "inconsistent shapes" > 497 if other_ind: > 498 other = other.tocsc() > 499 other_ind = other.rowind > 500 else: > 501 other = other.tocsr() > 502 other_ind = other.colind > > > With the change to .indices, a somewhat better/more efficient approach would be: > > 487 def __add__(self, other, self_ind, other_ind, fn, cls): > > 493 elif isspmatrix(other): > 494 other = cls(other) > 495 if (other.shape != self.shape): > 496 raise ValueError, "inconsistent shapes" > > The constructor of cls should do the proper conversion (if necessary) > for you. With this, I believe self_ind and other_ind become > unnecessary. > Yep, those and many other changes are sitting here in my patch waiting to be applied. 
> Thanks for refactoring those classes, maintaining them separately was > a bit tedious :) > No worries :-) Tim > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From edschofield at gmail.com Thu Jan 11 04:07:20 2007 From: edschofield at gmail.com (Ed Schofield) Date: Thu, 11 Jan 2007 10:07:20 +0100 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: References: Message-ID: <1b5a37350701110107q6e813443x9773441ee0cfc661@mail.gmail.com> On 1/11/07, Tim Leslie wrote: > On 1/11/07, Nathan Bell wrote: > > On 1/10/07, Tim Leslie wrote: > > > The problem is that changing these names would break the current > > > interface. This could be un-broken by using __getattr__/__setattr__ to > > > trap all calls to rowind/colind and pass them on to 'indices'. > > > > > > So, the question is should we a) make no change, b) make the change > > > and change the interface or c) make the change but keep the old > > > interface. I'm personally in favour or c), but I'd like to hear what > > > other people have to say. > > > > Option C is fine with me. Should deprecation warning be printed if > > rowind/colind is used? > > > > That's definitely a possible option. What do other people think? Yeah, well done. All that duplicate code is painful to maintain, and there have sometimes been bugs fixed in one of the two classes but forgotten in the other. I agree we should start with option (c), but I think we should view the rowind and colind attributes as internals anyway, not as part of the interface. Ideally, we should keep adding more high-level methods so that accessing rowind or colind outside the sparse module is rarely necessary. 
-- Ed From cimrman3 at ntc.zcu.cz Thu Jan 11 04:48:30 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 11 Jan 2007 10:48:30 +0100 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: <1b5a37350701110107q6e813443x9773441ee0cfc661@mail.gmail.com> References: <1b5a37350701110107q6e813443x9773441ee0cfc661@mail.gmail.com> Message-ID: <45A607EE.5020300@ntc.zcu.cz> Ed Schofield wrote: > On 1/11/07, Tim Leslie wrote: >> On 1/11/07, Nathan Bell wrote: >>> On 1/10/07, Tim Leslie wrote: >>>> The problem is that changing these names would break the current >>>> interface. This could be un-broken by using __getattr__/__setattr__ to >>>> trap all calls to rowind/colind and pass them on to 'indices'. >>>> >>>> So, the question is should we a) make no change, b) make the change >>>> and change the interface or c) make the change but keep the old >>>> interface. I'm personally in favour or c), but I'd like to hear what >>>> other people have to say. >>> Option C is fine with me. Should deprecation warning be printed if >>> rowind/colind is used? >>> >> That's definitely a possible option. What do other people think? > > Yeah, well done. All that duplicate code is painful to maintain, and > there have sometimes been bugs fixed in one of the two classes but > forgotten in the other. I agree we should start with option (c), but I > think we should view the rowind and colind attributes as internals > anyway, not as part of the interface. Ideally, we should keep adding > more high-level methods so that accessing rowind or colind outside the > sparse module is rarely necessary. Good work! The rowind/colind dichotomy was bothering me a long time, too. +1 for c). I personally use the inner data in my FE assembling code, so I would like to have methods to get them, no matter their internal names, e.g. get_data(), get_ptr(), get_indices(), maybe with '_' to indicate that these are accessing 'private' data. r. 
From nwagner at iam.uni-stuttgart.de Thu Jan 11 04:52:16 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 11 Jan 2007 10:52:16 +0100 Subject: [SciPy-dev] Inconsistent behavior in optimize wrt the type of the initial guess (array versus matrix) Message-ID: <45A608D0.9020506@iam.uni-stuttgart.de> Hi, I have observed some inconsistent behavior of the optimization routines wrt to the type of the initial guess. I mean matrix versus array. For example optimize.fmin_ncg works with a matrix input while fmin_bfgs segfaults. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912509653888 (LWP 30417)] dotblas_matrixproduct (dummy=, args=) at _dotblas.c:233 233 Py_DECREF(ap1); (gdb) bt #0 dotblas_matrixproduct (dummy=, args=) at _dotblas.c:233 Any comments ? Nils from scipy import * def g(x): return 1./(1-cos(x)) def g_p(x): return -sin(x)/(1.-cos(x))**2 def d(x): return pow(x,2)+pow((g(x)-1.0),2) # return sqrt(x**2+(g(x)-1.0)**2) def d_p(x): return 2*x+2*(g(x)-1.0)*g_p(x) def f(x): return x+(g(x)-1.)*g_p(x) x_0 = matrix(0.3) print x_0 x_opt = optimize.fmin_cg(d,x_0) # ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() #x_opt = optimize.fmin_powell(d,x_0) # ValueError: Initial guess must be a scalar or rank-1 sequence. #x_opt = optimize.fmin_bfgs(d,x_0) # Segfaults with a matrix input #x_opt = optimize.fmin_ncg(d,x_0,d_p) # Works for me with a matrix input print x_opt From gruben at bigpond.net.au Thu Jan 11 06:39:31 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu, 11 Jan 2007 22:39:31 +1100 Subject: [SciPy-dev] scipy/numpy documentation Message-ID: <45A621F3.3000705@bigpond.net.au> Continuing the documentation topic, I just spent a few hours unsuccessfully trying to build the numarray documentation under Windows. 
It uses the Python standard documentation system which requires many dependencies under Windows (and I suspect under Linux) and the set-up is poorly documented. The main Python documentation is based on a custom LaTeX 'manual.cls' class with specially defined macros. This is run through various tools to convert images, generate html versions, etc. We have the choice of sticking with this system. I don't know what the implications of avoiding it by basing the spun-off version on a more standard LaTeX class are. For example, FiPy uses the memoir class, with which I am reasonably familiar. It is good for making books, but we wouldn't be able to take advantage of any tools which use the Python manual class with all the special Python LaTeX documentation markup. I'm guessing it's set up so that Windows compiled html (.chm) files can be easily generated in addition to the online html. This seems like a big advantage, so my tendency is to stick with this format. Other options, if we move away from the manual class, are the standard LaTeX book class or Koma-script. Which way should we go with this? If it's agreed to stick with the Python documentation system, I think I'll have to try again under Linux. Gary R. From tim.leslie at gmail.com Thu Jan 11 09:07:43 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 12 Jan 2007 01:07:43 +1100 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: <45A607EE.5020300@ntc.zcu.cz> References: <1b5a37350701110107q6e813443x9773441ee0cfc661@mail.gmail.com> <45A607EE.5020300@ntc.zcu.cz> Message-ID: On 1/11/07, Robert Cimrman wrote: > Ed Schofield wrote: > > On 1/11/07, Tim Leslie wrote: > >> On 1/11/07, Nathan Bell wrote: > >>> On 1/10/07, Tim Leslie wrote: > >>>> The problem is that changing these names would break the current > >>>> interface. This could be un-broken by using __getattr__/__setattr__ to > >>>> trap all calls to rowind/colind and pass them on to 'indices'. 
> >>>> > >>>> So, the question is should we a) make no change, b) make the change > >>>> and change the interface or c) make the change but keep the old > >>>> interface. I'm personally in favour or c), but I'd like to hear what > >>>> other people have to say. > >>> Option C is fine with me. Should deprecation warning be printed if > >>> rowind/colind is used? > >>> > >> That's definitely a possible option. What do other people think? > > > > Yeah, well done. All that duplicate code is painful to maintain, and > > there have sometimes been bugs fixed in one of the two classes but > > forgotten in the other. I agree we should start with option (c), but I > > think we should view the rowind and colind attributes as internals > > anyway, not as part of the interface. Ideally, we should keep adding > > more high-level methods so that accessing rowind or colind outside the > > sparse module is rarely necessary. > > Good work! The rowind/colind dichotomy was bothering me a long time, > too. +1 for c). > > I personally use the inner data in my FE assembling code, so I would > like to have methods to get them, no matter their internal names, e.g. > get_data(), get_ptr(), get_indices(), maybe with '_' to indicate that > these are accessing 'private' data. > OK, it seems like a lot of positive feedback for option c), so I'll clean up my patch to do that and check it in in the next 24 hours and we can work out the next evolution of the interface from there. Cheers, Tim > r. 
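Nathan's cls(other) idea — letting each concrete class's constructor normalise the operand's storage format, so binary operations never special-case it — can be sketched with toy classes (illustrative only, not the real scipy.sparse internals):

```python
class _ToyCS(object):
    """Toy compressed-sparse base class keyed by (row, col) -> value."""

    def __init__(self, entries):
        # Accept either a mapping or any other _ToyCS instance; the
        # constructor does whatever conversion is needed, so __add__
        # never has to inspect the operand's format itself.
        if isinstance(entries, _ToyCS):
            entries = entries.entries
        self.entries = dict(entries)

    def __add__(self, other):
        cls = type(self)
        other = cls(other)          # conversion handled by the constructor
        total = dict(self.entries)
        for key, val in other.entries.items():
            total[key] = total.get(key, 0) + val
        return cls(total)

class ToyCSR(_ToyCS):
    pass

class ToyCSC(_ToyCS):
    pass
```

Adding a ToyCSC to a ToyCSR then just works: the left operand's class converts the right operand, and the self_ind/other_ind bookkeeping disappears.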
> _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From tim.leslie at gmail.com Thu Jan 11 09:10:35 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 12 Jan 2007 01:10:35 +1100 Subject: [SciPy-dev] Inconsistent behavior in optimize wrt the type of the initial guess (array versus matrix) In-Reply-To: <45A608D0.9020506@iam.uni-stuttgart.de> References: <45A608D0.9020506@iam.uni-stuttgart.de> Message-ID: On 1/11/07, Nils Wagner wrote: > Hi, > > I have observed some inconsistent behavior of the optimization routines > wrt to the type of the initial guess. > I mean matrix versus array. > > For example optimize.fmin_ncg works with a matrix input while fmin_bfgs > segfaults. I can confirm this segfault. Nils, would you like to open a ticket for this so it doesn't get lost? Cheers, Tim > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread 46912509653888 (LWP 30417)] > dotblas_matrixproduct (dummy=, args= optimized out>) at _dotblas.c:233 > 233 Py_DECREF(ap1); > (gdb) bt > #0 dotblas_matrixproduct (dummy=, args= optimized out>) at _dotblas.c:233 > > Any comments ? > > Nils > > from scipy import * > > def g(x): > return 1./(1-cos(x)) > > def g_p(x): > return -sin(x)/(1.-cos(x))**2 > > def d(x): > return pow(x,2)+pow((g(x)-1.0),2) > # return sqrt(x**2+(g(x)-1.0)**2) > > def d_p(x): > return 2*x+2*(g(x)-1.0)*g_p(x) > > def f(x): > return x+(g(x)-1.)*g_p(x) > > x_0 = matrix(0.3) > print x_0 > x_opt = optimize.fmin_cg(d,x_0) # ValueError: The truth value of an > array with more than one element is ambiguous. Use a.any() or a.all() > #x_opt = optimize.fmin_powell(d,x_0) # ValueError: Initial guess must be > a scalar or rank-1 sequence. 
> #x_opt = optimize.fmin_bfgs(d,x_0) # Segfaults with a matrix input > #x_opt = optimize.fmin_ncg(d,x_0,d_p) # Works for me with a matrix input > print x_opt > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From nwagner at iam.uni-stuttgart.de Thu Jan 11 09:18:03 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 11 Jan 2007 15:18:03 +0100 Subject: [SciPy-dev] Inconsistent behavior in optimize wrt the type of the initial guess (array versus matrix) In-Reply-To: References: <45A608D0.9020506@iam.uni-stuttgart.de> Message-ID: <45A6471B.7020102@iam.uni-stuttgart.de> Tim Leslie wrote: > On 1/11/07, Nils Wagner wrote: > >> Hi, >> >> I have observed some inconsistent behavior of the optimization routines >> wrt to the type of the initial guess. >> I mean matrix versus array. >> >> For example optimize.fmin_ncg works with a matrix input while fmin_bfgs >> segfaults. >> > > I can confirm this segfault. Nils, would you like to open a ticket for > this so it doesn't get lost? > > Cheers, > > Tim > > Hi Tim, I have already filed a ticket. Maybe it's the wrong place but I guess the segfault is connected with numpy/core. Is that correct ? http://projects.scipy.org/scipy/numpy/ticket/418 Cheers, Nils From tim.leslie at gmail.com Thu Jan 11 09:30:11 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 12 Jan 2007 01:30:11 +1100 Subject: [SciPy-dev] Inconsistent behavior in optimize wrt the type of the initial guess (array versus matrix) In-Reply-To: <45A6471B.7020102@iam.uni-stuttgart.de> References: <45A608D0.9020506@iam.uni-stuttgart.de> <45A6471B.7020102@iam.uni-stuttgart.de> Message-ID: On 1/12/07, Nils Wagner wrote: > Tim Leslie wrote: > > On 1/11/07, Nils Wagner wrote: > > > >> Hi, > >> > >> I have observed some inconsistent behavior of the optimization routines > >> wrt to the type of the initial guess. > >> I mean matrix versus array. 
> >> > >> For example optimize.fmin_ncg works with a matrix input while fmin_bfgs > >> segfaults. > >> > > > > I can confirm this segfault. Nils, would you like to open a ticket for > > this so it doesn't get lost? > > > > Cheers, > > > > Tim > > > > > Hi Tim, > > I have already filed a ticket. Maybe it's the wrong place but I guess > the segfault is connected with numpy/core. > Is that correct ? > http://projects.scipy.org/scipy/numpy/ticket/418 Ah ok great, I hadn't noticed that there. I would have filed it as a scipy bug, since it needs parts of scipy to trigger it, even if the bug does turn out to be in numpy, but I'm sure it doesn't really matter too much. Tim > > Cheers, > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From ravi.rajagopal at amd.com Thu Jan 11 10:46:49 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Thu, 11 Jan 2007 10:46:49 -0500 Subject: [SciPy-dev] Refactoring of csc/csr sparse matrices In-Reply-To: References: <45A607EE.5020300@ntc.zcu.cz> Message-ID: <200701111046.49567.ravi@ati.com> On Thursday 11 January 2007 9:07 am, Tim Leslie wrote: > OK, it seems like a lot of positive feedback for option c), so I'll > clean up my patch to do that and check it in in the next 24 hours and > we can work out the next evolution of the interface from there. I am probably too late (time zone differences), but I would like a deprecation warning for the old methods. I agree with Ed that the original methods shouldn't be considered as interfaces, and hence deprecating them would eventually let us remove it (fewer lines of code to maintain in the long run). I am not a heavy user of this module, though; so please tell me if I am missing something. 
Regards, Ravi From nwagner at iam.uni-stuttgart.de Mon Jan 15 11:45:27 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 15 Jan 2007 17:45:27 +0100 Subject: [SciPy-dev] L, U, P, Q, R, do_recip = umfpack.lu(B) Message-ID: <45ABAFA7.6080604@iam.uni-stuttgart.de> Hi all, I am using L, U, P, Q, R, do_recip = umfpack.lu(B) to compute the LU factors of a sparse matrix B. do_recip should be boolean but is . Am I missing something ? Any idea ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: centrosymmetric.py Type: text/x-python Size: 872 bytes Desc: not available URL: From wnbell at gmail.com Mon Jan 15 14:14:03 2007 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 15 Jan 2007 13:14:03 -0600 Subject: [SciPy-dev] L, U, P, Q, R, do_recip = umfpack.lu(B) In-Reply-To: <45ABAFA7.6080604@iam.uni-stuttgart.de> References: <45ABAFA7.6080604@iam.uni-stuttgart.de> Message-ID: On 1/15/07, Nils Wagner wrote: > Hi all, > > I am using L, U, P, Q, R, do_recip = umfpack.lu(B) to compute the LU > factors of > a sparse matrix B. > do_recip should be boolean but is . > Am I missing something ? It should be a boolean, I just didn't cast it properly. It's been fixed in SVN. -- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Tue Jan 16 04:50:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 16 Jan 2007 10:50:20 +0100 Subject: [SciPy-dev] [Fwd: [Bug 218406] g77 fails with: cannot find -lgcc_s] Message-ID: <45AC9FDC.8070502@iam.uni-stuttgart.de> This might be of interest ... Nils -------------- next part -------------- An embedded message was scrubbed... 
From: bugzilla_noreply at novell.com Subject: [Bug 218406] g77 fails with: cannot find -lgcc_s Date: Tue, 16 Jan 2007 02:47:46 -0700 (MST) Size: 2991 URL: From oliphant at ee.byu.edu Tue Jan 16 18:33:34 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 16 Jan 2007 16:33:34 -0700 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> Message-ID: <45AD60CE.6090102@ee.byu.edu> Edward Loper wrote: >[I sent this 5 days ago, but it's been held because I was not >subscribed -- so I decided to just go ahead & subscribe and resend > > >it. Apologies if it ends up being a dup.] > > I'm ccing to the users list, but the discussion has been taking place on the developers list, so I'm addressing it there. >I'm glad to hear that you're making a push towards using standardized >markup in docstrings -- I think this is a worthy goal. I wanted to >respond to a few points that have come up, though. > >First, I'd pretty strongly recommend against inventing your own >markup language. It increases the barrier for contributions, makes >life more difficult for tools, and takes up that much more brain >space that could be devoted to better things. > I'm not really convinced by this argument. I agree we shouldn't be flippant about introducing new markup and that is not what I've proposed. But, already we must learn multiple markup languages. For example, Moin Moin uses one way to describe tables and restructured text another. Basically, I've said that the existing markups do not seem geared toward mathematical description (as latex must be hacked on to them). In addition, I don't like the look of existing markup docstrings --- especially in the parameter section. That's where my biggest problem really lies. What should be clear ends up with all kinds of un-necessary back-ticks.
I also don't really like the extra ":" at the beginning of a paragraph to denote a section. I could live with the underline though. In the end, none of the markup languages seem to have been designed with the scientific user community in mind and so I'm not feeling a particular need to cram my brain into what they came up with. Basically, scipy docstrings have been developed already and they follow a sort of markup. Why should they all be changed (in essentially unnecessary ways because a computer program could be written to change them and they will look "messier" in the end) just to satisfy a particular markup language that was designed without inquiring about our needs? This is not a case of Not Invented Here. I'm very happy to use an existing markup standard. In fact, I basically like restructured Text. There are only a few issues I have with it. Ideally, we can get those issues resolved in epydoc itself. So that we can use a slightly modified form of restructured Text in the docstrings. I'm all for simplifying things, but there is a limit to how much I'm willing to give up when we can easily automate the conversions to what epydoc currently expects. >Plus, it's >surprisingly hard to do right, even if you're translating from your >markup to an existing one -- there are just too many corner cases to >consider. I know Travis has reservations about the amount of 'line >noise,' but believe me, there are good reasons why that 'line noise' >is there, and the authors of ReST have done a *very* good job at >keeping it to a minimum. > > Well, that is debatable in the specific instances of parameter descriptions. The extra back-ticks are annoying to say the least, and un-necessary. >Given the expressive power that's needed for scipy docs, I would >recommend using ReST. Epytext is a much simpler markup language, and >most likely won't be expressive enough. (e.g., it has no support for >tables.)
> >Whatever markup language you settle on, be sure to indicate it by >setting module-level __docformat__ variables, as described in PEP >258. __docformat__ should be a string containing the name of the >module's markup language. The name of the markup language may >optionally be followed by a language code (such as en for English). >Conventionally, the definition of the __docformat__ variable >immediately follows the module's docstring. E.g.: > > __docformat__ = 'restructuredtext' > >Other standard values include 'plaintext' and 'epytext'. > > SciPy is big enough that I see no reason we cannot define a slightly modified form of restructured Text (i.e. it uses MoinMoin tables, gets rid of back-ticks in parameter lists, understands math ($ $), and has specific layout for certain sections. >As for extending ReST and/or epydoc to support any specializiations >you want to make, I don't think it'll be that hard. E.g., adding >'input' and 'output' as aliases for 'parameters' and 'returns' is >pretty simple. And adding support for generating latex-math should >be pretty straight-forward. I think concerns about the markup for >marking latex-math are perhaps exaggerated, given that the *contents* >of latex-math expressions are quite likely to look like line-noise to >the uninitiated. :) I've patched my local version of docutils to >support inline math with `x=12`:math: and block math with: > >.. math:: F(x,y;w) = \langle w, \Phi(x,y) \rangle > >And I've been pretty happy with how well it reads. And for people >who aren't latex gurus, it may be more obvious what's going on if >they see :math:`..big latex expr..` than if they just see $..big >latex expr..$. > >If you really think that's too line-noise-like, then you could set >the default role to be math, so `x=12` would render as math. But >then you'd need to explicitly mark crossreferences, so I doubt that >would be a win overall. 
> >[Alan Isaac] > > > >>Must items (e.g., parameters) in a consolidated field be >>marked as interpreted text (with back ticks). >> Yes. It does seem redundant, so I will ask why. >> >> >> > >I wouldn't mind changing this to work both with & without the >backticks around parameter names. At the time when I implemented it, >I just checked what the standard practice within docutils for writing >consolidated fields was, and wrote a parser for that. > > Allowing us not to have backticks in parameter names would help me like using restructured Text quite a bit. I see no reason why parameter lists cannot be handled specially. After all, it is the most important part of a docstring. > > >>Is table support adequate in reST? >> >> >> > >See restructuredtext.html#tables> > >If ReST table support isn't expressive enough for you, then you must >be using some pretty complex tables. :) > > Moin Moin uses a different way to describe tables. :-( >[Alan Isaac] > > > >> math, so we could inline `f(x)=x^2` rather than >> :latex-math:`f(x)=x^2`. >> >> >> > >As I noted above, this would mean you'd have to explicitly mark >crossreferences to python objects with some tag -- rst can't read >your mind to know whether `foo` refers to a math expression or a >variable. > > > > > >>It may be worth asking whether >> epydoc developers would be willing to pass $f(x)=x^2$ >> as latex-math. >> >> >> > >Overall, I'm reluctant to make changes to the markup language(s) >themselves that aren't supported by the markup language's own >extension facilities. > > > That understandable reluctance is why we need to make changes to the standard for SciPy docstrings. Math support is critical and it just isn't built-in to restructured Text as well as it could be. Having to do :latex-math:` ` for in-line math is silly when $$ has been the way to define latex math for a long time. In summary, my biggest issues with just straight restructured Text are 1) back-ticks in parameter lists. 
2) the way math is included 3) doesn't understand Moin Moin tables 4) doesn't seem to have special processing for standard section headers (I may be wrong about this as I'm not an reST expert. I also don't really like the way bold, italics, and courier are handled. My favorite is now *bold* /italics/ and `fixed-width`. I like the {{{ code }}} for code blocks that Moin Moin uses, but that's not a big deal to me. I can live with the :: that restructured-Text uses. It seems like we are going to have to customize (hack) epydoc to do what we want anyway. Why then can we not "tweak" reST a little-bit too. -Travis From david.huard at gmail.com Tue Jan 16 20:29:08 2007 From: david.huard at gmail.com (David Huard) Date: Tue, 16 Jan 2007 20:29:08 -0500 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AD60CE.6090102@ee.byu.edu> References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> <45AD60CE.6090102@ee.byu.edu> Message-ID: <91cf711d0701161729q1bf45c24sec41acae7c84c595@mail.gmail.com> 2007/1/16, Travis Oliphant : > > In summary, my biggest issues with just straight restructured Text are > > 1) back-ticks in parameter lists. > 2) the way math is included > 3) doesn't understand Moin Moin tables > 4) doesn't seem to have special processing for standard section headers > (I may be wrong about this as I'm not an reST expert. > > I also don't really like the way bold, italics, and courier are > handled. My favorite is now *bold* /italics/ and `fixed-width`. > > I like the {{{ code }}} for code blocks that Moin Moin uses, but that's > not a big deal to me. I can live with the :: that restructured-Text uses. > > It seems like we are going to have to customize (hack) epydoc to do what > we want anyway. Why then can we not "tweak" reST a little-bit too. > > -Travis > I understand Travis's reluctance to introduce a markup that doesn't look as good as it could, but I'm bothered by the precedent it creates. 
I imagine that in a couple of years from now, there will be a bunch of graphical IDEs processing reST markup in the docstrings and generating a nice output. If we go along with the SciPy markup, I'm worried users will be disappointed to see that their favorite IDE cannot process SciPy and NumPy docstrings correctly. Now that we've contacted epydoc's developer, maybe it's time we discussed the issue with the docutils folks, and see what their take on this is. If they are willing to accept patches implementing solutions to some of the issues Travis mentioned, the SciPy docstring effort would serve other projects as well, be maintained over time and remain compatible with third party software. To my knowledge, there is no convention yet regarding reST docstrings, so let's use this opportunity to voice our needs. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Jan 16 20:56:11 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 16 Jan 2007 20:56:11 -0500 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AD60CE.6090102@ee.byu.edu> References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu><45AD60CE.6090102@ee.byu.edu> Message-ID: On Tue, 16 Jan 2007, Travis Oliphant apparently wrote: > In summary, my biggest issues with just straight > restructured Text are > 1) back-ticks in parameter lists. I understood Ed to say: 1) this can probably be dispensed with easily enough 2) if dispensed with, we may lose cross refs from the parameters? I assume I misunderstood (2), since this is just a matter of how definition lists are parsed in consolidated fields, and definition lists definitely do not require the back ticks. So I *think* Ed is saying both that this problem can be overcome and that he is willing to help with it.
> 2) the way math is included I understood Ed to say that for inline math we could just make LaTeX the default role, so that we write e.g., `f(x)=x^2`. Back ticks look at least as good as dollar signs, IMO. The cost is that cross refs then must be explicitly marked. How important are these considered to be for the documentation? (How did you intend to mark them?) I did not get a sense of how easy it would be to make dollar signs special (so they could be used to indicate a math role). I guess it would not be hard, but I feel pretty confident that this would NOT be welcomed as some kind of patch to reST. (But I'm just a user.) > 3) doesn't understand Moin Moin tables This seems just a matter of hacking reST, it seems to me. I hazard that the reST developers would welcome a patch to handle Moin Moin tables. In the meantime I ask, what features missing from reST tables would actually be used? > 4) doesn't seem to have special processing for standard section headers > (I may be wrong about this as I'm not an reST expert. I am not sure what you mean here. Section headers can be styled as you wish, pretty much. What kind of "processing" is desired here? > I also don't really like the way bold, italics, and > courier are handled. My favorite is now *bold* /italics/ > and `fixed-width`. This seems to me not worth haggling over. Right? Really **bold**, *italics*, and ``fixed-width`` are just as good. (Note that you could even use single back ticks for fixed width by hijacking the default role, but it seems better to save that for math?) Remember that each time you steal a character you need some way to escape it to get it back when needed: reST minimizes this. > I like the {{{ code }}} for code blocks that Moin Moin uses, but that's > not a big deal to me. I can live with the :: that restructured-Text uses. I think the reST convention is much cleaner. 
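For what it's worth, moving between the two inline-math notations is mechanical. A deliberately naive sketch of a $...$-to-:math: converter — it ignores escaped dollar signs and display math, and is only meant to show how small such a translation script would be:

```python
import re

# One inline TeX span: a dollar sign, anything that is not a dollar
# sign, and a closing dollar sign.
_DOLLAR_SPAN = re.compile(r"\$([^$]+)\$")

def dollars_to_math_role(text):
    """Rewrite TeX-style $expr$ spans as docutils :math:`expr` roles."""
    return _DOLLAR_SPAN.sub(r":math:`\1`", text)

print(dollars_to_math_role("so we could inline $f(x)=x^2$ here"))
# -> so we could inline :math:`f(x)=x^2` here
```

The same substitution run in reverse would let authors keep writing dollar signs while tools see the role markup.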
fwiw, Alan Isaac From oliphant at ee.byu.edu Wed Jan 17 13:24:31 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 11:24:31 -0700 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45AE69DF.80909@ee.byu.edu> Travis Oliphant wrote: >There was a lively discussion on the SciPy List before Christmas >regarding establishing a standard for documentation strings for NumPy / >SciPy. > > After some more lively discussion, here is my latest proposal. We will use reST for the docstrings with specific sections expected as given. The Notes and Examples sections are optional but the Examples section is strongly encouraged. """ one-line summary not using variable names or the function name A few more sentences giving an extended description :Parameters: var1 : any-type information Explanation var2 : type Explanation long_variable_name Explanation :Keywords: only_seldom_used_keywords : type Explanation any_common_keywords Should be placed in the parameters section Notes ------ Any algorithm or other notes that may be needed. Examples -------- Doctest-formatted examples """ Remaining questions: 1) I'm still unclear on how to include math --- please help. 2) I'm still unclear on what to do about see-also. I know we want to be conservative in how this is used, but we perhaps ought to give some help. 3) I don't really like using :Keywords: like I'm using it above. I would prefer another special field name that epydoc understands. Perhaps we can just use spacing in the :Parameters: section to convey the same split in the author's interpretation of the parameters.
Something like this: :Parameters: var1 : any-type information Explanation var2 : type Explanation long_variable_name Explanation only_seldom_used_keywords : type Explanation any_common_keywords Should be placed in the parameters section -Travis From robert.kern at gmail.com Thu Jan 18 00:14:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Jan 2007 23:14:20 -0600 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AE69DF.80909@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> Message-ID: <45AF022C.1020904@gmail.com> Travis Oliphant wrote: > After some more lively discussion, here is my latest proposal. We will > use reST for the docstrings with specific sections expected as given. > The Notes and Examples section are optional but the Examples section is > strongly encouraged. There's another section that we should encourage (since authors tend not to document it otherwise): Raises ------ ValueError if the input matrix is not square. LinAlgError if the input matrix is not symmetric positive definite. I don't think it needs any special parsing; it can be unstructured. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Thu Jan 18 00:44:37 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 22:44:37 -0700 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AF022C.1020904@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AF022C.1020904@gmail.com> Message-ID: <45AF0945.2040209@ee.byu.edu> Robert Kern wrote: > Travis Oliphant wrote: > >> After some more lively discussion, here is my latest proposal. We will >> use reST for the docstrings with specific sections expected as given. 
>> The Notes and Examples section are optional but the Examples section is >> strongly encouraged. >> > > There's another section that we should encourage (since authors tend not to > document it otherwise): > > > Good idea. There is a consolidated field named Exceptions that epydoc recognizes which we could also use. -Travis From aisaac at american.edu Thu Jan 18 13:43:25 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 18 Jan 2007 13:43:25 -0500 Subject: [SciPy-dev] [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <369FDD09-3A75-49F3-9CD1-0CCD143CD493@gradient.cis.upenn.edu> References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu><45AD60CE.6090102@ee.byu.edu><369FDD09-3A75-49F3-9CD1-0CCD143CD493@gradient.cis.upenn.edu> Message-ID: > Travis wrote: >> Moin Moin uses a different way to describe tables. :-( On Wed, 17 Jan 2007, Edward Loper apparently wrote: > Is there a reason you're attached to MoinMoin's syntax > instead of rst's? MoinMoin's doesn't seem particularly > more readable to me. If you're using rst for most of your > markup, why not use it for tables? Since this thread has grown rather long, I'm going to risk being a bit repetitive. 1. reST tables *are* more limited, but nobody has yet illustrated with a table that would be - needed in the docs - incompatible with the reST tables 2. It would startle me if a patch to handle MoinMoin tables would be rejected by the reST developers Cheers, Alan Isaac From fperez.net at gmail.com Thu Jan 18 14:41:30 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 18 Jan 2007 12:41:30 -0700 Subject: [SciPy-dev] A patch for weave.catalog Message-ID: <45AFCD6A.8030209@gmail.com> Hi all, does anyone object to the patch at the end? Rationale: when running using scipy in parallel on a cluster whose members share an NFS filesystem, the current code can blow up because the test file is hardcoded to dir+'/dummy', and all the processes race to manipulate the same file. 
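In isolation, the failure and the fix look like this (a standalone sketch of the approach, not the actual weave code): a fixed probe name collides across processes on a shared NFS directory, while a hostname+PID prefix plus tempfile's random suffix cannot.

```python
import os
import socket
import tempfile

def is_writable(dir):
    """Return True if `dir` is writable, without racing other processes.

    Probing with a fixed name such as os.path.join(dir, 'dummy') is unsafe
    on NFS when many processes test the same directory at once; a per-host,
    per-PID prefix (tempfile adds 6 random chars on top) avoids collisions.
    """
    prefix = 'dummy_%s_%s_' % (socket.gethostname(), os.getpid())
    try:
        tmp = tempfile.TemporaryFile(prefix=prefix, dir=dir)
    except OSError:
        return False
    # on *nix the underlying file is unlinked at creation time and
    # destroyed when the file object is closed
    tmp.close()
    return True
```

A nonexistent or read-only directory makes TemporaryFile raise OSError (IOError is an alias of it in current Pythons), so the function simply reports False.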
My patch tries to fix this using portable calls (as far as the docs say, both socket.gethostname() and os.getpid() are fully portable). If nobody objects in 24 hours or I get an explicit OK before that from a core dev, I'll be happy to commit this. I've tested it and it solves my recurrent deadlocks on my system. I suspect scipy hadn't seen too much parallel use before, so I'm sure we'll begin finding other similar little gremlins as time goes by. I'll be happy to fix the ones I can. Cheers, f ############ Index: Lib/weave/catalog.py =================================================================== --- Lib/weave/catalog.py (revision 2579) +++ Lib/weave/catalog.py (working copy) @@ -33,6 +33,7 @@ import os,sys,string import pickle +import socket import tempfile try: @@ -127,13 +128,30 @@ os.mkdir(p) def is_writable(dir): - dummy = os.path.join(dir, "dummy") + """Determine whether a given directory is writable in a portable manner. + + :Parameters: + - dir: string + A string representing a path to a directory on the filesystem. + + :Returns: + True or False. + """ + + # Do NOT use a hardcoded name here due to the danger from race conditions + # on NFS when multiple processes are accessing the same base directory in + # parallel. We use both hostname and process id for the prefix in an + # attempt to ensure that there can really be no name collisions (tempfile + # appends 6 random chars to this prefix).
+ prefix = 'dummy_%s_%s_' % (socket.gethostname(),os.getpid()) try: - open(dummy, 'w') - except IOError: - return 0 - os.unlink(dummy) - return 1 + tmp = tempfile.TemporaryFile(prefix=prefix,dir=dir) + except OSError: + return False + # The underlying file is destroyed upon closing the file object (under + # *nix, it was unlinked at creation time) + tmp.close() + return True def whoami(): """return a string identifying the user.""" From robert.kern at gmail.com Thu Jan 18 14:48:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Jan 2007 13:48:26 -0600 Subject: [SciPy-dev] A patch for weave.catalog In-Reply-To: <45AFCD6A.8030209@gmail.com> References: <45AFCD6A.8030209@gmail.com> Message-ID: <45AFCF0A.4010108@gmail.com> Fernando Perez wrote: > Hi all, > > does anyone object to the patch at the end? > > Rationale: when running using scipy in parallel on a cluster whose members > share an NFS filesystem, the current code can blow up because the test file is > hardcoded to dir+'/dummy', and all the processes race to manipulate the same > file. My patch tries to fix this using portable calls (as far as the docs > say, both socket.gethostname() and os.getpid() are fully portable). > > If nobody objects in 24 hours or I get an explicit OK before that from a core > dev, I'll be happy to commit this. I've tested it and it solves my recurrent > deadlocks on my system. Looks good to me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From fperez.net at gmail.com Thu Jan 18 14:52:24 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 18 Jan 2007 12:52:24 -0700 Subject: [SciPy-dev] A patch for weave.catalog In-Reply-To: <45AFCF0A.4010108@gmail.com> References: <45AFCD6A.8030209@gmail.com> <45AFCF0A.4010108@gmail.com> Message-ID: On 1/18/07, Robert Kern wrote: > Fernando Perez wrote: > > If nobody objects in 24 hours or I get an explicit OK before that from a core > > dev, I'll be happy to commit this. I've tested it and it solves my recurrent > > deadlocks on my system. > > Looks good to me. Thanks for the quick reply. Done as r2580. Cheers, f From Norbert.Nemec.list at gmx.de Mon Jan 22 15:18:26 2007 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Mon, 22 Jan 2007 21:18:26 +0100 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45B51C12.8010108@gmx.de> Nice! I'm wondering whether the default values of optional arguments should be mentioned in the docstrings? Travis Oliphant wrote: > There was a lively discussion on the SciPy List before Christmas > regarding establishing a standard for documentation strings for NumPy / > SciPy. > > I am very interested in establishing such a standard. A hearty thanks > goes to William Stein for encouraging the discussion. I hope it is > very clear that the developers of NumPy / SciPy are quite interested in > good documentation strings but recognize that producing them can be > fairly tedious and un-interesting work. This is the best explanation I > can come up with for the relative paucity of documentation rather than > some underlying agenda *not* to produce them. I suspect a standard has > not been established largely because of all the discussions taking place > within the documentation communities of epydoc, docutils, etc. and a > relative unclarity on what to do about Math in docstrings. 
> > I'd like to get something done within the next few days (like placing > the standard on a wiki and/or placing a HOWTO_DOCUMENT file in the > distribution of NumPy). > > My preference is to use our own basic format for documentation with > something that will translate the result into something that the epydoc > package can process (like epytext or reStructuredText). The reason, I'd > like to use our own simple syntax, is that I'm not fully happy with > either epytext or reStructuredText. In general, I don't like a lot of > line-noise and "formatting" extras. Unfortunately both epytext and > reStructuredText seem to have their fair share of such things. > > Robert wrote some great documentation for a few functions (apparently > following a reStructuredText format). While I liked that he did this, it > showed me that I don't very much like all the line-noise needed for > structured text. > > I've looked through a large number of documentation strings that I've > written over the years and believe that the following format suffices. > I would like all documentation to follow this format. > > This format attempts to be a combination of epytext and restructured > text with additions for latex-math. The purpose is to make a docstring > readable but also allowing for some structured text directives. At some > point we will have a sub-routine that will translate docstrings in this > format to pure epytext or pure restructured text. > > """ > one-line summary not using variable names or the function name > > A few sentences giving an extended description. > > Inputs: > var1 -- Explanation > variable2 -- Explanation > > Outputs: named, list, of, outputs > named -- explanation > list -- more detail > of -- blah, blah. > outputs -- even more blah > > Additional Inputs: > kwdarg1 -- A little-used input not always needed. > kwdarg2 -- Some keyword arguments can and should be given in Inputs > Section. This is just for "little-used" inputs.
> > Algorithm: > Notes about the implementation algorithm (if needed). > > This can have multiple paragraphs as can all sections. > > Notes: > Additional notes if needed > > Authors: > name (date): notes about what was done > name (date): major documentation updates can be included here also. > > See also: > * func1 -- any notes about the relationship > * func2 -- > * func3 -- > (or this can be a comma separated list) > func1, func2, func3 > > (For NumPy functions, these do not need to have numpy. namespace in > front of them) > (For SciPy they don't need the scipy. namespace in front of them). > (Use np and sp for abbreviations to numpy and scipy if you need to > reference > the other package). > > Examples: > examples in doctest format > > Comments: > This section should include anything that should not be displayed in > a help > or other hard-copy output. Such things as substitution-directed > directives > should go here. > """ > > Additional Information: > > For paragraphs, indentation is significant and indicates indentation in > the output. New paragraphs are marked with blank line. > > Text-emphasis: > > Use *italics*, **bold**, and `courier` if needed in any explanations > (but not for variable names and doctest code or multi-line code) > > Math: > > Use \[...\] or $...$ for math in latex format (remember to use the > raw-format for your text string in such cases). Place it in a > new-paragraph for displaystyle or in-line for inline style. > > > References: > > Use L{reference-link} for any code links (except in the see-also > section). The reference-link should > contain the full path-name (unless the function is in the same > name-space as this one is). > > Use http:// for any URL's > > Lists: > > * item1 > - subitem > + subsubitem > * item2 > * item3 > > or > > 1. item1 > a. subitem > i. subsubitem1 > ii. subsubitem2 > 2. item2 > 3. item3 > > for lists. > > Definitions: > > description > This is my description > for any definitions needed.
> > Additional Code-blocks: > > {{{ > for multi-line code-blocks that are not examples to be run but should be > formatted as code. > }}} > > Tables: > > Tables should be constructed as either: > > +------------------------+------------+----------+ > | Header row, column 1 | Header 2 | Header 3 | > +========================+============+==========+ > | body row 1, column 1 | column 2 | column 3 | > +------------------------+------------+----------+ > | body row 2 | Cells may span | > +------------------------+-----------------------+ > > or > > || Header row, column 1 || Header 2 || Header 3 || > ------------------------------------------------------- > || body row, column 1 || column 2 || column 3 || > || body row 2 |||| Cells may span columns || > > > Footnotes: > > [1] or [CITATION3] for Footnotes which are placed at the bottom of the > docstring as > > [1] Footnote > [CITATION3] Additional note. > > > Substitution: > > Use |somename|{optional text} with > > (the next line is placed at the bottom of the docstring in the Comments: > section) > .. |somename| image::myfile.png > > or > > .. |somename| somedirective:: > > {optional text} > > for placing restructured text directives in the main text. > > > Please address comments to this proposal, very soon. I'd like to > finalize it within a few days. > > -Travis > > > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > From robert.kern at gmail.com Mon Jan 22 15:24:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Jan 2007 14:24:24 -0600 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <45B51C12.8010108@gmx.de> References: <45A4FB90.9000809@ee.byu.edu> <45B51C12.8010108@gmx.de> Message-ID: <45B51D78.4030200@gmail.com> Norbert Nemec wrote: > Nice! > > I'm wondering whether the default values of optional arguments should be > mentioned in the docstrings?
I've suggested "Don't." as they can be found from the signature. The signature should always be available from regular Python functions and should be included in the docstring for extension functions. Anything that requires more explanation than that can be described in prose in the parameter's description. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david.huard at gmail.com Mon Jan 22 17:13:59 2007 From: david.huard at gmail.com (David Huard) Date: Mon, 22 Jan 2007 17:13:59 -0500 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <45B51D78.4030200@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45B51C12.8010108@gmx.de> <45B51D78.4030200@gmail.com> Message-ID: <91cf711d0701221413m539aa104va808a6fde83bbfd6@mail.gmail.com> I tried to run epydoc on the example described in the wiki Jarrod created last week, and there are some things that don't quite work. 1. The examples section appear before the parameters and Return values, which looks strange. I tried to fix this by adding an :example: field to epydoc, but then the output doesn't look so good. 2. The output format of return values and parameters look different in the pdf. 3. I can't get cross references to link to other functions in the pdf. This is probably dumb, but I can't find the info or any example saying how to do it. 4. Is unicode supposed to work out of the box (with the appropriate #coding comment) ? It looks supported, but it doesn't work for me, the unicode elements appear as the string codes in the pdf. 5. There seems to be a problem with mixing two types of parameter definitions. 
For example, :Parameters: var1 : type Explanation var2 : type Explanation :OtherParameters: - `r`: Compact description Returns almost the right thing, but the indentation is not quite right and the type for var2 is put on a new line. Also, it may be good practice to let the first line of the docstring be the function signature, as function wrappers may just define *args, **kwds, which is not very informative. This would have no impact on the pdf, because epydoc replaces the uninformative function signature by the one given in this first line. David 2007/1/22, Robert Kern : > > Norbert Nemec wrote: > > Nice! > > > > I'm wondering whether the default values of optional arguments should be > > mentioned in the docstrings? > > I've suggested "Don't." as they can be found from the signature. The > signature > should always be available from regular Python functions and should be > included > in the docstring for extension functions. Anything that requires more > explanation than that can be described in prose in the parameter's > description. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Mon Jan 22 17:20:43 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 22 Jan 2007 15:20:43 -0700 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <91cf711d0701221413m539aa104va808a6fde83bbfd6@mail.gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45B51C12.8010108@gmx.de> <45B51D78.4030200@gmail.com> <91cf711d0701221413m539aa104va808a6fde83bbfd6@mail.gmail.com> Message-ID: <45B538BB.3060209@ee.byu.edu> David Huard wrote: > I tried to run epydoc on the example described in the wiki Jarrod > created last week, and there are some things that don't quite work. > > 1. The examples section appear before the parameters and Return > values, which looks strange. I tried to fix this by adding an > :example: field to epydoc, but then the output doesn't look so good. > > 2. The output format of return values and parameters look different in > the pdf. Epydoc needs some changes in order to "do the right" thing with the docstrings. This is known. I think the changes are minor. -Travis From nwagner at iam.uni-stuttgart.de Tue Jan 23 04:46:13 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 23 Jan 2007 10:46:13 +0100 Subject: [SciPy-dev] sparse revisited Message-ID: <45B5D965.9020206@iam.uni-stuttgart.de> Hi, A program of my colleague is broken due to the recent changes wrt sparse. So I turned back to r2450 and it works fine. This is the result with the latest svn version Solving frequency 450.0 Traceback (most recent call last): File "main_ce_v3.py", line 49, in ? 
x = spsolve(k_dyn, fq) File "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/linsolve.py", line 86, in spsolve autoTranspose = True ) File "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", line 581, in linsolve self.numeric( mtx ) File "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", line 429, in numeric raise RuntimeError, '%s failed with %s' % (self.funs.numeric, RuntimeError: failed with UMFPACK_ERROR_out_of_memory The matrices are very large and I am not sure if I should file a ticket. Any comments how to proceed (Tim, Nathan) ? Nils THANKS.txt might need some update. I cannot find some developers e.g. Tim Leslie, Nathan Bell. From tim.leslie at gmail.com Tue Jan 23 08:10:25 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Wed, 24 Jan 2007 00:10:25 +1100 Subject: [SciPy-dev] sparse revisited In-Reply-To: <45B5D965.9020206@iam.uni-stuttgart.de> References: <45B5D965.9020206@iam.uni-stuttgart.de> Message-ID: Hi Nils, I don't have time to look into this right now, but I'll add it to my todo list for the next day or two and get back to you (hopefully with a fix). I'll also update the THANKS file so you can find my details to contact me (which you're welcome to do). Cheers, Tim On 1/23/07, Nils Wagner wrote: > Hi, > > A program of my colleague is broken due to the recent changes wrt sparse. > So I turned back to r2450 and it works fine. > > This is the result with the latest svn version > > Solving frequency 450.0 > Traceback (most recent call last): > File "main_ce_v3.py", line 49, in ? 
> x = spsolve(k_dyn, fq) > File > "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/linsolve.py", > line 86, in spsolve > autoTranspose = True ) > File > "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", > > line 581, in linsolve > self.numeric( mtx ) > File > "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", > > line 429, in numeric > raise RuntimeError, '%s failed with %s' % > (self.funs.numeric, > RuntimeError: 0x2aff5e4412a8> failed with UMFPACK_ERROR_out_of_memory > > The matrices are very large and I am not sure if I should file a ticket. > > Any comments how to proceed (Tim, Nathan) ? > > Nils > > THANKS.txt might need some update. I cannot find some developers e.g. > Tim Leslie, Nathan Bell. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From david.huard at gmail.com Tue Jan 23 14:47:13 2007 From: david.huard at gmail.com (David Huard) Date: Tue, 23 Jan 2007 14:47:13 -0500 Subject: [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: <45B538BB.3060209@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45B51C12.8010108@gmx.de> <45B51D78.4030200@gmail.com> <91cf711d0701221413m539aa104va808a6fde83bbfd6@mail.gmail.com> <45B538BB.3060209@ee.byu.edu> Message-ID: <91cf711d0701231147u49476c1fwb97df93c1a850e52@mail.gmail.com> 2007/1/22, Travis Oliphant : > > Epydoc needs some changes in order to "do the right" thing with the > docstrings. This is known. I think the changes are minor. > Here is the minor patch for the minor changes ; ). I only modified the latex backend, so some stuff will break or just not render in html. The patch solves 1 and 2. Issue 5 was caused by mixing tabs and spaces. David -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: returnandexample.patch Type: text/x-patch Size: 9104 bytes Desc: not available URL: From ravi.rajagopal at amd.com Tue Jan 23 16:37:33 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Tue, 23 Jan 2007 16:37:33 -0500 Subject: [SciPy-dev] Lapack tests too stringent? Message-ID: <200701231637.33139.ravi@ati.com> Hi, Running scipy.test(10,10) on revision 2591 causes a failure in check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float): AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769474, 9.18222523], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) The problem with this test is that the double precision version of this test passes on my machine but not the single precision version. To eliminate SciPy and ATLAS from the chain of causation, I ran the attached program with the reference BLAS and the reference LAPACK from netlib. I have similar failures on 32-bit and on 64-bit machines, both running up to date Fedora Core 6. With LAPACK 3.1.0, here are the eigenvalue results for double precision for the array from the test: -0.669924337185138 0.487693886153334 9.1822304510318 The above agrees with the reference values in the scipy test above. However, with single precision, the values are: -0.669923424720764 0.487694501876831 9.18223190307617 This does not meet the requirement for arrays being almost equal; is this because eps is dependent on double precision and is too low for single precision? To confirm that this is indeed an issue with LAPACK, I ran it on old RHEL3 32-bit machine with gcc/g77 (as opposed to gcc/gfortran on my FC6 machines) and with lapack 3.0 (up to date with all patches). In this case too, there is a difference between single precision and double precision values: Single Double -0.669924139976501 -0.669924337185137 0.487694054841995 0.487693886153333 9.18223094940186 9.1822304510318 Do others see similar numbers? 
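For comparison, the single/double gap can be reproduced from numpy alone. The sketch below uses the symmetric matrix [[1,2,3],[2,2,3],[3,3,6]] (its trace of 9 and determinant of -3 match the quoted eigenvalues, though treating it as the exact test matrix is an assumption), and the last digits will vary with the LAPACK build:

```python
import numpy as np

a = np.array([[1., 2., 3.],
              [2., 2., 3.],
              [3., 3., 6.]])

w64 = np.linalg.eigvalsh(a)                     # double precision path
w32 = np.linalg.eigvalsh(a.astype(np.float32))  # single precision path

print(w64)                      # close to [-0.66992434  0.48769389  9.18223045]
print(np.abs(w64 - w32).max())  # roughly float32 eps times the largest |eigenvalue|
```

So a comparison tolerance derived from double-precision eps is bound to fail for the float32 results, while one scaled by np.finfo(np.float32).eps passes.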
The attached program can be compiled using gcc -o sytest syevr.c -llapack -lblas -lg2c where -lg2c is not required if you are using gfortran. To switch between single and double precision tests, change the typedef at the top of the file, and the first letter of the call on line 66. In my tests, changing the value of abstol did not make any difference. I wonder why I have not encountered this before since the tests have been in SciPy for a long time according to the SVN log. Regards, Ravi -------------- next part -------------- A non-text attachment was scrubbed... Name: syevr.c Type: text/x-csrc Size: 2621 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Wed Jan 24 11:38:11 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 24 Jan 2007 17:38:11 +0100 Subject: [SciPy-dev] Callback feature for iterative solvers In-Reply-To: <1b5a37350701240833j342991c5mb0dd9f8cc124743b@mail.gmail.com> References: <1b5a37350701240635m2e99cd54j26c439ce5be6ae52@mail.gmail.com> <45B7768D.4080209@iam.uni-stuttgart.de> <1b5a37350701240833j342991c5mb0dd9f8cc124743b@mail.gmail.com> Message-ID: <45B78B73.7040405@iam.uni-stuttgart.de> Ed Schofield wrote: > > On 1/24/07, *Nils Wagner* > wrote: > > Hi Ed, > > Great ! > Thank you very much ! > It seems to work well with the exception of GMRES. > Please find attached a short test. > Am I missing something ? > > > No, I agree. It seems that there's something wrong with GMRES -- the > iter_ return parameter from gmresrevcom() is never incremented for > this example. I'm not familiar with the algorithm -- perhaps it really > does converge in one iteration? If not, it's just a bug. Could you > file a bug against gmres() in Tracker? > > -- Ed > Hi Ed, Done. Maybe you can add some comments. Again thank you very much for the new functionality !! 
http://projects.scipy.org/scipy/scipy/ticket/360 Cheers, Nils From edschofield at gmail.com Wed Jan 24 12:45:54 2007 From: edschofield at gmail.com (Ed Schofield) Date: Wed, 24 Jan 2007 18:45:54 +0100 Subject: [SciPy-dev] Callback feature for iterative solvers In-Reply-To: <45B78B73.7040405@iam.uni-stuttgart.de> References: <1b5a37350701240635m2e99cd54j26c439ce5be6ae52@mail.gmail.com> <45B7768D.4080209@iam.uni-stuttgart.de> <1b5a37350701240833j342991c5mb0dd9f8cc124743b@mail.gmail.com> <45B78B73.7040405@iam.uni-stuttgart.de> Message-ID: <1b5a37350701240945q309f2855y60ad2516bf1d959@mail.gmail.com> On 1/24/07, Nils Wagner wrote: > > Ed Schofield wrote: > > > > On 1/24/07, *Nils Wagner* > > wrote: > > > > Hi Ed, > > > > Great ! > > Thank you very much ! > > It seems to work well with the exception of GMRES. > > Please find attached a short test. > > Am I missing something ? > > > > > > No, I agree. It seems that there's something wrong with GMRES > > ... > Hi Ed, > > Done. Maybe you can add some comments. > Again thank you very much for the new functionality !! > > http://projects.scipy.org/scipy/scipy/ticket/360 Actually, I'm mystified about the meaning of iter_ in gmres() and the others. Can anyone shed any light on this? Why is the output value of iter_ from revcom() used as the *input* value in the next iteration? Why is the maxiter argument only used once, for the first call to revcom(), and then apparently ignored for all subsequent calls? I'm inclined to revert the callback patch -- it seems broken. But if the revcom Fortran functions can perform multiple iterations each, we can't easily call back Python functions each iteration. Is there a better solution? -- Ed -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Wed Jan 24 19:22:27 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 24 Jan 2007 17:22:27 -0700 Subject: [SciPy-dev] Callback feature for iterative solvers In-Reply-To: <1b5a37350701240945q309f2855y60ad2516bf1d959@mail.gmail.com> References: <1b5a37350701240635m2e99cd54j26c439ce5be6ae52@mail.gmail.com> <45B7768D.4080209@iam.uni-stuttgart.de> <1b5a37350701240833j342991c5mb0dd9f8cc124743b@mail.gmail.com> <45B78B73.7040405@iam.uni-stuttgart.de> <1b5a37350701240945q309f2855y60ad2516bf1d959@mail.gmail.com> Message-ID: <45B7F843.1000608@ee.byu.edu> Ed Schofield wrote: > > On 1/24/07, *Nils Wagner* > wrote: > > Ed Schofield wrote: > > > > On 1/24/07, *Nils Wagner* > > >> wrote: > > > > Hi Ed, > > > > Great ! > > Thank you very much ! > > It seems to work well with the exception of GMRES. > > Please find attached a short test. > > Am I missing something ? > > > > > > > No, I agree. It seems that there's something wrong with GMRES > > > ... > Hi Ed, > > Done. Maybe you can add some comments. > Again thank you very much for the new functionality !! > > http://projects.scipy.org/scipy/scipy/ticket/360 > > > Actually, I'm mystified about the meaning of iter_ in gmres() and the > others. Can anyone shed any light on this? Why is the output value of > iter_ from revcom() used as the *input* value in the next iteration? > Why is the maxiter argument only used once, for the first call to > revcom(), and then apparently ignored for all subsequent calls? The iter input is set in the FORTRAN to MAXIT only when IJOB=1 (i.e. the first time through). Every-other time the fortran code is called, it's looking for the output of either the function-call, or the function-call with gradient. > > I'm inclined to revert the callback patch -- it seems broken. But if > the revcom Fortran functions can perform multiple iterations each, we > can't easily call back Python functions each iteration. Is there a > better solution? 
Sometimes, the iteration returns looking for a matrix-times vector result. Other times it wants a right-multiplication, and other times it may be looking for a pre-conditioning result. Basically, any-time the FORTRAN code needs to call a Python function, it returns from the Fortran code. Then, the Python code can be called. I don't see why the callback function can't either be integrated into the get_matvec classes or called only when the matvec is requested. Best, -Travis From nwagner at iam.uni-stuttgart.de Fri Jan 26 12:20:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 26 Jan 2007 18:20:54 +0100 Subject: [SciPy-dev] Fwd: lapack license Message-ID: This might be of interest. Nils --- the forwarded message follows --- -------------- next part -------------- An embedded message was scrubbed... From: Patrick Alken Subject: lapack license Date: Fri, 26 Jan 2007 10:16:27 -0700 Size: 4307 URL: From a.schmolck at gmx.net Mon Jan 29 18:23:00 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: 29 Jan 2007 23:23:00 +0000 Subject: [SciPy-dev] mlabwrap and scipy -- calling matlab from python Message-ID: Hi, I'm the author of mlabwrap a very high-level python to matlab bridge that's currently hosted at . There has been talk off-list about integrating mlabwrap into scipy and I'm following up on Jarrod Millman's suggestion to move this on-list. Robert Kern has suggested that mlabwrap could be the first scikit package hosted under scikits.scipy.org and I think it would be great to have mlabwrap integrated into the greater scipy-cosmos -- whilst mlabwrap is clearly too specific to go into scipy proper, matlab is very widely used and as python is getting increasingly popular for scientific computing I think there should be considerably demand for painlessly integrating matlab and python and being somehow part of scipy should increase exposure and help maintenance. 
The only caveat is that I myself have limited resources (I'm currently writing up a thesis and am also looking for a job [1]), so I can't pour much time into helping to set up the scikits infrastructure. I have however spent a good chunk of this weekend to bring mlabwrap up to scratch and just uploaded a new (alpha) version of mlabwrap to the above sourceforge page; numpy and 64-bit machines ought to work now (Numeric is also still supported) and thanks to Matthew Brett installation should be even easier (under linux that is; I have only been able to test the new version under 32 and 64-bit linux, confirmation that windows and OS-X installs also work would be very welcome). Mlabwrap has been pretty stable so far (no major bug reports as of yet and it has been around for a few years with users on all major platforms), but unfortunately I haven't had as much time to test this release as I'd like to, hence the alpha -- the unittests pass fine but there might be unspotted memory violations etc. There are 2 changes related to 64-bit readiness that might have introduced issues: * I've changed the handle for the matlab session (lHandle) from int to a Py(C)Object * I've changed int to npy_intp at various places; I'm still effectively using ints and PyArray_FromDims in the matlab-array->ndarray direction -- as far as I can tell, The Mathworks haven't properly nailed 64-bit support yet; there are mwSize and mwIndex types analogous to npy_intp, only that the docs in most places claim they are the same as int (somewhat implausibly, since it appears that from 7.3 you need to use them for 64-bit sparse arrays: ). What I've done is that I've used mwSize and mwIndex in place of int where appropriate (and #define'd them as int if they are undefined), but cast to int (or int*) on calling PyArray_FromDims. I'm not sure what a better solution would currently be; this appears to work on 32 and 64-bit linux. 
I'd be grateful if people could give it a spin and even more for any code reviews or improvements (the code isn't very long -- about 500 non-comment lines of C++ and python each and only the C++ is problematic); valgrinding would also be very welcome (I haven't got around to it so far with the newest version). If you have experience porting Numeric code to numpy and with 64-bit issues and you'd like to use existing matlab code from python, now would be a great time to try mlabwrap out and/or have a look at the code. The sooner a shiny new mlabwrap 1.0 version ends up wherever it's meant to end up, the better. cheers, alexander schmolck Footnotes: [1] So should you happen to be currently looking for someone with scientific computing, pattern recognition and cognitive science expertise and pretty strong skills in python, matlab as well as a number of other languages... From zpincus at stanford.edu Tue Jan 30 04:02:47 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 30 Jan 2007 01:02:47 -0800 Subject: [SciPy-dev] Patch: Add 'insert' method to fitpack.py (in scipy.interpolate) Message-ID: <7D97FC89-1AC4-4727-ACEA-41FDB539DCF5@stanford.edu> Hello, In the Fortran fitpack library (aka ddierckx) that underlies scipy.interpolate, an insert.f function is provided to insert new knots into a given spline. Attached is a patch which exposes this function to python via the low-level fitpack.py interface. This patch properly handles both regular and parametric splines, and allows a 'knot multiplicity' to be specified so that a given knot can be inserted multiple times (which can be handled more efficiently in the c-level code than in python-level code). The inclusion of this function will make it easy to, among other things, convert a b-spline into a Bezier curve (useful for plotting purposes). (This can be accomplished by inserting new knots at the current knot points until each knot has a multiplicity equal to the spline degree.) 
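The knot-insertion building block described in the patch announcement above can be sketched in a few lines of python. This is only an illustration against a splrep/splev-style interface: it assumes an insert(x, tck, m) function with the signature the patch proposes (present-day scipy exposes a compatible scipy.interpolate.insert), so the exact names may differ from the code actually committed.

```python
# Sketch: knot insertion grows the knot vector without changing the
# evaluated curve -- the building block for the b-spline -> Bezier
# conversion described above.  Assumes an insert(x, tck, m=1) function
# as in the patch; scipy.interpolate.insert has this interface.
import numpy as np
from scipy.interpolate import splrep, splev, insert

x = np.linspace(0, 2 * np.pi, 10)
tck = splrep(x, np.sin(x), k=3)       # cubic spline, tck = (knots, coeffs, k)

# Insert an interior knot at x = 1.0 with multiplicity 2.
tck2 = insert(1.0, tck, m=2)

xs = np.linspace(0.1, 6.0, 50)
assert len(tck2[0]) == len(tck[0]) + 2                # knot vector grew by m
assert np.allclose(splev(xs, tck), splev(xs, tck2))   # curve is unchanged
```

Repeating such insertions at each interior knot until every multiplicity equals the spline degree yields the Bezier segments directly, as the email describes.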
I hope the patch will prove useful, Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine -------------- next part -------------- A non-text attachment was scrubbed... Name: insert.patch Type: application/octet-stream Size: 5891 bytes Desc: not available URL: From jtravs at gmail.com Tue Jan 30 07:18:36 2007 From: jtravs at gmail.com (John Travers) Date: Tue, 30 Jan 2007 12:18:36 +0000 Subject: [SciPy-dev] Patch: Add 'insert' method to fitpack.py (in scipy.interpolate) In-Reply-To: <7D97FC89-1AC4-4727-ACEA-41FDB539DCF5@stanford.edu> References: <7D97FC89-1AC4-4727-ACEA-41FDB539DCF5@stanford.edu> Message-ID: <3a1077e70701300418s4341492keb7bea08c8ce319c@mail.gmail.com> On 30/01/07, Zachary Pincus wrote: > Hello, > > In the fortran fitpack library (aka ddierckx) that underlies > scipy.interpolate, an insert.f function is provided to insert new > knots into a given spline. > > Attached is a patch which exposes this function to python via the low- > level fitpack.py interface. This patch properly handles both regular > and parametric splines, and allows a 'knot multiplicity' to be > specified so that a given knot can be inserted multiple times (which > can be handled more efficiently in the c-level code than in python- > level code). > > The inclusion of this function will make it easy to, among other > things, convert a b-spline into a Bezier curve (useful for plotting > purposes). (This can be accomplished by inserting new knots at the > current knot points until each knot has a multiplicity equal to the > spline degree.) > > I hope the patch will prove useful, Thanks for the contribution! I've committed it to subversion. I have only checked that it compiles and doesn't break anything. I'll add a test case later. 
John From nwagner at iam.uni-stuttgart.de Tue Jan 30 13:27:39 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 30 Jan 2007 19:27:39 +0100 Subject: [SciPy-dev] sparse revisited In-Reply-To: References: <45B5D965.9020206@iam.uni-stuttgart.de> Message-ID: On Wed, 24 Jan 2007 00:10:25 +1100 "Tim Leslie" wrote: > Hi Nils, > > I don't have time to look into this right now, but I'll > add it to my > todo list for the next day or two and get back to you > (hopefully with > a fix). I'll also update the THANKS file so you can find > my details to > contact me (which you're welcome to do). > > Cheers, > > Tim Hi Tim, I have applied the patch by mauger for ticket #311. It works fine. I mean our tests passed. Please can you apply the patch to svn. How about the other issues concerning sparse? I look forward to hearing from you. Best regards, Nils From wnbell at gmail.com Wed Jan 31 01:17:57 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 31 Jan 2007 00:17:57 -0600 Subject: [SciPy-dev] sparse revisited In-Reply-To: References: <45B5D965.9020206@iam.uni-stuttgart.de> Message-ID: On 1/30/07, Nils Wagner wrote: > I have applied the patch by mauger for ticket #311. > It works fine. I mean our tests passed. Please can you > apply the patch to svn. I'll give it a try tomorrow. > How about the other issues concerning sparse? > I look forward to hearing from you. Concerning your UMFPACK_ERROR_out_of_memory errors, are you sure that the factors actually fit in memory? You could try it in MATLAB (which also uses UMFPACK) to eliminate this possibility. Without more guidance it's difficult to say what the issue could be. UMFPACK should detect malformed matrices, so I doubt it's a format issue. Also, will someone who knows the implementation of lil_matrix fix http://projects.scipy.org/scipy/scipy/ticket/347 or at least roll back the recent changes (assuming they are the culprit)? 
-- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Wed Jan 31 03:59:22 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 31 Jan 2007 09:59:22 +0100 Subject: [SciPy-dev] sparse revisited In-Reply-To: References: <45B5D965.9020206@iam.uni-stuttgart.de> Message-ID: <45C05A6A.1020400@iam.uni-stuttgart.de> Nathan Bell wrote: > On 1/30/07, Nils Wagner wrote: > >> I have applied the patch by mauger for ticket #311. >> It works fine. I mean our tests passed. Please can you >> apply the patch to svn. >> > > I'll give it a try tomorrow. > > >> How about the other issues concerning sparse ? >> I look forward to hearing from you. >> > > Concerning your UMFPACK_ERROR_out_of_memory errors, are you sure that > the factors actually fit in memory? You could try it in MATLAB (which > also uses UMFPACK) to eliminate this possibility. Without more > guidance it's difficult to say what the issue could be. UMFPACK > should detect malformed matrices, so I doubt it's a format issue. > > > The problem is that I didn't get an UMFPACK_out_of_memory error with r2450. Hence my conclusion is that the recent changes to sparse are responsible for the problem. Nils P.S. Nathan, I will upload the test including the large input matrices on our webpage and send you the login data off-list. > Also, will someone who knows the implementation of lil_matrix fix > http://projects.scipy.org/scipy/scipy/ticket/347 or at least roll back > the recent changes (assuming they are the culprit)? > > > From wnbell at gmail.com Wed Jan 31 23:16:24 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 31 Jan 2007 22:16:24 -0600 Subject: [SciPy-dev] sparse revisited In-Reply-To: References: <45B5D965.9020206@iam.uni-stuttgart.de> Message-ID: On 1/30/07, Nils Wagner wrote: > I have applied the patch by mauger for ticket #311. > It works fine. I mean our tests passed. Please can you > apply the patch to svn. The patch resolved my problems as well. 
I've committed the changes to svn and closed the ticket: http://projects.scipy.org/scipy/scipy/ticket/311 -- Nathan Bell wnbell at gmail.com