From timvictor at gmail.com Tue Dec 1 01:53:16 2009
From: timvictor at gmail.com (Tim Victor)
Date: Tue, 1 Dec 2009 01:53:16 -0500
Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem
In-Reply-To: <4B139F28.8060807@ntc.zcu.cz>
References: <1cd32cbb0911251329se97c355x8ec78903c4260ce@mail.gmail.com> <4B139F28.8060807@ntc.zcu.cz>
Message-ID: 

On Mon, Nov 30, 2009 at 5:32 AM, Robert Cimrman wrote:
> Hi Tim,
>
> Tim Victor wrote:
>> On Thu, Nov 26, 2009 at 10:58 PM, Nathan Bell wrote:
>>> On Thu, Nov 26, 2009 at 7:21 PM, Tim Victor wrote:
>>>> I didn't know that I could do that! I've just created an account and
>>>> added a comment there with a link to this email thread. Thanks for the
>>>> reply.
>>>>
>>> Nice work!
>>>
>>> Your patch has been merged into r6121 [1]. The previous
>>> implementation was a bit of a nightmare so I'm glad someone finally
>>> took the time to simplify it :)
>>>
>>> Thanks for your contribution. I hope it is the first of many!
>>>
>>> [1] http://projects.scipy.org/scipy/changeset/6121
>>>
>>> --
>>> Nathan Bell wnbell at gmail.com
>>> http://www.wnbell.com/
>>
>> Thanks for the really quick turnaround and the kind words, Nathan. I
>> enjoyed working on it. It was kinda like helping someone else on their
>> class project when I was a student. That was always more fun than
>> doing my own work.
>>
>> I might see what other open tickets there are. It seems like a good
>> way to learn something about the different parts of the system.
>>
>> Best regards,
>>
>> Tim Victor
>
> Is there a reason why you removed the special case of x being a scalar, namely:
>
> -        elif issequence(i) and issequence(j):
> -            if np.isscalar(x):
> -                for ii, jj in zip(i, j):
> -                    self._insertat(ii, jj, x)
> -            else:
> -                for ii, jj, xx in zip(i, j, x):
> -                    self._insertat(ii, jj, xx)
>
> This removal broke some code of mine, which now takes forever and behaves in a
> different way. Try this:
>
> In [1]: import scipy.sparse as spp
> In [2]: a = spp.lil_matrix((1000, 1000))
> In [3]: a
> Out[3]:
> <1000x1000 sparse matrix of type ''
>         with 0 stored elements in LInked List format>
> In [4]: import numpy as np
> In [5]: ir = ic = np.arange(1000)
> In [6]: a[ir, ic] = 1
>
> The result is a matrix with all the entries set to 1 (= full!), not just the
> diagonal, which was the previous (IMHO good) behaviour. In the real code I do
> not set the diagonal, but some other elements given by two lists ir, ic, but
> the code above shows the symptoms.
>
> I can easily fix my code by not using the LIL matrix:
>
> In [15]: a = spp.coo_matrix((np.ones((ir.shape[0],)), (ir, ic)))
> In [16]: a
> Out[16]:
> <1000x1000 sparse matrix of type ''
>         with 1000 stored elements in COOrdinate format>
>
> but I wonder if the above change in behaviour was intended...
>
> cheers,
> r.
---------------------

Robert,

> Is there a reason why you removed the special case of x being a scalar, namely:

Actually I don't think it was the special case of x being a scalar,
but rather the special case of the row index and the column index both
being sequences.
Let me know if this looks wrong to you, but using NumPy dense arrays
and matrices as a reference for correct behavior, I get:

m = sp.zeros((10,10))

# set first column of m to ones:
m[range(10),0] = 1

# set diagonal of m to -1:
m[range(10),range(10)] = -1
m

(prints:)
array([[-1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 1., -1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 1.,  0., -1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  0., -1.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  0.,  0., -1.,  0.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  0.,  0.,  0., -1.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  0., -1.,  0.,  0.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  0.,  0., -1.,  0.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0., -1.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., -1.]])

The first case, assigning a scalar with a range for the row index and
a scalar for the column index, lil_matrix still works the way it did.
It iterates the range of rows and sets column 0 for each. The second
case, iterating both of the ranges and using the row/column indexes in
pairs, was inadvertently changed.

A third case suggests that it isn't only about assigning a scalar,
since assigning from a sequence using pairwise-matched sequences of
row and column indices should work similarly:

# assign random values to the reverse diagonal:
m[range(10),range(9,-1,-1)] = sp.rand(10)
m

(prints:)
array([[-1.        ,  0.        ,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        ,  0.        ,  0.90216781],
       [ 1.        , -1.        ,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        ,  0.54728632,  0.        ],
       [ 1.        ,  0.        , -1.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.97993962,  0.        ,  0.        ],
       [ 1.        ,  0.        ,  0.        , -1.        ,  0.        ,
         0.        ,  0.66932184,  0.        ,  0.        ,  0.        ],
       [ 1.        ,  0.        ,  0.        ,  0.        , -1.        ,
         0.64246538,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 1.        ,  0.        ,  0.        ,  0.        ,  0.93092643,
        -1.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 1.        ,  0.        ,  0.        ,  0.25711642,  0.        ,
         0.        , -1.        ,  0.        ,  0.        ,  0.        ],
       [ 1.        ,  0.        ,  0.28826595,  0.        ,  0.        ,
         0.        ,  0.        , -1.        ,  0.        ,  0.        ],
       [ 1.        ,  0.01331807,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        , -1.        ,  0.        ],
       [ 0.86963499,  0.        ,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        ,  0.        , -1.        ]])

Yes, I broke that. :-) I haven't used this behavior or seen it
documented and it wasn't covered by a unit test, and I missed it. (In
my defense, lil_matrix is broken even worse for that third case in
both SciPy versions 0.7.0 and 0.7.1.)

Robert, can you (or anyone else out there) tell me if this covers it
all, or whether there is some more general array-indexing behavior
that should be implemented? I'll be happy to put in a case to handle
it.

I'm reminded of the Zen of Python "Explicit is better than implicit."
guideline.

Best regards, and many apologies for the inconvenience,

Tim

From dsdale24 at gmail.com Tue Dec 1 08:08:27 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 1 Dec 2009 08:08:27 -0500
Subject: [SciPy-dev] trouble building from svn
In-Reply-To: 
References: 
Message-ID: 

On Mon, Nov 30, 2009 at 7:15 PM, Charles R Harris wrote:
>
> On Mon, Nov 30, 2009 at 2:53 PM, Darren Dale wrote:
>>
>> I am attempting to build scipy from svn sources on OS X 10.6. I get
>> the following error, could anyone please advise?
>>
>> C compiler: gcc-4.2 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes
>> -arch x86_64 -pipe
>>
>
> What numpy version?

svn trunk

From dsdale24 at gmail.com Tue Dec 1 08:10:52 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 1 Dec 2009 08:10:52 -0500
Subject: [SciPy-dev] trouble building from svn
In-Reply-To: <4B149D72.8010007@ar.media.kyoto-u.ac.jp>
References: <4B149D72.8010007@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Mon, Nov 30, 2009 at 11:37 PM, David Cournapeau wrote:
> Darren Dale wrote:
>> I am attempting to build scipy from svn sources on OS X 10.6. I get
>> the following error, could anyone please advise?
>>
>> C compiler: gcc-4.2 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes
>> -arch x86_64 -pipe
>>
>
> Most likely you have LDFLAGS defined in your environment, which screws
> up the build. LDFLAGS (and other similar variables) do not work as
> expected with distutils, they *override* the options instead of
> completing them.
>
> The actual error is that the link step is missing the -shared option,
> hence the missing errors, as the linker tries to build an executable.

Thanks David, I think you are probably right. I had set some
environment variables to build packages like hdf5, which defaults to
32 bit because "uname" returns i386 on snow leopard. I'm not at work
today, so I will try again on the mac tomorrow.

From cimrman3 at ntc.zcu.cz Tue Dec 1 08:11:51 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 01 Dec 2009 14:11:51 +0100
Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem
In-Reply-To: 
References: <1cd32cbb0911251329se97c355x8ec78903c4260ce@mail.gmail.com> <4B139F28.8060807@ntc.zcu.cz>
Message-ID: <4B151617.1040009@ntc.zcu.cz>

Hi Tim,

Tim Victor wrote:
> On Mon, Nov 30, 2009 at 5:32 AM, Robert Cimrman wrote:
>> Is there a reason why you removed the special case of x being a scalar, namely:
>> [...]
>
> Actually I don't think it was the special case of x being a scalar,
> but rather the special case of the row index and the column index both
> being sequences.

Yes, x being scalar is in a sense a subset of the case when i, j, x are three
sequences of the same length.

> Let me know if this looks wrong to you, but using NumPy dense arrays
> and matrices as a reference for correct behavior, I get:
> [...]
> Robert, can you (or anyone else out there) tell me if this covers it
> all, or whether there is some more general array-indexing behavior
> that should be implemented? I'll be happy to put in a case to handle
> it.

I think scipy.sparse indexing should follow the behavior of numpy dense arrays.
This is what current SVN scipy does (0.8.0.dev6122):

In [1]: import scipy.sparse as spp
In [2]: a = spp.lil_matrix((10,10))
In [3]: a[range(10),0] = 1

This is ok.

In [5]: a[range(10),range(10)] = -1
In [8]: print a.todense()
------> print(a.todense())
[[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
 [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]]

This is IMHO not ok (what other sparse matrix users think?)

In [9]: import scipy as sp
In [10]: a[range(10),range(9,-1,-1)] = sp.rand(10)
In [12]: print a.todense()
-------> print(a.todense())
[[ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]
 [ 0.71485802  0.95410746  0.07350778  0.77786453  0.28376647  0.35679679
   0.34837324  0.48012982  0.87469439  0.98490982]]

same as above...

> I'm reminded of the Zen of Python "Explicit is better than implicit."
> guideline.

:)

Consider also this: the current behavior (broadcasting the index arrays to the
whole rectangle) is not compatible with NumPy, and does not allow setting
elements in, e.g., a random sequence of positions. On the other hand, the
broadcasting behaviour can easily be obtained explicitly by using
numpy.mgrid and similar functions.

> Best regards, and many apologies for the inconvenience,

No problem, any help and code contribution is more than welcome!

I guess that fixing this issue should not be too difficult, so you could make
another stab :-) If there is a consensus here, of course... (Nathan?)

cheers,
r.
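A minimal sketch, in plain NumPy, of the two assignment semantics under
discussion: paired index arrays touch one element per (i[k], j[k]) pair,
while the full-rectangle behaviour can be requested explicitly with np.ix_
(or numpy.mgrid), as suggested above. The sizes and values are illustrative:

    import numpy as np

    m = np.zeros((5, 5))
    i = j = np.arange(5)

    # Paired ("zipped") semantics, the dense-array reference behaviour:
    # element (i[k], j[k]) for each k -- only the diagonal is set.
    m[i, j] = -1
    print(m.diagonal())        # [-1. -1. -1. -1. -1.]

    # The broadcast-to-a-rectangle behaviour, when actually wanted, stated
    # explicitly: np.ix_ builds the open mesh of all index pairs.
    m[np.ix_(i, j)] = 1        # sets the whole 5x5 block
    print(m.sum())             # 25.0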
From timvictor at gmail.com Tue Dec 1 11:00:12 2009
From: timvictor at gmail.com (Tim Victor)
Date: Tue, 1 Dec 2009 11:00:12 -0500
Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem
In-Reply-To: <4B151617.1040009@ntc.zcu.cz>
References: <1cd32cbb0911251329se97c355x8ec78903c4260ce@mail.gmail.com> <4B139F28.8060807@ntc.zcu.cz> <4B151617.1040009@ntc.zcu.cz>
Message-ID: 

On Tue, Dec 1, 2009 at 8:11 AM, Robert Cimrman wrote:
> Hi Tim,
> [...]
> I think scipy.sparse indexing should follow the behavior of numpy dense arrays.
> This is what current SVN scipy does (0.8.0.dev6122):
> [...]
> This is IMHO not ok (what other sparse matrix users think?)
> [...]
> Consider also this: the current behavior (broadcasting the index arrays to the
> whole rectangle) is not compatible with NumPy, and does not allow setting
> elements in, e.g., a random sequence of positions. On the other hand, the
> broadcasting behaviour can easily be obtained explicitly by using
> numpy.mgrid and similar functions.
> [...]
> I guess that fixing this issue should not be too difficult, so you could make
> another stab :-) If there is a consensus here, of course... (Nathan?)
>
> cheers,
> r.

Yes, I agree with you 100%, Robert. The behavior of NumPy for dense
arrays should be the guide, and I tried to follow it but didn't know
to check that case.

I don't defend how my version handles your case where the i and j
indexes are both sequences. The behavior that you expect is correct
and I plan to fix it to make your code work. I would however like to
make sure that I understand it well and get it all correct this
time--including correctly handling the case where the right-hand side
is also a sequence.

Best regards,

Tim Victor

From cimrman3 at ntc.zcu.cz Tue Dec 1 11:13:33 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 01 Dec 2009 17:13:33 +0100
Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem
In-Reply-To: 
References: <1cd32cbb0911251329se97c355x8ec78903c4260ce@mail.gmail.com> <4B139F28.8060807@ntc.zcu.cz> <4B151617.1040009@ntc.zcu.cz>
Message-ID: <4B1540AD.3080005@ntc.zcu.cz>

Tim Victor wrote:
> On Tue, Dec 1, 2009 at 8:11 AM, Robert Cimrman wrote:
>> [...]
>
> Yes, I agree with you 100%, Robert. The behavior of NumPy for dense
> arrays should be the guide, and I tried to follow it but didn't know
> to check that case.

No problem at all. It was a coincidence that I stumbled on a case that was
not covered by the tests. I do not even use lil_matrix much :)

> I don't defend how my version handles your case where the i and j
> indexes are both sequences. The behavior that you expect is correct
> and I plan to fix it to make your code work. I would however like to
> make sure that I understand it well and get it all correct this
> time--including correctly handling the case where the right-hand side
> is also a sequence.

Sure! I am not an expert in this either, so let's wait a bit if somebody
chimes in... Could you summarize this discussion in a new ticket, if you have
a little spare time?

Note that I do not push this to be fixed soon by any means, my code already
runs ok with the current version. So take my "bugreport" easy ;)

Best,
r.

From wnbell at gmail.com Tue Dec 1 14:32:27 2009
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 1 Dec 2009 14:32:27 -0500
Subject: [SciPy-dev] slow setitem
In-Reply-To: 
References: 
Message-ID: 

2009/11/30 Benny Malengier :
> A question about the sparse matrix and a suggestion for a speed improvement.
>
> I have a code with a 1000*1000 sparse matrix of 11 full diagonals, using csr.
> Assigning the matrix via setdiags takes 14 seconds, of which 13
> seconds are spent in the check_format function of sparse/base.py.
>
> This is due to setdiag doing:
>
>     self[i, i + k] = v
>
> while setitem in compressed.py always does a
>
>     self.check_format(full_check=True)
>
> Removing the check_format reduces assignment of the matrix to 1 second.
> Is it really needed that the assignment of setdiag is checked again via
> check_format?
> Allowing for a setitem that does not call check_format would be useful.
>
> One could set a dirty flag on a setitem call, and do a check_format
> only when a computation is performed and the flag is dirty. On the
> other hand, I'm not making errors, and having a flag like
> NOCHECKS=TRUE would be handy, as it would speed up the code a lot.
> I could create a solution that is acceptable for scipy, but then I'd
> need to know how such a solution should look.
>
> Note that there are quite a few zeroes in my diagonals now too, which
> are now set to 0.; if setitem were fast, I could drop the setdiag
> call and do a fast setitem call. Or are sparse matrices supposed to
> be used differently?

Hi Pavol,

Try creating your matrix with the spdiags() function or the related
dia_matrix class. Constructing a matrix from diagonals with one of
these methods will be at least 100x faster than inserting into a
csr_matrix.

We don't optimize the csr_matrix element insertion code because it's
inherently slow, checks or no checks. Every time you introduce a new
nonzero the csr_matrix must perform an O(N) update to the underlying
data structure (hence, inserting N elements is O(N^2)).

--
Nathan Bell wnbell at gmail.com
http://www.wnbell.com/
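A minimal sketch of the construction suggested above -- building the banded
matrix from its diagonals in one call instead of inserting elements into a
csr_matrix one at a time. The 1000x1000, 11-diagonal shape mirrors the case
described in the question; the values are illustrative:

    import numpy as np
    import scipy.sparse as sp

    n = 1000
    offsets = np.arange(-5, 6)            # 11 full diagonals
    data = np.ones((len(offsets), n))     # one row of values per diagonal

    A = sp.spdiags(data, offsets, n, n)   # dia_matrix, built in one shot

    # One conversion to a compressed format for arithmetic or solving,
    # instead of O(N^2) incremental insertion into a csr_matrix.
    A_csr = A.tocsr()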
From ssclift at gmail.com Tue Dec 1 20:34:34 2009
From: ssclift at gmail.com (Simon Clift)
Date: Tue, 1 Dec 2009 20:34:34 -0500
Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them?
Message-ID: <200912012034.34511.ssclift@gmail.com>

Hi folks,

The tri-diagonal linear system routines (sdcz/gt*) in LAPACK don't seem to be
interfaced to the scipy.linalg.flapack module. These are (relatively)
straightforward algorithms, but it's always annoying when you need them and
have to create your own implementation.

Is there a broader reason for not completing the interface? Would anyone
object if I went ahead and patched scipy/linalg/generic_flapack.pyf to allow
the call?

Thanks

-- Simon

--
1129 Ibbetson Lane
Mississauga, Ontario
L5C 1K9 Canada

From dwf at cs.toronto.edu Tue Dec 1 21:06:40 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 1 Dec 2009 21:06:40 -0500
Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them?
In-Reply-To: <200912012034.34511.ssclift@gmail.com>
References: <200912012034.34511.ssclift@gmail.com>
Message-ID: <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu>

On 1-Dec-09, at 8:34 PM, Simon Clift wrote:
> Is there a broader reason for not completing the interface?

Nope. I think they were just wrapped on an as-needed basis. This came
up a little while ago w.r.t. general banded systems as well:

    http://mail.scipy.org/pipermail/scipy-user/2009-October/023083.html

> Would anyone object if I went ahead and patched
> scipy/linalg/generic_flapack.pyf to allow the call?

Doubtful. :) Although if you do, it might be a good idea to add a
function to scipy.linalg with a comprehensible name (like
solve_tridiagonal to go along with solve_banded [which actually does
wrap dgbsv, my post there is wrong in that regard]).

David
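A minimal sketch of solving a tridiagonal system with the existing
scipy.linalg.solve_banded, using the "matrix diagonal ordered form" it
expects -- a stopgap until a dedicated solve_tridiagonal wrapper for the
LAPACK gt* routines exists. The coefficients here are illustrative:

    import numpy as np
    from scipy.linalg import solve_banded

    n = 6
    # Diagonal ordered form for one sub- and one superdiagonal:
    # row 0 = superdiagonal, row 1 = main, row 2 = subdiagonal.
    # The unused corners ab[0, 0] and ab[2, -1] are ignored.
    ab = np.zeros((3, n))
    ab[0, 1:] = -1.0
    ab[1, :] = 2.0
    ab[2, :-1] = -1.0

    b = np.ones(n)
    x = solve_banded((1, 1), ab, b)   # (1, 1): one lower, one upper band

    # Check against the equivalent dense system.
    A = np.diag(ab[1]) + np.diag(ab[0, 1:], 1) + np.diag(ab[2, :-1], -1)
    print(np.allclose(np.dot(A, x), b))   # True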
From benny.malengier at gmail.com Wed Dec 2 03:24:27 2009
From: benny.malengier at gmail.com (Benny Malengier)
Date: Wed, 2 Dec 2009 09:24:27 +0100
Subject: [SciPy-dev] slow setitem
In-Reply-To: 
References: 
Message-ID: 

2009/12/1 Nathan Bell :
> Try creating your matrix with the spdiags() function or the related
> dia_matrix class. Constructing a matrix from diagonals with one of
> these methods will be at least 100x faster than inserting into a
> csr_matrix.
> [...]

Thanks,

As an update: we need to solve a matrix equation some 2000 times, but it is a
fixed band structure, so there is no problem with changing the data structure
after the first creation via spdiags.

After having looked at the scipy code, I see we can probably make it much more
performant by using an array laid out as required by LAPACK, and then doing
sl.lapack.get_lapack_funcs(['gbsv']) to obtain the banded LAPACK solver. I'll
check whether it is really faster later this week.

With spsolve we made it performant now by setting spmat.data directly;
fortunately the data structure of a sparse matrix is available in the API,
and it is easy to know where an element of a banded matrix ends up in a csr
matrix. This made me think it might be useful to create a banded matrix class
on top of csr; it would allow for really fast setting of complete diagonals.
But as said, it is probably quicker to just use the LAPACK array structure
for a banded sparse matrix (though not if there are far off-diagonals, as
arise e.g. in discretization via the method of lines of 2D/3D problems).

So, looking back at this, I think the main problems for scipy are:
1/ the need to add documentation to functions such as setdiag and to the base
classes, stating that they are slow and thus not good to use in loops;
2/ the need to add documentation at
http://docs.scipy.org/doc/scipy/reference/sparse.html in the API of sparse
matrices, indicating that one can access data, indexes, ... of csc and csr
matrices to achieve a great speedup (if you know what you are doing);
3/ the need to add documentation with an example of how one can solve sparse
matrices using LAPACK. A new user who wants to solve a sparse matrix system
with scipy looks in the documentation and arrives at spsolve, unaware that it
is probably not the best way to solve his problem. It is strange for
experienced people to see new PhD students starting out and having no idea
that BLAS and LAPACK exist.

Note that if you search in google on 'lapack scipy' you do not find any
example at all. So I guess when I am finished coding, I should make a small
tutorial or so about solving a large system with scipy. The doc of sparse I
find so lacking that I wouldn't know where to begin to update it. I wouldn't
mind spending a day improving the doc of sparse, but I'm really using this
part of scipy for the first time, so it would be colored with my
interpretation.

Benny
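A minimal sketch of the "direct setting of spmat.data" approach described
above: when the band structure is fixed, build the CSR matrix once and then
overwrite only its .data array before each repeated solve. This relies on
the CSR storage order staying fixed, which holds as long as the sparsity
pattern does not change; the system here is an illustrative stand-in:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n = 1000
    # Diagonally dominant tridiagonal structure, built exactly once.
    data = np.vstack([-np.ones(n), 4.0 * np.ones(n), -np.ones(n)])
    A = sp.spdiags(data, [-1, 0, 1], n, n).tocsr()

    b = np.ones(n)
    for step in range(3):       # stands in for the ~2000 repeated solves
        # New coefficients, same sparsity pattern: update the stored
        # values in place instead of rebuilding the matrix.
        A.data *= 1.01
        x = spsolve(A, b)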
From gael.varoquaux at normalesup.org Wed Dec 2 03:41:18 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 2 Dec 2009 09:41:18 +0100
Subject: [SciPy-dev] slow setitem
In-Reply-To: 
References: 
Message-ID: <20091202084118.GA17292@phare.normalesup.org>

On Wed, Dec 02, 2009 at 09:24:27AM +0100, Benny Malengier wrote:
> So, looking back at this, I think the main problems for scipy are:
> [...]
> Note that if you search in google on 'lapack scipy' you do not find any
> example at all.

Hey Benny,

If you get time, you should know that you can easily improve the scipy
documentation:

 1) Read http://docs.scipy.org/numpy/Front%20Page/
 2) Get a login
 3) On the page: http://docs.scipy.org/doc/scipy/reference/sparse.html
    (or any other documentation page) click on the 'edit page' link,
    on the bottom left.

As you have experienced, there is a wealth of knowledge to acquire to be
efficient with numerical calculation. Presenting it to the user in a way
that he finds it quickly, without being drowned by needless information, is
hard. Input from a user who has just gone through the process of learning
is invaluable.

Gaël

From benny.malengier at gmail.com Wed Dec 2 03:43:12 2009
From: benny.malengier at gmail.com (Benny Malengier)
Date: Wed, 2 Dec 2009 09:43:12 +0100
Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them?
In-Reply-To: <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu>
References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu>
Message-ID: 

2009/12/2 David Warde-Farley :
> Doubtful. :) Although if you do, it might be a good idea to add a
> function to scipy.linalg with a comprehensible name (like
> solve_tridiagonal to go along with solve_banded [which actually does
> wrap dgbsv, my post there is wrong in that regard]).

Interesting, this is exactly the function I needed for my problem, but I was
looking in scipy.sparse.linalg, so I did not notice that a banded matrix
solver was present in scipy.linalg.

In my logic, the "matrix diagonal ordered form" of
http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded
would be a type of sparse matrix one can manipulate. This would allow things
like changing a matrix in diagonal ordered form to a csr matrix, adding some
extra elements off the diagonals, and then calling a more generic solver.

Benny

From benny.malengier at gmail.com Wed Dec 2 03:53:29 2009
From: benny.malengier at gmail.com (Benny Malengier)
Date: Wed, 2 Dec 2009 09:53:29 +0100
Subject: [SciPy-dev] edit rights scipy doc
Message-ID: 

I'd like to have edit rights to the scipy docs.

login: bmcage

Reason: scipy user; I also maintain the odes scikit.

Greetings,
Benny Malengier

From gael.varoquaux at normalesup.org Wed Dec 2 03:55:22 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 2 Dec 2009 09:55:22 +0100
Subject: [SciPy-dev] edit rights scipy doc
In-Reply-To: 
References: 
Message-ID: <20091202085522.GC17292@phare.normalesup.org>

On Wed, Dec 02, 2009 at 09:53:29AM +0100, Benny Malengier wrote:
> I'd like to have edit rights to the scipy docs
> login: bmcage

I added you. You should be good to go.

Cheers,
Gaël

From dsdale24 at gmail.com Wed Dec 2 10:02:11 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Wed, 2 Dec 2009 10:02:11 -0500
Subject: [SciPy-dev] trouble building from svn
In-Reply-To: 
References: <4B149D72.8010007@ar.media.kyoto-u.ac.jp> 
Message-ID: 

On Tue, Dec 1, 2009 at 8:10 AM, Darren Dale wrote:
> On Mon, Nov 30, 2009 at 11:37 PM, David Cournapeau wrote:
>> Most likely you have LDFLAGS defined in your environment, which screws
>> up the build. LDFLAGS (and other similar variables) do not work as
>> expected with distutils, they *override* the options instead of
>> completing them.
>> [...]
>
> Thanks David, I think you are probably right. I had set some
> environment variables to build packages like hdf5, which defaults to
> 32 bit because "uname" returns i386 on snow leopard. I'm not at work
> today, so I will try again on the mac tomorrow.

I removed the following environment variables:

#export CFLAGS="-arch x86_64"
#export LDFLAGS="-arch x86_64"
#export FFLAGS="-arch x86_64"

and was able to build scipy again. Thank you David!

From forrest.bao at gmail.com Wed Dec 2 15:10:08 2009
From: forrest.bao at gmail.com (Forrest Sheng Bao)
Date: Wed, 2 Dec 2009 14:10:08 -0600
Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1
Message-ID: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com>

Hi there, I just tried to set up numpy 1.4 rc1 with icc 11.1 and I got the
error below during compilation:

compilation aborted for
build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2)
error: Command "icc -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC
-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core/include/numpy
-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core
-Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath
-Inumpy/core/include -I/usr/include/python2.4
-Ibuild/src.linux-x86_64-2.4/numpy/core/src/multiarray
-Ibuild/src.linux-x86_64-2.4/numpy/core/src/umath -c
build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c -o
build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.o"
failed with exit status 2

So I downloaded numpy 1.3 instead and set it up successfully.
From charlesr.harris at gmail.com Wed Dec 2 15:22:20 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 2 Dec 2009 13:22:20 -0700
Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1
In-Reply-To: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com>
References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com>
Message-ID: 

On Wed, Dec 2, 2009 at 1:10 PM, Forrest Sheng Bao wrote:
> Hi there, I just tried to set up numpy 1.4 rc1 with icc 11.1 and I got the
> error below during compilation:
> [...]
> So I downloaded numpy 1.3 instead and set it up successfully.

Hmm, the error message isn't very informative, was there any more?

Chuck

From wnbell at gmail.com Wed Dec 2 15:44:02 2009
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 2 Dec 2009 15:44:02 -0500
Subject: [SciPy-dev] slow setitem
In-Reply-To: <20091202084118.GA17292@phare.normalesup.org>
References: <20091202084118.GA17292@phare.normalesup.org>
Message-ID: 

On Wed, Dec 2, 2009 at 3:41 AM, Gael Varoquaux wrote:
> If you get time, you should know that you can easily improve the scipy
> documentation:
> [...]
> Input from a user who has just gone through the process of learning
> is invaluable.

Many moons ago I started writing more comprehensive scipy.sparse
documentation on the wiki:

http://www.scipy.org/SciPyPackages/Sparse

That content is still accurate, so it can be merged into the new
documentation.

--
Nathan Bell wnbell at gmail.com
http://www.wnbell.com/

From dwf at cs.toronto.edu Wed Dec 2 15:48:53 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 2 Dec 2009 15:48:53 -0500
Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them?
In-Reply-To: 
References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu>
Message-ID: 

On 2-Dec-09, at 3:43 AM, Benny Malengier wrote:
> would be a type of sparse matrix one can manipulate. This would allow
> things like changing a matrix in diagonal ordered form to a csr
> matrix, adding some extra elements off the diagonals, and then calling
> a more generic solver.
Well, often when people talk about sparse matrices they mean (often
unstructured) matrices that are _very_ sparse: large systems with a very
small fraction of the matrix elements non-zero. Banded systems don't really
fit this way of thinking, because they are a) structured and b) relatively
dense.

That said, I'm not certain there are any strong feelings on the matter from
the maintainers of scipy.sparse, so if you'd like to see this kind of
functionality you might as well open a ticket and submit a patch.

Note that converting to CSR and then adding elements would probably be an
inefficient way to do it. In general the compressed formats don't lend
themselves to insertion; dok_matrix might be a better bet, with a conversion
to CSR when you freeze the contents.

David
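A minimal sketch of the assembly pattern suggested above: accumulate entries
in a dok_matrix, which is cheap to insert into, then convert to CSR once the
contents are frozen. The entries here are illustrative:

    import scipy.sparse as sp

    n = 100
    D = sp.dok_matrix((n, n))

    # Incremental insertion is what dok_matrix is good at...
    for i in range(n):
        D[i, i] = 2.0
        if i + 1 < n:
            D[i, i + 1] = -1.0

    # ...then freeze into a compressed format for fast arithmetic.
    A = D.tocsr()
    print(A.nnz)                # 199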
From cournape at gmail.com Wed Dec 2 15:55:10 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 3 Dec 2009 05:55:10 +0900
Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1
In-Reply-To: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com>
References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com>
Message-ID: <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com>

On Thu, Dec 3, 2009 at 5:10 AM, Forrest Sheng Bao wrote:
> Hi there, I just tried to set up numpy 1.4 rc1 with icc 11.1 and I got the
> error below during compilation:
> [...]
> failed with exit status 2

we will need more info. I have just tried icc 11 on linux ia32,
against python 2.6, and it built without any issue.

Make sure to build from a clean tree,

David

From forrest.bao at gmail.com Wed Dec 2 20:43:31 2009
From: forrest.bao at gmail.com (Forrest Sheng Bao)
Date: Wed, 2 Dec 2009 19:43:31 -0600
Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1
In-Reply-To: <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com>
References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com>
Message-ID: <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com>

That's the only thing I saw on my shell. Everything before this was pages
of syntax warnings. Are there any log files?

Cheers,
Forrest

On Wed, Dec 2, 2009 at 2:55 PM, David Cournapeau wrote:
> we will need more info. I have just tried icc 11 on linux ia32,
> against python 2.6, and it built without any issue.
>
> Make sure to build from a clean tree,
> [...]

From charlesr.harris at gmail.com Wed Dec 2 20:48:42 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 2 Dec 2009 18:48:42 -0700
Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1
In-Reply-To: <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com>
References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com>
Message-ID: 

On Wed, Dec 2, 2009 at 6:43 PM, Forrest Sheng Bao wrote:
> That's the only thing I saw on my shell. Everything before this was pages
> of syntax warnings. Are there any log files?

Redirect the output:

python setup.py build &> logfile

and zip up the result. You can also examine it to see if any of the
warnings look suspicious.

Chuck

From lists at onerussian.com Thu Dec 3 11:53:21 2009
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Thu, 3 Dec 2009 11:53:21 -0500
Subject: [SciPy-dev] Ball Tree class
In-Reply-To: <58df6dc20910291501gdf4ea83k7817147616fdf5be@mail.gmail.com>
References: <58df6dc20910291501gdf4ea83k7817147616fdf5be@mail.gmail.com>
Message-ID: <20091203165320.GA3462@onerussian.com>

Hi Jake and SciPy,

I wonder what is the status of this endeavor?

For our little project we need fast search for a spherical neighborhood in a
cloud of points. Only now did I discover that scipy has KDTree and cKDTree,
which could be used, BUT both interfaces are somewhat of a misfit for this
simple goal, one way or another.

Also, I've run into the libkdtree++ library with Python bindings, which seems
to do close to what I need (it just does range search, not sphere search, but
that could be computed more or less efficiently post hoc) and does it very
efficiently in my simple tests: http://libkdtree.alioth.debian.org/
but it is under the Artistic License 2.0... Still, maybe it might be of use
to inspire modification of scipy's ways of interfacing with the user ;)

And Jake, how are you getting along with your project? Since there were no
follow-ups on this thread, I wonder if there was any progress.

On Thu, 29 Oct 2009, Jake VanderPlas wrote:
> Hello,
> I've been using scipy.spatial.KDTree for my research, and found it
> very useful. Recently, though, my datasets have been getting larger -
> upwards of 10,000 points in 1000-2000 dimensions.
> The KDTree starts getting slow with such a
> large dimensionality. I've addressed this by writing a C++ Ball Tree
> code, and wrapping it using swig and numpy.i. I've been wanting to
> begin contributing to the scipy project, and I think this would be a
> great place to start. I'd like to begin the process of adding
> this to the scipy.spatial package.
> A few questions: Is it preferable to have an
> implementation in C rather than C++? Cython, swig, or hand-wrapped code?
> cKDTree is written in Cython, with C. Should I stick to that
> convention to maintain uniformity in the scipy.spatial package?
> Please let me know what you think
> -Jake

--
.-.
=------------------------------ /v\ ----------------------------=
Keep in touch // \\ (yoh@|www.)onerussian.com
Yaroslav Halchenko /( )\ ICQ#: 60653192
Linux User ^^-^^ [175555]

From jakevdp at gmail.com Thu Dec 3 12:13:28 2009
From: jakevdp at gmail.com (Jake VanderPlas)
Date: Thu, 3 Dec 2009 09:13:28 -0800
Subject: [SciPy-dev] Ball Tree class
Message-ID: <58df6dc20912030913x14b34ac1h55f8ca133f4116a5@mail.gmail.com>

>Hi Jake and SciPy,
>
>I wonder what is the status of this endeavor?
>[...]
>And Jake, how are you getting along with your project? Since there were no
>follow-ups on this thread, I wonder if there was any progress.

Yaroslav,
I have not done much more since the previous emails. I got the
impression from people's responses that the community is not
interested in this code until it can be both more general and more
complete (allowing for flexible distance metrics, multiple data types,
approximate searches, alternate tree construction schemes, etc.).
I have not had the time to work on adding those elements.

As it stands, the code is available for review in ticket #1048:

http://projects.scipy.org/scipy/ticket/1048

It does have the capability of doing fast spherical neighborhood
searches with a Euclidean distance metric. Take a look, and I'd
appreciate any feedback you have.
-Jake
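For comparison with the Ball Tree code in ticket #1048, a minimal sketch of
a fixed-radius ("spherical neighborhood") query using the existing
scipy.spatial.KDTree interface -- query_ball_point returns the indices of
all points within distance r of the query location. The data here is random
and purely illustrative; as noted above, kd-trees degrade in very high
dimensions, which is the gap the Ball Tree aims to fill:

    import numpy as np
    from scipy.spatial import KDTree

    points = np.random.rand(1000, 3)     # a cloud of points in 3-D
    tree = KDTree(points)

    # Indices of all points inside the sphere of radius 0.1:
    center = np.array([0.5, 0.5, 0.5])
    neighbors = tree.query_ball_point(center, r=0.1)
    print(len(neighbors), "points in the sphere")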
-Jake From charlesr.harris at gmail.com Thu Dec 3 12:32:02 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 3 Dec 2009 10:32:02 -0700 Subject: [SciPy-dev] Ball Tree class In-Reply-To: <58df6dc20912030913x14b34ac1h55f8ca133f4116a5@mail.gmail.com> References: <58df6dc20912030913x14b34ac1h55f8ca133f4116a5@mail.gmail.com> Message-ID: On Thu, Dec 3, 2009 at 10:13 AM, Jake VanderPlas wrote: > >Hi Jake and SciPy, > > > >I wonder what is the status of this endeavor? > > > >For our little project we need fast search for a spherical neighborhood > >in a cloud of points. Only now I discovered that scipy has KDTree and > >cKDTree which could be used, BUT both interfaces are somewhat > >misfit for this simple goal one way or another. > > > >Also I've ran into libkdtree++ library with Python bindings which seems > >to do close what I need (just does range search, not sphere but it could > >be more or less efficiently computed post-hoc) and do it very > >efficiently upon my simple tests: http://libkdtree.alioth.debian.org/ > >but it is under Artistic License 2.0... but may be it might be of use > >to inspire modification of scipy's ways to interface with the user > >;) > > > >And Jake, how are you going along with your project since there were no > >follow ups on this thread I wonder if there was any progress? > > Yaroslav, > I have not done much more since the previous emails. I got the > impression from people's responses that the community is not > interested in this code until it can be both more general and more > complete (allowing for flexible distance metrics, multiple data types, > approximate searches, alternate tree construction schemes, etc.) > This is a common problem in open source, propose something simple and you get asked to undertake world domination ;) I think simple Euclidean distances and double would be a good start as long as the design doesn't require a complete refactoring to add more general metrics. The code itself needs a lot of style fixes. If you don't have the time maybe Yaroslav can pick it up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakevdp at gmail.com Thu Dec 3 14:08:46 2009 From: jakevdp at gmail.com (Jake VanderPlas) Date: Thu, 3 Dec 2009 11:08:46 -0800 Subject: [SciPy-dev] SciPy-Dev Digest, Vol 74, Issue 5 In-Reply-To: References: Message-ID: <58df6dc20912031108h2b4c83fcp2fa21ba576ba75ec@mail.gmail.com> >> >Hi Jake and SciPy, >> > >> >I wonder what is the status of this endeavor? >> > >> >For our little project we need fast search for a spherical neighborhood >> >in a cloud of points. ?Only now I discovered that scipy has KDTree and >> >cKDTree which could be ?used, BUT both interfaces are somewhat >> >misfit for this simple goal one way or another. >> > >> >Also I've ran into libkdtree++ library with Python bindings which seems >> >to do close what I need (just does range search, not sphere but it could >> >be more or less efficiently computed post-hoc) and do it very >> >efficiently upon my simple tests: http://libkdtree.alioth.debian.org/ >> >but it is under Artistic License 2.0... ?but may be it might be of use >> >to inspire modification of scipy's ways to interface with the user >> >;) >> > >> >And Jake, how are you going along with your project since there were no >> >follow ups on this thread I wonder if there was any progress? >> >> Yaroslav, >> I have not done much more since the previous emails. 
?I got the >> impression from people's responses that the community is not >> interested in this code until it can be both more general and more >> complete (allowing for flexible distance metrics, multiple data types, >> approximate searches, alternate tree construction schemes, etc.) >> > > This is a common problem in open source, propose something simple and you > get asked to undertake world domination ;) I think simple Euclidean > distances and double would be a good start as long as the design doesn't > require a complete refactoring to add more general metrics. The code itself > needs a lot of style fixes. If you don't have the time maybe Yaroslav can > pick it up. > > Chuck Chuck, Can you give me some pointers on what style-fixes are needed? Are you referring to the C++ code, the python wrapper, or both? I'm relatively new to the python open-source world, and would love to gain some experience in this sort of thing. As far as future flexibility goes, the C++ code is templated and able to handle pointers to custom distance functions. The current python wrapper is more rigid. -Jake From forrest.bao at gmail.com Thu Dec 3 14:36:56 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Thu, 3 Dec 2009 13:36:56 -0600 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> Message-ID: <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> I took a look at the compilation log and though the error might be here: numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function definition with no prior declaration float npy_log2_1pf(float x) ^ numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1) { ^ numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function definition with no prior declaration float npy_exp2_1mf(float x) ^ numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1.0) { ^ numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point equality and inequality comparisons are unreliable } else if (u - 1 == -1) { ^ numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function definition with no prior declaration double npy_log2_1p(double x) ^ numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1) { ^ numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function definition with no prior declaration double npy_exp2_1m(double x) ^ numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1.0) { ^ numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point equality and inequality comparisons are unreliable } else if (u - 1 == -1) { ^ numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function definition with no prior declaration npy_longdouble npy_log2_1pl(npy_longdouble x) ^ numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1) { ^ numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function definition with no prior declaration npy_longdouble 
npy_exp2_1ml(npy_longdouble x) ^ numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point equality and inequality comparisons are unreliable if (u == 1.0) { ^ numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point equality and inequality comparisons are unreliable } else if (u - 1 == -1) { ^ compilation aborted for build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) Cheers, Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Thu Dec 3 14:42:02 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 3 Dec 2009 20:42:02 +0100 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> Message-ID: Hi, It's not, remarks are even lower than warnings, so no error here ;) Matthieu 2009/12/3 Forrest Sheng Bao : > I took a look at the compilation log and though the error might be here: > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function > definition with no prior declaration > ? float npy_log2_1pf(float x) > ??????? ^ > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function > definition with no prior declaration > ? float npy_exp2_1mf(float x) > ??????? ^ > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1.0) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? } else if (u - 1 == -1) { > ????????????????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function > definition with no prior declaration > ? double npy_log2_1p(double x) > ???????? ^ > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function > definition with no prior declaration > ? double npy_exp2_1m(double x) > ???????? ^ > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1.0) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? } else if (u - 1 == -1) { > ????????????????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external function > definition with no prior declaration > ? npy_longdouble npy_log2_1pl(npy_longdouble x) > ???????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external function > definition with no prior declaration > ? npy_longdouble npy_exp2_1ml(npy_longdouble x) > ???????????????? 
^ > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? if (u == 1.0) { > ?????????????? ^ > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > equality and inequality comparisons are unreliable > ????? } else if (u - 1 == -1) { > ????????????????????????? ^ > > compilation aborted for > build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) > > > Cheers, > Forrest > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From forrest.bao at gmail.com Thu Dec 3 14:46:18 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Thu, 3 Dec 2009 13:46:18 -0600 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> Message-ID: <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> But right after those, i saw this compilation aborted for build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) error: Command "icc -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/include/python2.4 -Ibuild/src.linux-x86_64-2.4/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.4/numpy/core/src/umath -c build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c -o build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.o" failed with exit status 2 I couldn't see any other reason it aborted the compilation. 
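One way to separate hard errors from icc's informational remarks is to filter the captured log. A rough sketch, assuming the build output was saved to 'logfile' as suggested earlier in the thread:

# print only hard compiler errors, skipping icc's "remark" chatter
for line in open('logfile'):
    if 'error' in line and 'remark' not in line:
        print(line.rstrip())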
If I have to find a warning, then this is the only one: numpy/core/src/npymath/npy_math_private.h(414): error: a float, double or long double type must be included in the type specifier for a _Complex or _Imaginary type complex c99_z; Cheers, Forrest On Thu, Dec 3, 2009 at 1:42 PM, Matthieu Brucher wrote: > Hi, > > It's not, remarks are even lower than warnings, so no error here ;) > > Matthieu > > 2009/12/3 Forrest Sheng Bao : > > I took a look at the compilation log and though the error might be here: > > > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external > function > > definition with no prior declaration > > float npy_log2_1pf(float x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external > function > > definition with no prior declaration > > float npy_exp2_1mf(float x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1.0) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > } else if (u - 1 == -1) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external > function > > definition with no prior declaration > > double npy_log2_1p(double x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external > function > > definition with no prior declaration > > double npy_exp2_1m(double x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1.0) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > } else if (u - 1 == -1) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(398): remark #1418: external > function > > definition with no prior declaration > > npy_longdouble npy_log2_1pl(npy_longdouble x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(401): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(408): remark #1418: external > function > > definition with no prior declaration > > npy_longdouble npy_exp2_1ml(npy_longdouble x) > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(411): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > if (u == 1.0) { > > ^ > > > > numpy/core/src/npymath/npy_math.c.src(413): remark #1572: floating-point > > equality and inequality comparisons are unreliable > > } else if (u - 1 == -1) { > > ^ > > > > compilation aborted for > > build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) > > > > > > Cheers, > > Forrest > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > -- > Information System Engineer, Ph.D. 
> Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Dec 3 15:13:29 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 3 Dec 2009 13:13:29 -0700 Subject: [SciPy-dev] SciPy-Dev Digest, Vol 74, Issue 5 In-Reply-To: <58df6dc20912031108h2b4c83fcp2fa21ba576ba75ec@mail.gmail.com> References: <58df6dc20912031108h2b4c83fcp2fa21ba576ba75ec@mail.gmail.com> Message-ID: On Thu, Dec 3, 2009 at 12:08 PM, Jake VanderPlas wrote: > >> >Hi Jake and SciPy, > >> > > >> >I wonder what is the status of this endeavor? > >> > > >> >For our little project we need fast search for a spherical neighborhood > >> >in a cloud of points. Only now I discovered that scipy has KDTree and > >> >cKDTree which could be used, BUT both interfaces are somewhat > >> >misfit for this simple goal one way or another. > >> > > >> >Also I've ran into libkdtree++ library with Python bindings which seems > >> >to do close what I need (just does range search, not sphere but it > could > >> >be more or less efficiently computed post-hoc) and do it very > >> >efficiently upon my simple tests: http://libkdtree.alioth.debian.org/ > >> >but it is under Artistic License 2.0... but may be it might be of use > >> >to inspire modification of scipy's ways to interface with the user > >> >;) > >> > > >> >And Jake, how are you going along with your project since there were no > >> >follow ups on this thread I wonder if there was any progress? > >> > >> Yaroslav, > >> I have not done much more since the previous emails. I got the > >> impression from people's responses that the community is not > >> interested in this code until it can be both more general and more > >> complete (allowing for flexible distance metrics, multiple data types, > >> approximate searches, alternate tree construction schemes, etc.) > >> > > > > This is a common problem in open source, propose something simple and you > > get asked to undertake world domination ;) I think simple Euclidean > > distances and double would be a good start as long as the design doesn't > > require a complete refactoring to add more general metrics. The code > itself > > needs a lot of style fixes. If you don't have the time maybe Yaroslav can > > pick it up. > > > > Chuck > > Chuck, > Can you give me some pointers on what style-fixes are needed? Are you > referring to the C++ code, the python wrapper, or both? I'm > relatively new to the python open-source world, and would love to gain > some experience in this sort of thing. As far as future flexibility > goes, the C++ code is templated and able to handle pointers to custom > distance functions. The current python wrapper is more rigid. > I was referring to the c++ code. Things like indentation, whitespace, debugging code and such are what caught my eye. Python pep 7 is a good start for style. I'll take a closer look at your code this weekend. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Thu Dec 3 15:25:21 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 3 Dec 2009 13:25:21 -0700 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> Message-ID: On Thu, Dec 3, 2009 at 12:46 PM, Forrest Sheng Bao wrote: > But right after those, i saw this > > > compilation aborted for > build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) > error: Command "icc -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core/include/numpy > -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core > -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath > -Inumpy/core/include -I/usr/include/python2.4 > -Ibuild/src.linux-x86_64-2.4/numpy/core/src/multiarray > -Ibuild/src.linux-x86_64-2.4/numpy/core/src/umath -c > build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c -o > build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.o" > failed with exit status 2 > > I couldn't see any other reason it aborted the compilation. > > If I have to find a warning, then this is the only one: > > numpy/core/src/npymath/npy_math_private.h(414): error: a float, double or > long double type must be included in the type specifier for a _Complex or > _Imaginary type > complex c99_z; > > > Hmm, that does look suspicious. Just for reference, what os are you running on? I also wonder if 11.1 differs from David's 11? Another possibility is an include file mixup somewhere, what other compilers are on the system? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Thu Dec 3 15:42:25 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 3 Dec 2009 21:42:25 +0100 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> Message-ID: This isn't a warning, it's an error, and it may be the reason of the compilation failure if it is the first real reported error. It's strange though, as it might looh as a C++ template error (in some ways). Matthieu > If I have to find a warning, then this is the only one: > > numpy/core/src/npymath/npy_math_private.h(414): error: a float, double or > long double type must be included in the type specifier for a _Complex or > _Imaginary type > ????????? complex c99_z; -- Information System Engineer, Ph.D. 
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From forrest.bao at gmail.com Thu Dec 3 16:09:11 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Thu, 3 Dec 2009 15:09:11 -0600 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> Message-ID: <889df5f00912031309r45055297p66906d49724a58f9@mail.gmail.com> The other compiler is gcc, e.g., gcc, g++, gfortran. But I never compiled numpy-1.4rc1 with gcc. I always use icc since that's the default one on my computer. The OS is CentOS 5.2 grendel:$ uname -a Linux xxx.hpcc.ttu.edu 2.6.18-128.7.1.el5_lustre.1.8.1.1 #1 SMP Tue Oct 6 05:48:57 MDT 2009 x86_64 x86_64 x86_64 GNU/Linux grendel:$ icc -v Version 11.1 Cheers, Forrest On Thu, Dec 3, 2009 at 2:25 PM, Charles R Harris wrote: > > > On Thu, Dec 3, 2009 at 12:46 PM, Forrest Sheng Bao wrote: > >> But right after those, i saw this >> >> >> compilation aborted for >> build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c (code 2) >> error: Command "icc -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector >> --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC >> -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core/include/numpy >> -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core >> -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath >> -Inumpy/core/include -I/usr/include/python2.4 >> -Ibuild/src.linux-x86_64-2.4/numpy/core/src/multiarray >> -Ibuild/src.linux-x86_64-2.4/numpy/core/src/umath -c >> build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.c -o >> build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy/core/src/npymath/npy_math.o" >> failed with exit status 2 >> >> I couldn't see any other reason it aborted the compilation. >> >> If I have to find a warning, then this is the only one: >> >> numpy/core/src/npymath/npy_math_private.h(414): error: a float, double or >> long double type must be included in the type specifier for a _Complex or >> _Imaginary type >> complex c99_z; >> >> >> > Hmm, that does look suspicious. Just for reference, what os are you running > on? I also wonder if 11.1 differs from David's 11? Another possibility is an > include file mixup somewhere, what other compilers are on the system? > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Thu Dec 3 19:15:23 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 04 Dec 2009 02:15:23 +0200 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> Message-ID: <1259885722.15923.0.camel@idol> to, 2009-12-03 kello 13:25 -0700, Charles R Harris kirjoitti: [clip] > numpy/core/src/npymath/npy_math_private.h(414): error: a > float, double or long double type must be included in the type > specifier for a _Complex or _Imaginary type > complex c99_z; This should probably read "complex double" instead of "complex" -- IIRC, you always need the type specifier if you want to be pedantic... -- Pauli Virtanen From forrest.bao at gmail.com Thu Dec 3 23:20:44 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Thu, 3 Dec 2009 22:20:44 -0600 Subject: [SciPy-dev] numpy 1.3.0 import error Message-ID: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> Hi there, I just compiled and installed numpy 1.3.0 on my system. But when I try to import numpy, I got this error: from numpy import * Traceback (most recent call last): File "", line 1, in ? File "/home/sbao/apps/lib64/python2.4/site-packages/numpy/__init__.py", line 130, in ? import add_newdocs File "/home/sbao/apps/lib64/python2.4/site-packages/numpy/add_newdocs.py", line 9, in ? from lib import add_newdoc File "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/__init__.py", line 4, in ? from type_check import * File "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ? import numpy.core.numeric as _nx File "/home/sbao/apps/lib64/python2.4/site-packages/numpy/core/__init__.py", line 5, in ? import multiarray ImportError: /home/sbao/apps/lib64/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: __intel_security_cookie The CPU is an Intel Xeon. Any developer know what's the reason? My compiler is icc 11.1. Cheers, Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Dec 3 23:47:23 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 04 Dec 2009 13:47:23 +0900 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <1259885722.15923.0.camel@idol> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> <1259885722.15923.0.camel@idol> Message-ID: <4B18945B.6010201@ar.media.kyoto-u.ac.jp> Pauli Virtanen wrote: > to, 2009-12-03 kello 13:25 -0700, Charles R Harris kirjoitti: > [clip] > >> numpy/core/src/npymath/npy_math_private.h(414): error: a >> float, double or long double type must be included in the type >> specifier for a _Complex or _Imaginary type >> complex c99_z; >> > > This should probably read "complex double" instead of "complex" -- IIRC, > you always need the type specifier if you want to be pedantic... > I must have forgotten to clean something, because I can now reproduce the problem myself... 
It is fixed in both trunk and 1.4.x, and this time I have double checked with numscons that it does work and that all tests pass (there is one test failure related to clog). cheers, David From david at ar.media.kyoto-u.ac.jp Thu Dec 3 23:56:53 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 04 Dec 2009 13:56:53 +0900 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> Message-ID: <4B189695.8080400@ar.media.kyoto-u.ac.jp> Forrest Sheng Bao wrote: > Hi there, > > I just compiled and installed numpy 1.3.0 on my system. But when I try > to import numpy, I got this error: > > from numpy import * > Traceback (most recent call last): > File "", line 1, in ? > File > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/__init__.py", > line 130, in ? > import add_newdocs > File > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/add_newdocs.py", > line 9, in ? > from lib import add_newdoc > File > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/__init__.py", > line 4, in ? > from type_check import * > File > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/type_check.py", > line 8, in ? > import numpy.core.numeric as _nx > File > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/core/__init__.py", > line 5, in ? > import multiarray > ImportError: > /home/sbao/apps/lib64/python2.4/site-packages/numpy/core/multiarray.so: > undefined symbol: __intel_security_cookie > > The CPU is an Intel Xeon. Any developer know what's the reason? Most likely a problem with building with icc. __intel_security_cookie is a canary against buffer overflow, but I cannot find much reference about it. Make sure neither CFLAGS nor LDFLAGS is set when you build numpy - I would not be surprised if this is specific to some security-related flags in icc. You could look which Intel library define this symbol, by running nm + grep on the Intel compiler library directory. As I mentioned in your other thread, I could finally reproduce your error, fixed numpy sources, and 1.4.x should now be buildable. I can myself import numpy built with icc 11.0 on ia32. David From ssclift at gmail.com Fri Dec 4 19:52:37 2009 From: ssclift at gmail.com (Simon Clift) Date: Fri, 4 Dec 2009 19:52:37 -0500 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> Message-ID: <200912041952.37808.ssclift@gmail.com> On Wednesday 02 December 2009 15:48:53 David Warde-Farley wrote: > On 2-Dec-09, at 3:43 AM, Benny Malengier wrote: > > would be a type of sparse matrix one can manipulate. T > Note that converting to CSR and then adding elements would probably be > an inefficient way to do it. Yes, a tridiagonal matrix usually arises from 1D problems and finite differences for advection diffusion problems (which I need to solve). Usually those kind of Newtonian problems are positive definite so you can LU factorize without pivoting. My problem is particularly well conditioned, so I'm interested in that, done at top speed. Banded usually arises from 2D,3D finite differences on regular grids. That structure lends itself to multiplying out bands using vector routines. That can be particularly efficient when you are building linear systems where the coefficients are changing or non-linear. CSR is most useful on irregular grids, as already noted. 
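For the record, the top-speed tridiagonal path already has a usable front end in scipy.linalg.solve_banded; a small sketch on a stand-in 1D diffusion-style system (the coefficients are made up):

import numpy as np
from scipy.linalg import solve_banded

n = 6
# LAPACK banded storage for a tridiagonal matrix:
# row 0 = superdiagonal, row 1 = main diagonal, row 2 = subdiagonal
ab = np.zeros((3, n))
ab[0, 1:] = -1.0
ab[1, :] = 2.0
ab[2, :-1] = -1.0
b = np.ones(n)

# (1, 1) = one subdiagonal and one superdiagonal
x = solve_banded((1, 1), ab, b)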
Each has got its use and changing formats, especially if the problems are large and time is of the essence, is usually a bad idea. -- 1129 Ibbetson Lane Mississauga, Ontario L5C 1K9 Canada From forrest.bao at gmail.com Fri Dec 4 20:36:57 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Fri, 4 Dec 2009 19:36:57 -0600 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <4B189695.8080400@ar.media.kyoto-u.ac.jp> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> <4B189695.8080400@ar.media.kyoto-u.ac.jp> Message-ID: <889df5f00912041736m44602a12s7aaab0dcc4547ef3@mail.gmail.com> Hi David, do you know how to check my CFLAGS or LDFLAGS before compiling? I used the set.py to do everything. Cheers, Forrest On Thu, Dec 3, 2009 at 10:56 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Forrest Sheng Bao wrote: > > Hi there, > > > > I just compiled and installed numpy 1.3.0 on my system. But when I try > > to import numpy, I got this error: > > > > from numpy import * > > Traceback (most recent call last): > > File "", line 1, in ? > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/__init__.py", > > line 130, in ? > > import add_newdocs > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/add_newdocs.py", > > line 9, in ? > > from lib import add_newdoc > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/__init__.py", > > line 4, in ? > > from type_check import * > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/type_check.py", > > line 8, in ? > > import numpy.core.numeric as _nx > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/core/__init__.py", > > line 5, in ? > > import multiarray > > ImportError: > > /home/sbao/apps/lib64/python2.4/site-packages/numpy/core/multiarray.so: > > undefined symbol: __intel_security_cookie > > > > The CPU is an Intel Xeon. Any developer know what's the reason? > > Most likely a problem with building with icc. __intel_security_cookie is > a canary against buffer overflow, but I cannot find much reference about > it. Make sure neither CFLAGS nor LDFLAGS is set when you build numpy - I > would not be surprised if this is specific to some security-related > flags in icc. You could look which Intel library define this symbol, by > running nm + grep on the Intel compiler library directory. > > As I mentioned in your other thread, I could finally reproduce your > error, fixed numpy sources, and 1.4.x should now be buildable. I can > myself import numpy built with icc 11.0 on ia32. > > David > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From forrest.bao at gmail.com Fri Dec 4 20:54:46 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Fri, 4 Dec 2009 19:54:46 -0600 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <4B18945B.6010201@ar.media.kyoto-u.ac.jp> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <5b8d13220912021255x6f190f8am92b0447b8125cf36@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> <1259885722.15923.0.camel@idol> <4B18945B.6010201@ar.media.kyoto-u.ac.jp> Message-ID: <889df5f00912041754j19590c34i7f6e1096b232e977@mail.gmail.com> Hi David, Is there any delay between SourceForge and its mirror? I just downloaded from SF.net but the same problem happened again. Cheers, Forrest On Thu, Dec 3, 2009 at 10:47 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Pauli Virtanen wrote: > > to, 2009-12-03 kello 13:25 -0700, Charles R Harris kirjoitti: > > [clip] > > > >> numpy/core/src/npymath/npy_math_private.h(414): error: a > >> float, double or long double type must be included in the type > >> specifier for a _Complex or _Imaginary type > >> complex c99_z; > >> > > > > This should probably read "complex double" instead of "complex" -- IIRC, > > you always need the type specifier if you want to be pedantic... > > > > I must have forgotten to clean something, because I can now reproduce > the problem myself... It is fixed in both trunk and 1.4.x, and this time > I have double checked with numscons that it does work and that all tests > pass (there is one test failure related to clog). > > cheers, > > David > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Dec 4 21:16:28 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Dec 2009 19:16:28 -0700 Subject: [SciPy-dev] numpy 1.4 rc1 conflict with icc 11.1 In-Reply-To: <889df5f00912041754j19590c34i7f6e1096b232e977@mail.gmail.com> References: <889df5f00912021210r7601fe04te502a288d30fe7c8@mail.gmail.com> <889df5f00912021743v50b0f777x93c8240874f96db1@mail.gmail.com> <889df5f00912031136i687c9b6dh34c6002fdedbe553@mail.gmail.com> <889df5f00912031146r24849b4akfcdbe40cf4ffb1e@mail.gmail.com> <1259885722.15923.0.camel@idol> <4B18945B.6010201@ar.media.kyoto-u.ac.jp> <889df5f00912041754j19590c34i7f6e1096b232e977@mail.gmail.com> Message-ID: On Fri, Dec 4, 2009 at 6:54 PM, Forrest Sheng Bao wrote: > Hi David, > > Is there any delay between SourceForge and its mirror? I just downloaded > from SF.net but the same problem happened again. > > It is in subversion but won't be on sourceforge until the next release candidate. You should check out the repository if you want to help us debug this problem. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Dec 4 21:17:49 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Dec 2009 19:17:49 -0700 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <889df5f00912041736m44602a12s7aaab0dcc4547ef3@mail.gmail.com> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> <4B189695.8080400@ar.media.kyoto-u.ac.jp> <889df5f00912041736m44602a12s7aaab0dcc4547ef3@mail.gmail.com> Message-ID: On Fri, Dec 4, 2009 at 6:36 PM, Forrest Sheng Bao wrote: > Hi David, do you know how to check my CFLAGS or LDFLAGS before compiling? I > used the set.py to do everything. > > env | grep CFLAGS Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From benny.malengier at gmail.com Sat Dec 5 04:42:44 2009 From: benny.malengier at gmail.com (Benny Malengier) Date: Sat, 5 Dec 2009 10:42:44 +0100 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: <200912041952.37808.ssclift@gmail.com> References: <200912012034.34511.ssclift@gmail.com> <200912041952.37808.ssclift@gmail.com> Message-ID: 2009/12/5 Simon Clift : > On Wednesday 02 December 2009 15:48:53 David Warde-Farley wrote: >> On 2-Dec-09, at 3:43 AM, Benny Malengier wrote: >> > would be a type of sparse matrix one can manipulate. T >> Note that converting to CSR and then adding elements would probably be >> an inefficient way to do it. > > Yes, a tridiagonal matrix usually arises from 1D problems and finite > differences for advection diffusion problems (which I need to solve). Usually > those kind of Newtonian problems are positive definite so you can LU factorize > without pivoting. My problem is particularly well conditioned, so I'm > interested in that, done at top speed. I do the same but with 3 components and a moving interface boundary. So then it is banded + an entry in the last column for the interface. So my remark was about having code for the non-moving interface boundary which would be banded (or with one component tridiagonal), but when you move to a moving boundary (Landau transformation applied, so the grid is moving with the internal grid fixed over the (0,1) interval), you could take the code written for the banded system, and just add the one column. To have such code, you would need to be able to change from an implementation of a sparse banded matrix class to a CSR sparse matrix, without the need to change the rest of the code. Well, I did not work out any details so perhaps it would not be that easy, but that is the background of my request for having banded/tridiag as types of sparse matrices. > Banded usually arises from 2D,3D finite differences on regular grids. That > structure lends itself to multiplying out bands using vector routines. That > can be particularly efficient when you are building linear systems where the > coefficients are changing or non-linear. I don't follow here. If you have a 2D nxn grid, then the bands are one off the diagonal, and then n off the diagonal. The banded matrix implementations don't allow for this, as they assume a fixed number of lower and upper diagonals, so you need to use a CSR sparse matrix here too, no? One could make a generic banded matrix implementation on top of CSR as I suggested, in which you say beforehand which specific off-diagonals can have values. That would be nice for doing 2D/3D method of lines. 
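Something like scipy.sparse.spdiags already lets you declare up front which offsets carry data. A rough sketch of the 2D five-point stencil on an nxn grid, ignoring the boundary-row corrections for brevity:

import numpy as np
from scipy.sparse import spdiags

n = 4                  # grid is n x n, so the matrix is n^2 x n^2
N = n * n
e = np.ones(N)
# bands at offsets 0, +-1 and +-n, as discussed above
data = np.array([-e, -e, 4 * e, -e, -e])
A = spdiags(data, [-n, -1, 0, 1, n], N, N).tocsr()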
By coincidence, I just have a mailing thread in the sundials user list about such a thing: http://old.nabble.com/Re%3A-CVODE-SUNDIALS-Feature-Wish-%3A-operator-splitting-td26645909.html I obtained a private reply from a person who extended CVODE to work on the sparse matrix class of the Meschach C library, with the specific goal of doing 2D/3D method of lines on top of CVODE. Anyway, if you actually do convection-diffusion of one component, then I would advise you to use CVODE, either via the pysundials package or via the higher-level abstraction scikits.odes I wrote (am writing). I don't think a tridiag solver will be able to beat an adaptive time-stepping BDF scheme. I have done that extensively, also with an added algebraic condition on mass conservation, with great results. Benny > CSR is most useful on irregular grids, as already noted. > > Each has got its use and changing formats, especially if the problems are > large and time is of the essence, is usually a bad idea. > > -- > 1129 Ibbetson Lane > Mississauga, Ontario > L5C 1K9 Canada > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From forrest.bao at gmail.com Sat Dec 5 13:04:37 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Sat, 5 Dec 2009 12:04:37 -0600 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <4B189695.8080400@ar.media.kyoto-u.ac.jp> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> <4B189695.8080400@ar.media.kyoto-u.ac.jp> Message-ID: <889df5f00912051004v2e9c96e3td8c88f10f32af353@mail.gmail.com> Hi David, Do you think this could be a problem that I didn't compile Python using icc as well? The Python comes as the default on CentOS and the numpy was compiled with icc by me. Could that be the reason for the error? Cheers, Forrest On Thu, Dec 3, 2009 at 10:56 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Forrest Sheng Bao wrote: > > Hi there, > > > > I just compiled and installed numpy 1.3.0 on my system. But when I try > > to import numpy, I got this error: > > > > from numpy import * > > Traceback (most recent call last): > > File "", line 1, in ? > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/__init__.py", > > line 130, in ? > > import add_newdocs > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/add_newdocs.py", > > line 9, in ? > > from lib import add_newdoc > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/__init__.py", > > line 4, in ? > > from type_check import * > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/lib/type_check.py", > > line 8, in ? > > import numpy.core.numeric as _nx > > File > > "/home/sbao/apps/lib64/python2.4/site-packages/numpy/core/__init__.py", > > line 5, in ? > > import multiarray > > ImportError: > > /home/sbao/apps/lib64/python2.4/site-packages/numpy/core/multiarray.so: > > undefined symbol: __intel_security_cookie > > > > The CPU is an Intel Xeon. Any developer know what's the reason? > > Most likely a problem with building with icc. __intel_security_cookie is > a canary against buffer overflow, but I cannot find much reference about > it. Make sure neither CFLAGS nor LDFLAGS is set when you build numpy - I > would not be surprised if this is specific to some security-related > flags in icc. You could look which Intel library define this symbol, by > running nm + grep on the Intel compiler library directory. 
> > As I mentioned in your other thread, I could finally reproduce your > error, fixed numpy sources, and 1.4.x should now be buildable. I can > myself import numpy built with icc 11.0 on ia32. > > David > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Dec 6 11:46:58 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 6 Dec 2009 11:46:58 -0500 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co Message-ID: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> I was looking for the Legendre points and weights for Gaussian quadrature. In the source of integrate.quadrature, I found the use of special.orthogonal.p_roots It is not in my (oldish) htmlhelp p_roots and co are not links in http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/ on the doc page for them, they show up as Note: This docstring is obsolete; the corresponding object is no longer present in SVN. http://docs.scipy.org/scipy/docs/scipy.special.orthogonal.p_roots/ in trunk they are still available: http://projects.scipy.org/scipy/browser/trunk/scipy/special/orthogonal.py#L602 Is this a documentation bug, or are there some changes that I didn't see? Josef From d.l.goldsmith at gmail.com Sun Dec 6 14:07:03 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 6 Dec 2009 11:07:03 -0800 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> Message-ID: <45d1ab480912061107s69ff7cf6pb3c9130e2ae3b60d@mail.gmail.com> On Sun, Dec 6, 2009 at 8:46 AM, wrote: > I was looking for the Legendre points and weights for Gaussian quadrature. > In the source of integrate.quadrature, I found the use of > special.orthogonal.p_roots > It is not in my (oldish) htmlhelp > > p_roots and co are not links in > > http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/ > > on the doc page for them, they show up as > Note: This docstring is obsolete; the corresponding object is no > longer present in SVN. > > http://docs.scipy.org/scipy/docs/scipy.special.orthogonal.p_roots/ > > in trunk they are still available: > > http://projects.scipy.org/scipy/browser/trunk/scipy/special/orthogonal.py#L602 > > Is this a documentation bug, or are there some changes that I didn't see? > > Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > Hi, Josef. (If no one answers more definitively before I can) I'll look into it. DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Dec 6 15:59:16 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Dec 2009 21:59:16 +0100 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> Message-ID: On Sun, Dec 6, 2009 at 5:46 PM, wrote: > I was looking for the Legendre points and weights for Gaussian quadrature. 
> In the source of integrate.quadrature, I found the use of > special.orthogonal.p_roots > It is not in my (oldish) htmlhelp > > p_roots and co are not links in > > http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/ > > on the doc page for them, they show up as > Note: This docstring is obsolete; the corresponding object is no > longer present in SVN. > > http://docs.scipy.org/scipy/docs/scipy.special.orthogonal.p_roots/ > > in trunk they are still available: > > http://projects.scipy.org/scipy/browser/trunk/scipy/special/orthogonal.py#L602 > > Is this a documentation bug, or are there some changes that I didn't see? > In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not include those functions. I think pydocweb only generates pages for objects in __all__ if that exists. So it looks like that is the reason. Should all the xx_roots funcs be in __all__ in your opinion? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Dec 6 16:43:03 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 6 Dec 2009 16:43:03 -0500 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> Message-ID: <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> On Sun, Dec 6, 2009 at 3:59 PM, Ralf Gommers wrote: > > > On Sun, Dec 6, 2009 at 5:46 PM, wrote: >> >> I was looking for the Legendre points and weights for Gaussian quadrature. >> In the source of integrate.quadrature, I found the use of >> special.orthogonal.p_roots >> It is not in my (oldish) htmlhelp >> >> p_roots and co are not links in >> >> http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/ >> >> on the doc page for them, they show up as >> Note: This docstring is obsolete; the corresponding object is no >> longer present in SVN. >> >> http://docs.scipy.org/scipy/docs/scipy.special.orthogonal.p_roots/ >> >> in trunk they are still available: >> >> http://projects.scipy.org/scipy/browser/trunk/scipy/special/orthogonal.py#L602 >> >> Is this a documentation bug, or are there some changes that I didn't see? > > In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not > include those functions. I think pydocweb only generates pages for objects > in __all__ if that exists. So it looks like that is the reason. > > Should all the xx_roots funcs be in __all__ in your opinion? I would think so, but I just saw them for the first time and found them only because I looked at the source of integrate.quadrature. Josef > > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From d.l.goldsmith at gmail.com Sun Dec 6 17:43:33 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 6 Dec 2009 14:43:33 -0800 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> Message-ID: <45d1ab480912061443n4746106bx42d2bbf9085f2524@mail.gmail.com> On Sun, Dec 6, 2009 at 1:43 PM, wrote: > On Sun, Dec 6, 2009 at 3:59 PM, Ralf Gommers > wrote: > > > > > > On Sun, Dec 6, 2009 at 5:46 PM, wrote: > >> > >> I was looking for the Legendre points and weights for Gaussian > quadrature. 
> >> In the source of integrate.quadrature, I found the use of > >> special.orthogonal.p_roots > >> It is not in my (oldish) htmlhelp > >> > >> p_roots and co are not links in > >> > >> http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/ > >> > >> on the doc page for them, they show up as > >> Note: This docstring is obsolete; the corresponding object is no > >> longer present in SVN. > >> > >> http://docs.scipy.org/scipy/docs/scipy.special.orthogonal.p_roots/ > >> > >> in trunk they are still available: > >> > >> > http://projects.scipy.org/scipy/browser/trunk/scipy/special/orthogonal.py#L602 > >> > >> Is this a documentation bug, or are there some changes that I didn't > see? > > > > In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not > > include those functions. I think pydocweb only generates pages for > objects > > in __all__ if that exists. So it looks like that is the reason. > > > > Should all the xx_roots funcs be in __all__ in your opinion? > > I would think so, but I just saw them for the first time and found them > only because I looked at the source of integrate.quadrature. > > Josef > > > > > Cheers, > > Ralf > > Perhaps Pauli can comment on why he left them out... DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sun Dec 6 21:13:59 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 6 Dec 2009 21:13:59 -0500 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu> Message-ID: On Wed, Dec 2, 2009 at 3:43 AM, Benny Malengier wrote: > > Interesting, this is exactly the function I needed for my problem, but > I was looking in scipy.sparse.linalg, so did not notice banded matrix > solver was present in scipy.linalg. > > In my logic, the "matrix diagonal orded form" of > http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded > would be a type of sparse matrix one can manipulate. This would allow > things like changing matrix diagonal orded form sparse matrix to a csr > matrix, adding some extra elements off the diagonals, and then calling > a more generic solver. > Can't you do that already with scipy.sparse.dia_matrix? If I'm not mistaken, dia_matrix is (slightly) more general than the banded format but similarly efficient. In an ideal world scipy.sparse.spsolve() would detect the case that A was a dia_matrix (with small bandwidth) and invoke the LAPACK method in scipy.linalg instead of using the general sparse LU solver. -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ From fwereade at googlemail.com Sun Dec 6 21:29:58 2009 From: fwereade at googlemail.com (William Reade) Date: Mon, 7 Dec 2009 02:29:58 +0000 Subject: [SciPy-dev] errors in linalg test_basic Message-ID: Hi I'm developing a package called Ironclad; its purpose is to get compiled CPython extensions to work transparently in IronPython. SciPy looks pretty healthy at the moment (if rather slow in places): I can run 2608 tests*, and I get 160 errors and 17 failures. Many of those problems are actually inconsequential, but a number of them are all too real; hence this post. The problem I'm currently investigating is in linalg.basic.lstsq(), at the call to the 'gelss' lapack function: when overwrite_a and overwrite_b are False, I get a segfault somewhere (it works fine when they're both True; haven't tried mixed). 
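For reference, the failing call is just the ordinary least-squares entry point; a minimal sketch of such a call (arbitrary data, and this path runs fine under CPython):

import numpy as np
from scipy.linalg import lstsq

a = np.random.rand(6, 4)
b = np.random.rand(6)
# overwrite_a/overwrite_b False is the default path through 'gelss'
x, residues, rank, s = lstsq(a, b, overwrite_a=False, overwrite_b=False)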
By way of investigation, I hunted through the source as far as generic_flapack.pyf; then I got scared and decided to ask for help. My thought process so far is as follows: Presumably, those arguments signify that the results should be written into a new block of memory somewhere; when we're overwriting pre-existing arrays, everything is fine, so I imagine there's a problem with the new result array(s?). However, I have no idea by what mechanism those arrays should be created, so I don't really know where to start looking. Can anyone suggest a good direction? Thanks in advance William From d.l.goldsmith at gmail.com Sun Dec 6 21:31:54 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 6 Dec 2009 18:31:54 -0800 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu> Message-ID: <45d1ab480912061831p63db4936x6a5762f4c069a857@mail.gmail.com> On Sun, Dec 6, 2009 at 6:13 PM, Nathan Bell wrote: > On Wed, Dec 2, 2009 at 3:43 AM, Benny Malengier > wrote: > > > > Interesting, this is exactly the function I needed for my problem, but > > I was looking in scipy.sparse.linalg, so did not notice banded matrix > > solver was present in scipy.linalg. > > > > In my logic, the "matrix diagonal orded form" of > > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded > > would be a type of sparse matrix one can manipulate. This would allow > > things like changing matrix diagonal orded form sparse matrix to a csr > > matrix, adding some extra elements off the diagonals, and then calling > > a more generic solver. > > > > Can't you do that already with scipy.sparse.dia_matrix? If I'm not > mistaken, dia_matrix is (slightly) more general than the banded format > but similarly efficient. > > In an ideal world scipy.sparse.spsolve() would detect the case that A > was a dia_matrix (with small bandwidth) and invoke the LAPACK method > in scipy.linalg instead of using the general sparse LU solver. > Perhaps you might file this as an "enhancement" "issue"? DG > > -- > Nathan Bell wnbell at gmail.com > http://www.wnbell.com/ > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsalvati at u.washington.edu Sun Dec 6 22:52:49 2009 From: jsalvati at u.washington.edu (John Salvatier) Date: Sun, 6 Dec 2009 19:52:49 -0800 Subject: [SciPy-dev] What instructions should I give scikits.bvp_solver users to get gfortran? Message-ID: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> Hello, I have recently mostly completed my boundary value problem solver scikit ( http://pypi.python.org/pypi/scikits.bvp_solver) and someone mentioned to me that it would be useful to provide instructions for getting gfortran in my documentation (http://packages.python.org/scikits.bvp_solver/). What instructions should I give my readers? For Mac users, I suspect the best instructions are to tell them to use the Scipy Superpack, but I don't know what I should tell Windows users. Best Regards, John Salvatier -------------- next part -------------- An HTML attachment was scrubbed... 
From robert.kern at gmail.com Sun Dec 6 22:55:18 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 6 Dec 2009 21:55:18 -0600 Subject: [SciPy-dev] What instructions should I give scikits.bvp_solver users to get gfortran? In-Reply-To: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> References: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> Message-ID: <3d375d730912061955t2b6eaf65v19f209f0eddbba3c@mail.gmail.com> On Sun, Dec 6, 2009 at 21:52, John Salvatier wrote: > Hello, > > I have recently mostly completed my boundary problem solver scikit > (http://pypi.python.org/pypi/scikits.bvp_solver) and someone mentioned to me > that it would be useful to provide instructions for getting gfortran in my > documentation (http://packages.python.org/scikits.bvp_solver/). What > instructions should I give my readers? For mac users, I suspect the best > instructions are to tell them to use the Scipy Superpack, but I don't know > what I should tell Windows users. Mac users should get their gfortran binaries from here: http://r.research.att.com/tools/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From david at ar.media.kyoto-u.ac.jp Mon Dec 7 01:48:32 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 07 Dec 2009 15:48:32 +0900 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <889df5f00912051004v2e9c96e3td8c88f10f32af353@mail.gmail.com> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> <4B189695.8080400@ar.media.kyoto-u.ac.jp> <889df5f00912051004v2e9c96e3td8c88f10f32af353@mail.gmail.com> Message-ID: <4B1CA540.1080708@ar.media.kyoto-u.ac.jp> Forrest Sheng Bao wrote: > Hi David, > > Do you think this could be a problem that I didn't compile python > using icc as well? No, I don't think so. You should not compile your own python unless you know what you're doing: by doing so, you most likely have to recompile *every* C extension. > The python comes as default of CentOS and the numpy is compiled in icc > by me. Could that be the reason of the error? It is of course caused by compiling with ICC. Compiling numpy with a non-default compiler on your platform is a tricky business most of the time; you should just use gcc on Linux if you don't want to waste time. It is possible to compile numpy with ICC, but you have to know what you are doing - if you are not familiar with compiling complex packages from sources, you should not do it. cheers, David
From nwagner at iam.uni-stuttgart.de Mon Dec 7 02:51:34 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Dec 2009 08:51:34 +0100 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: <45d1ab480912061831p63db4936x6a5762f4c069a857@mail.gmail.com> References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu> <45d1ab480912061831p63db4936x6a5762f4c069a857@mail.gmail.com> Message-ID: On Sun, 6 Dec 2009 18:31:54 -0800 David Goldsmith wrote: > On Sun, Dec 6, 2009 at 6:13 PM, Nathan Bell wrote: >> On Wed, Dec 2, 2009 at 3:43 AM, Benny Malengier wrote: >> > Interesting, this is exactly the function I needed for my problem, but >> > I was looking in scipy.sparse.linalg, so did not notice the banded matrix >> > solver was present in scipy.linalg. >> > >> > In my logic, the "matrix diagonal ordered form" of >> > http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded >> > would be a type of sparse matrix one can manipulate. This would allow >> > things like changing a matrix-diagonal-ordered-form sparse matrix to a csr >> > matrix, adding some extra elements off the diagonals, and then calling >> > a more generic solver. >> >> Can't you do that already with scipy.sparse.dia_matrix? If I'm not >> mistaken, dia_matrix is (slightly) more general than the banded format >> but similarly efficient. >> >> In an ideal world scipy.sparse.spsolve() would detect the case that A >> was a dia_matrix (with small bandwidth) and invoke the LAPACK method >> in scipy.linalg instead of using the general sparse LU solver. > > Perhaps you might file this as an "enhancement" "issue"? What do you make of it? http://projects.scipy.org/scipy/ticket/456 Nils
From d.l.goldsmith at gmail.com Mon Dec 7 03:01:37 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 7 Dec 2009 00:01:37 -0800 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu> <45d1ab480912061831p63db4936x6a5762f4c069a857@mail.gmail.com> Message-ID: <45d1ab480912070001h483bf7b6rd24c6ae17100672b@mail.gmail.com> On Sun, Dec 6, 2009 at 11:51 PM, Nils Wagner wrote: > > Perhaps you might file this as an "enhancement" "issue"? > > What do you make of it? > > http://projects.scipy.org/scipy/ticket/456 > > Nils A little open ended, but otherwise looks good to me (though I probably won't "own" it - whoever eventually does will decide whether they want more "itemization", or whether they'll just decide themselves). DG
From pav+sp at iki.fi Mon Dec 7 04:56:37 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Mon, 7 Dec 2009 09:56:37 +0000 (UTC) Subject: [SciPy-dev] What instructions should I give scikits.bvp_solver users to get gfortran? References: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> Message-ID: Sun, 06 Dec 2009 19:52:49 -0800, John Salvatier wrote: > Hello, > > I have recently mostly completed my boundary problem solver scikit ( > http://pypi.python.org/pypi/scikits.bvp_solver) and someone mentioned to > me that it would be useful to provide instructions for getting gfortran > in my documentation (http://packages.python.org/scikits.bvp_solver/). > What instructions should I give my readers? For mac users, I suspect the > best instructions are to tell them to use the Scipy Superpack, but I > don't know what I should tell Windows users. Windows users are well off with Python(x,y), it comes with GFortran. I believe the newer (newest?) MinGW's also come with GFortran, but I'm not sure right now. There are also some GFortran Windows binaries linked to from GCC's wiki. Linux users of course should use the GFortran shipped with their distribution. -- Pauli Virtanen
From forrest.bao at gmail.com Mon Dec 7 05:30:21 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Mon, 7 Dec 2009 04:30:21 -0600 Subject: [SciPy-dev] numpy 1.3.0 import error In-Reply-To: <4B1CA540.1080708@ar.media.kyoto-u.ac.jp> References: <889df5f00912032020g3bc3ab41r83daa1b357d1b979@mail.gmail.com> <4B189695.8080400@ar.media.kyoto-u.ac.jp> <889df5f00912051004v2e9c96e3td8c88f10f32af353@mail.gmail.com> <4B1CA540.1080708@ar.media.kyoto-u.ac.jp> Message-ID: <889df5f00912070230n4bd4282co8bf754e376eebc8a@mail.gmail.com> Hi David, Thanks for your reply. Please allow me to ask a dumb question. How does the setup.py script know which compiler to use? Through environment variables? Is there anywhere I can specify the compiler in setup.py or another configuration file before building? Cheers, Forrest On Mon, Dec 7, 2009 at 12:48 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > > The python comes as default of CentOS and the numpy is compiled in icc > > by me. Could that be the reason of the error? > > It is of course caused by compiling with ICC. Compiling numpy with a > non-default compiler on your platform is a tricky business most of the > time; you should just use gcc on Linux if you don't want to waste time. > It is possible to compile numpy with ICC, but you have to know > what you are doing - if you are not familiar with compiling complex > packages from sources, you should not do it. > > cheers, > > David
From josef.pktd at gmail.com Mon Dec 7 09:07:38 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 7 Dec 2009 09:07:38 -0500 Subject: [SciPy-dev] What instructions should I give scikits.bvp_solver users to get gfortran? In-Reply-To: References: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> Message-ID: <1cd32cbb0912070607x1061d1f4p94df251b5a0f1870@mail.gmail.com> On Mon, Dec 7, 2009 at 4:56 AM, Pauli Virtanen wrote: > Sun, 06 Dec 2009 19:52:49 -0800, John Salvatier wrote: > >> Hello, >> >> I have recently mostly completed my boundary problem solver scikit ( >> http://pypi.python.org/pypi/scikits.bvp_solver) and someone mentioned to >> me that it would be useful to provide instructions for getting gfortran >> in my documentation (http://packages.python.org/scikits.bvp_solver/). >> What instructions should I give my readers? For mac users, I suspect the >> best instructions are to tell them to use the Scipy Superpack, but I >> don't know what I should tell Windows users. > > Windows users are well off with Python(x,y), it comes with GFortran. > I believe the newer (newest?) MinGW's also come with GFortran, but I'm > not sure right now. The new release of MinGW, which came out last summer, comes with gfortran. Do you know what the compatibility is between g77 and gfortran? Can extensions for numpy be built with gfortran when numpy and lapack have been built with g77? Will we run into problems on Windows similar to the switch between g95 and gfortran on Linux? (I haven't upgraded yet to the new MinGW, because I didn't find any compatibility information on the internet and I didn't want to spend a lot of time finding out by trial and error.) Josef
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From cournape at gmail.com Mon Dec 7 09:20:29 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 7 Dec 2009 23:20:29 +0900 Subject: [SciPy-dev] What instructions should I give scikits.bvp_solver users to get gfortran? In-Reply-To: <1cd32cbb0912070607x1061d1f4p94df251b5a0f1870@mail.gmail.com> References: <113e17f20912061952j4e927d2cp9a4f1e4481d8c1a5@mail.gmail.com> <1cd32cbb0912070607x1061d1f4p94df251b5a0f1870@mail.gmail.com> Message-ID: <5b8d13220912070620g50856fbfib1206f73d525f3f7@mail.gmail.com> On Mon, Dec 7, 2009 at 11:07 PM, wrote: > On Mon, Dec 7, 2009 at 4:56 AM, Pauli Virtanen wrote: >> Sun, 06 Dec 2009 19:52:49 -0800, John Salvatier wrote: >> >>> Hello, >>> >>> I have recently mostly completed by boundary problem solver scikit ( >>> http://pypi.python.org/pypi/scikits.bvp_solver) and someone mentioned to >>> me that it would be useful to provide instructions for getting gfortran >>> in my documentation (http://packages.python.org/scikits.bvp_solver/). >>> What instructions should I give my readers? For mac users, I suspect the >>> best instructions are to tell them to use the Scipy Superpack, but I >>> don't know what I should tell Windows users. >> >> Windows users are well off with Python(x,y), it comes with GFortran. >> I believe also the newer (newest?) Mingw's come with GFortran, but I'm >> not sure right now. > > The new release of MingW, that came out last summer, comes with > gfortran. > > Do you know what the compatibility is between g77 and gfortran? > Can extension for numpy be build with gfortran when numpy and > lapack have been built with g77? Will we run into similar > problems on Windows as the switch between g95 and gfortran > on linux? Yes, it will most likely be even worse, given that our situation is already quite messy on windows with mingw / MSVS stuff, David From arkapravobhaumik at gmail.com Mon Dec 7 15:47:49 2009 From: arkapravobhaumik at gmail.com (Arkapravo Bhaumik) Date: Mon, 7 Dec 2009 20:47:49 +0000 Subject: [SciPy-dev] Any particular IDE ? Message-ID: Hey Guys Just wanted to ask you, for python specific do you all recommend any particular IDE ? I have been using Dr.Python (which is WOW ! ) , Scite and recently trying out Komodo IDE. I have seen there is a great drive to use eclipse , in the industry. What IDE you guys use ? Regards Arkapravo -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Mon Dec 7 17:04:32 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 7 Dec 2009 14:04:32 -0800 Subject: [SciPy-dev] Any particular IDE ? In-Reply-To: References: Message-ID: <45d1ab480912071404i2953ff65v9f4f9df9c48c38f9@mail.gmail.com> I happily use SPE (Stani's Python Editor), but I remember Chris Barker (post same question to numy-discussion, as I don't think he subscribes to this one) had me persuaded to try something else (which I never did; inertia is a very powerful thing) when we talked about this at SciPyCon09. DG On Mon, Dec 7, 2009 at 12:47 PM, Arkapravo Bhaumik < arkapravobhaumik at gmail.com> wrote: > Hey Guys > > Just wanted to ask you, for python specific do you all recommend any > particular IDE ? > > I have been using Dr.Python (which is WOW ! ) , Scite and recently trying > out Komodo IDE. I have seen there is a great drive to use eclipse , in the > industry. 
What IDE you guys use ? > > Regards > > Arkapravo > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssclift at gmail.com Mon Dec 7 19:50:28 2009 From: ssclift at gmail.com (Simon Clift) Date: Mon, 7 Dec 2009 19:50:28 -0500 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> <200912041952.37808.ssclift@gmail.com> Message-ID: <200912071950.28812.ssclift@gmail.com> On Saturday 05 December 2009 04:42:44 Benny Malengier wrote: > 2009/12/5 Simon Clift : > I do the same but with 3 components and a moving interface boundary. > So then it is banded + an entry in the last column... that is the background of my request for having banded/tridiag as > type of sparse matrices. Ah ha... sorry, I used to T.A. a 4th-year sparse linear algebra course on a regular basis, the pedagogical habit dies hard on this subject. :) > The banded matrix > implementations don't allow for this as they are a fixed number of > lower and upper diagonals, so you need to use csr sparse matrix also > here, no? Ah, yes, in that case I'd do my own "sparse bands" structure. My current problem is embarrassingly well conditioned, so as long as I can do ILU(0) and matrix-vector multiply the structure is adequate for Krylov sub-space solutions. I don't need to add fill from LU factorization. > I don't think a tridiag solver will be able to beat an adaptive time > stepping BDF scheme. Ah, I require positive coefficients. That requirement kind of snookers using anything higher order in time or space, as I have learned the hard way. -- 1129 Ibbetson Lane Mississauga, Ontario L5C 1K9 Canada From gokhansever at gmail.com Tue Dec 8 01:52:37 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Tue, 8 Dec 2009 00:52:37 -0600 Subject: [SciPy-dev] Any particular IDE ? In-Reply-To: References: Message-ID: <49d6b3500912072252h56cd0341lf9dc55911d119595@mail.gmail.com> On Mon, Dec 7, 2009 at 2:47 PM, Arkapravo Bhaumik < arkapravobhaumik at gmail.com> wrote: > Hey Guys > > Just wanted to ask you, for python specific do you all recommend any > particular IDE ? > > I have been using Dr.Python (which is WOW ! ) , Scite and recently trying > out Komodo IDE. I have seen there is a great drive to use eclipse , in the > industry. What IDE you guys use ? > > Regards > > Arkapravo > > Although not a complete IDE, I suggest trying VIM/GVIM. With some modifications it is possible to have IDE like features on it: Python and vim: Make your own IDE To me adding IPython on top of this, the fastest and most productive it gets for my coding needs. However, once in a while I load Eclipse+PyDev for its useful visual debugging features. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From schut at sarvision.nl Tue Dec 8 03:54:57 2009 From: schut at sarvision.nl (Vincent Schut) Date: Tue, 08 Dec 2009 09:54:57 +0100 Subject: [SciPy-dev] Any particular IDE ? In-Reply-To: References: Message-ID: Arkapravo Bhaumik wrote: > Hey Guys > > Just wanted to ask you, for python specific do you all recommend any > particular IDE ? > > I have been using Dr.Python (which is WOW ! 
) , Scite and recently > trying out Komodo IDE. I have seen there is a great drive to use eclipse > , in the industry. What IDE you guys use ? I'm regularly switching between eclipse+pydev (sweet but bloated & a little slow sometimes) and ulipad, which is a lot snappier (and, next to pydev, really one of the best python editors imho, lots and lots better than all these others that I also tried) but which lacks some of eclipse' visual appeal... Vincent. > > Regards > > Arkapravo > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From william.ratcliff at gmail.com Tue Dec 8 04:16:53 2009 From: william.ratcliff at gmail.com (william ratcliff) Date: Tue, 8 Dec 2009 04:16:53 -0500 Subject: [SciPy-dev] Any particular IDE ? In-Reply-To: References: Message-ID: <827183970912080116q32142d62pa1913ba3c947ae17@mail.gmail.com> I use wing which is rather nice... On Tue, Dec 8, 2009 at 3:54 AM, Vincent Schut wrote: > Arkapravo Bhaumik wrote: > > Hey Guys > > > > Just wanted to ask you, for python specific do you all recommend any > > particular IDE ? > > > > I have been using Dr.Python (which is WOW ! ) , Scite and recently > > trying out Komodo IDE. I have seen there is a great drive to use eclipse > > , in the industry. What IDE you guys use ? > > I'm regularly switching between eclipse+pydev (sweet but bloated & a > little slow sometimes) and ulipad, which is a lot snappier (and, next to > pydev, really one of the best python editors imho, lots and lots better > than all these others that I also tried) but which lacks some of > eclipse' visual appeal... > > Vincent. > > > > > Regards > > > > Arkapravo > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benny.malengier at gmail.com Tue Dec 8 08:55:25 2009 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 8 Dec 2009 14:55:25 +0100 Subject: [SciPy-dev] Tri-diagonal LAPACK Routines - Shall I interface them? In-Reply-To: References: <200912012034.34511.ssclift@gmail.com> <5B54FE1F-1E65-4157-9344-FC4086CBEDC4@cs.toronto.edu> Message-ID: 2009/12/7 Nathan Bell : > On Wed, Dec 2, 2009 at 3:43 AM, Benny Malengier > wrote: >> >> Interesting, this is exactly the function I needed for my problem, but >> I was looking in scipy.sparse.linalg, so did not notice banded matrix >> solver was present in scipy.linalg. >> >> In my logic, the "matrix diagonal orded form" of >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded >> would be a type of sparse matrix one can manipulate. This would allow >> things like changing matrix diagonal orded form sparse matrix to a csr >> matrix, adding some extra elements off the diagonals, and then calling >> a more generic solver. >> > > Can't you do that already with scipy.sparse.dia_matrix? ?If I'm not > mistaken, dia_matrix is (slightly) more general than the banded format > but similarly efficient. 
> > In an ideal world scipy.sparse.spsolve() would detect the case that A > was a dia_matrix (with small bandwidth) and invoke the LAPACK method > in scipy.linalg instead of using the general sparse LU solver. Yes, dia can do that, but LAPACK expects another dia format, so there would be some undesired overhead (see http://www.netlib.org/lapack/double/dgbsv.f or http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html#scipy.linalg.solve_banded). On the other hand, umfpack wants csr, so from that point of view, off-diagonal dia matrices as stored by dia would have to be converted to the csr format if one implemented a dia sparse matrix directly, as I suggested. The present dia implementation would do A.tocoo().tocsr() before a solution can be obtained from umfpack. Benny > > -- > Nathan Bell wnbell at gmail.com > http://www.wnbell.com/
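To make the banded-solver discussion concrete, here is a minimal sketch of calling scipy.linalg.solve_banded on a tridiagonal system; the matrix values are invented purely for illustration, and the ab layout follows the "matrix diagonal ordered form" described in the solve_banded docs:

    import numpy as np
    from scipy.linalg import solve_banded

    # Tridiagonal A in diagonal-ordered form: row 0 holds the superdiagonal,
    # row 1 the main diagonal, row 2 the subdiagonal.
    ab = np.array([[0., 1., 1., 1.],    # superdiagonal (first entry unused)
                   [4., 4., 4., 4.],    # main diagonal
                   [1., 1., 1., 0.]])   # subdiagonal (last entry unused)
    b = np.ones(4)

    x = solve_banded((1, 1), ab, b)     # (1, 1): one sub- and one superdiagonal

    # Check against the equivalent dense system.
    A = np.diag(4. * np.ones(4)) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
    print(np.allclose(np.dot(A, x), b))   # True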
From ralf.gommers at googlemail.com Tue Dec 8 09:11:04 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 8 Dec 2009 22:11:04 +0800 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> Message-ID: On Mon, Dec 7, 2009 at 5:43 AM, wrote: > On Sun, Dec 6, 2009 at 3:59 PM, Ralf Gommers wrote: >> In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not >> include those functions. I think pydocweb only generates pages for objects >> in __all__ if that exists. So it looks like that is the reason. >> >> Should all the xx_roots funcs be in __all__ in your opinion? > > I would think so, but I just saw them for the first time and found them > only because I looked at the source of integrate.quadrature. > Now that I looked a bit closer, I see that all the xx_roots functions have a corresponding new (and I suppose improved) function in orthogonal.py. For 'p_roots' this is 'legendre'; they seem to return the same thing. So just use the latter, I think. Leaving the xx_roots functions out of __all__ was done on purpose then, I guess, and those functions are still floating around only because of backwards compatibility reasons. The one thing that needs to be done is to update the module docstring to reflect that. Cheers, Ralf
From josef.pktd at gmail.com Tue Dec 8 09:47:16 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 8 Dec 2009 09:47:16 -0500 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> Message-ID: <1cd32cbb0912080647p6a0bf047p4a41c9b3e353d9ff@mail.gmail.com> On Tue, Dec 8, 2009 at 9:11 AM, Ralf Gommers wrote: > Now that I looked a bit closer, I see that all the xx_roots functions have a > corresponding new (and I suppose improved) function in orthogonal.py. For > 'p_roots' this is 'legendre'; they seem to return the same thing. So just > use the latter, I think. > > Leaving the xx_roots functions out of __all__ was done on purpose then, I > guess, and those functions are still floating around only because of > backwards compatibility reasons. The one thing that needs to be done is to > update the module docstring to reflect that. I had seen legendre before I went looking for p_roots. legendre calls p_roots and returns a polynomial, which I don't know how to work with. I was just looking for weights and points, and p_roots returns exactly what I saw in other references. Given that p_roots is used, I assume it is not deprecated; the fact that these functions are listed in the docs also indicates that they are not internal private functions. Josef > > Cheers, > Ralf
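For readers who, like Josef, just want the sample points and weights: a short sketch of using p_roots for Gauss-Legendre quadrature, assuming the 0.7/0.8-era behavior that p_roots(n) returns the nodes and weights:

    import numpy as np
    from scipy.special.orthogonal import p_roots

    # Nodes and weights for 5-point Gauss-Legendre quadrature on [-1, 1].
    x, w = p_roots(5)

    # Integrate cos over [-1, 1]; the exact value is 2*sin(1).
    approx = np.sum(w * np.cos(x))
    print(abs(approx - 2 * np.sin(1)))   # very small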
From oliphant at enthought.com Wed Dec 9 06:33:21 2009 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 9 Dec 2009 05:33:21 -0600 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <1cd32cbb0912080647p6a0bf047p4a41c9b3e353d9ff@mail.gmail.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> <1cd32cbb0912080647p6a0bf047p4a41c9b3e353d9ff@mail.gmail.com> Message-ID: <645BA5FF-E292-4405-80C7-FC6A2033B8C0@enthought.com> On Dec 8, 2009, at 8:47 AM, josef.pktd at gmail.com wrote: > On Tue, Dec 8, 2009 at 9:11 AM, Ralf Gommers wrote: >> In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not >> include those functions. I think pydocweb only generates pages for objects >> in __all__ if that exists. So it looks like that is the reason. >> >> Should all the xx_roots funcs be in __all__ in your opinion? I agree with Josef, and think they should be in the __all__. They are a simpler way to access just the roots and weights. They are documented themselves (which is usually an indicator that they are intended to be used outside the single file). I'll add them to the __all__ if there are no strong objections. -Travis -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com
From jakevdp at gmail.com Wed Dec 9 19:05:51 2009 From: jakevdp at gmail.com (Jake VanderPlas) Date: Wed, 9 Dec 2009 16:05:51 -0800 Subject: [SciPy-dev] scipy.sparse.linalg segfaults for complex data Message-ID: <58df6dc20912091605k6e60d7at19b833149a3d083e@mail.gmail.com> Hello, I just opened a ticket for this problem (#1067), but I'd like to get this out to the list. I've found that the iterative solvers in scipy.sparse.linalg produce Segmentation Faults for complex inputs. Below is a simple example (for scipy v0.7.1). Should these functions support complex data? If not, does anyone know a good way to solve general linear systems for complex input? I say general because I don't actually have a matrix per se, but a LinearOperator which implements its matvec method using scipy.fftpack. Thanks Jake

    import numpy as np
    from scipy.sparse.linalg import cg

    N = 2
    M = np.random.random((N,N)) + 1j*np.random.random((N,N))
    v = np.random.random(N)
    cg(M,v)  # segmentation fault is here

From josef.pktd at gmail.com Wed Dec 9 19:17:17 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 9 Dec 2009 19:17:17 -0500 Subject: [SciPy-dev] scipy.sparse.linalg segfaults for complex data In-Reply-To: <58df6dc20912091605k6e60d7at19b833149a3d083e@mail.gmail.com> References: <58df6dc20912091605k6e60d7at19b833149a3d083e@mail.gmail.com> Message-ID: <1cd32cbb0912091617s14209350w5442f115789a7cee@mail.gmail.com> On Wed, Dec 9, 2009 at 7:05 PM, Jake VanderPlas wrote: > Hello, > I just opened a ticket for this problem (#1067), but I'd like to get > this out to the list. I've found that the iterative solvers in > scipy.sparse.linalg produce Segmentation Faults for complex inputs. > Below is a simple example (for scipy v0.7.1). Should these functions > support complex data? If not, does anyone know a good way to solve > general linear systems for complex input?
I say general because I > don't actually have a matrix per se, but a LinearOperator which > implements its matvec method using scipy.fftpack. > Thanks > Jake > [...] no problems here (WindowsXP, no umfpack)

    >>> N = 2
    >>> M = np.random.random((N,N)) + 1j*np.random.random((N,N))
    >>> v = np.random.random(N)
    >>> import scipy.sparse.linalg as sla
    >>> sla.cg(M,v)
    (array([ 0.26265867+0.61820443j,  1.48541897-2.13027697j]), 20)
    >>> scipy.version.version
    '0.8.0.dev6118'
    >>> np.version.version
    '1.4.0rc1'

Josef
From jakevdp at gmail.com Thu Dec 10 14:07:03 2009 From: jakevdp at gmail.com (Jake VanderPlas) Date: Thu, 10 Dec 2009 11:07:03 -0800 Subject: [SciPy-dev] scipy.sparse.linalg segfaults for complex data Message-ID: <58df6dc20912101107v6eb40c80l62eba4f9d5864cae@mail.gmail.com> > no problems here (WindowsXP, no umfpack) > [...] >>>> scipy.version.version > '0.8.0.dev6118' >>>> np.version.version > '1.4.0rc1' > > Josef I'm on a 64-bit Linux machine, running redhat 5

    >>> scipy.__version__
    '0.7.1'
    >>> numpy.__version__
    '1.3.0'

Just as a check, I tested this on our old 32-bit installation, and it seems to work fine. Could it be a 64 vs 32 bit problem? -Jake
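Jake's real use case, a LinearOperator instead of an explicit matrix, exercises the same code path. Here is a minimal sketch against the 0.7-era scipy.sparse.linalg API; the diagonal operator is only a stand-in for his FFT-based matvec (cg still assumes a Hermitian positive definite operator), and on the affected 64-bit builds this is exactly the kind of call that segfaults:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    N = 8
    d = 2.0 + np.arange(N)   # real positive diagonal -> Hermitian positive definite

    def matvec(v):
        # Stand-in for an FFT-based operator: multiply by a fixed diagonal.
        return d * v

    A = LinearOperator((N, N), matvec=matvec, dtype=np.complex128)
    b = np.random.random(N) + 1j * np.random.random(N)

    x, info = cg(A, b)             # info == 0 means cg converged
    print(np.allclose(d * x, b))   # True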
From d.l.goldsmith at gmail.com Thu Dec 10 16:59:27 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 10 Dec 2009 13:59:27 -0800 Subject: [SciPy-dev] scipy.org down? Message-ID: <45d1ab480912101359u6ea3d463m6d9644f85d2d3d4e@mail.gmail.com> www.scipy.org gives me: "Server not found. Firefox can't find the server at www.scipy.org" etc., etc. DG
From robert.kern at gmail.com Thu Dec 10 17:11:03 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Dec 2009 16:11:03 -0600 Subject: [SciPy-dev] scipy.org down? In-Reply-To: <45d1ab480912101359u6ea3d463m6d9644f85d2d3d4e@mail.gmail.com> References: <45d1ab480912101359u6ea3d463m6d9644f85d2d3d4e@mail.gmail.com> Message-ID: <3d375d730912101411x1fe0a0cft210d15c4a86f6552@mail.gmail.com> On Thu, Dec 10, 2009 at 15:59, David Goldsmith wrote: > www.scipy.org gives me: > > "Server not found. Firefox can't find the server at www.scipy.org" It's up and accessible outside of Enthought's network. "Server not found" is usually a problem with your DNS, not the site itself. "Unable to connect" would be the message if your DNS properly resolved the address but the server was not up. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From pav+sp at iki.fi Fri Dec 11 04:03:43 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Fri, 11 Dec 2009 09:03:43 +0000 (UTC) Subject: [SciPy-dev] scipy.sparse.linalg segfaults for complex data References: <58df6dc20912101107v6eb40c80l62eba4f9d5864cae@mail.gmail.com> Message-ID: Thu, 10 Dec 2009 11:07:03 -0800, Jake VanderPlas wrote: [clip] > I'm on a 64-bit Linux machine, running redhat 5 >>>> scipy.__version__ > '0.7.1' >>>> numpy.__version__ > '1.3.0' > Just as a check, I tested this on our old 32-bit installation, and it > seems to work fine. Could it be a 64 vs 32 bit problem? Works for me on 64-bit, Numpy 1.2.1 and Scipy 0.7.0. -- Pauli Virtanen
From erik.tollerud at gmail.com Fri Dec 11 21:38:18 2009 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 11 Dec 2009 18:38:18 -0800 Subject: [SciPy-dev] Bugfix patch for scipy.special.hermitenorm Message-ID: I uncovered a bug in the normalized hermite polynomial function in scipy.special - whenever it was called with a numpy array, it kept giving an error about having the wrong number of inputs. I traced this down to what I'm guessing came from someone changing something in the hermite function and not realizing that hermitenorm was separate. Anyway, the diff against the current svn is attached. -------------- next part -------------- A non-text attachment was scrubbed... Name: hermitenormfix.diff Type: text/x-patch Size: 539 bytes Desc: not available
From nwagner at iam.uni-stuttgart.de Sat Dec 12 09:48:41 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 12 Dec 2009 15:48:41 +0100 Subject: [SciPy-dev] test_lambertw.test_values ... segfault Message-ID:

    test_data.test_boost(,) ... ok
    test_lambertw.test_values ... Program received signal SIGSEGV, Segmentation fault.
    0x00007ffff7b27c7a in PyErr_Restore () from /usr/lib64/libpython2.6.so.1.0
    (gdb) bt
    #0  0x00007ffff7b27c7a in PyErr_Restore () from /usr/lib64/libpython2.6.so.1.0
    #1  0x00007ffff7b2772f in PyErr_SetString () from /usr/lib64/libpython2.6.so.1.0
    #2  0x00007ffff7aacc24 in ?? () from /usr/lib64/libpython2.6.so.1.0
    #3  0x00007ffff7ab7ad3 in PyFloat_AsDouble () from /usr/lib64/libpython2.6.so.1.0
    #4  0x00007ffff7ad97b2 in PyString_Format () from /usr/lib64/libpython2.6.so.1.0
    #5  0x00007ffff7a9e286 in ??
    () from /usr/lib64/libpython2.6.so.1.0
    #6  0x00007ffff7a9eb8e in PyNumber_Remainder () from /usr/lib64/libpython2.6.so.1.0
    #7  0x00007fffe5233bd2 in __pyx_f_5scipy_7special_8lambertw_lambertw_scalar (__pyx_v_z=..., __pyx_v_k=, __pyx_v_tol=) at scipy/special/lambertw.c:1078
    #8  0x00007fffe523244c in __pyx_f_5scipy_7special_8lambertw__apply_func_to_1d_vec (__pyx_v_args=, __pyx_v_dimensions=0x3ba0ad8, __pyx_v_steps=0x3fd6d90, __pyx_v_func=0x297a2873626120) at scipy/special/lambertw.c:1160
    #9  0x00007ffff60d16d9 in PyUFunc_GenericFunction (self=, args=, kwds=0x0, mps=) at numpy/core/src/umath/ufunc_object.c:2056
    #10 0x00007ffff60d2283 in ufunc_generic_call (self=0x20ffa90, args=, kwds=0x0) at numpy/core/src/umath/ufunc_object.c:3518
    #11 0x00007ffff7a9de32 in PyObject_Call () from /usr/lib64/libpython2.6.so.1.0
    #12 0x00007fffe5232870 in __pyx_pf_5scipy_7special_8lambertw_lambertw (__pyx_self=, __pyx_args=, __pyx_kwds=) at scipy/special/lambertw.c:1272
    #13 0x00007ffff7b1abf9 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
    #14 0x00007ffff7b1f54e in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0
    #15 0x00007ffff7abd7f2 in ?? () from /usr/lib64/libpython2.6.so.1.0

From tmp50 at ukr.net Sat Dec 12 10:09:19 2009 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 12 Dec 2009 17:09:19 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc Message-ID: hello all, I have a class oofun (in a package FuncDesigner, that is already used by lots of people) where some operations on numpy arrays (along with Python lists and numbers) are overloaded, such as __mul__, __add__, __pow__, __div__ etc. There is a problem with numpy arrays: if I use a*f, where a is an array and f is an oofun, it returns an array with elements a[i]*f. Would it omit calling array.__mul__ and call only oofun.__rmul__, as is done by numpy.matrix and Python lists/numbers, all would work as expected; but now it makes FuncDesigner code work much more slowly, or unexpectedly, or not at all. So, does anyone mind if I commit some changes to numpy __mul__, __div__ etc? I intend to implement the following workaround. Now the code looks like this for array:

    def __mul__(self, i):
        return asarray(multiply(self, i))

and like this for numpy/matrixlib/defmatrix.py:

    def __mul__(self, other):
        if isinstance(other, (N.ndarray, list, tuple)):
            return N.dot(self, asmatrix(other))
        if isscalar(other) or not hasattr(other, '__rmul__'):
            return N.dot(self, other)
        return NotImplemented

I want to add an empty class named "CNumpyLeftOperatorOverloaded" to numpy; if someone defines his class as a child of that one, __mul__, __div__ and the others will not consume the object, calling otherClass.__rmul__, otherClass.__rdiv__ etc. instead:

    def __mul__(self, i):
        return asarray(multiply(self, i)) if not isinstance(i, CNumpyLeftOperatorOverloaded) else i.__rmul__(self)

and I would declare my class as a child of that one. As far as I understood, the changes should be added to numpy/core/defchararray.py. So, does anyone mind if I implement it? D.
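The behavior Dmitrey describes is easy to reproduce with a toy stand-in for oofun (a sketch only, not FuncDesigner's actual class):

    import numpy as np

    class Oofun(object):
        # Toy stand-in: an object that overloads arithmetic.
        def __mul__(self, other):
            return 'oofun.__mul__'
        __rmul__ = __mul__

    f = Oofun()
    a = np.ones(3)

    print(f * a)   # 'oofun.__mul__' -- Python tries Oofun.__mul__ first
    print(a * f)   # object array of three strings: ndarray.__mul__ treats f
                   # as a scalar, broadcasts it, and calls f.__rmul__ per element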
From charlesr.harris at gmail.com Sat Dec 12 15:56:21 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 12 Dec 2009 13:56:21 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: 2009/12/12 Dmitrey > hello all, > I have a class oofun (in a package FuncDesigner, that is already used by lots > of people) where some operations on numpy arrays (along with Python lists > and numbers) are overloaded, such as __mul__, __add__, __pow__, __div__ etc. > > There is a problem with numpy arrays: if I use a*f, where a is an array and f > is an oofun, it returns an array with elements a[i]*f. [...] > So, does anyone mind if I commit some changes to numpy __mul__, __div__ etc? Sounds like you are exporting an array interface or subclassing ndarray. If the latter, you might be able to manipulate the value of __array_priority__. I haven't experimented with these things myself. As to the proposed solutions, they are the start of a slippery slope of trying to identify all objects to which they should apply. I don't think we want to go there. Chuck
From robert.kern at gmail.com Sat Dec 12 16:16:58 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 12 Dec 2009 15:16:58 -0600 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> On Sat, Dec 12, 2009 at 14:56, Charles R Harris wrote: > 2009/12/12 Dmitrey >> hello all, >> I have a class oofun (in a package FuncDesigner, that is already used by >> lots of people) where some operations on numpy arrays (along with Python >> lists and numbers) are overloaded, such as __mul__, __add__, __pow__, >> __div__ etc. >> So, does anyone mind if I commit some changes to numpy __mul__, __div__ etc?
>> [...] > Sounds like you are exporting an array interface or subclassing ndarray. If > the latter, you might be able to manipulate the value of __array_priority__. > I haven't experimented with these things myself. I don't think he's subclassing ndarray, but he does have a class that shouldn't be interpreted by ndarray as a scalar. In any case, __array_priority__ doesn't matter; it just controls the type of the output of a multi-input ufunc. > As to the proposed solutions, they are the start of a slippery slope of > trying to identify all objects to which they should apply. I don't think we > want to go there. I think what he is asking for is an empty mixin class which other folks could subclass to mark their classes. It would say "Hey, ndarray! Let my __mul__, __rmul__, etc., take priority over yours, regardless of which of us comes first in the expression." Otherwise, ndarray will gladly consume pretty much any object on the other side of the operator because it will treat it as an object scalar. We could also define a standard attribute that could mark such classes instead of requiring a mixin subclass. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From charlesr.harris at gmail.com Sat Dec 12 16:52:12 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 12 Dec 2009 14:52:12 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> References: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> Message-ID: On Sat, Dec 12, 2009 at 2:16 PM, Robert Kern wrote: > I think what he is asking for is an empty mixin class which other > folks could subclass to mark their classes. It would say "Hey, > ndarray! Let my __mul__, __rmul__, etc., take priority over yours, > regardless of which of us comes first in the expression." [...] > We could also define a standard attribute that could mark such classes > instead of requiring a mixin subclass. Ah, I completely misunderstood. Nevermind... I think such a mixin base class would be useful. Chuck
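What the proposed mixin (or marker attribute) would amount to inside numpy can be sketched in a few lines; the names below are placeholders for the discussion, not an actual numpy API:

    import numpy as np

    class DeferToRight(object):
        # Hypothetical marker mixin: "my operator methods win over ndarray's".
        __defers_ndarray_ops__ = True   # placeholder attribute name

    # Sketch of the check ndarray.__mul__ would perform:
    def patched_mul(self, other):
        if getattr(other, '__defers_ndarray_ops__', False):
            # Returning NotImplemented makes Python fall back to other.__rmul__.
            return NotImplemented
        return np.multiply(self, other)

Returning NotImplemented, rather than calling other.__rmul__ directly, lets Python's normal binary-operator protocol handle the dispatch.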
From josef.pktd at gmail.com Sat Dec 12 17:06:15 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 12 Dec 2009 17:06:15 -0500 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> Message-ID: <1cd32cbb0912121406l1cdda01et6d1d19d89d6e648b@mail.gmail.com> On Sat, Dec 12, 2009 at 4:52 PM, Charles R Harris wrote: > On Sat, Dec 12, 2009 at 2:16 PM, Robert Kern wrote: >> I think what he is asking for is an empty mixin class which other >> folks could subclass to mark their classes. [...] >> We could also define a standard attribute that could mark such classes >> instead of requiring a mixin subclass. > > Ah, I completely misunderstood. Nevermind... > > I think such a mixin base class would be useful. In a similar direction, I was recently wondering whether it would be possible to get an attribute or some other indication for classes that implement an interface similar to numpy arrays, i.e. the same list of methods. This would make it easier to write functions for any kind of ducks. Instead of converting with np.array or np.asarray, we could check whether the instances in the function arguments implement the required interface, and if yes, just use the methods without converting to arrays. I would rule out matrices in this, because it changes the meaning of the multiplication completely. I don't know whether this could be handled by a mixin class. Josef > Chuck
From forrest.bao at gmail.com Sat Dec 12 21:29:40 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Sat, 12 Dec 2009 20:29:40 -0600 Subject: [SciPy-dev] numpy.linalg = scipy.linalg ? Message-ID: <889df5f00912121829q3e652729gda15b360dae0731e@mail.gmail.com> Hi there, It seems that both numpy and scipy have linalg. Are numpy.linalg and scipy.linalg the same? Cheers, Forrest
From josef.pktd at gmail.com Sat Dec 12 22:20:43 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 12 Dec 2009 22:20:43 -0500 Subject: [SciPy-dev] numpy.linalg = scipy.linalg ? In-Reply-To: <889df5f00912121829q3e652729gda15b360dae0731e@mail.gmail.com> References: <889df5f00912121829q3e652729gda15b360dae0731e@mail.gmail.com> Message-ID: <1cd32cbb0912121920m5c8aaaf5gb881a41c4b31e3bb@mail.gmail.com> On Sat, Dec 12, 2009 at 9:29 PM, Forrest Sheng Bao wrote: > Hi there, > It seems that both numpy and scipy have linalg. Are numpy.linalg and > scipy.linalg the same? See http://advice.mechanicalkern.com/question/10/whats-the-difference-between-numpylinalg-and-scipylinalg The only way to find out which lapack/blas routines are used is to go through the functions (docs or source), at least from what I have seen. (E.g. linalg.lstsq calls different lapack functions, but even reading the help for the lapack functions, it didn't become clear to me what the real difference in the calculation method is.)
Cheers, Josef > Cheers, > Forrest
From tmp50 at ukr.net Sun Dec 13 03:38:42 2009 From: tmp50 at ukr.net (Dmitrey) Date: Sun, 13 Dec 2009 10:38:42 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <1cd32cbb0912121406l1cdda01et6d1d19d89d6e648b@mail.gmail.com> Message-ID: On Sat, Dec 12, 2009 at 4:52 PM, Robert Kern wrote: > We could also define a standard attribute that could mark such classes instead of requiring a mixin subclass. I had thought about the idea, but I suspect it will work more slowly:

    def __mul__(self, i):
        return asarray(multiply(self, i)) if not getattr(i, '_numpyLeftOperatorsOverloaded', False) else i.__rmul__(self)

vs

    def __mul__(self, i):
        return asarray(multiply(self, i)) if not isinstance(i, CNumpyLeftOperatorOverloaded) else i.__rmul__(self)

But it doesn't matter much to me which way is chosen; which one do you agree should be implemented? On Sat, Dec 12, 2009 at 4:52 PM, Charles R Harris wrote: > Instead of converting with np.array or np.asarray, we could check whether the instances in the function arguments implement the required interface, and if yes, just use the methods without converting to arrays. Do you mean something like

    def __mul__(self, i):
        return asarray(multiply(self, i)) if not hasattr(i, '__rmul__') else i.__rmul__(self)

? I guess this is the wrong idea, because sometimes the user will want to get a result that is an array of i instances. BTW, lots of data types have their own __rmul__ (Python lists, numbers, even numpy arrays and matrices), so I guess it will not work as expected (as it is done now) for them.
From aisaac at american.edu Sun Dec 13 09:07:58 2009 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 13 Dec 2009 09:07:58 -0500 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <4B24F53E.1060009@american.edu> On 12/13/2009 3:38 AM, Dmitrey wrote:
>     def __mul__(self, i):
>         return asarray(multiply(self, i)) if not getattr(i, '_numpyLeftOperatorsOverloaded', False) else i.__rmul__(self)
> vs
>     def __mul__(self, i):
>         return asarray(multiply(self, i)) if not isinstance(i, CNumpyLeftOperatorOverloaded) else i.__rmul__(self)
I am not at all speaking against this, but I am wondering what exactly your object is getting out of overriding multiplication. Also, for clarification, the proposal is not just to override multiplication, but *all* arithmetic operations. Right? (The examples have all been multiplication.)
Alan Isaac
From charlesr.harris at gmail.com Sun Dec 13 11:11:47 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 13 Dec 2009 09:11:47 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <4B24F53E.1060009@american.edu> References: <4B24F53E.1060009@american.edu> Message-ID: On Sun, Dec 13, 2009 at 7:07 AM, Alan G Isaac wrote: > I am not at all speaking against this, but I am wondering > what exactly your object is getting out of overriding multiplication. > > Also, for clarification, the proposal is not just to override > multiplication, but *all* arithmetic operations. Right? > (The examples have all been multiplication.) Yes, I think we would want to override all of them, but in ndarray itself. For instance, what happens now is:

    In [1]: from numpy.polynomial import Polynomial as poly
    In [2]: p = poly([0,1])
    In [3]: ones(2) * p
    Out[3]: array([poly([ 0.  1.], [-1.  1.]), poly([ 0.  1.], [-1.  1.])], dtype=object)
    In [4]: p * ones(2)
    Out[4]: Polynomial([ 0.,  1.,  1.], [-1.,  1.])

This is because the * ufunc treats the Polynomial class as a scalar, promotes the ones(2) to an object array, and does the multiplication. What a mixin class would do is exempt the polynomial from being treated as a scalar and force Python to call its __rmul__ method instead. Chuck
From dagss at student.matnat.uio.no Sun Dec 13 17:05:21 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Sun, 13 Dec 2009 23:05:21 +0100 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> References: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> Message-ID: <341c737efbf7e5d5afec2000725a76c5.squirrel@webmail.uio.no> Robert Kern wrote: > On Sat, Dec 12, 2009 at 14:56, Charles R Harris wrote: >> 2009/12/12 Dmitrey >>> hello all, >>> I have a class oofun (in a package FuncDesigner, that is already used by >>> lots of people) where some operations on numpy arrays (along with Python >>> lists and numbers) are overloaded, such as __mul__, __add__, __pow__, >>> __div__ etc. [...] > I think what he is asking for is an empty mixin class which other > folks could subclass to mark their classes. It would say "Hey, > ndarray! Let my __mul__, __rmul__, etc., take priority over yours, > regardless of which of us comes first in the expression." Otherwise, > ndarray will gladly consume pretty much any object on the other side > of the operator because it will treat it as an object scalar. > > We could also define a standard attribute that could mark such classes > instead of requiring a mixin subclass. FWIW, I'd like to submit a patch to Sage for making NumPy arrays and Sage matrices work better together, which would require this functionality. +1 for an attribute and not a mixin, as that would allow Cython classes as well. Something like "__do_not_treat_as_scalar__ = True". The idea is simply to stop ndarray from handling the operation, and let other classes implement specific support for arithmetic with NumPy arrays. Which is not a slippery slope at all. Dag Sverre
From wnbell at gmail.com Sun Dec 13 17:19:55 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 13 Dec 2009 17:19:55 -0500 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> References: <3d375d730912121316x471a0b2due5ae2c2ec349c3b1@mail.gmail.com> Message-ID: On Sat, Dec 12, 2009 at 4:16 PM, Robert Kern wrote: > > I think what he is asking for is an empty mixin class which other > folks could subclass to mark their classes. [...] > We could also define a standard attribute that could mark such classes > instead of requiring a mixin subclass. We could use this functionality in scipy.sparse too. In particular, it would be nice if asarray(some_sparse_matrix) just worked, so we could toss (presumably small) sparse matrices into functions expecting ndarrays. Like Dmitrey, we need to invoke sparse.__rmul__(dense) when encountering dense * sparse.
From gael.varoquaux at normalesup.org Sun Dec 13 18:55:50 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 Dec 2009 00:55:50 +0100 Subject: [SciPy-dev] [Ann] EuroScipy 2010 Message-ID: <20091213235550.GB27356@phare.normalesup.org>

==========================
Announcing EuroScipy 2010
==========================

---------------------------------------------------
The 3rd European meeting on Python in Science
---------------------------------------------------

**Paris, Ecole Normale Supérieure, July 8-11 2010**

We are happy to announce the 3rd EuroScipy meeting, in Paris, July 2010. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state-of-the-art industry.

Important dates
==================

====================================== ===================================
**Registration opens**                 Sunday March 29
**Paper submission deadline**          Sunday May 9
**Program announced**                  Sunday May 22
**Tutorials tracks**                   Thursday July 8 - Friday July 9
**Conference track**                   Saturday July 10 - Sunday July 11
====================================== ===================================

Tutorial
=========

There will be two tutorial tracks at the conference: an introductory one, to bring attendees up to speed with the Python language as a scientific tool, and an advanced track, during which experts in the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering...

Main conference topics
========================

We will be soliciting talks on the following topics:

- Presentations of scientific tools and libraries using the Python language, including but not limited to:
  - Vector and array manipulation
  - Parallel computing
  - Scientific visualization
  - Scientific data flow and persistence
  - Algorithms implemented or exposed in Python
  - Web applications and portals for science and engineering
- Reports on the use of Python in scientific achievements or ongoing projects.
- General-purpose Python tools that can be of special interest to the scientific community.

Keynote Speaker: Hans Petter Langtangen
==========================================

We are excited to welcome Hans Petter Langtangen as our keynote speaker.

- Director of scientific computing and bio-medical research at Simula labs, Oslo
- Author of the famous book Python Scripting for Computational Science http://www.springer.com/math/cse/book/978-3-540-73915-9

--
Gaël Varoquaux, conference co-chair
Nicolas Chauvat, conference co-chair

Program committee
.................
Romain Brette (ENS Paris, DEC)
Mike Müller (Python Academy)
Christophe Pradal (CIRAD/INRIA, DigiPlantes team)
Pierre Raybault (CEA, DAM)
Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute)

From erik.tollerud at gmail.com Sun Dec 13 20:56:09 2009
From: erik.tollerud at gmail.com (Erik Tollerud) Date: Sun, 13 Dec 2009 17:56:09 -0800 Subject: [SciPy-dev] Bugfix patch for scipy.special.hermitenorm In-Reply-To: References: Message-ID:

I just realized I probably should have submitted this to the trac bug database...
so that has been done as ticket #1068

On Fri, Dec 11, 2009 at 6:38 PM, Erik Tollerud wrote:
> I uncovered a bug in the normalized hermite polynomial function in
> scipy.special - whenever it was called with a numpy array, it kept
> giving an error about having the wrong number of inputs. I traced
> this down to what I'm guessing came from someone changing something in
> the hermite function and not realizing that hermitenorm was separate.
> Anyway, the diff against the current svn is attached.

From d.l.goldsmith at gmail.com Sun Dec 13 22:30:16 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 13 Dec 2009 19:30:16 -0800 Subject: [SciPy-dev] Possible inconsistency in arrays.indexing.rst Message-ID: <45d1ab480912131930i6b31d91ew368b0f7a5f2a8462@mail.gmail.com>

Hi, folks. arrays.indexing.rst has the following two passages, along with a Note asserting that they are inconsistent:

Passage 1) in the section titled "Basic Slicing": "Basic slicing occurs when [the selection] obj is a slice object (constructed by start:stop:step notation inside of brackets), an integer, or a tuple of slice objects and integers ... In order to remain backward compatible with a common usage in Numeric, basic slicing is also initiated if the selection object is any sequence (such as a list) containing slice objects..."

Passage 2) in the "Warning" just preceding the section titled "Record Access": "... x[[1,2,3]] will trigger advanced indexing, whereas x[[1,2,slice(None)]] will trigger basic slicing."

Note (originally following the "Warning," but now moved to the Discussion): "the above warning needs explanation as the last part is at odds with the definition of basic indexing."

But is it? The "definition" of "basic slicing" (I find no "definition," as such, of "basic indexing") includes the statement that "basic slicing is initiated if the selection object is any sequence ... containing slice objects," but the way the prior sentence (the main part of the "definition") is written, it is not crystal clear whether _any_ sequence of individual integers constitutes a "slice object" or not - I can see it being interpreted either way. And either way: if it is an inconsistency, it needs to be corrected, and if it isn't, the fact that someone thought it was clearly points to the issue needing clarification - some reworking is necessary. So, I put it to you, the experts: is this an inconsistency or not?

Also w.r.t. the Note: it states "this section may need some tuning". I assume that if "this" were meant to refer to the (immediately following) section, the writer (who is not readily identified, as the author in the log is "Source") would have said "The following section," and I thus conclude that "this" refers to the section the Note ends. I would appreciate confirmation from the author (I hope s/he reads this), along with, more importantly, clarification of precisely what it is about the section that the Note writer feels needs "tuning." Thanks!

DG

From tmp50 at ukr.net Mon Dec 14 06:25:07 2009
From: tmp50 at ukr.net (Dmitrey) Date: Mon, 14 Dec 2009 13:25:07 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <341c737efbf7e5d5afec2000725a76c5.squirrel@webmail.uio.no> Message-ID:

I don't think the field name '__do_not_treat_as_scalar__' is a good choice. First, it will be unclear what it is used for; second, some user classes that are certainly not scalars, with overloaded __rdiv__ etc., will sometimes prefer that the numpy __div__ method be used.
As for me, I would prefer something like '__use_self_operators = True' or '__use_numpy_operations = False' or '__involve_self_overloaded_methods__ = True'. I have to release FuncDesigner tomorrow (to keep my quarterly schedule), so please inform me ASAP which name you finally choose. Thank you in advance, D.

Dag Sverre wrote: FWIW, I'd like to submit a patch to Sage for making NumPy arrays and Sage matrices work better together which would require this functionality. +1 for an attribute and not a mixin, as that would allow Cython classes as well. Something like "__do_not_treat_as_scalar__ = True". The idea is simply to stop ndarray from handling the operation, and let other classes implement specific support for arithmetic with NumPy arrays. Which is not a slippery slope at all. Dag Sverre

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From njs at pobox.com Mon Dec 14 06:18:38 2009
From: njs at pobox.com (Nathaniel Smith) Date: Mon, 14 Dec 2009 03:18:38 -0800 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) Message-ID: <961fa2b40912140318k6a0e2c02na7fe6a7612e716cb@mail.gmail.com>

As mentioned previously[0], I've written a scipy.sparse-compatible wrapper for the CHOLMOD sparse Cholesky routines. I considered making it 'scikits.cholmod' (cf. scikits.umfpack), but creating a new scikit every time someone needs a sparse linear algebra routine seems like it will become very silly very quickly, so instead I hereby declare the existence of 'scikits.sparse' as a home for all such routines. (Of course, it currently only contains scikits.sparse.cholmod.)

Manual: http://packages.python.org/scikits.sparse/
Source: hg clone https://scikits-sparse.googlecode.com/hg/ scikits.sparse
Homepage: http://code.google.com/p/scikits-sparse
Bug tracker: http://code.google.com/p/scikits-sparse/issues/list
Mailing list: scikits-sparse-discuss at lists.vorpus.org http://lists.vorpus.org/cgi-bin/mailman/listinfo/scikits-sparse-discuss

I would have sucked scikits.umfpack in, except that it uses SWIG, which I don't understand and am not really inspired to learn, at least for a v0.1 release. Also, there appear to still be some sort of complicated entanglements with scipy.sparse (e.g. in at least part of the test suite). Anyone feeling inspired? It's not a very complicated interface; just rewrapping it might be as easy as anything else.

SuiteSparseQR would also be a natural fit, since it uses the (already wrapped) CHOLMOD matrix interfaces.

[0] http://mail.scipy.org/pipermail/scipy-dev/2009-November/013244.html

Share and enjoy,
-- Nathaniel
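For readers who want a feel for the new scikit, basic use looks roughly like the sketch below. This is pieced together from the manual linked above; treat the exact names (cholesky, and the factor object being callable as a solver) as a best-effort reading of the v0.1 API rather than a guarantee.

import numpy as np
from scipy.sparse import csc_matrix
from scikits.sparse.cholmod import cholesky  # name per the linked manual

# A small symmetric positive definite sparse system.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 0.0],
                         [0.0, 0.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

factor = cholesky(A)  # sparse Cholesky factorization via CHOLMOD
x = factor(b)         # the returned factor solves A x = b when called
print(np.allclose(A * x, b))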
From gael.varoquaux at normalesup.org Mon Dec 14 07:47:04 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 Dec 2009 13:47:04 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <961fa2b40912140318k6a0e2c02na7fe6a7612e716cb@mail.gmail.com> References: <961fa2b40912140318k6a0e2c02na7fe6a7612e716cb@mail.gmail.com> Message-ID: <20091214124704.GC12484@phare.normalesup.org>

On Mon, Dec 14, 2009 at 03:18:38AM -0800, Nathaniel Smith wrote:
> As mentioned previously[0], I've written a scipy.sparse-compatible
> wrapper for the CHOLMOD sparse Cholesky routines. I considered making
> it 'scikits.cholmod' (cf. scikits.umfpack), but creating a new scikit
> every time someone needs a sparse linear algebra routine seems like it
> will become very silly very quickly, so instead I hereby declare the
> existence of 'scikits.sparse' as a home for all such routines. (Of
> course, it currently only contains scikits.sparse.cholmod).

Cool. I hope it picks up momentum.

Gaël

From charlesr.harris at gmail.com Mon Dec 14 13:06:46 2009
From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 14 Dec 2009 11:06:46 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: <341c737efbf7e5d5afec2000725a76c5.squirrel@webmail.uio.no> Message-ID:

2009/12/14 Dmitrey
> I don't think the field name '__do_not_treat_as_scalar__' is a good
> choice. First, it will be unclear what it is used for; second, some user
> classes that are certainly not scalars, with overloaded __rdiv__ etc.,
> will sometimes prefer that the numpy __div__ method be used.
> As for me, I would prefer something like '__use_self_operators = True' or
> '__use_numpy_operations = False' or
> '__involve_self_overloaded_methods__ = True'.
> I have to release FuncDesigner tomorrow (to keep my quarterly schedule),
> so please inform me ASAP which name you finally choose.
> Thank you in advance, D.

I think it should have numpy or ndarray in the name somewhere to indicate that it is numpy specific. Hmm, maybe,

__supercede_ndarray__,
__disallow_ndarray__,
__deny_ndarray__,
__reject_ndarray__,
__refuse_ndarray__,
__exclude_ndarray__, ...

My preference among those would be __deny_ndarray__.

Chuck

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tmp50 at ukr.net Mon Dec 14 13:16:13 2009
From: tmp50 at ukr.net (Dmitrey) Date: Mon, 14 Dec 2009 20:16:13 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: Message-ID:
--- Original message ---
> From: Charles R Harris
> To: SciPy Developers List
> Date: 14 December, 20:06:46
> Subject: Re: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
>
> 2009/12/14 Dmitrey
> > I don't think the field name '__do_not_treat_as_scalar__' is a good
> > choice. [...]
>
> I think it should have numpy or ndarray in the name somewhere to indicate
> that it is numpy specific. Hmm, maybe,
>
> __supercede_ndarray__, __disallow_ndarray__, __deny_ndarray__,
> __reject_ndarray__, __refuse_ndarray__, __exclude_ndarray__, ...
>
> My preference among those would be __deny_ndarray__.

But isn't the issue present with numpy matrices or scipy.sparse matrices as well? So I guess instead of ndarray another word should be used.

Charles R Harris replied:

__has_precedence__
__is_prior__

Chuck

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tmp50 at ukr.net Mon Dec 14 13:40:41 2009
From: tmp50 at ukr.net (Dmitrey) Date: Mon, 14 Dec 2009 20:40:41 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: Message-ID:

--- Original message ---
From: Charles R Harris
To: SciPy Developers List
Date: 14 December, 20:26:12
Subject: Re: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc

> [...]
> __has_precedence__
> __is_prior__
>
> Chuck

I guess some numpy developers should choose the final name on the numpy IRC channel and inform the list (ASAP) about their collective (and hence final) decision. I'm not skilled enough in English, but such short names seem too uninformative to me; they will not be used too often, so I guess more informative names should be preferred. D.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
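Whichever name wins, client code of the kind Dmitrey describes would declare the marker and implement the reflected operations, roughly like this sketch. The attribute name here is one candidate from the discussion, used only for illustration; note that plain Python lists already defer this way, which is exactly the behavior the thread wants from ndarray.

class OOFunSketch:
    """Toy stand-in for a FuncDesigner-style symbolic object (sketch only)."""
    __deny_ndarray__ = True  # hypothetical marker name from this thread

    def __init__(self, label):
        self.label = label

    def __mul__(self, other):
        return OOFunSketch('(%s * %r)' % (self.label, other))

    def __rmul__(self, other):
        # What ndarray would hand over to us under the proposal.
        return OOFunSketch('(%r * %s)' % (other, self.label))

f = OOFunSketch('f')
print(([1, 2] * f).label)  # lists already defer: prints ([1, 2] * f)
# With a plain ndarray, a * f instead yields an object array of products,
# which is exactly the scalar-broadcasting this thread wants to suppress.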
From charlesr.harris at gmail.com Mon Dec 14 15:01:53 2009
From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 14 Dec 2009 13:01:53 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID:

2009/12/14 Dmitrey
> [...]
> I guess some numpy developers should choose the final name on the numpy
> IRC channel and inform the list (ASAP) about their collective (and hence
> final) decision. I'm not skilled enough in English, but such short names
> seem too uninformative to me; they will not be used too often, so I guess
> more informative names should be preferred.

I don't think we should rush this. I'm waiting for more people to weigh in or offer different solutions.

Chuck

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tmp50 at ukr.net Mon Dec 14 16:03:37 2009
From: tmp50 at ukr.net (Dmitrey) Date: Mon, 14 Dec 2009 23:03:37 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID:

It seems I was wrong - the issue doesn't matter for matrices, so we could use something like __exclude_ndarray_operations__. If there is no collective opinion from the numpy developers within several hours, please don't mind me committing the changes; it is very important that the quarterly release of FuncDesigner be free of the bug. Regards, D.

P.S. The scipy.org server is down for now.

--- Original message ---
From: Charles R Harris
To: SciPy Developers List
Date: Dec 14, 2009 22:01:53
Subject: Re: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc

> [...]
> I don't think we should rush this. I'm waiting for more people to weigh
> in or offer different solutions.
>
> Chuck

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Mon Dec 14 16:24:08 2009
From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 Dec 2009 15:24:08 -0600 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <3d375d730912141324j2b810159jb7ab2850dd455bcb@mail.gmail.com>

2009/12/14 Dmitrey :
> It seems I was wrong - the issue doesn't matter for matrices, so we could
> use something like __exclude_ndarray_operations__. If there is no
> collective opinion from the numpy developers within several hours, please
> don't mind me committing the changes; it is very important that the
> quarterly release of FuncDesigner be free of the bug.

Commit what changes to which repository? We won't be modifying anything in numpy or scipy in that time frame, sorry.

> P.S. The scipy.org server is down for now.

It's up again.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From dagss at student.matnat.uio.no Mon Dec 14 16:34:21 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Mon, 14 Dec 2009 22:34:21 +0100 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <4B26AF5D.2080908@student.matnat.uio.no>

Dmitrey wrote:
> It seems I was wrong - the issue doesn't matter for matrices, so we
> could use something like __exclude_ndarray_operations__. If there is no
> collective opinion from the numpy developers within several hours,
> please don't mind me committing the changes; it is very important that
> the quarterly release of FuncDesigner be free of the bug.
> Regards, D.

Please, calm down. It's not unlikely that another quarter will pass before the release of the next version of NumPy anyway; 1.4.0 is in release candidate mode. And won't your users use old versions of NumPy for some time to come anyway?

-- Dag Sverre

From d.l.goldsmith at gmail.com Mon Dec 14 16:57:55 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 14 Dec 2009 13:57:55 -0800 Subject: [SciPy-dev] More "Notes" from "Source" Message-ID: <45d1ab480912141357q4ba5de7q8893bdb2dac48ff6@mail.gmail.com>

Hi, folks. The following were "XXX Notes" in arrays.ndarray.rst (I don't know who wrote these originally - they're in the "Source" revision.):

0) "Note XXX: update and check these docstrings." (applies to the sections titled "Memory layout," "ctypes foreign function interface," and "Array conversion.")

1) "Note XXX: update the dtype attribute docstring: setting etc." (applies to the section titled "Data type.")

2) "Note XXX: write all attributes explicitly here instead of relying on the auto* stuff?" (applies to the section titled "Arithmetic and comparison operations")

My questions are:

0) What, precisely, is meant by "update" and "check" here?

1) What does ": setting, etc." refer to? (I assume "update" has the same meaning as in 0))

2) "write all attributes explicitly here instead of relying on the auto* stuff_?_": Evidently, unless the "?" is a typo, the author of this Note wasn't sure this should be done - should it, and if so, why?

As always, thanks!

DG

From d.l.goldsmith at gmail.com Mon Dec 14 17:25:32 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 14 Dec 2009 14:25:32 -0800 Subject: [SciPy-dev] Problems in c-api.ufunc.rst Message-ID: <45d1ab480912141425w282d798cx49c10f735eb91c44@mail.gmail.com>

"param func: Must to an array of length..." Must cast to? Must point to? What verb belongs here?

"param identity: XXX: Undocumented" (from the list of params following the first Note in the section titled Functions.) Is this a pointer to self? An implementation of the identity mapping? (Which would be functionally equivalent to the former, correct?) "Undocumented" on purpose (i.e., I can just delete the "XXX")?

Thanks! DG

From tmp50 at ukr.net Tue Dec 15 06:53:46 2009
From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Dec 2009 13:53:46 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <3d375d730912141324j2b810159jb7ab2850dd455bcb@mail.gmail.com> Message-ID:

From: Robert Kern
> Commit what changes to which repository? We won't be modifying anything
> in numpy or scipy in that time frame, sorry.

I meant committing those several lines of code into the numpy repository; then I would propose that FuncDesigner users install a numpy version from the latest svn snapshot.
Ok, now I'll create a ticket and inform FD users that they should wait for the next numpy release to get rid of the bug. D.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cimrman3 at ntc.zcu.cz Tue Dec 15 17:23:44 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 15 Dec 2009 23:23:44 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) Message-ID: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz>

Quoting Nathaniel Smith :
> As mentioned previously[0], I've written a scipy.sparse-compatible
> wrapper for the CHOLMOD sparse Cholesky routines. I considered making
> it 'scikits.cholmod' (cf. scikits.umfpack), but creating a new scikit
> every time someone needs a sparse linear algebra routine seems like it
> will become very silly very quickly, so instead I hereby declare the
> existence of 'scikits.sparse' as a home for all such routines. (Of
> course, it currently only contains scikits.sparse.cholmod).
>
> [...]
>
> I would have sucked scikits.umfpack in, except that it uses SWIG,
> which I don't understand and am not really inspired to learn, at least
> for a v0.1 release. Also, there appear to still be some sort of
> complicated entanglements with scipy.sparse (e.g. in at least part of
> the test suite). Anyone feeling inspired? It's not a very complicated
> interface; just rewrapping it might be as easy as anything else.

It would be great to have all of SuiteSparse in one scikit; thanks for working in that direction.

Concerning the test entanglement - all direct umfpack references should be removed from scipy; the tests should live in the scikit IMHO. It's only for lack of time that it's not done yet. As for the wrappers, they just translate the numpy array arguments to the C arrays that umfpack expects - I guess that's the same thing you do with cython, so it should be easy to adapt. The umfpack scikit also uses a simple reuse mechanism for the partial solution objects (symbolic, numeric, the LU factors etc.) - it would be great if this could be preserved. I cannot assist you right now with code as I am out of town this week, but I will gladly help with the conversion later.

As for the wrapper licence, the umfpack scikit has been BSD, but I guess GPL is ok too, especially if the underlying library is GPL. Do you have a strong opinion on this?

cheers, r.

> SuiteSparseQR would also be a natural fit, since it uses the (already
> wrapped) CHOLMOD matrix interfaces.
>
> [0] http://mail.scipy.org/pipermail/scipy-dev/2009-November/013244.html
>
> Share and enjoy,
> -- Nathaniel

----- End forwarded message -----

From cburns at berkeley.edu Wed Dec 16 02:58:33 2009
From: cburns at berkeley.edu (Christopher Burns) Date: Wed, 16 Dec 2009 13:28:33 +0530 Subject: [SciPy-dev] documenting objects in stats.distributions Message-ID: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com>

We're working on documentation at the SciPy.in sprints and have run into a strange documentation problem. Several of the distributions are objects, not classes, so the documentation page gives a warning "Unknown section Methods".
See: http://docs.scipy.org/scipy/docs/scipy.stats.alpha/

Is there a way to update the docs to remove this warning?

This is the actual code:

class alpha_gen(rv_continuous):
    def _pdf(self, x, a):
        return 1.0/arr(x**2)/special.ndtr(a)*norm.pdf(a-1.0/x)
    def _cdf(self, x, a):
        return special.ndtr(a-1.0/x) / special.ndtr(a)
    def _ppf(self, q, a):
        return 1.0/arr(a-special.ndtri(q*special.ndtr(a)))
    def _stats(self, a):
        return [inf]*2 + [nan]*2
alpha = alpha_gen(a=0.0, name='alpha', shapes='a', extradoc="""

Alpha distribution

alpha.pdf(x,a) = 1/(x**2*Phi(a)*sqrt(2*pi)) * exp(-1/2 * (a-1/x)**2)
where Phi(alpha) is the normal CDF, x > 0, and a > 0.
""")

Thanks!

-- Christopher Burns
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
510-643-4053
http://cirl.berkeley.edu/
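The "objects, not classes" situation is easier to see in miniature. The toy below (names invented for illustration; the real rv_continuous machinery is far more involved) shows why the docstring ends up on an instance, which is what trips up the doc wiki:

# Toy illustration only -- not the real scipy.stats machinery.
class rv_continuous_toy:
    def __init__(self, name, extradoc=""):
        self.name = name
        # The docstring is assembled per *instance* at construction time,
        # so `alpha` ends up being an object rather than a class, and doc
        # tools that expect a class (or a function) get confused by it.
        self.__doc__ = ("%s continuous random variable.\n\n"
                        "Methods\n-------\npdf, cdf, ppf, ...\n\n%s"
                        % (name, extradoc))

alpha_toy = rv_continuous_toy('alpha', extradoc="Alpha distribution\n")
print(alpha_toy.__doc__)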
From dagss at student.matnat.uio.no Wed Dec 16 02:59:08 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 Dec 2009 08:59:08 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> Message-ID: <4B28934C.5070606@student.matnat.uio.no>

Robert Cimrman wrote:
> Quoting Nathaniel Smith :
> [...]
> As for the wrapper licence, the umfpack scikit has been BSD, but I
> guess GPL is ok too, especially if the underlying library is GPL. Do
> you have a strong opinion on this?

I'm not sure if you have a choice -- I believe SuiteSparse is under GPL, and I'd say a wrapper is clearly "derivative work"? IANAL, but just something to keep in mind. Keeping it GPL will at least be on the safe side. (Some parts of SuiteSparse might be under LGPL though, which would be ok, but if the scikit is going to be for all of SuiteSparse it would be less confusing to stick with GPL for the whole.)

Dag Sverre

From dagss at student.matnat.uio.no Wed Dec 16 03:12:49 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 Dec 2009 09:12:49 +0100 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <4B289681.3080901@student.matnat.uio.no>

Charles R Harris wrote:
> [...]
> I don't think we should rush this. I'm waiting for more people to
> weigh in or offer different solutions.

Well, a different solution would be to have a standard with "operand precedence", i.e. an integer which could be compared between objects, and the highest one wins and gets to decide how the arithmetic operation is carried out.

This could be used to simplify NumPy itself. I.e. assign e.g. __operand_precedence__ = 0 for ndarray, 100 for matrix, and use it internally in NumPy to decide who carries out the operation.

Between other libraries it gets messy to try to figure out what range of values to use though. One would really want to be able to make statements about a partially ordered set ("I take priority over ndarray; a Frobnicator takes priority over me..."), but it's probably way too time-consuming and complicated.

I like the "precedence" word though. So my suggestions are

  __operand_precedence__ # boolean or integer?
  __numpy_operand_precedence__ # boolean or integer?
  __precedence_over_ndarray__ # bool

Dag Sverre
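A rough sketch of how such an integer precedence could drive dispatch. The attribute name and the 0/100 values are Dag Sverre's examples from above; the dispatch helper itself is invented for illustration and is not proposed NumPy code:

def mul_dispatch(a, b):
    """Sketch: let the operand with the higher precedence carry out the multiply."""
    pa = getattr(a, '__operand_precedence__', 0)
    pb = getattr(b, '__operand_precedence__', 0)
    if pb > pa:
        return b.__rmul__(a)  # the higher-precedence right operand wins
    return a.__mul__(b)

class Arrayish(object):
    __operand_precedence__ = 0    # e.g. plain ndarray
    def __mul__(self, other):
        return 'Arrayish.__mul__'

class Matrixish(object):
    __operand_precedence__ = 100  # e.g. numpy.matrix
    def __mul__(self, other):
        return 'Matrixish.__mul__'
    def __rmul__(self, other):
        return 'Matrixish.__rmul__'

print(mul_dispatch(Arrayish(), Matrixish()))  # -> Matrixish.__rmul__
print(mul_dispatch(Matrixish(), Arrayish()))  # -> Matrixish.__mul__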
From ralf.gommers at googlemail.com Wed Dec 16 03:30:36 2009
From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 16 Dec 2009 16:30:36 +0800 Subject: [SciPy-dev] documenting objects in stats.distributions In-Reply-To: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> References: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> Message-ID:

On Wed, Dec 16, 2009 at 3:58 PM, Christopher Burns wrote:
> We're working on documentation at the SciPy.in sprints and have run
> into a strange documentation problem. Several of the distributions
> are objects, not classes, so the documentation page gives a warning
> "Unknown section Methods".
>
> See: http://docs.scipy.org/scipy/docs/scipy.stats.alpha/
>
> Is there a way to update the docs to remove this warning?

No, to remove the warning pydocweb (the wiki app) should recognize the Methods section in class instance docstrings, I think.

A related point: the docstrings for the stats distributions are generated from a template right now, so any edits to those docstrings in the doc wiki cannot be merged without a lot of tweaking. If you're also interested in working on other areas of the scipy docs, that might be more effective.

Cheers, Ralf

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From d.l.goldsmith at gmail.com Wed Dec 16 03:34:52 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 00:34:52 -0800 Subject: [SciPy-dev] documenting objects in stats.distributions In-Reply-To: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> References: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> Message-ID: <45d1ab480912160034p49c08c4av321a5192f7cd76ef@mail.gmail.com>

On Tue, Dec 15, 2009 at 11:58 PM, Christopher Burns wrote:
> We're working on documentation at the SciPy.in sprints and have run

Thanks!

> into a strange documentation problem. Several of the distributions
> are objects, not classes, so the documentation page gives a warning

Ummm, what do you mean "objects, not classes"? Instances of classes? I trust that, though it might not have been the answer you were hoping for, Ralf's reply answered your question, yes?

DG

> "Unknown section Methods".
>
> See: http://docs.scipy.org/scipy/docs/scipy.stats.alpha/
>
> Is there a way to update the docs to remove this warning?
>
> This is the actual code:
>
> class alpha_gen(rv_continuous):
> [...]
> """)
>
> Thanks!
> -- Christopher Burns
> [...]

From cburns at berkeley.edu Wed Dec 16 05:59:44 2009
From: cburns at berkeley.edu (Christopher Burns) Date: Wed, 16 Dec 2009 16:29:44 +0530 Subject: [SciPy-dev] documenting objects in stats.distributions In-Reply-To: References: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> Message-ID: <764e38540912160259r5fdc9676wef8b111d9fdaa262@mail.gmail.com>

On Wed, Dec 16, 2009 at 2:00 PM, Ralf Gommers wrote:
> No, to remove the warning pydocweb (the wiki app) should recognize the
> Methods section in class instance docstrings I think.
>
> A related point: the docstrings for the stats distributions are generated
> from a template right now, so any edits for those docstrings in the doc
> wiki can not be merged without a lot of tweaking. If you're also
> interested in working on other areas of the scipy docs, that might be
> more effective.

pydocweb doesn't appear to recognize the Methods section in the instance docstrings. Looking at the code, I see that the classes we were editing all subclass rv_continuous. So if we update the docstring for rv_continuous, say to have spaces between parameters, will that propagate to the children?

I see there are 80 children:

cburns at stats 14:41:19 $ pwd
/Users/cburns/src/scipy-trunk/scipy/stats
cburns at stats 14:41:24 $ grin '\(rv_continuous\)' distributions.py | wc -l
80

I've only looked at a few of the child classes, but in each of those the instance docstring displays the warning about the Methods section. Glancing at the pydocweb source, I'm wondering if it's mapping these docstrings to a NumpyFunctionDocString? The NumpyFunctionDocString docstrings don't have a Methods section.

Since so many of these inherit their docstring from a parent class, is it possible to show that fact on the documentation editor page? Otherwise people will make the same mistake we made and edit the generated docstrings instead of the template.

Chris

From cburns at berkeley.edu Wed Dec 16 06:03:48 2009
From: cburns at berkeley.edu (Christopher Burns) Date: Wed, 16 Dec 2009 16:33:48 +0530 Subject: [SciPy-dev] documenting objects in stats.distributions In-Reply-To: <45d1ab480912160034p49c08c4av321a5192f7cd76ef@mail.gmail.com> References: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> <45d1ab480912160034p49c08c4av321a5192f7cd76ef@mail.gmail.com> Message-ID: <764e38540912160303t7d727bbdlb9e734d2572b495a@mail.gmail.com>

On Wed, Dec 16, 2009 at 2:04 PM, David Goldsmith wrote:
>
> Ummm, what do you mean "objects, not classes"? Instances of classes?

objects:

In [21]: stats.distributions.alpha
Out[21]:

As opposed to classes:

In [27]: stats.distributions.rv_continuous
Out[27]:

Sorry, I'll refer to them as instances; we were working in ipython and I just used the terms that were in front of me.

> I trust that, though it might not have been the answer you were hoping
> for, Ralf's reply answered your question, yes?

(see other post)

Chris

From madhusudancs at gmail.com Wed Dec 16 06:20:54 2009
From: madhusudancs at gmail.com (Madhusudan C.S) Date: Wed, 16 Dec 2009 16:50:54 +0530 Subject: [SciPy-dev] Buildout for Pydocweb Message-ID:

Hi Pauli, Stefan and others, I thought of working on pydocweb during the SciPy.in 2009 Sprints.
But even before I started going through the code of pydocweb, I felt that pydocweb lacks a build system. So I sat down for half a day and have integrated buildout[0].

Buildout provides an environment to build, test and deploy apps. Recently many, many Django apps have started using buildout. It is a nice way to maintain our app and cuts down on maintenance time. So if you are interested in using buildout for pydocweb, I will be happy to mail you my patches for review.

[0] - http://www.buildout.org/

-- Thanks and regards, Madhusudan.C.S Blogs at: www.madhusudancs.info My Online Identity: madhusudancs

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From madhusudancs at gmail.com Wed Dec 16 08:31:47 2009
From: madhusudancs at gmail.com (Madhusudan C.S) Date: Wed, 16 Dec 2009 19:01:47 +0530 Subject: [SciPy-dev] Buildout for Pydocweb In-Reply-To: References: Message-ID:

Hi Pauli, Stefan and others,

2009/12/16 Madhusudan C.S
> [...]

Sorry for not sending the patches in my previous mail; I created a bzr branch on Launchpad with my buildout code at [1]. Please take a look.

[1] - https://code.launchpad.net/~madhusudancs/+junk/pydocweb

-- Thanks and regards, Madhusudan.C.S Blogs at: www.madhusudancs.info My Online Identity: madhusudancs

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ralf.gommers at googlemail.com Wed Dec 16 09:03:46 2009
From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 16 Dec 2009 22:03:46 +0800 Subject: [SciPy-dev] documenting objects in stats.distributions In-Reply-To: <764e38540912160259r5fdc9676wef8b111d9fdaa262@mail.gmail.com> References: <764e38540912152358x7beb4369s9c4375c68c14c3f4@mail.gmail.com> <764e38540912160259r5fdc9676wef8b111d9fdaa262@mail.gmail.com> Message-ID:

On Wed, Dec 16, 2009 at 6:59 PM, Christopher Burns wrote:
> pydocweb doesn't appear to recognize the Methods section in the
> instance docstrings. Looking at the code, I see that the classes we
> were editing all subclass rv_continuous. So if we update the
> docstring for rv_continuous, say to have spaces between parameters,
> will that propagate to the children?

That is right, but that template should disappear any day now. I spent some time on a patch to build up all docstrings in stats.distributions in a more flexible way (I also fixed up the spaces issues etc. already).
Josef has already reviewed and will apply the changes when he can (I think). See http://projects.scipy.org/scipy/ticket/1055

> I see there are 80 children:
> [...]
> I've only looked at a few of the child classes, but in each of those
> the instance docstring displays the warning about the Methods section.
> Glancing at the pydocweb source, I'm wondering if it's mapping these
> docstrings to a NumpyFunctionDocString? The NumpyFunctionDocString
> docstrings don't have a Methods section.

Not sure there. I think Pauli is the only real expert on pydocweb. But while you're looking at it, if you see a way to fix it, that would be awesome.

> Since so many of these inherit their docstring from a parent class, is
> it possible to show that fact on the documentation editor page?
> Otherwise people will make the same mistake we made and edit the
> generated docstrings instead of the template.

One of the admins might be able to do this, but editors/reviewers cannot apply changes to multiple pages at once. I realized before that a warning like this could be handy (or a "no edit" flag or similar), but manually going through so many pages is a real pain.

While we're at it, other docstrings that are not yet handled well in the doc editor are those of ndimage.filters. It's better than stats.distributions but also requires tweaking.

Cheers, Ralf

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From d.l.goldsmith at gmail.com Wed Dec 16 17:51:51 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 14:51:51 -0800 Subject: [SciPy-dev] More unclaimed xxx Qs Message-ID: <45d1ab480912161451u57d2ae49lb2bd1a06f28687f7@mail.gmail.com>

From basics.indexing.rst: "Note XXX: Combine numpy.doc.indexing with material section 2.2 Basic indexing? Or incorporate the material directly here? from immediately following: See also Indexing routines"

Again, the author of this is "Source"; if "Source" is unresponsive, perhaps someone else can conjecture as to why "Source" wasn't sure whether these things should be done or not. Thanks,

DG

From sebastian.walter at gmail.com Wed Dec 16 17:53:16 2009
From: sebastian.walter at gmail.com (Sebastian Walter) Date: Wed, 16 Dec 2009 23:53:16 +0100 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: <4B289681.3080901@student.matnat.uio.no> References: <4B289681.3080901@student.matnat.uio.no> Message-ID:

I have also implemented/wrapped various automatic differentiation tools in python (http://github.com/b45ch1/algopy , http://github.com/b45ch1/pyadolc, http://github.com/b45ch1/pycppad if someone is interested) and I have also come across some numpy glitches and inconsistencies.

However, this particular problem I have not encountered. I'm not sure I understand the rationale why an expression like numpy.array([1,2]) * oofun(1) should call the oofun.__rmul__ operator. Hence, I'm a little sceptical about this enhancement.

What is wrong with the following implementation? It works perfectly fine...
--------------- start code snippet -----------------

class oofun:
    def __init__(self, x):
        self.x = x

    def __mul__(self, rhs):
        print 'called __mul__'
        if isinstance(rhs, oofun):
            return oofun(self.x * rhs.x)
        else:
            return rhs * self

    def __rmul__(self, lhs):
        print 'called __rmul__'
        return oofun(self.x * lhs)

    def __str__(self):
        return str(self.x) + 'a'

    def __repr__(self):
        return str(self)

------------- end code snippet ----------------

--------- output ----------
basti at shlp:~/Desktop$ python live_demo.py
called __mul__
called __rmul__
called __rmul__
called __rmul__
[2.0a 2.0a 2.0a]
called __rmul__
called __rmul__
called __rmul__
[2.0a 2.0a 2.0a]
------------- end output --------------

regards, Sebastian

2009/12/16 Dag Sverre Seljebotn :
> [...]
> Well, a different solution would be to have a standard with "operand
> precedence", i.e. an integer which could be compared between objects,
> and the highest one wins and gets to decide how the arithmetic operation
> is carried out.
> [...]
> I like the "precedence" word though. So my suggestions are
>   __operand_precedence__ # boolean or integer?
>   __numpy_operand_precedence__ # boolean or integer?
>   __precedence_over_ndarray__ # bool
>
> Dag Sverre

From d.l.goldsmith at gmail.com Wed Dec 16 18:05:50 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 15:05:50 -0800 Subject: [SciPy-dev] XXX in basics.rst Message-ID: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com>

"Note XXX: there is overlap between this text extracted from numpy.doc and "Guide to Numpy" chapter 2. Needs combining?"

Opinions? What, ultimately, is to be the relationship between "Guide to Numpy" and the "real-time" docs: do we want to preserve this duplication of content, e.g., for user convenience, or consolidate? If the latter, does that imply that we either: a) delete this file (basics.rst) and direct users seeking this content to GtN, Chap. 2; or b) ask Travis to modify (or for permission to modify) GtN; or is there another "combining" alternative I'm not seeing?

DG

From d.l.goldsmith at gmail.com Wed Dec 16 18:36:46 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 15:36:46 -0800 Subject: [SciPy-dev] XXX in basics.types.rst Message-ID: <45d1ab480912161536s16fb71c4g4931b9ab4338269f@mail.gmail.com>

Essentially the same issue I just posted about vis-a-vis basics.rst. (I just confirmed that this is the only other "XXX" item with this particular issue.)

DG

---------- Forwarded message ----------
From: David Goldsmith
Date: Wed, Dec 16, 2009 at 3:05 PM
Subject: XXX in basics.rst
To: scipy-dev at scipy.org

"Note XXX: there is overlap between this text extracted from numpy.doc and "Guide to Numpy" chapter 2. Needs combining?"

Opinions? [...]

DG

From d.l.goldsmith at gmail.com Wed Dec 16 21:31:34 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 18:31:34 -0800 Subject: [SciPy-dev] c-info.beyond-basics.rst xxx Message-ID: <45d1ab480912161831n14e683a0y736d2a4bbaaa436f@mail.gmail.com>

Following "Specific features of ndarray sub-typing: Some special methods and attributes are used by arrays in order to facilitate the interoperation of sub-types with the base ndarray type," "Source" wrote: "Note XXX: some of the documentation below needs to be moved to the reference guide."

Dear reader, please conjecture what - either specifically or based on principle - "Source" felt should be so moved.

DG

From d.l.goldsmith at gmail.com Wed Dec 16 21:38:22 2009
From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 18:38:22 -0800 Subject: [SciPy-dev] user/misc.rst/ xxx Message-ID: <45d1ab480912161838v30eebe2apf59218e28b8ec3c4@mail.gmail.com>

The "Source" revision begins:

"Miscellaneous

Note XXX: This section is not yet written."

and yet contains 5 sections and at least 100 lines of content.

XXX indeed. (Actually, I'd characterize it more as ???)
DG From charlesr.harris at gmail.com Wed Dec 16 22:18:11 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 16 Dec 2009 20:18:11 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: <4B289681.3080901@student.matnat.uio.no> Message-ID: 2009/12/16 Sebastian Walter > I have also implemented/wrapped various automatic differentiation > tools in python > (http://github.com/b45ch1/algopy , http://github.com/b45ch1/pyadolc, > http://github.com/b45ch1/pycppad if someone is interested) > and I have also come across some numpy glitches and inconsistencies. > > However, this particular problem I have not encountered. I'm not sure > I understand the rationale why an expression like numpy.array([1,2] * > oofun(1) should call the oofun.__rmul__ operator. > Hence, I'm a little sceptical about this enhancement. > > What is wrong with the following implementation? It works perfectly fine... > > --------------- start code snippet ----------------- > > class oofun: > def __init__(self,x): > self.x = x > > def __mul__(self, rhs): > print 'called __mul__' > if isinstance(rhs, oofun): > return oofun(self.x * rhs.x) > else: > return rhs * self > > def __rmul__(self, lhs): > print 'called __rmul__' > return oofun(self.x * lhs) > > def __str__(self): > return str(self.x)+'a' > > def __repr__(self): > return str(self) > > ------------- end code snippet ---------------- > > --------- output ---------- > basti at shlp:~/Desktop$ python live_demo.py > called __mul__ > called __rmul__ > called __rmul__ > called __rmul__ > [2.0a 2.0a 2.0a] > called __rmul__ > called __rmul__ > called __rmul__ > [2.0a 2.0a 2.0a] > ------------- end output -------------- > > > That makes the behavior consistent. But suppose as a convenience one wants to implement left/right multiplication by python integers but doesn't want to allow multiplication by arrays? Or wants multiplication by arrays array-wise, not element-wise? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Dec 16 22:35:04 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 16 Dec 2009 22:35:04 -0500 Subject: [SciPy-dev] user/misc.rst/ xxx In-Reply-To: <45d1ab480912161838v30eebe2apf59218e28b8ec3c4@mail.gmail.com> References: <45d1ab480912161838v30eebe2apf59218e28b8ec3c4@mail.gmail.com> Message-ID: <5DE46EC5-5D08-42B4-B213-945C80FBC230@gmail.com> On Dec 16, 2009, at 9:38 PM, David Goldsmith wrote: > The "Source" revision begins: > > "Miscellaneous > > Note XXX: This section is not yet written." > > and yet contains 5 sections and at least 100 lines of content. > > XXX indeed. (Actually, I'd characterize it more as ???) Because it's not written indeed, just outlined. Nice outline, true, but nothing more than that yet,. Keep the warning in the content page of the user guide in mind. I'm surprised you haven't commented yet on the two 'Broadcasting' entries (the second should be byteswapping) ;) But more seriously, you raise a good point: most of the focus so far was on the reference, ie content automatically added from the docstrings, while the user guide is not as strong. Some reorganization is probably needed. In my mind, the reference is only that, a compendium of all the numpy objects (from functions to class attributes), while the user guide should present the various concepts behind numpy (type of arrays, dtypes, subclassing, broadcasting...). 
In other terms, part of what is currently in the reference could be ported to the user guide (but the user guide should systematically point to the reference)... From d.l.goldsmith at gmail.com Thu Dec 17 02:28:36 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 23:28:36 -0800 Subject: [SciPy-dev] user/misc.rst/ xxx In-Reply-To: <5DE46EC5-5D08-42B4-B213-945C80FBC230@gmail.com> References: <45d1ab480912161838v30eebe2apf59218e28b8ec3c4@mail.gmail.com> <5DE46EC5-5D08-42B4-B213-945C80FBC230@gmail.com> Message-ID: <45d1ab480912162328p24364a9el51eee63f18a566c6@mail.gmail.com> On Wed, Dec 16, 2009 at 7:35 PM, Pierre GM wrote: > On Dec 16, 2009, at 9:38 PM, David Goldsmith wrote: >> The "Source" revision begins: >> >> "Miscellaneous >> >> Note XXX: This section is not yet written." >> >> and yet contains 5 sections and at least 100 lines of content. >> >> XXX indeed. ?(Actually, I'd characterize it more as ???) > > Because it's not written indeed, just outlined. Nice outline, true, but nothing more than that yet,. Keep the warning in the content page of the user guide in mind. > I'm surprised you haven't commented yet on the two 'Broadcasting' entries (the second should be byteswapping) ;) > > But more seriously, you raise a good point: most of the focus so far was on the reference, ie content automatically added from the docstrings, while the user guide is not as strong. Some reorganization is probably needed. > In my mind, the reference is only that, a compendium of all the numpy objects (from functions to class attributes), while the user guide should present the various concepts behind numpy (type of arrays, dtypes, subclassing, broadcasting...). In other terms, part of what is currently in the reference could be ported to the user guide (but the user guide should systematically point to the reference)... > Pierre has kindly laid down the beginnings of an action plan (and here I thought no one was paying attention to my little missives) - thank you Pierre! Any volunteers for a committee to flesh it out, e.g., * Identify portions which need reorganization, and provide recommendations for how those portions should in fact be reorganized. * Specify the concepts we want in the UG - especially any that aren't already represented - including a brief statement for each as to why they should be included, and what, more or less, an acceptable presentation of them should cover/include. * Identify what in the Ref. it makes sense to port to the UG and why (and vice-versa to the extent that it might be appropriate). * QA/QC for the UG systematically pointing to the Ref. * Any other "items of guidance" writers/editors can refer to when fulfilling these requirements. (Just to be explicit about it, I see the "UG Action Plan Committee" as a distinct entity, though of course it may not have an empty intersection with the set "Writers/Editors"). Again, thanks Pierre! 
DG From d.l.goldsmith at gmail.com Thu Dec 17 02:31:20 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 16 Dec 2009 23:31:20 -0800 Subject: [SciPy-dev] user/misc.rst/ xxx In-Reply-To: <45d1ab480912162328p24364a9el51eee63f18a566c6@mail.gmail.com> References: <45d1ab480912161838v30eebe2apf59218e28b8ec3c4@mail.gmail.com> <5DE46EC5-5D08-42B4-B213-945C80FBC230@gmail.com> <45d1ab480912162328p24364a9el51eee63f18a566c6@mail.gmail.com> Message-ID: <45d1ab480912162331r6b3347d3rb2e0007d6fa040da@mail.gmail.com> On Wed, Dec 16, 2009 at 11:28 PM, David Goldsmith wrote: > On Wed, Dec 16, 2009 at 7:35 PM, Pierre GM wrote: >> On Dec 16, 2009, at 9:38 PM, David Goldsmith wrote: >>> The "Source" revision begins: >>> >>> "Miscellaneous >>> >>> Note XXX: This section is not yet written." >>> >>> and yet contains 5 sections and at least 100 lines of content. >>> >>> XXX indeed. ?(Actually, I'd characterize it more as ???) >> >> Because it's not written indeed, just outlined. Nice outline, true, but nothing more than that yet,. Keep the warning in the content page of the user guide in mind. >> I'm surprised you haven't commented yet on the two 'Broadcasting' entries (the second should be byteswapping) ;) >> >> But more seriously, you raise a good point: most of the focus so far was on the reference, ie content automatically added from the docstrings, while the user guide is not as strong. Some reorganization is probably needed. >> In my mind, the reference is only that, a compendium of all the numpy objects (from functions to class attributes), while the user guide should present the various concepts behind numpy (type of arrays, dtypes, subclassing, broadcasting...). In other terms, part of what is currently in the reference could be ported to the user guide (but the user guide should systematically point to the reference)... >> > > Pierre has kindly laid down the beginnings of an action plan (and here > I thought no one was paying attention to my little missives) - thank > you Pierre! ?Any volunteers for a committee to flesh it out, e.g., > > * Identify portions which need reorganization, and provide > recommendations for how those portions should in fact be reorganized. > > * Specify the concepts we want in the UG - especially any that aren't > already represented - including a brief statement for each as to why > they should be included, and what, more or less, an acceptable > presentation of them should cover/include. > > * Identify what in the Ref. it makes sense to port to the UG and why > (and vice-versa to the extent that it might be appropriate). > > * QA/QC for the UG systematically pointing to the Ref. > > * Any other "items of guidance" writers/editors can refer to when > fulfilling these requirements. > > (Just to be explicit about it, I see the "UG Action Plan Committee" as > a distinct entity, though of course it may not have an empty > intersection with the set "Writers/Editors"). > > Again, thanks Pierre! > > DG > Oh, and I should also be explicit that at this point in time, I'm only talking about the NumPy UG and Ref. (unless the same committee decides they want to do the same thing for SciPy). DG From tmp50 at ukr.net Thu Dec 17 02:38:51 2009 From: tmp50 at ukr.net (Dmitrey) Date: Thu, 17 Dec 2009 09:38:51 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: Message-ID: ?? 
From: Sebastian Walter

I have also implemented/wrapped various automatic differentiation
tools in python
(http://github.com/b45ch1/algopy , http://github.com/b45ch1/pyadolc,
http://github.com/b45ch1/pycppad if someone is interested)
and I have also come across some numpy glitches and inconsistencies.

However, this particular problem I have not encountered. I'm not sure
I understand the rationale why an expression like numpy.array([1,2]) *
oofun(1) should call the oofun.__rmul__ operator.
Hence, I'm a little sceptical about this enhancement.

What is wrong with the following implementation? It works perfectly fine...

--------------- start code snippet -----------------

class oofun:
    def __init__(self, x):
        self.x = x

    def __mul__(self, rhs):
        print 'called __mul__'
        if isinstance(rhs, oofun):
            return oofun(self.x * rhs.x)
        else:
            return rhs * self

    def __rmul__(self, lhs):
        print 'called __rmul__'
        return oofun(self.x * lhs)

    def __str__(self):
        return str(self.x) + 'a'

    def __repr__(self):
        return str(self)

------------- end code snippet ----------------

--------- output ----------
basti at shlp:~/Desktop$ python live_demo.py
called __mul__
called __rmul__
called __rmul__
called __rmul__
[2.0a 2.0a 2.0a]
called __rmul__
called __rmul__
called __rmul__
[2.0a 2.0a 2.0a]
------------- end output --------------

But I don't want to get an ndarray of N oofuns, I just want to get a
single oofun. First, evaluating each oofun is rather costly, so for
N >> 1 I get a serious slowdown; second, sometimes my code doesn't work
at all - for example, when I create a constraint c = oof1 < oof2 (or
lots of other examples). I expect the result of a*oof, a/oof etc to be
of type oofun, while it isn't (it yields an ndarray of oofuns).

From dagss at student.matnat.uio.no  Thu Dec 17 04:45:49 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Thu, 17 Dec 2009 10:45:49 +0100
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: 
References: <4B289681.3080901@student.matnat.uio.no>
Message-ID: <4B29FDCD.7070901@student.matnat.uio.no>

Sebastian Walter wrote:
> I have also implemented/wrapped various automatic differentiation
> tools in python
> (http://github.com/b45ch1/algopy , http://github.com/b45ch1/pyadolc,
> http://github.com/b45ch1/pycppad if someone is interested)
> and I have also come across some numpy glitches and inconsistencies.
>
> However, this particular problem I have not encountered. I'm not sure
> I understand the rationale why an expression like numpy.array([1,2]) *
> oofun(1) should call the oofun.__rmul__ operator.
> Hence, I'm a little sceptical about this enhancement.
>
> What is wrong with the following implementation? It works perfectly fine...
>
Well, if you feel that way, then simply don't use this feature. This
just leads to more flexibility for writers of libraries which depend on
NumPy.

The suggestion isn't really inconsistent with Python. Even if it is true
that left-mul takes precedence in Python, it is NOT common that an
object handles all arithmetic operations whether they make sense or not. So

2 * MyClassWhichKnowsAboutPythonIntegers()

works, because the "2" doesn't try to do anything and rmul is called!
NumPy is special because rmul never gets called with NumPy arrays, which
can be quite inconvenient at times.

Dag Sverre
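Dag's contrast is easy to demonstrate with a toy class (the name Wrapper is invented; the behavior is as described in this thread for the NumPy of this era):

--------------- start code snippet -----------------

import numpy as np

class Wrapper(object):
    def __rmul__(self, lhs):
        return 'Wrapper.__rmul__ saw %r' % (lhs,)

w = Wrapper()

# int.__mul__ returns NotImplemented, so Python falls back to
# Wrapper.__rmul__(2):
print 2 * w

# ndarray never returns NotImplemented: it wraps w in an object array
# and broadcasts, so __rmul__ sees the scalar elements one at a time,
# never the array itself:
print np.ones(3) * w

------------- end code snippet ----------------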
From sebastian.walter at gmail.com  Thu Dec 17 08:12:13 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Thu, 17 Dec 2009 14:12:13 +0100
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: <4B29FDCD.7070901@student.matnat.uio.no>
References: <4B289681.3080901@student.matnat.uio.no>
	<4B29FDCD.7070901@student.matnat.uio.no>
Message-ID: 

1) From an abstract point of view, oofun should be a data type.
And the container of choice would be a numpy.array of dtype oofun, not
of dtype object.

so array([1,2,3],dtype=float) * oofun(2) should return
array([2,4,6],dtype=returntype(oofun,float))

I.e. there should be a way to implement new data types.
Maybe add something like

class oofun:
    dtype = True

then ndarray.__mul__ would call appropriate interface functions
provided by the oofun class. I think that's what josef suggested in an
earlier post.

2) I understand that the current behaviour is not nice, e.g. for
array * matrix the matrix.__rmul__ operator should be called.
In fact, I run into the same problem a lot.

I think the only special cases of interest are objects that are
themselves containers.
E.g. matrix is a container, lil_matrix is a container, etc.
ndarray knows how to treat some containers, e.g. lists.

The question is, what should ndarray do when it encounters an object
that is a container it doesn't know how to handle.

Wouldn't the following make sense:

When ndarray.__mul__(self, other) is called, and other is a container,
it should simply check if it knows how to handle that container.
If ndarray doesn't know, it should call other.__rmul__ and hope
that `other` knows what to do.

E.g.

class matrix:
    iscontainer = True
    ....

class ndarray:
    def __mul__(self, other):
        if other.iscontainer:
            if not know_what_to_do_with(other):
                other.__rmul__(self)
        else:
            # do what is done now: treat other as dtype=object

then array * matrix
would call the ndarray.__mul__ operator;
it then realizes that matrix is a container but ndarray doesn't know
what to do with it. It would therefore call matrix.__rmul__

If other is not a container it is treated as object and we would get
the current behaviour. This would allow us to stay backward compatible.

On Thu, Dec 17, 2009 at 10:45 AM, Dag Sverre Seljebotn wrote:
> Sebastian Walter wrote:
>> I have also implemented/wrapped various automatic differentiation
>> tools in python
>> (http://github.com/b45ch1/algopy , http://github.com/b45ch1/pyadolc,
>> http://github.com/b45ch1/pycppad if someone is interested)
>> and I have also come across some numpy glitches and inconsistencies.
>>
>> However, this particular problem I have not encountered. I'm not sure
>> I understand the rationale why an expression like numpy.array([1,2]) *
>> oofun(1) should call the oofun.__rmul__ operator.
>> Hence, I'm a little sceptical about this enhancement.
>>
>> What is wrong with the following implementation? It works perfectly fine...
>>
> Well, if you feel that way, then simply don't use this feature. This
> just leads to more flexibility for writers of libraries which depend on
> NumPy.
>
> The suggestion isn't really inconsistent with Python. Even if it is true
> that left-mul takes precedence in Python, it is NOT common that an
> object handles all arithmetic operations whether they make sense or not. So
>
> 2 * MyClassWhichKnowsAboutPythonIntegers()
>
> works, because the "2" doesn't try to do anything and rmul is called!
> NumPy is special because rmul never gets called with NumPy arrays, which
> can be quite inconvenient at times.
>
> Dag Sverre
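Sebastian's proposed dispatch rule can be modelled outside of numpy with two toy classes (DenseToy stands in for ndarray, SparseToy for an unknown container; all names here are invented - this is a sketch of the suggested protocol, not of what ndarray actually does):

--------------- start code snippet -----------------

class DenseToy(object):
    # stands in for ndarray under the proposed rule
    def __init__(self, data):
        self.data = data

    def __mul__(self, other):
        if getattr(other, 'iscontainer', False):
            # a container we don't know how to handle:
            # hand the operation back to it, as proposed
            return other.__rmul__(self)
        # current behaviour: treat other as a scalar-like object
        return DenseToy([x * other for x in self.data])

class SparseToy(object):
    iscontainer = True   # the flag Sebastian suggests

    def __init__(self, diag):
        self.diag = diag

    def __rmul__(self, lhs):
        # the container decides what "lhs * self" means
        return SparseToy([a * b for a, b in zip(lhs.data, self.diag)])

result = DenseToy([1.0, 2.0]) * SparseToy([10.0, 20.0])
print result.diag   # -> [10.0, 40.0]: a SparseToy, not a DenseToy of objects

------------- end code snippet ----------------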
From tmp50 at ukr.net  Thu Dec 17 08:28:02 2009
From: tmp50 at ukr.net (Dmitrey)
Date: Thu, 17 Dec 2009 15:28:02 +0200
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: 
Message-ID: 

From: Sebastian Walter

1) From an abstract point of view, oofun should be a data type.
And the container of choice would be a numpy.array of dtype oofun, not
of dtype object.

so array([1,2,3],dtype=float) * oofun(2) should return
array([2,4,6],dtype=returntype(oofun,float))

What do you mean by oofun(2)?!
array([1,2,3],dtype=float) * oofun_instance should return a result of
type oofun and nothing else.
Same for other classes - SAGE, polynomials, etc.

When ndarray.__mul__(self, other) is called, and other is a container,
it should simply check if it knows how to handle that container.
If ndarray doesn't know, it should call other.__rmul__ and hope
that `other` knows what to do.

E.g.

class matrix:
    iscontainer = True
    ....

class ndarray:
    def __mul__(self, other):
        if other.iscontainer:
            if not know_what_to_do_with(other):
                other.__rmul__(self)
        else:
            # do what is done now: treat other as dtype=object

then array * matrix
would call the ndarray.__mul__ operator;
it then realizes that matrix is a container but ndarray doesn't know
what to do with it. It would therefore call matrix.__rmul__

If other is not a container it is treated as object and we would get
the current behaviour. This would allow us to stay backward compatible.

As for me, I dislike the idea - it's too complicated, and some
containers could sometimes want to take operation priority, using their
own __rmul__ etc. instead of ndarray's __mul__ etc.

From aisaac at american.edu  Thu Dec 17 09:21:21 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 17 Dec 2009 09:21:21 -0500
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: <4B29FDCD.7070901@student.matnat.uio.no>
References: <4B289681.3080901@student.matnat.uio.no>
	<4B29FDCD.7070901@student.matnat.uio.no>
Message-ID: <4B2A3E61.4000205@american.edu>

On 12/17/2009 4:45 AM, Dag Sverre Seljebotn wrote:
> NumPy is special because rmul never gets called with NumPy arrays, which
> can be quite inconvenient at times.

Is this not the real problem? After all, np.multiply could be used
when the need arises, removing any need to override the standard
Python precedence for *.

I know this view will be unpopular, and someone will surely argue
that changing it would make the interaction between matrices and
arrays too obscure. But in the long run, it may prove worth asking
whether this convenience feature is actually inconvenient.

Alan Isaac

From sebastian.walter at gmail.com  Thu Dec 17 10:22:09 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Thu, 17 Dec 2009 16:22:09 +0100
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: 
References: 
Message-ID: 

2009/12/17 Dmitrey :
> From: Sebastian Walter
>
> 1) From an abstract point of view, oofun should be a data type.
> And the container of choice would be a numpy.array of dtype oofun, not
> of dtype object.
>
> so array([1,2,3],dtype=float) * oofun(2) should return
> array([2,4,6],dtype=returntype(oofun,float))
>
> What do you mean by oofun(2)?!

i don't know the intrinsic implementation of your AD tool.
Maybe I'm wrong, but in automatic differentiation, a computational graph of an algorithm is > oofun is > array([1,2,3],dtype=float) * oofun_instance should return result of type > oofun and nothing else. > Same to other classes - SAGE, polinomials, etc. could you elaborate on why someone would want to do an `array * polynomial` operation and not expect it to be an array of polynomial as result? > > When ndarray.__mul__(self, other) is called, and other is a container, > it should simply check if it knows how to handle that container. > If ndarray doesn't know, it should call the other.__rmul__ and hope > that `other` knows what to do. > > E.g. > > class matrix: > iscontainer = True > .... > > class ndarray: > > def __mul__(self, other): > if other.iscontainer: > if not know_what_to_do_with(other): > other.__rmul__(self) > > else: > do what is done now: treat other as dtype=object > > then array * matrix > would call the ndarray.__mul__ operator > it then realizes that matrix is a container but ndarray doesnt know > what to do with it. It would therefore call matrix.__rmul__ > > > > If other is not a container it is treated as object and we would get > the current behaviour. This would allow to stay backward compatible. > > As for me, I dislike the idea - it's too complicated, and some containers > sometimes could want to take operations priority, using their __rmul__? etc > instead of ndarray __mul__ etc. you would simply add class oofun: iscontainer = True to your oofun implementation and you would get exactly what you want.... I don't think this in any way complicated. It is basically what you have asked for in your original post. > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From tmp50 at ukr.net Thu Dec 17 10:37:21 2009 From: tmp50 at ukr.net (Dmitrey) Date: Thu, 17 Dec 2009 17:37:21 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: Message-ID: ?? ????: Sebastian Walter could you elaborate on why someone would want to do an `array * polynomial` operation and not expect it to be an array of polynomial as result? this has been already mentioned in the thread http://permalink.gmane.org/gmane.comp.python.scientific.devel/12445 In [1]: from numpy.polynomial import Polynomial as poly In [2]: p = poly([0,1]) In [3]: ones(2) * p Out[3]: array([poly([ 0.? 1.], [-1.? 1.]), poly([ 0.? 1.], [-1.? 1.])], dtype=object) In [4]: p * ones(2) Out[4]: Polynomial([ 0.,? 1.,? 1.], [-1.,? 1.]) you would simply add class oofun: iscontainer = True to your oofun implementation and you would get exactly what you want....I guess "iscontainer" is a bad choice for the field, but if numpy developers decide to use this one, let it be, it doesn't matter for me sufficiently. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.walter at gmail.com Thu Dec 17 11:13:26 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 17 Dec 2009 17:13:26 +0100 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: 2009/12/17 Dmitrey : > ?? ????: Sebastian Walter > > could you elaborate on why someone would > want to do an `array * polynomial` operation and not expect it to > be an array of polynomial as result? 
> > this has been already mentioned in the thread > http://permalink.gmane.org/gmane.comp.python.scientific.devel/12445 > In [1]: from numpy.polynomial import Polynomial as poly > > In [2]: p = poly([0,1]) > > In [3]: ones(2) * p > Out[3]: array([poly([ 0.? 1.], [-1.? 1.]), poly([ 0.? 1.], [-1.? 1.])], > dtype=object) > > In [4]: p * ones(2) > Out[4]: Polynomial([ 0.,? 1.,? 1.], [-1.,? 1.]) let me rephrase then. I don't understand why p * ones(2) should give Polynomial([ 0., 1., 1.], [-1., 1.]). A polynomial over the reals is a data type with a ring structure and should therefore behave "similarly" to floats IMHO. > > you would simply add > > class oofun: > iscontainer = True > > to your oofun implementation and you would get exactly what you want.... > > I guess "iscontainer" is a bad choice for the field, but if numpy developers > decide to use this one, let it be, it doesn't matter for me sufficiently. I'm just making suggestions ;). I'm as much numpy dev as you are... > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From tmp50 at ukr.net Thu Dec 17 12:07:37 2009 From: tmp50 at ukr.net (Dmitrey) Date: Thu, 17 Dec 2009 19:07:37 +0200 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: Message-ID: ?? ????: Sebastian Walter let me rephrase then. I don't understand why p * ones(2) should give Polynomial([ 0., 1., 1.], [-1., 1.]). A polynomial over the reals is a data type with a ring structure and should therefore behave "similarly" to floats IMHO. Since I'm not a numpy developer, I cannot give you irrefutable answer, but I guess it is much more useful for numpy users that are mostly engineering programmers, not researchers of a data type with a ring structure. Also, this is not only up to polynomials - as it has been mentioned, this issue is important for stacking with SAGE data types, oofuns etc, where users certainly want to get same type instead of an ndarray. -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.bergstra at gmail.com Thu Dec 17 16:27:27 2009 From: james.bergstra at gmail.com (James Bergstra) Date: Thu, 17 Dec 2009 16:27:27 -0500 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: Message-ID: <7f1eaee30912171327g318de484s1afd2d5bed3eb1a1@mail.gmail.com> I develop another symbolic-over-numpy package called theano, and somehow we avoid this problem. In [1]: import theano In [2]: import numpy In [3]: numpy.ones(4) * theano.tensor.dmatrix() Out[3]: Elemwise{mul,no_inplace}.0 In [4]: theano.tensor.dmatrix() * theano.tensor.dmatrix() Out[4]: Elemwise{mul,no_inplace}.0 In [5]: theano.tensor.dmatrix() * numpy.ones(4) Out[5]: Elemwise{mul,no_inplace}.0 The dmatrix() function returns an instance of the TensorVariable class defined in this file: http://trac-hg.assembla.com/theano/browser/theano/tensor/basic.py#L901 I think the only thing we added for numpy was __array_priority__ = 1000, which has already been suggested here. I'm confused by why this thread goes on. James 2009/12/17 Dmitrey : > ?? ????: Sebastian Walter > > > let me rephrase then. I don't understand why p * ones(2) should give > Polynomial([ 0., 1., 1.], [-1., 1.]). > > A polynomial over the reals is a data type with a ring structure and > > should therefore behave "similarly" to floats IMHO. 
>
> Since I'm not a numpy developer, I cannot give you an irrefutable answer,
> but I guess it is much more useful for numpy users, who are mostly
> engineering programmers, not researchers of a data type with a ring
> structure.
>
> Also, this is not only up to polynomials - as it has been mentioned, this
> issue is important for stacking with SAGE data types, oofuns etc, where
> users certainly want to get the same type instead of an ndarray.
>

--
http://www-etud.iro.umontreal.ca/~bergstrj

From charlesr.harris at gmail.com  Thu Dec 17 18:35:46 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 17 Dec 2009 16:35:46 -0700
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: <7f1eaee30912171327g318de484s1afd2d5bed3eb1a1@mail.gmail.com>
References: <7f1eaee30912171327g318de484s1afd2d5bed3eb1a1@mail.gmail.com>
Message-ID: 

On Thu, Dec 17, 2009 at 2:27 PM, James Bergstra wrote:

> I develop another symbolic-over-numpy package called theano, and
> somehow we avoid this problem.
>
> In [1]: import theano
>
> In [2]: import numpy
>
> In [3]: numpy.ones(4) * theano.tensor.dmatrix()
> Out[3]: Elemwise{mul,no_inplace}.0
>
> In [4]: theano.tensor.dmatrix() * theano.tensor.dmatrix()
> Out[4]: Elemwise{mul,no_inplace}.0
>
> In [5]: theano.tensor.dmatrix() * numpy.ones(4)
> Out[5]: Elemwise{mul,no_inplace}.0
>
>
> The dmatrix() function returns an instance of the TensorVariable class
> defined in this file:
> http://trac-hg.assembla.com/theano/browser/theano/tensor/basic.py#L901
>
> I think the only thing we added for numpy was __array_priority__ =
> 1000, which has already been suggested here. I'm confused by why this
> thread goes on.
>

Hmm,

That does seem to work. I wonder if it is intended or just fortuitous,
the documentation says:

> The __array_priority__ attribute
>
> __array_priority__
> This attribute allows simple but flexible determination of which sub-type
> should be considered "primary" when an operation involving two or more
> sub-types arises. In operations where different sub-types are being used,
> the sub-type with the largest __array_priority__ attribute will determine
> the sub-type of the output(s). If two sub-types have the same
> __array_priority__ then the sub-type of the first argument determines
> the output. The default __array_priority__ attribute returns a value of 0.0
> for the base ndarray type and 1.0 for a sub-type. This attribute can also be
> defined by objects that are not sub-types of the ndarray and can be used to
> determine which __array_wrap__ method should be called for the return
> output.

Which doesn't seem directly applicable. Perhaps the documentation is
wrong, the last sentence is a bit confusing.

Chuck
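The trick James describes can be seen in isolation with a toy class (the name Sym and the printed strings are invented; the behavior follows the ufunc_object.c comment quoted later in the thread, for the NumPy of this era). Because Sym advertises __array_priority__ and defines __rmul__, the ndarray returns NotImplemented and Python falls back to Sym.__rmul__ with the whole array, instead of broadcasting Sym elementwise:

--------------- start code snippet -----------------

import numpy as np

class Sym(object):
    __array_priority__ = 1000.0   # the attribute theano sets

    def __mul__(self, other):
        return 'Sym.__mul__'

    def __rmul__(self, other):
        return 'Sym.__rmul__ got a %s' % type(other).__name__

print np.ones(4) * Sym()   # -> Sym.__rmul__ got a ndarray (one call)
print Sym() * np.ones(4)   # -> Sym.__mul__

------------- end code snippet ----------------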
From d.l.goldsmith at gmail.com  Thu Dec 17 19:30:26 2009
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 17 Dec 2009 16:30:26 -0800
Subject: [SciPy-dev] Numpy User Guide sections requiring author/editors and/or committees to propose content/coverage
Message-ID: <45d1ab480912171630k38b1d9eer5c10b1cbf6cd5859@mail.gmail.com>

misc.rst
performance.rst
howtofind.rst

DG

From charlesr.harris at gmail.com  Thu Dec 17 19:54:16 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 17 Dec 2009 17:54:16 -0700
Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc
In-Reply-To: 
References: <7f1eaee30912171327g318de484s1afd2d5bed3eb1a1@mail.gmail.com>
Message-ID: 

On Thu, Dec 17, 2009 at 4:35 PM, Charles R Harris wrote:

> On Thu, Dec 17, 2009 at 2:27 PM, James Bergstra wrote:
>
>> I develop another symbolic-over-numpy package called theano, and
>> somehow we avoid this problem.
>>
>> In [1]: import theano
>>
>> In [2]: import numpy
>>
>> In [3]: numpy.ones(4) * theano.tensor.dmatrix()
>> Out[3]: Elemwise{mul,no_inplace}.0
>>
>> In [4]: theano.tensor.dmatrix() * theano.tensor.dmatrix()
>> Out[4]: Elemwise{mul,no_inplace}.0
>>
>> In [5]: theano.tensor.dmatrix() * numpy.ones(4)
>> Out[5]: Elemwise{mul,no_inplace}.0
>>
>>
>> The dmatrix() function returns an instance of the TensorVariable class
>> defined in this file:
>> http://trac-hg.assembla.com/theano/browser/theano/tensor/basic.py#L901
>>
>> I think the only thing we added for numpy was __array_priority__ =
>> 1000, which has already been suggested here. I'm confused by why this
>> thread goes on.
>>
> Hmm,
>
> That does seem to work. I wonder if it is intended or just fortuitous, the
> documentation says:
>
>> The __array_priority__ attribute
>>
>> __array_priority__
>> This attribute allows simple but flexible determination of which sub-type
>> should be considered "primary" when an operation involving two or more
>> sub-types arises. In operations where different sub-types are being used,
>> the sub-type with the largest __array_priority__ attribute will determine
>> the sub-type of the output(s). If two sub-types have the same
>> __array_priority__ then the sub-type of the first argument determines
>> the output. The default __array_priority__ attribute returns a value of
>> 0.0 for the base ndarray type and 1.0 for a sub-type. This attribute can
>> also be defined by objects that are not sub-types of the ndarray and can be
>> used to determine which __array_wrap__ method should be called for the
>> return output.
>
> Which doesn't seem directly applicable. Perhaps the documentation is wrong,
> the last sentence is a bit confusing.
>

OK, looks intended:

    /*
     * FAIL with NotImplemented if the other object has
     * the __r<op>__ method and has __array_priority__ as
     * an attribute (signalling it can handle ndarray's)
     * and is not already an ndarray or a subtype of the same type.
     */

This is in ufunc_object.c. However, it doesn't work for general ufuncs,
i.e., np.multiply(a,b) isn't the same as "a * b"

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From d.l.goldsmith at gmail.com Thu Dec 17 20:14:38 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 17 Dec 2009 17:14:38 -0800 Subject: [SciPy-dev] doc question: special.orthogonal.p_roots and co In-Reply-To: <645BA5FF-E292-4405-80C7-FC6A2033B8C0@enthought.com> References: <1cd32cbb0912060846n489708b5m1d1eda2b84fa62dc@mail.gmail.com> <1cd32cbb0912061343n4aa51c5cjdc0691c9e091362@mail.gmail.com> <1cd32cbb0912080647p6a0bf047p4a41c9b3e353d9ff@mail.gmail.com> <645BA5FF-E292-4405-80C7-FC6A2033B8C0@enthought.com> Message-ID: <45d1ab480912171714h38e80c0fm7d65ffe22bb7e4cf@mail.gmail.com> On Wed, Dec 9, 2009 at 3:33 AM, Travis Oliphant wrote: > > On Dec 8, 2009, at 8:47 AM, josef.pktd at gmail.com wrote: > > On Tue, Dec 8, 2009 at 9:11 AM, Ralf Gommers > wrote: > > In rev 6070 Pauli added an __all__ dict to orthogonal.py that does not > > include those functions. I think pydocweb only generates pages for > > objects > > in __all__ if that exists. So it looks like that is the reason. > > Should all the xx_roots funcs be in __all__ in your opinion? > > I agree with Josef, and think they should be in the __all__. > They are a simpler way to access just the roots and weights. ? They are > documented themselves (which is usually an indicator that they are intended > to be used outside the single file). > I'll add them to the __all__ if there are no strong objections. > -Travis > -- > Travis Oliphant > Enthought Inc. > 1-512-536-1057 > http://www.enthought.com > oliphant at enthought.com So, are there any pending doc changes that need to be made as a result of this thread? (IIUC, it was _either_ the functions should be added to __all__, _or_ the module docstring needed to be changed, and since I see no objections, I presume Travis added them to __all__, and thus no doc changes were/are required - correct?) From charlesr.harris at gmail.com Fri Dec 18 11:00:23 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 Dec 2009 09:00:23 -0700 Subject: [SciPy-dev] Suppressing of numpy __mul__, __div__ etc In-Reply-To: References: <7f1eaee30912171327g318de484s1afd2d5bed3eb1a1@mail.gmail.com> Message-ID: On Thu, Dec 17, 2009 at 5:54 PM, Charles R Harris wrote: > > > On Thu, Dec 17, 2009 at 4:35 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Thu, Dec 17, 2009 at 2:27 PM, James Bergstra > > wrote: >> >>> I develop another symbolic-over-numpy package called theano, and >>> somehow we avoid this problem. >>> >>> In [1]: import theano >>> >>> In [2]: import numpy >>> >>> In [3]: numpy.ones(4) * theano.tensor.dmatrix() >>> Out[3]: Elemwise{mul,no_inplace}.0 >>> >>> In [4]: theano.tensor.dmatrix() * theano.tensor.dmatrix() >>> Out[4]: Elemwise{mul,no_inplace}.0 >>> >>> In [5]: theano.tensor.dmatrix() * numpy.ones(4) >>> Out[5]: Elemwise{mul,no_inplace}.0 >>> >>> >>> The dmatrix() function returns an instance of the TensorVariable class >>> defined in this file: >>> http://trac-hg.assembla.com/theano/browser/theano/tensor/basic.py#L901 >>> >>> I think the only thing we added for numpy was __array_priority__ = >>> 1000, which has already been suggested here. I'm confused by why this >>> thread goes on. >>> >>> >> Hmm, >> >> That does seem to work. I wonder if it is intended or just fortuitous, the >> documentation says: >> >> The __array_priority__ attribute >>> >>> __array_priority__ >>> This attribute allows simple but flexible determination of which sub- >>> type should be considered ?primary? 
when an operation involving two or more >>> sub-types arises. In operations where different sub-types are being used, >>> the sub-type with the largest __array_priority__ attribute will determine >>> the sub-type of the output(s). If two sub- types have the same >>> __array_priority__ then the sub-type of the first argument determines >>> the output. The default __array_priority__ attribute returns a value of >>> 0.0 for the base ndarray type and 1.0 for a sub-type. This attribute can >>> also be defined by objects that are not sub-types of the ndarray and can be >>> used to determine which __array_wrap__ method should be called for the >>> return output. >>> >> >> Which doesn't seem directly applicable. Perhaps the documentation is >> wrong, the last sentence is a bit confusing. >> >> > OK, looks intended: > > /* > * FAIL with NotImplemented if the other object has > * the __r__ method and has __array_priority__ as > * an attribute (signalling it can handle ndarray's) > * and is not already an ndarray or a subtype of the same type. > */ > > This is in ufunc_object.c. However, it doesn't works for general ufuncs, > i.e., np.multiply(a,b) isn't the same as "a * b" > > What makes this confusing is that it is in the wrong place. This sort of behaviour should be enforced in multiarray, not in the ufunc object. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Dec 19 07:52:34 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 19 Dec 2009 14:52:34 +0200 Subject: [SciPy-dev] Buildout for Pydocweb In-Reply-To: References: Message-ID: <1261227151.4934.47.camel@idol> Hi, ke, 2009-12-16 kello 19:01 +0530, Madhusudan C.S kirjoitti: > 2009/12/16 Madhusudan C.S > Hi Pauli, Stefan and others, > I thought of working on pydocweb during SciPy.in 2009 > Sprints. But even before I started going through the code > of pydocweb, I felt pydocweb lacks a build system. So I > sat for half a day and have integrated buildout[0]. > > Buildout provides an environment to build, test and > deploy apps. Recently many, many django apps have > started using buildout. It is a nice way to maintain > our app and requires less time for maintaining. So if you > are interested in using buildout for pydocweb, I will be > happy to mail you my patches for review. > > [0] - http://www.buildout.org/ > > Sorry for not sending the patches in my previous mail, > I created a bzr branch on launchpad with my code for > buildout at [1]. Please take a look. Thanks for spending time on this! Some comments: Pydocweb is not yet really at the stage at which it's a reusable Django application. Some more reorganization needs to be done before it will actually work in a buildout / installed: - also install the pydoc-tool.py script - same for the database migration scripts - some of the templates should be moved inside docweb/, I believe Before this is done, having a build system in place is perhaps not so useful, because a major part of the functionality assumes a certain project layout, and does not reside only in the app. Did you check that you can run pydocweb from the buildout? *** Anyway, moving src/ -> pydocweb/ seems to make it work, and preserves the possibility to run in-place without buildout. Merging, thanks! 
-- Pauli Virtanen From timvictor at gmail.com Sun Dec 20 00:46:43 2009 From: timvictor at gmail.com (Tim Victor) Date: Sun, 20 Dec 2009 00:46:43 -0500 Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem In-Reply-To: <4B1540AD.3080005@ntc.zcu.cz> References: <1cd32cbb0911251329se97c355x8ec78903c4260ce@mail.gmail.com> <4B139F28.8060807@ntc.zcu.cz> <4B151617.1040009@ntc.zcu.cz> <4B1540AD.3080005@ntc.zcu.cz> Message-ID: On Tue, Dec 1, 2009 at 11:13 AM, Robert Cimrman wrote: > Tim Victor wrote: >> On Tue, Dec 1, 2009 at 8:11 AM, Robert Cimrman wrote: >>> I think scipy.sparse indexing should follow the behavior of numpy dense arrays. >>> ?This is what current SVN scipy does (0.8.0.dev6122): >>> >>> In [1]: import scipy.sparse as spp >>> In [2]: a = spp.lil_matrix((10,10)) >>> In [3]: a[range(10),0] = 1 >>> >>> This is ok. >>> >>> In [5]: a[range(10),range(10)] = -1 >>> In [8]: print a.todense() >>> ------> print(a.todense()) >>> [[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.] >>> ?[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]] >>> >>> This is IMHO not ok (what other sparse matrix users think?) >>> >>> In [9]: import scipy as sp >>> In [10]: a[range(10),range(9,-1,-1)] = sp.rand(10) >>> In [12]: print a.todense() > ?>> >>> >>> same as above... >>> >>>> I'm reminded of the Zen of Python "Explicit is better than implicit." guideline. >>> :) >>> >>> Consider also this: the current behavior (broadcasting the index arrays to the >>> whole rectangle) is not compatible with NumPy, and does not allow setting >>> elements e.g. in a random sequence of positions. On the other hand, the >>> broadcasting behaviour can be easily, explicitely, obtained by using >>> numpy.mgrid and similar functions. >>> >>>> Best regards, and many apologies for the inconvenience, >>> No problem, any help and code contribution is more than welcome! >>> >>> I guess that fixing this issue should not be too difficult, so you could make >>> another stab :-) If there is a consensus here, of course... (Nathan?) >>> >>> cheers, >>> r. >> >> Yes, I agree with you 100%, Robert. The behavior of NumPy for dense >> arrays should be the guide, and I tried to follow it but didn't know >> to check that case. > > No problem at all. It was a coincidence that I stumbled on a case that was not > covered by the tests. I do not even uses lil_matrix much :) > >> I don't defend how my version handles your case where the i and j >> indexes are both sequences. The behavior that you expect is correct >> and I plan to fix it to make your code work. I would however like to >> make sure that I understand it well and get it all correct this >> time--including correctly handling the case where the right-hand side >> is also a sequence. > > Sure! I am not an expert in this either, so let's wait a bit if somebody chimes > in... Could you summarize this discussion in a new ticket, if you have a little > spare time? > > Note that I do not push this to be fixed soon by any means, my code already > runs ok with the current version. So take my "bugreport" easy ;) > > Best, > r. Finally had some time to figure out the ticket system and create one! 
http://projects.scipy.org/scipy/ticket/1075

I hope I did that right.

My teaching duties are over for the semester and I should have some
time to dive back into the code from now until the end of the year.

Best regards,

Tim Victor

From tmp50 at ukr.net  Sun Dec 20 07:52:40 2009
From: tmp50 at ukr.net (Dmitrey)
Date: Sun, 20 Dec 2009 14:52:40 +0200
Subject: [SciPy-dev] How to ensure scipy version is no older than the svn number?
Message-ID: 

Hi all,
what's the recommended way to ensure the scipy installed is no older
than a given svn repository number? (for example, I'm interested in
r6139).
Thank you in advance, D.

From matthieu.brucher at gmail.com  Sun Dec 20 09:15:40 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 20 Dec 2009 15:15:40 +0100
Subject: [SciPy-dev] How to ensure scipy version is no older than the svn number?
In-Reply-To: 
References: 
Message-ID: 

Hi Dmitrey,

You can always check the scipy version, but I assume that people will
want to have a newer version than r6139 and use your software!

Matthieu

2009/12/20 Dmitrey :
> Hi all,
> what's the recommended way to ensure the scipy installed is no older than a
> given svn repository number? (for example, I'm interested in r6139).
> Thank you in advance, D.

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From wnbell at gmail.com  Sun Dec 20 09:31:02 2009
From: wnbell at gmail.com (Nathan Bell)
Date: Sun, 20 Dec 2009 09:31:02 -0500
Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem
In-Reply-To: 
References: <4B139F28.8060807@ntc.zcu.cz> <4B151617.1040009@ntc.zcu.cz>
	<4B1540AD.3080005@ntc.zcu.cz>
Message-ID: 

On Sun, Dec 20, 2009 at 12:46 AM, Tim Victor wrote:
>
> Finally had some time to figure out the ticket system and create one!
>
> http://projects.scipy.org/scipy/ticket/1075
>
> I hope I did that right.

Looks good to me!

>
> My teaching duties are over for the semester and I should have some
> time to dive back into the code from now until the end of the year.
>

Thanks for following up on this Tim. Let us know if you encounter any problems.

--
Nathan Bell wnbell at gmail.com
http://www.wnbell.com/

From charlesr.harris at gmail.com  Sun Dec 20 19:03:41 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 20 Dec 2009 17:03:41 -0700
Subject: [SciPy-dev] Duplicate functionality.
Message-ID: 

Hi All,

There are two versions of bisection for finding roots, bisect in
zeros.py and bisection in minpack.py. I think it might be appropriate
to deprecate bisection with a suggestion to use bisect instead. Along
the same lines, I think newton should be moved into the zeros module.
Thoughts?

Chuck
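For readers who haven't used it: a short usage sketch of the bisect that Chuck proposes keeping (scipy.optimize exposes it from zeros.py; the target function and tolerance here are arbitrary choices for illustration):

--------------- start code snippet -----------------

import math
from scipy.optimize import bisect   # the C-backed root finder from zeros.py

# bisect needs a bracketing interval with a sign change:
# cos(0) > 0 > cos(2), so the root pi/2 lies in [0, 2].
root = bisect(math.cos, 0.0, 2.0, xtol=1e-12)
print root, abs(root - math.pi / 2)   # agrees with pi/2 to ~1e-12

------------- end code snippet ----------------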
From d.l.goldsmith at gmail.com  Sun Dec 20 20:46:27 2009
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Sun, 20 Dec 2009 17:46:27 -0800
Subject: [SciPy-dev] Duplicate functionality.
In-Reply-To: 
References: 
Message-ID: <45d1ab480912201746n1aa154bcx9dfbe481ca5e332f@mail.gmail.com>

On Sun, Dec 20, 2009 at 4:03 PM, Charles R Harris wrote:
> Hi All,
>
> There are two versions of bisection for finding roots, bisect in zeros.py
> and bisection in minpack.py. I think it might be appropriate to deprecate
> bisection with a suggestion to use bisect instead. Along the same lines, I
> think newton should be moved into the zeros module. Thoughts?
>
> Chuck

First off, are the implementations equivalent, e.g., do they both
return the same results for test cases, and with essentially the same
speed? Are they both implemented at the C level? Both at the Python?
One one way, the other the other? You see where I'm going with this:
make sure you keep the "right" one. :-)

Second, it's been a while, but my recollection is that some
minimization methods (e.g., conjugate gradient) rely fundamentally on
the bisection method, correct? Is bisection called within minpack.py?
More than once? Do you really want to add an import zeros into
minpack.py and replace bisection everywhere with zeros.bisect? (It's
a semi-rhetorical question: I know it's not that big a deal to do so,
"but if it ain't broke, don't fix it"; of course, code divergence is a
source of future breakage...)

Those would be my only concerns about that; as far as moving newton
into zeros, I assume we'd keep a copy where it is through two release
cycles and add a "this has been moved" warning, correct?

DG

From charlesr.harris at gmail.com  Sun Dec 20 21:48:10 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 20 Dec 2009 19:48:10 -0700
Subject: [SciPy-dev] Duplicate functionality.
In-Reply-To: <45d1ab480912201746n1aa154bcx9dfbe481ca5e332f@mail.gmail.com>
References: 
	<45d1ab480912201746n1aa154bcx9dfbe481ca5e332f@mail.gmail.com>
Message-ID: 

On Sun, Dec 20, 2009 at 6:46 PM, David Goldsmith wrote:

> On Sun, Dec 20, 2009 at 4:03 PM, Charles R Harris wrote:
>> Hi All,
>>
>> There are two versions of bisection for finding roots, bisect in zeros.py
>> and bisection in minpack.py. I think it might be appropriate to deprecate
>> bisection with a suggestion to use bisect instead. Along the same lines, I
>> think newton should be moved into the zeros module. Thoughts?
>>
>> Chuck
>
> First off, are the implementations equivalent, e.g., do they both
> return the same results for test cases, and with essentially the same
> speed? Are they both implemented at the C level? Both at the Python?
> One one way, the other the other? You see where I'm going with this:
> make sure you keep the "right" one. :-)
>

Bisection is in python, bisect is in C and a bit faster. I haven't
checked the comparative accuracy, bisect is the only one currently
tested. Newton isn't tested either.

> Second, it's been a while, but my recollection is that some
> minimization methods (e.g., conjugate gradient) rely fundamentally on
> the bisection method, correct? Is bisection called within minpack.py?

Doesn't look like it. Newton doesn't seem to be called anywhere either.

> More than once? Do you really want to add an import zeros into
> minpack.py and replace bisection everywhere with zeros.bisect?

No, but I don't think any of the routines are needed there. The c
versions of the zero finders were written back around 2003 (by me) and
came later than the versions in minpack. I think the current placement
is a bit of an historical accident.

> (It's
> a semi-rhetorical question: I know it's not that big a deal to do so,
> "but if it ain't broke, don't fix it"; of course, code divergence is a
> source of future breakage...)
>

Maintenance can get to be a hassle, but that really isn't a problem yet.
Those would be my only concerns about that; as far as moving newton > into zeros, I assume we'd keep a copy where it is through two release > cycles and add a "this has been moved" warning, correct? > > Actually, I'd like to move zeros too, but that is probably a change too much ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Dec 21 11:28:03 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 21 Dec 2009 17:28:03 +0100 Subject: [SciPy-dev] Possible fix for scipy.sparse.lil_matrix column-slicing problem In-Reply-To: References: <4B139F28.8060807@ntc.zcu.cz> <4B151617.1040009@ntc.zcu.cz> <4B1540AD.3080005@ntc.zcu.cz> Message-ID: <4B2FA213.8090307@ntc.zcu.cz> Nathan Bell wrote: > On Sun, Dec 20, 2009 at 12:46 AM, Tim Victor wrote: >> Finally had some time to figure out the ticket system and create one! >> >> http://projects.scipy.org/scipy/ticket/1075 >> >> I hope I did that right. > > Looks good to me! > >> My teaching duties are over for the semester and I should have some >> time to dive back into the code from now until the end of the year. >> > > Thanks for following up on this Tim. Let us know if you encounter any problems. Yes, thanks Tim! You can cc me when you update the ticket, in case you need feedback. r. From cimrman3 at ntc.zcu.cz Mon Dec 21 11:42:08 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 21 Dec 2009 17:42:08 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <4B28934C.5070606@student.matnat.uio.no> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> Message-ID: <4B2FA560.7060502@ntc.zcu.cz> Dag Sverre Seljebotn wrote: > Robert Cimrman wrote: >> Quoting Nathaniel Smith : >> >> >>> As mentioned previously[0], I've written a scipy.sparse-compatible >>> wrapper for the CHOLMOD sparse Cholesky routines. I considered making >>> it 'scikits.cholmod' (cf. scikits.umfpack), but creating a new scikit >>> every time someone needs a sparse linear algebra routine seems like it >>> will become very silly very quickly, so instead I hereby declare the >>> existence of 'scikits.sparse' as a home for all such routines. (Of >>> course, it currently only contains scikits.sparse.cholmod). >>> >>> Manual: >>> http://packages.python.org/scikits.sparse/ >>> Source: >>> hg clone https://scikits-sparse.googlecode.com/hg/ scikits.sparse >>> Homepage: >>> http://code.google.com/p/scikits-sparse >>> Bug tracker: >>> http://code.google.com/p/scikits-sparse/issues/list >>> Mailing list: >>> scikits-sparse-discuss at lists.vorpus.org >>> http://lists.vorpus.org/cgi-bin/mailman/listinfo/scikits-sparse-discuss >>> >>> I would have sucked scikits.umfpack in, except that it uses SWIG, >>> which I don't understand and am not really inspired to learn, at least >>> for a v0.1 release. Also, there appear to still be some sort of >>> complicated entanglements with scipy.sparse (e.g. in at least part of >>> the test suite). Anyone feeling inspired? It's not a very complicated >>> interface; just rewrapping it might be as easy as anything else. >>> >> It would be great to have all the suitesparse in one scikit, thanks >> for working in that direction. >> >> Concerning the test entanglement - all direct umfpack references >> should be removed from scipy, the tests should live in the scikit >> IMHO. It's just my lack of time it's not done yet. 
As for wrappers, >> they just translate the numpy array arguments to the C arrays that >> umfpack expects - I guess it's the same you do with cython, so it >> should be easy to adapt. The umfpack scikit also uses a simple reuse >> mechanisms for the partial solution objects (symbolic, numeric, the LU >> factors etc.) - it would be great if this could be preserved. I cannot >> assist you right now by code as I am out of town this week, but I will >> gladly help with the conversion later. >> >> As for the wrapper licence, the umfpack scikit has been BSD, but I >> guess GPL is ok too, especially if the underlying library is GPL. Do >> you have a strong opinion on this? >> > I'm not sure if you have a choice -- I believe SuiteSparse is under GPL, > and I'd say a wrapper is clearly "derivative work"? > > IANAL, but just something to keep in mind. Keeping it GPL will at least > be on the safe side. IANAL either, so yes, I am +0 on that. > (Some parts of SuiteSparse might be under LGPL though, which would be > ok, but if the scikit is going for be for all of SuiteSparse it would be > less confusing to stick with GPL for the whole.) +1 to have the same license for all the scikit parts. Nathaniel, I have looked at the docs [1] (btw. very nice!) and noticed that you use capital letters to denote matrices in function and method names. What is general opinion on this considering the recommendations [2] (specifically [3])? In this case, the issue could be side-stepped by replacing the specialized functions, e.g. all Factor.solve_*() by a single function, e.g. Factor.solve( ..., mode='A') with a 'mode' argument. What do you think? Typing Factor.solve_P(b) or Factor.solve(b, 'P') seems of the same complexity/readability to me. cheers, r. [1] http://packages.python.org/scikits.sparse/ [2] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines [3] http://www.python.org/dev/peps/pep-0008/ From JDM at MarchRay.net Mon Dec 21 13:21:53 2009 From: JDM at MarchRay.net (Jonathan March) Date: Mon, 21 Dec 2009 12:21:53 -0600 Subject: [SciPy-dev] request docs edit permission Message-ID: Responding to the Documentation Progress Report in the just-released proceedings (excellent work, many thanks!) numpy user username: jdmarch -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Mon Dec 21 13:33:56 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 21 Dec 2009 10:33:56 -0800 Subject: [SciPy-dev] Duplicate functionality. In-Reply-To: References: <45d1ab480912201746n1aa154bcx9dfbe481ca5e332f@mail.gmail.com> Message-ID: <45d1ab480912211033r1ed51c38q946b11fb008d5e4f@mail.gmail.com> On Sun, Dec 20, 2009 at 6:48 PM, Charles R Harris wrote: > > > On Sun, Dec 20, 2009 at 6:46 PM, David Goldsmith > wrote: >> >> On Sun, Dec 20, 2009 at 4:03 PM, Charles R Harris >> wrote: >> > Hi All, >> > >> > There are two versions of bisection for finding roots, bisect in >> > zeros.py >> > and bisection in minpack.py. I think it might be appropriate to >> > deprecate >> > bisection with a suggestion to use bisect instead. Along the same lines, >> > I >> > think newton should be moved into the zeros module. Thoughts? >> > >> > Chuck >> >> First off, are the implementations equivalent, e.g., do they both >> return the same results for test cases, and with essentially the same >> speed? ?Are they both implemented at the C level? ?Both at the Python? >> ?One one way, the other the other? ?You see where I'm going with this: >> make sure you keep the "right" one. 
:-) >> > > Bisection is in python, bisect is in C and a bit faster. I haven't checked > the comparative accuracy, bisect is the only one currently tested. Newton > isn't tested either. > >> >> Second, it's been a while, but my recollection is that some >> minimization methods (e.g., conjugate gradient) rely fundamentally on >> the bisection method, correct? ?Is bisection called within minpack.py? > > Doesn't look like it. Newton doesn't seem to be called anywhere either. > >> ?More than once? ?Do you really want to add an import zeros into >> minpack.py and replace bisection everywhere with zeros.bisect? > > No, but I don't think any of the routines are needed there. The c versions > of the zero finders were written back around 2003 (by me) and came later > than the versions in minpack. I think the current placement is a bit of an > historical accident. > >> >> (It's >> a semi-rhetorical question: I know it's not that big a deal to do so, >> "but if it ain't broke, don't fix it"; of course, code divergence is a >> source of future breakage...) >> > > Maintenance can get to be a hassle, but that really isn't a problem yet. > >> Those would be my only concerns about that; as far as moving newton >> into zeros, I assume we'd keep a copy where it is through two release >> cycles and add a "this has been moved" warning, correct? >> > > Actually, I'd like to move zeros too, but that is probably a change too much > ;) > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > Sounds good then: I've deposited my $0.02, and it sounds like they're earning interest. :-) DG From d.l.goldsmith at gmail.com Mon Dec 21 13:57:00 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 21 Dec 2009 10:57:00 -0800 Subject: [SciPy-dev] request docs edit permission In-Reply-To: References: Message-ID: <45d1ab480912211057q3c9c2b4bsc4da2e06b3841849@mail.gmail.com> Thanks for signing on! (Unfortunately, I don't have edit rights granting privileges myself, but I like to at least acknowledge and thank newcomers; Gael or Stefan should be getting to you shortly.) David Goldsmith, Technical Editor Olympia, WA On Mon, Dec 21, 2009 at 10:21 AM, Jonathan March wrote: > Responding to the Documentation Progress Report in the just-released > proceedings (excellent work, many thanks!) > numpy user > username: jdmarch > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From gael.varoquaux at normalesup.org Tue Dec 22 01:17:21 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 22 Dec 2009 07:17:21 +0100 Subject: [SciPy-dev] request docs edit permission In-Reply-To: References: Message-ID: <20091222061721.GA25450@phare.normalesup.org> On Mon, Dec 21, 2009 at 12:21:53PM -0600, Jonathan March wrote: > Responding to the Documentation Progress Report in the just-released > proceedings (excellent work, many thanks!) > numpy user > username: jdmarch Hey Jonathan, I have given you edit rights. Thanks a lot for your interest. Ga?l From aarchiba at physics.mcgill.ca Tue Dec 22 15:01:26 2009 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 22 Dec 2009 16:01:26 -0400 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) Message-ID: Hi, I was recently doing some calculations with scipy.stats.binom().ppf and found a nasty bug ( http://projects.scipy.org/scipy/ticket/1076 ). 
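A quick session along these lines shows the symptom (my exact numbers are in the ticket; the values below are just illustrative):

>>> from scipy.stats import binom
>>> # with such a tiny success probability the median count should
>>> # clearly be 0, but the ppf comes back at the upper bound n instead
>>> binom(100, 1e-10).ppf(0.5)
100.0
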
If the binomial probability is tiny, totally wrong answers emerge. The problem turns out to be in the function scipy.special.bdtrik. There's no documentation anywhere about what this is supposed to do, but from context it's pretty clear it exists to calculate this value. It's a cephes function, and I got a little lost trying to track down its implementation. Maybe someone who's more familiar with cephes could point me to the code, and how to put it in __all__? Thanks, Anne From robert.kern at gmail.com Tue Dec 22 15:10:27 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 Dec 2009 14:10:27 -0600 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) In-Reply-To: References: Message-ID: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> On Tue, Dec 22, 2009 at 14:01, Anne Archibald wrote: > Hi, > > I was recently doing some calculations with scipy.stats.binom().ppf > and found a nasty bug ( http://projects.scipy.org/scipy/ticket/1076 ). > If the binomial probability is tiny, totally wrong answers emerge. The > problem turns out to be in the function scipy.special.bdtrik. There's > no documentation anywhere about what this is supposed to do, but from > context it's pretty clear it exists to calculate this value. It's a > cephes function, and I got a little lost trying to track down its > implementation. Maybe someone who's more familiar with cephes could > point me to the code, and how to put it in __all__? [~]$ cd svn/scipy/scipy/special [special]$ grin -i bdtrik ./_cephesmodule.c: 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); 828 : PyDict_SetItemString(dictionary, "bdtrik", f); 902 : f = PyUFunc_FromFuncAndData(cephes3_functions, cdfnbn2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "nbdtrik", "", 0); 903 : PyDict_SetItemString(dictionary, "nbdtrik", f); .... [special]$ grin cdfbin2 ./_cephesmodule.c: 208 : static void * cdfbin2_data[] = {(void *)cdfbin2_wrap, (void *)cdfbin2_wrap}; 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); ./cdf_wrappers.c: 93 : double cdfbin2_wrap(double p, double xn, double pr) { ./cdf_wrappers.h: 18 : extern double cdfbin2_wrap(double p, double xn, double pr); [special]$ less cdf_wrappers.c # I see that cdfbin2_wrap() wraps the Fortran subroutine CDFBIN. This tells me that it's from the cdflib collection of functions, not the Cephes library itself. [special]$ less cdflib/cdfbin.f -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From warren.weckesser at enthought.com Tue Dec 22 15:40:11 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 22 Dec 2009 14:40:11 -0600 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) In-Reply-To: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> References: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> Message-ID: <4B312EAB.7020605@enthought.com> Robert Kern wrote: > On Tue, Dec 22, 2009 at 14:01, Anne Archibald > wrote: > >> Hi, >> >> I was recently doing some calculations with scipy.stats.binom().ppf >> and found a nasty bug ( http://projects.scipy.org/scipy/ticket/1076 ). >> If the binomial probability is tiny, totally wrong answers emerge. The >> problem turns out to be in the function scipy.special.bdtrik. 
There's >> no documentation anywhere about what this is supposed to do, but from >> context it's pretty clear it exists to calculate this value. It's a >> cephes function, and I got a little lost trying to track down its >> implementation. Maybe someone who's more familiar with cephes could >> point me to the code, and how to put it in __all__? >> > > [~]$ cd svn/scipy/scipy/special > [special]$ grin -i bdtrik > ./_cephesmodule.c: > 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, > cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); > 828 : PyDict_SetItemString(dictionary, "bdtrik", f); > 902 : f = PyUFunc_FromFuncAndData(cephes3_functions, > cdfnbn2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "nbdtrik", "", > 0); > 903 : PyDict_SetItemString(dictionary, "nbdtrik", f); > .... > > [special]$ grin cdfbin2 > ./_cephesmodule.c: > 208 : static void * cdfbin2_data[] = {(void *)cdfbin2_wrap, (void > *)cdfbin2_wrap}; > 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, > cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); > ./cdf_wrappers.c: > 93 : double cdfbin2_wrap(double p, double xn, double pr) { > ./cdf_wrappers.h: > 18 : extern double cdfbin2_wrap(double p, double xn, double pr); > > [special]$ less cdf_wrappers.c > # I see that cdfbin2_wrap() wraps the Fortran subroutine CDFBIN. This > tells me that it's from the cdflib collection of functions, not the > Cephes library itself. > > A quick look at the wrappers and the fortran function makes me think the bug is in the wrappers. If the fortran function CDFBIN returns with STATUS == 1 or STATUS == 2, the wrapper returns BOUND, and the caller would only know something was wrong if scipy_special_print_error_messages is not zero. Warren From peridot.faceted at gmail.com Tue Dec 22 16:49:39 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 22 Dec 2009 17:49:39 -0400 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) In-Reply-To: <4B312EAB.7020605@enthought.com> References: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> <4B312EAB.7020605@enthought.com> Message-ID: 2009/12/22 Warren Weckesser : > Robert Kern wrote: >> On Tue, Dec 22, 2009 at 14:01, Anne Archibald >> wrote: >> >>> Hi, >>> >>> I was recently doing some calculations with scipy.stats.binom().ppf >>> and found a nasty bug ( http://projects.scipy.org/scipy/ticket/1076 ). >>> If the binomial probability is tiny, totally wrong answers emerge. The >>> problem turns out to be in the function scipy.special.bdtrik. There's >>> no documentation anywhere about what this is supposed to do, but from >>> context it's pretty clear it exists to calculate this value. It's a >>> cephes function, and I got a little lost trying to track down its >>> implementation. Maybe someone who's more familiar with cephes could >>> point me to the code, and how to put it in __all__? >>> >> >> [~]$ cd svn/scipy/scipy/special >> [special]$ grin -i bdtrik >> ./_cephesmodule.c: >> ? 827 : ? ? ? ? f = PyUFunc_FromFuncAndData(cephes3_functions, >> cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); >> ? 828 : ? ? ? ? PyDict_SetItemString(dictionary, "bdtrik", f); >> ? 902 : ? ? ? ? f = PyUFunc_FromFuncAndData(cephes3_functions, >> cdfnbn2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "nbdtrik", "", >> 0); >> ? 903 : ? ? ? ? PyDict_SetItemString(dictionary, "nbdtrik", f); >> .... >> >> [special]$ grin cdfbin2 >> ./_cephesmodule.c: >> ? 
208 : static void * cdfbin2_data[] = {(void *)cdfbin2_wrap, (void >> *)cdfbin2_wrap}; >> ? 827 : ? ? ? ? f = PyUFunc_FromFuncAndData(cephes3_functions, >> cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); >> ./cdf_wrappers.c: >> ? ?93 : double cdfbin2_wrap(double p, double xn, double pr) { >> ./cdf_wrappers.h: >> ? ?18 : extern double cdfbin2_wrap(double p, double xn, double pr); >> >> [special]$ less cdf_wrappers.c >> # I see that cdfbin2_wrap() wraps the Fortran subroutine CDFBIN. This >> tells me that it's from the cdflib collection of functions, not the >> Cephes library itself. >> >> > > A quick look at the wrappers and the fortran function makes me think the > bug is in the wrappers. ?If the fortran function CDFBIN returns with > STATUS == 1 or STATUS == 2, the wrapper returns BOUND, and the caller > would only know something was wrong if > scipy_special_print_error_messages is not zero. That does look pretty dodgy, since the fault that happens is that one of the bounds is returned, but it's the wrong one (top bound rather than bottom bound). I'll experiment with it some more once I can get scipy to compile again (the new lambertw won't compile; I think it's a cython version issue). Anne From charlesr.harris at gmail.com Tue Dec 22 17:33:51 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 22 Dec 2009 15:33:51 -0700 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) In-Reply-To: References: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> <4B312EAB.7020605@enthought.com> Message-ID: On Tue, Dec 22, 2009 at 2:49 PM, Anne Archibald wrote: > 2009/12/22 Warren Weckesser : > > Robert Kern wrote: > >> On Tue, Dec 22, 2009 at 14:01, Anne Archibald > >> wrote: > >> > >>> Hi, > >>> > >>> I was recently doing some calculations with scipy.stats.binom().ppf > >>> and found a nasty bug ( http://projects.scipy.org/scipy/ticket/1076 ). > >>> If the binomial probability is tiny, totally wrong answers emerge. The > >>> problem turns out to be in the function scipy.special.bdtrik. There's > >>> no documentation anywhere about what this is supposed to do, but from > >>> context it's pretty clear it exists to calculate this value. It's a > >>> cephes function, and I got a little lost trying to track down its > >>> implementation. Maybe someone who's more familiar with cephes could > >>> point me to the code, and how to put it in __all__? > >>> > >> > >> [~]$ cd svn/scipy/scipy/special > >> [special]$ grin -i bdtrik > >> ./_cephesmodule.c: > >> 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, > >> cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); > >> 828 : PyDict_SetItemString(dictionary, "bdtrik", f); > >> 902 : f = PyUFunc_FromFuncAndData(cephes3_functions, > >> cdfnbn2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "nbdtrik", "", > >> 0); > >> 903 : PyDict_SetItemString(dictionary, "nbdtrik", f); > >> .... > >> > >> [special]$ grin cdfbin2 > >> ./_cephesmodule.c: > >> 208 : static void * cdfbin2_data[] = {(void *)cdfbin2_wrap, (void > >> *)cdfbin2_wrap}; > >> 827 : f = PyUFunc_FromFuncAndData(cephes3_functions, > >> cdfbin2_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "bdtrik", "", 0); > >> ./cdf_wrappers.c: > >> 93 : double cdfbin2_wrap(double p, double xn, double pr) { > >> ./cdf_wrappers.h: > >> 18 : extern double cdfbin2_wrap(double p, double xn, double pr); > >> > >> [special]$ less cdf_wrappers.c > >> # I see that cdfbin2_wrap() wraps the Fortran subroutine CDFBIN. 
This > >> tells me that it's from the cdflib collection of functions, not the > >> Cephes library itself. > >> > >> > > > > A quick look at the wrappers and the fortran function makes me think the > > bug is in the wrappers. If the fortran function CDFBIN returns with > > STATUS == 1 or STATUS == 2, the wrapper returns BOUND, and the caller > > would only know something was wrong if > > scipy_special_print_error_messages is not zero. > > That does look pretty dodgy, since the fault that happens is that one > of the bounds is returned, but it's the wrong one (top bound rather > than bottom bound). I'll experiment with it some more once I can get > scipy to compile again (the new lambertw won't compile; I think it's a > cython version issue). > > There is a recent (about 1 mo old) version of cython out. Maybe we should regenerate everything, although I'm not sure what the new release buys us, I didn't see any release notes. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Tue Dec 22 18:18:49 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 22 Dec 2009 18:18:49 -0500 Subject: [SciPy-dev] scipy.special.bdtrik bug (ticket #1076) In-Reply-To: References: <3d375d730912221210m64e3d23jae4a3ad782362cf0@mail.gmail.com> <4B312EAB.7020605@enthought.com> Message-ID: On 22-Dec-09, at 5:33 PM, Charles R Harris wrote: > There is a recent (about 1 mo old) version of cython out. Maybe we > should regenerate everything, although I'm not sure what the new > release buys us, I didn't see any release notes. http://wiki.cython.org/ReleaseNotes-0.12 C++ complex support sounds useful, but I'm not sure. David From forrest.bao at gmail.com Wed Dec 23 01:57:06 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Wed, 23 Dec 2009 00:57:06 -0600 Subject: [SciPy-dev] how scipy reference guide is generated Message-ID: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> Hi there, I have a (maybe) irrelevant question to ask. I saw the reference guide of scipy is pretty neat. I happen to write a Python library lately, and I need to find a way to publish my docs. I want to know which toolchain is used to generate a doc like scipy reference guide. table of contents: http://docs.scipy.org/doc/scipy/reference/linalg.html details to a function: http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lu.html#scipy.linalg.lu Cheers, Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.sinclair.za at gmail.com Wed Dec 23 03:35:32 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Wed, 23 Dec 2009 10:35:32 +0200 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> Message-ID: <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> >2009/12/23 Forrest Sheng Bao : > I want to know which toolchain is used to generate a doc like scipy > reference guide. The reference guide is generated from ReStructured Text (ReST) files using Sphinx (http://sphinx.pocoo.org/). You can take a look at http://docs.scipy.org/numpy/Front%20Page/ for some more information on how this is achieved. It might also be worth poking around in http://projects.scipy.org/numpy/browser/trunk/doc. You can probably get away with a far less complex setup to generate your own docs from Sphinx. 
Cheers, Scott
From josef.pktd at gmail.com Wed Dec 23 07:11:55 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Dec 2009 07:11:55 -0500 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> Message-ID: <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> On Wed, Dec 23, 2009 at 3:35 AM, Scott Sinclair wrote: >>2009/12/23 Forrest Sheng Bao : >> I want to know which toolchain is used to generate a doc like scipy >> reference guide. > > The reference guide is generated from ReStructured Text (ReST) files > using Sphinx (http://sphinx.pocoo.org/). > > You can take a look at http://docs.scipy.org/numpy/Front%20Page/ for > some more information on how this is achieved. It might also be worth > poking around in http://projects.scipy.org/numpy/browser/trunk/doc. > > You can probably get away with a far less complex setup to generate > your own docs from Sphinx.

There was a thread on the numpy mailing list on Sept 21, "numpy docstring sphinx pre-processors", that explained a bit the use of numpydoc for other projects.

BTW: I like the new style sheet of the scipy docs. I haven't looked at it in a while.

Josef

> > Cheers, > Scott > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev >
From forrest.bao at gmail.com Wed Dec 23 22:04:34 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Wed, 23 Dec 2009 21:04:34 -0600 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> Message-ID: <889df5f00912231904q500164ecx508ac5a2c05345de@mail.gmail.com>

I am really struggling now. I commented my Python code with reST but I couldn't generate the expected HTML file. I expect that I can combine my code and comments together in one file and use Sphinx to pull just the documentation out. I have a very simple file, index.py, containing only one dummy function:

def add(a, b):
    """The sum of two numbers.
    Parameters
    ----------
    a: an integer
    b: an integer
    Returns
    -------
    a + b, the sum of a and b.
    """
    return a+b

I set up a project under the current directory using sphinx-quickstart. And then, I run "make html":

$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v0.6.3
loading pickled environment... done
building [html]: targets for 0 source files that are out of date
updating environment: [config changed] 3 added, 0 changed, 0 removed
reading sources... [ 66%] index
reST markup error:
/forrest/work/BME/NKdata/Features/sphinx/index.py:5: (SEVERE/4) Unexpected section title.
Parameters
----------
make: *** [html] Error 1

I do not understand why. Why is it unexpected? And, even if I delete the ---------, the triple quotes bracing my comments in the Python file never disappear in the final doc.

Cheers, Forrest -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From james.bergstra at gmail.com Wed Dec 23 22:39:50 2009 From: james.bergstra at gmail.com (James Bergstra) Date: Wed, 23 Dec 2009 22:39:50 -0500 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <4B2FA560.7060502@ntc.zcu.cz> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> Message-ID: <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> On Mon, Dec 21, 2009 at 11:42 AM, Robert Cimrman wrote: > In this case, the issue could be side-stepped by replacing the specialized > functions, e.g. all Factor.solve_*() by a single function, e.g. Factor.solve( > ..., mode='A') with a 'mode' argument. What do you think? Typing > Factor.solve_P(b) or Factor.solve(b, 'P') > seems of the same complexity/readability to me.

FWIW, Factor.solve_P can be tab-completed, whereas Factor.solve(b, 'P') cannot. Also, the second form suggests [to me] that any letter or string could go in place of 'P'. I presume from the two options that there are actually just a few constants that make sense instead of 'P', but that wouldn't be clear from the second form alone. Maybe the sense of a cluttered interface can be addressed by better formatting of the docs (??)

James -- http://www-etud.iro.umontreal.ca/~bergstrj
From josef.pktd at gmail.com Wed Dec 23 22:47:46 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Dec 2009 22:47:46 -0500 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <889df5f00912231904q500164ecx508ac5a2c05345de@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> <889df5f00912231904q500164ecx508ac5a2c05345de@mail.gmail.com> Message-ID: <1cd32cbb0912231947t75a00ceej12ee7861a66c3f20@mail.gmail.com> On Wed, Dec 23, 2009 at 10:04 PM, Forrest Sheng Bao wrote: > I am really struggling now. I commented my Python code with reST but I > couldn't generate the expected HTML file. I expect that I can combine my > code and comments together in one file and use Sphinx to pull just the > documentation out. > I have a very simple file, index.py, containing only one dummy function:

I don't know what your layout is, but the index file in doc/source should be a pure rst file, not a python script. The python module files should be accessible through the python path. For some of the numpy specific rst directives and formatting you need the numpy sphinx plugins, but I don't remember if section headers in doc strings belong to that group. The best way to get started is to look at or copy some existing examples; pick any package that uses sphinx. Alternatively, I saw recently a package on pypi that creates your package structure including the sphinx doc layout, but unfortunately I don't remember the name and I didn't look at it.

Josef

> def add(a, b):
>     """The sum of two numbers.
>     Parameters
>     ----------
>     a: an integer
>     b: an integer
>     Returns
>     -------
>     a + b, the sum of a and b.
>     """
>     return a+b
> I set up a project under the current directory using sphinx-quickstart. And
> then, I run "make html":
> $ make html
> sphinx-build -b html -d _build/doctrees . _build/html
> Running Sphinx v0.6.3
> loading pickled environment... done
> building [html]: targets for 0 source files that are out of date
> updating environment: [config changed] 3 added, 0 changed, 0 removed
> reading sources... [ 66%] index
> reST markup error:
> /forrest/work/BME/NKdata/Features/sphinx/index.py:5: (SEVERE/4) Unexpected section title.
> Parameters
> ----------
> make: *** [html] Error 1
> I do not understand why. Why is it unexpected? And, even if I delete the
> ---------, the triple quotes bracing my comments in the Python file never
> disappear in the final doc.
> Cheers,
> Forrest
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
From forrest.bao at gmail.com Wed Dec 23 23:29:28 2009 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Wed, 23 Dec 2009 22:29:28 -0600 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <1cd32cbb0912231947t75a00ceej12ee7861a66c3f20@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> <889df5f00912231904q500164ecx508ac5a2c05345de@mail.gmail.com> Message-ID: <889df5f00912232029t621c62a9xfb319fcdb44cbbb4@mail.gmail.com>

Hi Josef,

Thanks for your answer. I am kinda confused and lost, and a little bit anxious about whether I can get through the process of converting a Python script to an HTML file. Maybe I didn't formulate my questions clearly at the beginning. I want to generate HTML or LaTeX docs from comments in my Python files, where my comments are in reST format.

I saw that the code in the scipy doc pages is in the same format as my code. The page http://docs.scipy.org/scipy/source/scipy/dist/lib64/python2.4/site-packages/scipy/linalg/decomp.py contains comments in reST format. I assumed that I can generate an HTML page from Python code with comments and my comments will automatically become the contents of the HTML page. That's why I specified such a file as my master file when running sphinx-quickstart, though it seems that I am wrong.

I have been reading this guideline all night without a clue: http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines At the end of that page, there is an example, example.py, which is Python code commented with reST docstrings. But I couldn't find any documents telling me what to do with that file. There is another example at the end of that page, HOWTO_BUILD_DOCS.txt, which is in reST syntax. I can convert it using the rst2html command, but I fail to use Sphinx to generate HTML as the document itself explains.

I did "easy_install numpydoc" and set "extensions = ['numpydoc']" in conf.py. But when I tried "make html," I saw this:

$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v0.6.3
Extension error:
Unknown event name: autodoc-process-docstring
make: *** [html] Error 1

Maybe numpydoc has bugs?

On Wed, Dec 23, 2009 at 9:47 PM, wrote: > I don't know what your layout is, but the index file in doc/source > should be a pure rst file, not a python script. The python module files > should be accessible through the python path. For some of the numpy > specific rst directives and formatting you need the numpy sphinx > plugins, but I don't remember if section headers in doc strings belong > to that group.

I couldn't find a doc telling me what that rst file should contain and how my python modules will be included in such an rst file.
I feel that Sphinx cannot process Python files directly, only reST files. But how can I convert a Python script into a reST file? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From josef.pktd at gmail.com Thu Dec 24 00:01:24 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Dec 2009 00:01:24 -0500 Subject: [SciPy-dev] how scipy reference guide is generated In-Reply-To: <889df5f00912232029t621c62a9xfb319fcdb44cbbb4@mail.gmail.com> References: <889df5f00912222257j5872e42bqff8c36949a730917@mail.gmail.com> <6a17e9ee0912230035i4ace03a1v2f17140dc000e6b0@mail.gmail.com> <1cd32cbb0912230411w55ae4c8dif2860155c1e3492f@mail.gmail.com> <889df5f00912231904q500164ecx508ac5a2c05345de@mail.gmail.com> <889df5f00912232029t621c62a9xfb319fcdb44cbbb4@mail.gmail.com> Message-ID: <1cd32cbb0912232101q7fbfae44mbccf3f0f7cfb9502@mail.gmail.com> On Wed, Dec 23, 2009 at 11:29 PM, Forrest Sheng Bao wrote: > Hi Josef, > Thanks for your answer. I am kinda confused and lost, and a little bit > anxious about whether I can get through the process of converting a Python > script to an HTML file. > Maybe I didn't formulate my questions clearly at the beginning. I want to > generate HTML or LaTeX docs from comments in my Python files, where my > comments are in reST format. > I saw that the code in the scipy doc pages is in the same format as my code. > The page > http://docs.scipy.org/scipy/source/scipy/dist/lib64/python2.4/site-packages/scipy/linalg/decomp.py > contains comments in reST format. I assumed that I can generate an HTML page > from Python code with comments and my comments will automatically become the > contents of the HTML page. That's why I specified such a file as my master > file when running sphinx-quickstart, though it seems that I am wrong.

Here is the index.rst of scipy; the main part is that it specifies the toctree, which refers to the individual sections' rst files: http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/files/head%3A/scikits/statsmodels/docs/source/ Each individual .rst file then refers with the rst directives to the actual source (.py module files), e.g. http://docs.scipy.org/scipy/source/scipy/doc/source/interpolate.rst#1 The automodule, autosummary, autoclass, ... directives then tell sphinx which docstrings from the source should be included in the generated docs.

The tutorial here http://matplotlib.sourceforge.net/sampledoc/index.html also looks useful; although I haven't read it, it is referenced from the official docs of sphinx. The numpy CodingStyleGuidelines are more about the formatting and content of the docstrings, not really about the "getting started" layout of sphinx-generated docs. If you want to see a more verbose example, you can look at some of the scikits; that's what I did when I set up this: http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/files/head%3A/scikits/statsmodels/docs/source/

I hope that gets you a bit further.

Josef

> I have been reading this guideline all night without a
> clue: http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines
> At the end of that page, there is an example, example.py, which is Python
> code commented with reST docstrings. But I couldn't find any documents telling
> me what to do with that file.
> There is another example at the end of that page, HOWTO_BUILD_DOCS.txt,
> which is in reST syntax. I can convert it using the rst2html command, but I
> fail to use Sphinx to generate HTML as the document itself explains.
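(To make the toctree/automodule mechanics above concrete: for a hypothetical module mymod.py, doc/source/index.rst would contain

Contents
========

.. toctree::
   :maxdepth: 2

   mymod

and doc/source/mymod.rst would contain

The mymod module
================

.. automodule:: mymod
   :members:

The directive names are standard sphinx; only the module name is made up. mymod has to be importable when sphinx runs.)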
I did "easy_install numpydoc" > and set "extensions = ['numpydoc']" in conf.py. But when I tried to "make > html," I saw this: > > $ make html > sphinx-build -b html -d _build/doctrees ? . _build/html > Running Sphinx v0.6.3 > Extension error: > Unknown event name: autodoc-process-docstring > make: *** [html] Error 1 > > Maybe that numpydoc has bugs? > On Wed, Dec 23, 2009 at 9:47 PM, wrote: >> >> I don't know what your layout is, but the index file in doc/source >> should be a pure rst file not a python script. The python module files >> should be accessible through the python path. For some of the numpy >> specific rst directives and formatting you need the numpy sphinx >> plugins, but I don't remember if section headers in doc strings belong >> to that group. > > > I couldn't find a doc telling me what that rst file should contain and how > my python modules will be included in such an rst file. > I feel that Sphinx can not process any Python file but only reST file. > But how can I convert a Python script into reST file? > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From cimrman3 at ntc.zcu.cz Thu Dec 24 10:00:21 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 24 Dec 2009 16:00:21 +0100 (CET) Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> Message-ID: On Wed, 23 Dec 2009, James Bergstra wrote: > On Mon, Dec 21, 2009 at 11:42 AM, Robert Cimrman wrote: >> In this case, the issue could be side-stepped by replacing the specialized >> functions, e.g. all Factor.solve_*() by a single function, e.g. Factor.solve( >> ..., mode='A') with a 'mode' argument. What do you think? Typing >> Factor.solve_P(b) or Factor.solve(b, 'P') >> seems of the same complexity/readability to me. > > FWIW, Factor.solve_P can be tab-completed, whereas Factor.solve(b, 'P') cannot. > Also, the second form suggests [to me] that any letter or string could > go in place of 'P'. I presume from the two options that actually > there are just a few constants that make sense instead of 'P', but > that wouldn't be clear from the second form alone. > > Maybe the sense of a cluttered interface can be addressed by better > formatting of the docs (??) The tab completion is a good point, I did not think about that. In the one function case the possible modes would be listed in its docstring only. But one has to read docstrings anyway (well, sometimes :)) and all the related functionality would be in one place which is IMHO good. Just my 2 cents. I would like to hear some feedback to my original naming style question, as I face same issues in my projects too. The question is about using function names that mix underscores with capital letters, e.g. solve_P(). This style is marked "bad" in the Python docs, but in linear algebra, matrices are commonly denoted by capital letters. So what do you think? cheers, r. PS: Merry Christmas! 
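PS2: for concreteness, here is how the two styles under discussion look in use -- the method name follows the scikits.sparse manual, everything else (including the proposed mode-based spelling) is just a sketch:

# current style: one method per matrix, capital letter in the name
x = factor.solve_P(b)

# proposed alternative: a single method with a mode argument
x = factor.solve(b, mode='P')

Both would solve the same subproblem of the factorization; the question is purely one of spelling.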
From gael.varoquaux at normalesup.org Thu Dec 24 10:40:43 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 24 Dec 2009 16:40:43 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> Message-ID: <20091224154043.GA11988@phare.normalesup.org> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: > I would like to hear some feedback to my original naming style question, > as I face same issues in my projects too. The question is about using > function names that mix underscores with capital letters, e.g. > solve_P(). This style is marked "bad" in the Python docs, but in linear > algebra, matrices are commonly denoted by capital letters. So what do you > think? Bad. Ga?l From ondrej at certik.cz Thu Dec 24 10:49:41 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 24 Dec 2009 16:49:41 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <20091224154043.GA11988@phare.normalesup.org> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> Message-ID: <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux wrote: > On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >> I would like to hear some feedback to my original naming style question, >> as I face same issues in my projects too. The question is about using >> function names that mix underscores with capital letters, e.g. >> solve_P(). This style is marked "bad" in the Python docs, but in linear >> algebra, matrices are commonly denoted by capital letters. So what do you >> think? > > Bad. Good. Ondrej From josef.pktd at gmail.com Thu Dec 24 12:57:29 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Dec 2009 12:57:29 -0500 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> Message-ID: <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> On Thu, Dec 24, 2009 at 10:49 AM, Ondrej Certik wrote: > On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux > wrote: >> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >>> I would like to hear some feedback to my original naming style question, >>> as I face same issues in my projects too. The question is about using >>> function names that mix underscores with capital letters, e.g. >>> solve_P(). This style is marked "bad" in the Python docs, but in linear >>> algebra, matrices are commonly denoted by capital letters. So what do you >>> think? >> >> Bad. > > Good. Bad in general. (If you mix arrays and matrices it might be helpful internally as a reminder.) 
And I like informative names, where I don't need a book or paper to follow the notation, and go back to the table of shortcut definitions or docstrings every five minutes. My short term memory is too short to remember what A,B,C, P, X, Y, ... and phi, psi, omega,... mean in different contexts. But this depends on whether you program for insiders or the general public. Josef > > Ondrej > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From gael.varoquaux at normalesup.org Thu Dec 24 13:27:57 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 24 Dec 2009 19:27:57 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> Message-ID: <20091224182757.GA30318@phare.normalesup.org> On Thu, Dec 24, 2009 at 12:57:29PM -0500, josef.pktd at gmail.com wrote: > And I like informative names, where I don't need a book or paper to > follow the notation, and go back to the table of shortcut definitions > or docstrings every five minutes. My short term memory is too short to > remember what A,B,C, P, X, Y, ... and phi, psi, omega,... mean in > different contexts. But this depends on whether you program for > insiders or the general public. Oh Yes, longer names, that make sens. Ga?l From njs at pobox.com Thu Dec 24 18:06:14 2009 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 24 Dec 2009 15:06:14 -0800 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> Message-ID: <961fa2b40912241506i3e06f14axed8ad25e43a2a86c@mail.gmail.com> On Thu, Dec 24, 2009 at 9:57 AM, wrote: > On Thu, Dec 24, 2009 at 10:49 AM, Ondrej Certik wrote: >> On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux >> wrote: >>> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >>>> I would like to hear some feedback to my original naming style question, >>>> as I face same issues in my projects too. The question is about using >>>> function names that mix underscores with capital letters, e.g. >>>> solve_P(). This style is marked "bad" in the Python docs, but in linear >>>> algebra, matrices are commonly denoted by capital letters. So what do you >>>> think? >>> >>> Bad. >> >> Good. > > Bad in general. (If you mix arrays and matrices it might be helpful > internally as a reminder.) > > And I like informative names, where I don't need a book or paper to > follow the notation, and go back to the table of shortcut definitions > or docstrings every five minutes. My short term memory is too short to > remember what A,B,C, P, X, Y, ... and phi, psi, omega,... 
mean in > different contexts. But this depends on whether you program for > insiders or the general public. I agree with all the general principles here, but still stand by the current function names in this case. PEP8 and all that are a tool to help us write clear and usable code, and should be ignored when their prescriptions don't help us do that. Obviously when this happens is a matter of taste... But here, we're talking about object that represents the factorization LDL' = PAP' or LL' = PAP' and we need to refer to expressions like DL'. These are standard names in the literature, used consistently through the docs, and what else can you call the parts of a Cholesky factor that would be more informative than confusing? Esp. since the solve_* interfaces are pretty much expert only; people who just want to solve a positive-definite system won't see them. -- Nathaniel From ondrej at certik.cz Sat Dec 26 04:25:17 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Sat, 26 Dec 2009 10:25:17 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <961fa2b40912241506i3e06f14axed8ad25e43a2a86c@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> <961fa2b40912241506i3e06f14axed8ad25e43a2a86c@mail.gmail.com> Message-ID: <85b5c3130912260125g67ecddbid2a75f204aaedd05@mail.gmail.com> On Fri, Dec 25, 2009 at 12:06 AM, Nathaniel Smith wrote: > On Thu, Dec 24, 2009 at 9:57 AM, ? wrote: >> On Thu, Dec 24, 2009 at 10:49 AM, Ondrej Certik wrote: >>> On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux >>> wrote: >>>> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >>>>> I would like to hear some feedback to my original naming style question, >>>>> as I face same issues in my projects too. The question is about using >>>>> function names that mix underscores with capital letters, e.g. >>>>> solve_P(). This style is marked "bad" in the Python docs, but in linear >>>>> algebra, matrices are commonly denoted by capital letters. So what do you >>>>> think? >>>> >>>> Bad. >>> >>> Good. >> >> Bad in general. (If you mix arrays and matrices it might be helpful >> internally as a reminder.) >> >> And I like informative names, where I don't need a book or paper to >> follow the notation, and go back to the table of shortcut definitions >> or docstrings every five minutes. My short term memory is too short to >> remember what A,B,C, P, X, Y, ... and phi, psi, omega,... mean in >> different contexts. But this depends on whether you program for >> insiders or the general public. > > I agree with all the general principles here, but still stand by the > current function names in this case. PEP8 and all that are a tool to > help us write clear and usable code, and should be ignored when their > prescriptions don't help us do that. Obviously when this happens is a > matter of taste... But here, we're talking about object that > represents the factorization > ?LDL' = PAP' > or > ?LL' = PAP' > and we need to refer to expressions like DL'. > > These are standard names in the literature, used consistently through > the docs, and what else can you call the parts of a Cholesky factor > that would be more informative than confusing? Esp. 
since the solve_* > interfaces are pretty much expert only; people who just want to solve > a positive-definite system won't see them. Yes, I agree with you. Ondrej From josef.pktd at gmail.com Sat Dec 26 08:37:53 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 26 Dec 2009 08:37:53 -0500 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <85b5c3130912260125g67ecddbid2a75f204aaedd05@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B28934C.5070606@student.matnat.uio.no> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> <961fa2b40912241506i3e06f14axed8ad25e43a2a86c@mail.gmail.com> <85b5c3130912260125g67ecddbid2a75f204aaedd05@mail.gmail.com> Message-ID: <1cd32cbb0912260537j544cc563se6df25031a839d82@mail.gmail.com> On Sat, Dec 26, 2009 at 4:25 AM, Ondrej Certik wrote: > On Fri, Dec 25, 2009 at 12:06 AM, Nathaniel Smith wrote: >> On Thu, Dec 24, 2009 at 9:57 AM, ? wrote: >>> On Thu, Dec 24, 2009 at 10:49 AM, Ondrej Certik wrote: >>>> On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux >>>> wrote: >>>>> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >>>>>> I would like to hear some feedback to my original naming style question, >>>>>> as I face same issues in my projects too. The question is about using >>>>>> function names that mix underscores with capital letters, e.g. >>>>>> solve_P(). This style is marked "bad" in the Python docs, but in linear >>>>>> algebra, matrices are commonly denoted by capital letters. So what do you >>>>>> think? >>>>> >>>>> Bad. >>>> >>>> Good. >>> >>> Bad in general. (If you mix arrays and matrices it might be helpful >>> internally as a reminder.) >>> >>> And I like informative names, where I don't need a book or paper to >>> follow the notation, and go back to the table of shortcut definitions >>> or docstrings every five minutes. My short term memory is too short to >>> remember what A,B,C, P, X, Y, ... and phi, psi, omega,... mean in >>> different contexts. But this depends on whether you program for >>> insiders or the general public. >> >> I agree with all the general principles here, but still stand by the >> current function names in this case. PEP8 and all that are a tool to >> help us write clear and usable code, and should be ignored when their >> prescriptions don't help us do that. Obviously when this happens is a >> matter of taste... But here, we're talking about object that >> represents the factorization >> ?LDL' = PAP' >> or >> ?LL' = PAP' >> and we need to refer to expressions like DL'. >> >> These are standard names in the literature, used consistently through >> the docs, and what else can you call the parts of a Cholesky factor >> that would be more informative than confusing? Esp. since the solve_* >> interfaces are pretty much expert only; people who just want to solve >> a positive-definite system won't see them. > > Yes, I agree with you. > > Ondrej I was answering the general question of Robert. In this case, I also agree, especially because they are methods and not functions. I finally looked at the manual, which looks very informative. 
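(To fix notation for the next point: the same matrix has both an LL' and an LDL' factorization, with two different matrices called L. A small dense-numpy sketch of how they relate -- purely illustrative, this is not scikits.sparse code:

import numpy as np

A = np.array([[4., 2.],
              [2., 3.]])
L = np.linalg.cholesky(A)   # A = L L', diag(L) != 1 in general
d = np.diag(L)
L1 = L / d                  # unit lower triangular L of the LDL' form
D = np.diag(d**2)
# the same A, now written as A = L1 D L1'
assert np.allclose(np.dot(L1, np.dot(D, L1.T)), A)

)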
The only method names, I got stuck with, are Factor.L_D() versus Factor.LD() and warning with Factor.L() Factor.L() is Factor.L_inLL Factor.L_D() is Factor.L_inLDL and does L in Factor.solve_L() refer to L in LL or in LDL ? or is it irrelevant? just a comment from a non-expert Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From cimrman3 at ntc.zcu.cz Sat Dec 26 09:44:24 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Sat, 26 Dec 2009 15:44:24 +0100 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) Message-ID: <20091226154424.vq8drmamsowgwkc4@webmail.zcu.cz> Quoting josef.pktd at gmail.com: > On Sat, Dec 26, 2009 at 4:25 AM, Ondrej Certik wrote: >> On Fri, Dec 25, 2009 at 12:06 AM, Nathaniel Smith wrote: >>> On Thu, Dec 24, 2009 at 9:57 AM, wrote: >>>> On Thu, Dec 24, 2009 at 10:49 AM, Ondrej Certik wrote: >>>>> On Thu, Dec 24, 2009 at 4:40 PM, Gael Varoquaux >>>>> wrote: >>>>>> On Thu, Dec 24, 2009 at 04:00:21PM +0100, Robert Cimrman wrote: >>>>>>> I would like to hear some feedback to my original naming style >>>>>>> question, >>>>>>> as I face same issues in my projects too. The question is about using >>>>>>> function names that mix underscores with capital letters, e.g. >>>>>>> solve_P(). This style is marked "bad" in the Python docs, but in linear >>>>>>> algebra, matrices are commonly denoted by capital letters. So >>>>>>> what do you >>>>>>> think? >>>>>> >>>>>> Bad. >>>>> >>>>> Good. >>>> >>>> Bad in general. (If you mix arrays and matrices it might be helpful >>>> internally as a reminder.) >>>> >>>> And I like informative names, where I don't need a book or paper to >>>> follow the notation, and go back to the table of shortcut definitions >>>> or docstrings every five minutes. My short term memory is too short to >>>> remember what A,B,C, P, X, Y, ... and phi, psi, omega,... mean in >>>> different contexts. But this depends on whether you program for >>>> insiders or the general public. >>> >>> I agree with all the general principles here, but still stand by the >>> current function names in this case. PEP8 and all that are a tool to >>> help us write clear and usable code, and should be ignored when their >>> prescriptions don't help us do that. Obviously when this happens is a >>> matter of taste... But here, we're talking about object that >>> represents the factorization >>> LDL' = PAP' >>> or >>> LL' = PAP' >>> and we need to refer to expressions like DL'. >>> >>> These are standard names in the literature, used consistently through >>> the docs, and what else can you call the parts of a Cholesky factor >>> that would be more informative than confusing? Esp. since the solve_* >>> interfaces are pretty much expert only; people who just want to solve >>> a positive-definite system won't see them. >> >> Yes, I agree with you. >> >> Ondrej > > I was answering the general question of Robert. In this case, I also > agree, especially because they are methods and not functions. > I finally looked at the manual, which looks very informative. > > The only method names, I got stuck with, are > > Factor.L_D() versus Factor.LD() and warning with Factor.L() > > Factor.L() is Factor.L_inLL > Factor.L_D() is Factor.L_inLDL > > and does L in Factor.solve_L() refer to L in LL or in LDL ? or is it > irrelevant? 
> > just a comment from a non-expert > > Josef Yes, I also prefer the above instead of some artificial long names in this particular case of the cholmod wrappers. Thanks also for all the opinions on my general question. My only concern regarding the cholmod wrappers is whether to collect several similar groups of functions into a function per group, or not. But I am ok with the way it is now. r. From millman at berkeley.edu Sat Dec 26 11:24:20 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 26 Dec 2009 21:54:20 +0530 Subject: [SciPy-dev] XXX in basics.rst In-Reply-To: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com> References: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com> Message-ID: On Thu, Dec 17, 2009 at 4:35 AM, David Goldsmith wrote: > "Note XXX: there is overlap between this text extracted from numpy.doc > and "Guide to Numpy" chapter 2. Needs combining?" > > Opinions? ?What, ultimately, is to be the relationship between "Guide > to Numpy" and the "real-time" docs: do we want to preserve this > duplication of content, e.g., for user convenience, or consolidate? > If the latter, does that imply that we either: a) delete this file > (basics.rst) and direct users seeking this content to GtN, Chap. 2; or > b) ask Travis to modify (or for permission to modify) GtN; or is there > another "combining" alternative I'm not seeing? GtN is a static document and all new documentation will be written using the new documentation system. Travis has granted permission to take any text you want from GtN when working on the new documentation. Once the new documentation progresses to the point that it surpasses the GtN, we can mark the new user and reference guides as "Mature" and mark GtN as "Old" or something similar. Regards, -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From d.l.goldsmith at gmail.com Sat Dec 26 12:40:54 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 26 Dec 2009 09:40:54 -0800 Subject: [SciPy-dev] XXX in basics.rst In-Reply-To: References: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com> Message-ID: <45d1ab480912260940o7b649da6g9acc712594e29398@mail.gmail.com> On Sat, Dec 26, 2009 at 8:24 AM, Jarrod Millman wrote: > On Thu, Dec 17, 2009 at 4:35 AM, David Goldsmith > wrote: >> "Note XXX: there is overlap between this text extracted from numpy.doc >> and "Guide to Numpy" chapter 2. Needs combining?" >> >> Opinions? ?What, ultimately, is to be the relationship between "Guide >> to Numpy" and the "real-time" docs: do we want to preserve this >> duplication of content, e.g., for user convenience, or consolidate? >> If the latter, does that imply that we either: a) delete this file >> (basics.rst) and direct users seeking this content to GtN, Chap. 2; or >> b) ask Travis to modify (or for permission to modify) GtN; or is there >> another "combining" alternative I'm not seeing? > > GtN is a static document and all new documentation will be written > using the new documentation system. ?Travis has granted permission to > take any text you want from GtN when working on the new documentation. > ?Once the new documentation progresses to the point that it surpasses > the GtN, we can mark the new user and reference guides as "Mature" and > mark GtN as "Old" or something similar. > > Regards, > > -- > Jarrod Millman Thanks, Jarrod. IIUYC, yours is a (de facto) "vote" "for" maintaining the duplication of content. 
DG From njs at pobox.com Sun Dec 27 03:39:38 2009 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 27 Dec 2009 00:39:38 -0800 Subject: [SciPy-dev] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) In-Reply-To: <1cd32cbb0912260537j544cc563se6df25031a839d82@mail.gmail.com> References: <20091215232344.p8ndemblggcgogow@webmail.zcu.cz> <4B2FA560.7060502@ntc.zcu.cz> <7f1eaee30912231939v5785469v1f5ca77ea37e5ecd@mail.gmail.com> <20091224154043.GA11988@phare.normalesup.org> <85b5c3130912240749o2bbfc294p28ebbb8c05b1a8b5@mail.gmail.com> <1cd32cbb0912240957t2728cf28od07682fc0904e671@mail.gmail.com> <961fa2b40912241506i3e06f14axed8ad25e43a2a86c@mail.gmail.com> <85b5c3130912260125g67ecddbid2a75f204aaedd05@mail.gmail.com> <1cd32cbb0912260537j544cc563se6df25031a839d82@mail.gmail.com> Message-ID: <961fa2b40912270039l2ab515fanb0a05ac007f727ab@mail.gmail.com> On Sat, Dec 26, 2009 at 5:37 AM, wrote: > I was answering the general question of Robert. In this case, I also > agree, especially because they are methods and not functions. > I finally looked at the manual, which looks very informative. Thanks! And I very much appreciate the feedback from everyone here, it's very helpful -- docs are hard to get right without review! > The only method names, I got stuck with, are > > Factor.L_D() versus Factor.LD() and warning with Factor.L() > > Factor.L() ?is Factor.L_inLL > Factor.L_D() is Factor.L_inLDL > > and does L in Factor.solve_L() ?refer to L in LL or in LDL ? ?or is it > irrelevant? There is a paragraph at the top of the "Solving equations" section which specifies that all the solve_* methods work in the LDL' form, but in general... I totally agree, the LL' vs. LDL' distinction -- where there are two distinct matrices both called "L" -- is very confusing. I am not sure what to do about that, because it's the standard notation that's broken here, but I am strongly tempted to declare that in *my* docs one of them is called something else. However, I'm not sure what to call it instead -- I suppose "M", as the next letter after "L", unless someone has a better idea... There is a similar problem, though less extreme, for naming the matrix that is being factored -- the CHOLMOD docs refer to both the matrix being factored and the matrix you pass in to the library by the name "A", even though this may not be the same (you can have CHOLMOD square and add a constant to the matrix you pass in before factoring). On second thought, these should perhaps have different names as well. -- Nathaniel From millman at berkeley.edu Mon Dec 28 05:11:42 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 28 Dec 2009 15:41:42 +0530 Subject: [SciPy-dev] XXX in basics.rst In-Reply-To: <45d1ab480912260940o7b649da6g9acc712594e29398@mail.gmail.com> References: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com> <45d1ab480912260940o7b649da6g9acc712594e29398@mail.gmail.com> Message-ID: On Sat, Dec 26, 2009 at 11:10 PM, David Goldsmith wrote: > Thanks, Jarrod. ?IIUYC, yours is a (de facto) "vote" "for" maintaining > the duplication of content. It isn't so much a vote as an explanation of the situation. If you prevent people from duplicating information in the new system, then we will never be in a position to stop using the old documentation. During this transition period, there will be of necessity some time when users will need to use both the old documentation and the new documentation. 
--
Jarrod Millman
Helen Wills Neuroscience Institute
10 Giannini Hall, UC Berkeley
http://cirl.berkeley.edu/

From cournape at gmail.com Mon Dec 28 09:03:14 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 28 Dec 2009 23:03:14 +0900
Subject: [SciPy-dev] Announcing toydist, improving distribution and packaging situation
Message-ID: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com>

(warning, long post)

Hi there,

As some of you already know, the packaging and distribution of
scientific python packages has been a constant source of frustration.
Open source is about making it easy for anyone to use software how they
see fit, and I think the python packaging infrastructure has not been
very successful for people not intimately familiar with python. A few
weeks ago, after Guido visited Berkeley and was told how those issues
were still there for the scientific community, he wrote an email asking
whether current efforts on distutils-sig will be enough (see
http://aspn.activestate.com/ASPN/Mail/Message/distutils-sig/3775972).

Several of us have been participating in this discussion, but I feel
like the divide between current efforts on distutils-sig and us (the
SciPy community) is not getting smaller. At best, their efforts will
mean more work for us to track the new distribute fork, and more
likely, it will be all for nothing as it won't solve any deep issue. To
be honest, most of what is considered on distutils-sig sounds like
anti-goals to me.

Instead of keeping up with the frustrating process of "improving"
distutils, I think we have enough smart people and manpower in the
scientific community to go with our own solution. I am convinced it is
doable because R or haskell, with a much smaller community than python,
managed to pull off something which is miles ahead of pypi. The SciPy
community is hopefully big enough so that a SciPy-specific solution may
reach critical mass.

Ideally, I wish we had something with the following capabilities:
 - easy to understand tools
 - http-based package repository ala CRAN, which would be easy to
mirror and backup (through rsync-like tools)
 - decoupling the building, packaging and distribution of code and data
 - reliable install/uninstall/query of what is installed locally
 - facilities for building windows/mac os x binaries
 - making the life of OS vendors (Linux, *BSD, etc...) easier

The packaging part
==============

Speaking is easy, so I started coding part of this toolset, called
toydist (temporary name), which I presented at Scipy India a few days
ago:

http://github.com/cournape/toydist/

Toydist is more or less a rip off of cabal
(http://www.haskell.org/cabal/), and consists of three parts:
 - a core which builds a package description from a declarative file
similar to cabal files. The file is almost purely declarative, and can
be parsed so that no arbitrary code is executed, thus making it easy to
sandbox package builds (e.g. on a build farm).
 - a set of command line tools to configure, build, install, build
installers (egg only for now) etc... from the declarative file
 - backward compatibility tools: a tool to convert existing setup.py to
the new format has been written, and a tool to use distutils through
the new format for backward compatibility with complex distutils
extensions should be relatively easy.

The core idea is to make the format just rich enough to describe most
packages out there, but simple enough so interfacing it with external
tools is possible and reliable.
As a regular contributor to scons, I am all too aware that a build tool
is a very complex beast to get right, and repeating their efforts does
not make sense. Typically, I envision that complex packages such as
numpy, scipy or matplotlib would use make/waf/scons for the build - in
a sense, toydist is written so that writing something like numscons
would be easier. OTOH, most if not all scikits should be buildable from
a purely declarative file.

To give you a feel of the format, here is a snippet for the grin
package from Robert K. (automatically converted):

Name: grin
Version: 1.1.1
Summary: A grep program configured the way I like it.
Description:
    ====
    grin
    ====

    I wrote grin to help me search directories full of source code. The
    venerable GNU grep_ and find_ are great tools, but they fall just a
    little short for my normal use cases.
License: BSD
Platforms: UNKNOWN
Classifiers:
    License :: OSI Approved :: BSD License,
    Development Status :: 5 - Production/Stable,
    Environment :: Console,
    Intended Audience :: Developers,
    Operating System :: OS Independent,
    Programming Language :: Python,
    Topic :: Utilities,
ExtraSourceFiles:
    README.txt,
    setup.cfg,
    setup.py,

Library:
    InstallDepends:
        argparse,
    Modules:
        grin,

Executable: grin
    module: grin
    function: grin_main

Executable: grind
    module: grin
    function: grind_main

Although still very much experimental at this point, toydist already
makes some things much easier than with distutils/setuptools:
 - path customization for any target can be done easily: you can easily
add an option in the file so that configure --mynewdir=value works and
is accessible at every step.
 - making packages FHS compliant is not a PITA anymore, and the scheme
can be adapted to any OS, be it traditional FHS-like unix, mac os x,
windows, etc...
 - All the options are accessible at every step (no more distutils
commands nonsense)
 - data files can finally be handled correctly and consistently,
instead of the 5 or 6 magic methods currently available in
distutils/setuptools/numpy.distutils
 - building eggs does not involve setuptools anymore
 - not much coupling between package description and build
infrastructure (building extensions is actually done through distutils
ATM).

Repository
========

The goal here is to have something like CRAN
(http://cran.r-project.org/web/views/), ideally with a build farm so
that whenever anyone submits a package to our repository, it would
automatically be checked, and built for windows/mac os x and maybe a
few major linux distributions. One could investigate the build service
from open suse to that end (http://en.opensuse.org/Build_Service),
which is based on xen VM to build installers in a reproducible way.

Installed package db
===============

I believe that the current open source enstaller package from Enthought
can be a good starting point. It is based on eggs, but eggs are only
used as a distribution format (eggs are never installed as eggs AFAIK).
You can easily remove packages, query installed versions, etc... Since
toydist produces eggs, interoperation between toydist and enstaller
should not be too difficult.

What's next ?
==========

At this point, I would like to ask for help and comments, in
particular:
 - Does all this make sense, or hopelessly intractable ?
 - Besides the points I have mentioned, what else do you think is
needed ?
- There has already been some work for the scikits webportal, but I think we should bypass pypi entirely (the current philosophy of not enforcing consistent metadata does not make much sense to me, and is at the opposite of most other similar system out there). - I think a build farm for at least windows packages would be a killer feature, and enough incentive to push some people to use our new infrastructure. It would be good to have a windows guy familiar with windows sandboxing/virtualization to do something there. The people working on the opensuse build service have started working on windows support - I think being able to automatically convert most of scientific packages is a significant feature, and needs to be more robust - so anyone is welcomed to try converting existing setup.py with toydist (see toydist readme). thanks, David From cournape at gmail.com Mon Dec 28 10:03:15 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 00:03:15 +0900 Subject: [SciPy-dev] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> Message-ID: <5b8d13220912280703r43b8122ds609cbcd4a9a75ead@mail.gmail.com> On Mon, Dec 28, 2009 at 11:47 PM, Stefan Schwarzburg wrote: > Hi, > I would like to add a comment from the user perspective: > > - the main reason why I'm not satisfied with pypi/distutils/etc. and why I > will not be satisfied with toydist (with the features you listed), is that > they break my installation (debian/ubuntu). Toydist (or distutils) does not break anything as is. It would be like saying make breaks debian - it does not make much sense. As stated, one of the goal of giving up distutils is to make packaging by os vendors easier. In particular, by allowing to follow the FHS, and making things more consistent. It should be possible to automatically convert most packages to .deb (or .rpm) relatively easily. When you look at the numpy .deb package, most of the issues are distutils issues, and almost everything else can be done automatically. Note that even ignoring the windows problem, there are systems to do the kind of things I am talking about for linux-only systems (the opensuse build service), because distributions are not always really good at tracking fast changing softwares. IOW, traditional linux packaging has some issues as well. And anyway, nothing prevents debian or other OS vendors to package things as they want (as they do for R packages). David From ndbecker2 at gmail.com Mon Dec 28 13:03:31 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 28 Dec 2009 13:03:31 -0500 Subject: [SciPy-dev] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912280703r43b8122ds609cbcd4a9a75ead@mail.gmail.com> Message-ID: David Cournapeau wrote: > On Mon, Dec 28, 2009 at 11:47 PM, Stefan Schwarzburg > wrote: >> Hi, >> I would like to add a comment from the user perspective: >> >> - the main reason why I'm not satisfied with pypi/distutils/etc. and why >> I will not be satisfied with toydist (with the features you listed), is >> that they break my installation (debian/ubuntu). > > Toydist (or distutils) does not break anything as is. It would be like > saying make breaks debian - it does not make much sense. As stated, > one of the goal of giving up distutils is to make packaging by os > vendors easier. 
In particular, by allowing to follow the FHS, and > making things more consistent. It should be possible to automatically > convert most packages to .deb (or .rpm) relatively easily. When you > look at the numpy .deb package, most of the issues are distutils > issues, and almost everything else can be done automatically. > > Note that even ignoring the windows problem, there are systems to do > the kind of things I am talking about for linux-only systems (the > opensuse build service), because distributions are not always really > good at tracking fast changing softwares. IOW, traditional linux > packaging has some issues as well. And anyway, nothing prevents debian > or other OS vendors to package things as they want (as they do for R > packages). > > David I think the breakage that is referred to I can describe on my favorite system, fedora. I can install the fedora numpy rpm using yum. I could also use easy_install. Unfortunately: 1) Each one knows nothing about the other 2) They may install things into conflicting paths. In particular, on fedora arch-dependent things go in /usr/lib64/python/site-packages while arch-independent goes into /usr/lib/python... If you mix yum with easy_install (or setuptools), you many times wind up with 2 versions and a lot of confusion. This is NOT unusual. Let's say I have numpy-1.3.0 installed from rpms. I see the announcement of numpy-1.4.0, and decide I want it, before the rpm is available, so I use easy_install. Now numpy-1.4.0 shows up as a standard rpm, and a subsequent update (which could be automatic!) could produce a broken system. I don't really know what could be done about it. Perhaps a design that attempts to use native backends for installation where available? From d.l.goldsmith at gmail.com Mon Dec 28 13:20:26 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 28 Dec 2009 10:20:26 -0800 Subject: [SciPy-dev] XXX in basics.rst In-Reply-To: References: <45d1ab480912161505y38f973ceq38f6575b20e18f28@mail.gmail.com> <45d1ab480912260940o7b649da6g9acc712594e29398@mail.gmail.com> Message-ID: <45d1ab480912281020r7c075632w8934365cc99abab5@mail.gmail.com> On Mon, Dec 28, 2009 at 2:11 AM, Jarrod Millman wrote: > On Sat, Dec 26, 2009 at 11:10 PM, David Goldsmith > wrote: >> Thanks, Jarrod. ?IIUYC, yours is a (de facto) "vote" "for" maintaining >> the duplication of content. > > It isn't so much a vote as an explanation of the situation. ?If you > prevent people from duplicating information in the new system, then we > will never be in a position to stop using the old documentation. I couldn't agree more. (FTR, I didn't author the XXX comment found in basics.rst, I was merely reporting it and seeking feedback about it, trying to remain as neutral as I could. I definitely agree, however, that - despite the risk of divergence, for which, in the doc at least, the consequences aren't as potentially problematic as divergence in the code - duplication of such content at this stage in the game is not to be agonized over.) So far, you're the only one to have replied, and given the content of that reply, it sounds like the status quo will be preserved in this situation. Thanks again, DG > During this transition period, there will be of necessity some time > when users will need to use both the old documentation and the new > documentation. 
> > -- > Jarrod Millman > Helen Wills Neuroscience Institute > 10 Giannini Hall, UC Berkeley > http://cirl.berkeley.edu/ > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From dagss at student.matnat.uio.no Mon Dec 28 13:49:13 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Mon, 28 Dec 2009 19:49:13 +0100 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> Message-ID: David wrote: > Repository > ======== > > The goal here is to have something like CRAN > (http://cran.r-project.org/web/views/), ideally with a build farm so > that whenever anyone submits a package to our repository, it would > automatically be checked, and built for windows/mac os x and maybe a > few major linux distributions. One could investigate the build service > from open suse to that end (http://en.opensuse.org/Build_Service), > which is based on xen VM to build installers in a reproducible way. Do you here mean automatic generation of Ubuntu debs, Debian debs, Windows MSI installer, Windows EXE installer, and so on? (If so then great!) If this is the goal, I wonder if one looks outside of Python-land one might find something that already does this -- there's a lot of different package format, "Linux meta-distributions", "install everywhere packages" and so on. Of course, toydist could have such any such tool as a backend/in a pipeline. > What's next ? > ========== > > At this point, I would like to ask for help and comments, in particular: > - Does all this make sense, or hopelessly intractable ? > - Besides the points I have mentioned, what else do you think is needed ? Hmm. What I miss is the discussion of other native libraries which the Python libraries need to bundle. Is it assumed that one want to continue linking C and Fortran code directly into Python .so modules, like the scipy library currently does? Let me take CHOLMOD (sparse Cholesky) as an example. - The Python package cvxopt use it, simply by linking about 20 C files directly into the Python-loadable module (.so) which goes into the Python site-packages (or wherever). This makes sure it just works. But, it doesn't feel like the right way at all. - scikits.sparse.cholmod OTOH simple specifies libraries=["cholmod"], and leave it up to the end-user to make sure it is installed. Linux users with root access can simply apt-get, but it is a pain for everybody else (Windows, Mac, non-root Linux). - Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets the job done by not bothering about the problem, not even using the OS-installed Python. Something that would spit out both Sage SPKGs, Ubuntu debs, Windows installers, both with Python code and C/Fortran code or a mix (and put both in the place preferred by the system in question), seems ideal. Of course one would still need to make sure that the code builds properly everywhere, but just solving the distribution part of this would be a huge step ahead. What I'm saying is that this is a software distribution problem in general, and I'm afraid that Python-specific solutions are too narrow. 
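To make the contrast above concrete, this is roughly what the two
approaches look like in distutils terms (a rough sketch only; the
package and file names are invented):

import glob
from distutils.core import setup, Extension

# Option 1: compile the CHOLMOD C sources straight into the
# Python-loadable module, as cvxopt does -- always builds, but
# duplicates the library:
bundled = Extension('mypkg._cholmod',
                    sources=['src/wrapper.c'] + glob.glob('cholmod/*.c'))

# Option 2: link against an OS-installed libcholmod, as
# scikits.sparse.cholmod does -- cleaner, but the end user has to get
# CHOLMOD installed first:
linked = Extension('mypkg._cholmod',
                   sources=['src/wrapper.c'],
                   libraries=['cholmod'])

setup(name='mypkg', version='0.1', ext_modules=[linked])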
Dag Sverre From cournape at gmail.com Mon Dec 28 13:55:13 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 03:55:13 +0900 Subject: [SciPy-dev] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912280703r43b8122ds609cbcd4a9a75ead@mail.gmail.com> Message-ID: <5b8d13220912281055m72fbbc53ld2c3a6d8abe2425a@mail.gmail.com> On Tue, Dec 29, 2009 at 3:03 AM, Neal Becker wrote: > David Cournapeau wrote: > >> On Mon, Dec 28, 2009 at 11:47 PM, Stefan Schwarzburg >> wrote: >>> Hi, >>> I would like to add a comment from the user perspective: >>> >>> - the main reason why I'm not satisfied with pypi/distutils/etc. and why >>> I will not be satisfied with toydist (with the features you listed), is >>> that they break my installation (debian/ubuntu). >> >> Toydist (or distutils) does not break anything as is. It would be like >> saying make breaks debian - it does not make much sense. As stated, >> one of the goal of giving up distutils is to make packaging by os >> vendors easier. In particular, by allowing to follow the FHS, and >> making things more consistent. It should be possible to automatically >> convert most packages to .deb (or .rpm) relatively easily. When you >> look at the numpy .deb package, most of the issues are distutils >> issues, and almost everything else can be done automatically. >> >> Note that even ignoring the windows problem, there are systems to do >> the kind of things I am talking about for linux-only systems (the >> opensuse build service), because distributions are not always really >> good at tracking fast changing softwares. IOW, traditional linux >> packaging has some issues as well. And anyway, nothing prevents debian >> or other OS vendors to package things as they want (as they do for R >> packages). >> >> David > > I think the breakage that is referred to I can describe on my favorite > system, fedora. > > I can install the fedora numpy rpm using yum. ?I could also use > easy_install. ?Unfortunately: > 1) Each one knows nothing about the other > 2) They may install things into conflicting paths. ?In particular, on fedora > arch-dependent things go in /usr/lib64/python/site-packages while > arch-independent goes into /usr/lib/python... ?If you mix yum with > easy_install (or setuptools), you many times wind up with 2 versions and a > lot of confusion. > > This is NOT unusual. ?Let's say I have numpy-1.3.0 installed from rpms. ?I > see the announcement of numpy-1.4.0, and decide I want it, before the rpm is > available, so I use easy_install. ?Now numpy-1.4.0 shows up as a standard > rpm, and a subsequent update (which could be automatic!) could produce a > broken system. Several points: - First, this is caused by distutils misfeature of defaulting to /usr. This is a mistake. It should default to /usr/local, as does every other install method from sources. - A lot of instructions start by sudo easy_install... This is a very bad advice, especially given the previous issue. > I don't really know what could be done about it. ?Perhaps a design that > attempts to use native backends for installation where available? The idea would be that for a few major distributions at least, you would have .rpm available on the repository. If you install from sources, there would be a few mechanisms to avoid your exact issue (like maybe defaulting to --user kind of installs). Of course, it can only be dealt up to a point. 
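For reference, plain distutils can already be told to stay out of the
distribution-managed tree when installing from source; that is the kind
of default I have in mind:

# installs under /usr/local instead of /usr
python setup.py install --prefix=/usr/local

# per-user site-packages, available since python 2.6 (PEP 370)
python setup.py install --user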
David From cournape at gmail.com Mon Dec 28 14:14:01 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 04:14:01 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> Message-ID: <5b8d13220912281114x18687d85wdaab008b3243a846@mail.gmail.com> On Tue, Dec 29, 2009 at 3:49 AM, Dag Sverre Seljebotn wrote: > > Do you here mean automatic generation of Ubuntu debs, Debian debs, Windows > MSI installer, Windows EXE installer, and so on? (If so then great!) Yes (although this is not yet implemented). In particular on windows, I want to implement a scheme so that you can convert from eggs to .exe and vice et versa, so people can still install as exe (or msi), even though the method would default to eggs. > If this is the goal, I wonder if one looks outside of Python-land one > might find something that already does this -- there's a lot of different > package format, "Linux meta-distributions", "install everywhere packages" > and so on. Yes, there are things like 0install or autopackage. I think those are deemed to fail, as long as it is not supported thoroughly by the distribution. Instead, my goal here is much simpler: producing rpm/deb. It does not solve every issue (install by non root, multiple // versions), but one has to be realistic :) I think automatically built rpm/deb, easy integration with native method can solve a lot of issues already. > > ?- Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets the > job done by not bothering about the problem, not even using the > OS-installed Python. > > Something that would spit out both Sage SPKGs, Ubuntu debs, Windows > installers, both with Python code and C/Fortran code or a mix (and put > both in the place preferred by the system in question), seems ideal. Of > course one would still need to make sure that the code builds properly > everywhere, but just solving the distribution part of this would be a huge > step ahead. On windows, this issue may be solved using eggs: enstaller has a feature where dll put in a special location of an egg are installed in python such as they are found by the OS loader. One could have mechanisms based on $ORIGIN + rpath on linux to solve this issue for local installs on Linux, etc... But again, one has to be realistic on the goals. With toydist, I want to remove all the pile of magic, hacks built on top of distutils so that people can again hack their own solutions, as it should have been from the start (that's a big plus of python in general). It won't magically solve every issue out there, but it would hopefully help people to make their own. Bundling solutions like SAGE, EPD, etc... are still the most robust ways to deal with those issues in general, and I do not intended to replace those. > What I'm saying is that this is a software distribution problem in > general, and I'm afraid that Python-specific solutions are too narrow. Distribution is a hard problem. Instead of pushing a very narrow (and mostly ill-funded) view of how people should do things like distutils/setuptools/pip/buildout do, I want people to be able to be able to build their own solutions. No more "use this magic stick v 4.0.3.3.14svn1234, trust me it work you don't have to understand" which is too prevalant with those tools, which has always felt deeply unpythonic to me. 
David From dagss at student.matnat.uio.no Mon Dec 28 14:21:01 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Mon, 28 Dec 2009 20:21:01 +0100 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912281114x18687d85wdaab008b3243a846@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912281114x18687d85wdaab008b3243a846@mail.gmail.com> Message-ID: <930cd7013d563c4b882be2ad6358a9d7.squirrel@webmail.uio.no> > On Tue, Dec 29, 2009 at 3:49 AM, Dag Sverre Seljebotn > wrote: > >> >> Do you here mean automatic generation of Ubuntu debs, Debian debs, >> Windows >> MSI installer, Windows EXE installer, and so on? (If so then great!) > > Yes (although this is not yet implemented). In particular on windows, > I want to implement a scheme so that you can convert from eggs to .exe > and vice et versa, so people can still install as exe (or msi), even > though the method would default to eggs. > >> If this is the goal, I wonder if one looks outside of Python-land one >> might find something that already does this -- there's a lot of >> different >> package format, "Linux meta-distributions", "install everywhere >> packages" >> and so on. > > Yes, there are things like 0install or autopackage. I think those are > deemed to fail, as long as it is not supported thoroughly by the > distribution. Instead, my goal here is much simpler: producing > rpm/deb. It does not solve every issue (install by non root, multiple > // versions), but one has to be realistic :) > > I think automatically built rpm/deb, easy integration with native > method can solve a lot of issues already. > >> >> ?- Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets >> the >> job done by not bothering about the problem, not even using the >> OS-installed Python. >> >> Something that would spit out both Sage SPKGs, Ubuntu debs, Windows >> installers, both with Python code and C/Fortran code or a mix (and put >> both in the place preferred by the system in question), seems ideal. Of >> course one would still need to make sure that the code builds properly >> everywhere, but just solving the distribution part of this would be a >> huge >> step ahead. > > On windows, this issue may be solved using eggs: enstaller has a > feature where dll put in a special location of an egg are installed in > python such as they are found by the OS loader. One could have > mechanisms based on $ORIGIN + rpath on linux to solve this issue for > local installs on Linux, etc... > > But again, one has to be realistic on the goals. With toydist, I want > to remove all the pile of magic, hacks built on top of distutils so > that people can again hack their own solutions, as it should have been > from the start (that's a big plus of python in general). It won't > magically solve every issue out there, but it would hopefully help > people to make their own. > > Bundling solutions like SAGE, EPD, etc... are still the most robust > ways to deal with those issues in general, and I do not intended to > replace those. > >> What I'm saying is that this is a software distribution problem in >> general, and I'm afraid that Python-specific solutions are too narrow. > > Distribution is a hard problem. Instead of pushing a very narrow (and > mostly ill-funded) view of how people should do things like > distutils/setuptools/pip/buildout do, I want people to be able to be > able to build their own solutions. 
No more "use this magic stick v > 4.0.3.3.14svn1234, trust me it work you don't have to understand" > which is too prevalant with those tools, which has always felt deeply > unpythonic to me. Thanks, this cleared things up, and I like the direction this is heading. Thanks a lot for doing this! Dag Sverre From gael.varoquaux at normalesup.org Mon Dec 28 18:02:08 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Dec 2009 00:02:08 +0100 Subject: [SciPy-dev] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912280703r43b8122ds609cbcd4a9a75ead@mail.gmail.com> <5b8d13220912281055m72fbbc53ld2c3a6d8abe2425a@mail.gmail.com> Message-ID: <20091228230208.GA9952@phare.normalesup.org> On Mon, Dec 28, 2009 at 02:29:24PM -0500, Neal Becker wrote: > Perhaps this could be useful: > http://checkinstall.izto.org/ Yes, checkinstall is really cool. However, I tend to prefer things with no magic that I don't have to sandbox to know what they are doing. This is why I am also happy to hear about toydist. Ga?l From cournape at gmail.com Mon Dec 28 23:38:05 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 13:38:05 +0900 Subject: [SciPy-dev] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: <20091228230208.GA9952@phare.normalesup.org> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912280703r43b8122ds609cbcd4a9a75ead@mail.gmail.com> <5b8d13220912281055m72fbbc53ld2c3a6d8abe2425a@mail.gmail.com> <20091228230208.GA9952@phare.normalesup.org> Message-ID: <5b8d13220912282038m5a32f6b4iae61b3c9278f562f@mail.gmail.com> On Tue, Dec 29, 2009 at 8:02 AM, Gael Varoquaux wrote: > On Mon, Dec 28, 2009 at 02:29:24PM -0500, Neal Becker wrote: >> Perhaps this could be useful: >> http://checkinstall.izto.org/ > > Yes, checkinstall is really cool. However, I tend to prefer things with > no magic that I don't have to sandbox to know what they are doing. I am still not sure the design is entirely right, but the install command in toymaker just reads a build manifest, which is a file containing all the files necessary for install. It is explicit, and list every file to be installed. By design, it cannot install anything outside this manifest. That's also how eggs are built (and soon win installers and mac os x pkg). cheers, David From cournape at gmail.com Tue Dec 29 09:22:52 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 23:22:52 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> Message-ID: <5b8d13220912290622m2f0ec2c3x5a26e63118cb29a0@mail.gmail.com> On Tue, Dec 29, 2009 at 10:27 PM, Ren? Dudfield wrote: > Hi, > > In the toydist proposal/release notes, I would address 'what does > toydist do better' more explicitly. > > > > **** A big problem for science users is that numpy does not work with > pypi + (easy_install, buildout or pip) and python 2.6. **** > > > > Working with the rest of the python community as much as possible is > likely a good goal. Yes, but it is hopeless. Most of what is being discussed on distutils-sig is useless for us, and what matters is ignored at best. 
I think most people on distutils-sig are misguided, and I don't think the community is representative of people concerned with packaging anyway - most of the participants seem to be around web development, and are mostly dismissive of other's concerns (OS packagers, etc...). I want to note that I am not starting this out of thin air - I know most of distutils code very well, I have been the mostly sole maintainer of numpy.distutils for 2 years now. I have written extensive distutils extensions, in particular numscons which is able to fully build numpy, scipy and matplotlib on every platform that matters. Simply put, distutils code is horrible (this is an objective fact) and flawed beyond repair (this is more controversial). IMHO, it has almost no useful feature, except being standard. If you want a more detailed explanation of why I think distutils and all tools on top are deeply flawed, you can look here: http://cournape.wordpress.com/2009/04/01/python-packaging-a-few-observations-cabal-for-a-solution/ > numpy used to work with buildout in python2.5, but not with 2.6. > buildout lets other team members get up to speed with a project by > running one command. ?It installs things in the local directory, not > system wide. ?So you can have different dependencies per project. I don't think it is a very useful feature, honestly. It seems to me that they created a huge infrastructure to split packages into tiny pieces, and then try to get them back together, imaganing that multiple installed versions is a replacement for backward compatibility. Anyone with extensive packaging experience knows that's a deeply flawed model in general. > Plenty of good work is going on with python packaging. That's the opposite of my experience. What I care about is: - tools which are hackable and easily extensible - robust install/uninstall - real, DAG-based build system - explicit and repeatability None of this is supported by the tools, and the current directions go even further away. When I have to explain at length why the command-based design of distutils is a nightmare to work with, I don't feel very confident that the current maintainers are aware of the issues, for example. It shows that they never had to extend distutils much. > > There are build farms for windows packages and OSX uploaded to pypi. > Start uploading pre releases to pypi, and you get these for free (once > you make numpy compile out of the box on those compile farms). ?There > are compile farms for other OSes too... like ubuntu/debian, macports > etc. ?Some distributions even automatically download, compile and > package new releases once they spot a new file on your ftp/web site. I am familiar with some of those systems (PPA and opensuse build service in particular). One of the goal of my proposal is to make it easier to interoperate with those tools. I think Pypi is mostly useless. The lack of enforced metadata is a big no-no IMHO. The fact that Pypi is miles beyond CRAN for example is quite significant. I want CRAN for scientific python, and I don't see Pypi becoming it in the near future. The point of having our own Pypi-like server is that we could do the following: - enforcing metadata - making it easy to extend the service to support our needs > > pypm: ?http://pypm.activestate.com/list-n.html#numpy It is interesting to note that one of the maintainer of pypm has recently quitted the discussion about Pypi, most likely out of frustration from the other participants. 
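To be concrete about what "enforcing metadata" would mean above: the
server would simply refuse an upload whose description cannot be
statically parsed into a complete record. A hypothetical sketch (the
required-field list is just a guess at a sensible minimum):

REQUIRED = ("name", "version", "summary", "license")

def validate(meta):
    # meta is a plain dict parsed from the declarative package
    # description - no setup.py is ever executed on the server
    missing = [f for f in REQUIRED if not meta.get(f)]
    if missing:
        raise ValueError("upload rejected, missing fields: %s"
                         % ", ".join(missing))

validate({"name": "grin", "version": "1.1.1",
          "summary": "A grep program configured the way I like it.",
          "license": "BSD"})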
> Documentation projects are being worked on to document, give tutorials > and make python packaging be easier all round. ?As witnessed by 20 or > so releases on pypi every day(and growing), lots of people are using > the python packaging tools successfully. This does not mean much IMO. Uploading on Pypi is almost required to use virtualenv, buildout, etc.. An interesting metric is not how many packages are uploaded, but how much it is used outside developers. > > I'm not sure making a separate build tool is a good idea. ?I think > going with the rest of the python community, and improving the tools > there is a better idea. It has been tried, and IMHO has been proved to have failed. You can look at the recent discussion (the one started by Guido in particular). > pps. some notes on toydist itself. > - toydist convert is cool for people converting a setup.py . ?This > means that most people can try out toydist right away. ?but what does > it gain these people who convert their setup.py files? Not much ATM, except that it is easier to write a toysetup.info compared to setup.py IMO, and that it supports a simple way to include data files (something which is currently *impossible* to do without writing your own distutils extensions). It has also the ability to build eggs without using setuptools (I consider not using setuptools a feature, given the too many failure modes of this package). The main goals though are to make it easier to build your own tools on top of if, and to integrate with real build systems. > - a toydist convert that generates a setup.py file might be cool :) toydist started like this, actually: you would write a setup.py file which loads the package from toysetup.info, and can be converted to a dict argument to distutils.core.setup. I have not updated it recently, but that's definitely on the TODO list for a first alpha, as it would enable people to benefit from the format, with 100 % backward compatibility with distutils. > - arbitrary code execution happens when building or testing with > toydist. You are right for testing, but wrong for building. As long as the build is entirely driven by toysetup.info, you only have to trust toydist (which is not safe ATM, but that's an implementation detail), and your build tools of course. Obviously, if you have a package which uses an external build tool on top of toysetup.info (as will be required for numpy itself for example), all bets are off. But I think that's a tiny fraction of the interesting packages for scientific computing. Sandboxing is particularly an issue on windows - I don't know a good solution for windows sandboxing, outside of full vms, which are heavy-weights. > - it should be possible to build this toydist functionality as a > distutils/distribute/buildout extension. No, it cannot, at least as far as distutils/distribute are concerned (I know nothing about buildout). Extending distutils is horrible, and fragile in general. Even autotools with its mix of generated sh scripts through m4 and perl is a breeze compared to distutils. > - extending toydist? ?How are extensions made? ?there are 175 buildout > packages which extend buildout, and many that extend > distutils/setuptools - so extension of build tools in a necessary > thing. See my answer earlier about interoperation with build tools. 
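Concretely, the compatibility mode mentioned above would be a setup.py
along these lines (PackageDescription.from_file exists in toydist; the
attribute names below are only guesses at the eventual API):

from distutils.core import setup
from toydist.core import PackageDescription

pkg = PackageDescription.from_file("toysetup.info")

# forward the statically parsed description to plain distutils
setup(name=pkg.name,
      version=str(pkg.version),
      description=pkg.summary,
      packages=pkg.packages)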
cheers, David From cournape at gmail.com Tue Dec 29 09:34:44 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 29 Dec 2009 23:34:44 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> Message-ID: <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> On Tue, Dec 29, 2009 at 10:27 PM, Ren? Dudfield wrote: > Buildout is what a lot of the python community are using now. I would like to note that buildout is a solution to a problem that I don't care to solve. This issue is particularly difficult to explain to people accustomed with buildout in my experience - I have not found a way to explain it very well yet. Buildout, virtualenv all work by sandboxing from the system python: each of them do not see each other, which may be useful for development, but as a deployment solution to the casual user who may not be familiar with python, it is useless. A scientist who installs numpy, scipy, etc... to try things out want to have everything available in one python interpreter, and does not want to jump to different virtualenvs and whatnot to try different packages. This has strong consequences on how you look at things from a packaging POV: - uninstall is crucial - a package bringing down python is a big no no (this happens way too often when you install things through setuptools) - if something fails, the recovery should be trivial - the person doing the installation may not know much about python - you cannot use sandboxing as a replacement for backward compatibility (that's why I don't care much about all the discussion about versioning - I don't think it is very useful as long as python itself does not support it natively). In the context of ruby, this article makes a similar point: http://www.madstop.com/ruby/ruby_has_a_distribution_problem.html David From gael.varoquaux at normalesup.org Tue Dec 29 10:55:09 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Dec 2009 16:55:09 +0100 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> Message-ID: <20091229155509.GA15515@phare.normalesup.org> On Tue, Dec 29, 2009 at 11:34:44PM +0900, David Cournapeau wrote: > Buildout, virtualenv all work by sandboxing from the system python: > each of them do not see each other, which may be useful for > development, but as a deployment solution to the casual user who may > not be familiar with python, it is useless. A scientist who installs > numpy, scipy, etc... to try things out want to have everything > available in one python interpreter, and does not want to jump to > different virtualenvs and whatnot to try different packages. I think that you are pointing out a large source of misunderstanding in packaging discussion. People behind setuptools, pip or buildout care to have a working ensemble of packages that deliver an application (often a web application)[1]. 
You and I, and many scientific developers see libraries as building blocks that need to be assembled by the user, the scientist using them to do new science. Thus the idea of isolation is not something that we can accept, because it means that we are restricting the user to a set of libraries. Our definition of user is not the same as the user targeted by buildout. Our user does not push buttons, but he writes code. However, unlike the developer targeted by buildout and distutils, our user does not want or need to learn about packaging. Trying to make the debate clearer... Ga?l [1] I know your position on why simply focusing on sandboxing working ensemble of libraries is not a replacement for backward compatibility, and will only create impossible problems in the long run. While I agree with you, this is not my point here. From cournape at gmail.com Tue Dec 29 18:20:39 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 30 Dec 2009 08:20:39 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <64ddb72c0912291036o79815ee4jf35e4db955a67bed@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <64ddb72c0912291036o79815ee4jf35e4db955a67bed@mail.gmail.com> Message-ID: <5b8d13220912291520n2bdbbd00x7e5a19c4b4aa941d@mail.gmail.com> On Wed, Dec 30, 2009 at 3:36 AM, Ren? Dudfield wrote: > On Tue, Dec 29, 2009 at 2:34 PM, David Cournapeau wrote: >> On Tue, Dec 29, 2009 at 10:27 PM, Ren? Dudfield wrote: >> >>> Buildout is what a lot of the python community are using now. >> >> I would like to note that buildout is a solution to a problem that I >> don't care to solve. This issue is particularly difficult to explain >> to people accustomed with buildout in my experience - I have not found >> a way to explain it very well yet. > > Hello, > > The main problem buildout solves is getting developers up to speed > very quickly on a project. ?They should be able to call one command > and get dozens of packages, and everything else needed ready to go, > completely isolated from the rest of the system. > > If a project does not want to upgrade to the latest versions of > packages, they do not have to. ?This reduces the dependency problem a > lot. ?As one package does not have to block on waiting for 20 other > packages. ?It makes iterating packages daily, or even hourly to not be > a problem - even with dozens of different packages used. ?This is not > theoretical, many projects iterate this quickly, and do not have > problems. > > Backwards compatibility is of course a great thing to keep up... but > harder to do with dozens of packages, some of which are third party > ones. ?For example, some people are running pygame applications > written 8 years ago that are still running today on the latest > versions of pygame. ?I don't think people in the python world > understand API, and ABI compatibility as much as those in the C world. > > However buildout is a solution to their problem, and allows them to > iterate quickly with many participants, on many different projects. > Many of these people work on maybe 20-100 different projects at once, > and some machines may be running that many applications at once too. > So using the system pythons packages is completely out of the question > for them. 
This is all great, but I don't care about solving this issue, this is a *developer* issue. I don't mean this is not an important issue, it is just totally out of scope. The developer issues I care about are much more fine-grained (corrent dependency handling between target, toolchain customization, etc...). Note however that hopefully, by simplifying the packaging tools, the problems you see with numpy on 2.6 would be less common. The whole distutils/setuptools/distribute stack is hopelessly intractable, given how messy the code is. > > It is very easy to include a dozen packages in a buildout, so that you > have all the packages required. I think there is a confusion - I mostly care about *end users*. People who may not have compilers, who want to be able to easily upgrade one package, etc... David From dsdale24 at gmail.com Wed Dec 30 09:26:00 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Wed, 30 Dec 2009 09:26:00 -0500 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> Message-ID: Hi David, On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau wrote: > Executable: grin > ? ?module: grin > ? ?function: grin_main > > Executable: grind > ? ?module: grin > ? ?function: grind_main Have you thought at all about operations that are currently performed by post-installation scripts? For example, it might be desirable for the ipython or MayaVi windows installers to create a folder in the Start menu that contains links the the executable and the documentation. This is probably a secondary issue at this point in toydist's development, but I think it is an important feature in the long run. Also, have you considered support for package extras (package variants in Ports, allowing you to specify features that pull in additional dependencies like traits[qt4])? Enthought makes good use of them in ETS, and I think they would be worth keeping. Darren From cournape at gmail.com Wed Dec 30 10:50:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 31 Dec 2009 00:50:10 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <64ddb72c0912300315r420bd88dk5bb6be3a960bf44d@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290622m2f0ec2c3x5a26e63118cb29a0@mail.gmail.com> <64ddb72c0912300315r420bd88dk5bb6be3a960bf44d@mail.gmail.com> Message-ID: <5b8d13220912300750kaa4230hde31ced08c32d44e@mail.gmail.com> On Wed, Dec 30, 2009 at 8:15 PM, Ren? Dudfield wrote: > > Sitting down with Tarek(who is one of the current distutils > maintainers) in Berlin we had a little discussion about packaging over > pizza and beer... and he was quite mindful of OS packagers problems > and issues. This has been said many times on distutils-sig, but no concrete action has ever been taken in that direction. For example, toydist already supports the FHS better than distutils, and is more flexible. I have tried several times to explain why this matters on distutils-sig, but you then have the peanuts gallery interfering with unrelated nonsense (like it would break windows, as if it could not be implemented independently). 
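Concretely, the kind of invocation I mean is (the option names here are
only illustrative, not an existing interface):

toymaker configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man

which lets an OS packager put every class of file where its policy
requires.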
Also, retrofitting support for --*dir in distutils would be *very* difficult, unless you are ready to break backward compatibility (there are 6 ways to install data files, and each of them has some corner cases, for example - it is a real pain to support this correctly in the convert command of toydist, and you simply cannot recover missing information to comply with the FHS in every case). > > However these systems were developed by the zope/plone/web crowd, so > they are naturally going to be thinking a lot about zope/plone/web > issues. Agreed - it is natural that they care about their problems first, that's how it works in open source. What I find difficult is when our concern are constantly dismissed by people who have no clue about our issues - and later claim we are not cooperative. > ?Debian, and ubuntu packages for them are mostly useless > because of the age. That's where the build farm enters. This is known issue, that's why the build service or PPA exist in the first place. >?I think > perhaps if toydist included something like stdeb as not an extension > to distutils, but a standalone tool (like toydist) there would be less > problems with it. That's pretty much how I intend to do things. Currently, in toydist, you can do something like: from toydist.core import PackageDescription pkg = PackageDescription.from_file("toysetup.info") # pkg now gives you access to metadata, as well as extensions, python modules, etc... I think this gives almost everything that is needed to implement a sdist_dsc command. Contrary to the Distribution class in distutils, this class would not need to be subclassed/monkey-patched by extensions, as it only cares about the description, and is 100 % uncoupled from the build part. > yes, I have also battled with distutils over the years. ?However it is > simpler than autotools (for me... maybe distutils has perverted my > fragile mind), and works on more platforms for python than any other > current system. Autotools certainly works on more platforms (windows notwhistanding), if only because python itself is built with autoconf. Distutils simplicity is a trap: it is simpler only if you restrict to what distutils gives you. Don't get me wrong, autotools are horrible, but I have never encountered cases where I had to spend hours to do trivial tasks, as has been the case with distutils. Numpy build system would be much, much easier to implement through autotools, and would be much more reliable. >?However > distutils has had more tests and testing systems added, so that > refactoring/cleaning up of distutils can happen more so. You can't refactor distutils without breaking backward compatibility, because distutils has no API. The whole implementation is the API. That's one of the fundamental disagreement I and other scipy dev have with current contributors on distutils-sig: the starting point (distutils) and the goal are so far away from each other that getting there step by step is hopeless. > I agree with many things in that post. ?Except your conclusion on > multiple versions of packages in isolation. ?Package isolation is like > processes, and package sharing is like threads - and threads are evil! I don't find the comparison very helpful (for once, you can share data between processes, whereas virtualenv cannot see each other AFAIK). > Science is supposed to allow repeatability. ?Without the same versions > of packages, repeating experiments is harder. 
?This is a big problem > in science that multiple versions of packages in _isolation_ can help > get to a solution to the repeatability problem. I don't think that's true - at least it does not reflect my experience at all. But then, I don't pretend to have an extensive experience either. From most of my discussions at scipy conferences, I know most people are dissatisfied with the current python solutions. > >>> Plenty of good work is going on with python packaging. >> >> That's the opposite of my experience. What I care about is: >> ?- tools which are hackable and easily extensible >> ?- robust install/uninstall >> ?- real, DAG-based build system >> ?- explicit and repeatability >> >> None of this is supported by the tools, and the current directions go >> even further away. When I have to explain at length why the >> command-based design of distutils is a nightmare to work with, I don't >> feel very confident that the current maintainers are aware of the >> issues, for example. It shows that they never had to extend distutils >> much. >> > > All agreed! ?I'd add to the list parallel builds/tests (make -j 16), > and outputting to native build systems. ?eg, xcode, msvc projects, and > makefiles. Yep - I got quite far with numscons already. It cannot be used as a general solution, but as a dev tool for my own work on numpy/scipy, it has been a huge time saver, especially given the top notch dependency tracking system. It supports // builds, and I can build full debug builds of scipy < 1 minute on a fast machine. That's a real productivity booster. > > How will you handle toydist extensions so that multiple extensions do > not have problems with each other? ?I don't think this is possible > without isolation, and even then it's still a problem. By doing it mostly the Unix way, through protocols and file format, not through API. Good API is hard, but for build tools, it is much, much harder. When talking about extensions, I mostly think about the following: - adding a new compiler/new platform - adding a new installer format - adding a new kind of source file/target (say ctypes extension, cython compilation, etc...) Instead of using classes for compilers/tools, I am considering using python modules for each tool, and each tool would be registered through a source file extension (associate a function to ".c", for example). Actual compilation steps would be done through strings ("$CC ...."). The system would be kept simple, because for complex projects, one should forward all this to a real build system (like waf or scons). There is also the problem of post/pre hooks, adding new steps in toymaker: I have not thought much about this, but I like waf's way of doing it, and it may be applicable. In waf, the main script (called wscript) defines a function for each build step: def configure(): pass def build(): pass .... And undefined functions are considered unmodified. What I know for sure is that the distutils-way of extending through inheritance does not work at all. As soon as two extensions subclass the same base class, you're done. > > Yeah, cool. ?Many other projects have their own servers too. > pygame.org, plone, etc etc, which meet their own needs. ?Patches are > accepted for pypi btw. Yes, but how long before the patch is accepted and deployed ? > What type of enforcements of meta data, and how would they help? ?I > imagine this could be done in a number of ways to pypi. > - a distutils command extension that people could use. > - change pypi source code. 
> - check the metadata for certain packages, then email their authors > telling them about issues. First, packages with malformed metadata would be rejected, and it would not be possible to register a package without uploading the sources. I simply do not want to publish a package which does not even have a name or a version, for example. The current way of doing things in pypi in insane if you ask me. For example, if you want to install a package with its dependencies, you need to download the package, which may be in another website, and you need to execute setup.py just to know its dependencies. This has so many failures modes, I don't understand how this can seriously be considered, really. Every other system has an index to do this kind of things (curiously, both EPD and pypm have an index as well AFAIK). Again, a typical example of NIH, with inferior solutions implemented in the case of python. > > yeah, cool. ?That would let you develop things incrementally too, and > still have toydist be useful for the whole development period until it > catches up with the features of distutils needed. Initially, toydist was started to show that writing something compatible with distutils without being tight to distutils was possible. > If you execute build tools on arbitrary code, then arbitrary code > execution is easy for someone who wants to do bad things. Well, you could surely exploit built tools bugs. But at least, I can query metadata and packages features in a safe way - and this is very useful already (cf my points about being able to query packages metadata in one "query"). > and many times I still > get errors on different platforms, despite many years of multi > platform coding. Yes, that's a difficult process. We cannot fix this - but having automatically built (and hopefully tested) installers on major platforms would be a significant step in the right direction. That's one of the killer feature of CRAN (whenever you submit a package for CRAN, a windows installer is built, and tested). cheers, David From cournape at gmail.com Wed Dec 30 11:04:11 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 31 Dec 2009 01:04:11 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> Message-ID: <5b8d13220912300804p47687b99of8f58c154da9e6b3@mail.gmail.com> On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale wrote: > Hi David, > > On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau wrote: >> Executable: grin >> ? ?module: grin >> ? ?function: grin_main >> >> Executable: grind >> ? ?module: grin >> ? ?function: grind_main > > Have you thought at all about operations that are currently performed > by post-installation scripts? For example, it might be desirable for > the ipython or MayaVi windows installers to create a folder in the > Start menu that contains links the the executable and the > documentation. This is probably a secondary issue at this point in > toydist's development, but I think it is an important feature in the > long run. The main problem I see with post hooks is how to support them in installers. 
> Also, have you considered support for package extras (package variants in Ports, allowing you to specify features that pull in additional dependencies like traits[qt4])? Enthought makes good use of them in ETS, and I think they would be worth keeping.

The declarative format may declare flags as follows:

Flag: c_exts
    Description: Build (optional) C extensions
    Default: false

Library:
    if flag(c_exts):
        Extension: foo
            sources: foo.c

And this is automatically available at the configure stage. It can be used anywhere in Library, not just for Extension (you could use it within the Requires section). I am considering adding more than Flag (flags are boolean), if it does not make the format too complex. The use case I have in mind is something like:

toydist configure --with-lapack-dir=/opt/mkl/lib

which I have wished to implement for numpy for ages.

David

From cournape at gmail.com  Wed Dec 30 11:16:19 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 31 Dec 2009 01:16:19 +0900
Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation
In-Reply-To:
References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com>
Message-ID: <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com>

On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale wrote:
> Hi David,
>
> On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau wrote:
>> Executable: grin
>>     module: grin
>>     function: grin_main
>>
>> Executable: grind
>>     module: grin
>>     function: grind_main
>
> Have you thought at all about operations that are currently performed by post-installation scripts? For example, it might be desirable for the ipython or MayaVi windows installers to create a folder in the Start menu that contains links to the executable and the documentation. This is probably a secondary issue at this point in toydist's development, but I think it is an important feature in the long run.
>
> Also, have you considered support for package extras (package variants in Ports, allowing you to specify features that pull in additional dependencies like traits[qt4])? Enthought makes good use of them in ETS, and I think they would be worth keeping.

Does this example cover what you have in mind? I am not so familiar with this feature of setuptools:

Name: hello
Version: 1.0

Library:
    BuildRequires: paver, sphinx, numpy
    if os(windows)
        BuildRequires: pywin32
    Packages:
        hello
    Extension: hello._bar
        sources:
            src/hellomodule.c
    if os(linux)
        Extension: hello._linux_backend
            sources:
                src/linbackend.c

Note that instead of os(os_name), you can use flag(flag_name), where flags are boolean variables which can be user defined:

http://github.com/cournape/toydist/blob/master/examples/simples/conditional/toysetup.info

http://github.com/cournape/toydist/blob/master/examples/var_example/toysetup.info
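Internally, evaluating those conditionals does not need anything fancy - something along the following lines would do (a toy illustration with made-up names, not the actual toydist implementation):

import sys

def evaluate_condition(kind, name, flags):
    # kind is "os" or "flag", as in os(windows) or flag(c_exts)
    if kind == "os":
        prefixes = {"windows": "win32", "linux": "linux", "macosx": "darwin"}
        return sys.platform.startswith(prefixes[name])
    if kind == "flag":
        return bool(flags.get(name, False))
    raise ValueError("unknown condition %s(%s)" % (kind, name))

# e.g. the BuildRequires clause from the example above:
build_requires = ["paver", "sphinx", "numpy"]
if evaluate_condition("os", "windows", {}):
    build_requires.append("pywin32")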
David

From dsdale24 at gmail.com  Wed Dec 30 16:06:56 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Wed, 30 Dec 2009 16:06:56 -0500
Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation
In-Reply-To: <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com>
References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com>
Message-ID:

On Wed, Dec 30, 2009 at 11:16 AM, David Cournapeau wrote:
> On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale wrote:
>> Hi David,
>>
>> On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau wrote:
>>> Executable: grin
>>>     module: grin
>>>     function: grin_main
>>>
>>> Executable: grind
>>>     module: grin
>>>     function: grind_main
>>
>> Have you thought at all about operations that are currently performed by post-installation scripts? For example, it might be desirable for the ipython or MayaVi windows installers to create a folder in the Start menu that contains links to the executable and the documentation. This is probably a secondary issue at this point in toydist's development, but I think it is an important feature in the long run.
>>
>> Also, have you considered support for package extras (package variants in Ports, allowing you to specify features that pull in additional dependencies like traits[qt4])? Enthought makes good use of them in ETS, and I think they would be worth keeping.
>
> Does this example cover what you have in mind? I am not so familiar with this feature of setuptools:
>
> Name: hello
> Version: 1.0
>
> Library:
>     BuildRequires: paver, sphinx, numpy
>     if os(windows)
>         BuildRequires: pywin32
>     Packages:
>         hello
>     Extension: hello._bar
>         sources:
>             src/hellomodule.c
>     if os(linux)
>         Extension: hello._linux_backend
>             sources:
>                 src/linbackend.c
>
> Note that instead of os(os_name), you can use flag(flag_name), where flags are boolean variables which can be user defined:
>
> http://github.com/cournape/toydist/blob/master/examples/simples/conditional/toysetup.info
>
> http://github.com/cournape/toydist/blob/master/examples/var_example/toysetup.info

I should defer to the description of extras in the setuptools documentation. It is only a few paragraphs long:

http://peak.telecommunity.com/DevCenter/setuptools#declaring-extras-optional-features-with-their-own-dependencies
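Roughly, an extra is declared in setup.py along these lines (a from-memory sketch; the names and the qt4 dependency list are just examples, the setuptools page above is the authoritative reference):

from setuptools import setup

setup(
    name="hello",
    version="1.0",
    install_requires=["numpy"],
    # "qt4" is an optional feature with its own dependency list;
    # users ask for it by requesting hello[qt4]
    extras_require={
        "qt4": ["PyQt4"],
    },
)

Installing with easy_install "hello[qt4]" then pulls in the extra dependencies, while a plain install does not, and other packages can in turn depend on hello[qt4] to get the same effect.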
Darren

From josef.pktd at gmail.com  Wed Dec 30 21:38:23 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 30 Dec 2009 21:38:23 -0500
Subject: [SciPy-dev] trunk failures
Message-ID: <1cd32cbb0912301838q21b5a8e1ge6a7fdb52ee0d8cd@mail.gmail.com>

just FYI, after upgrading to the numpy release

Josef

>>> numpy.version.version
'1.4.0'
>>> scipy.version.version
'0.8.0.dev6156'

======================================================================
ERROR: test_decomp.test_lapack_misaligned(, (array([[  1.734e-255,   8.189e-217,   4.025e-178,   1.903e-139,   9.344e-101,
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.py", line 183, in runTest
    self.test(*self.arg)
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6156.win32\programs\python25\lib\site-packages\scipy\linalg\tests\test_decomp.py", line 1106, in check_lapack_misaligned
    func(*a,**kwargs)
  File "\Programs\Python25\Lib\site-packages\scipy\linalg\basic.py", line 127, in solve
  File "C:\Programs\Python25\Lib\site-packages\numpy\lib\function_base.py", line 586, in asarray_chkfinite
    raise ValueError, "array must not contain infs or NaNs"
ValueError: array must not contain infs or NaNs

======================================================================
ERROR: test_mpmath.test_expi_complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.py", line 183, in runTest
    self.test(*self.arg)
  File "C:\Programs\Python25\Lib\site-packages\numpy\testing\decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6156.win32\programs\python25\lib\site-packages\scipy\special\tests\test_mpmath.py", line 46, in test_expi_complex
    dataset = np.array(dataset, dtype=np.complex_)
TypeError: a float is required

======================================================================
FAIL: test_lambertw.test_values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\programs\python25\lib\site-packages\nose-0.11.1-py2.5.egg\nose\case.py", line 183, in runTest
    self.test(*self.arg)
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6156.win32\programs\python25\lib\site-packages\scipy\special\tests\test_lambertw.py", line 80, in test_values
py", line 80, in test_values FuncData(w, data, (0,1), 2, rtol=1e-10, atol=1e-13).check() File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6 156.win32\programs\python25\lib\site-packages\scipy\special\tests\testutils.py", line 187, in check assert False, "\n".join(msg) AssertionError: Max |adiff|: 2.5797 Max |rdiff|: 3.81511 Bad results for the following points (in output 0): (-0.44800000000000001+0.40000000000000002j) 0j => ( -1.2370928928166736-1.6588828572971361j) != (-0.11855133765652383+0.665705343135 83418j) (rdiff 3.8151122286225245) ---------------------------------------------------------------------- Ran 4259 tests in 91.016s FAILED (KNOWNFAIL=10, SKIP=33, errors=3, failures=1) I'm also getting several special.gammaincinv warnings in stats.distributions, that I think are new, but they don't cause any test failure c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6156.win3 2\programs\python25\lib\site-packages\scipy\stats\distributions.py:2050: Special FunctionWarning: gammaincinv: failed to converge at (a, y) = (1.1844749658973861 , 0.083706058837762387): 3 val2 = special.gammaincinv(a,1.0-q) From nwagner at iam.uni-stuttgart.de Thu Dec 31 09:10:04 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 31 Dec 2009 15:10:04 +0100 Subject: [SciPy-dev] trunk failures In-Reply-To: <1cd32cbb0912301838q21b5a8e1ge6a7fdb52ee0d8cd@mail.gmail.com> References: <1cd32cbb0912301838q21b5a8e1ge6a7fdb52ee0d8cd@mail.gmail.com> Message-ID: Hi all, the segfault connected with lambertw vanished into thin air with >>> scipy.__version__ '0.8.0.dev6162' ====================================================================== ERROR: test_decomp.test_lapack_misaligned(, (array([[ 1.734e-255, 8.189e-217, 4.025e-178, 1.903e-139, 9.344e-101, ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/linalg/tests/test_decomp.py", line 1109, in check_lapack_misaligned func(*a,**kwargs) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/linalg/basic.py", line 127, in solve a1, b1 = map(asarray_chkfinite,(a,b)) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/function_base.py", line 585, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: test_random_real (test_basic.TestSingleIFFT) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/fftpack/tests/test_basic.py", line 205, in test_random_real assert_array_almost_equal (y1, x) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 761, in assert_array_almost_equal header='Arrays are not almost equal') File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 605, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 0.900900900901%) x: array([ 0.48450238 +7.51770290e-09j, 0.49108961 +1.85425488e-08j, 0.10608051 -6.62497541e-08j, 0.42776310 -3.53591778e-09j, 0.70493436 +5.46591927e-09j, 0.75120968 +4.13782644e-08j,... 
 y: array([ 0.48450235,  0.49108979,  0.10608055,  0.42776316,  0.70493436,
        0.75120968,  0.05061959,  0.94095784,  0.34811565,  0.622724  ,
        0.34106973,  0.50202626,  0.16626406,  0.29822108,  0.46144518,...

======================================================================
FAIL: test_lambertw.test_values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_lambertw.py", line 80, in test_values
    FuncData(w, data, (0,1), 2, rtol=1e-10, atol=1e-13).check()
  File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/testutils.py", line 187, in check
    assert False, "\n".join(msg)
AssertionError:
Max |adiff|: 1.77636e-15
Max |rdiff|: 2.06237e-16
Bad results for the following points (in output 0):
(-0.40000000000000002+0.40000000000000002j)  0j => (nan+nan*j) != (-0.10396515323290657+0.61899273315171632j)  (rdiff 0.0)
(-0.44800000000000001+0.40000000000000002j)  0j => (nan+nan*j) != (-0.11855133765652383+0.66570534313583418j)  (rdiff 0.0)
(-0.44800000000000001-0.40000000000000002j)  0j => (nan+nan*j) != (-0.11855133765652383-0.66570534313583418j)  (rdiff 0.0)

----------------------------------------------------------------------
Ran 4259 tests in 77.614s

FAILED (KNOWNFAIL=11, SKIP=18, errors=1, failures=2)

Happy New Year !!

Nils

From josef.pktd at gmail.com  Thu Dec 31 11:04:38 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 31 Dec 2009 11:04:38 -0500
Subject: [SciPy-dev] cephes docstrings editable?
Message-ID: <1cd32cbb0912310804g5553e599j48184a12622742b7@mail.gmail.com>

is it possible to edit the cephes doc strings in scipy.special?

for example:
http://docs.scipy.org/scipy/docs/scipy.special._cephes.chdtr/#scipy-special-chdtr

The doc editor allows editing, but doesn't show the source. Are edits propagated back to wherever the generated (?) doc strings are hiding?

Josef

From ralf.gommers at googlemail.com  Thu Dec 31 22:11:46 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Fri, 1 Jan 2010 11:11:46 +0800
Subject: [SciPy-dev] cephes docstrings editable?
In-Reply-To: <1cd32cbb0912310804g5553e599j48184a12622742b7@mail.gmail.com>
References: <1cd32cbb0912310804g5553e599j48184a12622742b7@mail.gmail.com>
Message-ID:

On Fri, Jan 1, 2010 at 12:04 AM, wrote:
> is it possible to edit the cephes doc strings in scipy.special?
>
> for example:
> http://docs.scipy.org/scipy/docs/scipy.special._cephes.chdtr/#scipy-special-chdtr
>
> The doc editor allows editing, but doesn't show the source. Are edits propagated back to wherever the generated (?) doc strings are hiding?

The patch generation indeed has a problem, so it would need some manual work when merging the wiki edit. But don't let that stop you from editing that docstring, the edits will land in svn at some point anyway. There's already an edit made to cephes.erf, http://docs.scipy.org/scipy/docs/scipy.special._cephes.erf/ , which currently results in the patch below.
Cheers,
Ralf

ERROR: scipy.special._cephes.erf: source location for docstring is not known

--- unknown-source-location/scipy.special._cephes.erf.py.old
+++ unknown-source-location/scipy.special._cephes.erf.py
@@ -1,3 +1,39 @@
 # scipy.special._cephes.erf: Source location for docstring not known
 def erf():
+    """
+    erf(x[, out])
+
+    Returns the error function of complex argument.
+
+    It is defined as :
+
+    ..math:: 2/\\sqrt(\\pi)*\\int_{t=0..x}(\\exp(-t^2))
+
+    Parameters
+    ----------
+    x : ndarray
+        the error function is computed for each item of x
+
+    Returns
+    -------
+    res : ndarray
+        the values of the error function at the given points x.
+
+    Notes
+    -----
+    The cumulative of the unit normal distribution is given by:
+
+    ..math:: \\Phi(z) = \\frac{1}{2}[1 + erf(\\frac{z}{\\sqrt{2}})]
+
+    References
+    ----------
+    .. [1] http://en.wikipedia.org/wiki/Error_function
+    .. [2] Milton Abramowitz and Irene A. Stegun, eds.
+        Handbook of Mathematical Functions with Formulas,
+        Graphs, and Mathematical Tables. New York: Dover,
+        1972. http://www.math.sfu.ca/~cbm/aands/page_297.htm
+
+    See : erfc, erfinv, erfcinv
+
+    """
     pass