From sberub at gmail.com  Tue May  1 08:05:20 2007
From: sberub at gmail.com (Simon Berube)
Date: Tue, 01 May 2007 12:05:20 -0000
Subject: [Numpy-discussion] simpliest way to check: array x is float,
	not integer
In-Reply-To: <463478A5.1050507@ukr.net>
References: <463478A5.1050507@ukr.net>
Message-ID: <1178021120.606999.223480@h2g2000hsg.googlegroups.com>

When using a numpy array, the type of the array is given by its "dtype"
attribute. So if your array is int then array.dtype will be 'int32'.
Numpy uses more complex data types than just ints and floats, so you
might want to check all the available data types.

Ex:

In [168]: a = array([1,2,3])

In [169]: a.dtype
Out[169]: dtype('int32')

OR (Array)

In [170]: b = array([1.,2.,3.])

In [171]: b.dtype
Out[171]: dtype('float64')

OR (Strings)

In [172]: c = array(['sdf', 'fff', 'tye'])

In [173]: c.dtype
Out[173]: dtype('|S3')  # This could also be 'Sxxx' where xxx is the
length of the largest string in the array

Alternatively, as a hackjob type check, you could also do an "isinstance"
check on the first element of the array since, unlike lists, arrays have
uniform elements all the way through.

Hope that helps,

- Simon Berube

P.S.: Someone please correct me if there's a better way.

On Apr 29, 6:51 am, dmitrey wrote:
> hi all,
> please inform me what is the simplest way to check whether the vector x
> that came to my func is float or integer. I.e. someone can pass to my
> func for example x0 = numpy.array([1, 0, 0]) and it can yield wrong
> unexpected results vs numpy.array([1.0, 0, 0]) .
> Thx, D.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discuss... at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From mpmusu at cc.usu.edu  Tue May  1 10:32:46 2007
From: mpmusu at cc.usu.edu (Mark.Miller)
Date: Tue, 01 May 2007 08:32:46 -0600
Subject: [Numpy-discussion] OverflowError: long too big to convert
Message-ID: <46374F8E.6060807@cc.usu.edu>

Can someone explain this?  I can't seem to coerce numpy into storing
large integer values.  I'm sure that I'm just overlooking something
simple...

 >>> import numpy
 >>> a='1'*300
 >>> type(a)
<type 'str'>
 >>> b=int(a)
 >>> type(b)
<type 'long'>
 >>> c=numpy.empty((2,2),long)
 >>> c[:]=b
Traceback (most recent call last):
  File "", line 1, in
    c[:]=b
OverflowError: long too big to convert
 >>>

Thanks,

-Mark

From robert.kern at gmail.com  Tue May  1 10:43:31 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 01 May 2007 09:43:31 -0500
Subject: [Numpy-discussion] OverflowError: long too big to convert
In-Reply-To: <46374F8E.6060807@cc.usu.edu>
References: <46374F8E.6060807@cc.usu.edu>
Message-ID: <46375213.3040005@gmail.com>

Mark.Miller wrote:
> Can someone explain this?  I can't seem to coerce numpy into storing
> large integer values.  I'm sure that I'm just overlooking something
> simple...
>
>  >>> import numpy
>  >>> a='1'*300
>  >>> type(a)
> <type 'str'>
>  >>> b=int(a)
>  >>> type(b)
> <type 'long'>
>  >>> c=numpy.empty((2,2),long)
>  >>> c[:]=b
> Traceback (most recent call last):
>   File "", line 1, in
>     c[:]=b
> OverflowError: long too big to convert
>  >>>

Use object arrays explicitly:

  c = numpy.empty((2, 2), dtype=object)

Using dtype=long gets interpreted as requesting the largest available integer
type (or maybe just int64, I'm not sure). Those aren't unbounded.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco From mpmusu at cc.usu.edu Tue May 1 10:59:29 2007 From: mpmusu at cc.usu.edu (Mark.Miller) Date: Tue, 01 May 2007 08:59:29 -0600 Subject: [Numpy-discussion] OverflowError: long too big to convert In-Reply-To: <46375213.3040005@gmail.com> References: <46374F8E.6060807@cc.usu.edu> <46375213.3040005@gmail.com> Message-ID: <463755D1.10201@cc.usu.edu> OK...so just for future reference...does a Numpy 'long' not directly correspond to a Python 'long'? Robert Kern wrote: > Mark.Miller wrote: >> Can someone explain this? I can't seem to coerce numpy into storing >> large integer values. I'm sure that I'm just overlooking something >> simple... >> >> >> >>> import numpy >> >>> a='1'*300 >> >>> type(a) >> >> >>> b=int(a) >> >>> type(b) >> >> >>> c=numpy.empty((2,2),long) >> >>> c[:]=b >> Traceback (most recent call last): >> File "", line 1, in >> c[:]=b >> OverflowError: long too big to convert >> >>> > > Use object arrays explicitly: > > c = numpy.empty((2, 2), dtype=object) > > Using dtype=long gets interpreted as requesting the largest available integer > type (or maybe just int64, I'm not sure). Those aren't unbounded. > From robert.kern at gmail.com Tue May 1 11:18:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 01 May 2007 10:18:33 -0500 Subject: [Numpy-discussion] OverflowError: long too big to convert In-Reply-To: <463755D1.10201@cc.usu.edu> References: <46374F8E.6060807@cc.usu.edu> <46375213.3040005@gmail.com> <463755D1.10201@cc.usu.edu> Message-ID: <46375A49.1030103@gmail.com> Mark.Miller wrote: > OK...so just for future reference...does a Numpy 'long' not directly > correspond to a Python 'long'? There is no Numpy "long", per se. There is a numpy.long symbol exposed, but it is just the builtin long type. However, numpy has no special support for Python's unbounded long type. You have to use object arrays if you want to hold them in numpy arrays. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Tue May 1 11:19:15 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 1 May 2007 09:19:15 -0600 Subject: [Numpy-discussion] OverflowError: long too big to convert In-Reply-To: <463755D1.10201@cc.usu.edu> References: <46374F8E.6060807@cc.usu.edu> <46375213.3040005@gmail.com> <463755D1.10201@cc.usu.edu> Message-ID: On 5/1/07, Mark.Miller wrote: > > OK...so just for future reference...does a Numpy 'long' not directly > correspond to a Python 'long'? No. A numpy long corresponds, more or less, to the C long long int. In [2]: array([1],dtype=long) Out[2]: array([1], dtype=int64) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue May 1 18:27:06 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 2 May 2007 00:27:06 +0200 Subject: [Numpy-discussion] simpliest way to check: array x is float, not integer In-Reply-To: <1178021120.606999.223480@h2g2000hsg.googlegroups.com> References: <463478A5.1050507@ukr.net> <1178021120.606999.223480@h2g2000hsg.googlegroups.com> Message-ID: <20070501222706.GE10731@mentat.za.net> On Tue, May 01, 2007 at 12:05:20PM -0000, Simon Berube wrote: > Alternatively, as a hackjob type check you could also do an > "isinstance" check on the first element of the array since, unlike > lists, arrays have uniform elements all the way through. 
Or use N.issubdtype(x.dtype, int) and N.issubdtype(x.dtype, float).

Cheers
Stéfan

From guillemborrell at gmail.com  Tue May  1 18:38:12 2007
From: guillemborrell at gmail.com (Guillem Borrell i Nogueras)
Date: Wed, 2 May 2007 00:38:12 +0200
Subject: [Numpy-discussion] ctypes TypeError. What I am doing wrong?
Message-ID: <200705020038.13050.guillemborrell@gmail.com>

Hi

I wrote the next function just to learn how ctypes works with numpy
arrays. I am not trying to write yet another wrapper to lapack, it's
just an experiment. (you can cut and paste the code)

from ctypes import c_int
from numpy import array,float64
from numpy.ctypeslib import load_library,ndpointer

def dgesv(N,A,B):
    # A=array([[1,2],[3,4]],dtype=float64,order='FORTRAN')
    # B=array([1,2],dtype=float64,order='FORTRAN')
    cN=c_int(N)
    NRHS=c_int(1)
    LDA=c_int(N)
    IPIV=(c_int * N)()
    LDB=c_int(N)
    INFO=c_int(1)

    lapack=load_library('liblapack.so','/usr/lib/')

    lapack.argtypes=[c_int,c_int,
                     ndpointer(dtype=float64,
                               ndim=2,
                               flags='FORTRAN'),
                     c_int,c_int,
                     ndpointer(dtype=float64,
                               ndim=1,
                               flags='FORTRAN'),
                     c_int,c_int]

    lapack.dgesv_(cN,NRHS,A,LDA,IPIV,B,LDB,INFO)

    return B

And...

In [2]: from lapackw import dgesv

In [3]: from numpy import array

In [4]: dgesv(2,array([[1,2],[3,4]]),array([1,2]))
...
ArgumentError: argument 3: exceptions.TypeError: Don't know how to
convert parameter 3

Maybe I am sooo dumb but I just don't know what I am doing wrong.
numpy-1.0.1, ctypes 1.0.1, python 2.4.3.

thanks

guillem

From stefan at sun.ac.za  Tue May  1 19:29:46 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 2 May 2007 01:29:46 +0200
Subject: [Numpy-discussion] ctypes TypeError. What I am doing wrong?
In-Reply-To: <200705020038.13050.guillemborrell@gmail.com>
References: <200705020038.13050.guillemborrell@gmail.com>
Message-ID: <20070501232946.GG10731@mentat.za.net>

Hi Guillem

On Wed, May 02, 2007 at 12:38:12AM +0200, Guillem Borrell i Nogueras wrote:
> I wrote the next function just to learn how ctypes works with numpy
> arrays. I am not trying to write yet another wrapper to lapack, it's
> just an experiment. (you can cut and paste the code)
>
> from ctypes import c_int
> from numpy import array,float64
> from numpy.ctypeslib import load_library,ndpointer
>
> def dgesv(N,A,B):
>     # A=array([[1,2],[3,4]],dtype=float64,order='FORTRAN')
>     # B=array([1,2],dtype=float64,order='FORTRAN')

Here, I'd use something like

A = asfortranarray(A.astype(float64))
B = asfortranarray(B.astype(float64))

>     cN=c_int(N)
>     NRHS=c_int(1)
>     LDA=c_int(N)
>     IPIV=(c_int * N)()
>     LDB=c_int(N)
>     INFO=c_int(1)
>
>     lapack=load_library('liblapack.so','/usr/lib/')
>
>     lapack.argtypes=[c_int,c_int,
>                      ndpointer(dtype=float64,
>                                ndim=2,
>                                flags='FORTRAN'),
>                      c_int,c_int,
>                      ndpointer(dtype=float64,
>                                ndim=1,
>                                flags='FORTRAN'),
>                      c_int,c_int]

This should be lapack.dgesv_.argtypes.

Cheers
Stéfan

From stefan at sun.ac.za  Tue May  1 19:38:01 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 2 May 2007 01:38:01 +0200
Subject: [Numpy-discussion] ctypes TypeError. What I am doing wrong?
In-Reply-To: <200705020038.13050.guillemborrell@gmail.com>
References: <200705020038.13050.guillemborrell@gmail.com>
Message-ID: <20070501233801.GH10731@mentat.za.net>

On Wed, May 02, 2007 at 12:38:12AM +0200, Guillem Borrell i Nogueras wrote:
>     lapack.argtypes=[c_int,c_int,
>                      ndpointer(dtype=float64,
>                                ndim=2,
>                                flags='FORTRAN'),
>                      c_int,c_int,
>                      ndpointer(dtype=float64,
>                                ndim=1,
>                                flags='FORTRAN'),
>                      c_int,c_int]

This also isn't correct, according to the dgesv documentation.
It should be

lapack.dgesv_.argtypes=[POINTER(c_int),POINTER(c_int),
                        ndpointer(dtype=np.float64,
                                  ndim=2,
                                  flags='FORTRAN'),
                        POINTER(c_int),
                        POINTER(c_int),
                        ndpointer(dtype=np.float64,
                                  ndim=2,
                                  flags='FORTRAN'),
                        POINTER(c_int),POINTER(c_int)]

I attach a working version of your script.

Cheers
Stéfan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lapack.py
Type: text/x-python
Size: 1047 bytes
Desc: not available
URL: 

From koara at atlas.cz  Tue May  1 20:36:19 2007
From: koara at atlas.cz (koara)
Date: Tue, 01 May 2007 17:36:19 -0700
Subject: [Numpy-discussion] reporting scipy.sparse bug
Message-ID: <1178066179.249420.164300@n76g2000hsh.googlegroups.com>

scipy 0.5.2, in scipy.sparse.lil_matrix.__mul__: the optimization for
multiplying by a zero scalar is flawed. A copy of the original matrix
is returned, rather than the correct zero matrix. Nasty bug because it
only manifests itself with special input (a zero scalar); took me some
time to nail my unexpected program output down to this :(

Easily resolved by rewriting the function, e.g.

    def __mul__(self, other):           # self * other
        if isscalarlike(other):
            if other == 0:
                # Multiply by zero: return the zero matrix
                return lil_matrix(self.shape, dtype = self.dtype)
            # Multiply this scalar by every element.
            new = self.copy()
            new.data = [[val * other for val in rowvals]
                        for rowvals in new.data]
            return new
        else:
            return self.dot(other)

From stefan at sun.ac.za  Wed May  2 04:58:24 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 2 May 2007 10:58:24 +0200
Subject: [Numpy-discussion] reporting scipy.sparse bug
In-Reply-To: <1178066179.249420.164300@n76g2000hsh.googlegroups.com>
References: <1178066179.249420.164300@n76g2000hsh.googlegroups.com>
Message-ID: <20070502085824.GK10731@mentat.za.net>

On Tue, May 01, 2007 at 05:36:19PM -0700, koara wrote:
> scipy 0.5.2, in scipy.sparse.lil_matrix.__mul__: the optimization for
> multiplying by a zero scalar is flawed. A copy of the original matrix
> is returned, rather than the correct zero matrix. Nasty bug because it
> only manifests itself with special input (a zero scalar); took me some
> time to nail my unexpected program output down to this :(

You're right -- that *is* nasty!  Fixed in r2951.  Thanks for the report!

http://projects.scipy.org/scipy/scipy/changeset/2951

Cheers
Stéfan

From ndbecker2 at gmail.com  Wed May  2 07:23:15 2007
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 02 May 2007 07:23:15 -0400
Subject: [Numpy-discussion] scipy for centos4.4?
Message-ID: 

Anyone know where to find usable rpms of scipy for centos4.4?

From markbak at gmail.com  Wed May  2 08:02:57 2007
From: markbak at gmail.com (mark)
Date: Wed, 02 May 2007 12:02:57 -0000
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
Message-ID: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>

Hello - I try to convert an input argument to an array of floats. Input
can be an array of integers or floats, or scalar integers or floats.
The latter seem to give a problem, as they return an array that doesn't
have a length.

I don't quite understand what b really is in the example below. Doesn't
every array have a length? Is this a bug or a feature?

Thanks, Mark

>>> b = asarray(3,'d')
>>> size(b)
1
>>> len(b)
Traceback (most recent call last):
  File "", line 1, in ?
    len(b)
TypeError: len() of unsized object

From markbak at gmail.com  Wed May  2 08:08:17 2007
From: markbak at gmail.com (mark)
Date: Wed, 02 May 2007 12:08:17 -0000
Subject: [Numpy-discussion] sort bug
In-Reply-To: 
References: <463047DF.7070700@pobox.com> <46306253.20803@ieee.org>
	<4630F89F.50003@ieee.org> <46341558.6040403@pobox.com>
	<46345FE9.6000704@pobox.com>
Message-ID: <1178107697.028571.35470@p77g2000hsh.googlegroups.com>

Sorry for joining this discussion late.

If you are only interested in the four largest eigenvalues, there are
more efficient algorithms out there than just eig(). There are
algorithms that just give you the N largest. Then again, I don't know
of any Python implementations, but I haven't looked,

Mark

On Apr 29, 11:04 pm, "Matthieu Brucher" wrote:
> 2007/4/29, Anton Sherwood :
> > > Anton Sherwood wrote:
> > > > I'm using eigenvectors of a graph's adjacency matrix as "topological"
> > > > coordinates of the graph's vertices as embedded in 3space (something I
> > > > learned about just recently). Whenever I've done this with a graph that
> > > > *does* have a good 3d embedding, using the first eigenvector results in
> > > > a flat model: apparently the first is not independent, at least in such
> > > > cases. . . .
> >
> > > Charles R Harris wrote:
> > > > . . . the embedding part sounds interesting,
> > > > I'll have to think about why that works.
> >
> > It's a mystery to me: I never did study enough matrix algebra to get a
> > feel for eigenvectors (indeed this is the first time I've had anything
> > to do with them).
> >
> > I'll happily share my code with anyone who wants to experiment with it.
>
> Seems to me that this is much like Isomap and classical multidimensional
> scaling, no?
>
> Matthieu
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discuss... at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From koara at atlas.cz  Wed May  2 08:16:57 2007
From: koara at atlas.cz (koara)
Date: Wed, 02 May 2007 05:16:57 -0700
Subject: [Numpy-discussion] bug in scipy.io.mmio
Message-ID: <1178108217.466221.150140@h2g2000hsg.googlegroups.com>

Hello, when saving a sparse matrix via scipy 0.5.2:
scipy.io.mmio.mmwrite(), an exception is thrown:

scipy.io.mmio.py, line 269: AttributeError: gettypecode not found

Changing line 269 to read

    typecode = a.dtype.char

fixes the problem.

From nwagner at iam.uni-stuttgart.de  Wed May  2 08:21:38 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 02 May 2007 14:21:38 +0200
Subject: [Numpy-discussion] bug in scipy.io.mmio
In-Reply-To: <1178108217.466221.150140@h2g2000hsg.googlegroups.com>
References: <1178108217.466221.150140@h2g2000hsg.googlegroups.com>
Message-ID: <46388252.2050109@iam.uni-stuttgart.de>

koara wrote:
> Hello, when saving a sparse matrix via scipy 0.5.2:
> scipy.io.mmio.mmwrite(), an exception is thrown:
>
> scipy.io.mmio.py, line 269: AttributeError: gettypecode not found
>
> Changing line 269 to read
>
>     typecode = a.dtype.char
>
> fixes the problem.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
Fixed in svn.
>>> from scipy import * >>> A = sparse.speye(10,10) >>> A <10x10 sparse matrix of type '' with 10 stored elements (space for 10) in Compressed Sparse Column format> >>> help (io.mmwrite) >>> io.mmwrite('A1',A) Nils -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: A1.mtx URL: From nwagner at iam.uni-stuttgart.de Wed May 2 08:29:07 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 May 2007 14:29:07 +0200 Subject: [Numpy-discussion] sort bug In-Reply-To: <1178107697.028571.35470@p77g2000hsh.googlegroups.com> References: <463047DF.7070700@pobox.com><46306253.20803@ieee.org> <4630F89F.50003@ieee.org> <46341558.6040403@pobox.com> <46345FE9.6000704@pobox.com> <1178107697.028571.35470@p77g2000hsh.googlegroups.com> Message-ID: <46388413.9050301@iam.uni-stuttgart.de> mark wrote: > Sorry for joining this discussion late. > If you are only interested in the four largest eigenvalues, there are > more efficient algorithms out there than just eig(). > There are algorithms that just give you the N largest. > Then again, I don't know of any Python implementations, but I haven't > looked, > Mark > > On Apr 29, 11:04 pm, "Matthieu Brucher" > wrote: > >> 2007/4/29, Anton Sherwood : >> >> >> >> >> >> >>>> Anton Sherwood wrote: >>>> >>>>> I'm using eigenvectors of a graph's adjacency matrix as "topological" >>>>> coordinates of the graph's vertices as embedded in 3space (something I >>>>> learned about just recently). Whenever I've done this with a graph >>>>> >>> that >>> >>>>> *does* have a good 3d embedding, using the first eigenvector results >>>>> >>> in >>> >>>>> a flat model: apparently the first is not independent, at least in >>>>> >>> such >>> >>>>> cases. . . . >>>>> >>> Charles R Harris wrote: >>> >>>> . . . the embedding part sounds interesting, >>>> I'll have to think about why that works. >>>> >>> It's a mystery to me: I never did study enough matrix algebra to get a >>> feel for eigenvectors (indeed this is the first time I've had anything >>> to do with them). >>> >>> I'll happily share my code with anyone who wants to experiment with it. >>> >> Seems to me that this is much like Isomap and class multidimensional >> scaling, no ? >> >> Matthieu >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discuss... at scipy.orghttp://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > There are several subroutines in LAPACK for this task. http://www.netlib.org/lapack/double/dsyevx.f http://www.netlib.org/lapack/double/dsygvx.f IIRC symeig provides a wrapper. See http://mdp-toolkit.sourceforge.net/symeig.html Nils From pgmdevlist at gmail.com Wed May 2 09:52:08 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 2 May 2007 09:52:08 -0400 Subject: [Numpy-discussion] problem: I get an array that doesn't have a length In-Reply-To: <1178107377.456345.293860@q75g2000hsh.googlegroups.com> References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com> Message-ID: <200705020952.08668.pgmdevlist@gmail.com> Mark, In your example: > >>> b = asarray(3,'d') b is really a numpy scalar, so it doesn't have a length. But it does have a size (1) and a ndim (0). 
If you need to have arrays with a length, you can force the array to have
a dimension of 1 with atleast_1d(b) or array(b,copy=False,ndmin=1).

From charlesr.harris at gmail.com  Wed May  2 10:00:58 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 2 May 2007 08:00:58 -0600
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <200705020952.08668.pgmdevlist@gmail.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
Message-ID: 

On 5/2/07, Pierre GM wrote:
>
> Mark,
> In your example:
> > >>> b = asarray(3,'d')
>
> b is really a numpy scalar, so it doesn't have a length. But it does
> have a size (1) and a ndim (0).
> If you need to have arrays with a length, you can force the array to
> have a dimension of 1 with atleast_1d(b) or array(b,copy=False,ndmin=1).

Or just array([1],'d')

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgmdevlist at gmail.com  Wed May  2 10:19:43 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 May 2007 10:19:43 -0400
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: 
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
Message-ID: <200705021019.43088.pgmdevlist@gmail.com>

On Wednesday 02 May 2007 10:00:58 Charles R Harris wrote:
> On 5/2/07, Pierre GM wrote:
> > Mark,
> Or just array([1],'d')

Except that in that case you need to know in advance that the input is a
scalar in order to put it in a list. The atleast_1d approach should work
better on any input.

From faltet at carabos.com  Wed May  2 11:39:29 2007
From: faltet at carabos.com (Francesc Altet)
Date: Wed, 02 May 2007 17:39:29 +0200
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <200705020952.08668.pgmdevlist@gmail.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
Message-ID: <1178120369.2878.12.camel@localhost.localdomain>

On Wed, 02 May 2007 at 09:52 -0400, Pierre GM wrote:
> Mark,
> In your example:
> > >>> b = asarray(3,'d')
>
> b is really a numpy scalar, so it doesn't have a length. But it does
> have a size (1) and a ndim (0).

Just one correction in terms of the current naming convention: b in this
case is a 0-dim array, which is a different beast than a numpy scalar
(although they behave pretty similarly). You can distinguish between them
in different ways, but one is by using type():

In [24]: type(numpy.asarray(3.))   # a 0-dim array
Out[24]: <type 'numpy.ndarray'>

In [25]: type(numpy.float64(3.))   # a numpy scalar
Out[25]: <type 'numpy.float64'>

Cheers,

-- 
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works,
www.carabos.com   |  I haven't tested it.
                  |  -- Donald Knuth

From pgmdevlist at gmail.com  Wed May  2 11:47:43 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 May 2007 11:47:43 -0400
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <1178120369.2878.12.camel@localhost.localdomain>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
	<1178120369.2878.12.camel@localhost.localdomain>
Message-ID: <200705021147.45195.pgmdevlist@gmail.com>

On Wednesday 02 May 2007 11:39:29 Francesc Altet wrote:
> On Wed, 02 May 2007 at 09:52 -0400, Pierre GM wrote:
> > In your example:
> > > >>> b = asarray(3,'d')
> >
> > b is really a numpy scalar, so it doesn't have a length. But it does
> > have a size (1) and a ndim (0).
>
> Just one correction in terms of the current naming convention: b in this
> case is a 0-dim array, which is a different beast than a numpy scalar
> (although they behave pretty similarly).

Quite true, my bad.

From markbak at gmail.com  Wed May  2 12:27:10 2007
From: markbak at gmail.com (mark)
Date: Wed, 02 May 2007 16:27:10 -0000
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <200705021147.45195.pgmdevlist@gmail.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
	<1178120369.2878.12.camel@localhost.localdomain>
	<200705021147.45195.pgmdevlist@gmail.com>
Message-ID: <1178123230.891619.291880@y5g2000hsa.googlegroups.com>

OK, so in my example I get a zero dimension array. Apparently a feature,
not a bug.

What I don't understand is why it isn't an array of length one. (Or: why
isn't it a bug?) Is there any use for a zero dimension array? I would
very much like it to be a one dimension array.

In my application I don't know whether an integer, float, or array gets
passed to the function. This is my syntax:

    def test(a):
        b = asarray(a,'d')
        do something to b....

I thought this makes b an array AND a float. The atleast_1d only makes
it an array, not necessarily a float.

Any more thoughts on how to do this cleanly? Any reason NOT to have
asarray(3,'d') return an array of length 1?

Thanks, Mark

On May 2, 5:47 pm, Pierre GM wrote:
> On Wednesday 02 May 2007 11:39:29 Francesc Altet wrote:
> > On Wed, 02 May 2007 at 09:52 -0400, Pierre GM wrote:
> > > In your example:
> > > > >>> b = asarray(3,'d')
> > >
> > > b is really a numpy scalar, so it doesn't have a length. But it does
> > > have a size (1) and a ndim (0).
> >
> > Just one correction in terms of the current naming convention: b in this
> > case is a 0-dim array, which is a different beast than a numpy scalar
> > (although they behave pretty similarly).
>
> Quite true, my bad.
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discuss... at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From robert.kern at gmail.com  Wed May  2 12:36:32 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 02 May 2007 11:36:32 -0500
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <1178123230.891619.291880@y5g2000hsa.googlegroups.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705020952.08668.pgmdevlist@gmail.com>
	<1178120369.2878.12.camel@localhost.localdomain>
	<200705021147.45195.pgmdevlist@gmail.com>
	<1178123230.891619.291880@y5g2000hsa.googlegroups.com>
Message-ID: <4638BE10.3090807@gmail.com>

mark wrote:
> OK, so in my example I get a zero dimension array. Apparently a feature,
> not a bug.
> What I don't understand is why it isn't an array of length one. (Or: why
> isn't it a bug?)

Because we need a way to get rank-0 arrays.

> Is there any use for a zero dimension array?

http://projects.scipy.org/scipy/numpy/wiki/ZeroRankArray

> I would very much like it to be a one dimension array.
> In my application I don't know whether an integer, float, or array gets
> passed to the function.
> This is my syntax:
>     def test(a):
>         b = asarray(a,'d')
>         do something to b....
> I thought this makes b an array AND a float. The atleast_1d only makes
> it an array, not necessarily a float.
> Any more thoughts on how to do this cleanly?

atleast_1d(a).astype(float)

Put it into a function if you want that to be more concise.

> Any reason NOT to have asarray(3,'d') return an array of length 1?

Because we need a way to get a rank-0 array.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From pgmdevlist at gmail.com  Wed May  2 12:41:58 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 May 2007 12:41:58 -0400
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <1178123230.891619.291880@y5g2000hsa.googlegroups.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705021147.45195.pgmdevlist@gmail.com>
	<1178123230.891619.291880@y5g2000hsa.googlegroups.com>
Message-ID: <200705021241.59554.pgmdevlist@gmail.com>

On Wednesday 02 May 2007 12:27:10 mark wrote:
> Any reason NOT to have asarray(3,'d') return an array of length 1?

Because then it would be "an array, not necessarily a float" ;)

You just noticed yourself that an array of dimension 1 is pretty much
like a list, while an array of dimension 0 is pretty much like a scalar.
Keeping that in mind, I'm sure you can see the advantage of 0D arrays:
they are indeed arrays AND scalars at the same time...

If you need your inputs to be arrays or scalars and to stay that way,
then the "asarray" method is best. You can just perform an additional
test on the dimension of the array before trying to access its length or
its size: a 0D array has a size of 1, a dimension of zero, and therefore
no length; a 1D array has a size of n, a dimension of 1, and a length
of n.
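[Aside: the suggestions in this thread combine into a small helper. A
minimal sketch, assuming numpy >= 1.0 (the name coerce_1d is made up for
illustration):

    import numpy as N

    def coerce_1d(a):
        # Coerce a scalar, list, or array input to a float64 array of
        # dimension at least 1, so that len() always works. Note that
        # atleast_1d lets arrays of higher dimension pass through as-is.
        return N.atleast_1d(N.asarray(a, dtype=N.float64))

    coerce_1d(3)           # array([ 3.]); len() == 1
    coerce_1d([1, 2, 3])   # array([ 1.,  2.,  3.])

If higher-dimensional input should instead be flattened to 1-d, the
reshape((-1,)) variant discussed below does that.]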
From dd55 at cornell.edu  Wed May  2 13:50:24 2007
From: dd55 at cornell.edu (Darren Dale)
Date: Wed, 2 May 2007 13:50:24 -0400
Subject: [Numpy-discussion] Possible Numeric bug with python-2.5
Message-ID: <200705021350.24267.dd55@cornell.edu>

I know Numeric is no longer supported, but I just upgraded to python-2.5
and now I'm having problems indexing Numeric arrays:

In [1]: import Numeric

In [2]: Numeric.arange(0,10)
Out[2]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

In [3]: Numeric.arange(0,10)[:10]
Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

In [4]: Numeric.arange(0,10)[:]
Out[4]: zeros((0,), 'l')

That last line should yield the same result as the previous command. Can
anyone else confirm this behavior? I'm running Numeric-24.2 on 64 bit
gentoo linux.

Thanks,
Darren

From fperez.net at gmail.com  Wed May  2 14:15:06 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 2 May 2007 12:15:06 -0600
Subject: [Numpy-discussion] Possible Numeric bug with python-2.5
In-Reply-To: <200705021350.24267.dd55@cornell.edu>
References: <200705021350.24267.dd55@cornell.edu>
Message-ID: 

On 5/2/07, Darren Dale wrote:
> I know Numeric is no longer supported, but I just upgraded to python-2.5
> and now I'm having problems indexing Numeric arrays:

Fine on 32-bit ubuntu, using Python 2.5:

In [4]: Numeric.arange(0,10)[:]
Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

In [5]: Numeric.__version__
Out[5]: '24.2'

But I think that Numeric isn't 100% 64-bit safe (perhaps when run
under Python 2.5).

cheers,

f

From pgmdevlist at gmail.com  Wed May  2 14:31:44 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 May 2007 14:31:44 -0400
Subject: [Numpy-discussion] Possible Numeric bug with python-2.5
In-Reply-To: 
References: <200705021350.24267.dd55@cornell.edu>
Message-ID: <200705021431.44282.pgmdevlist@gmail.com>

On Wednesday 02 May 2007 14:15:06 Fernando Perez wrote:
> On 5/2/07, Darren Dale wrote:
> > I know Numeric is no longer supported, but I just upgraded to
> > python-2.5 and now I'm having problems indexing Numeric arrays:
>
> Fine on 32-bit ubuntu, using Python 2.5:
> But I think that Numeric isn't 100% 64-bit safe (perhaps when run
> under Python 2.5).

My understanding is that Python 2.5 started using Py_ssize_t instead of
int, to increase the amount of data that can be handled on 64-bit
platforms. We just observed that with the timeseries package: an int in
the C API prevented a dictionary from being initialized properly on
Python 2.5, which prevented the package from being loaded properly.

From Chris.Barker at noaa.gov  Wed May  2 14:45:40 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 02 May 2007 11:45:40 -0700
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <200705021241.59554.pgmdevlist@gmail.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705021147.45195.pgmdevlist@gmail.com>
	<1178123230.891619.291880@y5g2000hsa.googlegroups.com>
	<200705021241.59554.pgmdevlist@gmail.com>
Message-ID: <4638DC54.4000807@noaa.gov>

Pierre GM wrote:
> If you need your inputs to be arrays or scalars and to stay that way

It didn't sound like the OP wanted that. I suspect that what is wanted
is for b to always be a 1-d array (i.e. a vector). To do that, I'd do:

import numpy as N

>>> def test(a):
...     b = N.asarray(a, dtype=N.float).reshape((-1,))
...     print b.shape
...
>>>
>>> test(5)
(1,)
>>> test((5,))
(1,)
>>> test((5,6,7,8))
(4,)

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From pgmdevlist at gmail.com  Wed May  2 14:58:37 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 May 2007 14:58:37 -0400
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <4638DC54.4000807@noaa.gov>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705021241.59554.pgmdevlist@gmail.com>
	<4638DC54.4000807@noaa.gov>
Message-ID: <200705021458.38846.pgmdevlist@gmail.com>

On Wednesday 02 May 2007 14:45:40 Christopher Barker wrote:
> Pierre GM wrote:
> > If you need your inputs to be arrays or scalars and to stay that way
>
> It didn't sound like the OP wanted that. I suspect that what is wanted
> is for b to always be a 1-d array (i.e. a vector). To do that, I'd do:

I beg to differ: your option is equivalent to (and I suspect a bit slower
than) atleast_1d, which is what the OP complained about...

From Chris.Barker at noaa.gov  Wed May  2 16:15:22 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 02 May 2007 13:15:22 -0700
Subject: [Numpy-discussion] problem: I get an array that doesn't have a length
In-Reply-To: <200705021458.38846.pgmdevlist@gmail.com>
References: <1178107377.456345.293860@q75g2000hsh.googlegroups.com>
	<200705021241.59554.pgmdevlist@gmail.com>
	<4638DC54.4000807@noaa.gov>
	<200705021458.38846.pgmdevlist@gmail.com>
Message-ID: <4638F15A.9020905@noaa.gov>

Pierre GM wrote:
>> It didn't sound like the OP wanted that. I suspect that what is wanted
>> is for b to always be a 1-d array (i.e. a vector). To do that, I'd do:
>
> I beg to differ: your option is equivalent to (and I suspect a bit
> slower than) atleast_1d, which is what the OP complained about...

There is a difference, but I don't know what the OP wanted.

My method (N.asarray(a, dtype=N.float).reshape((-1,))) will ALWAYS
create a 1-d array, even if the original is greater than 1-d;
atleast_1d will let an n-d array pass through. It also doesn't look
like you can specify a data type with atleast_1d.

I don't think my approach is slower, as it handles the conversion to
float, and reshape doesn't copy the data if it doesn't need to. But I
doubt speed makes any difference here anyway.

-CHB

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From boyle5 at llnl.gov  Wed May  2 17:36:27 2007
From: boyle5 at llnl.gov (James Boyle)
Date: Wed, 2 May 2007 14:36:27 -0700
Subject: [Numpy-discussion] building pynetcdf-0.7 on OS X 10.4 intel
Message-ID: 

OS X 10.4.9, python 2.5, numpy 1.0.3.dev3726, netcdf 3.6.2

I am trying to build pynetcdf-0.7, so as to be able to read netCDF
files from python 2.5/numpy. So far I have not had success. Enclosed is
the dump of my most recent failure - the complaint comes from the
linker, since the linker has both a powerPC (ppc) and an intel (i386)
flag. I guess that what is being attempted is a universal build (both
ppc and i386) but failing. Apparently my netcdf build was just i386 -
although I did nothing special.

I am at a loss as to what to do next - this might be a question for a
Mac specialist - but I figured I'd try here first.
--Jim [krait:~/Desktop/pynetcdf-0.7] boyle5% python setup.py build running build running config_fc running build_src building extension "pynetcdf._netcdf" sources building data_files sources running build_py creating build creating build/lib.macosx-10.3-fat-2.5 creating build/lib.macosx-10.3-fat-2.5/pynetcdf copying ./__init__.py -> build/lib.macosx-10.3-fat-2.5/pynetcdf copying ./NetCDF.py -> build/lib.macosx-10.3-fat-2.5/pynetcdf copying ./netcdf_demo.py -> build/lib.macosx-10.3-fat-2.5/pynetcdf running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'pynetcdf._netcdf' extension compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fPIC -fno-common -dynamic -DNDEBUG -g -O3 -Wall - Wstrict-prototypes creating build/temp.macosx-10.3-fat-2.5 compile options: '-I/Users/boyle5/netcdf-3.6.2/include -I/Library/ Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/ numpy/core/include -I/Library/Frameworks/Python.framework/Versions/ 2.5/include/python2.5 -c' gcc: ./_netcdf.c ./_netcdf.c: In function 'PyNetCDFFile_Close':./_netcdf.c: In function 'PyNetCDFFile_Close': ./_netcdf.c:965: warning: passing argument 2 of 'PyDict_Next' from incompatible pointer type./_netcdf.c:965: warning: passing argument 2 of 'PyDict_Next' from incompatible pointer type ./_netcdf.c: In function 'PyNetCDFVariableObject_subscript': ./_netcdf.c:1822: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1822: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1822: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c: In function 'PyNetCDFVariableObject_subscript': ./_netcdf.c:1822: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1822: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1822: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1842: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c: In function 'PyNetCDFVariableObject_ass_subscript': ./_netcdf.c:1944: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1944: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1944: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c: In function 'PyNetCDFVariableObject_ass_subscript': ./_netcdf.c:1944: warning: passing argument 3 of 
'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1944: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1944: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 3 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 4 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c:1964: warning: passing argument 5 of 'PySlice_GetIndices' from incompatible pointer type ./_netcdf.c: At top level: ./_netcdf.c:2009: warning: initialization from incompatible pointer type ./_netcdf.c:2011: warning: 'intargfunc' is deprecated ./_netcdf.c:2011: warning: initialization from incompatible pointer type ./_netcdf.c:2012: warning: 'intargfunc' is deprecated ./_netcdf.c:2012: warning: initialization from incompatible pointer type ./_netcdf.c:2013: warning: 'intintargfunc' is deprecated ./_netcdf.c:2013: warning: initialization from incompatible pointer type ./_netcdf.c:2014: warning: initialization from incompatible pointer type ./_netcdf.c:2015: warning: initialization from incompatible pointer type ./_netcdf.c:2019: warning: initialization from incompatible pointer type ./_netcdf.c: At top level: ./_netcdf.c:2009: warning: initialization from incompatible pointer type ./_netcdf.c:2011: warning: 'intargfunc' is deprecated ./_netcdf.c:2011: warning: initialization from incompatible pointer type ./_netcdf.c:2012: warning: 'intargfunc' is deprecated ./_netcdf.c:2012: warning: initialization from incompatible pointer type ./_netcdf.c:2013: warning: 'intintargfunc' is deprecated ./_netcdf.c:2013: warning: initialization from incompatible pointer type ./_netcdf.c:2014: warning: initialization from incompatible pointer type ./_netcdf.c:2015: warning: initialization from incompatible pointer type ./_netcdf.c:2019: warning: initialization from incompatible pointer type ./_netcdf.c: In function 'PyNetCDFVariable_WriteArray': ./_netcdf.c:1626: warning: 'lastloop' may be used uninitialized in this function ./_netcdf.c: In function 'PyNetCDFVariable_WriteArray': ./_netcdf.c:1626: warning: 'lastloop' may be used uninitialized in this function gcc -arch i386 -arch ppc -isysroot /Developer/SDKs/MacOSX10.4u.sdk - bundle -undefined dynamic_lookup build/temp.macosx-10.3-fat-2.5/ _netcdf.o -L/Users/boyle5/netcdf-3.6.2/lib -lnetcdf -o build/ lib.macosx-10.3-fat-2.5/pynetcdf/_netcdf.so /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /Users/boyle5/netcdf-3.6.2/lib/libnetcdf.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) [krait:~/Desktop/pynetcdf-0.7] boyle5% python ActivePython 2.5.0.0 (ActiveState Software Inc.) based on Python 2.5 (r25:51908, Mar 9 2007, 17:40:37) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.0.3.dev3726' From roberto at dealmeida.net Wed May 2 18:37:35 2007 From: roberto at dealmeida.net (Rob De Almeida) Date: Wed, 02 May 2007 19:37:35 -0300 Subject: [Numpy-discussion] building pynetcdf-0.7 on OS X 10.4 intel In-Reply-To: References: Message-ID: <463912AF.1080609@dealmeida.net> James Boyle wrote: > I am trying to build pynetcdf-0.7, so as to be able to read netCDF > files from python 2.5/numpy . 
Sorry for the shameless plug, but if you only want to read netCDF files
from numpy you can use a pure python netcdf reader that I wrote, called
pupynere:

http://cheeseshop.python.org/pypi/pupynere/

Best,

--Rob

From aisaac at american.edu  Wed May  2 23:07:19 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 2 May 2007 23:07:19 -0400
Subject: [Numpy-discussion] building pynetcdf-0.7 on OS X 10.4 intel
In-Reply-To: <463912AF.1080609@dealmeida.net>
References: <463912AF.1080609@dealmeida.net>
Message-ID: 

On Wed, 02 May 2007, Rob De Almeida apparently wrote:
> http://cheeseshop.python.org/pypi/pupynere/

I do not think you were "shameless" enough.

    Pupynere is a PUre PYthon NEtcdf REader. It allows read-access to
    netCDF files using the same syntax as the Scientific.IO.NetCDF
    module. Even though it's written in Python, the module is up to 40%
    faster than Scientific.IO.NetCDF and pynetcdf.

Mentioning the MIT license is also worthwhile.

Cheers,
Alan Isaac

From faltet at carabos.com  Thu May  3 03:13:09 2007
From: faltet at carabos.com (Francesc Altet)
Date: Thu, 03 May 2007 09:13:09 +0200
Subject: [Numpy-discussion] Possible Numeric bug with python-2.5
In-Reply-To: <200705021431.44282.pgmdevlist@gmail.com>
References: <200705021350.24267.dd55@cornell.edu>
	<200705021431.44282.pgmdevlist@gmail.com>
Message-ID: <1178176389.2545.3.camel@localhost.localdomain>

On Wed, 02 May 2007 at 14:31 -0400, Pierre GM wrote:
> On Wednesday 02 May 2007 14:15:06 Fernando Perez wrote:
> > On 5/2/07, Darren Dale wrote:
> > > I know Numeric is no longer supported, but I just upgraded to
> > > python-2.5 and now I'm having problems indexing Numeric arrays:
> >
> > Fine on 32-bit ubuntu, using Python 2.5:
> > But I think that Numeric isn't 100% 64-bit safe (perhaps when run
> > under Python 2.5).
>
> My understanding is that Python 2.5 started using Py_ssize_t instead of
> int, to increase the amount of data that can be handled on 64-bit
> platforms. We just observed that with the timeseries package: an int in
> the C API prevented a dictionary from being initialized properly on
> Python 2.5, which prevented the package from being loaded properly.

Yeah, that's probably the cause of the malfunction on 64-bit processors.
I've also been bitten by this:

http://projects.scipy.org/pipermail/numpy-discussion/2006-November/024428.html

So, Numeric is currently unusable for Python 2.5 on 64-bit platforms and
probably always will be.

Cheers,

-- 
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works,
www.carabos.com   |  I haven't tested it.
                  |  -- Donald Knuth
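[Aside: to make the pupynere suggestion above concrete, reading a file
would look something like this minimal sketch, assuming the
Scientific.IO.NetCDF-compatible interface Alan describes (the file and
variable names are made up for illustration):

    from pupynere import NetCDFFile

    f = NetCDFFile('data.nc')        # open for reading
    print f.variables.keys()         # names of the variables in the file
    temp = f.variables['temp'][:]    # read a whole variable as an array
    f.close()

Being pure Python, this also sidesteps the universal-build problem that
started the thread.]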
From washakie at gmail.com  Tue May  1 13:23:32 2007
From: washakie at gmail.com (John Washakie)
Date: Tue, 1 May 2007 19:23:32 +0200
Subject: [Numpy-discussion] linux cluster installation
Message-ID: 

What is the best way to install numpy and ultimately scipy on a cluster
of linux machines so that you just install it once? I imagine you have
to be root to do the installation for everyone, but what if you just
want to install it to your home directory?

I have done:

[wash at wyo ~]$ setup.py install prefix=~

...

[wash at wyo ~]$ python
Python 2.4.3 (#1, Jun 13 2006, 16:41:18)
[GCC 4.0.2 20051125 (Red Hat 4.0.2-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
Traceback (most recent call last):
  File "", line 1, in ?
ImportError: No module named numpy
>>>

Suggestions?

From ckkart at hoc.net  Thu May  3 19:38:22 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 04 May 2007 08:38:22 +0900
Subject: [Numpy-discussion] linux cluster installation
In-Reply-To: 
References: 
Message-ID: 

John Washakie wrote:
> What is the best way to install numpy and ultimately scipy on a cluster
> of linux machines so that you just install it once? I imagine you have
> to be root to do the installation for everyone, but what if you just
> want to install it to your home directory?
>
> I have done:
>
> [wash at wyo ~]$ setup.py install prefix=~
>
> ...
>
> [wash at wyo ~]$ python
> Python 2.4.3 (#1, Jun 13 2006, 16:41:18)
> [GCC 4.0.2 20051125 (Red Hat 4.0.2-8)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from numpy import *
> Traceback (most recent call last):
>   File "", line 1, in ?
> ImportError: No module named numpy

You need to set PYTHONPATH to point at your site-packages dir. So in
your case it should be something like:

    ~/lib/python2.X/site-packages

Christian

From oliphant.travis at ieee.org  Thu May  3 21:42:01 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 03 May 2007 19:42:01 -0600
Subject: [Numpy-discussion] PyArray_FromDimsAndData weird seg fault
In-Reply-To: <4632501F.9020807@nd.edu>
References: <4632501F.9020807@nd.edu>
Message-ID: <463A8F69.6080208@ieee.org>

Trevor M Cickovski wrote:
> Hi,
>
> I'm using SWIG to wrap a function that calls the NumPy routine
> PyArray_FromDimsAndData.
>
>     PyObject* toArray() {
>         int dims = 5;
>         double* myH;
>         myH = (double*)malloc(dims*sizeof(double));
>         myH[0] = 0; myH[1] = 1; myH[2] = 2; myH[3] = 3; myH[4] = 4;
>         return PyArray_FromDimsAndData(1,&dims,PyArray_FLOAT,(char*)myH);
>     }
>
> However, for some reason when I call this from Python, the
> PyArray_FromDimsAndData() seg faults every time. Does anybody see
> something wrong with my use of the function itself?

Did you call import_array() in your module initialization function?

-Travis

From matthieu.brucher at gmail.com  Fri May  4 03:16:34 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 4 May 2007 09:16:34 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
Message-ID: 

Hi,

I'm trying to test my code on several platforms, Windows and Linux, and
I'm using some data files that were saved with a tofile(sep=' ') under
Linux.
Those files can be loaded without a problem under Linux, but under
Windows with the latest numpy these data cannot be loaded: some numbers
are not recognized (e.g. +inf or -inf).

Is this a known behaviour? How could I load these files correctly under
both platforms? (I don't want to save them in binary form; I'm using the
files for other purposes.)

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david at ar.media.kyoto-u.ac.jp  Fri May  4 03:24:15 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 04 May 2007 16:24:15 +0900
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
In-Reply-To: 
References: 
Message-ID: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
> Hi,
>
> I'm trying to test my code on several platforms, Windows and Linux,
> and I'm using some data files that were saved with a tofile(sep=' ')
> under Linux. Those files can be loaded without a problem under Linux,
> but under Windows with the latest numpy these data cannot be loaded:
> some numbers are not recognized (e.g. +inf or -inf).

tofile is using pickling, right? If you dump to a text file, there may
be a problem because of the end of line?

> Is this a known behaviour? How could I load these files correctly
> under both platforms? (I don't want to save them in binary form; I'm
> using the files for other purposes.)

Personally, I always use pytables: the file format (hdf5) is binary, but
it has a standard C api (and C++/java as well), which means you can
access those files pretty much anywhere, and it is designed to be cross
platform.

David

From matthieu.brucher at gmail.com  Fri May  4 04:01:16 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 4 May 2007 10:01:16 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
In-Reply-To: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp>
Message-ID: 

> tofile is using pickling, right? If you dump to a text file, there may
> be a problem because of the end of line?

I'm not using the binary form, so it's not pickling. Example of the
first line of my data file:

0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf
6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf

> > Is this a known behaviour? How could I load these files correctly
> > under both platforms? (I don't want to save them in binary form; I'm
> > using the files for other purposes.)
>
> Personally, I always use pytables: the file format (hdf5) is binary,
> but it has a standard C api (and C++/java as well), which means you
> can access those files pretty much anywhere, and it is designed to be
> cross platform.

Well, that would be a solution, but for the moment I can't use it. I use
a lot of computers, and I can't install the packages I want or their
latest versions - for instance Feisty has only numpy 1.0.1 and scipy
0.5.1...

Besides, pure beginners, who do not know of the - excellent - pytables,
will use a text format - simple to check the results visually, for
instance - and will run into this behaviour :(

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za  Fri May  4 05:23:17 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Fri, 4 May 2007 11:23:17 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
In-Reply-To: 
References: 
Message-ID: <20070504092316.GA23778@mentat.za.net>

Hi Matthieu

On Fri, May 04, 2007 at 09:16:34AM +0200, Matthieu Brucher wrote:
> I'm trying to test my code on several platforms, Windows and Linux, and I'm
> using some data files that were saved with a tofile(sep=' ') under Linux.
> Those files can be loaded without a problem under Linux, but under Windows
> with the latest numpy these data cannot be loaded: some numbers are not
> recognized (e.g. +inf or -inf).
> Is this a known behaviour? How could I load these files correctly under
> both platforms? (I don't want to save them in binary form; I'm using the
> files for other purposes.)

Please file a ticket at

http://projects.scipy.org/scipy/numpy/newticket

along with a short code snippet to reproduce the problem. That way we
won't forget about it.

Cheers
Stéfan

From matthieu.brucher at gmail.com  Fri May  4 05:27:33 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 4 May 2007 11:27:33 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
In-Reply-To: <20070504092316.GA23778@mentat.za.net>
References: <20070504092316.GA23778@mentat.za.net>
Message-ID: 

> Please file a ticket at
>
> http://projects.scipy.org/scipy/numpy/newticket
>
> along with a short code snippet to reproduce the problem. That way we
> won't forget about it.
>
> Cheers
> Stéfan

Thank you, I didn't know whether it was known or not... I'll post a
ticket as soon as possible.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Chris.Barker at noaa.gov  Fri May  4 12:44:02 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 04 May 2007 09:44:02 -0700
Subject: [Numpy-discussion] Difference in the number of elements in a
	fromfile() between Windows and Linux
In-Reply-To: 
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp>
Message-ID: <463B62D2.9060704@noaa.gov>

Matthieu Brucher wrote:
> Example of the first line of my data file:
> 0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf
> 6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf

I'm pretty sure fromfile() is using the standard C fscanf(). That means
that whether it understands "inf" depends on the C lib. I'm guessing
that the MS libc doesn't understand the same spelling of "inf" that the
gcc one does. There may indeed be no literal for the IEEE Inf.

Indeed, the Python built-in "float" relies on libc too, and on OS-X
(glibc) I get:

>>> float("inf")
inf

On Windows (standard python.org build, compiled with MSC), I get

ValueError: invalid literal for float(): inf

"Inf" gives me the same thing.

It's too bad that C isn't just a little bit more standardized!

In short, I don't know that this is a bug. It is a missing feature, but
it may be hard to get someone to write the code to account for the
limited fscanf() in fromfile().

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From matthieu.brucher at gmail.com  Fri May  4 13:22:53 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 4 May 2007 19:22:53 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux
In-Reply-To: <463B62D2.9060704@noaa.gov>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov>
Message-ID: 

> In short, I don't know that this is a bug. It is a missing feature, but
> it may be hard to get someone to write the code to account for the
> limited fscanf() in fromfile().

That's what I was thinking :(

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za  Fri May  4 18:34:30 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 5 May 2007 00:34:30 +0200
Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux
In-Reply-To: <463B62D2.9060704@noaa.gov>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov>
Message-ID: <20070504223430.GK23778@mentat.za.net>

On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote:
> Matthieu Brucher wrote:
> > Example of the first line of my data file:
> > 0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf
> > 6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf
>
> I'm pretty sure fromfile() is using the standard C fscanf(). That means
> that whether it understands "inf" depends on the C lib. I'm guessing
> that the MS libc doesn't understand the same spelling of "inf" that the
> gcc one does. There may indeed be no literal for the IEEE Inf.

It would be interesting to see how Inf and NaN (vs. inf and nan) are interpreted under Windows.

Are there any free fscanf implementations out there that we can include with numpy?

Cheers
Stéfan

From Chris.Barker at noaa.gov  Fri May  4 18:45:32 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 04 May 2007 15:45:32 -0700
Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux
In-Reply-To: <20070504223430.GK23778@mentat.za.net>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov> <20070504223430.GK23778@mentat.za.net>
Message-ID: <463BB78C.2070702@noaa.gov>

Stefan van der Walt wrote:
> It would be interesting to see how Inf and NaN (vs. inf and nan) are
> interpreted under Windows.

Neither works. I suspect there is no literal that works with the lib the python.org python is built with. (By the way, I'm testing with 2.4, but I think that's the same compiler as 2.5.)

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Fri May 4 18:43:00 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 04 May 2007 17:43:00 -0500 Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux In-Reply-To: <20070504223430.GK23778@mentat.za.net> References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov> <20070504223430.GK23778@mentat.za.net> Message-ID: <463BB6F4.8020202@gmail.com> Stefan van der Walt wrote: > On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote: >> Matthieu Brucher wrote: >>> Example of the first line of my data file : >>> 0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf >>> 6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf >> I'm pretty sure fromfile() is using the standard C fscanf(). That means >> that whether in understands "inf" depends on the C lib. I'm guessing >> that the MS libc doesn't understand the same spelling of "inf" that the >> gcc one does. There may indeed be no literal for the IEEE Inf. > > It would be interesting to see how Inf and NaN (vs. inf and nan) are > interpreted under Windows. I'm pretty sure that they are also rejected. "1.#INF" and "1.#QNAN" might be accepted though since that's what ftoa() gives for those quantities. > Are there any free fscanf implementations out there that we can > include with numpy? This might be easy enough to adapt: http://www.python.org/ftp/python/contrib-09-Dec-1999/Misc/sscanfmodule.c.Z -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Fri May 4 18:52:53 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 4 May 2007 18:52:53 -0400 Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux In-Reply-To: <20070504223430.GK23778@mentat.za.net> References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov> <20070504223430.GK23778@mentat.za.net> Message-ID: <20070504225253.GA30544@arbutus.physics.mcmaster.ca> On Sat, May 05, 2007 at 12:34:30AM +0200, Stefan van der Walt wrote: > On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote: > > Matthieu Brucher wrote: > > > Example of the first line of my data file : > > > 0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf > > > 6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf > > > > I'm pretty sure fromfile() is using the standard C fscanf(). That means > > that whether in understands "inf" depends on the C lib. I'm guessing > > that the MS libc doesn't understand the same spelling of "inf" that the > > gcc one does. There may indeed be no literal for the IEEE Inf. > > It would be interesting to see how Inf and NaN (vs. inf and nan) are > interpreted under Windows. > > Are there any free fscanf implementations out there that we can > include with numpy? There's no need; all that fscanf is being used for is with the single format string "%d" (and variants for each type). So that's easily replaced with type-specific functions (strtol, strtod, etc.). For the floating-point types, checking first if the string matches inf or nan patterns would be sufficient. 
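For anyone bitten by this in the meantime, a pure-Python fallback is easy to write. A minimal sketch, assuming whitespace-separated text like Matthieu's file -- load_text_floats is a made-up name for illustration, not a numpy function -- which trades fromfile's speed for portability:

import numpy

def load_text_floats(path, dtype=numpy.float64):
    # Map the spellings that the C library's strtod()/fscanf() may reject.
    # numpy.inf and numpy.nan are used instead of float("inf"), which, as
    # noted above, also fails on Windows.
    special = {'inf': numpy.inf, '+inf': numpy.inf, 'infinity': numpy.inf,
               '-inf': -numpy.inf, '-infinity': -numpy.inf, 'nan': numpy.nan}
    values = []
    for token in open(path).read().split():
        try:
            values.append(float(token))
        except ValueError:
            # raises KeyError for anything genuinely unparseable
            values.append(special[token.lower()])
    return numpy.array(values, dtype=dtype)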
There's a bug in fromfile anyway: because it passes the separator directly to fscanf to skip over it, using a % in your separator will not work.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From Chris.Barker at noaa.gov  Fri May  4 18:57:49 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 04 May 2007 15:57:49 -0700
Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux
In-Reply-To: <463BB6F4.8020202@gmail.com>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov> <20070504223430.GK23778@mentat.za.net> <463BB6F4.8020202@gmail.com>
Message-ID: <463BBA6D.1040406@noaa.gov>

Robert Kern wrote:
> "1.#INF" and "1.#QNAN" might be
> accepted though since that's what ftoa() gives for those quantities.

Not in Python: float("1.#INF") gives a ValueError. That may or may not reflect what *scanf does, but I suspect it does.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From cookedm at physics.mcmaster.ca  Fri May  4 19:12:15 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 4 May 2007 19:12:15 -0400
Subject: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux
In-Reply-To: <463BB6F4.8020202@gmail.com>
References: <463ADF9F.2050004@ar.media.kyoto-u.ac.jp> <463B62D2.9060704@noaa.gov> <20070504223430.GK23778@mentat.za.net> <463BB6F4.8020202@gmail.com>
Message-ID: <20070504231215.GB30544@arbutus.physics.mcmaster.ca>

On Fri, May 04, 2007 at 05:43:00PM -0500, Robert Kern wrote:
> Stefan van der Walt wrote:
> > On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote:
> >> Matthieu Brucher wrote:
> >>> Example of the first line of my data file:
> >>> 0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf
> >>> 6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf
> >> I'm pretty sure fromfile() is using the standard C fscanf(). That means
> >> that whether it understands "inf" depends on the C lib. I'm guessing
> >> that the MS libc doesn't understand the same spelling of "inf" that the
> >> gcc one does. There may indeed be no literal for the IEEE Inf.
> >
> > It would be interesting to see how Inf and NaN (vs. inf and nan) are
> > interpreted under Windows.
>
> I'm pretty sure that they are also rejected. "1.#INF" and "1.#QNAN" might be
> accepted though since that's what ftoa() gives for those quantities.

So, from some googling, here are the "special" strings for floats, as regular expressions. The case of the letters doesn't seem to matter.

positive infinity:
    [+]?inf
    [+]?Infinity
    1\.#INF
negative infinity:
    -Inf
    -Infinity
    -1\.#INF
not a number:
    s?NaN[0-9]+
    -1\.#IND
    1\.#QNAN (Windows quiet NaN?)

(The 's' is for signalling NaNs, the digits are for diagnostic information. See the decimal spec at http://www2.hursley.ibm.com/decimal/daconvs.html)

There may be more. If we wish to support these, then writing our own parser for them is probably the only way. I'll do it, I just need a complete list of what we want to accept :-)

On the other side of the coin, I'd argue the string representations of our float scalars should also be platform-agnostic (standardising on Inf and NaN would be best, I think).

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
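Turning that list into code is mostly mechanical; here is a rough Python sketch of the kind of matcher a C implementation would mirror (parse_float is a made-up name, and the pattern list is only as complete as the spellings collected above):

import re
import numpy

# one pattern per class of special spellings listed above (case-insensitive)
_SPECIALS = [
    (re.compile(r'[+]?(inf(inity)?|1\.#INF)$', re.IGNORECASE), numpy.inf),
    (re.compile(r'-(inf(inity)?|1\.#INF)$', re.IGNORECASE), -numpy.inf),
    (re.compile(r'(s?nan[0-9]*|-1\.#IND|1\.#QNAN)$', re.IGNORECASE), numpy.nan),
]

def parse_float(token):
    # try the platform's own parser first, then the special spellings
    try:
        return float(token)
    except ValueError:
        for pattern, value in _SPECIALS:
            if pattern.match(token):
                return value
        raise ValueError('invalid literal for float(): %s' % token)

Matching is only attempted after the platform's own float() rejects the token, so the common path stays fast.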
From wbaxter at gmail.com  Sat May  5 01:44:02 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Sat, 5 May 2007 14:44:02 +0900
Subject: [Numpy-discussion] matlab vs. python question
In-Reply-To: <200704282037.39721.benjamin@decideur.info>
References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info>
Message-ID: 

On 4/29/07, Benjamin Thyreau wrote:
> Le Samedi 28 Avril 2007 20:03, Simon Berube a écrit:
> > (...)
> > On the other hand, if you are more interested in small
> > projects where speed of development is more important than long term
> > sustainability of the code Matlab is probably better. This is usually
> > the case in a research environment, at least until you figure out
> > exactly what you want to do, then switch to python.
> >
> > I hope this helps, (...)
>
> I guess the point of the question was, in what way should numpy/scipy be
> improved in order to get the same usability in those cases, according to
> you? It's not as obvious as you make it sound, and there are also for sure
> some advantages in the ipython/scipy stack over matlab's prompt,
> usability-wise. Any experience which helps in improving it is thus useful.

I like ipython's post-fix ? help. I was back in Matlab a couple of days ago, and that's one of the few things I missed about my Numpy setup. I often find I'm typing a command and then realize I don't recall what the arguments are, so then I can just type one extra '?' to get the help.

Of course, even better would be if ipython were in a GUI, and docstrings could just be proactively prefetched and displayed in another pane while I type, or argument lists could be popped up like in PyCrust.

Sadly, other than that I can't think of a whole lot about the numpy environment that I like better than the Matlab environment (just talking about the environment here -- not the language). Matlab's debugger is very nicely integrated with the prompt, and plots just work, always, and are fast, without having to think about ion/ioff stuff or show(). Never need any prayers to the god of GUI event loops to keep plots from freezing things up with Matlab. It's also refreshing to just be able to type "zeros" and "rand" and not have to worry about namespace prefixes.

On the other hand, I think I yelled a profanity out loud when Matlab gave me the "can't define functions in a script" error. I'd completely forgotten that particular bit of Matlab stupidity.

There are probably ways to make the numpy environment more like the above. But there needs to be a 'button' that sets everything up like that, or most people just won't find it. For instance, maybe ipython could install a "Numpy Interactive" link that uses the -pylab flag and also pulls the numpy namespace into the global one. For debugger integration, is there any way to set up ipython so that on errors it will pop up a GUI debugger that shows the line of source code where the error occurred and lets you set break points?

--bb

From f.braennstroem at gmx.de  Thu May  3 16:57:09 2007
From: f.braennstroem at gmx.de (Fabian Braennstroem)
Date: Thu, 3 May 2007 20:57:09 +0000
Subject: [Numpy-discussion] scipy for centos4.4?
References: 
Message-ID: 

Hi Neal,

* Neal Becker wrote:
> Anyone know where to find usable rpms from scipy on centos4.4?
I would like to see some more rpms too ... I am on scientific linux 4.4 :-) Greetings! Fabian From a.schmolck at gmx.net Sat May 5 06:54:52 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: Sat, 05 May 2007 11:54:52 +0100 Subject: [Numpy-discussion] matlab vs. python question In-Reply-To: (Bill Baxter's message of "Sat\, 5 May 2007 14\:44\:02 +0900") References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> Message-ID: "Bill Baxter" writes: > also pulls the numpy namespace into the global one. For debugger > integration, is there any way to set up ipython so that on errors it > will pop up a GUI debugger that shows the line of source code where > the error occured and let you set break points? Sure -- use emacs and ipython.el (I know this is not a very satisfactory answer if you are not an emacs user). 'as From lists.steve at arachnedesign.net Sat May 5 10:31:27 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sat, 5 May 2007 10:31:27 -0400 Subject: [Numpy-discussion] matlab vs. python question In-Reply-To: References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> Message-ID: On May 5, 2007, at 6:54 AM, Alexander Schmolck wrote: > "Bill Baxter" writes: > >> also pulls the numpy namespace into the global one. For debugger >> integration, is there any way to set up ipython so that on errors it >> will pop up a GUI debugger that shows the line of source code where >> the error occured and let you set break points? > > Sure -- use emacs and ipython.el (I know this is not a very > satisfactory > answer if you are not an emacs user). Or maybe we can try to work out some quick how-tos with this: http://www.digitalpeers.com/pythondebugger/ Has anybody used it before? I've had it lying around w/ the intent to use it, but haven't gotten around to it just yet. -steve From charlesr.harris at gmail.com Sat May 5 21:33:03 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 5 May 2007 19:33:03 -0600 Subject: [Numpy-discussion] diagonal Message-ID: Hi All, Just noting an oddity when the diagonal method is called on arrays with ndim > 2 In [1]: a = arange(8).reshape(2,2,2) In [2]: a.diagonal() Out[2]: array([[0, 6], [1, 7]]) The diagonal is taken over the first two indices while the rightmost indices vary the slowest. I would expect things to be the other way around. In particular, it would work better with the arrays of matrices proposed by Tim. I suppose another logical extrapolation from two dimensions would be to take the "diagonal" on all three indices. But the current behaviour strikes me as a bit unexpected. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidnovakovic at gmail.com Sun May 6 08:29:13 2007 From: davidnovakovic at gmail.com (dpn) Date: Sun, 6 May 2007 22:29:13 +1000 Subject: [Numpy-discussion] svd headers and diagonals Message-ID: <59d13e7d0705060529v32b5536bp6bf73708ec538e92@mail.gmail.com> Hi, i have two questions, both loosely related to SVD. 
I've seen this post:
http://thread.gmane.org/gmane.comp.python.numeric.general/4575

>>> u,s,v = numpy.linalg.svd(numpy.array([[4,2],[2,4]])) # symmetric matrix, u == v
>>> u
array([[-0.70710678, -0.70710678],
       [-0.70710678,  0.70710678]])
>>> v
array([[-0.70710678, -0.70710678],
       [-0.70710678,  0.70710678]])
>>> s.shape
(2,)

Since my data matrix is symmetrical, I'd expect USV = X, but I don't get that:

>>> u * s * v
array([[ 3.,  1.],
       [ 3.,  1.]])

matrixmultiply doesn't help either:

>>> from numpy.core import matrixmultiply as mm
>>> mm(u,mm(s,v))
array([ 6.,  2.])

Question 2.

I'm relatively new to linear algebra, so I could be way off here. In applications such as LSA, the dimensions of a matrix are either documents or term identifiers. I noticed in PDL ( http://pdl.perl.org/ ) you can set the headers of rows or columns.

I haven't found a way to do this in numpy, which means when the dimensions get sorted by their singular value, I lose the ordering I may have recorded externally.

Is there a way to store row and column headers?

Cheers

David Novakovic

From matthieu.brucher at gmail.com  Sun May  6 08:45:29 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 6 May 2007 14:45:29 +0200
Subject: [Numpy-discussion] svd headers and diagonals
In-Reply-To: <59d13e7d0705060529v32b5536bp6bf73708ec538e92@mail.gmail.com>
References: <59d13e7d0705060529v32b5536bp6bf73708ec538e92@mail.gmail.com>
Message-ID: 

Don't forget that '*' is element-wise for arrays, use dot instead ;)

Matthieu

2007/5/6, dpn :
>
> Hi,
>
> I have two questions, both loosely related to SVD.
> I've seen this post:
> http://thread.gmane.org/gmane.comp.python.numeric.general/4575
>
> >>> u,s,v = numpy.linalg.svd(numpy.array([[4,2],[2,4]])) # symmetric
> matrix, u == v
> >>> u
> array([[-0.70710678, -0.70710678],
>        [-0.70710678,  0.70710678]])
> >>> v
> array([[-0.70710678, -0.70710678],
>        [-0.70710678,  0.70710678]])
> >>> s.shape
> (2,)
>
> Since my data matrix is symmetrical, I'd expect USV = X, but I don't get
> that:
> >>> u * s * v
> array([[ 3.,  1.],
>        [ 3.,  1.]])
>
> matrixmultiply doesn't help either:
> >>> from numpy.core import matrixmultiply as mm
> >>> mm(u,mm(s,v))
> array([ 6.,  2.])
>
> Question 2.
>
> I'm relatively new to linear algebra, so I could be way off here.
> In applications such as LSA, the dimensions of a matrix are either
> documents or term identifiers. I noticed in PDL ( http://pdl.perl.org/
> ) you can set the headers of rows or columns.
>
> I haven't found a way to do this in numpy, which means when the
> dimensions get sorted by their singular value, I lose the ordering I
> may have recorded externally.
>
> Is there a way to store row and column headers?
>
> Cheers
>
> David Novakovic
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From davidnovakovic at gmail.com  Sun May  6 08:56:17 2007
From: davidnovakovic at gmail.com (dpn)
Date: Sun, 6 May 2007 22:56:17 +1000
Subject: [Numpy-discussion] svd headers and diagonals
In-Reply-To: 
References: <59d13e7d0705060529v32b5536bp6bf73708ec538e92@mail.gmail.com>
Message-ID: <59d13e7d0705060556i17423781p7da024ae67144b59@mail.gmail.com>

Somehow your suggestion made me think about using diag, thanks :)

>>> mm(u,mm(numpy.diag(s),v))
matrix([[ 4.,  2.],
        [ 2.,  4.]])

Now if anyone knows how to set headers for matrices or arrays?

dpn

On 5/6/07, Matthieu Brucher wrote:
> Don't forget that '*' is element-wise for arrays, use dot instead ;)
>
> Matthieu
>
> 2007/5/6, dpn :
> >
> > Hi,
> >
> > I have two questions, both loosely related to SVD.
> > I've seen this post:
> > http://thread.gmane.org/gmane.comp.python.numeric.general/4575
> >
> > >>> u,s,v = numpy.linalg.svd(numpy.array([[4,2],[2,4]])) # symmetric
> > matrix, u == v
> > >>> u
> > array([[-0.70710678, -0.70710678],
> >        [-0.70710678,  0.70710678]])
> > >>> v
> > array([[-0.70710678, -0.70710678],
> >        [-0.70710678,  0.70710678]])
> > >>> s.shape
> > (2,)
> >
> > Since my data matrix is symmetrical, I'd expect USV = X, but I don't get
> that:
> > >>> u * s * v
> > array([[ 3.,  1.],
> >        [ 3.,  1.]])
> >
> > matrixmultiply doesn't help either:
> > >>> from numpy.core import matrixmultiply as mm
> > >>> mm(u,mm(s,v))
> > array([ 6.,  2.])
> >
> > Question 2.
> >
> > I'm relatively new to linear algebra, so I could be way off here.
> > In applications such as LSA, the dimensions of a matrix are either
> > documents or term identifiers. I noticed in PDL ( http://pdl.perl.org/
> > ) you can set the headers of rows or columns.
> >
> > I haven't found a way to do this in numpy, which means when the
> > dimensions get sorted by their singular value, I lose the ordering I
> > may have recorded externally.
> >
> > Is there a way to store row and column headers?
> >
> > Cheers
> >
> > David Novakovic
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion
> >
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
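numpy arrays themselves have no notion of row or column headers, so the usual workaround is to carry the labels outside the array and reorder them with the same index array. A minimal sketch, with made-up label names:

import numpy

a = numpy.array([[4., 2.], [2., 4.]])
row_labels = ['doc1', 'doc2']      # hypothetical document labels
col_labels = ['term1', 'term2']    # hypothetical term labels

# whatever permutation is applied to the rows gets applied to the labels too
order = numpy.argsort(a[:, 0])     # e.g. reorder rows by the first column
a = a[order]
row_labels = [row_labels[i] for i in order]

The same pattern covers sorting dimensions by singular value: compute the permutation once and apply it to both the array and the label list.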
From openopt at ukr.net  Mon May  7 03:45:16 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 07 May 2007 10:45:16 +0300
Subject: [Numpy-discussion] howto check arrays for equal (numpy 1.0b, array_equal is absent)? & best way to copy?
Message-ID: <463ED90C.9040006@ukr.net>

hi all,
I have some troubles with Python 2.5 + matplotlib, so now I'm using Python 2.4.3. I failed to compile both numpy 1.0.1 and 1.0.2 (Mandrake 2007), so currently I'm using 1.0b.
How to check whether numpy.array instances x and y are equal (i.e. all elements are the same)?

Please answer one more question - what is the best (fastest) way to create a copy of a numpy.array instance?
WBR, D.

From wbaxter at gmail.com  Mon May  7 04:22:37 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Mon, 7 May 2007 17:22:37 +0900
Subject: [Numpy-discussion] howto check arrays for equal (numpy 1.0b, array_equal is absent)? & best way to copy?
In-Reply-To: <463ED90C.9040006@ukr.net>
References: <463ED90C.9040006@ukr.net>
Message-ID: 

On 5/7/07, dmitrey wrote:
> hi all,
> I have some troubles with Python 2.5 + matplotlib, so now I'm using
> Python 2.4.3. I failed to compile both numpy 1.0.1 and 1.0.2
> (Mandrake 2007), so currently I'm using 1.0b.
> How to check whether numpy.array instances x and y are equal (i.e. all
> elements are the same)?

numpy.all(A==B) should do it. But if you're dealing with floating point, then numpy.allclose(A,B) is probably a better choice.

> Please answer one more question - what is the best (fastest) way to
> create a copy of a numpy.array instance?
B = A.copy()

Unless you don't mind the "copy" referring to the same memory, in which case just B = A.

--bb

From david at ar.media.kyoto-u.ac.jp  Mon May  7 06:30:37 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 07 May 2007 19:30:37 +0900
Subject: [Numpy-discussion] Should numpy.allclose return True for arrays of different shape ?
Message-ID: <463EFFCD.3050703@ar.media.kyoto-u.ac.jp>

Hi,

I wanted to know if the following behaviour is a bug or intended behaviour:

"""
import numpy as N
N.allclose(N.array([[1., 1.]]), N.array([1.]))
"""

i.e. should allclose return True if the arrays have different shapes?

cheers,

David

From Chris.Barker at noaa.gov  Mon May  7 12:23:43 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon, 07 May 2007 09:23:43 -0700
Subject: [Numpy-discussion] matlab vs. python question
In-Reply-To: 
References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info>
Message-ID: <463F528F.8000109@noaa.gov>

Bill Baxter wrote:
> Of course even better would be if ipython were in a GUI, and
> docstrings could just be proactively prefetched and displayed in
> another pane while I type, or argument lists could be popped up like
> in PyCrust.

I think someone was working on iPython-PyCrust integration at some point -- it would be nice!

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From robert.kern at gmail.com  Mon May  7 12:36:19 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 07 May 2007 11:36:19 -0500
Subject: [Numpy-discussion] Should numpy.allclose return True for arrays of different shape ?
In-Reply-To: <463EFFCD.3050703@ar.media.kyoto-u.ac.jp>
References: <463EFFCD.3050703@ar.media.kyoto-u.ac.jp>
Message-ID: <463F5583.7080802@gmail.com>

David Cournapeau wrote:
> Hi,
>
> I wanted to know if the following behaviour is a bug or intended
> behaviour:
>
> """
> import numpy as N
> N.allclose(N.array([[1., 1.]]), N.array([1.]))
> """
>
> i.e. should allclose return True if the arrays have different shapes?

If they are broadcastable to each other, then yes:

>>> from numpy import *
>>> array([[1., 1.]]) + array([1.])
array([[ 2.,  2.]])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco

From numpy at mspacek.mm.st  Mon May  7 19:43:40 2007
From: numpy at mspacek.mm.st (Martin Spacek)
Date: Mon, 07 May 2007 16:43:40 -0700
Subject: [Numpy-discussion] Searching object arrays
Message-ID: <463FB9AC.60007@mspacek.mm.st>

I want to find the indices of all the None objects in an object array:

>> import numpy as np
>> i = np.array([0, 1, 2, None, 3, 4, None])
>> np.where(i == None)
()

Using == doesn't work the same way on object arrays as it does on, say, an array of int32. Any suggestions? Do I have to use a loop or list comprehension, like this?
>> [ ii for (ii, id) in enumerate(i) if id == None ]

Thanks,

Martin

From tim.hochberg at ieee.org  Mon May  7 19:55:01 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Mon, 7 May 2007 16:55:01 -0700
Subject: [Numpy-discussion] Searching object arrays
In-Reply-To: <463FB9AC.60007@mspacek.mm.st>
References: <463FB9AC.60007@mspacek.mm.st>
Message-ID: 

On 5/7/07, Martin Spacek wrote:
>
> I want to find the indices of all the None objects in an object array:
>
> >> import numpy as np
> >> i = np.array([0, 1, 2, None, 3, 4, None])
> >> np.where(i == None)
> ()
>
> Using == doesn't work the same way on object arrays as it does on, say,
> an array of int32. Any suggestions?

Using np.equal instead of == seems to work:

>>> i = np.array([0,1,2,None,3,4,None])
>>> i
array([0, 1, 2, None, 3, 4, None], dtype=object)
>>> np.where(i == None)
()
>>> i == None
False
>>> np.where(np.equal(i, None))
(array([3, 6]),)

I suspect the issue isn't actually object arrays, but rather some optimization related to None, that's causing problems.

-tim

> Do I have to use a loop or list
> comprehension, like this?
>
> >> [ ii for (ii, id) in enumerate(i) if id == None ]
>
> Thanks,
>
> Martin
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

-- 
//=][=\\

tim.hochberg at ieee.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From numpy at mspacek.mm.st  Mon May  7 20:25:40 2007
From: numpy at mspacek.mm.st (Martin Spacek)
Date: Mon, 07 May 2007 17:25:40 -0700
Subject: [Numpy-discussion] Searching object arrays
In-Reply-To: 
References: <463FB9AC.60007@mspacek.mm.st>
Message-ID: <463FC384.9060207@mspacek.mm.st>

Great, thanks Tim!

Martin

Timothy Hochberg wrote:
>
> Using np.equal instead of == seems to work:
>
> >>> i = np.array([0,1,2,None,3,4,None])
> >>> i
> array([0, 1, 2, None, 3, 4, None], dtype=object)
> >>> np.where(i == None)
> ()
> >>> i == None
> False
> >>> np.where(np.equal(i, None))
> (array([3, 6]),)
>
> I suspect the issue isn't actually object arrays, but rather some
> optimization related to None, that's causing problems.
>
> -tim
>

From david at ar.media.kyoto-u.ac.jp  Mon May  7 23:24:54 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 08 May 2007 12:24:54 +0900
Subject: [Numpy-discussion] Should numpy.allclose return True for arrays of different shape ?
In-Reply-To: <463F5583.7080802@gmail.com>
References: <463EFFCD.3050703@ar.media.kyoto-u.ac.jp> <463F5583.7080802@gmail.com>
Message-ID: <463FED86.4070109@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> David Cournapeau wrote:
>> Hi,
>>
>> I wanted to know if the following behaviour is a bug or intended
>> behaviour:
>>
>> """
>> import numpy as N
>> N.allclose(N.array([[1., 1.]]), N.array([1.]))
>> """
>>
>> i.e. should allclose return True if the arrays have different shapes?
>
> If they are broadcastable to each other, then yes:
OK, I assumed it returns True because the second one is broadcastable to the first one, but this is not intuitive to me. What is the rationale for it? (Not that I want to criticize it, just curious.)

David

From robert.kern at gmail.com  Mon May  7 23:36:15 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 07 May 2007 22:36:15 -0500
Subject: [Numpy-discussion] Should numpy.allclose return True for arrays of different shape ?
In-Reply-To: <463FED86.4070109@ar.media.kyoto-u.ac.jp>
References: <463EFFCD.3050703@ar.media.kyoto-u.ac.jp> <463F5583.7080802@gmail.com> <463FED86.4070109@ar.media.kyoto-u.ac.jp>
Message-ID: <463FF02F.4070106@gmail.com>

David Cournapeau wrote:
> Robert Kern wrote:
>> David Cournapeau wrote:
>>> Hi,
>>>
>>> I wanted to know if the following behaviour is a bug or intended
>>> behaviour:
>>>
>>> """
>>> import numpy as N
>>> N.allclose(N.array([[1., 1.]]), N.array([1.]))
>>> """
>>>
>>> i.e. should allclose return True if the arrays have different shapes?
>> If they are broadcastable to each other, then yes:
> OK, I assumed it returns True because the second one is broadcastable to
> the first one, but this is not intuitive to me. What is the rationale
> for it? (Not that I want to criticize it, just curious.)

The primary use case is this:

allclose(some_array, some_scalar)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco

From numpy at mspacek.mm.st  Tue May  8 02:30:41 2007
From: numpy at mspacek.mm.st (Martin Spacek)
Date: Mon, 07 May 2007 23:30:41 -0700
Subject: [Numpy-discussion] Type conversion weirdness in numpy-1.0.2.win32-py2.4 binary
Message-ID: <1178605841.7663.1188631741@webmail.messagingengine.com>

In linux and win32 (numpy 1.0.1 release compiled from source, and 1.0.3dev3726 respectively), I get the following normal behaviour:

>>> import numpy as np
>>> np.array([1.0, 2.0, 3.0, 4.0])
array([ 1.,  2.,  3.,  4.])
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
array([ 1,  2,  3,  4])

But on three separate Windows machines running the binary numpy-1.0.2.win32-py2.4 install, I get this:

>>> import numpy as np
>>> np.array([1.0, 2.0, 3.0, 4.0])
array([ 1.,  2.,  3.,  4.])
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
31195176
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
30137880
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
31186080
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
31186080
>>> np.float64(np.array([1.0, 2.0, 3.0, 4.0]))
2.1359481850412033e-314
>>> np.int32(np.array([1, 2, 3, 4]))
28729424

One of them was running the numpy-1.0.1 binary, which showed normal behaviour, until I upgraded it to 1.0.2.

Tests come out fine:

>>> np.test()
Found 5 tests for numpy.distutils.misc_util
Found 31 tests for numpy.core.numerictypes
Found 32 tests for numpy.linalg
Found 4 tests for numpy.lib.index_tricks
Found 4 tests for numpy.core.scalarmath
Found 9 tests for numpy.lib.arraysetops
Found 42 tests for numpy.lib.type_check
Found 198 tests for numpy.core.multiarray
Found 3 tests for numpy.lib.getlimits
Found 36 tests for numpy.core.ma
Found 2 tests for numpy.lib.polynomial
Found 1 tests for numpy.fft.fftpack
Found 13 tests for numpy.lib.twodim_base
Found 10 tests for numpy.core.defmatrix
Found 13 tests for numpy.core.umath
Found 1 tests for numpy.lib.ufunclike
Found 4 tests for numpy.ctypeslib
Found 43 tests for numpy.lib.function_base
Found 9 tests for numpy.core.records
Found 59 tests for numpy.core.numeric
Found 3 tests for numpy.fft.helper
Found 48 tests for numpy.lib.shape_base
Found 0 tests for __main__
...............................................................................................................
...............................................................................................................
...............................................................................................................
............................................................................................................... ............................................................................................................... ............... ---------------------------------------------------------------------- Ran 570 tests in 1.125s OK >>> numpy.version.version '1.0.2' I'm running python 2.4.4 in all cases (except 2.4.3 on linux). Is this a problem with the numpy-1.0.2.win32-py2.4 binary? I could try building a new 1.0.2 binary, if that would help. Martin From numpy at mspacek.mm.st Tue May 8 05:15:17 2007 From: numpy at mspacek.mm.st (Martin Spacek) Date: Tue, 08 May 2007 02:15:17 -0700 Subject: [Numpy-discussion] Type conversion weirdness in numpy-1.0.2.win32-py2.4 binary In-Reply-To: <1178605841.7663.1188631741@webmail.messagingengine.com> References: <1178605841.7663.1188631741@webmail.messagingengine.com> Message-ID: <46403FA5.6040606@mspacek.mm.st> I just tried building the 1.0.2 release, and I still get the type conversion problem. Building from 1.0.3dev3736 makes the problem disappear. Was this an issue that was fixed recently? Martin Martin Spacek wrote: > In linux and win32 (numpy 1.0.1 release compiled from source, and > 1.0.3dev3726 respectively), I get the following normal behaviour: > >>>> import numpy as np >>>> np.array([1.0, 2.0, 3.0, 4.0]) > array([ 1., 2., 3., 4.]) >>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) > array([ 1, 2, 3, 4]) > > But on three separate Windows machines running the binary > numpy-1.0.2.win32-py2.4 install, I get this: > >>>> import numpy as np >>>> np.array([1.0, 2.0, 3.0, 4.0]) > array([ 1., 2., 3., 4.]) >>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) > 31195176 >>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) > 30137880 >>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) > 31186080 >>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) > 31186080 >>>> np.float64(np.array([1.0, 2.0, 3.0, 4.0])) > 2.1359481850412033e-314 >>>> np.int32(np.array([1, 2, 3, 4])) > 28729424 > > One of them was running the numpy-1.0.1 binary, which showed normal > behaviour, until I upgraded it to 1.0.2. > > Test come out fine: > >>>> np.test() > Found 5 tests for numpy.distutils.misc_util > Found 31 tests for numpy.core.numerictypes > Found 32 tests for numpy.linalg > Found 4 tests for numpy.lib.index_tricks > Found 4 tests for numpy.core.scalarmath > Found 9 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 198 tests for numpy.core.multiarray > Found 3 tests for numpy.lib.getlimits > Found 36 tests for numpy.core.ma > Found 2 tests for numpy.lib.polynomial > Found 1 tests for numpy.fft.fftpack > Found 13 tests for numpy.lib.twodim_base > Found 10 tests for numpy.core.defmatrix > Found 13 tests for numpy.core.umath > Found 1 tests for numpy.lib.ufunclike > Found 4 tests for numpy.ctypeslib > Found 43 tests for numpy.lib.function_base > Found 9 tests for numpy.core.records > Found 59 tests for numpy.core.numeric > Found 3 tests for numpy.fft.helper > Found 48 tests for numpy.lib.shape_base > Found 0 tests for __main__ > ............................................................................................................... > ............................................................................................................... > ............................................................................................................... 
> ...............................................................................................................
> ...............................................................................................................
> ...............
> ----------------------------------------------------------------------
> Ran 570 tests in 1.125s
>
> OK
>
>>>> numpy.version.version
> '1.0.2'
>
> I'm running python 2.4.4 in all cases (except 2.4.3 on linux). Is this a
> problem with the numpy-1.0.2.win32-py2.4 binary? I could try building a
> new 1.0.2 binary, if that would help.
>
> Martin
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From giorgio.luciano at chimica.unige.it  Tue May  8 06:18:56 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Tue, 08 May 2007 12:18:56 +0200
Subject: [Numpy-discussion] matlab vs. python question
In-Reply-To: <463F528F.8000109@noaa.gov>
References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> <463F528F.8000109@noaa.gov>
Message-ID: <46404E90.30102@chimica.unige.it>

All good points stated here. Unfortunately I'm not able to contribute, and as stated also in another thread, scipy/python was not born to be a clone of matlab (uhmm, well, probably matplotlib at first was). What would be the "killer" distro to replace matlab?

An updated distro with numpy/scipy/matplotlib, all in one (also portable can be good ;)
A good workspace (with an interactive button), just to not get figures freezed
Copy and paste from spreadsheet, with a small window to look at your variables

After that there would be no more reason to use matlab. If any guru / generous programmer is listening ;)...

My two cents

Giorgio

From gael.varoquaux at normalesup.org  Tue May  8 06:23:06 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 8 May 2007 12:23:06 +0200
Subject: [Numpy-discussion] matlab vs. python question
In-Reply-To: <46404E90.30102@chimica.unige.it>
References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> <463F528F.8000109@noaa.gov> <46404E90.30102@chimica.unige.it>
Message-ID: <20070508102301.GI17024@clipper.ens.fr>

On Tue, May 08, 2007 at 12:18:56PM +0200, Giorgio Luciano wrote:
> A good workspace (with an interactive button), just to not get figures
> freezed

I am not sure what you mean by "figures freezed", but I would like to check that you are aware of ipython, and its "-pylab" switch that allows a nice interactive workflow from the command line with figures.
Apparently my > netcdf build was just i386 - although I did nothing special. A standard netCDF build on the Mac is single-architecture. I haven't seen instructions yet for making a universal binary on the Mac. MacPython builds its extensions as universal binaries, so the linker complains about the missing netCDF code for the other processor. However, if all you care about is using netCDF on the machine that you build on, you can safely ignore that warning. The Python extension module should work fine. Note also that pynetcdf is not based on the latest release of the ScientificPython code, which contains various bug fixes. You might be better off using ScientificPython 2.7, which works pretty well with NumPy these days. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Centre de Biophysique Mol?culaire, CNRS Orl?ans Synchrotron Soleil - Division Exp?riences Saint Aubin - BP 48 91192 Gif sur Yvette Cedex, France Tel. +33-1 69 35 97 15 E-Mail: hinsen at cnrs-orleans.fr --------------------------------------------------------------------- From giorgio.luciano at chimica.unige.it Tue May 8 07:10:47 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Tue, 08 May 2007 13:10:47 +0200 Subject: [Numpy-discussion] matlab vs. python question In-Reply-To: <20070508102301.GI17024@clipper.ens.fr> References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> <463F528F.8000109@noaa.gov> <46404E90.30102@chimica.unige.it> <20070508102301.GI17024@clipper.ens.fr> Message-ID: <46405AB7.1060701@chimica.unige.it> Thanks Gael, I dont' use Ipython, but Ide in interactive mode and everything is fine (changed matplotlibrc and then se switch on in configuration). I just meant hat if you dont' know it or dont' pay attention to start with the correct shortcut etc. etc. you dont' have an "immediate" interactive environment. I will try ipython too ;) Giorgio From pgmdevlist at gmail.com Tue May 8 10:27:24 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 8 May 2007 10:27:24 -0400 Subject: [Numpy-discussion] umath.maximum.reduce Message-ID: <200705081027.24373.pgmdevlist@gmail.com> All, On a 2D array, numpy.core.umath.maximum.reduce returns a 1D array (the default axis is 0). An axis=None is not recognized as a valid argument for numpy.core.umath.maximum, unfortunately... Is this the wanted behavior ? Thanks a lot in advance P. From peridot.faceted at gmail.com Tue May 8 11:51:23 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 8 May 2007 11:51:23 -0400 Subject: [Numpy-discussion] matlab vs. python question In-Reply-To: <20070508102301.GI17024@clipper.ens.fr> References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> <463F528F.8000109@noaa.gov> <46404E90.30102@chimica.unige.it> <20070508102301.GI17024@clipper.ens.fr> Message-ID: On 08/05/07, Gael Varoquaux wrote: > On Tue, May 08, 2007 at 12:18:56PM +0200, Giorgio Luciano wrote: > > A good workspace (with an interactive button) just to not get figures > > freezed > > I am not sure what you mean by "figures freezed" but I would like to > check that you are aware of ipython, and its "-pylab" switch that allows > a nice interactive workflow from the command line with figures. 
> Under Windows you most often have to manually create a shortcut to add
> the switch, unfortunately.

Unfortunately this seems to make it impossible for me to use ctrl-c or any other method to interrupt an interminable computation (on MPL 0.90.0). Very frustrating.

Anne

From gael.varoquaux at normalesup.org  Tue May  8 11:54:23 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 8 May 2007 17:54:23 +0200
Subject: [Numpy-discussion] matlab vs. python question
In-Reply-To: 
References: <1e2af89e0704280409u3f6a7d99j2a7283da373dced3@mail.gmail.com> <1177783414.369496.115660@y80g2000hsf.googlegroups.com> <200704282037.39721.benjamin@decideur.info> <463F528F.8000109@noaa.gov> <46404E90.30102@chimica.unige.it> <20070508102301.GI17024@clipper.ens.fr>
Message-ID: <20070508155423.GL17024@clipper.ens.fr>

On Tue, May 08, 2007 at 11:51:23AM -0400, Anne Archibald wrote:
> Unfortunately this seems to make it impossible for me to use ctrl-c or
> any other method to interrupt an interminable computation (on MPL
> 0.90.0). Very frustrating.

AFAIK this is fixed in the latest release of ipython (kudos to the development team!).

Gaël

From oliphant.travis at ieee.org  Wed May  9 02:26:34 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 09 May 2007 00:26:34 -0600
Subject: [Numpy-discussion] Type conversion weirdness in numpy-1.0.2.win32-py2.4 binary
In-Reply-To: <46403FA5.6040606@mspacek.mm.st>
References: <1178605841.7663.1188631741@webmail.messagingengine.com> <46403FA5.6040606@mspacek.mm.st>
Message-ID: <4641699A.50300@ieee.org>

Martin Spacek wrote:
> I just tried building the 1.0.2 release, and I still get the type
> conversion problem. Building from 1.0.3dev3736 makes the problem
> disappear. Was this an issue that was fixed recently?

Yes.

-Travis

From bernhard.voigt at gmail.com  Wed May  9 08:54:37 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Wed, 9 May 2007 14:54:37 +0200
Subject: [Numpy-discussion] subclassing ndarray and recarray
Message-ID: <21a270aa0705090554v2a2a6e52q105896c6d859586a@mail.gmail.com>

I'm trying to subclass ndarray or recarray to build a record array that has a dictionary with a mapping of keys to array indexes and vice versa. I came across the problem that, depending on the field access method, I get different types back:

# a is a subclass of record array
>> print type(a)
>> print type(a['x'])
>> print type(a.x)

I guess this has something to do with the call to the view method in the recarray implementation, but I can't figure out how to solve this. Hence I switched back to subclassing ndarray and implemented the field access via a.x by myself.

In order to update the mapping of the array instance when slicing is used, I would like to get the selected indexes in the __array_finalize__ method. Is there a way to access the selected indexes, or do I have to overwrite the __getitem__ method to get the indexes from the slice object? This could be rather complicated due to the many different slice specifications.
Here's an incomplete code example from my subclass; maybe this helps to understand the problem:

def __array_finalize__(self, obj):
    if not hasattr(self, 'mapping'):
        # create a mapping for the selected indexes from the mapping of
        # the original array given by obj
        # reverse_mapping maps indexes to keys; __selected_indexes is the
        # field I would like to have
        self.reverse_mapping = dict((obj.reverse_mapping[i], i)
                                    for i in self.__selected_indexes)
        self.mapping = dict((k, i) for i, k in self.reverse_mapping.iteritems())

def __getitem__(self, key):
    # fill the __selected_indexes field
    self.__selected_indexes = foo(key)
    # pass the selection to ndarray - calls __array_finalize__, which is
    # where the mapping is performed
    return numpy.ndarray.__getitem__(self, key)

Thanks for help! Bernhard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mbeachy at gmail.com  Wed May  9 14:04:34 2007
From: mbeachy at gmail.com (Mike Beachy)
Date: Wed, 9 May 2007 14:04:34 -0400
Subject: [Numpy-discussion] numpy.distutils
Message-ID: <30eaa4540705091104g21c3849djb8991261011e53f2@mail.gmail.com>

Hi all -

I've been trying to set up configuration files to standardize a local numpy 1.0.2 installation and have run into a problem with the intel compiler package. If I try

python setup.py config_fc --fcompiler=intel build_ext

(from the top level numpy-1.0.2 directory) I get the following failures (with ifort 8.1):

ifort: Command line warning: extension 'M' not supported ignored in option '-x'
ifort: Command line error: Unrecognized keyword 'SSE2' for option '-arch'
ifort: Command line warning: extension 'M' not supported ignored in option '-x'
ifort: Command line error: Unrecognized keyword 'SSE2' for option '-arch'

I assumed that the 'noarch' option to the config_fc command would get me around this, but it does not. This seems to be because the get_flags_linker_so method in the numpy.distutils.fcompiler.intel.IntelFCompiler class always extends the options with the result of self.get_flags_arch(). Removing that call fixes the problem.

I'm asking about this here rather than simply filing a bug because I wonder if there's a way of overriding the get_flags_linker_so method that I've simply overlooked. I'm not sure that I've discovered all the ways to modify a numpy installation yet. (So far I've found site.cfg, setup.cfg, and Setup. Am I missing anything? Also, is there a recommended documentation source for this stuff?)

Thanks,

Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tom.denniston at alum.dartmouth.org  Wed May  9 15:53:10 2007
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Wed, 9 May 2007 14:53:10 -0500
Subject: [Numpy-discussion] Memory Error in Search Sorted
Message-ID: 

So searchsorted is supposed to take a list. The arguments I am using are not correct. But it still seems strange to me that these incorrect params trigger a memory error. Is a check simply missing, or is this possibly a sign of a larger bug? I can simply correct my params, so this is no big deal, but I just wanted to report it as it seemed like a subtle bug.
--Tom

In [2]: numpy.searchsorted(numpy.array(['1', '2', '3', '4']), [1])
Out[2]: array([0])

In [3]: numpy.searchsorted(numpy.array(['1', '2', '3', '4']), 1)
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)

/code/src/ in ()

/lib/python2.5/site-packages/numpy/core/fromnumeric.py in searchsorted(a, v, side)
    378     except AttributeError:
    379         return _wrapit(a, 'searchsorted', v, side)
--> 380     return searchsorted(v, side)
    381
    382 def resize(a, new_shape):

MemoryError:

From robert.kern at gmail.com  Wed May  9 16:00:33 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 09 May 2007 15:00:33 -0500
Subject: [Numpy-discussion] Memory Error in Search Sorted
In-Reply-To: 
References: 
Message-ID: <46422861.2070401@gmail.com>

Tom Denniston wrote:
> So searchsorted is supposed to take a list. The arguments I am using
> are not correct.

No, it will take a scalar value to search for, too.

In [18]: numpy.searchsorted(numpy.array([1, 2, 3, 4]), 1)
Out[18]: 0

In [19]: numpy.searchsorted(numpy.array(['1', '2', '3', '4']), '1')
Out[19]: 0

> But it still seems strange to me that these
> incorrect params trigger a memory error. Is a check simply missing, or
> is this possibly a sign of a larger bug? I can simply correct my
> params, so this is no big deal, but I just wanted to report it as it
> seemed like a subtle bug.

It is a bug, but the source is probably the array of strings, not the fact that the search value is a scalar.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco

From pgmdevlist at gmail.com  Wed May  9 16:29:30 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 9 May 2007 16:29:30 -0400
Subject: [Numpy-discussion] subclassing ndarray and recarray
In-Reply-To: <21a270aa0705090554v2a2a6e52q105896c6d859586a@mail.gmail.com>
References: <21a270aa0705090554v2a2a6e52q105896c6d859586a@mail.gmail.com>
Message-ID: <200705091629.32486.pgmdevlist@gmail.com>

On Wednesday 09 May 2007 08:54:37 Bernhard Voigt wrote:

> I'm trying to subclass ndarray or recarray to build a record array that has
> a dictionary with a mapping of keys to array indexes and vice versa.

Bernhard,
Could you send me the rest of your code? I'd like to test a couple of things before committing a proper answer.

From erin.sheldon at gmail.com  Thu May 10 00:57:24 2007
From: erin.sheldon at gmail.com (Erin Sheldon)
Date: Thu, 10 May 2007 00:57:24 -0400
Subject: [Numpy-discussion] scipy for centos4.4?
In-Reply-To: 
References: 
Message-ID: <331116dc0705092157o747bc38cpaa21d40ce09ccd3c@mail.gmail.com>

We also have a number of centos boxes here at NYU astronomy, and we've been in dependency hell trying to get everything to work without failing tests (mainly scipy more than numpy). Any help greatly appreciated.

Erin

On 5/3/07, Fabian Braennstroem wrote:
> Hi Neal,
>
> * Neal Becker wrote:
> > Anyone know where to find usable rpms from scipy on centos4.4?
>
> I would like to see some more rpms too ... I am on
> scientific linux 4.4 :-)
>
> Greetings!
> Fabian
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From bernhard.voigt at gmail.com  Thu May 10 07:18:20 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Thu, 10 May 2007 13:18:20 +0200
Subject: [Numpy-discussion] 6 new messages in 4 topics - abridged
In-Reply-To: <4642d69a.602a8535.29c6.fffff907SMTPIN_ADDED@mx.google.com>
References: <4642d69a.602a8535.29c6.fffff907SMTPIN_ADDED@mx.google.com>
Message-ID: <21a270aa0705100418p2318c555j45569d41d604393c@mail.gmail.com>

Dear Pierre,

I've attached the arraydict implementation file. You can run it and take a look at the following example on your own:

In [25]: run arraydict.py

creates an arraydict with 10 elements in the current scope. Keys are the index numbers of the items.

In [26]: a
Out[26]: arraydict([(-0.51430764775177518, 0.17962503931139237),
       (-1.4037792804089142, 0.37263515556827359),
       (1.9048324627948983, 1.4155903391279885),
       (0.077070370958404841, -1.4284963747790793),
       (0.20177037521016888, 0.25023158062312373),
       (0.88821059412119174, 0.29415143595187959),
       (0.46224769848661729, -0.80670670514715426),
       (-0.079049832245684654, -2.5738917233959899),
       (-0.562854982548048, 2.0708323443154897),
       (-2.4176013660591691, 0.36401660943002978)],
      dtype=[('x', '<f8'), ('y', '<f8')])

In [28]: foo = a[a.x > 0]

In [29]: foo
Out[29]: arraydict([(1.9048324627948983, 1.4155903391279885),
       (0.077070370958404841, -1.4284963747790793),
       (0.20177037521016888, 0.25023158062312373),
       (0.88821059412119174, 0.29415143595187959),
       (0.46224769848661729, -0.80670670514715426)],
      dtype=[('x', '<f8'), ('y', '<f8')])

The keys are the indexes of the items with x > 0. A new arraydict is created with the selected items and keys.

Here's the stuff I couldn't figure out how to deal with, making selections on slices, lists etc...

In [31]: bar = a[1:6:2]

The selection is correct, because it's passed to ndarray.__getitem__:

In [32]: bar
Out[32]: arraydict([(-1.4037792804089142, 0.37263515556827359),
       (0.077070370958404841, -1.4284963747790793),
       (0.88821059412119174, 0.29415143595187959)],
      dtype=[('x', '<f8'), ('y', '<f8')])

[...]

On 5/9/07, Pierre GM wrote:
> On Wednesday 09 May 2007 08:54:37 Bernhard Voigt wrote:
> > I'm trying to subclass ndarray or recarray to build a record array that has
> > a dictionary with a mapping of keys to array indexes and vice versa.
>
> Bernhard,
> Could you send me the rest of your code? I'd like to test a couple of things
> before committing a proper answer.
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: arraydict.py
Type: text/x-python
Size: 6119 bytes
Desc: not available
URL: 

From bernhard.voigt at gmail.com  Thu May 10 07:21:29 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Thu, 10 May 2007 13:21:29 +0200
Subject: [Numpy-discussion] subclassing ndarray and recarray
Message-ID: <21a270aa0705100421l3c9bbfccu8616e39b1b88c954@mail.gmail.com>

Sorry for the spam, but I didn't modify the subject of the previous mail. Here's the message again, but now with the right subject:

Dear Pierre,

I've attached the arraydict implementation file. You can run it and take a look at the following example on your own:

In [25]: run arraydict.py

creates an arraydict with 10 elements in the current scope.
Keys are the index numbers of the items.

In [26]: a
Out[26]: arraydict([(-0.51430764775177518, 0.17962503931139237),
       (-1.4037792804089142, 0.37263515556827359),
       (1.9048324627948983, 1.4155903391279885),
       (0.077070370958404841, -1.4284963747790793),
       (0.20177037521016888, 0.25023158062312373),
       (0.88821059412119174, 0.29415143595187959),
       (0.46224769848661729, -0.80670670514715426),
       (-0.079049832245684654, -2.5738917233959899),
       (-0.562854982548048, 2.0708323443154897),
       (-2.4176013660591691, 0.36401660943002978)],
      dtype=[('x', '<f8'), ('y', '<f8')])

In [28]: foo = a[a.x > 0]

In [29]: foo
Out[29]: arraydict([(1.9048324627948983, 1.4155903391279885),
       (0.077070370958404841, -1.4284963747790793),
       (0.20177037521016888, 0.25023158062312373),
       (0.88821059412119174, 0.29415143595187959),
       (0.46224769848661729, -0.80670670514715426)],
      dtype=[('x', '<f8'), ('y', '<f8')])

The keys are the indexes of the items with x > 0. A new arraydict is created with the selected items and keys.

Here's the stuff I couldn't figure out how to deal with, making selections on slices, lists etc...

In [31]: bar = a[1:6:2]

The selection is correct, because it's passed to ndarray.__getitem__:

In [32]: bar
Out[32]: arraydict([(-1.4037792804089142, 0.37263515556827359),
       (0.077070370958404841, -1.4284963747790793),
       (0.88821059412119174, 0.29415143595187959)],
      dtype=[('x', '<f8'), ('y', '<f8')])

[...]

On 5/9/07, Pierre GM wrote:
> On Wednesday 09 May 2007 08:54:37 Bernhard Voigt wrote:
> > I'm trying to subclass ndarray or recarray to build a record array that has
> > a dictionary with a mapping of keys to array indexes and vice versa.
>
> Bernhard,
> Could you send me the rest of your code? I'd like to test a couple of things
> before committing a proper answer.
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: arraydict.py
Type: text/x-python
Size: 6119 bytes
Desc: not available
URL: 

From perry at stsci.edu  Thu May 10 15:27:58 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 May 2007 15:27:58 -0400
Subject: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available
Message-ID: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>

I have updated the "Using Python for Interactive Data Analysis" tutorial to use numpy instead of numarray (finally!). There are further improvements I would like to make in its organization and formatting (in the process including suggestions others have made to that end), but I'd rather get this version out, which I believe addresses all the content changes needed to make it useful for numpy, without delaying it any further.

The tutorial, as well as other supporting material and information, can be obtained from:

http://www.scipy.org/wikis/topical_software/Tutorial

I'm sure errors remain; please let me know of any you find.

Perry

From oliphant at ee.byu.edu  Thu May 10 17:51:28 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 10 May 2007 15:51:28 -0600
Subject: [Numpy-discussion] NumPy 1.0.3 release next week
Message-ID: <464393E0.3030900@ee.byu.edu>

Hi all,

I'd like to release NumPy 1.0.3 next week (on Tuesday) along with a new release of SciPy. Please let me know of changes that you are planning on making before then.
From pgmdevlist at gmail.com Thu May 10 20:10:45 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 10 May 2007 20:10:45 -0400
Subject: [Numpy-discussion] subclassing ndarray and recarray
In-Reply-To: <21a270aa0705100421l3c9bbfccu8616e39b1b88c954@mail.gmail.com>
References: <21a270aa0705100421l3c9bbfccu8616e39b1b88c954@mail.gmail.com>
Message-ID: <200705102010.46574.pgmdevlist@gmail.com>

Bernhard,
Looks like you have to modify your __getitem__ and __getslice__ methods. The
following seems to work in simple cases.

The numpy.array in front of numpy.ndarray.__getxxx__ is to transform the
'numpy.void' records back into regular arrays.

def __getitem__(self, key):
    if isinstance(key, arraydict) and key.dtype == bool:
        # The key is a condition
        return self.select(self.keys(key))
    elif isinstance(key, str):
        # The key is a string, therefore a field
        return numpy.ndarray.__getitem__(self, key)
    else:
        # The key is a sequence or a scalar
        obj = numpy.array(numpy.ndarray.__getitem__(self, key)).view(type(self))
        key = numpy.array(key, copy=False, ndmin=1)
        # _mapping won't like dealing w/ negative keys: add n to them
        key[key<0] += len(self)
        obj.__make_mapping(key)
        return obj

def __getslice__(self, i, j):
    obj = numpy.array(numpy.ndarray.__getslice__(self, i, j)).view(type(self))
    obj.__make_mapping(range(i, j+1))
    return obj
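Pierre's methods refer to the rest of the arraydict class (select, keys,
__make_mapping), which lives in the attachment. A self-contained sketch of
the same idea -- carrying a key-to-row mapping through indexing -- could
look like the following; KeyedArray and its attributes are made-up names,
not part of the posted code:

    import numpy as np

    class KeyedArray(np.ndarray):
        """Hypothetical minimal subclass: rows carry user-visible keys."""

        def __new__(cls, data, keys=None):
            obj = np.asarray(data).view(cls)
            obj.keys = list(keys) if keys is not None else list(range(len(obj)))
            return obj

        def __array_finalize__(self, obj):
            # Called for views and new-from-template arrays as well.
            self.keys = getattr(obj, 'keys', None)

        def __getitem__(self, key):
            result = super(KeyedArray, self).__getitem__(key)
            if isinstance(key, str):
                return result        # field access: the row set is unchanged
            if isinstance(result, KeyedArray) and self.keys is not None:
                # Work out which rows survived and carry their keys along.
                idx = np.atleast_1d(np.arange(len(self))[key])
                result.keys = [self.keys[i] for i in idx]
            return result

    a = KeyedArray(np.arange(10.0) ** 2, keys=list('abcdefghij'))
    b = a[2:8:2]
    print(list(b))       # values of rows 2, 4, 6
    print(b.keys)        # ['c', 'e', 'g']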
From pearu at cens.ioc.ee Fri May 11 04:35:59 2007
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Fri, 11 May 2007 11:35:59 +0300
Subject: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <464393E0.3030900@ee.byu.edu>
References: <464393E0.3030900@ee.byu.edu>
Message-ID: <46442AEF.2010908@cens.ioc.ee>

Hi,

I am going to work on g3 f2py starting next week, full time --- I am now
hired by Simula Research Laboratory, which will support my work on f2py.
I think I'll spend the next few months on g3 f2py.

However, this work should not affect the numpy release that you are
planning. I'll keep all major changes on my computer until numpy 1.0.3 is
released. Though, I might commit some trivial fixes as I run into them.

Regards,
Pearu

Travis Oliphant wrote:
> Hi all,
>
> I'd like to relase NumPy 1.0.3 next week (on Tuesday) along with a
> new release of SciPy. Please let me know of changes that you are
> planning on making before then.
>
> Best,
>
> -Travis
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From gruben at bigpond.net.au Fri May 11 09:56:39 2007
From: gruben at bigpond.net.au (Gary Ruben)
Date: Fri, 11 May 2007 23:56:39 +1000
Subject: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available
In-Reply-To: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>
References: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>
Message-ID: <46447617.30704@bigpond.net.au>

This is great Perry,

I think this will help to convince our department's astronomer(s) to
learn and maybe use Python for teaching. By the way, if you do a global
search for "numarray" in your document, you'll pick up a few pieces of
unchanged text and code.

Gary R.

Perry Greenfield wrote:
> I have updated the "Using Python for Interactive Data Analysis"
> tutorial to use numpy instead of numarray (finally!). There are
> further improvements I would like to make in its organization and
> formatting (in the process including suggestions others have made to
> that end), but I'd rather get this version out, which I believe
> addresses all the content changes needed to make it useful for numpy,
> without delaying it any further.
>
> The tutorial, as well as other supporting material and information,
> can be obtained from:
>
> http://www.scipy.org/wikis/topical_software/Tutorial
>
> I'm sure errors remain; please let me know of any you find.
>
> Perry

From gnurser at googlemail.com Fri May 11 10:04:32 2007
From: gnurser at googlemail.com (George Nurser)
Date: Fri, 11 May 2007 15:04:32 +0100
Subject: [Numpy-discussion] interrupted svn updates
Message-ID: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com>

I'm trying to update numpy from svn.
My first try was very slow, but eventually produced 72 updated files;
it gave this message at the end:
svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response
body: connection was closed by server. (http://svn.scipy.org)

The 2nd try produced 4 more and then died with the above error.

The 3rd and 4th just died with the above error.

Any ideas -- is the problem here or there?

Many thanks, George Nurser.

From chanley at stsci.edu Fri May 11 10:06:32 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Fri, 11 May 2007 10:06:32 -0400
Subject: [Numpy-discussion] interrupted svn updates
In-Reply-To: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com>
References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com>
Message-ID: <46447868.9030504@stsci.edu>

I had that problem this morning as well. It appears to be a problem on
the server side.

Chris

George Nurser wrote:
> I'm trying to update numpy from svn.
> My first try was very slow, but eventually produced 72 updated files;
> it gave this message at the end:
> svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
> svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response
> body: connection was closed by server. (http://svn.scipy.org)
>
> The 2nd try produced 4 more and then died with the above error.
>
> The 3rd and 4th just died with the above error.
>
> Any ideas -- is the problem here or there?
>
> Many thanks, George Nurser.
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

From michael.stoelzle at email.de Fri May 11 10:34:22 2007
From: michael.stoelzle at email.de (michael.stoelzle at email.de)
Date: Fri, 11 May 2007 16:34:22 +0200
Subject: [Numpy-discussion] problems with calculating numpy.float64
Message-ID: <217095730@web.de>

Hello out there,

I try to run this Python code snippet after I have imported:

import numpy as Numeric
import numpy as numpy
Numeric.Int = Numeric.int32
Numeric.Float = Numeric.float64

Code:

if m < maxN and n < maxN and self.activeWide[m+1, n+1]:
    try:
        deltaX = x[m+1] - x[m]
    except TypeError:
        print '#' * 70
        print 'type x', type(x)
        type_a, type_b = map(type, (x[m + 1], x[m]))
        print 'type_a, type_b', type_a, type_b, type_a is type_b
        print '#' * 70
        raise

My result for this part looks like this:

----------------------------------------
type x <type 'numpy.ndarray'>
type_a, type_b <type 'numpy.float64'> <type 'numpy.float64'> True
----------------------------------------

Inappropriate argument type.
unsupported operand type(s) for -: 'numpy.float64' and 'numpy.float64'

line 161 in setIDs in file F:\Unstrut_2007\arc_egmo\gw.py
line 71 in __init__ in file F:\Unstrut_2007\arc_egmo\mf2000\mf2000exe.py
line 380 in makeInactiveFringeMap in file F:\Unstrut_2007\arc_egmo\mf2000\mf2000exe.py

This means that Python can't execute the line:

deltaX = x[m+1] - x[m]

You can see the "types" of the values between the two dashed lines above.
I tried to reproduce this problem in my Python 2.4 interpreter on my
Windows XP Prof. OS, and there are no problems calculating with these
types; everything is OK.

Thanks for your help

Greetz from Berlin, Germany
Michael
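For what it's worth, a self-contained version of the snippet runs cleanly
on a healthy installation; maxN, x and activeWide below are made-up
stand-ins for the poster's data, which is not shown:

    import numpy as np

    # Made-up stand-ins for the poster's data.
    maxN = 3
    x = np.linspace(0.0, 1.0, maxN + 1)            # float64 vector
    activeWide = np.ones((maxN + 1, maxN + 1), dtype=bool)
    m = n = 1

    if m < maxN and n < maxN and activeWide[m+1, n+1]:
        try:
            deltaX = x[m+1] - x[m]                 # float64 - float64
        except TypeError:
            print '#' * 70
            print 'type x', type(x)
            type_a, type_b = map(type, (x[m + 1], x[m]))
            print 'type_a, type_b', type_a, type_b, type_a is type_b
            print '#' * 70
            raise
        print deltaX                               # 0.333... here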
From jstrunk at enthought.com Fri May 11 11:52:32 2007
From: jstrunk at enthought.com (Jeff Strunk)
Date: Fri, 11 May 2007 10:52:32 -0500
Subject: [Numpy-discussion] interrupted svn updates
In-Reply-To: <46447868.9030504@stsci.edu>
References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com> <46447868.9030504@stsci.edu>
Message-ID: <200705111052.32784.jstrunk@enthought.com>

Is this still causing trouble? I restarted apache about 20 minutes after you
sent this.

Thanks,
Jeff

On Friday 11 May 2007 9:06 am, Christopher Hanley wrote:
> I had that problem this morning as well. It appears to be a problem on
> the server side.
>
> Chris
>
> George Nurser wrote:
> > I'm trying to update numpy from svn.
> > My first try was very slow, but eventually produced 72 updated files;
> > it gave this message at the end:
> > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
> > svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response
> > body: connection was closed by server. (http://svn.scipy.org)
> >
> > The 2nd try produced 4 more and then died with the above error.
> >
> > The 3rd and 4th just died with the above error.
> >
> > Any ideas -- is the problem here or there?
> >
> > Many thanks, George Nurser.
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion

From chanley at stsci.edu Fri May 11 12:01:27 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Fri, 11 May 2007 12:01:27 -0400
Subject: [Numpy-discussion] interrupted svn updates
In-Reply-To: <200705111052.32784.jstrunk@enthought.com>
References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com> <46447868.9030504@stsci.edu> <200705111052.32784.jstrunk@enthought.com>
Message-ID: <46449357.9050504@stsci.edu>

Everything seems fine to me. Thank you for your prompt support.

Chris

Jeff Strunk wrote:
> Is this still causing trouble? I restarted apache about 20 minutes after you
> sent this.
>
> Thanks,
> Jeff
>
> On Friday 11 May 2007 9:06 am, Christopher Hanley wrote:
>> I had that problem this morning as well. It appears to be a problem on
>> the server side.
>>
>> Chris
>>
>> George Nurser wrote:
>>> I'm trying to update numpy from svn.
>>> My first try was very slow, but eventually produced 72 updated files;
>>> it gave this message at the end:
>>> svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
>>> svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response
>>> body: connection was closed by server. (http://svn.scipy.org)
>>>
>>> The 2nd try produced 4 more and then died with the above error.
>>>
>>> The 3rd and 4th just died with the above error.
>>>
>>> Any ideas -- is the problem here or there?
>>>
>>> Many thanks, George Nurser.
>>> _______________________________________________
>>> Numpy-discussion mailing list
>>> Numpy-discussion at scipy.org
>>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
From gnurser at googlemail.com Fri May 11 12:21:36 2007
From: gnurser at googlemail.com (George Nurser)
Date: Fri, 11 May 2007 17:21:36 +0100
Subject: [Numpy-discussion] interrupted svn updates
In-Reply-To: <200705111052.32784.jstrunk@enthought.com>
References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com> <46447868.9030504@stsci.edu> <200705111052.32784.jstrunk@enthought.com>
Message-ID: <1d1e6ea70705110921o412e6fd0xa05901082d19ea50@mail.gmail.com>

It still seems to be causing me problems.

Jeff, I tried a few minutes ago and got:
svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
svn: REPORT of '/svn/numpy/!svn/vcc/default': Chunk delimiter was invalid
(http://svn.scipy.org)

and then just now the previous error:
svn: REPORT request failed on '/svn/numpy/!svn/vcc/default'
svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response
body: connection was closed by server. (http://svn.scipy.org)

I don't have a great knowledge of subversion -- is there any cache-type
thing I should clear?

Many thanks for looking at this. George Nurser.

From peridot.faceted at gmail.com Fri May 11 13:06:37 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 11 May 2007 13:06:37 -0400
Subject: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available
In-Reply-To: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>
References: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>
Message-ID: 

On 10/05/07, Perry Greenfield wrote:
> I have updated the "Using Python for Interactive Data Analysis"
> tutorial to use numpy instead of numarray (finally!). There are
> further improvements I would like to make in its organization and
> formatting (in the process including suggestions others have made to
> that end), but I'd rather get this version out, which I believe
> addresses all the content changes needed to make it useful for numpy,
> without delaying it any further.

Thank you for writing this - I'll have to see if it helps wean my
officemates away from C for quick scripting tasks.

Are you interested in additional examples? I could write one for
time-domain data (based on real, public, RXTE data, showing some
evidence for sub-millisecond pulsations in XTE J1739-85 (reported by
Kaaret et al)). I notice that the source for the tutorial does not
seem to be available...

Anne

From bernhard.voigt at gmail.com Fri May 11 15:07:01 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Fri, 11 May 2007 21:07:01 +0200
Subject: [Numpy-discussion] subclassing ndarray and recarray
In-Reply-To: <200705102010.46574.pgmdevlist@gmail.com>
References: <21a270aa0705100421l3c9bbfccu8616e39b1b88c954@mail.gmail.com> <200705102010.46574.pgmdevlist@gmail.com>
Message-ID: <21a270aa0705111207t657ced4fn3fea789963c25818@mail.gmail.com>

Hi Pierre,

that's great! It didn't do exactly what I wanted, but seeing how to
overwrite the __getitem__ and __getslice__ methods I can adapt my class
so that it works with the use cases I need.

Thanks for your help! Bernhard

On 5/11/07, Pierre GM wrote:
>
> Bernhard,
> Looks like you have to modify your __getitem__ and __getslice__ methods. The
> following seems to work in simple cases.
>
> The numpy.array in front of numpy.ndarray.__getxxx__ is to transform the
> 'numpy.void' records back into regular arrays.
>
> def __getitem__(self, key):
>     if isinstance(key, arraydict) and key.dtype == bool:
>         # The key is a condition
>         return self.select(self.keys(key))
>     elif isinstance(key, str):
>         # The key is a string, therefore a field
>         return numpy.ndarray.__getitem__(self, key)
>     else:
>         # The key is a sequence or a scalar
>         obj = numpy.array(numpy.ndarray.__getitem__(self, key)).view(type(self))
>         key = numpy.array(key, copy=False, ndmin=1)
>         # _mapping won't like dealing w/ negative keys: add n to them
>         key[key<0] += len(self)
>         obj.__make_mapping(key)
>         return obj
>
> def __getslice__(self, i, j):
>     obj = numpy.array(numpy.ndarray.__getslice__(self, i, j)).view(type(self))
>     obj.__make_mapping(range(i, j+1))
>     return obj
>

-- 
Bernhard Voigt Phone: ++49 33762 - 7 - 7291
DESY, Zeuthen Mail: bernhard.voigt at desy.de
Platanenallee 6 Web: www-zeuthen.desy.de/~bvoigt
15738 Zeuthen AIM/Skype: bernievoigt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From perry at stsci.edu Fri May 11 15:31:58 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Fri, 11 May 2007 15:31:58 -0400
Subject: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available
In-Reply-To: 
References: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu>
Message-ID: <185C14CA-3321-42D6-B462-CA6B9379D75F@stsci.edu>

On May 11, 2007, at 1:06 PM, Anne Archibald wrote:
> On 10/05/07, Perry Greenfield wrote:
>> I have updated the "Using Python for Interactive Data Analysis"
>> tutorial to use numpy instead of numarray (finally!). There are
>> further improvements I would like to make in its organization and
>> formatting (in the process including suggestions others have made to
>> that end), but I'd rather get this version out, which I believe
>> addresses all the content changes needed to make it useful for numpy,
>> without delaying it any further.
>
> Thank you for writing this - I'll have to see if it helps wean my
> officemates away from C for quick scripting tasks.
>
> Are you interested in additional examples? I could write one for
> time-domain data (based on real, public, RXTE data, showing some
> evidence for sub-millisecond pulsations in XTE J1739-85 (reported by
> Kaaret et al)). I notice that the source for the tutorial does not
> seem to be available...
>
> Anne

Hi Anne,

Sure I could use more examples. The source is in LyX. I suppose I can
put it on a svn server somewhere. I haven't yet figured out what to do
about that, but it seems like a good thing to do.

Perry

From perry at stsci.edu Fri May 11 15:33:14 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Fri, 11 May 2007 15:33:14 -0400
Subject: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available
In-Reply-To: <46447617.30704@bigpond.net.au>
References: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu> <46447617.30704@bigpond.net.au>
Message-ID: 

On May 11, 2007, at 9:56 AM, Gary Ruben wrote:
> This is great Perry,
>
> I think this will help to convince our department's astronomer(s) to
> learn and maybe use Python for teaching.
> By the way, if you do a global search for "numarray" in your document,
> you'll pick up a few pieces of unchanged text and code.
>
> Gary R.

Yeah, I meant to do that at some point. I'll fix these soon.
Perry From gnurser at googlemail.com Fri May 11 16:46:56 2007 From: gnurser at googlemail.com (George Nurser) Date: Fri, 11 May 2007 21:46:56 +0100 Subject: [Numpy-discussion] interrupted svn updates In-Reply-To: <200705111052.32784.jstrunk@enthought.com> References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com> <46447868.9030504@stsci.edu> <200705111052.32784.jstrunk@enthought.com> Message-ID: <1d1e6ea70705111346x5005bc76o2092ba63942fc18e@mail.gmail.com> Jeff, Sorry to bother you again on this, but it's certainly still giving the same problem. svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response body: connection was closed by server. (http://svn.scipy.org) I tried a fresh checkout in a new directory, so the problems can't be here. Regards, George Nurser. From fullung at gmail.com Fri May 11 17:49:47 2007 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 11 May 2007 23:49:47 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <464393E0.3030900@ee.byu.edu> References: <464393E0.3030900@ee.byu.edu> Message-ID: <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> Here are a few tickets that might warrant some attention from someone who is intimately familiar with NumPy's internals ;-) http://projects.scipy.org/scipy/numpy/ticket/390 http://projects.scipy.org/scipy/numpy/ticket/405 http://projects.scipy.org/scipy/numpy/ticket/466 http://projects.scipy.org/scipy/numpy/ticket/469 On Thu, 10 May 2007, Travis Oliphant wrote: > Hi all, > > I'd like to relase NumPy 1.0.3 next week (on Tuesday) along with a new > release of SciPy. Please let me know of changes that you are planning > on making before then. > > Best, > > -Travis From jstrunk at enthought.com Fri May 11 18:18:04 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Fri, 11 May 2007 17:18:04 -0500 Subject: [Numpy-discussion] interrupted svn updates In-Reply-To: <1d1e6ea70705111346x5005bc76o2092ba63942fc18e@mail.gmail.com> References: <1d1e6ea70705110704v2b1de501xa2dd371bc71b940a@mail.gmail.com> <200705111052.32784.jstrunk@enthought.com> <1d1e6ea70705111346x5005bc76o2092ba63942fc18e@mail.gmail.com> Message-ID: <200705111718.04601.jstrunk@enthought.com> On Friday 11 May 2007 3:46 pm, George Nurser wrote: > Jeff, > > Sorry to bother you again on this, but it's certainly still giving the > same problem. > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': Could not read response > body: connection was closed by server. (http://svn.scipy.org) > > I tried a fresh checkout in a new directory, so the problems can't be here. > > Regards, George Nurser. It appears to be a runaway moin process. The load average was pretty high. I'll keep a closer eye on it to see what might be happening. I restarted apache to fix it temporarily. Thanks, Jeff From openopt at ukr.net Fri May 11 18:22:58 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 12 May 2007 01:22:58 +0300 Subject: [Numpy-discussion] best way of counting time and cputime? Message-ID: <4644ECC2.5060909@ukr.net> hi all, please inform me which way for counting elapsed time and cputime in Python is best? Previously in some Python sources I noticed there are some ones. Currently I use time.time() for time. Thx, D. 
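The replies further down cover the options in detail; as a quick sketch of
the two standard-library tools involved (nothing beyond the standard
library assumed):

    import time
    import timeit

    # Wall-clock time around a block of code.
    t0 = time.time()
    total = sum(i * i for i in range(100000))
    print(time.time() - t0)              # elapsed seconds

    # For micro-benchmarks, timeit repeats the statement and picks a
    # suitable timer for the platform.
    t = timeit.Timer('sum(i * i for i in range(1000))')
    print(min(t.repeat(3, 1000)))        # best total of 3 x 1000 runs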
From fullung at gmail.com Fri May 11 18:50:10 2007 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 12 May 2007 00:50:10 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> References: <464393E0.3030900@ee.byu.edu> <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> Message-ID: <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> Here's another issue with a patch that looks ready to go: http://projects.scipy.org/scipy/numpy/ticket/509 Enhancement you might consider: http://projects.scipy.org/scipy/numpy/ticket/375 And this one looks like it can be closed: http://projects.scipy.org/scipy/numpy/ticket/395 Cheers, Albert On Fri, 11 May 2007, Albert Strasheim wrote: > Here are a few tickets that might warrant some attention from someone > who is intimately familiar with NumPy's internals ;-) > > http://projects.scipy.org/scipy/numpy/ticket/390 > http://projects.scipy.org/scipy/numpy/ticket/405 > http://projects.scipy.org/scipy/numpy/ticket/466 > http://projects.scipy.org/scipy/numpy/ticket/469 > > On Thu, 10 May 2007, Travis Oliphant wrote: > > > Hi all, > > > > I'd like to relase NumPy 1.0.3 next week (on Tuesday) along with a new > > release of SciPy. Please let me know of changes that you are planning > > on making before then. From oliphant.travis at ieee.org Fri May 11 19:11:19 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 May 2007 17:11:19 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> References: <464393E0.3030900@ee.byu.edu> <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> Message-ID: <4644F817.60909@ieee.org> Albert Strasheim wrote: > Here's another issue with a patch that looks ready to go: > > http://projects.scipy.org/scipy/numpy/ticket/509 > > Enhancement you might consider: > > http://projects.scipy.org/scipy/numpy/ticket/375 > > And this one looks like it can be closed: > > http://projects.scipy.org/scipy/numpy/ticket/395 > > Cheers, > > Albert > Thanks for the ticket reviews, Albert. That is really helpful. -Travis From cookedm at physics.mcmaster.ca Fri May 11 19:48:21 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 11 May 2007 19:48:21 -0400 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> References: <464393E0.3030900@ee.byu.edu> <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> Message-ID: <20070511234821.GA25941@arbutus.physics.mcmaster.ca> On Sat, May 12, 2007 at 12:50:10AM +0200, Albert Strasheim wrote: > Here's another issue with a patch that looks ready to go: > > http://projects.scipy.org/scipy/numpy/ticket/509 > > Enhancement you might consider: > > http://projects.scipy.org/scipy/numpy/ticket/375 > > And this one looks like it can be closed: > > http://projects.scipy.org/scipy/numpy/ticket/395 > > Cheers, > > Albert > > On Fri, 11 May 2007, Albert Strasheim wrote: > > > Here are a few tickets that might warrant some attention from someone > > who is intimately familiar with NumPy's internals ;-) > > > > http://projects.scipy.org/scipy/numpy/ticket/390 > > http://projects.scipy.org/scipy/numpy/ticket/405 > > http://projects.scipy.org/scipy/numpy/ticket/466 > > http://projects.scipy.org/scipy/numpy/ticket/469 I've added a 1.0.3 milestone and set these to them (or to 1.1, according to Travis's comments). 
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fullung at gmail.com Sat May 12 05:44:01 2007 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 12 May 2007 11:44:01 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <4644F817.60909@ieee.org> References: <464393E0.3030900@ee.byu.edu> <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> <4644F817.60909@ieee.org> Message-ID: <20070512094401.GA10725@dogbert.sdsl.sun.ac.za> On Fri, 11 May 2007, Travis Oliphant wrote: > Thanks for the ticket reviews, Albert. That is really helpful. My pleasure. Found two more issues that look like they could be addressed: http://projects.scipy.org/scipy/numpy/ticket/422 http://projects.scipy.org/scipy/numpy/ticket/450 Cheers, Albert From bernhard.voigt at gmail.com Sat May 12 05:51:53 2007 From: bernhard.voigt at gmail.com (bernhard.voigt at gmail.com) Date: Sat, 12 May 2007 09:51:53 -0000 Subject: [Numpy-discussion] best way of counting time and cputime? In-Reply-To: <4644ECC2.5060909@ukr.net> References: <4644ECC2.5060909@ukr.net> Message-ID: <1178963513.131762.51580@l77g2000hsb.googlegroups.com> It depends on what you're aiming at. If you want to compare different implementations of some expressions and need to know their average execution times you should use the timeit module. If you want to have the full execution time of a script, time.time (call at the begin and end, compute the difference, gives the elapsed time) and time.clock (under linux the cpu clock time used by the process) are the methods you want. Cheers! Bernhard On May 12, 12:22 am, dmitrey wrote: > hi all, > please inform me which way for counting elapsed time and cputime in > Python is best? Previously in some Python sources I noticed there are > some ones. > Currently I use time.time() for time. > Thx, D. > _______________________________________________ > Numpy-discussion mailing list > Numpy-discuss... at scipy.orghttp://projects.scipy.org/mailman/listinfo/numpy-discussion From fullung at gmail.com Sat May 12 06:05:31 2007 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 12 May 2007 12:05:31 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <20070511234821.GA25941@arbutus.physics.mcmaster.ca> References: <464393E0.3030900@ee.byu.edu> <20070511214947.GA2228@dogbert.sdsl.sun.ac.za> <20070511225010.GA2672@dogbert.sdsl.sun.ac.za> <20070511234821.GA25941@arbutus.physics.mcmaster.ca> Message-ID: <20070512100531.GB10725@dogbert.sdsl.sun.ac.za> Hello all On Fri, 11 May 2007, David M. Cooke wrote: > I've added a 1.0.3 milestone and set these to them (or to 1.1, according > to Travis's comments). I've reviewed some more tickets and filed everything that looks like it can be resolved for this release under 1.0.3. To see which tickets are still outstanding (some need fixes, some can just be closed if appropriate), take a look at the list under "1.0.3 Release Release" on this page: http://projects.scipy.org/scipy/numpy/report/3 Tickets marked with the 1.0.3 milestone that aren't going to be fixed, but should be, should have their milestone changed to 1.1 (and maybe we should add a 1.0.4 milestone too). 
Regards, Albert From fullung at gmail.com Sat May 12 07:19:18 2007 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 12 May 2007 13:19:18 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <464393E0.3030900@ee.byu.edu> References: <464393E0.3030900@ee.byu.edu> Message-ID: <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> I've more or less finished my quick triage effort. Issues remaining to be resolved for the 1.0.3 release: http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release If they can't be fixed for this release, we should move them over to 1.1 or maybe 1.0.4 (when it is created) if they can be fixed "soon but not now". There are a few tickets that don't have a milestone yet: http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone= The roadmap also shows a better picture of what's going on: http://projects.scipy.org/scipy/numpy/roadmap?show=all Cheers, Albert On Thu, 10 May 2007, Travis Oliphant wrote: > Hi all, > > I'd like to relase NumPy 1.0.3 next week (on Tuesday) along with a new > release of SciPy. Please let me know of changes that you are planning > on making before then. > > Best, > > -Travis From gael.varoquaux at normalesup.org Sat May 12 07:40:11 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 12 May 2007 13:40:11 +0200 Subject: [Numpy-discussion] Getting started wiki page Message-ID: <20070512114011.GE7911@clipper.ens.fr> Hi all, I would very much link the Getting Started wiki page ( http://scipy.org/Getting_Started ) to the front page. But I am not sure it is of good enough quality so far. Could people please have a look and make comments, or edit the page. Cheers, Ga?l From matthew.brett at gmail.com Sat May 12 08:01:10 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 12 May 2007 13:01:10 +0100 Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page In-Reply-To: <20070512114011.GE7911@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: <1e2af89e0705120501p1dccd8c5yfc2b229578484420@mail.gmail.com> > I would very much link the Getting Started wiki page ( > http://scipy.org/Getting_Started ) to the front page. But I am not sure > it is of good enough quality so far. Could people please have a look and > make comments, or edit the page. Thank you for doing this. It's pitched very well. Matthew From charlesr.harris at gmail.com Sat May 12 10:15:35 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 12 May 2007 08:15:35 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 release next week In-Reply-To: <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> Message-ID: On 5/12/07, Albert Strasheim wrote: > > I've more or less finished my quick triage effort. > > Issues remaining to be resolved for the 1.0.3 release: > > > http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release > > If they can't be fixed for this release, we should move them over to > 1.1 or maybe 1.0.4 (when it is created) if they can be fixed "soon but > not now". 
> > There are a few tickets that don't have a milestone yet: > > > http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone= > > The roadmap also shows a better picture of what's going on: > > http://projects.scipy.org/scipy/numpy/roadmap?show=all Thanks, Albert. The tickets look much better organized now. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From subscriber100 at rjs.org Sat May 12 12:40:28 2007 From: subscriber100 at rjs.org (Ray Schumacher) Date: Sat, 12 May 2007 09:40:28 -0700 Subject: [Numpy-discussion] numpy array sharing between processes? Message-ID: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat May 12 13:14:11 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 12 May 2007 11:14:11 -0600 Subject: [Numpy-discussion] best way of counting time and cputime? In-Reply-To: <1178963513.131762.51580@l77g2000hsb.googlegroups.com> References: <4644ECC2.5060909@ukr.net> <1178963513.131762.51580@l77g2000hsb.googlegroups.com> Message-ID: On 5/12/07, bernhard.voigt at gmail.com wrote: > It depends on what you're aiming at. If you want to compare different > implementations of some expressions and need to know their average > execution times you should use the timeit module. If you want to have > the full execution time of a script, time.time (call at the begin and > end, compute the difference, gives the elapsed time) and time.clock > (under linux the cpu clock time used by the process) are the methods > you want. Don't use time.clock, it has a nasty wraparound problem that will give you negative times after a while, and effectively junk for very long running processes. This is much safer (from IPython.genutils): def clock(): """clock() -> floating point number Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of the process. This is done via a call to resource.getrusage, so it avoids the wraparound problems in time.clock().""" u,s = resource.getrusage(resource.RUSAGE_SELF)[:2] return u+s The versions for only user or only system time are obvious, and they're already in IPython.genutils as well: http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/genutils.py#L165 Cheers, f From fperez.net at gmail.com Sat May 12 14:12:21 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 12 May 2007 12:12:21 -0600 Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page In-Reply-To: <20070512114011.GE7911@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: On 5/12/07, Gael Varoquaux wrote: > Hi all, > > I would very much link the Getting Started wiki page ( > http://scipy.org/Getting_Started ) to the front page. But I am not sure > it is of good enough quality so far. Could people please have a look and > make comments, or edit the page. Great work! Thanks a lot for putting time into this, which is extremely useful to newcomers. One minor nit: I think it would be better to more prominently mention the -pylab switch right at the begginning of your FFT example. The reason is that without it, plotting is really nearly unusable, so rather than 1. show really doesn't-works-well approach 2. show solution later I think it would be best to start with the -pylab approach from the start. You can mention that -pylab is only needed if you want plotting and requires matplotlib. 
Just my 1e-2,

f

From gael.varoquaux at normalesup.org Sat May 12 14:21:57 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 12 May 2007 20:21:57 +0200
Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page
In-Reply-To: 
References: <20070512114011.GE7911@clipper.ens.fr>
Message-ID: <20070512182157.GD11346@clipper.ens.fr>

On Sat, May 12, 2007 at 12:12:21PM -0600, Fernando Perez wrote:
> Thanks a lot for putting time into this, which is extremely useful to
> newcomers.

I got bored of always explaining the same things to project students :->.

> I think it would be best to start with the -pylab approach from the
> start. You can mention that -pylab is only needed if you want
> plotting and requires matplotlib.

I think you are right. The biggest problem is that (at least with the
enthon python distribution) there is no icon under windows to start
ipython with the "-pylab" switch, and many people have no clue what we
are talking about. I change as you suggested, though.

Ga?l

From openopt at ukr.net Sat May 12 15:33:27 2007
From: openopt at ukr.net (dmitrey)
Date: Sat, 12 May 2007 22:33:27 +0300
Subject: [Numpy-discussion] best way of counting time and cputime?
In-Reply-To: 
References: <4644ECC2.5060909@ukr.net> <1178963513.131762.51580@l77g2000hsb.googlegroups.com>
Message-ID: <46461687.1070603@ukr.net>

Is the genutils module not included in the standard CPython
distribution? First of all, I'm interested in the best way for the
latter, so that users don't need to install anything else.
Thx, D.

Fernando Perez wrote:
> On 5/12/07, bernhard.voigt at gmail.com wrote:
>
>> It depends on what you're aiming at. If you want to compare different
>> implementations of some expressions and need to know their average
>> execution times you should use the timeit module. If you want to have
>> the full execution time of a script, time.time (call at the begin and
>> end, compute the difference, gives the elapsed time) and time.clock
>> (under linux the cpu clock time used by the process) are the methods
>> you want.
>>
>
> Don't use time.clock, it has a nasty wraparound problem that will give
> you negative times after a while, and effectively junk for very long
> running processes. This is much safer (from IPython.genutils):
>
> def clock():
>     """clock() -> floating point number
>
>     Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
>     the process. This is done via a call to resource.getrusage, so it
>     avoids the wraparound problems in time.clock()."""
>
>     u,s = resource.getrusage(resource.RUSAGE_SELF)[:2]
>     return u+s
>
> The versions for only user or only system time are obvious, and
> they're already in IPython.genutils as well:
>
> http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/genutils.py#L165
>
> Cheers,
>
> f
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From ryanlists at gmail.com Sat May 12 16:05:41 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Sat, 12 May 2007 15:05:41 -0500
Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page
In-Reply-To: <20070512182157.GD11346@clipper.ens.fr>
References: <20070512114011.GE7911@clipper.ens.fr> <20070512182157.GD11346@clipper.ens.fr>
Message-ID: 

You can add the -pylab switch to the desktop shortcut under Windows.
I had created a Windows IPython installer that automatically creates a second entry under Start > All Programs > IPython that includes the -pylab -p scipy option. You can download my installer from here: http://www.siue.edu/~rkrauss/ipython-0.7.3.svn.win32.exe but as the name implies it is 0.7.3 which is quite old. Ville said he was going to put this in the standard windows executable from now on. Ryan On 5/12/07, Gael Varoquaux wrote: > On Sat, May 12, 2007 at 12:12:21PM -0600, Fernando Perez wrote: > > Thanks a lot for putting time into this, which is extremely useful to > > newcomers. > > I got bored of always explaining the same things to project students :->. > > > I think it would be best to start with the -pylab approach from the > > start. You can mention that -pylab is only needed if you want > > plotting and requires matplotlib. > > I think you are right. The biggest problem is that (at least with the > enthon python distribution) there is no icon under windows to start > ipython with the "-pylab" switch, and many people have no clue what we > are talking about. I change as you suggested, though. > > Ga?l > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From brian.lee.hawthorne at gmail.com Sat May 12 16:09:46 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Sat, 12 May 2007 13:09:46 -0700 Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> <20070512182157.GD11346@clipper.ens.fr> Message-ID: <796269930705121309y459c29bfkdd6c84521ca4c7c@mail.gmail.com> Seems like it might be convenient for IPython to detect if matplotlib is installed and if it is then to use pylab mode by default (unless specified otherwise with a switch like -nopylab). Brian On 5/12/07, Ryan Krauss wrote: > > You can add the -pylab switch to the desktop shortcut under Windows. > I had created a Windows IPython installer that automatically creates a > second entry under Start > All Programs > IPython > that includes the -pylab -p scipy option. You can download my > installer from here: > http://www.siue.edu/~rkrauss/ipython-0.7.3.svn.win32.exe > but as the name implies it is 0.7.3 which is quite old. Ville said he > was going to put this in the standard windows executable from now on. > > Ryan > > On 5/12/07, Gael Varoquaux wrote: > > On Sat, May 12, 2007 at 12:12:21PM -0600, Fernando Perez wrote: > > > Thanks a lot for putting time into this, which is extremely useful to > > > newcomers. > > > > I got bored of always explaining the same things to project students > :->. > > > > > I think it would be best to start with the -pylab approach from the > > > start. You can mention that -pylab is only needed if you want > > > plotting and requires matplotlib. > > > > I think you are right. The biggest problem is that (at least with the > > enthon python distribution) there is no icon under windows to start > > ipython with the "-pylab" switch, and many people have no clue what we > > are talking about. I change as you suggested, though. 
> > > > Ga?l > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat May 12 16:17:22 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 12 May 2007 15:17:22 -0500 Subject: [Numpy-discussion] [SciPy-user] Getting started wiki page In-Reply-To: <796269930705121309y459c29bfkdd6c84521ca4c7c@mail.gmail.com> References: <20070512114011.GE7911@clipper.ens.fr> <20070512182157.GD11346@clipper.ens.fr> <796269930705121309y459c29bfkdd6c84521ca4c7c@mail.gmail.com> Message-ID: <464620D2.1040506@gmail.com> Brian Hawthorne wrote: > Seems like it might be convenient for IPython to detect if matplotlib is > installed and if it is then to use pylab mode by default (unless > specified otherwise with a switch like -nopylab). That's a bad idea. IPython has some magic, but it shouldn't be that magical. Just because a package is installed doesn't mean that the user wants it loaded every time with GUI threads and all of the other finicky stuff that comes along with -pylab. IPython is a general purpose tool, not a front end to matplotlib. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Sat May 12 17:45:15 2007 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 May 2007 14:45:15 -0700 Subject: [Numpy-discussion] numpy array sharing between processes? In-Reply-To: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> References: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> Message-ID: <4646356B.30104@astraw.com> Ray Schumacher wrote: > > After Googling for examples on this, in the Cookbook > http://www.scipy.org/Cookbook/Multithreading > MPI and POSH (dead?), I don't think I know the answer... > We have a data collection app running on dual core processors; I start > one thread collecting/writing new data directly into a numpy circular > buffer, another thread does correlation on the newest data and > occasional FFTs, both now use 50% CPU, total. > The threads never need to access the same buffer slices. > I'd prefer to have two processes, forking the FFT process off and > utilizing the second core. The processes would only need to share two > variables (buffer insert position and a short_integer result from the > FFT process, each process would only read or write), in addition to > the numpy array itself. > > Should I pass the numpy address to the second process and just create > an identical array there, as in > http://projects.scipy.org/pipermail/numpy-discussion/2006-October/023647.html > ? > > Use a file-like object to share the other variables? mmap? > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413807 > > I also thought ctypes > ctypes.string_at(address[, size]) > might do both easily enough, although would mean a copy. We already > use it for the collection thread. > Does anyone have a lightweight solution to this relatively simple sort > of problem? 
I'll pitch in a few donuts (and my eternal gratitude) for an example of
shared memory use using numpy arrays that is cross platform, or at least
works in linux, mac, and windows.

-Andrew

From robert.kern at gmail.com Sat May 12 20:04:09 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 12 May 2007 19:04:09 -0500
Subject: [Numpy-discussion] problems with calculating numpy.float64
In-Reply-To: <216994041@web.de>
References: <216994041@web.de>
Message-ID: <464655F9.6070804@gmail.com>

michael.stoelzle at email.de wrote:
> Hello out there,
>
> i try to run this Python-code snippet after I have imported:

Can you try to come up with a small, self-contained example? I can't
replicate your problem.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From davidnovakovic at gmail.com Sat May 12 20:22:40 2007
From: davidnovakovic at gmail.com (Dave P. Novakovic)
Date: Sun, 13 May 2007 10:22:40 +1000
Subject: [Numpy-discussion] very large matrices.
Message-ID: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com>

Hi,

I have test data of about 75000 x 75000 dimensions. I need to do svd,
or at least an eigen decomp on this data. from search suggests to me
that the linalg functions in scipy and numpy don't work on sparse
matrices.

I can't even get empty((10000,10000),dtype=float) to work (memory
errors, or too many dims), I'm starting to feel like I'm in a bit of
trouble here :)

What do people use to do large svd's? I'm not adverse to using another
lib or wrapping something.

Cheers

Dave

From charlesr.harris at gmail.com Sat May 12 20:45:54 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 12 May 2007 18:45:54 -0600
Subject: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> Message-ID: On 5/12/07, Dave P. Novakovic wrote: > > Hi, > > I have test data of about 75000 x 75000 dimensions. I need to do svd, > or at least an eigen decomp on this data. from search suggests to me > that the linalg functions in scipy and numpy don't work on sparse > matrices. > > I can't even get empty((10000,10000),dtype=float) to work (memory > errors, or too many dims), I'm starting to feel like I'm in a bit of > trouble here :) Umm, big. What do people use to do large svd's? I'm not adverse to using another > lib or wrapping something. What sort of machine do you have? There are column iterative methods for svd that resemble Gram-Schmidt orthogonalization that could probably be adapted to work over the array one column at a time. Are your arrays actually sparse? Do you only need a few eigenvalues? Are you doing least squares? A more precise description of the problem might lead to alternative , less demanding, approaches. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat May 12 20:54:54 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 12 May 2007 18:54:54 -0600 Subject: [Numpy-discussion] numpy array sharing between processes? In-Reply-To: <4646356B.30104@astraw.com> References: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> <4646356B.30104@astraw.com> Message-ID: On 5/12/07, Andrew Straw wrote: > > Ray Schumacher wrote: > > > > After Googling for examples on this, in the Cookbook > > http://www.scipy.org/Cookbook/Multithreading > > MPI and POSH (dead?), I don't think I know the answer... > > We have a data collection app running on dual core processors; I start > > one thread collecting/writing new data directly into a numpy circular > > buffer, another thread does correlation on the newest data and > > occasional FFTs, both now use 50% CPU, total. > > The threads never need to access the same buffer slices. > > I'd prefer to have two processes, forking the FFT process off and > > utilizing the second core. The processes would only need to share two > > variables (buffer insert position and a short_integer result from the > > FFT process, each process would only read or write), in addition to > > the numpy array itself. > > > > Should I pass the numpy address to the second process and just create > > an identical array there, as in > > > http://projects.scipy.org/pipermail/numpy-discussion/2006-October/023647.html > > ? > > > > Use a file-like object to share the other variables? mmap? > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413807 > > > > I also thought ctypes > > ctypes.string_at(address[, size]) > > might do both easily enough, although would mean a copy. We already > > use it for the collection thread. > > Does anyone have a lightweight solution to this relatively simple sort > > of problem? > I'll pitch in a few donuts (and my eternal gratitude) for an example of > shared memory use using numpy arrays that is cross platform, or at least > works in linux, mac, and windows. I wonder if you could mmap a file and use it as common memory? Forking in python under linux leads to copies because anything that accesses an object changes its reference count. Pipes are easy and could be used for synchronization. Would python threading work for you? 
It might be the easiest way to have the fft going on while doing something else. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidnovakovic at gmail.com Sat May 12 20:58:02 2007 From: davidnovakovic at gmail.com (Dave P. Novakovic) Date: Sun, 13 May 2007 10:58:02 +1000 Subject: [Numpy-discussion] very large matrices. In-Reply-To: References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> Message-ID: <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com> Hey, thanks for the response. core 2 duo with 4gb RAM. I've heard about iterative svd functions. I actually need a complete svd, with all eigenvalues (not LSI). I'm actually more interested in the individual eigenvectors. As an example, a single row could probably have about 3000 non zero elements. I think I'll try outputting a sparse matrix file and using svdlibc. If this works I'll wrap svdlibc with ctypes and post the results back here. i just wanted to make sure there was absolutely no way of doing it with sci/numpy before i looked at anything else. Cheers Dave On 5/13/07, Charles R Harris wrote: > > > On 5/12/07, Dave P. Novakovic wrote: > > Hi, > > > > I have test data of about 75000 x 75000 dimensions. I need to do svd, > > or at least an eigen decomp on this data. from search suggests to me > > that the linalg functions in scipy and numpy don't work on sparse > > matrices. > > > > I can't even get empty((10000,10000),dtype=float) to > work (memory > > errors, or too many dims), I'm starting to feel like I'm in a bit of > > trouble here :) > > Umm, big. > > > What do people use to do large svd's? I'm not adverse to using another > > lib or wrapping something. > > What sort of machine do you have? There are column iterative methods for svd > that resemble Gram-Schmidt orthogonalization that could probably be adapted > to work over the array one column at a time. Are your arrays actually > sparse? Do you only need a few eigenvalues? Are you doing least squares? A > more precise description of the problem might lead to alternative , less > demanding, approaches. > > Chuck > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From subscriber100 at rjs.org Sat May 12 21:52:28 2007 From: subscriber100 at rjs.org (Ray Schumacher) Date: Sat, 12 May 2007 18:52:28 -0700 Subject: [Numpy-discussion] numpy array sharing between processes? Message-ID: <6.2.3.4.2.20070512182528.02e72200@pop-server.san.rr.com> An HTML attachment was scrubbed... URL: From strawman at astraw.com Sat May 12 22:21:16 2007 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 May 2007 19:21:16 -0700 Subject: [Numpy-discussion] numpy array sharing between processes? In-Reply-To: References: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> <4646356B.30104@astraw.com> Message-ID: <4646761C.4070406@astraw.com> Charles R Harris wrote: > > I'll pitch in a few donuts (and my eternal gratitude) for an > example of > shared memory use using numpy arrays that is cross platform, or at > least > works in linux, mac, and windows. > > > I wonder if you could mmap a file and use it as common memory? Yes, that's the basic idea. Now for the example that works on those platforms... > Forking in python under linux leads to copies because anything that > accesses an object changes its reference count. I'm not sure what you're trying to say here. 
If it's shared memory, it's not copied -- that's the whole point. I don't
really care how I spawn the multiple processes, and indeed forking is one
way.
> Pipes are easy and could be used for synchronization.
True. But they're not going to be very fast. (I'd like to send streams of
realtime images between different processes.)
> Would python threading work for you?
That's what I use now and what I'd like to get away from because 1) the
GIL sucks and 2) (bug-free) threading is hard.

-Andrew

From charlesr.harris at gmail.com Sun May 13 00:23:20 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 12 May 2007 22:23:20 -0600
Subject: [Numpy-discussion] numpy array sharing between processes?
In-Reply-To: <4646761C.4070406@astraw.com>
References: <6.2.3.4.2.20070512084615.02ede7a0@pop-server.san.rr.com> <4646356B.30104@astraw.com> <4646761C.4070406@astraw.com>
Message-ID: 

On 5/12/07, Andrew Straw wrote:
> Charles R Harris wrote:
> >
> > I'll pitch in a few donuts (and my eternal gratitude) for an
> > example of shared memory use using numpy arrays that is cross
> > platform, or at least works in linux, mac, and windows.
> >
> > I wonder if you could mmap a file and use it as common memory?
> Yes, that's the basic idea. Now for the example that works on those
> platforms...
> > Forking in python under linux leads to copies because anything that
> > accesses an object changes its reference count.
> I'm not sure what you're trying to say here. If it's shared memory, it's
> not copied

Ah, but the child receives a *copy* of the parent memory, right down to
the heap and stack. In linux this is implemented as copy-on-write for
efficiency, but that won't do much good, for as soon as you touch a python
object its reference count changes and the copy is underway. The one thing
you do get is a copy of any open file descriptors and shared memory
handles, so you can exchange data that way if you are careful to
synchronize your accesses. Pipes are like that. However, when the child
exits the files are closed, so some care is needed. Shared memory, however,
persists.

Shared memory sounds like what you want. There are python modules shm and
shm_wrapper for unix systems, but windows has that capacity also so the
functionality could probably be extended. I've never used the modules, so
caveat emptor. I don't quite see how one would handle reference counting
to shared memory, so you probably have to explicitly destroy it when you
are done. Hmmm, what would be nice is numpy arrays created out of shared
memory. Boost seems to have stuff for shared memory, so that might be
another starting point.

Some info on the python modules is at http://nikitathespider.com/python/shm/

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
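As a sketch of that last wish -- numpy arrays backed by memory that several
processes can see -- numpy's own memmap class gives each process an ndarray
view of the same bytes in a file. The file name below is made up, and
synchronization between the processes still has to be arranged separately
(a pipe, a lock file):

    import numpy as np

    # Writer process: create the shared buffer and fill part of it.
    buf = np.memmap('shared_buffer.dat', dtype='float64', mode='w+',
                    shape=(1024,))
    buf[:8] = np.arange(8.0)
    buf.flush()                  # .sync() in older numpy releases

    # Reader process: open the same file and see the writer's data.
    view = np.memmap('shared_buffer.dat', dtype='float64', mode='r',
                     shape=(1024,))
    print(view[:8])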
Chuck

From peridot.faceted at gmail.com Sun May 13 00:45:24 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 13 May 2007 00:45:24 -0400
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com>
References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com>
Message-ID:

On 12/05/07, Dave P. Novakovic wrote:
> core 2 duo with 4gb RAM.
>
> I've heard about iterative svd functions. I actually need a complete
> svd, with all eigenvalues (not LSI). I'm actually more interested in
> the individual eigenvectors.
>
> As an example, a single row could probably have about 3000 non-zero elements.

I think you need to think hard about whether your problem can be done in another way.

First of all, the singular values (as returned from the svd) are not eigenvalues -- eigenvalue decomposition is a much harder problem, numerically.

Second, your full non-sparse matrix will be 8*75000*75000 bytes, or about 42 gibibytes. Put another way, the representation of your data alone is ten times the size of the RAM on the machine you're using.

Third, your matrix has 225 000 000 nonzero entries; assuming a perfect sparse representation with no extra bytes (at least two extra bytes per entry is typical, usually more), that's 1.7 GiB.

Recall that basically any full matrix factorization is at least O(N^3), so you can expect order 10^14 floating-point operations to be required. This is actually the *least* significant constraint; pushing stuff into and out of disk caches will be taking most of your time.

Even if you can represent your matrix sparsely (using only a couple of gibibytes), you've said you want the full set of eigenvectors, which is not likely to be a sparse matrix -- so your result is back up to 42 GiB. And you should expect an eigenvalue algorithm, if it even survives massive roundoff problems, to require something like that much working space; thus your problem probably has a working size of something like 84 GiB.

SVD is a little easier, if that's what you want, but the full solution is twice as large, though if you discard entries corresponding to small singular values it might be quite reasonable. You'll still need some fairly specialized code, though. Which form are you looking for?

Solving your problem in a reasonable amount of time, as described and on the hardware you specify, is going to require some very specialized algorithms; you could try looking for an out-of-core eigenvalue package, but I'd first look to see if there's any way you can simplify your problem -- getting just one eigenvector, maybe.

Anne

From davidnovakovic at gmail.com Sun May 13 02:46:15 2007
From: davidnovakovic at gmail.com (Dave P. Novakovic)
Date: Sun, 13 May 2007 16:46:15 +1000
Subject: Re: [Numpy-discussion] very large matrices.
Message-ID: <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com>

They are very large numbers indeed. Thanks for giving me a wake-up call.

Currently my data is represented as vectors in a vectorset, a typical sparse representation.

I reduced the problem significantly by removing lots of noise. I'm basically recording traces of a term's occurrences throughout a corpus and doing an analysis of the eigenvectors.

I reduced my matrix to 4863 x 4863 by filtering the original corpus. Now when I attempt svd, I'm finding a memory error in the svd routine. Is there a hard upper limit on the size of a matrix for these calculations?

File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 575, in svd
    vt = zeros((n, nvt), t)
MemoryError

Cheers

Dave

On 5/13/07, Anne Archibald wrote:
> On 12/05/07, Dave P. Novakovic wrote:
> > core 2 duo with 4gb RAM. [...]
>
> I think you need to think hard about whether your problem can be done
> in another way.
>
> [detailed memory and flop-count analysis snipped -- see above]
>
> Anne

From highdraw at email.de Sun May 13 05:50:55 2007
From: highdraw at email.de (highdraw)
Date: Sun, 13 May 2007 11:50:55 +0200
Subject: [Numpy-discussion] problems with calculating numpy.float64
Message-ID:

Hi out there,

this is the code segment:

if m < maxN and n < maxN and self.activeWide[m+1, n+1]:
    try:
        deltaX = x[m+1] - x[m]
    except TypeError:
        print '-' * 40
        print type(x)
        type_a, type_b = map(type, (x[m + 1], x[m]))
        print type_a, type_b, type_a is type_b
        print '-' * 40
        raise

The if-condition is True! I get an error at the line "deltaX = x[m+1] - x[m]". The values involved have these types:

x     : <type 'numpy.ndarray'>
x[m]  : <type 'numpy.float64'>
x[m+1]: <type 'numpy.float64'>
m     : counting variable, an integer

I just try to subtract x[m] from x[m+1], but I get an error: Inappropriate argument type.

unsupported operand type(s) for -: 'numpy.float64' and 'numpy.float64'

For more code snippets you can follow this URL:

http://www.python-forum.de/topic-10580.html

It is in German, but there are more code snippets and some tests from me and other users. The problem is that both variables are definitely of the same type, but I can't subtract them. When I reproduce the code in my Python interpreter there is no problem. I think the numpy imports are OK; I can't see the error.

Thanks for any help!
Michael

--

From openopt at ukr.net Sun May 13 07:36:39 2007
From: openopt at ukr.net (dmitrey)
Date: Sun, 13 May 2007 14:36:39 +0300
Subject: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?
Message-ID: <4646F847.6040701@ukr.net>

i.e. for example from flat array [1, 2, 3] obtain

array([[ 1.],
       [ 2.],
       [ 3.]])

I have numpy v 1.0.1
Thx, D.

From cookedm at physics.mcmaster.ca Sun May 13 07:40:46 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Sun, 13 May 2007 07:40:46 -0400
Subject: Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?
In-Reply-To: <4646F847.6040701@ukr.net>
References: <4646F847.6040701@ukr.net>
Message-ID: <20070513114046.GA1583@arbutus.physics.mcmaster.ca>

On Sun, May 13, 2007 at 02:36:39PM +0300, dmitrey wrote:
> i.e. for example from flat array [1, 2, 3] obtain
> array([[ 1.],
>        [ 2.],
>        [ 3.]])
>
> I have numpy v 1.0.1
> Thx, D.

Use newaxis:

In [1]: a = array([1., 2., 3.])

In [2]: a
Out[2]: array([ 1., 2., 3.])

In [3]: a[:,newaxis]
Out[3]:
array([[ 1.],
       [ 2.],
       [ 3.]])

In [4]: a[newaxis,:]
Out[4]: array([[ 1., 2., 3.]])

When newaxis is used as an index, a new axis of dimension 1 is added.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From dd55 at cornell.edu Sun May 13 07:46:47 2007
From: dd55 at cornell.edu (Darren Dale)
Date: Sun, 13 May 2007 07:46:47 -0400
Subject: Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?
In-Reply-To: <4646F847.6040701@ukr.net>
References: <4646F847.6040701@ukr.net>
Message-ID: <200705130746.47793.dd55@cornell.edu>

On Sunday 13 May 2007 7:36:39 am dmitrey wrote:
> i.e. for example from flat array [1, 2, 3] obtain
> array([[ 1.],
>        [ 2.],
>        [ 3.]])

a = array([1,2,3])
a.shape = (len(a),1)

From stefan at sun.ac.za Sun May 13 10:02:54 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sun, 13 May 2007 16:02:54 +0200
Subject: Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?
In-Reply-To: <200705130746.47793.dd55@cornell.edu>
References: <4646F847.6040701@ukr.net> <200705130746.47793.dd55@cornell.edu>
Message-ID: <20070513140254.GH7315@mentat.za.net>

On Sun, May 13, 2007 at 07:46:47AM -0400, Darren Dale wrote:
> a = array([1,2,3])
> a.shape = (len(a),1)

Or just

a.shape = (-1,1)

Cheers
Stéfan

From fullung at gmail.com Sun May 13 10:13:35 2007
From: fullung at gmail.com (Albert Strasheim)
Date: Sun, 13 May 2007 16:13:35 +0200
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za>
Message-ID: <20070513141334.GA3924@dogbert.sdsl.sun.ac.za>

Hello all

On Sat, 12 May 2007, Charles R Harris wrote:
> On 5/12/07, Albert Strasheim wrote:
> > I've more or less finished my quick triage effort.
>
> Thanks, Albert. The tickets look much better organized now.

My pleasure. Stefan van der Walt has also gotten in on the act and we're now down to 19 open tickets with 1.0.3 as the milestone.
http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release

Regards,
Albert

From charlesr.harris at gmail.com Sun May 13 10:30:47 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 13 May 2007 08:30:47 -0600
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com>
References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com> <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com>
Message-ID:

On 5/13/07, Dave P. Novakovic wrote:
> They are very large numbers indeed. Thanks for giving me a wake-up call.
> Currently my data is represented as vectors in a vectorset, a typical
> sparse representation.
>
> I reduced the problem significantly by removing lots of noise. I'm
> basically recording traces of a term's occurrences throughout a corpus
> and doing an analysis of the eigenvectors.
>
> I reduced my matrix to 4863 x 4863 by filtering the original corpus.
> Now when I attempt svd, I'm finding a memory error in the svd routine.
> Is there a hard upper limit on the size of a matrix for these
> calculations?

I get the same error here with linalg.svd(eye(5000)), and the memory is indeed gone. Hmm, I think it should work, although it is sure pushing the limits of what I've got: linalg.svd(eye(1000)) works fine. I think 4GB should be enough if your memory limits are set high enough.

Are you trying some sort of principal components analysis?

Chuck

From openopt at ukr.net Sun May 13 11:19:30 2007
From: openopt at ukr.net (dmitrey)
Date: Sun, 13 May 2007 18:19:30 +0300
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <20070513141334.GA3924@dogbert.sdsl.sun.ac.za>
References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> <20070513141334.GA3924@dogbert.sdsl.sun.ac.za>
Message-ID: <46472C82.4000808@ukr.net>

Is it possible somehow to speed up numpy 1.0.3 appearing in Linux update channels? (As for me, I'm interested in Ubuntu/Kubuntu; currently there is v 1.0.1.)

I tried to compile numpy 1.0.2, but, as well as in Octave compiling, it failed because "c compiler can't create executable".
gcc reinstallation didn't help, other c compilers are absent in the update channel (I had seen only tcc, but I'm sure it will not help, and it -- I mean trying to install other C compilers -- requires too much effort).
D.

From stefan at sun.ac.za Sun May 13 11:53:24 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sun, 13 May 2007 17:53:24 +0200
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <46472C82.4000808@ukr.net>
References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> <20070513141334.GA3924@dogbert.sdsl.sun.ac.za> <46472C82.4000808@ukr.net>
Message-ID: <20070513155324.GA12060@mentat.za.net>

On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
> Is it possible somehow to speed up numpy 1.0.3 appearing in Linux update
> channels? [compile failure snipped -- see above]

Many people here are compiling numpy fine under Ubuntu. Do you have write permissions to the output directory? What is the compiler error given?

Cheers
Stéfan

From openopt at ukr.net Sun May 13 11:54:49 2007
From: openopt at ukr.net (dmitrey)
Date: Sun, 13 May 2007 18:54:49 +0300
Subject: [Numpy-discussion] copy object with multiple subfields, including ndarrays
Message-ID: <464734C9.9000309@ukr.net>

hi all,
does anyone know how to copy an instance of a class that contains multiple subfields, for example

myObj.field1.subfield2 = 'asdf'
myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')

I tried

from copy import copy
myObjCopy = copy(myObj)

but it seems that it doesn't work correctly.
Thx, D.

From stefan at sun.ac.za Sun May 13 13:03:03 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sun, 13 May 2007 19:03:03 +0200
Subject: [Numpy-discussion] dtype hashes are not equal
Message-ID: <20070513170303.GB12060@mentat.za.net>

Hi all,

In the numpy.sctypes dictionary, there are two entries for uint32:

In [2]: N.sctypes['uint']
Out[2]:
[<type 'numpy.uint8'>,
 <type 'numpy.uint16'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint64'>]

Comparing the dtypes of the two types gives the correct answer:

In [3]: sc = N.sctypes['uint']

In [4]: N.dtype(sc[2]) == N.dtype(sc[3])
Out[4]: True

But the hash values for the dtypes (and the types) differ:

In [42]: for T in N.sctypes['uint']:
    dt = N.dtype(T)
    print T, dt
    print '=>', hash(T), hash(dt)

<type 'numpy.uint8'> uint8
=> -1217082432 -1217078592
<type 'numpy.uint16'> uint16
=> -1217082240 -1217078464
<type 'numpy.uint32'> uint32
=> -1217081856 -1217078336
<type 'numpy.uint32'> uint32
=> -1217082048 -1217078400
<type 'numpy.uint64'> uint64
=> -1217081664 -1217078208

Is this expected/correct behaviour?

Cheers
Stéfan

From robert.kern at gmail.com Sun May 13 13:15:28 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 13 May 2007 12:15:28 -0500
Subject: Re: [Numpy-discussion] dtype hashes are not equal
In-Reply-To: <20070513170303.GB12060@mentat.za.net>
References: <20070513170303.GB12060@mentat.za.net>
Message-ID: <464747B0.7030201@gmail.com>

Stefan van der Walt wrote:
> In the numpy.sctypes dictionary, there are two entries for uint32,
> and while their dtypes compare equal, the hash values differ.
> [example snipped]
>
> Is this expected/correct behaviour?

It's expected, but not desired. We haven't implemented the hash function for dtype objects, so we have the default, which is based on object identity rather than value. It is something that should be implemented, given time.
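To make the consequence concrete, here is a quick sketch of where the identity-based hash bites, and a stopgap that keys on the descriptor string rather than the dtype object:

import numpy

sc = numpy.sctypes['uint']
d1, d2 = numpy.dtype(sc[2]), numpy.dtype(sc[3])
print d1 == d2          # True
cache = {d1: 'seen'}
print d2 in cache       # False -- hash() falls back on id(), so the equal dtype misses
cache = {d1.str: 'seen'}
print d2.str in cache   # True -- the descriptor string (e.g. '<u4') hashes by value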
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From openopt at ukr.net Sun May 13 13:21:15 2007
From: openopt at ukr.net (dmitrey)
Date: Sun, 13 May 2007 20:21:15 +0300
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <20070513155324.GA12060@mentat.za.net>
References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> <20070513141334.GA3924@dogbert.sdsl.sun.ac.za> <46472C82.4000808@ukr.net> <20070513155324.GA12060@mentat.za.net>
Message-ID: <4647490B.3080701@ukr.net>

Stefan van der Walt wrote:
> On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
> > [compile problem snipped]
>
> Many people here are compiling numpy fine under Ubuntu. Do you have
> write permissions to the output directory? What is the compiler error
> given?
>
> Cheers
> Stéfan

Sorry, I meant compiling Python2.5 and Octave, not numpy & Octave. Python2.5 is already present (in Ubuntu 7.04), but I tried to compile and install it from sources because numpy compilation failed with (I have gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4), compiling as root):

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes -fPIC
compile options: '-I/usr/include/python2.5 -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 -c'
gcc: _configtest.c
/usr/lib/gcc/x86_64-linux-gnu/4.1.2/include/limits.h:122:61: error: limits.h: No such file or directory
/usr/include/python2.5/Python.h:32:19: error: stdio.h: No such file or directory
/usr/include/python2.5/Python.h:34:5: error: #error "Python.h requires that stdio.h define NULL."
[the same "No such file or directory" error repeats for string.h, errno.h, stdlib.h, unistd.h, assert.h, stdint.h, math.h, time.h, sys/time.h, sys/select.h and sys/stat.h]
/usr/include/python2.5/pyport.h:73: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'Py_uintptr_t'
/usr/include/python2.5/pyport.h:74: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'Py_intptr_t'
/usr/include/python2.5/pyport.h:97: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'Py_ssize_t'
/usr/include/python2.5/object.h:131: error: 'Py_ssize_t' declared as function returning a function
(etc, etc)

and here is a part of the python2.5 compilation log:

gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4)
configure:2093: $? = 0
configure:2095: gcc -V &5
gcc: '-V' option must have argument
configure:2098: $? = 1
configure:2121: checking for C compiler default output file name
configure:2124: gcc conftest.c >&5
/usr/bin/ld: crt1.o: No such file: No such file or directory
collect2: ld returned 1 exit status
configure:2127: $? = 1
configure: failed program was:
| /* confdefs.h. */
| #define _GNU_SOURCE 1
| #define _NETBSD_SOURCE 1
| #define __BSD_VISIBLE 1
| #define _BSD_TYPES 1
| #define _XOPEN_SOURCE 600
| #define _XOPEN_SOURCE_EXTENDED 1
| #define _POSIX_C_SOURCE 200112L
| /* end confdefs.h. */
|
| int
| main ()
| {
|
|  ;
|  return 0;
| }
configure:2166: error: C compiler cannot create executables
See `config.log' for more details.

D.

From matthieu.brucher at gmail.com Sun May 13 13:22:48 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 13 May 2007 19:22:48 +0200
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <4647490B.3080701@ukr.net>
References: <464393E0.3030900@ee.byu.edu> <20070513155324.GA12060@mentat.za.net> <4647490B.3080701@ukr.net>
Message-ID:

Hi,

you have a problem with your Ubuntu installation, not with numpy.

Matthieu

2007/5/13, dmitrey :
> [dmitrey's message and full build log requoted -- snipped]

From stefan at sun.ac.za Sun May 13 13:52:55 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sun, 13 May 2007 19:52:55 +0200
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <4647490B.3080701@ukr.net>
References: <464393E0.3030900@ee.byu.edu> <20070513141334.GA3924@dogbert.sdsl.sun.ac.za> <46472C82.4000808@ukr.net> <20070513155324.GA12060@mentat.za.net> <4647490B.3080701@ukr.net>
Message-ID: <20070513175255.GC12060@mentat.za.net>

Hi Dmitrey

On Sun, May 13, 2007 at 08:21:15PM +0300, dmitrey wrote:
> Sorry, I meant compiling Python2.5 and Octave, not numpy & Octave.
> Python2.5 is already present (in Ubuntu 7.04), but I tried to compile
> and install it from sources because numpy compilation failed

This isn't really the place to discuss compiling Python or Octave, but a good first move would be to install the 'build-essential' package. This will hopefully provide the header files and the compiler you need.

Cheers
Stéfan

From tim.hochberg at ieee.org Sun May 13 14:41:53 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Sun, 13 May 2007 11:41:53 -0700
Subject: Re: [Numpy-discussion] copy object with multiple subfields, including ndarrays
In-Reply-To: <464734C9.9000309@ukr.net>
References: <464734C9.9000309@ukr.net>
Message-ID:

It's always helpful if you can include a self-contained example, so it's easy to figure out exactly what you are getting at. I say that because I'm not entirely sure of the context here -- it appears that this is not a numpy-related issue at all, but rather a general Python question. If so, I think what you are looking for is copy.deepcopy. As its name implies, it does a deep copy of an object, as opposed to a shallow copy, which is what copy.copy does.
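For instance, a minimal sketch (the Bag class and the attribute names are made up here to mirror your example):

import copy
import numpy

class Bag(object):
    pass

myObj = Bag()
myObj.field1 = Bag()
myObj.field1.subfield2 = 'asdf'
myObj.field4 = Bag()
myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')

shallow = copy.copy(myObj)       # shares field1 and field4 with myObj
deep = copy.deepcopy(myObj)      # recursively copies them

myObj.field4.subfield8[0, 0] = 99
print shallow.field4.subfield8[0, 0]   # 99 -- the shallow copy saw the change
print deep.field4.subfield8[0, 0]      # 1  -- the deep copy did not

Since your subfields hold numpy arrays, deepcopy is almost certainly what you want.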
If that doesn't do what you want or I misunderstood your question, please supply some more detail.

On 5/13/07, dmitrey wrote:
> hi all,
> does anyone know how to copy an instance of a class that contains multiple
> subfields, for example
> myObj.field1.subfield2 = 'asdf'
> myObj.field4.subfield8 = numpy.mat('1 2 3; 4 5 6')
>
> I tried
> from copy import copy
> myObjCopy = copy(myObj)
> but it seems that it doesn't work correctly
>
> Thx, D.

--
//=][=\\
tim.hochberg at ieee.org

From cjw at sympatico.ca Sun May 13 20:31:53 2007
From: cjw at sympatico.ca (Colin J. Williams)
Date: Sun, 13 May 2007 20:31:53 -0400
Subject: Re: [Numpy-discussion] best way of counting time and cputime?
In-Reply-To: <46461687.1070603@ukr.net>
References: <4644ECC2.5060909@ukr.net> <1178963513.131762.51580@l77g2000hsb.googlegroups.com> <46461687.1070603@ukr.net>
Message-ID:

dmitrey wrote:
> Is the genutils module not included in the standard CPython distribution?
> First of all I'm interested in what is the best way for the latter, so
> that users don't need to install anything else.
> Thx, D.
>
> Fernando Perez wrote:
>> On 5/12/07, bernhard.voigt at gmail.com wrote:
>>> [timeit vs. time.time/time.clock advice snipped]
>>
>> Don't use time.clock, it has a nasty wraparound problem that will give
>> you negative times after a while, and effectively junk for very long
>> running processes. This is much safer (from IPython.genutils):
>>
>> def clock():
>>     """clock() -> floating point number
>>
>>     Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
>>     the process. This is done via a call to resource.getrusage, so it
>>     avoids the wraparound problems in time.clock()."""
>>
>>     u,s = resource.getrusage(resource.RUSAGE_SELF)[:2]
>>     return u+s
>>
>> The versions for only user or only system time are obvious, and
>> they're already in IPython.genutils as well:
>>
>> http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/genutils.py#L165
>>
>> Cheers,
>> f

Have you looked at timeit?

25.9 timeit -- Measure execution time of small code snippets

New in version 2.3.

This module provides a simple way to time small bits of Python code. It has both command line as well as callable interfaces. It avoids a number of common traps for measuring execution times. See also Tim Peters' introduction to the ``Algorithms'' chapter in the Python Cookbook, published by O'Reilly.

The module defines the following public class:

class Timer([stmt='pass' [, setup='pass' [, timer=<timer function>]]])
    Class for timing execution speed of small code snippets.

Colin W.
Novakovic)
Date: Mon, 14 May 2007 12:19:23 +1000
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To:
References: <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com>
Message-ID: <59d13e7d0705131919r5214cff4m419b6d6bd254b3b6@mail.gmail.com>

> Are you trying some sort of principal components analysis?

PCA is indeed one part of the research I'm doing.

Dave

From charlesr.harris at gmail.com Sun May 13 22:40:26 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 13 May 2007 20:40:26 -0600
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705131919r5214cff4m419b6d6bd254b3b6@mail.gmail.com>
References: <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com> <59d13e7d0705131919r5214cff4m419b6d6bd254b3b6@mail.gmail.com>
Message-ID:

On 5/13/07, Dave P. Novakovic wrote:
> > Are you trying some sort of principal components analysis?
>
> PCA is indeed one part of the research I'm doing.

I had the impression you were trying to build a linear space in which to embed a model, like atmospheric folk do when they try to invert spectra to obtain thermal profiles. Model-based compression would be another aspect of this. I wonder if there aren't some algorithms out there for this sort of thing.

Chuck

From davidnovakovic at gmail.com Sun May 13 23:35:54 2007
From: davidnovakovic at gmail.com (Dave P. Novakovic)
Date: Mon, 14 May 2007 13:35:54 +1000
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To:
References: <59d13e7d0705131919r5214cff4m419b6d6bd254b3b6@mail.gmail.com>
Message-ID: <59d13e7d0705132035j280074b5ped050ee46bec6a16@mail.gmail.com>

There are definitely elements of spectral graph theory in my research too. I'll summarise:

We are interested in seeing what each eigenvector from the svd can represent in a semantic space. In addition to this we'll be testing it against some algorithms like concept indexing (which uses a bipartitional k-means-ish method for dimensionality reduction), and also against Orthogonal Locality Preserving Indexing, which uses the Laplacian of a similarity matrix to calculate projections of a document (or term) into a manifold.

These methods have been implemented and tested for document classification; I'm interested in seeing their applicability to modelling semantics with a system known as Hyperspace Analogue to Language (HAL).

I was hoping to do svd on my HAL space built out of the Reuters corpus, but that was way too big. Instead I'm trying the traces idea I mentioned before (i.e. contextually grepping a keyword out of the docs to build a space around it).

Cheers

Dave

On 5/14/07, Charles R Harris wrote:
> On 5/13/07, Dave P. Novakovic wrote:
> > > Are you trying some sort of principal components analysis?
> >
> > PCA is indeed one part of the research I'm doing.
>
> I had the impression you were trying to build a linear space in which to
> embed a model, like atmospheric folk do when they try to invert spectra to
> obtain thermal profiles. Model-based compression would be another aspect of
> this.
> I wonder if there aren't some algorithms out there for this sort of thing.
>
> Chuck

From giorgio.luciano at chimica.unige.it Mon May 14 03:14:53 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Mon, 14 May 2007 09:14:53 +0200
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705132035j280074b5ped050ee46bec6a16@mail.gmail.com>
References: <59d13e7d0705132035j280074b5ped050ee46bec6a16@mail.gmail.com>
Message-ID: <46480C6D.8040808@chimica.unige.it>

If you are using it for making a PCA, why don't you try the NIPALS algorithm? (Probably a silly question, just wanted to give help :)

Giorgio

From zpincus at stanford.edu Sun May 13 18:14:32 2007
From: zpincus at stanford.edu (Zachary Pincus)
Date: Sun, 13 May 2007 15:14:32 -0700
Subject: [Numpy-discussion] argsort and take along arbitrary axes
Message-ID: <75648071-2D8C-4725-943B-E228B83041A6@stanford.edu>

Hello all,

I've got a few questions that came up as I tried to calculate various statistics about an image time-series. For example, I have an array of shape (t,x,y) representing t frames of a time-lapse of resolution (x,y).

Now, say I want to both argsort and sort this time-series, pixel-wise. (For example.)

In 1-d it's easy:

indices = a.argsort()
sorted = a[indices]

I would have thought that doing this on my 3-d array would work similarly:

indices = a.argsort(axis=0)
sorted = a.take(indices, axis=0)

Unfortunately, this gives a ValueError of "dimensions too large." Now, I know that 'a.sort(axis=0)' works fine for the given example, but I'm curious about how to do this sort of indexing operation in the general case.

Thanks for any insight,

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

From a.u.r.e.l.i.a.n at gmx.net Mon May 14 04:30:01 2007
From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert)
Date: Mon, 14 May 2007 10:30:01 +0200
Subject: Re: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <4647490B.3080701@ukr.net>
References: <464393E0.3030900@ee.byu.edu> <20070513155324.GA12060@mentat.za.net> <4647490B.3080701@ukr.net>
Message-ID: <200705141030.01996.a.u.r.e.l.i.a.n@gmx.net>

Hi,

in error logs like yours, always look for the first line which says "error". If it is, like in your case, something like

> /usr/lib/gcc/x86_64-linux-gnu/4.1.2/include/limits.h:122:61: error:
> limits.h: No such file or directory

you are missing some dependencies for the build. Although I am a bit puzzled that the standard C headers seem to be missing. Try the following:

sudo apt-get install build-essential

This gets all the standard packages for compiling software (compiler, automake, autoconf, etc.). Then you need the dependencies for numpy. Afaik this is just the python-dev package. The most convenient way is

apt-get build-dep python-numpy

If you compile software which has no corresponding Ubuntu package, you have to find out about the deps yourself. Usually the README and/or INSTALL file contain this information. You always need the package with -dev at the end.
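So the whole dance, roughly, is something like this (package names from memory, so double-check them against your release):

sudo apt-get install build-essential python-dev
sudo apt-get build-dep python-numpy
cd numpy-1.0.2
sudo python setup.py install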
Having installed all this, setup should run fine.

HTH, Johannes
<-- who uses SVN numpy&scipy under Kubuntu 7.04

From bernhard.voigt at gmail.com Mon May 14 07:13:20 2007
From: bernhard.voigt at gmail.com (bernhard.voigt at gmail.com)
Date: Mon, 14 May 2007 11:13:20 -0000
Subject: Re: [Numpy-discussion] best way of counting time and cputime?
In-Reply-To: <46461687.1070603@ukr.net>
References: <4644ECC2.5060909@ukr.net> <1178963513.131762.51580@l77g2000hsb.googlegroups.com> <46461687.1070603@ukr.net>
Message-ID: <1179141200.120297.325290@u30g2000hsc.googlegroups.com>

> Is the genutils module not included to standard CPython edition?

It's not. It's a sub-module of IPython. It's based on the resource module, though, and that comes with Python on Linux. Just define the function Fernando posted:

> def clock():
>     """clock() -> floating point number
>
>     Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
>     the process. This is done via a call to resource.getrusage, so it
>     avoids the wraparound problems in time.clock()."""
>
>     u,s = resource.getrusage(resource.RUSAGE_SELF)[:2]
>     return u+s

Take a look at the module docs of resource.

Bernhard

From charlesr.harris at gmail.com Mon May 14 10:48:14 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 14 May 2007 08:48:14 -0600
Subject: [Numpy-discussion] Documentation
Message-ID:

Hi All,

I've been trying out epydoc to see how things look. Apart from all the error messages, it does run. However, I didn't like the appearance of the documentation structured as suggested in numpy/doc, either in the terminal or in the generated html. In particular, I didn't like the consolidated lists and the interpreted variable names. I can see where these might be useful down the road, but for now I stuck to definition lists with italicized headings and plain old variable names.

Another problem we might want to address is the doctest blocks. Numpy inserts blank lines when it is printing out multidimensional arrays, and because blank lines indicate the end of the block, that screws up the formatting. It would also be nice to make the SeeAlso entries links to relevant functions.

Anyway, I've attached the html file generated from fromnumeric.py for your flamage and suggestions. The routines I restructured are sort, argsort, searchsorted, diagonal, std, var, and mean.

Chuck

-------------- next part --------------
A non-text attachment was scrubbed...
Name: docs.zip
Type: application/zip
Size: 10287 bytes
Desc: not available
URL:

From charlesr.harris at gmail.com Mon May 14 12:02:42 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 14 May 2007 10:02:42 -0600
Subject: Re: [Numpy-discussion] argsort and take along arbitrary axes
In-Reply-To: <75648071-2D8C-4725-943B-E228B83041A6@stanford.edu>
References: <75648071-2D8C-4725-943B-E228B83041A6@stanford.edu>
Message-ID:

On 5/13/07, Zachary Pincus wrote:
> Hello all,
>
> I've got a few questions that came up as I tried to calculate various
> statistics about an image time-series. For example, I have an array
> of shape (t,x,y) representing t frames of a time-lapse of resolution
> (x,y).
>
> Now, say I want to both argsort and sort this time-series, pixel-wise.
> (For example.)
>
> In 1-d it's easy:
> indices = a.argsort()
> sorted = a[indices]
>
> I would have thought that doing this on my 3-d array would work
> similarly:
> indices = a.argsort(axis=0)
> sorted = a.take(indices, axis=0)
>
> Unfortunately, this gives a ValueError of "dimensions too large."
> Now, I know that 'a.sort(axis=0)' works fine for the given example,
> but I'm curious about how to do this sort of indexing operation in
> the general case.

Unfortunately, argsort doesn't work transparently with take or fancy indexing for multidimensional arrays. I am thinking of adding a function argtake for this, and also for the results returned by argmax and argmin, but at the moment you have to fill in the values of the other indices and use fancy indexing.
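For your (t,x,y) example that would look something like this (rough sketch, untested):

import numpy
a = numpy.random.rand(5, 4, 3)            # (t, x, y)
ind = a.argsort(axis=0)                   # indices along axis 0
j = numpy.arange(4).reshape(1, 4, 1)      # index arrays for the other
k = numpy.arange(3).reshape(1, 1, 3)      # two axes, shaped to broadcast
s = a[ind, j, k]                          # s[i,j,k] == a[ind[i,j,k], j, k]
assert (s == numpy.sort(a, axis=0)).all()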
For now, it is probably simpler, prettier, and faster to just sort the array.

Chuck

From zpincus at stanford.edu Mon May 14 12:52:20 2007
From: zpincus at stanford.edu (Zachary Pincus)
Date: Mon, 14 May 2007 09:52:20 -0700
Subject: Re: [Numpy-discussion] argsort and take along arbitrary axes
In-Reply-To: References: <75648071-2D8C-4725-943B-E228B83041A6@stanford.edu>
Message-ID: <31EC3A8C-0981-4A8F-BA58-28C9B4292260@stanford.edu>

> > [original question requoted -- snipped]
>
> Unfortunately, argsort doesn't work transparently with take or
> fancy indexing for multidimensional arrays. I am thinking of adding
> a function argtake for this, and also for the results returned by
> argmax and argmin, but at the moment you have to fill in the
> values of the other indices and use fancy indexing. For now, it is
> probably simpler, prettier, and faster to just sort the array.

Thanks Charles. Unfortunately, the argsort/sort business was, as I mentioned, just an example of the kind of 'take' operation that I am trying to figure out how to do. There are other operations that will have similarly-formatted 'indices' arrays (as above) that aren't generated from argsort...

As such, how do I "fill in the values of the other indices and use fancy indexing"? Even after reading the numpy book about that, and reading the docstring for numpy.take, I'm still vague on this. Would I use numpy.indices to get a list of index arrays, and then swap in (at the correct position in this list) the result of argsort (or the other operations), and use that for fancy indexing? Is there an easier/faster way?

Thanks again,
Zach

From zpincus at stanford.edu Mon May 14 13:27:56 2007
From: zpincus at stanford.edu (Zachary Pincus)
Date: Mon, 14 May 2007 10:27:56 -0700
Subject: Re: [Numpy-discussion] very large matrices.
In-Reply-To: <59d13e7d0705132035j280074b5ped050ee46bec6a16@mail.gmail.com>
References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com> <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com> <59d13e7d0705131919r5214cff4m419b6d6bd254b3b6@mail.gmail.com> <59d13e7d0705132035j280074b5ped050ee46bec6a16@mail.gmail.com>
Message-ID: <3255A587-EFAE-4CCA-A9DE-85CACE44407A@stanford.edu>

Hello Dave,

I don't know if this will be useful to your research, but it may be worth pointing out in general. As you know, PCA (and perhaps some other spectral algorithms?) uses the eigenvalues of matrices that can be factored as A'A (where ' means transpose). For example, in the PCA case, if A is a centered data matrix (i.e. the mean value of each data point has been subtracted off), and if the data elements are row-vectors, then A'A is the covariance matrix. PCA then examines the (nonzero) eigenvectors of this covariance matrix A'A.

Now, it's possible to determine the eigenvalues/eigenvectors for A'A from the matrix AA', which in many useful cases is much smaller than A'A. For example, imagine that you have 100 data points, each in 10,000 dimensions. (This is common in imaging applications.) A'A is 10,000x10,000, but AA' is only 100x100. We can get the eigenvalues/vectors of A'A from those of AA', as described below. (This works in part because in such cases, the larger matrix is rank-deficient, so there will only be e.g. 100 nonzero eigenvalues anyway.)

If you're lucky enough to have this kind of structure in your problem, feel free to use the following code, which exploits that (and explains in a bit more detail how it works).

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

def _symm_eig(a):
    """Return the eigenvalues and eigenvectors of the symmetric matrix a'*a.

    If a has more columns than rows, then that matrix will be rank-deficient,
    and the non-zero eigenvalues and eigenvectors can be more easily extracted
    from the matrix a*a', from the properties of the SVD:
      if a of shape (m,n) has SVD u*s*v', then:
        a'*a = v*s'*s*v'
        a*a' = u*s*s'*u'
    That is, v contains the eigenvectors of a'*a, with s'*s the eigenvalues,
    according to the eigen-decomposition theorem.

    Now, let s_hat, an array of shape (n,m), be such that s * s_hat = I(m,m)
    and s_hat * s = I(n,n). With that, we can solve for u or v in terms of
    the other:
      v = a'*u*s_hat'
      u = a*v*s_hat
    """
    m, n = a.shape
    if m >= n:
        # just return the eigenvalues and eigenvectors of a'a
        vals, vecs = _eigh(numpy.dot(a.transpose(), a))
        vals = numpy.where(vals < 0, 0, vals)
        return vals, vecs
    else:
        # figure out the eigenvalues and vectors based on aa', which is smaller
        sst_diag, u = _eigh(numpy.dot(a, a.transpose()))
        # in case due to numerical instabilities we have sst_diag < 0 anywhere,
        # peg them to zero
        sst_diag = numpy.where(sst_diag < 0, 0, sst_diag)
        # now get the inverse square root of the diagonal, which will form the
        # main diagonal of s_hat
        err = numpy.seterr(divide='ignore', invalid='ignore')
        s_hat_diag = 1/numpy.sqrt(sst_diag)
        numpy.seterr(**err)
        s_hat_diag = numpy.where(numpy.isfinite(s_hat_diag), s_hat_diag, 0)
        # s_hat_diag is a vector of length m, a'u is (n,m), so we can just use
        # numpy's broadcasting instead of matrix multiplication, and only create
        # the upper mxm block of a'u, since that's all we'll use anyway...
        v = numpy.dot(a.transpose(), u[:,:m]) * s_hat_diag
        return sst_diag, v

def _eigh(m):
    """Return the eigenvalues and corresponding eigenvectors of the hermitian
    array m, ordered by eigenvalue in decreasing order. Note that
    numpy.linalg.eigh makes no order guarantees."""
    values, vectors = numpy.linalg.eigh(m)
    order = numpy.flipud(values.argsort())
    return values[order], vectors[:,order]

On May 13, 2007, at 8:35 PM, Dave P. Novakovic wrote:
> [description of the spectral-graph / HAL experiments requoted -- snipped; see above]

From nvf at uwm.edu Mon May 14 13:38:58 2007
From: nvf at uwm.edu (Nick Fotopoulos)
Date: Mon, 14 May 2007 12:38:58 -0500
Subject: [Numpy-discussion] .max() and zero length arrays
Message-ID:

Dear all,

I find myself frequently wanting to take the max of an array that might have zero length. If it is zero length, it throws an exception, when I would like to gracefully substitute my own value. For example, one solution with lists is to do max(possibly_empty_list + [minimum_value]), but it seems clunky to do a similar trick with arrays and concatenate, or to use a try: except: block. What do other people do? If there's no good idiom, would it be possible to add kwargs like default_value and/or minimum_value?

Thanks,
Nick

From cookedm at physics.mcmaster.ca Mon May 14 14:21:50 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Mon, 14 May 2007 14:21:50 -0400
Subject: Re: [Numpy-discussion] .max() and zero length arrays
In-Reply-To: References:
Message-ID: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>

On Mon, May 14, 2007 at 12:38:58PM -0500, Nick Fotopoulos wrote:
> Dear all,
>
> I find myself frequently wanting to take the max of an array that
> might have zero length.
> If it is zero length, it throws an exception, when I would like to
> gracefully substitute my own value. [rest of the question snipped]

What about if maximum returned negative infinity (for floats) or the minimum int? That would make maximum act like sum and product, where the identity for those functions is returned:

In [2]: sum([])
Out[2]: 0.0

In [3]: product([])
Out[3]: 1.0
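In the meantime, a small wrapper is enough to get the substitute-a-default behaviour (a sketch, untested):

import numpy

def safe_max(a, default):
    # max() that returns `default` for zero-size input instead of raising
    a = numpy.asarray(a)
    if a.size == 0:
        return default
    return a.max()

print safe_max([3, 1, 2], 0)   # 3
print safe_max([], 0)          # 0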
From Chris.Barker at noaa.gov Mon May 14 14:37:05 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon, 14 May 2007 11:37:05 -0700
Subject: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?
In-Reply-To: <20070513140254.GH7315@mentat.za.net>
References: <4646F847.6040701@ukr.net> <200705130746.47793.dd55@cornell.edu>
	<20070513140254.GH7315@mentat.za.net>
Message-ID: <4648AC51.9090809@noaa.gov>

or:

a = array([1,2,3]).reshape((-1,1))

Darn, I guess there is more than one obvious way to do it!

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception
Chris.Barker at noaa.gov

From subscriber100 at rjs.org Mon May 14 14:44:11 2007
From: subscriber100 at rjs.org (Ray S)
Date: Mon, 14 May 2007 11:44:11 -0700
Subject: [Numpy-discussion] numpy array sharing between processes? (and ctypes)
Message-ID: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com>

While investigating ctypes and numpy for sharing, I saw that the example
on
http://www.scipy.org/Cookbook/Ctypes#head-7def99d882618b52956c6334e08e085e297cb0c6
does not quite work. However, with numpy.version.version=='1.0b1',
ActivePython 2.4.3 Build 12:

import numpy as N
from ctypes import *

x = N.zeros((3, 3), dtype=N.float64)
xdataptr = N.intp(x.__array_interface__['data'])[0]
y = (c_double * x.size).from_address(xdataptr)
y[0] = 123.
y[4] = 456.
y[8] = 789
print N.diag(x)

Works for me... I can then do:

>>> import numpy.core.multiarray as MA
>>> xBuf = MA.getbuffer(x)
>>> z = MA.frombuffer(xBuf).reshape((3,3))
>>> z
array([[ 123.,    0.,    0.],
       [   0.,  456.,    0.],
       [   0.,    0.,  789.]])
>>> z[0,1] = 99
>>> z
array([[ 123.,   99.,    0.],
       [   0.,  456.,    0.],
       [   0.,    0.,  789.]])
>>> x
array([[ 123.,   99.,    0.],
       [   0.,  456.,    0.],
       [   0.,    0.,  789.]])
>>> y[1]
99.0

From oliphant.travis at ieee.org Mon May 14 15:01:13 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Mon, 14 May 2007 13:01:13 -0600
Subject: [Numpy-discussion] numpy array sharing between processes? (and ctypes)
In-Reply-To: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com>
References: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com>
Message-ID: <4648B1F9.5030005@ieee.org>

Ray S wrote:
> print N.diag(x)
>
> Works for me...
>
> I can then do:
> >>> import numpy.core.multiarray as MA
> >>> xBuf = MA.getbuffer(x)
> >>> z = MA.frombuffer(xBuf).reshape((3,3))
>

I see this kind of importing used more often than it should be. It is
dangerous to import directly from numpy.core and numpy.lib, and even more
dangerous to import directly from files in those sub-packages. The
organization of the files may change suddenly in the future. The only
guarantee is that the numpy name-space will not change suddenly. These
namespaces are all incorporated in numpy. Thus,

numpy.getbuffer
numpy.frombuffer

are the recommended ways to use this functionality.

-Travis
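Spelled out with the supported top-level names Travis recommends, the sharing demonstration becomes the following. A sketch for numpy of this vintage; frombuffer returns a view onto the same memory, so writes propagate both ways:

import numpy

x = numpy.zeros((3, 3), dtype=numpy.float64)
buf = numpy.getbuffer(x)
z = numpy.frombuffer(buf, dtype=numpy.float64).reshape((3, 3))
z[0, 1] = 99.0
assert x[0, 1] == 99.0   # z and x share the same memory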
From robert.kern at gmail.com Mon May 14 14:53:30 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 14 May 2007 13:53:30 -0500
Subject: [Numpy-discussion] .max() and zero length arrays
In-Reply-To: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>
References: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>
Message-ID: <4648B02A.6000209@gmail.com>

David M. Cooke wrote:
> On Mon, May 14, 2007 at 12:38:58PM -0500, Nick Fotopoulos wrote:
>> Dear all,
>>
>> I find myself frequently wanting to take the max of an array that
>> might have zero length. If it is zero length, it throws an exception,
>> when I would like to gracefully substitute my own value. For example,
>> one solution with lists is to do max(possibly_empty_list +
>> [minimum_value]), but it seems clunky to do a similar trick with
>> arrays and concatenate or to use a try: except: block. What do other
>> people do? If there's no good idiom, would it be possible to add
>> kwargs like default_value and/or minimum_value?
>
> What about if maximum returned negative infinity (for floats)
> or the minimum int? That would make maximum act like sum and product,
> where the identity for those functions is returned:

If possible, I would prefer a way to pass a value to use and raise the
error if no such value is passed rather than hardcode an identity value
for min() and max().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From a.schmolck at gmx.net Mon May 14 15:08:40 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 14 May 2007 20:08:40 +0100
Subject: [Numpy-discussion] .max() and zero length arrays
In-Reply-To: <4648B02A.6000209@gmail.com> (Robert Kern's message of "Mon\,
	14 May 2007 13\:53\:30 -0500")
References: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>
	<4648B02A.6000209@gmail.com>
Message-ID:

Robert Kern writes:

> If possible, I would prefer a way to pass a value to use and raise the
> error if no such value is passed rather than hardcode an identity value
> for min() and max().

What's wrong with inf? I'm not sure integer reductions should have
max/min-ints as identity values (because this could lead to nasty bugs
when implicit dtype promotions are involved), but I can't see any
problems resulting from +/-inf.

'as

From stefan at sun.ac.za Mon May 14 15:11:54 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 14 May 2007 21:11:54 +0200
Subject: [Numpy-discussion] numpy array sharing between processes? (and ctypes)
In-Reply-To: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com>
References: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com>
Message-ID: <20070514191154.GE12060@mentat.za.net>

On Mon, May 14, 2007 at 11:44:11AM -0700, Ray S wrote:
> While investigating ctypes and numpy for sharing, I saw that the
> example on
> http://www.scipy.org/Cookbook/Ctypes#head-7def99d882618b52956c6334e08e085e297cb0c6
> does not quite work. However, with numpy.version.version=='1.0b1',
> ActivePython 2.4.3 Build 12:

That page should probably be replaced by

http://www.scipy.org/Cookbook/Ctypes2

Cheers
Stéfan

From nvf at uwm.edu Mon May 14 15:18:50 2007
From: nvf at uwm.edu (Nick Fotopoulos)
Date: Mon, 14 May 2007 14:18:50 -0500
Subject: [Numpy-discussion] Numpy-discussion Digest, Vol 8, Issue 28
In-Reply-To:
References:
Message-ID:

On 5/14/07, numpy-discussion-request at scipy.org wrote:
> Message: 3
> Date: Mon, 14 May 2007 14:21:50 -0400
> From: "David M. Cooke"
Cooke" > Subject: Re: [Numpy-discussion] .max() and zero length arrays > To: Discussion of Numerical Python > Message-ID: <20070514182150.GA7943 at arbutus.physics.mcmaster.ca> > Content-Type: text/plain; charset=us-ascii > > On Mon, May 14, 2007 at 12:38:58PM -0500, Nick Fotopoulos wrote: > > Dear all, > > > > I find myself frequently wanting to take the max of an array that > > might have zero length. If it is zero length, it throws an exception, > > when I would like to gracefully substitute my own value. For example, > > one solution with lists is to do max(possibly_empty_list + > > [minimum_value]), but it seems clunky to do a similar trick with > > arrays and concatenate or to use a try: except: block. What do other > > people do? If there's no good idiom, would it be possible to add > > kwargs like default_value and/or minimum_value? > > What about if maximum returned negative infinity (for floats) > or the minimum int? That would make maximum act like sum and product, > where the identity for those functions is returned: > > In [2]: sum([]) > Out[2]: 0.0 > In [3]: product([]) > Out[3]: 1.0 I have mixed feelings. On the one hand, my new idiom would be: max(possidly_zero_len_array.max(), min_val) , which makes me happy. On the other hand: possibly_zero_len_array.max() in possibly_zero_len_array is no longer guaranteed to evaluate True, which me cringe a little, but might not bother others. That would, however, be an excellent behavior for a new supremum method or for a new boolean supremum kwarg in max. Further thoughts, David? Others? Thanks, Nick From robert.kern at gmail.com Mon May 14 15:18:52 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 May 2007 14:18:52 -0500 Subject: [Numpy-discussion] .max() and zero length arrays In-Reply-To: References: <20070514182150.GA7943@arbutus.physics.mcmaster.ca> <4648B02A.6000209@gmail.com> Message-ID: <4648B61C.6070702@gmail.com> Alexander Schmolck wrote: > Robert Kern writes: > >> If possible, I would prefer a way to pass a value to use and raise the error if >> no such value is passed rather than hardcode an identity value for min() and max(). > > What's wrong with inf? I'm not sure integer reductions should have > max/min-ints as identity values (because this could lead to nasty bugs when > implicit dtype promotions are involved), but I can't see any problems > resulting from +/-inf. 0 and 1 exist in both integer and floating point versions. +/-inf don't. A hardcoded identity value should be consistent across all uses, not changing depending on the type (and we shouldn't use +/-inf for integer arrays, either). There are also times when I have constraints on the domain of the inputs. For example, I might be dealing with arrays that should only have positive numbers. If I call max() on the arrays, I might want the result for the empty one to be 0, not -inf. Besides, being able to specify what value to use to start the reduction is a generally useful feature regardless. There have been several times in the past week, even, when I wished we had that capability. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
From a.schmolck at gmx.net Mon May 14 15:58:04 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 14 May 2007 20:58:04 +0100
Subject: [Numpy-discussion] .max() and zero length arrays
In-Reply-To: <4648B61C.6070702@gmail.com> (Robert Kern's message of "Mon\,
	14 May 2007 14\:18\:52 -0500")
References: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>
	<4648B02A.6000209@gmail.com> <4648B61C.6070702@gmail.com>
Message-ID:

Robert Kern writes:

> Alexander Schmolck wrote:
>> Robert Kern writes:
>>
>>> If possible, I would prefer a way to pass a value to use and raise
>>> the error if no such value is passed rather than hardcode an identity
>>> value for min() and max().
>>
>> What's wrong with inf? I'm not sure integer reductions should have
>> max/min-ints as identity values (because this could lead to nasty bugs
>> when implicit dtype promotions are involved), but I can't see any
>> problems resulting from +/-inf.
>
> 0 and 1 exist in both integer and floating point versions. +/-inf
> don't. A hardcoded identity value should be consistent across all uses,
> not changing depending on the type.

I don't see why the integer case can't just throw an error, because I
don't see how this could cause much trouble. OTOH I have no difficulty
thinking of current 'special' integer behavior that I find quite
problematic, e.g.

seterr('ignore'); divide(1,0) == 0  # no error, no warning[1]

> (and we shouldn't use +/-inf for integer arrays, either)

Agreed.

> There are also times when I have constraints on the domain of the
> inputs. For example, I might be dealing with arrays that should only
> have positive numbers. If I call max() on the arrays, I might want the
> result for the empty one to be 0, not -inf.

I don't find this argument convincing -- add.reduce won't care that your
numbers are all positive and add.multiply won't care if they're all
negative.

> Besides, being able to specify what value to use to start the reduction
> is a generally useful feature regardless.

Yes, I agree that it should be possible to pass a seed value, but that is
to my mind largely an orthogonal issue.

Whilst we're discussing reduction wish-lists, I think it would also be
useful to have a flag to specify that a reduction should be
rank-preserving (i.e. use unit dimensions over the reduced axis) because
that works synergistically together with broadcasting (e.g.
X - mean(X, axis=n, samerank=1) ).

> There have been several times in the past week, even, when I wished we
> had that capability.

'as

Footnotes:
[1] Maybe the result of overhomogenizing? IMO the logic that applies to
floating point numerics just doesn't apply to integer numerics --
dividing by a number you know to be exactly 0 is always an error, period.
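The rank-preserving idea can be spelled out today with an explicit reshape; the samerank flag itself is hypothetical. A small sketch of what such a flag would automate:

import numpy

X = numpy.random.rand(4, 3)
# Keep a unit dimension over the reduced axis so the result
# broadcasts straight back against X.
m = X.mean(axis=1).reshape((X.shape[0], 1))
centered = X - m
print centered.mean(axis=1)   # approximately zero in each row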
From robert.kern at gmail.com Mon May 14 16:44:55 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 14 May 2007 15:44:55 -0500
Subject: [Numpy-discussion] .max() and zero length arrays
In-Reply-To:
References: <20070514182150.GA7943@arbutus.physics.mcmaster.ca>
	<4648B02A.6000209@gmail.com> <4648B61C.6070702@gmail.com>
Message-ID: <4648CA47.7040308@gmail.com>

Alexander Schmolck wrote:
> Robert Kern writes:
>
>> Alexander Schmolck wrote:
>>> Robert Kern writes:
>>>
>>>> If possible, I would prefer a way to pass a value to use and raise
>>>> the error if no such value is passed rather than hardcode an
>>>> identity value for min() and max().
>>>
>>> What's wrong with inf? I'm not sure integer reductions should have
>>> max/min-ints as identity values (because this could lead to nasty
>>> bugs when implicit dtype promotions are involved), but I can't see
>>> any problems resulting from +/-inf.
>>
>> 0 and 1 exist in both integer and floating point versions. +/-inf
>> don't. A hardcoded identity value should be consistent across all
>> uses, not changing depending on the type.
>
> I don't see why the integer case can't just throw an error, because I
> don't see how this could cause much trouble.

The floating point case should also throw an error if the integer case
does.

> OTOH I have no difficulty thinking of
> current 'special' integer behavior that I find quite problematic, e.g.
>
> seterr('ignore'); divide(1,0) == 0  # no error, no warning[1]
>
>> (and we shouldn't use +/-inf for integer arrays, either)
>
> Agreed.
>
>> There are also times when I have constraints on the domain of the
>> inputs. For example, I might be dealing with arrays that should only
>> have positive numbers. If I call max() on the arrays, I might want
>> the result for the empty one to be 0, not -inf.
>
> I don't find this argument convincing -- add.reduce won't care that
> your numbers are all positive and add.multiply won't care if they're
> all negative.

Correct, they won't. It also doesn't matter that they won't. It does for
min() and max(), though.

>> Besides, being able to specify what value to use to start the
>> reduction is a generally useful feature regardless.
>
> Yes, I agree that it should be possible to pass a seed value, but that
> is to my mind largely an orthogonal issue.

It is a largely orthogonal issue; however, it also happens to solve the
issue at hand without introducing other weirdness.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From val at vtek.com Mon May 14 21:16:06 2007
From: val at vtek.com (val)
Date: Mon, 14 May 2007 21:16:06 -0400
Subject: [Numpy-discussion] very large matrices.
References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com>
	<59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com>
	<59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com>
Message-ID: <01e001c7968e$9d2a8380$6400a8c0@D380>

Dave,
    I may be totally wrong, but I have an intuitive feeling that your
problem may be reformulated with focus on separation of a "basic"
physical (vs. mathematical) 'core' and the terms which depend on a
reasonable "small parameter". In other words, my point is to build a
simplified model problem from the physically-important components which
would generate a *core* problem of 100-500-component vectors (that is, to
build "physical" principal components first, 'manually', based on
common/physical sense, guesses, heuristics). To me, vectors and matrices
of 5-10K components and more are a sign of suboptimal problem formulation
and of a need to reformulate it in "more physical" terms - *if* that
means something significant in your concrete situation. For instance,
noise reduction can be done at the problem-formulation level first, by
introducing "regularizing" terms such as artificial friction, viscosity,
coupling, etc., which tend to create a dramatic regularization effect,
stabilizing the problem and thus making it easy to interpret in physical
(vs. purely math) terms. The Monte-Carlo technique is also an option in
this type of problem.
Of course, more physical details would be helpful in better understanding
your problem.
good luck,
val

----- Original Message -----
From: "Dave P. Novakovic"
To: "Discussion of Numerical Python"
Sent: Sunday, May 13, 2007 2:46 AM
Subject: Re: [Numpy-discussion] very large matrices.

> They are very large numbers indeed. Thanks for giving me a wake-up
> call. Currently my data is represented as vectors in a vectorset, a
> typical sparse representation.
>
> I reduced the problem significantly by removing lots of noise. I'm
> basically recording traces of a term's occurrence throughout a corpus
> and doing an analysis of the eigenvectors.
>
> I reduced my matrix to 4863 x 4863 by filtering the original corpus.
> Now when I attempt svd, I'm finding a memory error in the svd routine.
> Is there a hard upper limit on the size of a matrix for these
> calculations?
>
> File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line
> 575, in svd
>     vt = zeros((n, nvt), t)
> MemoryError
>
> Cheers
>
> Dave
>
>
> On 5/13/07, Anne Archibald wrote:
>> On 12/05/07, Dave P. Novakovic wrote:
>>
>> > core 2 duo with 4gb RAM.
>> >
>> > I've heard about iterative svd functions. I actually need a complete
>> > svd, with all eigenvalues (not LSI). I'm actually more interested in
>> > the individual eigenvectors.
>> >
>> > As an example, a single row could probably have about 3000 non zero
>> > elements.
>>
>> I think you need to think hard about whether your problem can be done
>> in another way.
>>
>> First of all, the singular values (as returned from the svd) are not
>> eigenvalues - eigenvalue decomposition is a much harder problem,
>> numerically.
>>
>> Second, your full non-sparse matrix will be 8*75000*75000 bytes, or
>> about 42 gibibytes. Put another way, the representation of your data
>> alone is ten times the size of the RAM on the machine you're using.
>>
>> Third, your matrix has 225 000 000 nonzero entries; assuming a perfect
>> sparse representation with no extra bytes (at least two bytes per
>> entry is typical, usually more), that's 1.7 GiB.
>>
>> Recall that basically any matrix operation is at least O(N^3), so you
>> can expect order 10^14 floating-point operations to be required. This
>> is actually the *least* significant constraint; pushing stuff into and
>> out of disk caches will be taking most of your time.
>>
>> Even if you can represent your matrix sparsely (using only a couple of
>> gibibytes), you've said you want the full set of eigenvectors, which
>> is not likely to be a sparse matrix - so your result is back up to 42
>> GiB. And you should expect an eigenvalue algorithm, if it even
>> survives massive roundoff problems, to require something like that
>> much working space; thus your problem probably has a working size of
>> something like 84 GiB.
>>
>> SVD is a little easier, if that's what you want, but the full solution
>> is twice as large, though if you discard entries corresponding to
>> small values it might be quite reasonable. You'll still need some
>> fairly specialized code, though. Which form are you looking for?
>>
>> Solving your problem in a reasonable amount of time, as described and
>> on the hardware you specify, is going to require some very specialized
>> algorithms; you could try looking for an out-of-core eigenvalue
>> package, but I'd first look to see if there's any way you can simplify
>> your problem - getting just one eigenvector, maybe.
>>
>> Anne
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion at scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
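If only the dominant singular vector is actually needed, Anne's closing suggestion can be realized without ever forming a full decomposition. A minimal dense power-iteration sketch (illustrative only: no convergence test, and a production version would stream a sparse matrix from disk rather than hold it in memory):

import numpy

def top_singular_triplet(a, iterations=100):
    # Power iteration on a^T a converges (generically) to the
    # dominant right singular vector of a.
    v = numpy.random.rand(a.shape[1])
    for i in range(iterations):
        w = numpy.dot(a.transpose(), numpy.dot(a, v))
        v = w / numpy.sqrt(numpy.dot(w, w))
    av = numpy.dot(a, v)
    s = numpy.sqrt(numpy.dot(av, av))   # dominant singular value
    u = av / s
    return u, s, v

u, s, v = top_singular_triplet(numpy.random.rand(50, 40))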
>> >> Anne >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From davidnovakovic at gmail.com Mon May 14 21:54:59 2007 From: davidnovakovic at gmail.com (Dave P. Novakovic) Date: Tue, 15 May 2007 11:54:59 +1000 Subject: [Numpy-discussion] very large matrices. In-Reply-To: <01e001c7968e$9d2a8380$6400a8c0@D380> References: <59d13e7d0705121722i35d69b72o6a7d42369bcfac2b@mail.gmail.com> <59d13e7d0705121758n1182c420xcc2478b8456b853c@mail.gmail.com> <59d13e7d0705122346i592c2b9dj14d216028a21cf2e@mail.gmail.com> <01e001c7968e$9d2a8380$6400a8c0@D380> Message-ID: <59d13e7d0705141854k5d9426b2uacf99e9ba4d85514@mail.gmail.com> Thank you all for your suggestions. I'm taking the time to look into all of them properly. Giorgio: A cursory glance looks promising there, thanks. Zachary: Brilliant, that looks great. val: What you are saying is correct, at the moment I'm taking 2 lines of context around each instance of the term in question, I might change that to 1, hopefully this will reduce the amount of dimensions. Dave On 5/15/07, val wrote: > Dave, > I'm may be totally wrong, but i have intuitive feeling that > your problem may be reformulated with focus on separation of a "basic" > physical (vs. mathematical) 'core' and the terms which depend on a > reasonable "small parameter". In other words, my point is > to build a simplified model problem with the physically-important > components which would generate 100-500 component vectors > *core* problem (that is, to build "physical" principal componments > first 'manually' based on common/physical sense, guesses, heuristics). > To me, the vectors and matrices of 5-10K components and more is > a sign of suboptimal problem formulation and a need of its reformulation > in "more physical" terms - *if* that means smth significant in your concrete > situation. For instance, the noise reduction can be done at problem > formulation level first by introducing "regularizing" terms such as > artificial friction, viscosity, coupling, etc which used to create > a dramatic regularization effect stabilizing the problem and thus > making it easy-to-interpret in physical (vs. purely math) terms. > The Monte-Carlo technique is also an option in this type of > problems. Of course, more physical details would be helpful in > better understanding your problem. > good luck, > val > > ----- Original Message ----- > From: "Dave P. Novakovic" > To: "Discussion of Numerical Python" > Sent: Sunday, May 13, 2007 2:46 AM > Subject: Re: [Numpy-discussion] very large matrices. > > > > They are very large numbers indeed. Thanks for giving me a wake up call. > > Currently my data is represented as vectors in a vectorset, a typical > > sparse representation. > > > > I reduced the problem significantly by removing lots of noise. I'm > > basically recording traces of a terms occurrence throughout a corpus > > and doing an analysis of the eigenvectors. > > > > I reduced my matrix to 4863 x 4863 by filtering the original corpus. > > Now when I attempt svd, I'm finding a memory error in the svd routine. > > Is there a hard upper limit of the size of a matrix for these > > calculations? 
> > > > File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line > > 575, in svd > > vt = zeros((n, nvt), t) > > MemoryError > > > > Cheers > > > > Dave > > > > > > On 5/13/07, Anne Archibald wrote: > >> On 12/05/07, Dave P. Novakovic wrote: > >> > >> > core 2 duo with 4gb RAM. > >> > > >> > I've heard about iterative svd functions. I actually need a complete > >> > svd, with all eigenvalues (not LSI). I'm actually more interested in > >> > the individual eigenvectors. > >> > > >> > As an example, a single row could probably have about 3000 non zero > >> > elements. > >> > >> I think you need to think hard about whether your problem can be done > >> in another way. > >> > >> First of all, the singular values (as returned from the svd) are not > >> eigenvalues - eigenvalue decomposition is a much harder problem, > >> numerically. > >> > >> Second, your full non-sparse matrix will be 8*75000*75000 bytes, or > >> about 42 gibibytes. Put another way, the representation of your data > >> alone is ten times the size of the RAM on the machine you're using. > >> > >> Third, your matrix has 225 000 000 nonzero entries; assuming a perfect > >> sparse representation with no extra bytes (at least two bytes per > >> entry is typical, usually more), that's 1.7 GiB. > >> > >> Recall that basically any matrix operation is at least O(N^3), so you > >> can expect order 10^14 floating-point operations to be required. This > >> is actually the *least* significant constraint; pushing stuff into and > >> out of disk caches will be taking most of your time. > >> > >> Even if you can represent your matrix sparsely (using only a couple of > >> gibibytes), you've said you want the full set of eigenvectors, which > >> is not likely to be a sparse matrix - so your result is back up to 42 > >> GiB. And you should expect an eigenvalue algorithm, if it even > >> survives massive roundoff problems, to require something like that > >> much working space; thus your problem probably has a working size of > >> something like 84 GiB. > >> > >> SVD is a little easier, if that's what you want, but the full solution > >> is twice as large, though if you discard entries corresponding to > >> small values it might be quite reasonable. You'll still need some > >> fairly specialized code, though. Which form are you looking for? > >> > >> Solving your problem in a reasonable amount of time, as described and > >> on the hardware you specify, is going to require some very specialized > >> algorithms; you could try looking for an out-of-core eigenvalue > >> package, but I'd first look to see if there's any way you can simplify > >> your problem - getting just one eigenvector, maybe. 
> >> > >> Anne > >> _______________________________________________ > >> Numpy-discussion mailing list > >> Numpy-discussion at scipy.org > >> http://projects.scipy.org/mailman/listinfo/numpy-discussion > >> > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From gruben at bigpond.net.au Tue May 15 07:42:10 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 15 May 2007 21:42:10 +1000 Subject: [Numpy-discussion] Documentation In-Reply-To: References: Message-ID: <46499C92.6070405@bigpond.net.au> Hi Charles, In you post, is your numpy/doc reference referring to the file HOWTO_DOCUMENT.txt? I notice that this is not the same as the one here which I think may be the preferred standard. If someone can confirm this, it should probably replace the HOWTO_DOCUMENT.txt content. IMO, the wiki version is more readable and also more comprehensive. Gary R. Charles R Harris wrote: > Hi All, > > I've been trying out epydoc to see how things look. Apart from all the > error messages, it does run. However, I didn't like the appearance of > the documentation structured as suggested in numpy/doc, either in the > terminal or in the generated html. In particular, I didn't like the > consolidated lists and the interpreted variable names. I can see where > these might be useful down the road, but for now I stuck to definition > lists with italicized headings and plain old variable names. > > Another problem we might want to address are the doctest blocks. Numpy > inserts blank lines when it is printing out multidimensional arrays and > because blank lines indicate the end of the block that screws up the > formatting. It would also be nice to make the SeeAlso entries links to > relevant functions. > > Anyway, I've attached the html file generated from fromnumeric.py for > your flamage and suggestions. The routines I restructured are sort, > argsort, searchsorted, diagonal, std, var, and mean. > > Chuck From fullung at gmail.com Tue May 15 09:59:59 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 15 May 2007 15:59:59 +0200 Subject: [Numpy-discussion] numpy array sharing between processes? (and ctypes) In-Reply-To: <20070514191154.GE12060@mentat.za.net> References: <6.2.3.4.2.20070514105840.05986ed0@blue-cove.com> <20070514191154.GE12060@mentat.za.net> Message-ID: <20070515135959.GA25595@dogbert.sdsl.sun.ac.za> Agreed. Could someone with wiki admin access please delete the Ctypes page and rename type Ctypes2 page to Ctypes? As far as I know, Ctypes2 is really what you want to look at (at least it was, last time I worked on it). Thanks. Cheers, Albert On Mon, 14 May 2007, Stefan van der Walt wrote: > On Mon, May 14, 2007 at 11:44:11AM -0700, Ray S wrote: > > While investigating ctypes and numpy for sharing, I saw that the > > example on > > http://www.scipy.org/Cookbook/Ctypes#head-7def99d882618b52956c6334e08e085e297cb0c6 > > does not quite work. 
> > However, with numpy.version.version=='1.0b1',
> > ActivePython 2.4.3 Build 12:
>
> That page should probably be replaced by
>
> http://www.scipy.org/Cookbook/Ctypes2

From charlesr.harris at gmail.com Tue May 15 10:40:40 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 08:40:40 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To: <46499C92.6070405@bigpond.net.au>
References: <46499C92.6070405@bigpond.net.au>
Message-ID:

On 5/15/07, Gary Ruben wrote:
>
> Hi Charles,
>
> In your post, is your numpy/doc reference referring to the file
> HOWTO_DOCUMENT.txt?
>
> I notice that this is not the same as the one here
> <http://scipy.org/DocstringStandard>
> which I think may be the preferred standard. If someone can confirm
> this, it should probably replace the HOWTO_DOCUMENT.txt content. IMO,
> the wiki version is more readable and also more comprehensive.

Yes, I started with the HOWTO_DOCUMENT.txt in numpy. Hmm, the scipy
version does look better, but I have some nits:

1) The standard text width for almost everything in software is 79, one
less than the old standard terminal width and the number of columns in an
IBM 5081 punch card. The proposed standard should adhere to this and
require the right margin to be set at 79 independently of everything
else. Requiring different right-hand margins in different sections is
*ridiculous*; no sensible coder would adhere to such a standard. Let the
software worry about it.

2) Lining up the paragraph indentations after a list item heading is a
job for nematodes. I suggest using the standard restructured text
definition lists. Example:

List Heading:

    label1
        explanation
    label2
        explanation

This will look decent when run through the current epydoc. You can put
dashes after the labels for type markup, or even colons, as long as the
explanation isn't preceded by a blank line.

3) I think the main headings look better when italicized, so surround
them with stars thusly: *Heading*

4) I introduced a new section, "Description", because when epydoc formats
the documentation it serves to separate the main description from the
short one-line summary. It can also serve as a tag if we write our own
preparser.

To summarize, the documentation standard should be the simplest thing
that looks good both in a terminal and when formatted into pdf or html by
epydoc. The scipy standard strikes me as far too convoluted and downright
ugly in a terminal.

Chuck
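To make nits 2) and 3) concrete, here is one possible docstring skeleton in that spirit. The function and field layout are illustrative only; this was one proposal among several, not an adopted standard:

import numpy

def clip(a, a_min, a_max):
    """Limit the values of an array.

    *Parameters*:

        a : ndarray
            Input array.
        a_min : scalar
            Lower bound.
        a_max : scalar
            Upper bound.

    *Returns*:

        clipped : ndarray
            A copy of a with values limited to the range
            [a_min, a_max].

    """
    return numpy.clip(a, a_min, a_max)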
From giorgio.luciano at chimica.unige.it Tue May 15 11:08:57 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Tue, 15 May 2007 17:08:57 +0200
Subject: [Numpy-discussion] call for chapters in a book about freeware/opensource in chemometrics
Message-ID: <4649CD09.7060309@chimica.unige.it>

Dear All,

Two colleagues and I are interested in writing a tutorial book about the
use of opensource/freeware software in chemometrics (mainly Python
oriented). I've contacted the editor at Blackwell Publishing, who told me
they could be interested in it and sent me the form for submitting the
"official" proposal. I will be very glad to hear from everyone who would
like to write a chapter for it. In my opinion it would be best to have a
book with lots of examples that covers "simple" tasks, but I will also be
glad if anyone would like to write a tutorial chapter on less common
subjects. For any feedback, just write to me.

I guess this can be a chance both to spread the word about
opensource/freeware together with chemometrics and to let the audience
know that it's not necessary to invest a lot of money in software to work
in chemometrics. Of course, since there is not much freeware/opensource
software in the field, it can also be a chance to "advertise" personal
software and to "start" something new.

I hope to receive help, and I will be glad to talk with enthusiasts all
around.

Giorgio

P.S.: (sorry for cross posting)

--
-======================-
Dr Giorgio Luciano Ph.D.
Di.C.T.F.A.
Dipartimento di Chimica e Tecnologie Farmaceutiche e Alimentari
Via Brigata Salerno (ponte) - 16147 Genova (GE) - Italy
email luciano at dictfa.unige.it
www.chemometrics.it
-======================-

From robert.kern at gmail.com Tue May 15 12:03:43 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 15 May 2007 11:03:43 -0500
Subject: [Numpy-discussion] Documentation
In-Reply-To: <46499C92.6070405@bigpond.net.au>
References: <46499C92.6070405@bigpond.net.au>
Message-ID: <4649D9DF.1020805@gmail.com>

Gary Ruben wrote:
> Hi Charles,
>
> In your post, is your numpy/doc reference referring to the file
> HOWTO_DOCUMENT.txt?
>
> I notice that this is not the same as the one here
> <http://scipy.org/DocstringStandard>
> which I think may be the preferred standard. If someone can confirm
> this, it should probably replace the HOWTO_DOCUMENT.txt content. IMO,
> the wiki version is more readable and also more comprehensive.

Note that despite the title, we haven't actually agreed to this format.
It was simply a proposal.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From aisaac at american.edu Tue May 15 12:21:48 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 15 May 2007 12:21:48 -0400
Subject: [Numpy-discussion] Documentation
In-Reply-To: <4649D9DF.1020805@gmail.com>
References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com>
Message-ID:

> Gary Ruben wrote:
>> In your post, is your numpy/doc reference referring to the
>> file HOWTO_DOCUMENT.txt?
>> I notice that this is not the same as the one here
>> <http://scipy.org/DocstringStandard>
>> which I think may be the preferred standard.

On Tue, 15 May 2007, Robert Kern apparently wrote:
> Note that despite the title, we haven't actually agreed to this format.
> It was simply a proposal.

Just to add a bit more specificity, I believe that the
HOWTO_DOCUMENT.txt document at
http://svn.scipy.org/svn/numpy/trunk/numpy/doc/HOWTO_DOCUMENT.txt
summarizes a substantial discussion and broad agreement. I.e., it is
currently the preferred standard.

Apologies if I missed a relevant discussion,
Alan Isaac

From charlesr.harris at gmail.com Tue May 15 12:46:06 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 10:46:06 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To:
References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com>
Message-ID:

On 5/15/07, Alan G Isaac wrote:
>
> > Gary Ruben wrote:
> >> In your post, is your numpy/doc reference referring to the
> >> file HOWTO_DOCUMENT.txt?
> >> I notice that this is not the same as the one here
> >> <http://scipy.org/DocstringStandard>
> >> which I think may be the preferred standard.
> >> If someone can confirm this, it should probably replace the
> >> HOWTO_DOCUMENT.txt content. IMO, the wiki version is more readable
> >> and also more comprehensive.
>
> On Tue, 15 May 2007, Robert Kern apparently wrote:
> > Note that despite the title, we haven't actually agreed to this
> > format. It was simply a proposal.
>
> Just to add a bit more specificity, I believe that the
> HOWTO_DOCUMENT.txt document at
> http://svn.scipy.org/svn/numpy/trunk/numpy/doc/HOWTO_DOCUMENT.txt
> summarizes a substantial discussion and broad agreement.
> I.e., it is currently the preferred standard.
>
> Apologies if I missed a relevant discussion,

I think the output generated for the consolidated lists is too ugly to
bear. The bullets are distracting, there is no spacing between entries,
and the explanatory text is appended on the same line, making it
difficult to read. So get rid of ":Parameters:" and just use
"Parameters: ". Furthermore, the dashes under Notes and Examples make
these section headers, and PyDoc moves these blocks to the top so that
they are printed before the Parameters list and other list blocks.
Did no one actually run this stuff through PyDoc? So my recommendations
are to remove the leading :'s from the list headings, making them
definition lists, italicize them, and insert the mandatory blank line
after. Then remove the dashes from under Notes and Examples, and let the
documenter choose between lists with the appended :, or just indent the
text after, which has the same effect if there are no list item headings,
which there typically aren't in those sections.

Chuck

From charlesr.harris at gmail.com Tue May 15 12:50:23 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 10:50:23 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To:
References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com>
Message-ID:

On 5/15/07, Charles R Harris wrote:
>
> [my previous message, quoted in full - snipped]
>

s/PyDoc/epydoc/g

From welchr at umich.edu Tue May 15 13:06:48 2007
From: welchr at umich.edu (Ryan Welch)
Date: Tue, 15 May 2007 17:06:48 +0000 (UTC)
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
Message-ID:

Hi guys,

I'm trying to figure out how to compile Numpy for -arch ppc64 on darwin,
and I can't seem to get it to work. Any pointers for how I can alter the
build process? (I'm not too familiar with distutils; I've been doing
some reading, but so far no luck in figuring out how to pass in my own
compiler flags other than directly modifying script files, which still
doesn't seem to work.)

Note: I have a 64-bit Python 2.5 binary already compiled, and I've
confirmed it can allocate more than 4GB of memory.

Cheers,
Ryan

From aisaac at american.edu Tue May 15 13:15:12 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 15 May 2007 13:15:12 -0400
Subject: [Numpy-discussion] Documentation
In-Reply-To:
References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com>
Message-ID:

> On 5/15/07, Alan G Isaac wrote:
>> Just to add a bit more specificity, I believe that the
>> HOWTO_DOCUMENT.txt document at
>> http://svn.scipy.org/svn/numpy/trunk/numpy/doc/HOWTO_DOCUMENT.txt
>> summarizes a substantial discussion and broad agreement.
>> I.e., it is currently the preferred standard.

On Tue, 15 May 2007, Charles R Harris apparently wrote:
> [Charles's recommendations of 12:46, quoted in full - snipped]

If this is meant as a reply to my post, it seems to treat my simple
summary as advocacy. Now I am in fact an advocate of sticking with
reStructuredText, but I was simply summarizing how I understood a rather
*extended* discussion. There was an outcome; I believe the doc above
captures it.

Charles apparently wishes to reopen that discussion. It will probably
help to keep a few things separate.

Bullets in consolidated lists:
if you don't like them, don't use them.
Use a definition list instead.
(I like this much better myself.)

Formatting:
this is just a style issue.
Submit a CSS file.
Placement of Notes and Examples: Seems simple to change, and the Epydoc developer has been very accommodating. Raise it as an issue or submit a patch. I will not try to summarize all the reasons *for* relying on reST, since these were hashed over in the previous discussion. I will just mention the obvious: getting additional functionality can have some costs, and one cost of having access to all the possible epydoc conversions---including typeset math in both PDF and XHTML documents---is the use of (very light) informative markup. Cheers, Alan Isaac From charlesr.harris at gmail.com Tue May 15 13:32:53 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 15 May 2007 11:32:53 -0600 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com> Message-ID: On 5/15/07, Alan G Isaac wrote: > > > On 5/15/07, Alan G Isaac wrote: > >> Just to add a bit more specificity, I believe that the > >> HOWTO_DOCUMENT.txt document at > >> http://svn.scipy.org/svn/numpy/trunk/numpy/doc/HOWTO_DOCUMENT.txt > >> summarizes a substantial discussion and broad agreement. > >> I.e., it is currently the preferred standard. > > > On Tue, 15 May 2007, Charles R Harris apparently wrote: > > I think the output generated for the consolidated lists is too ugly to > bear. > > The bullets are distracting, there is no spacing between entries, and > the > > explanatory text is appended on the same line, making it difficult to > read. > > So get rid of ":Parameters:" and just use "Parameters: ". Futhermore, > the > > dashes under Notes and Examples make these section headers and PyDoc > moves > > these blocks to the top so that they are printed before the Parameters > list > > and other list blocks. Did no one actually run this stuff through PyDoc? > So > > my recommendations are to remove the leading :'s from the list headings, > > making them definition lists, italicize them, and insert the mandatory > blank > > line after. Then remove the dashes from under Notes and Examples, and > let > > the documenter choose between lists with the appended : , or just indent > the > > text after, which has the same effect if there are no list item > headings, > > which there typically aren't in those sections. > > > If this is meant as a reply to my post, it seems to treat my > simple summary as advocacy. I was pointing out why I thought the suggested style was deficient. Now I am in fact an advocate of sticking with > reStructuredText, but I was simply summarizing how > I understood a rather *extended* discussion. > There was an outcome; I believe the doc above captures it. > > Charles apparently wishes to reopen that discussion. Yes! It will probably help to keep a few things separate. > > Bullets in consolidated lists: > if you don't like them, don't use them. > Use a definition list instead. > (I like this much better myself.) We should settle on one or the other. Formatting: > this is just a style issue. > Submit a CSS file. I will look into that. Placement of Notes and Examples: > Seems simple to change, and the Epydoc developer has > been very accommodating. Raise it as an issue or submit > a patch. It also causes the headings to be set in large bold type. Ugly. I will not try to summarize all the reasons *for* relying on > reST, since these were hashed over in the previous > discussion. I think reST is great, and properly used we get good looking output without mucking up the help text displayed in a terminal. 
> I will just mention the obvious: getting > additional functionality can have some costs, and one cost > of having access to all the possible epydoc > conversions---including typeset math in both PDF and XHTML > documents---is the use of (very light) informative markup. I have no problem with that. I am just trying to get the most suitable output without making upstream changes to epydoc. We will have to do something to deal with the documentation for c functions in add_newdocs.py, but that is another problem. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue May 15 13:39:20 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 15 May 2007 13:39:20 -0400 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <46499C92.6070405@bigpond.net.au> <4649D9DF.1020805@gmail.com> Message-ID: On Tue, 15 May 2007, Charles R Harris apparently wrote: > It also causes the headings to be set in large bold type. Ugly. Well I agree. But again this is just a style issue. All font decisions (and *much* else) can be set in a CSS file. > I think reST is great, and properly used we get good > looking output without mucking up the help text displayed > in a terminal. I agree. Cheers, Alan From gael.varoquaux at normalesup.org Tue May 15 13:44:17 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 15 May 2007 19:44:17 +0200 Subject: [Numpy-discussion] Documentation In-Reply-To: References: Message-ID: <20070515174416.GF7988@clipper.ens.fr> On Tue, May 15, 2007 at 01:15:12PM -0400, Alan G Isaac wrote: > Bullets in consolidated lists: > if you don't like them, don't use them. > Use a definition list instead. > (I like this much better myself.) Can't this be fixed by a CSS ? Ga?l From aisaac at american.edu Tue May 15 13:48:14 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 15 May 2007 13:48:14 -0400 Subject: [Numpy-discussion] Documentation In-Reply-To: <20070515174416.GF7988@clipper.ens.fr> References: <20070515174416.GF7988@clipper.ens.fr> Message-ID: > On Tue, May 15, 2007 at 01:15:12PM -0400, Alan G Isaac wrote: >> Bullets in consolidated lists: >> if you don't like them, don't use them. >> Use a definition list instead. >> (I like this much better myself.) On Tue, 15 May 2007, Gael Varoquaux apparently wrote: > Can't this be fixed by a CSS ? That is also true for the HTML. But I thought he was not liking them in the text (console) documentation as well. Cheers, Alan Isaac From charlesr.harris at gmail.com Tue May 15 13:51:52 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 15 May 2007 11:51:52 -0600 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID: On 5/15/07, Alan G Isaac wrote: > > > On Tue, May 15, 2007 at 01:15:12PM -0400, Alan G Isaac wrote: > >> Bullets in consolidated lists: > >> if you don't like them, don't use them. > >> Use a definition list instead. > >> (I like this much better myself.) > > > On Tue, 15 May 2007, Gael Varoquaux apparently wrote: > > Can't this be fixed by a CSS ? > > > That is also true for the HTML. > But I thought he was not liking them in the > text (console) documentation as well. No, you don't need bullets in the consolidated list to get them in the HTML, so that isn't a problem. If a CSS can set all the font styles, item spacing, and treatment of explanatory text, that would be the more flexible route. 
If we can also make SeeAlso generate links to the referenced routines I
will be very satisfied.

Chuck

From Chris.Barker at noaa.gov Tue May 15 14:00:19 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 15 May 2007 11:00:19 -0700
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
In-Reply-To:
References:
Message-ID: <4649F533.30406@noaa.gov>

Ryan Welch wrote:
> I'm trying to figure out how to compile Numpy for -arch ppc64 on
> darwin, and I

What happens with a plain old:

> python setup.py build

distutils should just "do the right thing"

> Note: I have a 64-bit Python 2.5 binary already compiled, and I've
> confirmed it can allocate more than 4GB of memory.

If you use that python to run setup.py, and it's a PPC binary, you
should be all set.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception
Chris.Barker at noaa.gov

From welchr at umich.edu Tue May 15 14:13:27 2007
From: welchr at umich.edu (Ryan Welch)
Date: Tue, 15 May 2007 18:13:27 +0000 (UTC)
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
References: <4649F533.30406@noaa.gov>
Message-ID:

Christopher Barker noaa.gov> writes:
> What happens with a plain old:
>
> > python setup.py build
>
> distutils should just "do the right thing"
>
> If you use that python to run setup.py, and it's a PPC binary, you
> should be all set.
>
> -CHB

When I run a normal build, it always uses -arch ppc -arch i386 to
compile C sources. I'd like to get it to do -arch ppc64 instead, and
remove the unnecessary i386 if at all possible.

From aisaac at american.edu Tue May 15 14:18:11 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 15 May 2007 14:18:11 -0400
Subject: [Numpy-discussion] Documentation
In-Reply-To:
References: <20070515174416.GF7988@clipper.ens.fr>
Message-ID:

On Tue, 15 May 2007, Charles R Harris apparently wrote:
> If a CSS can set all the font styles, item spacing, and
> treatment of explanatory text, that would be the more
> flexible route.

I'm not sure what "treatment" means here, but if it is a matter of font
decisions or spacing, the answer should be yes.

> If we can also make SeeAlso generate links to the
> referenced routines I will be very satisfied.

epydoc supports documentation cross-reference links. I think you get
what you want (if I understand) just by putting the name in back ticks,
`like_this`. That will give you a link to the like_this function in the
same module (and maybe anywhere in the package, but I'm not sure).

Cheers,
Alan
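A quick illustration of that backtick markup in a docstring. The helper itself is hypothetical; only the markup convention is the point, and in epydoc's HTML output the backticked names should come out as cross-reference links:

import numpy

def column_max_indices(a):
    """Indices of the maxima down each column.

    See also `argsort` and `argmax` for related index-returning
    routines.
    """
    return numpy.argmax(a, 0)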
From charlesr.harris at gmail.com Tue May 15 14:16:23 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 12:16:23 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To: <20070515174416.GF7988@clipper.ens.fr>
References: <20070515174416.GF7988@clipper.ens.fr>
Message-ID:

On 5/15/07, Gael Varoquaux wrote:
>
> On Tue, May 15, 2007 at 01:15:12PM -0400, Alan G Isaac wrote:
> > Bullets in consolidated lists:
> > if you don't like them, don't use them.
> > Use a definition list instead.
> > (I like this much better myself.)
>
> Can't this be fixed by a CSS ?

Looks like the font used for section headings can be specified, but there
is nothing in the epydoc.css file that looks like it affects the
consolidated lists or the movement of a section heading to the top of the
output. I think the latter comes from reST because that is where it is
documented.

Chuck

From charlesr.harris at gmail.com Tue May 15 14:22:12 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 12:22:12 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To:
References: <20070515174416.GF7988@clipper.ens.fr>
Message-ID:

On 5/15/07, Alan G Isaac wrote:
>
> On Tue, 15 May 2007, Charles R Harris apparently wrote:
> > If a CSS can set all the font styles, item spacing, and
> > treatment of explanatory text, that would be the more
> > flexible route.
>
> I'm not sure what "treatment" means here,
> but if it is a matter of font decisions or spacing,
> the answer should be yes.

The item in a consolidated list is divided into three parts: name, type,
explanation. I see no way in the CSS to control the display of those
parts, although it might be an undocumented feature. I basically want it
to look like a definition list with maybe the current default for type,
which puts it in parentheses.

> > If we can also make SeeAlso generate links to the
> > referenced routines I will be very satisfied.
>
> epydoc supports documentation cross-reference links.
> I think you get what you want (if I understand) just
> by putting the name in back ticks, `like_this`.
> That will give you a link to the like_this function
> in the same module (and maybe anywhere in the package,
> but I'm not sure).

Yes, I just noticed that. One less thing to worry about.

Chuck

From robert.kern at gmail.com Tue May 15 14:22:23 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 15 May 2007 13:22:23 -0500
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
In-Reply-To:
References: <4649F533.30406@noaa.gov>
Message-ID: <4649FA5F.2060903@gmail.com>

Ryan Welch wrote:

> When I run a normal build, it always uses -arch ppc -arch i386 to
> compile C sources. I'd like to get it to do -arch ppc64 instead, and
> remove the unnecessary i386 if at all possible.

Those parameters come from the BASECFLAGS variable in
$prefix/lib/python2.5/config/Makefile and should be whatever Python was
configured with. Check that file and see what it has for that variable.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
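A quick way to check that variable (and which installation's Makefile is actually consulted) from inside the interpreter. A read-only introspection sketch; actually changing the flags would still mean editing the Makefile or rebuilding Python:

import distutils.sysconfig as sc

print sc.get_config_var('BASECFLAGS')   # should show the -arch flags
print sc.get_config_var('LDFLAGS')
print sc.get_python_lib()               # confirms which install is in use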
From Chris.Barker at noaa.gov Tue May 15 14:30:01 2007
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 15 May 2007 11:30:01 -0700
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
In-Reply-To: References: <4649F533.30406@noaa.gov> Message-ID: <4649FC29.5070407@noaa.gov>

Ryan Welch wrote:
> When I run a normal build, it always uses -arch ppc -arch i386 to compile C
> sources. I'd like to get it to do -arch ppc64 instead, and remove the
> unnecessary i386 if at all possible.

Is your Python Universal? I suspect so. If you don't want a Universal build, then you may have to go find the right flags to build Python for PPC only, then distutils should work for you. If you do want a Universal Python, then why not a Universal numpy?

This has been discussed on the pythonmac list -- a scan of the archives there should give you your options:

http://mail.python.org/mailman/listinfo/pythonmac-sig

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov

From welchr at umich.edu Tue May 15 14:31:00 2007
From: welchr at umich.edu (Ryan Welch)
Date: Tue, 15 May 2007 18:31:00 +0000 (UTC)
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
References: <4649F533.30406@noaa.gov> <4649FA5F.2060903@gmail.com> Message-ID:

Robert Kern gmail.com> writes:
>
> Ryan Welch wrote:
>
> > When I run a normal build, it always uses -arch ppc -arch i386 to compile C
> > sources. I'd like to get it to do -arch ppc64 instead, and remove the
> > unnecessary i386 if at all possible.
>
> Those parameters come from the BASECFLAGS variable in
> $prefix/lib/python2.5/config/Makefile and should be whatever Python was
> configured with. Check that file and see what it has for that variable.
>

Actually my mistake completely: I realized my python was pointing incorrectly to an older version once I saw your statement there. So, I fixed it to correctly use python2.5 to run the build, but it still seems to fail:

C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -fast -arch ppc64 -Wall -Wstrict-prototypes -fno-common -fPIC

compile options: '-I/usr/local/include/python2.5 -Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.5 -c'
gcc: _configtest.c
_configtest.c: In function 'main':
_configtest.c:50: warning: format '%d' expects type 'int', but argument 3 has type 'long unsigned int'
_configtest.c:57: warning: format '%d' expects type 'int', but argument 3 has type 'long unsigned int'
_configtest.c:72: warning: format '%d' expects type 'int', but argument 3 has type 'long unsigned int'
gcc _configtest.o -L/usr/local/lib -L/usr/lib -L/sw/lib -o _configtest
/usr/bin/ld: _configtest.o bad magic number (not a Mach-O file)
collect2: ld returned 1 exit status
/usr/bin/ld: _configtest.o bad magic number (not a Mach-O file)
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o

From charlesr.harris at gmail.com Tue May 15 14:55:19 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 12:55:19 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID:

On 5/15/07, Charles R Harris wrote:
>
> On 5/15/07, Alan G Isaac wrote:
> >
> > On Tue, 15 May 2007, Charles R Harris apparently wrote:
> > > If a CSS can set all the font styles, item spacing, and
> > > treatment of explanatory text, that would be the more
> > > flexible route.
> >
> > I'm not sure what "treatment" means here,
> > but if it is a matter of font decisions or spacing,
> > the answer should be yes.
>
> The item in a consolidated list is divided into three parts: name, type,
> explanation. I see no way in the CSS to control the display of those parts,
> although it might be an undocumented feature. I basically want it to look
> like a definition list with maybe the current default for type, which puts
> it in parentheses.
>
> > > If we can also make SeeAlso generate links to the
> > > referenced routines I will be very satisfied.
> >
> > epydoc supports documentation cross-reference links.
> > I think you get what you want (if I understand) just
> > by putting the name in back ticks, `like_this`.
> > That will give you a link to the like_this function
> > in the same module (and maybe anywhere in the package,
> > but I'm not sure).
>
> Yes, I just noticed that. One less thing to worry about.

Problems with the included HOWTO_DOCUMENT.txt file:

1) It needs to show __docformat__ = "restructuredtext en"; epydoc generates errors otherwise.
2) A consolidated field *must* have a type if : is included in the item.
3) Notes and Examples are moved to the top.
4) :OtherParameters: is not a recognized type.

I am willing to correct this stuff, but I really don't think these things should be committed without someone actually running epydoc and looking at the output. I've attached an html file generated by pasting the HOWTO_DOCUMENT.txt markup into a fictitious foo function.

Chuck

From aisaac at american.edu Tue May 15 15:02:01 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 15 May 2007 15:02:01 -0400
Subject: [Numpy-discussion] Documentation
In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID:

On Tue, 15 May 2007, Charles R Harris apparently wrote:
> The item in a consolidated list is divided into three
> parts: name, type, explanation. I see no way in the CSS to
> control the display of those parts, although it might be
> an undocumented feature. I basically want it to look like
> a definition list with maybe the current default for type,
> which puts it in parentheses.

OK, I see. To control this we need a very small change in output. Instead of::
    <dl>
    <dt>myvar: mytype</dt>
    <dd>my explanation</dd>
    </dl>

I think (as long as the colon is ok) we just need::

    <dl>
    <dt>myvar:
        <span class="type">mytype</span></dt>
    <dd>my explanation</dd>
    </dl>
This actually seems a very appropriate change for epydoc and fully backwards compatible. I'll copy this to Ed Loper and see if he will consider making this change. (I'm not sure he is monitoring this discussion any longer.)

Cheers,
Alan

From charlesr.harris at gmail.com Tue May 15 15:19:55 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 13:19:55 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID:

On 5/15/07, Alan G Isaac wrote:
>
> On Tue, 15 May 2007, Charles R Harris apparently wrote:
> > The item in a consolidated list is divided into three
> > parts: name, type, explanation. I see no way in the CSS to
> > control the display of those parts, although it might be
> > an undocumented feature. I basically want it to look like
> > a definition list with maybe the current default for type,
> > which puts it in parentheses.
>
> OK, I see.
> To control this we need a very small change in output.
> Instead of::
>
>     <dl>
>     <dt>myvar: mytype</dt>
>     <dd>my explanation</dd>
>     </dl>
>
> I think (as long as the colon is ok) we just need::
>
>     <dl>
>     <dt>myvar:
>         <span class="type">mytype</span></dt>
>     <dd>my explanation</dd>
>     </dl>
>
> This actually seems a very appropriate change for epydoc and
> fully backwards compatible. I'll copy this to Ed Loper and
> see if he will consider making this change. (I'm not sure he
> is monitoring this discussion any longer.)

What I would like it to look like, except for the type stuff, is attached.

Chuck

From aisaac at american.edu Tue May 15 15:39:42 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 15 May 2007 15:39:42 -0400
Subject: [Numpy-discussion] Documentation
In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID:

On Tue, 15 May 2007, Charles R Harris apparently wrote:
> What I would like it to look like, except for the type
> stuff, is attached.

Ooof! Aside from obviously agreeing with the location of notes and examples, I have to say, the readability of your example is greatly reduced from the epydoc default!

Specifically, I would restore the background colors and restore bold to the headings. (I would agree bold is not needed/desirable for the function names in the Function Details.) But I have no background in web design.

Alan Isaac

From charlesr.harris at gmail.com Tue May 15 15:56:40 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 13:56:40 -0600
Subject: [Numpy-discussion] Documentation
In-Reply-To: References: <20070515174416.GF7988@clipper.ens.fr> Message-ID:

On 5/15/07, Alan G Isaac wrote:
>
> On Tue, 15 May 2007, Charles R Harris apparently wrote:
> > What I would like it to look like, except for the type
> > stuff, is attached.
>
> Ooof!
> Aside from obviously agreeing with the location of notes
> and examples, I have to say, the readability of your
> example is greatly reduced from the epydoc default!
>
> Specifically, I would restore the background colors and
> restore bold to the headings. (I would agree bold is
> not needed/desirable for the function names in the
> Function Details.)

They are there and show up here, but I couldn't upload the whole generated documentation because of the mailing list limit on the size of posts. The lack of colorization for the doctest example in the html generated from HOWTO is real; however, the code needs to be indented.

Chuck
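To make the format being debated concrete, here is a rough sketch of a docstring in the consolidated-field style of the HOWTO, folding in Chuck's points above (the function and its fields are invented; this is not the final standard):

    # Module-level marker; per Chuck, epydoc errors out without it.
    __docformat__ = "restructuredtext en"

    def foo(x, axis=None):
        """Return a condensed summary of x.

        :Parameters:
            x : ndarray
                Input array.
            axis : {None, int}, optional
                Axis along which to operate; None means all elements.

        :Returns:
            out : ndarray
                The condensed result.

        :SeeAlso:
            - `bar` : a related (hypothetical) routine, linked via back ticks.

        Notes
        -----
        Explanatory text and examples go here, not at the top.
        """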
From jdh2358 at gmail.com Tue May 15 16:20:15 2007
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 15 May 2007 15:20:15 -0500
Subject: [Numpy-discussion] pretty printing record array element - datetime
Message-ID: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com>

I have a numpy record array and I want to pretty print a single element. I was trying to loop over the names in the element dtype and use getattr to access the field value, but I got fouled up because getattr is trying to access the dtype attribute of one of the python objects (datetime.date) that I am storing in the record array. The problem field in the example below is ('fiscaldate', '|O4') which is a python datetime.date object.

I subsequently discovered that I can simply use tr[name] rather than getattr(tr, name), but wanted to post this in case it is a bug.

In other news, would it be possible to add a pprint method to record array elements that did something like the pprint_element below? It would be nice to be able to do tr.pprint() and get nice output which shows key/value pairs.

Thanks,
JDH

=======================================================
# the buggy code

In [33]: type(thisr)
Out[33]: <class 'numpy.core.records.recarray'>

In [34]: tr = thisr[14]

In [35]: tr
Out[35]: ('CDE', 'Y2002', datetime.date(2002, 12, 31), datetime.date(2002, 3, 18), datetime.date(2003, 3, 18), datetime.date(2004, 3, 18), datetime.date(2005, 3, 18), 'CDE', 'A', 767.0, 126.95, 85.944000000000003, -81.207999999999998, -8.484, 4.9589999999999996, -28.071999999999999, 33.030000000000001, 415.96499999999997, 184.334)

In [36]: tr.dtype
Out[36]: dtype([('id', '|S10'), ('period', '|S5'), ('fiscaldate', '|O4'), ('datepy1', '|O4'), ('reportdate', '|O4'), ('dateny1', '|O4'), ('dateny2', '|O4'), ('ticker', '|S6'), ('exchange', '|S1'), ('employees', '<f8'), ('marketcap', '<f8'), ('sales', '<f8'), ('income', '<f8'), ('cashflowops', '<f8'), ('returnunhedged', '<f8'), ('returnq', '<f8'), ('returnhedgedprior', '<f8'), ('returnhedgednext1', '<f8'), ('returnhedgednext2', '<f8')])

Traceback (most recent call last):
  File "", line 2, in ?
  File "/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/records.py", line 138, in __getattribute__
    if obj.dtype.fields:
AttributeError: 'datetime.date' object has no attribute 'dtype'

In [39]: numpy.__version__
Out[39]: '1.0.3.dev3728'

======================================================
# the good code (the colons are aligned in the output in an ASCII terminal)

def pprint_element(tr):
    names = tr.dtype.names
    maxlen = max([len(name) for name in names])
    rows = []
    fmt = '%% %ds: %%s' % maxlen
    for name in names:
        rows.append(fmt % (name, tr[name]))
    return '\n'.join(rows)

In [49]: print pprint_element(tr)
               id: CDE
           period: Y2002
       fiscaldate: 2002-12-31
          datepy1: 2002-03-18
       reportdate: 2003-03-18
          dateny1: 2004-03-18
          dateny2: 2005-03-18
           ticker: CDE
         exchange: A
        employees: 767.0
        marketcap: 126.95
            sales: 85.944
           income: -81.208
      cashflowops: -8.484
   returnunhedged: 4.959
          returnq: -28.072
returnhedgedprior: 33.03
returnhedgednext1: 415.965
returnhedgednext2: 184.334
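John's tr[name] workaround in self-contained form, using a made-up two-field record (Pierre expands on the recarray-view alternative below):

    import numpy

    # Toy record array standing in for John's data.
    r = numpy.array([('CDE', 85.944)], dtype=[('id', 'S3'), ('sales', 'f8')])
    tr = r[0]

    # Field indexing works regardless of what the fields hold:
    for name in tr.dtype.names:
        print name, tr[name]

    # Attribute access needs a recarray view:
    trv = r.view(numpy.recarray)
    print trv.id[0], trv.sales[0]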
From mattknox_ca at hotmail.com Tue May 15 16:28:17 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Tue, 15 May 2007 20:28:17 +0000 (UTC)
Subject: [Numpy-discussion] pretty printing record array element - datetime
References: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com> Message-ID:

> I have a numpy record array and I want to pretty print a single
> element. I was trying to loop over the names in the element dtype and
> use getattr to access the field value, but I got fouled up because
> getattr is trying to access the dtype attribute of one of the python
> objects (datetime.date) that I am storing in the record array. The
> problem field in the example below is ('fiscaldate', '|O4') which is a
> python datetime.date object.
>

I don't really know much about record arrays since I haven't used them at all really... but are you perhaps trying to emulate a time series with your record arrays? You may want to take a look at the timeseries module that is currently in development: http://www.scipy.org/TimeSeriesPackage . It can probably ease your pain in terms of dealing with date indexing in a more seamless manner, as well as "pretty printing" (and plotting). There is also a sub-module in the package for record array TimeSeries objects of sorts... but Pierre would be more qualified to talk about that than me.

We LOVE getting new testers and feedback, so if you're willing to give it a spin it would be greatly appreciated.

- Matt

From charlesr.harris at gmail.com Tue May 15 17:34:15 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 May 2007 15:34:15 -0600
Subject: [Numpy-discussion] umath.maximum.reduce
In-Reply-To: <200705081027.24373.pgmdevlist@gmail.com> References: <200705081027.24373.pgmdevlist@gmail.com> Message-ID:

On 5/8/07, Pierre GM wrote:
>
> All,
> On a 2D array, numpy.core.umath.maximum.reduce returns a 1D array (the
> default axis is 0). An axis=None is not recognized as a valid argument for
> numpy.core.umath.maximum, unfortunately... Is this the wanted behavior ?
> Thanks a lot in advance

I see no one answered ;) I think it is traditional behaviour from Numeric, so it can't be changed without a bit of thought. Why not file a ticket for the 1.1 release and see how folks respond? Is there any reason why you can't use the x.max() method?

Chuck

From pgmdevlist at gmail.com Tue May 15 17:34:01 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 15 May 2007 17:34:01 -0400
Subject: [Numpy-discussion] pretty printing record array element - datetime
In-Reply-To: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com> References: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com> Message-ID: <200705151734.01948.pgmdevlist@gmail.com>

In addition to Matt's answer: Yes, we have a tmulti where you can store dates and other variables in a record-like TimeSeries.

However, there must be something simpler: Why are you using getattr to access the fields? Your object tr is a record array, so you could use

for n in tr.dtype.names:
    print n, tr[n]

Alternatively, you can view tr as a recarray, and then access the fields as attributes:

trv = tr.view(numpy.recarray)
for n in tr.dtype.names:
    print n, getattr(trv, n)

Note that you're going to get 0d-arrays in both cases, so you may want to use an extra .item()

From welchr at umich.edu Tue May 15 17:38:35 2007
From: welchr at umich.edu (Ryan Welch)
Date: Tue, 15 May 2007 21:38:35 +0000 (UTC)
Subject: [Numpy-discussion] 64-bit Numpy (ppc64)
References: <4649F533.30406@noaa.gov> <4649FC29.5070407@noaa.gov> Message-ID:

Christopher Barker noaa.gov> writes:

> Is your Python Universal? I suspect so. If you don't want a Universal
> build, then you may have to go find the right flags to build Python for
> PPC only, then distutils should work for you. If you do want a Universal
> Python, then why not a Universal numpy?

Yes, my 64-bit Python 2.5 build is universal. I actually can't get it to compile without it. And in all of my searching, I don't think anyone else has done it either, unfortunately.

From pgmdevlist at gmail.com Tue May 15 17:52:42 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 15 May 2007 17:52:42 -0400
Subject: [Numpy-discussion] umath.maximum.reduce
In-Reply-To: References: <200705081027.24373.pgmdevlist@gmail.com> Message-ID: <200705151752.43054.pgmdevlist@gmail.com>

On Tuesday 15 May 2007 17:34:15 Charles R Harris wrote:
> I see no one answered ;) I think it is traditional behaviour from Numeric,
> so it can't be changed without a bit of thought.
> Why not file a ticket for the 1.1 release and see how folks respond?

I'll try to.

> Is there any reason why you can't use the x.max() method?

Actually, there is: I try to redefine the .max() for a subclass (maskedarray). The quick fix I used was to ravel my array first if axis=None. My question was more theoretical: legacy is a good enough explanation for me...
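A minimal sketch of the behaviour Pierre and Chuck are discussing, including the ravel-first workaround (the array contents are invented):

    import numpy
    from numpy.core import umath

    a = numpy.array([[1, 5], [3, 2]])
    print umath.maximum.reduce(a)           # reduces along axis 0: [3 5]
    print umath.maximum.reduce(a.ravel())   # stand-in for axis=None: 5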
From oliphant.travis at ieee.org Tue May 15 19:00:21 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 15 May 2007 17:00:21 -0600
Subject: [Numpy-discussion] pretty printing record array element - datetime
In-Reply-To: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com> References: <88e473830705151320t75593e7cxbab07995c1b07bbe@mail.gmail.com> Message-ID: <464A3B85.8090608@ieee.org>

John Hunter wrote:
> I have a numpy record array and I want to pretty print a single
> element. I was trying to loop over the names in the element dtype and
> use getattr to access the field value, but I got fouled up because
> getattr is trying to access the dtype attribute of one of the python
> objects (datetime.date) that I am storing in the record array. The
> problem field in the example below is ('fiscaldate', '|O4') which is a
> python datetime.date object.
>

Ah! This is a bug introduced a long time ago when we decided to get rid of the "object-data-type" array scalars. I'm working on a fix.

> In other news, would it be possible to add a pprint method to record
> array elements that did something like the pprint_element below? It
> would be nice to be able to do tr.pprint() and get nice output which
> shows key/value pairs.
>

That is easy enough to do and I have no objection.

-Travis

From fullung at gmail.com Tue May 15 23:12:23 2007
From: fullung at gmail.com (Albert Strasheim)
Date: Wed, 16 May 2007 05:12:23 +0200
Subject: [Numpy-discussion] NumPy 1.0.3 release next week
In-Reply-To: <20070513141334.GA3924@dogbert.sdsl.sun.ac.za> References: <464393E0.3030900@ee.byu.edu> <20070512111917.GA17254@dogbert.sdsl.sun.ac.za> <20070513141334.GA3924@dogbert.sdsl.sun.ac.za> Message-ID: <20070516031223.GA32726@dogbert.sdsl.sun.ac.za>

Hello all

A quick update: we're down to 16 active tickets, with 38 tickets already closed for the 1.0.3 release. Thanks to all the bugfixers.

Summary of tickets for 1.0.3:
http://projects.scipy.org/scipy/numpy/milestone/1.0.3%20Release

Active tickets for 1.0.3:
http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release

Cheers,
Albert

On Sun, 13 May 2007, Albert Strasheim wrote:

> Hello all
>
> On Sat, 12 May 2007, Charles R Harris wrote:
>
> > On 5/12/07, Albert Strasheim wrote:
> > >
> > > I've more or less finished my quick triage effort.
> >
> > Thanks, Albert. The tickets look much better organized now.
>
> My pleasure. Stefan van der Walt has also gotten in on the act and
> we're now down to 19 open tickets with 1.0.3 as the milestone.

From charlesr.harris at gmail.com Wed May 16 12:36:21 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 16 May 2007 10:36:21 -0600
Subject: [Numpy-discussion] Heads up on gcc-4.2 and -fstrict-overflow
Message-ID:

Hi All,

The newest released version of gcc implements the flag -fstrict-overflow, which is on by default. In C this means that signed integers are assumed to not overflow, as by the strict C standard only unsigned integers use modular arithmetic and wrap. This may affect numpy because currently signed integers wrap and don't raise a flag on overflow. We may need to unset this flag to maintain current behaviour.

Chuck
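For reference, the wrapping behaviour Chuck describes, in two lines (the exact printed form may vary by platform and numpy version):

    import numpy

    a = numpy.array([2147483647], dtype=numpy.int32)  # largest int32
    print a + 1   # wraps modularly to [-2147483648] rather than raising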
From peridot.faceted at gmail.com Wed May 16 21:03:43 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 16 May 2007 21:03:43 -0400
Subject: [Numpy-discussion] Unfortunate user experience with max()
Message-ID:

Hi,

Numpy has a max() function. It takes an array, and possibly some extra arguments (axis and default). Unfortunately, this means that

>>> numpy.max(-1.3,2,7)
-1.3

This can lead to surprising bugs in code that either explicitly expects it to behave like python's max() or implicitly expects that by doing "from numpy import max".

I don't have a *suggestion*, exactly, for how to deal with this; checking the type of the axis argument, or even its value, when the first argument is a scalar, will still let some bugs slip through (e.g., max(-1,0)). But I've been bitten by this a few times even after I noticed it.

Is there anything reasonable to do about this, beyond conditioning oneself to use amax?

Thanks,
Anne M. Archibald

From wbaxter at gmail.com Wed May 16 21:21:55 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Thu, 17 May 2007 10:21:55 +0900
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID:

On 5/17/07, Anne Archibald wrote:
>
> Hi,
>
> Numpy has a max() function. It takes an array, and possibly some extra
> arguments (axis and default). Unfortunately, this means that
>
> >>> numpy.max(-1.3,2,7)
> -1.3
>
> This can lead to surprising bugs in code that either explicitly
> expects it to behave like python's max() or implicitly expects that by
> doing "from numpy import max".
>
> I don't have a *suggestion*, exactly, for how to deal with this;
> checking the type of the axis argument, or even its value, when the
> first argument is a scalar, will still let some bugs slip through
> (e.g., max(-1,0)). But I've been bitten by this a few times even after
> I noticed it.
>
> Is there anything reasonable to do about this, beyond conditioning
> oneself to use amax?

I don't know what a good fix is, but I got bitten by that one the other day too.

--bb

From aisaac at american.edu Wed May 16 21:34:53 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 16 May 2007 21:34:53 -0400
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID:

On Wed, 16 May 2007, Anne Archibald apparently wrote:
>>>> numpy.max(-1.3,2,7)
> -1.3

Is that new behavior? I get a TypeError on the last argument. (As expected.)

Cheers,
Alan Isaac

From kwgoodman at gmail.com Wed May 16 21:36:12 2007
From: kwgoodman at gmail.com (Keith Goodman)
Date: Wed, 16 May 2007 18:36:12 -0700
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID:

On 5/16/07, Anne Archibald wrote:
> Numpy has a max() function. It takes an array, and possibly some extra
> arguments (axis and default). Unfortunately, this means that
>
> >>> numpy.max(-1.3,2,7)
> -1.3
>
> This can lead to surprising bugs in code that either explicitly
> expects it to behave like python's max() or implicitly expects that by
> doing "from numpy import max".

I have several bite marks from

>> x
matrix([[ 2. ],
        [ nan],
        [ 1. ]])
>> numpy.max(x)
1.0

From peridot.faceted at gmail.com Wed May 16 21:54:42 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 16 May 2007 21:54:42 -0400
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID:

On 16/05/07, Alan G Isaac wrote:
> On Wed, 16 May 2007, Anne Archibald apparently wrote:
> >>>> numpy.max(-1.3,2,7)
> > -1.3
>
> Is that new behavior?
> I get a TypeError on the last argument.
> (As expected.)

For which version of numpy?

In [2]: numpy.max(-1.3,2.7)
Out[2]: -1.3

In [3]: numpy.__version__
Out[3]: '1.0.1'

Anne

From robert.kern at gmail.com Wed May 16 21:57:37 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 16 May 2007 20:57:37 -0500
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID: <464BB691.8060902@gmail.com>

Anne Archibald wrote:
> On 16/05/07, Alan G Isaac wrote:
>> On Wed, 16 May 2007, Anne Archibald apparently wrote:
>>>>>> numpy.max(-1.3,2,7)
                      ^ typo
>>> -1.3
>> Is that new behavior?
>> I get a TypeError on the last argument.
>> (As expected.)
> For which version of numpy?
>
> In [2]: numpy.max(-1.3,2.7)
> Out[2]: -1.3

Ah, you had a typo in your original message, conveniently labeled above.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From cookedm at physics.mcmaster.ca Wed May 16 22:00:55 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 16 May 2007 22:00:55 -0400
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: References: Message-ID: <20070517020055.GA10408@arbutus.physics.mcmaster.ca>

On Wed, May 16, 2007 at 09:03:43PM -0400, Anne Archibald wrote:
> Hi,
>
> Numpy has a max() function. It takes an array, and possibly some extra
> arguments (axis and default). Unfortunately, this means that
>
> >>> numpy.max(-1.3,2,7)
> -1.3
>
> This can lead to surprising bugs in code that either explicitly
> expects it to behave like python's max() or implicitly expects that by
> doing "from numpy import max".
>
> I don't have a *suggestion*, exactly, for how to deal with this;
> checking the type of the axis argument, or even its value, when the
> first argument is a scalar, will still let some bugs slip through
> (e.g., max(-1,0)). But I've been bitten by this a few times even after
> I noticed it.
>
> Is there anything reasonable to do about this, beyond conditioning
> oneself to use amax?

1) Don't do 'from numpy import max' :-)
2) 'from numpy import *' doesn't import max, so that's ok

I don't think we can reasonably change that in a 1.0 release, but I'm all in favour of removing numpy.max in 1.1. Shadowing builtins is a bad idea. The same goes for numpy.sum, which, at the least, should be modified so that the function signatures are compatible:

numpy.sum(x, axis=None, dtype=None, out=None)

vs.

sum(sequence, start=0)

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
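The signature clash David points out, in two lines (the second call fails because the 10 is taken as an axis):

    import numpy

    print sum([1, 2, 3], 10)   # builtin: second argument is a start value -> 16
    numpy.sum([1, 2, 3], 10)   # numpy: second argument is an axis -> ValueError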
From charlesr.harris at gmail.com Wed May 16 22:32:39 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 16 May 2007 20:32:39 -0600
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> References: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> Message-ID:

On 5/16/07, David M. Cooke wrote:
>
> On Wed, May 16, 2007 at 09:03:43PM -0400, Anne Archibald wrote:
> > Hi,
> >
> > Numpy has a max() function. It takes an array, and possibly some extra
> > arguments (axis and default). Unfortunately, this means that
> >
> > >>> numpy.max(-1.3,2,7)
> > -1.3
> >
> > This can lead to surprising bugs in code that either explicitly
> > expects it to behave like python's max() or implicitly expects that by
> > doing "from numpy import max".
> >
> > I don't have a *suggestion*, exactly, for how to deal with this;
> > checking the type of the axis argument, or even its value, when the
> > first argument is a scalar, will still let some bugs slip through
> > (e.g., max(-1,0)). But I've been bitten by this a few times even after
> > I noticed it.
> >
> > Is there anything reasonable to do about this, beyond conditioning
> > oneself to use amax?
>
> 1) Don't do 'from numpy import max' :-)
> 2) 'from numpy import *' doesn't import max, so that's ok
>
> I don't think we can reasonably change that in a 1.0 release, but I'm
> all in favour of removing numpy.max in 1.1. Shadowing builtins is a
> bad idea.

PyChecker warns of a number of such shadowings in numpy.

Chuck

From wbaxter at gmail.com Thu May 17 00:07:46 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Thu, 17 May 2007 13:07:46 +0900
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> References: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> Message-ID:

On 5/17/07, David M. Cooke wrote:
>
> On Wed, May 16, 2007 at 09:03:43PM -0400, Anne Archibald wrote:
> > Hi,
> >
> > Numpy has a max() function. It takes an array, and possibly some extra
> > arguments (axis and default). Unfortunately, this means that
> >
> > >>> numpy.max(-1.3,2,7)
> > -1.3
> >
> > This can lead to surprising bugs in code that either explicitly
> > expects it to behave like python's max() or implicitly expects that by
> > doing "from numpy import max".
> >
> > I don't have a *suggestion*, exactly, for how to deal with this;
> > checking the type of the axis argument, or even its value, when the
> > first argument is a scalar, will still let some bugs slip through
> > (e.g., max(-1,0)). But I've been bitten by this a few times even after
> > I noticed it.
> >
> > Is there anything reasonable to do about this, beyond conditioning
> > oneself to use amax?
>
> 1) Don't do 'from numpy import max' :-)
> 2) 'from numpy import *' doesn't import max, so that's ok
>
> I don't think we can reasonably change that in a 1.0 release, but I'm
> all in favour of removing numpy.max in 1.1. Shadowing builtins is a
> bad idea. The same goes for numpy.sum, which, at the least, should be
> modified so that the function signatures are compatible:
> numpy.sum(x, axis=None, dtype=None, out=None)
> vs.
> sum(sequence, start=0)

For numpy.max at least, not accepting a fractional value as an axis would probably catch a large class of common errors. I mean it doesn't seem like it should be legal to say 'axis=2.7'.

--bb
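A rough sketch of the guard Bill is suggesting; this is just the idea, not numpy's actual implementation:

    def _check_axis(axis):
        # Reject fractional axes up front; None stays legal.
        if axis is not None and axis != int(axis):
            raise TypeError("axis must be an integer, got %r" % (axis,))

    _check_axis(1)    # fine
    _check_axis(2.7)  # raises TypeError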
From oliphant.travis at ieee.org Thu May 17 04:54:22 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 17 May 2007 02:54:22 -0600
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <200705161828.41791.pgmdevlist@gmail.com> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> Message-ID: <464C183E.6010601@ieee.org>

Pierre GM wrote:
>> How far away is maskedarray from being able to replace numpy.ma?
>>
>
> So far, it does everything that numpy.core.ma does, with I believe more
> flexibility and some additional features (hard/soft mask, easy
> subclassing...). Personally, I stopped using numpy.core.ma completely (unless
> for test purposes), and I even managed to convince another user to switch to
> this module for his own work (cf the TimeSeries package).
>
> Of course, I expect that there are still some bugs here and there, but I'm
> working on it (when I find them). It's a tad slower than numpy.core.ma, but
> that's a side effect of the flexibility. In the long term, there are some
> plans about porting the module to C, but we're talking in quarters rather
> than in weeks.
>
> About when it'll be promoted outside the sandbox: well, we need more feedback
> from users, as usual. I guess that's the principal stumbling block. I'd be
> quite grateful if you could try it and let me know what you think. I grew
> fond of this child born in pain (explaining to my bosses why I spent so much
> time on something which is only remotely connected to what I am paid to do...),
> so I make sure that the baby behaves...
>

I'm inclined to move his masked array over to ma wholesale. The fact that Pierre sees it as his baby is very important to me. If it doesn't have significant compatibility issues then I'm all for it. I'm mainly interested in hearing how people actually using numpy.ma would respond.

Perhaps we should move it over and start deprecating numpy.ma??

-Travis

> Once again, please don't hesitate to contact me if you have any
> comments/complaints/suggestions.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lbolla at gmail.com Thu May 17 04:48:40 2007
From: lbolla at gmail.com (lorenzo bolla)
Date: Thu, 17 May 2007 10:48:40 +0200
Subject: [Numpy-discussion] BLAS and LAPACK used?
Message-ID: <80c99e790705170148g41ae19f9kbb9ff57251583cae@mail.gmail.com>

Hi all,

I need to know the libraries (BLAS and LAPACK) which numpy was linked to when I compiled it. I can't remember which ones I used (ATLAS, MKL, etc...)... Is there an easy way to find it out?

Thanks in advance,
Lorenzo Bolla.

From a.u.r.e.l.i.a.n at gmx.net Thu May 17 04:51:52 2007
From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert)
Date: Thu, 17 May 2007 10:51:52 +0200
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> References: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> Message-ID: <200705171051.53370.a.u.r.e.l.i.a.n@gmx.net>

Hi,

> > This can lead to surprising bugs in code that either explicitly
> > expects it to behave like python's max() or implicitly expects that by
> > doing "from numpy import max".

my solution is to never use numpy.max. For arrays, I always use the method call (somearray.max()). For everything else the builtin.

> I don't think we can reasonably change that in a 1.0 release, but I'm
> all in favour of removing numpy.max in 1.1. Shadowing builtins is a
> bad idea. The same goes for numpy.sum, which, at the least, should be
> modified so that the function signatures are compatible:
> numpy.sum(x, axis=None, dtype=None, out=None)
> vs.
> sum(sequence, start=0)

+1 from me. Real world example where the shadowing made my code not work:

>>> l = [[1, 2], [5, 6, 7], [9]]
>>> l2 = sum(l, [])

builtin result: l2 == [1, 2, 5, 6, 7, 9]
numpy.sum result: exception...

My suggestion for this would be to remove all shadowing in favor of array methods. Though I am aware this would break *lots* of "legacy" code.

cu, Johannes

From numpy at mspacek.mm.st Thu May 17 05:08:38 2007
From: numpy at mspacek.mm.st (Martin Spacek)
Date: Thu, 17 May 2007 02:08:38 -0700
Subject: [Numpy-discussion] BLAS and LAPACK used?
In-Reply-To: <80c99e790705170148g41ae19f9kbb9ff57251583cae@mail.gmail.com> References: <80c99e790705170148g41ae19f9kbb9ff57251583cae@mail.gmail.com> Message-ID: <464C1B96.5080000@mspacek.mm.st>

lorenzo bolla wrote:
> Hi all,
> I need to know the libraries (BLAS and LAPACK) which numpy has been
> linked to, when I compiled it.
> I can't remember which ones I used (ATLAS, MKL, etc...)...
> Is there an easy way to find it out?
> Thanks in advance,
> Lorenzo Bolla.

Yup:

>>> import numpy
>>> numpy.show_config()

Martin

From nwagner at iam.uni-stuttgart.de Thu May 17 05:53:05 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 17 May 2007 11:53:05 +0200
Subject: [Numpy-discussion] BLAS and LAPACK used?
In-Reply-To: <80c99e790705170148g41ae19f9kbb9ff57251583cae@mail.gmail.com> References: <80c99e790705170148g41ae19f9kbb9ff57251583cae@mail.gmail.com> Message-ID:

On Thu, 17 May 2007 10:48:40 +0200 "lorenzo bolla" wrote:
> Hi all,
> I need to know the libraries (BLAS and LAPACK) which numpy has been linked
> to, when I compiled it.
> I can't remember which ones I used (ATLAS, MKL, etc...)...
> Is there an easy way to find it out?
> Thanks in advance,
> Lorenzo Bolla.

AFAIK

numpy.show_config()

Nils

From rudolphv at gmail.com Thu May 17 07:52:08 2007
From: rudolphv at gmail.com (Rudolph van der Merwe)
Date: Thu, 17 May 2007 13:52:08 +0200
Subject: [Numpy-discussion] Expected behavior of numpy object arrays. Is this a bug?
Message-ID: <97670e910705170452v42335c53o2321c03623eff5a7@mail.gmail.com>

Can someone please confirm if the following is the expected behavior of numpy ndarrays of dtype=object, i.e. object arrays? I suspect it might be a bug. In the Python shell dump below, shouldn't the dtype of oa2[0] and oa2[1] be 'int32' as is the case for oa1[0] and oa1[1] ?

Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.0.2'
>>> a = np.array([1,2], dtype='int32')
>>> b = np.array([5,6,7], dtype='int32')
>>> c = np.array([3,4], dtype='int32')
>>> oa1 = np.array([a, b], dtype=object)
>>> oa2 = np.array([a, c], dtype=object)
>>> oa1
array([[1 2], [5 6 7]], dtype=object)
>>> oa1[0]
array([1, 2])
>>> oa1[0].dtype
dtype('int32')
>>> oa1[1]
array([5, 6, 7])
>>> oa1[1].dtype
dtype('int32')
>>> oa2
array([[1, 2],
       [3, 4]], dtype=object)
>>> oa2[0]
array([1, 2], dtype=object)
>>> oa2[0].dtype
dtype('object')
>>> oa2[1]
array([3, 4], dtype=object)
>>> oa2[1].dtype
dtype('object')
>>>

--
Rudolph van der Merwe

From david.huard at gmail.com Thu May 17 09:42:27 2007
From: david.huard at gmail.com (David Huard)
Date: Thu, 17 May 2007 09:42:27 -0400
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <464C183E.6010601@ieee.org> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> Message-ID: <91cf711d0705170642s40c97e04id301c46b6c62e825@mail.gmail.com>

2007/5/17, Travis Oliphant :
> I'm inclined to move his masked array over to ma wholesale. The fact
> that Pierre sees it as his baby is very important to me. If it doesn't
> have significant compatibility issues then I'm all for it. I'm mainly
> interested in hearing how people actually using numpy.ma would respond.
>
> Perhaps we should move it over and start deprecating numpy.ma??

At the risk of offending Pierre's baby, I think that's a little premature. Not that maskedarray is not ready for general use on its own, but rather because subtle compatibility issues with numpy.ma may break matplotlib functions (I found one, and wouldn't be surprised to find others). Before removing numpy.ma, I'd like to have some time to run the whole matplotlib testsuite with maskedarray as the core ma package.

Besides that, I'm all for it; the TimeSeries package is *extremely* useful to me.

David

From mattknox_ca at hotmail.com Thu May 17 09:44:43 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Thu, 17 May 2007 13:44:43 +0000 (UTC)
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> Message-ID:

> I'm inclined to move his masked array over to ma wholesale. The fact
> that Pierre sees it as his baby is very important to me. If it doesn't
> have significant compatibility issues then I'm all for it. I'm mainly
> interested in hearing how people actually using numpy.ma would respond.
>
> Perhaps we should move it over and start deprecating numpy.ma??
>
> -Travis

I am probably a little biased on this because I've been using Pierre's maskedarray module with the timeseries module for a while now, but I am definitely in favor of this. The timeseries module in its current form wouldn't really be possible with the current numpy.ma because of subclassing issues (well, it would be a lot uglier anyway). The maskedarray module in the sandbox works very well, and the few bugs I've encountered Pierre has been quick to implement fixes for.
The fact that it works seamlessly as a drop-in replacement in matplotlib is a good sign as far as backwards compatibility.

- Matt

From Glen.Mabey at swri.org Thu May 17 09:48:17 2007
From: Glen.Mabey at swri.org (Glen W. Mabey)
Date: Thu, 17 May 2007 08:48:17 -0500
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <464C183E.6010601@ieee.org> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> Message-ID: <20070517134817.GF27364@bams.ccf.swri.edu>

On Thu, May 17, 2007 at 02:54:22AM -0600, Travis Oliphant wrote:
> Pierre GM wrote:
> >> How far away is maskedarray from being able to replace numpy.ma?
> >>
> >
> > So far, it does everything that numpy.core.ma does, with I believe more
> > flexibility and some additional features (hard/soft mask, easy
> > subclassing...). Personally, I stopped using numpy.core.ma completely (unless
> > for test purposes), and I even managed to convince another user to switch to
> > this module for his own work (cf the TimeSeries package).
> >
> > Of course, I expect that there are still some bugs here and there, but I'm
> > working on it (when I find them). It's a tad slower than numpy.core.ma, but
> > that's a side effect of the flexibility. In the long term, there are some
> > plans about porting the module to C, but we're talking in quarters rather
> > than in weeks.
> >
> > About when it'll be promoted outside the sandbox: well, we need more feedback
> > from users, as usual. I guess that's the principal stumbling block. I'd be
> > quite grateful if you could try it and let me know what you think. I grew
> > fond of this child born in pain (explaining to my bosses why I spent so much
> > time on something which is only remotely connected to what I am paid to do...),
> > so I make sure that the baby behaves...

I never received this response from Pierre, even though I have a white-filter on scipy-user ... strange.

> I'm inclined to move his masked array over to ma wholesale. The fact
> that Pierre sees it as his baby is very important to me. If it doesn't
> have significant compatibility issues then I'm all for it. I'm mainly
> interested in hearing how people actually using numpy.ma would respond.
>
> Perhaps we should move it over and start deprecating numpy.ma??

Meaning that there would exist both numpy.ma and numpy.maskedarray for a period? That sounds great to me. I'll happily switch over once I have more confidence in its stability.

Glen

From pgmdevlist at gmail.com Thu May 17 10:00:53 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 17 May 2007 10:00:53 -0400
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <464C183E.6010601@ieee.org> References: <464B5BF6.6050007@stsci.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> Message-ID: <200705171000.53908.pgmdevlist@gmail.com>

On Thursday 17 May 2007 04:54:22 Travis Oliphant wrote:
> I'm inclined to move his masked array over to ma wholesale. The fact
> that Pierre sees it as his baby is very important to me.

Well, all the credits should go to Paul Dubois, the original author of numpy.core.ma, and the scores of people who helped him. maskedarray would not have come into existence without them.

> If it doesn't
> have significant compatibility issues then I'm all for it.

None that I've seen so far. MaskedArrays created by one package can be read by the other. With a one-line edit of a file (numerix/ma/__init__.py), matplotlib runs seamlessly with the new package.

But once again, it is not completely bug-free: I just found a couple of bugs this week-end, or even yesterday, that were brought to my attention by Matt Knox when playing with the TimeSeries package. Nothing major, just some minor annoyances.

> I'm mainly
> interested in hearing how people actually using numpy.ma would respond.

One issue is that maskedarray *is* slower than numpy.core.ma. If performance is preferred over flexibility, then one should stick to numpy.core.ma. Some basic estimations show a slowdown of about 15%. I'd be quite interested in hearing about actual users of the packages, in order to find what points to implement/modify.

#----------------------
On Thursday 17 May 2007 09:42:27 David Huard wrote:
> At the risk of offending Pierre's baby, I think that's a little premature.
> Not that maskedarray is not ready for general use on its own, but rather
> because subtle compatibility issues with numpy.ma may break matplotlib
> functions (I found one, and wouldn't be surprised to find others).

David, I wouldn't speak about compatibility, just about bugs: the problem was in the implementation of .max() w/ maskedarray. The origin of the problem was (is still) in umath.maximum.reduce, which doesn't accept axis=None, so a numpy problem ;). But I agree: switching may have some subtle consequences in matplotlib (nothing that can't be quickly fixed, however). What do Eric Firing, John Hunter and the other mpl developers think?

My only request would be for more users! That's the only way I can find how to improve maskedarray.

From david.huard at gmail.com Thu May 17 11:22:56 2007
From: david.huard at gmail.com (David Huard)
Date: Thu, 17 May 2007 11:22:56 -0400
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <200705171000.53908.pgmdevlist@gmail.com> References: <464B5BF6.6050007@stsci.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> <200705171000.53908.pgmdevlist@gmail.com> Message-ID: <91cf711d0705170822n3bf5960eq1d56b27f583dbdd2@mail.gmail.com>

Pierre,

2007/5/17, Pierre GM :
>
> But I agree: switching may have some subtle consequences in
> matplotlib (nothing that can't be quickly fixed, however).

All the examples in backend_driver.py seem to run fine (+ others I added that contained masked arrays).

David
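For readers who have used neither package, the basic masking API the two implementations share, in a tiny sketch with made-up data (spelled here with numpy.core.ma; the sandbox maskedarray module mirrors it):

    import numpy.core.ma as ma

    # Mask a sentinel value, then reduce over the unmasked entries only.
    x = ma.masked_values([1.0, -999.0, 3.0], -999.0)
    print x.max()   # 3.0; the masked -999.0 entry is ignored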
From robert.kern at gmail.com Thu May 17 11:56:33 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 17 May 2007 10:56:33 -0500
Subject: [Numpy-discussion] Expected behavior of numpy object arrays. Is this a bug?
In-Reply-To: <97670e910705170452v42335c53o2321c03623eff5a7@mail.gmail.com> References: <97670e910705170452v42335c53o2321c03623eff5a7@mail.gmail.com> Message-ID: <464C7B31.5050606@gmail.com>

Rudolph van der Merwe wrote:
> Can someone please confirm if the following is the expected behavior
> of numpy ndarrays of dtype=object, i.e. object arrays? I suspect it
> might be a bug.

It's expected. The array() function has to make some guesses as to what you meant when you pass it a sequence of sequences and are trying to make an object array. It tries to go as deep as it can. When you give it a list of len-2 sequences, it thinks you want a (2, 2) array. When you give it a list of a len-2 sequence and a len-3 sequence, the only thing it can do is make a (2,) array of the two objects.

The best way to build the array you want is to make the object array first, and assign the contents:

In [1]: from numpy import *

In [2]: a = array([1, 2])

In [3]: b = array([5, 6, 7])

In [4]: c = array([3, 4])

In [5]: oa1 = empty([2], dtype=object)

In [6]: oa2 = empty([2], dtype=object)

In [7]: oa1[:] = [a, b]

In [8]: oa2[:] = [a, c]

In [9]: oa1
Out[9]: array([[1 2], [5 6 7]], dtype=object)

In [10]: oa2
Out[10]: array([[1 2], [3 4]], dtype=object)

In [11]: oa1[0].dtype
Out[11]: dtype('int32')

In [12]: oa2[0].dtype
Out[12]: dtype('int32')

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From oliphant at ee.byu.edu Thu May 17 12:17:26 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 17 May 2007 10:17:26 -0600
Subject: [Numpy-discussion] Unfortunate user experience with max()
In-Reply-To: <200705171051.53370.a.u.r.e.l.i.a.n@gmx.net> References: <20070517020055.GA10408@arbutus.physics.mcmaster.ca> <200705171051.53370.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <464C8016.2070604@ee.byu.edu>

Johannes Loehnert wrote:

>> numpy.sum(x, axis=None, dtype=None, out=None)
>> vs.
>> sum(sequence, start=0)
>>

The problem here is that Numeric had sum before Python had sum. So, there is a legacy issue. As you can tell, there are limits to my concern about shadowing builtins. That's what name-spaces are for :-) But, I'm sympathetic to bite-wounds.

-Travis

From rudolphv at gmail.com Thu May 17 15:03:59 2007
From: rudolphv at gmail.com (Rudolph van der Merwe)
Date: Thu, 17 May 2007 21:03:59 +0200
Subject: [Numpy-discussion] Expected behavior of numpy object arrays. Is this a bug?
In-Reply-To: <464C7B31.5050606@gmail.com> References: <97670e910705170452v42335c53o2321c03623eff5a7@mail.gmail.com> <464C7B31.5050606@gmail.com> Message-ID: <97670e910705171203g1d96adbdj7be78c89d487e1b4@mail.gmail.com>

Robert,

Thanks for the suggestion on creating an empty object array first (of the needed shape) and then assigning the entries. It works like a charm.

Rudolph

On 5/17/07, Robert Kern wrote:
> Rudolph van der Merwe wrote:
> > Can someone please confirm if the following is the expected behavior
> > of numpy ndarrays of dtype=object, i.e. object arrays? I suspect it
> > might be a bug.
>
> It's expected. The array() function has to make some guesses as to what you
> meant when you pass it a sequence of sequences and are trying to make an object
> array. It tries to go as deep as it can. When you give it a list of len-2
> sequences, it thinks you want a (2, 2) array. When you give it a list of a len-2
> sequence and a len-3 sequence, the only thing it can do is make a (2,) array of
> the two objects.
>
> The best way to build the array you want is to make the object array first, and
> assign the contents:
>
> In [1]: from numpy import *
>
> In [2]: a = array([1, 2])
>
> In [3]: b = array([5, 6, 7])
>
> In [4]: c = array([3, 4])
>
> In [5]: oa1 = empty([2], dtype=object)
>
> In [6]: oa2 = empty([2], dtype=object)
>
> In [7]: oa1[:] = [a, b]
>
> In [8]: oa2[:] = [a, c]
>
> In [9]: oa1
> Out[9]: array([[1 2], [5 6 7]], dtype=object)
>
> In [10]: oa2
> Out[10]: array([[1 2], [3 4]], dtype=object)
>
> In [11]: oa1[0].dtype
> Out[11]: dtype('int32')
>
> In [12]: oa2[0].dtype
> Out[12]: dtype('int32')
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> that is made terrible by our own mad attempt to interpret it as though it had
> an underlying truth."
> -- Umberto Eco
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

--
Rudolph van der Merwe

From efiring at hawaii.edu Thu May 17 18:33:11 2007
From: efiring at hawaii.edu (Eric Firing)
Date: Thu, 17 May 2007 18:33:11 -0400
Subject: [Numpy-discussion] [SciPy-user] median filter with clipping
In-Reply-To: <464C183E.6010601@ieee.org> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> Message-ID: <464CD827.3090107@hawaii.edu>

Travis Oliphant wrote:
[...]
> I'm inclined to move his masked array over to ma wholesale. The fact
> that Pierre sees it as his baby is very important to me. If it doesn't
> have significant compatibility issues then I'm all for it. I'm mainly
> interested in hearing how people actually using numpy.ma would respond.
>
> Perhaps we should move it over and start deprecating numpy.ma??

+1

Eric

> -Travis

From lyang at unb.ca Thu May 17 19:44:07 2007
From: lyang at unb.ca (Yang, Lu)
Date: Thu, 17 May 2007 20:44:07 -0300
Subject: [Numpy-discussion] Numpy 1.0.2 install question
In-Reply-To: <464CD827.3090107@hawaii.edu> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> <464CD827.3090107@hawaii.edu> Message-ID: <1179445447.464ce8c72527a@webmail.unb.ca>

Hi,

I am installing numpy 1.0.2 on an AMD Opteron running Solaris 10. However, I got the following error after running 'python setup.py install'. I highly appreciate your help.

Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_3649
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in /usr/sfw/lib
libraries mkl,vml,guide not found in /usr/local/lib
libraries mkl,vml,guide not found in /usr/lib
NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in /usr/sfw/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/lib
NOT AVAILABLE

atlas_blas_info:
libraries f77blas,cblas,atlas not found in /usr/sfw/lib
libraries f77blas,cblas,atlas not found in /usr/local/lib
libraries f77blas,cblas,atlas not found in /usr/lib
NOT AVAILABLE

/net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1301: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/sfw/lib libraries blas not found in /usr/local/lib libraries blas not found in /usr/lib NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1310: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1313: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/sfw/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/sfw/lib libraries lapack_atlas not found in /usr/sfw/lib libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /usr/sfw/lib libraries lapack_atlas not found in /usr/sfw/lib libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1210: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/sfw/lib libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1221: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.2/numpy/distutils/system_info.py:1224: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
warnings.warn(LapackSrcNotFoundError.__doc__) NOT AVAILABLE running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/src/scalartypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src/arraytypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h' to sources. adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/src/scalartypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src/arraytypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h' to sources. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h'] building extension "numpy.core._dotblas" sources building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources ### Warning: Using unoptimized lapack ### adding 'numpy/linalg/lapack_litemodule.c' to sources. adding 'numpy/linalg/zlapack_lite.c' to sources. adding 'numpy/linalg/dlapack_lite.c' to sources. adding 'numpy/linalg/blas_lite.c' to sources. adding 'numpy/linalg/dlamch.c' to sources. adding 'numpy/linalg/f2c_lite.c' to sources. 
building extension "numpy.random.mtrand" sources customize SunFCompiler customize SunFCompiler customize SunFCompiler using config C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/sfw/include/python2.3 -c' cc: _configtest.c "_configtest.c", line 7: #error: No _WIN32 cc: acomp failed for _configtest.c "_configtest.c", line 7: #error: No _WIN32 cc: acomp failed for _configtest.c failure. removing: _configtest.c _configtest.o building data_files sources running build_py copying build/src.solaris-2.10-i86pc-2.3/numpy/__config__.py -> build/lib.solaris-2.10-i86pc-2.3/numpy copying build/src.solaris-2.10-i86pc-2.3/numpy/distutils/__config__.py -> build/lib.solaris-2.10-i86pc-2.3/numpy/distutils running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'numpy.core.multiarray' extension compiling C sources C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron compile options: '-Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/sfw/include/python2.3 -c' /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found error: Command "/sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so" failed with exit status 1
From efiring at hawaii.edu Thu May 17 20:05:18 2007 From: efiring at hawaii.edu (Eric Firing) Date: Thu, 17 May 2007 20:05:18 -0400 Subject: [Numpy-discussion] [SciPy-user] median filter with clipping In-Reply-To: <200705171000.53908.pgmdevlist@gmail.com> References: <464B5BF6.6050007@stsci.edu> <200705161828.41791.pgmdevlist@gmail.com> <464C183E.6010601@ieee.org> <200705171000.53908.pgmdevlist@gmail.com> Message-ID: <464CEDBE.8000506@hawaii.edu> Pierre GM wrote: [...] > > David, I wouldn't speak about compatibility, just about bugs: the problem was > in the implementation of .max() w/ maskedarray. The origin of the problem was > (is still) in umath.maximum.reduce that doesn't accept axis=None, so a numpy > problem ;). But I agree: switching may have some subtle consequences in > matplotlib (nothing that can't be quickly fixed, however). What do Eric > Firing, John Hunter and the other mpl developers think? I think this would be a good time to make the switch. We are going to be stripping out the Numeric and numarray support, so let's finalize the new ma capabilities at the same time. I think that maskedarray is actually closer to being a drop-in replacement for ndarray than ma is, and I think it will be easier to work with. I am confident that any problems can be solved easily. A 15% speed penalty doesn't bother me; presumably it can be reduced later. > > My only request would be for more users! That's the only way I can find how > to improve maskedarray.
Moving maskedarray from the sandbox into svn numpy will make it easier for mpl devels to use it while doing and testing the mpl numpification. (I won't be able to work on this until June.) I suppose that it will be necessary for mpl to support both for a while, unfortunately, but I haven't thought this through carefully. Eric
From pearu at cens.ioc.ee Fri May 18 04:30:09 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 18 May 2007 10:30:09 +0200 Subject: [Numpy-discussion] numpy.distutils improvements (was Re: NumPy 1.0.3 release next week) In-Reply-To: <464393E0.3030900@ee.byu.edu> References: <464393E0.3030900@ee.byu.edu> Message-ID: <464D6411.1060406@cens.ioc.ee> Hi Travis, I have pretty much completed work with http://projects.scipy.org/scipy/numpy/ticket/522 that is about using proper compilers/linkers for individual extension modules instead of using the one compiler/linker set for all extension modules in distribution - this allows one to define Fortran77, Fortran90, C++ based extension modules in the same setup.py file. I haven't applied patches to svn yet as they modify build_ext.py and build_clib.py modules quite a bit and I haven't tested my changes with cygwin/mingw compilers, though related bugs, if any, should be easy to fix. What are your current plans with numpy 1.0.3 release? Do we have enough time (couple of days, I guess) to iron out numpy.distutils for untested platforms or shall I wait until 1.0.3 is out? Best regards, Pearu Travis Oliphant wrote: > Hi all, > > I'd like to release NumPy 1.0.3 next week (on Tuesday) along with a new > release of SciPy. Please let me know of changes that you are planning > on making before then. > > Best, > > -Travis > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion
From openopt at ukr.net Fri May 18 08:10:23 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 18 May 2007 15:10:23 +0300 Subject: [Numpy-discussion] best way for storing extensible data? Message-ID: <464D97AF.2090304@ukr.net> hi all, what is the best way for storing data in a numpy array? (the amount of memory for preallocating is unknown) Currently I use just a Python list, i.e. r = [] for i in xrange(N): # N is very big ... r.append(some_value) Thx, D.
From cookedm at physics.mcmaster.ca Fri May 18 08:45:31 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 18 May 2007 08:45:31 -0400 Subject: [Numpy-discussion] best way for storing extensible data? In-Reply-To: <464D97AF.2090304@ukr.net> References: <464D97AF.2090304@ukr.net> Message-ID: <20070518124531.GA21171@arbutus.physics.mcmaster.ca> On Fri, May 18, 2007 at 03:10:23PM +0300, dmitrey wrote: > hi all, > what is the best way for storing data in a numpy array? (the amount of memory for > preallocating is unknown) > Currently I use just a Python list, i.e. > > r = [] > for i in xrange(N): # N is very big > ... > r.append(some_value) In the above, you know how big you need b/c you know N ;-) so empty is a good choice: r = empty((N,), dtype=float) for i in xrange(N): r[i] = some_value empty() allocates the array, but doesn't clear it or anything (as opposed to zeros(), which would set the elements to zero). If you don't know N, then fromiter would be best: def ivalues(): while some_condition(): ...
yield some_value r = fromiter(ivalues(), dtype=float) It'll act like appending to a list, where it will grow the array (by doubling, I think) when it needs to, so appending each value is amortized to O(1) time. A list though would use more memory per element as each element is a full Python object. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
From peridot.faceted at gmail.com Fri May 18 11:01:37 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 18 May 2007 11:01:37 -0400 Subject: [Numpy-discussion] best way for storing extensible data? In-Reply-To: <20070518124531.GA21171@arbutus.physics.mcmaster.ca> References: <464D97AF.2090304@ukr.net> <20070518124531.GA21171@arbutus.physics.mcmaster.ca> Message-ID: On 18/05/07, David M. Cooke wrote: > It'll act like appending to a list, where it will grow the array (by > doubling, I think) when it needs to, so appending each value is > amortized to O(1) time. A list though would use more memory > per element as each element is a full Python object. That said, don't be afraid to use a list. The memory penalty is not high (an extra 50% or 100% or so, just what it costs to duplicate an array, and about as much as is wasted in the amortizing) and Python's list-handling can be quite efficient and convenient. List comprehensions, in particular, can be a very good way to write array operations that would otherwise be cumbersome. Iterator comprehensions can fill some of the same role. Anne
From tim.hochberg at ieee.org Fri May 18 12:01:43 2007 From: tim.hochberg at ieee.org (Timothy Hochberg) Date: Fri, 18 May 2007 09:01:43 -0700 Subject: [Numpy-discussion] best way for storing extensible data? In-Reply-To: <20070518124531.GA21171@arbutus.physics.mcmaster.ca> References: <464D97AF.2090304@ukr.net> <20070518124531.GA21171@arbutus.physics.mcmaster.ca> Message-ID: On 5/18/07, David M. Cooke wrote: > > On Fri, May 18, 2007 at 03:10:23PM +0300, dmitrey wrote: > > hi all, > > what is the best way for storing data in a numpy array? (the amount of memory for > > preallocating is unknown) > > Currently I use just a Python list, i.e. > > > > r = [] > > for i in xrange(N): # N is very big > > ... > > r.append(some_value) > > In the above, you know how big you need b/c you know N ;-) so empty is > a good choice: > > r = empty((N,), dtype=float) > for i in xrange(N): > r[i] = some_value > > empty() allocates the array, but doesn't clear it or anything (as > opposed to zeros(), which would set the elements to zero). > > If you don't know N, then fromiter would be best: > > def ivalues(): > while some_condition(): > ... > yield some_value > > r = fromiter(ivalues(), dtype=float) > > It'll act like appending to a list, where it will grow the array (by > doubling, I think) It uses 50% overallocation. It also reallocs at the end in an attempt to give back any extra space. I'm not sure if that's actually effective, though. when it needs to, so appending each value is > amortized to O(1) time. A list though would use more memory > per element as each element is a full Python object. Note that if you are pulling from an iterator, even if you know the length, fromiter may well be better since you can specify a length with the optional third argument. r = fromiter(some_value_iter, float, N) -tim -- //=][=\\ tim.hochberg at ieee.org
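A concrete sketch of the two fromiter() modes discussed in this thread; ivalues below is a made-up stand-in generator, and the exact repr spacing depends on the numpy version:

>>> from numpy import fromiter
>>> def ivalues(n):
...     for i in xrange(n):
...         yield i * 0.5
...
>>> fromiter(ivalues(4), dtype=float)    # length unknown up front: the array grows as needed
array([ 0. ,  0.5,  1. ,  1.5])
>>> fromiter(ivalues(4), float, 4)       # count supplied: a single exact allocation
array([ 0. ,  0.5,  1. ,  1.5])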
From oliphant at ee.byu.edu Fri May 18 12:26:40 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 May 2007 10:26:40 -0600 Subject: [Numpy-discussion] numpy.distutils improvements (was Re: NumPy 1.0.3 release next week) In-Reply-To: <464D6411.1060406@cens.ioc.ee> References: <464393E0.3030900@ee.byu.edu> <464D6411.1060406@cens.ioc.ee> Message-ID: <464DD3C0.7000900@ee.byu.edu> Pearu Peterson wrote: >Hi Travis, > >I have pretty much completed work with > > http://projects.scipy.org/scipy/numpy/ticket/522 > >that is about using proper compilers/linkers for >individual extension modules instead of using the >one compiler/linker set for all extension modules >in distribution - this allows one to define Fortran77, >Fortran90, C++ based extension modules in the same >setup.py file. > >I haven't applied patches to svn yet as they modify >build_ext.py and build_clib.py modules quite a bit >and I haven't tested my changes with cygwin/mingw >compilers, though related bugs, if any, should be >easy to fix. > >What are your current plans with numpy 1.0.3 release? >Do we have enough time (couple of days, I guess) >to iron out numpy.distutils for untested platforms >or shall I wait until 1.0.3 is out? > > Let's get your changes in to 1.0.3. I could use a few days to get it out, anyway. My wife's grandmother died this week and I have a funeral to attend over the next couple of days. So, let's put Monday as the target date. Is that enough time, or should we put Tuesday down? Thanks so much, -Travis
From travis at enthought.com Mon May 14 12:35:29 2007 From: travis at enthought.com (Travis Vaught) Date: Mon, 14 May 2007 11:35:29 -0500 Subject: [Numpy-discussion] Documentation In-Reply-To: References: Message-ID: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Just to be thorough... enthought's endo package generates the attached docs when running this: python ~/svnrepos/enthought/src/lib/enthought/endo/scripts/endo.py -r core -p core --rst -d docs -------------- next part -------------- A non-text attachment was scrubbed... Name: numpydocs.tgz Type: application/octet-stream Size: 50458 bytes Desc: not available URL: -------------- next part -------------- I don't want to confuse the numpy documentation landscape, but if anything in endo is useful, we'd love to see it get some use. Thanks, Travis On May 14, 2007, at 9:48 AM, Charles R Harris wrote: > Hi All, > > I've been trying out epydoc to see how things look. Apart from all > the error messages, it does run. However, I didn't like the > appearance of the documentation structured as suggested in numpy/ > doc, either in the terminal or in the generated html. In > particular, I didn't like the consolidated lists and the > interpreted variable names. I can see where these might be useful > down the road, but for now I stuck to definition lists with > italicized headings and plain old variable names. > > Another problem we might want to address is the doctest blocks. > Numpy inserts blank lines when it is printing out multidimensional > arrays and because blank lines indicate the end of the block that > screws up the formatting. It would also be nice to make the SeeAlso > entries links to relevant functions. > > Anyway, I've attached the html file generated from fromnumeric.py > for your flamage and suggestions. The routines I restructured are > sort, argsort, searchsorted, diagonal, std, var, and mean.
> > Chuck > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion
From james at nbn.ac.za Wed May 16 18:04:29 2007 From: james at nbn.ac.za (James Dominy) Date: Thu, 17 May 2007 00:04:29 +0200 Subject: [Numpy-discussion] Building an ndarray around an already allocated area of memory Message-ID: <464B7FED.1060704@nbn.ac.za> Hi, First time post, and pretty new to numpy. I'm wrapping another library (SUNDIALS) in python, using ctypes, and was wondering if there were a way to create an ndarray python object around an array (ctypes.c_float*32, for example) without copying it? For example, I have an array created using ctypes ---- import ctypes import numpy ctarray = (ctypes.c_float*32)() ---- And want to create an ndarray such that both the ndarray and the ctypes array are using the same actual memory area. The reason for doing this is that using numpy methods to manipulate the contents of 'ctarray' may be useful to some people, and the array should optimally be changed in place, rather than creating a copy as an ndarray, manipulating, then copying back. Of course, if the ndarray object were garbage collected, the memory allocated for the actual array shouldn't be deallocated. Is this possible? Thanks, James
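One way to get the no-copy wrapping James asks about is numpy.frombuffer(), which builds an ndarray on top of any object exporting the buffer interface (ctypes arrays do) and keeps a reference to that object, so the underlying memory is not freed while the ndarray is alive. A minimal sketch, not tested against every numpy/ctypes combination:

>>> import ctypes
>>> import numpy
>>> ctarray = (ctypes.c_float*32)()
>>> a = numpy.frombuffer(ctarray, dtype=numpy.float32)   # shares memory, no copy is made
>>> a[0] = 3.0                                           # writes through to the ctypes array
>>> ctarray[0]
3.0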
From oliphant at ee.byu.edu Fri May 18 12:37:17 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 May 2007 10:37:17 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 for Tuesday, May 22 Message-ID: <464DD63D.2070203@ee.byu.edu> Hi all, Thank you to the many people who have contributed to cleaning up the tickets in preparation for NumPy 1.0.3 and to the people who have reported bugs and problems. In order to attend to more of the issues and to test out some of Pearu's new changes to numpy.distutils, I've delayed putting out 1.0.3 by about one week to next Tuesday. Best regards, -Travis P.S. I will also be busy this weekend attending to the funeral of my wife's grandmother, Blanche Hansen. She died on May 15th after a short illness. She lived a full and productive life of almost 96 years. If you've never spoken to someone who remembers World War I, then you are missing out on remarkable insight into our changing global society.
From charlesr.harris at gmail.com Fri May 18 12:59:21 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 May 2007 10:59:21 -0600 Subject: [Numpy-discussion] Documentation In-Reply-To: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> References: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Message-ID: On 5/14/07, Travis Vaught wrote: > > Just to be thorough... > > enthought's endo package generates the attached docs when running this: > > python ~/svnrepos/enthought/src/lib/enthought/endo/scripts/endo.py -r > core -p core --rst -d docs > > > > I don't want to confuse the numpy documentation landscape, but if > anything in endo is useful, we'd love to see it get some use. Thanks, Examples of the various types of documentation produced by the different markups can be found in the module fromnumeric. Do you have any preferences? I note that the html produced by endo has no background coloration or emphasis of list item labels such as produced by epydoc. It looks like the first can be changed in the default.css, but I wasn't clear if the latter could also be added. There now seem to be three different document creation programs: pydoc, epydoc, and endo. Does anyone know how compatible these all are? I think it would be best to go with the most widespread program that looks like it will be maintained. Chuck
From charlesr.harris at gmail.com Fri May 18 13:02:54 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 May 2007 11:02:54 -0600 Subject: [Numpy-discussion] numpy.distutils improvements (was Re: NumPy 1.0.3 release next week) In-Reply-To: <464DD3C0.7000900@ee.byu.edu> References: <464393E0.3030900@ee.byu.edu> <464D6411.1060406@cens.ioc.ee> <464DD3C0.7000900@ee.byu.edu> Message-ID: Hi Travis, On 5/18/07, Travis Oliphant wrote: > > Pearu Peterson wrote: > > >Hi Travis, > > > >I have pretty much completed work with > > > > http://projects.scipy.org/scipy/numpy/ticket/522 > > > >that is about using proper compilers/linkers for > >individual extension modules instead of using the > >one compiler/linker set for all extension modules > >in distribution - this allows one to define Fortran77, > >Fortran90, C++ based extension modules in the same > >setup.py file. > > > >I haven't applied patches to svn yet as they modify > >build_ext.py and build_clib.py modules quite a bit > >and I haven't tested my changes with cygwin/mingw > >compilers, though related bugs, if any, should be > >easy to fix. > > > >What are your current plans with numpy 1.0.3 release? > >Do we have enough time (couple of days, I guess) > >to iron out numpy.distutils for untested platforms > >or shall I wait until 1.0.3 is out? > > > > > > Let's get your changes in to 1.0.3. I could use a few days to get it > out, anyway. My wife's grandmother died this week and I have a funeral > to attend over the next couple of days. > > So, let's put Monday as the target date. Is that enough time, or should > we put Tuesday down? I think we should put it off until you have time. You are a crucial part of the release and things don't need to be rushed. Chuck
From aisaac at american.edu Fri May 18 14:19:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 18 May 2007 14:19:56 -0400 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Message-ID: On Fri, 18 May 2007, Charles R Harris apparently wrote: > There now seem to be three different document creation programs: pydoc, > epydoc, and endo. Does anyone know how compatible these all are? I think it > would be best to go with the most widespread program that looks like it will > be maintained. It is probably a good idea to separate this into two pieces: - markup language - document creation program I think there is a substantial community behind reST, which can be supported by a variety of document creation programs. Cheers, Alan Isaac
From charlesr.harris at gmail.com Fri May 18 14:24:20 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 May 2007 12:24:20 -0600 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Message-ID: On 5/18/07, Alan G Isaac wrote: > > On Fri, 18 May 2007, Charles R Harris apparently wrote: > > There now seem to be three different document creation programs: pydoc, > > epydoc, and endo. Does anyone know how compatible these all are? I think it > > would be best to go with the most widespread program that looks like it will
> > It is probably a good idea to separate this into two pieces: > > - markup language > - document creation program > > I think there is a substantial community behind reST, > which can be supported by a variety of document creation > programs. Yes, and reST seems to be compatible with all three programs, so that is an excellent base. What I am searching for is some consensus as to what the documentation template should look like. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Fri May 18 15:49:33 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 18 May 2007 15:49:33 -0400 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Message-ID: On Fri, 18 May 2007, Charles R Harris apparently wrote: > reST seems to be compatible with all three programs, so > that is an excellent base. What I am searching for is some > consensus as to what the documentation template should > look like. I hope this is not too off topic. It seems that there is at least one question for Enthought here: since epydoc has the MIT license, why not work on making epydoc traits aware instead of developing endo separately? I assume that an answer to that question will provide lots of relevant information? Cheers, Alan Isaac From Shawn.Gong at drdc-rddc.gc.ca Fri May 18 15:50:15 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Fri, 18 May 2007 15:50:15 -0400 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> Hi List, I am trying to install numpy 1.0.1 on Linux. (Numeric and numarray have been fine) System info: Redhat Linux kernel 2.4 with gcc 3.2.3 but no separate FORTRAN compiler. It has a Fortran 77 compiler (the one which comes as part of gcc) My questions: 1) install can't find ATLAS (*.a) that I specify. Do they have to be *.so files? Do the *.a file sizes look right? Where can I get libblas.so liblapack.so files? 2) do I need a FORTRAN compiler? Will Fortran 77 compiler do? Thanks, Shawn Followings are error messages: ----------------------------- When I type "python setup.py install > numpy.out", I get: Screen display as follows: ----------------------------- Running from numpy source directory. /home/sgong/dev/numpy-1.0.1/numpy/distutils/system_info.py:934: UserWarning: ********************************************************************* ??? Lapack library (from ATLAS) is probably incomplete: ????? size of /usr/lib/liblapack.so is 3676k (expected >4000k) ????Follow the instructions in the KNOWN PROBLEMS section of the file ??? numpy/INSTALL.txt. ********************************************************************* ? 
I then asked system admin people to install a complete ATLAS onto /usr/local/lib/atlas -rw-r--r-- 1 root root 8049172 May 15 08:49 libatlas.a -rw-r--r-- 1 root root 279016 May 15 08:49 libcblas.a -rw-r--r-- 1 root root 342062 May 15 08:49 libf77blas.a -rw-r--r-- 1 root root 5314268 May 15 08:49 liblapack.a -rw-r--r-- 1 root root 279590 May 15 08:49 libptcblas.a -rw-r--r-- 1 root root 342430 May 15 08:49 libptf77blas.a -rw-r--r-- 1 root root 320316 May 15 08:49 libtstatlas.a The system admin said that he doesn't know how to get/install *.so files; he can only get *.a libraries. I also changed site.cfg to enter "library_dirs = /usr/local/lib/atlas/" Please find the attached numpy.out.txt file. Some of the lines are: -------------------------------- F2PY Version 2_3473 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /home/sgong/dev/dist/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/lib/atlas libraries lapack,blas not found in /usr/local/lib/atlas/ libraries lapack,blas not found in /home/sgong/dev/dist/lib libraries lapack,blas not found in /usr/local/lib/atlas libraries lapack,blas not found in /usr/local/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = c Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc _configtest.o -L/usr/lib -llapack -lblas -o _configtest _configtest.o(.text+0x15): In function `main': /home/sgong/dev/numpy-1.0.1/_configtest.c:5: undefined reference to `ATL_buildinfo' /usr/lib/liblapack.so: undefined reference to `e_wsfe' /usr/lib/liblapack.so: undefined reference to `z_abs' /usr/lib/liblapack.so: undefined reference to `c_sqrt' /usr/lib/liblapack.so: undefined reference to `s_cmp' /usr/lib/liblapack.so: undefined reference to `z_exp' /usr/lib/liblapack.so: undefined reference to `c_exp' /usr/lib/liblapack.so: undefined reference to `etime_' /usr/lib/liblapack.so: undefined reference to `do_fio' /usr/lib/liblapack.so: undefined reference to `z_sqrt' /usr/lib/liblapack.so: undefined reference to `s_cat' /usr/lib/liblapack.so: undefined reference to `s_stop' /usr/lib/liblapack.so: undefined reference to `c_abs' /usr/lib/liblapack.so: undefined reference to `s_wsfe' /usr/lib/liblapack.so: undefined reference to `s_copy' collect2: ld returned 1 exit status _configtest.o(.text+0x15): In function `main': /home/sgong/dev/numpy-1.0.1/_configtest.c:5: undefined reference to `ATL_buildinfo' -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: libs.txt URL: From robert.kern at gmail.com Fri May 18 15:53:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 14:53:54 -0500 Subject: [Numpy-discussion] Documentation In-Reply-To: References: <05202BC1-10C1-4D14-8FA6-D311ED593942@enthought.com> Message-ID: <464E0452.10603@gmail.com> Alan G Isaac wrote: > I hope this is not too off topic. It seems that there is at > least one question for Enthought here: > since epydoc has the MIT license, > why not work on making epydoc traits aware instead of > developing endo separately? We are not developing endo. It was written some time ago for us, but there are no current plans for active development. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri May 18 15:58:30 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 14:58:30 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <464E0566.2020303@gmail.com> Gong, Shawn (Contractor) wrote: > Hi List, > > I am trying to install numpy 1.0.1 on Linux. (Numeric and numarray have been fine) > System info: Redhat Linux kernel 2.4 with gcc 3.2.3 but no separate FORTRAN compiler. > It has a Fortran 77 compiler (the one which comes as part of gcc) > > My questions: > 1) install can't find ATLAS (*.a) that I specify. > Do they have to be *.so files? No. > Do the *.a file sizes look right? Pretty much. > Where can I get libblas.so liblapack.so files? Don't bother. Try to get the libraries that you have working; see below. > 2) do I need a FORTRAN compiler? Will Fortran 77 compiler do? Yes. Yes. > I also changed site.cfg to enter > "library_dirs = /usr/local/lib/atlas/" Could you provide the complete site.cfg file and tell us where you put it relative to the numpy source directory? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Shawn.Gong at drdc-rddc.gc.ca Fri May 18 16:04:14 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Fri, 18 May 2007 16:04:14 -0400 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <464E0566.2020303@gmail.com> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> Thank you Robert for the quick reply. I have been fighting this for a while. site.cfg file is attached. 
It is sitting in my development dir called /home/sgong/dev/numpy-1.0.1/ (same as setup.py) Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Friday, May 18, 2007 3:59 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > Hi List, > > I am trying to install numpy 1.0.1 on Linux. (Numeric and numarray have been fine) > System info: Redhat Linux kernel 2.4 with gcc 3.2.3 but no separate FORTRAN compiler. > It has a Fortran 77 compiler (the one which comes as part of gcc) > > My questions: > 1) install can't find ATLAS (*.a) that I specify. > Do they have to be *.so files? No. > Do the *.a file sizes look right? Pretty much. > Where can I get libblas.so liblapack.so files? Don't bother. Try to get the libraries that you have working; see below. > 2) do I need a FORTRAN compiler? Will Fortran 77 compiler do? Yes. Yes. > I also changed site.cfg to enter > "library_dirs = /usr/local/lib/atlas/" Could you provide the complete site.cfg file and tell us where you put it relative to the numpy source directory? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: site.cfg Type: application/octet-stream Size: 71 bytes Desc: site.cfg URL: From robert.kern at gmail.com Fri May 18 16:09:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 15:09:17 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com> <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <464E07ED.9090506@gmail.com> Gong, Shawn (Contractor) wrote: > Thank you Robert for the quick reply. I have been fighting this for a > while. > > site.cfg file is attached. It is sitting in my development dir called > /home/sgong/dev/numpy-1.0.1/ (same as setup.py) You are missing some of the ATLAS libraries. See this section in the site.cfg.example (I don't recall if it made it into 1.0.1): # Defaults # ======== # The settings given here will apply to all other sections if not overridden. # This is a good place to add general library and include directories like # /usr/local/{lib,include} # #[DEFAULT] #library_dirs = /usr/local/lib #include_dirs = /usr/local/include # Optimized BLAS and LAPACK # ------------------------- # Use the blas_opt and lapack_opt sections to give any settings that are # required to link against your chosen BLAS and LAPACK, including the regular # FORTRAN reference BLAS and also ATLAS. Some other sections still exist for # linking against certain optimized libraries (e.g. [atlas], [lapack_atlas]), # however, they are now deprecated and should not be used. 
# # These are typical configurations for ATLAS (assuming that the library and # include directories have already been set in [DEFAULT]; the include directory # is important for the BLAS C interface): # #[blas_opt] #libraries = f77blas, cblas, atlas # #[lapack_opt] #libraries = lapack, f77blas, cblas, atlas -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From Shawn.Gong at drdc-rddc.gc.ca Fri May 18 16:17:49 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Fri, 18 May 2007 16:17:49 -0400 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <464E07ED.9090506@gmail.com> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com> <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E07ED.9090506@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682BA@ottawaex02.Ottawa.drdc-rddc.gc.ca> Hi Robert, 1) your reply: You are missing some of the ATLAS libraries. Do you mean that I need to install more ATLAS libraries? What are they? These are what I have right now: -rw-r--r-- 1 root root 8049172 May 15 08:49 libatlas.a -rw-r--r-- 1 root root 279016 May 15 08:49 libcblas.a -rw-r--r-- 1 root root 342062 May 15 08:49 libf77blas.a -rw-r--r-- 1 root root 5314268 May 15 08:49 liblapack.a -rw-r--r-- 1 root root 279590 May 15 08:49 libptcblas.a -rw-r--r-- 1 root root 342430 May 15 08:49 libptf77blas.a -rw-r--r-- 1 root root 320316 May 15 08:49 libtstatlas.a 2) do I need to add more entries in site.cfg? what to add? 3) is numpy-1.0.1 too old? Should I try 1.0.2? Thanks, Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Friday, May 18, 2007 4:09 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > Thank you Robert for the quick reply. I have been fighting this for a > while. > > site.cfg file is attached. It is sitting in my development dir called > /home/sgong/dev/numpy-1.0.1/ (same as setup.py) You are missing some of the ATLAS libraries. See this section in the site.cfg.example (I don't recall if it made it into 1.0.1): # Defaults # ======== # The settings given here will apply to all other sections if not overridden. # This is a good place to add general library and include directories like # /usr/local/{lib,include} # #[DEFAULT] #library_dirs = /usr/local/lib #include_dirs = /usr/local/include # Optimized BLAS and LAPACK # ------------------------- # Use the blas_opt and lapack_opt sections to give any settings that are # required to link against your chosen BLAS and LAPACK, including the regular # FORTRAN reference BLAS and also ATLAS. Some other sections still exist for # linking against certain optimized libraries (e.g. [atlas], [lapack_atlas]), # however, they are now deprecated and should not be used.
# # These are typical configurations for ATLAS (assuming that the library and # include directories have already been set in [DEFAULT]; the include directory # is important for the BLAS C interface): # #[blas_opt] #libraries = f77blas, cblas, atlas # #[lapack_opt] #libraries = lapack, f77blas, cblas, atlas -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
From robert.kern at gmail.com Fri May 18 16:23:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 15:23:41 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682BA@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com> <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E07ED.9090506@gmail.com> <2E58C246F17003499C141D334794D049027682BA@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <464E0B4D.4000401@gmail.com> Gong, Shawn (Contractor) wrote: > Hi Robert, > 1) your reply: You are missing some of the ATLAS libraries. > Do you mean that I need to install more ATLAS libraries? No, sorry, I meant that your site.cfg did not list all of the ATLAS libraries that need to be listed. > 2) do I need to add more entries in site.cfg? what to add? The [blas_opt] and [lapack_opt] sections in the example. > 3) is numpy-1.0.1 too old? Should I try 1.0.2? It would be a good idea, but not necessary. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From erin.sheldon at gmail.com Fri May 18 16:26:39 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 18 May 2007 16:26:39 -0400 Subject: [Numpy-discussion] Getting field lengths in extension module Message-ID: <331116dc0705181326y408234c7l58ae7c53108e1b5b@mail.gmail.com> Hi all - I'm writing an extension module and I have a PyArrayObject with fields created with PyArray_Zeros from a PyArray_Descr object. In other words, this is an array of non-homogeneous data records, for example: dtype([('x', '< [...]
From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Subject: RE: [Numpy-discussion] Linux numpy 1.0.1 install failed References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com> <2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E07ED.9090506@gmail.com> <2E58C246F17003499C141D334794D049027682BA@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0B4D.4000401@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682BB@ottawaex02.Ottawa.drdc-rddc.gc.ca> numpy-1.0.2 (from sourceforge site) doesn't even have the site.cfg file.
Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Friday, May 18, 2007 4:24 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > Hi Robert, > 1) your reply: You are missing some of the ATLAS libraries. > Do you mean that I need install more ATLAS libraries? No, sorry, I meant that your site.cfg did not list all of the ATLAS libraries that need to be listed. > 2) do I need to add more entries in site.cfg? what to add? The [blas_opt] and [lapack_opt] sections in the example. > 3) is numpy-1.0.1 too old? Should I try 1.0.2? It would be a good idea, but not necessary. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From robert.kern at gmail.com Fri May 18 16:43:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 15:43:19 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682BB@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682B6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682B7@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682B8@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0566.2020303@gmail.com><2E58C246F17003499C141D334794D049027682B9@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E07ED.9090506@gmail.com><2E58C246F17003499C141D334794D049027682BA@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E0B4D.4000401@gmail.com> <2E58C246F17003499C141D334794D049027682BB@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <464E0FE7.6070400@gmail.com> Gong, Shawn (Contractor) wrote: > numpy-1.0.2 (from sourceforge site) doesn't even have the site.cfg file. > Is it unnecessary fro this version or it is a mistake? The site.cfg file is something that people who are installing numpy need to write if they need to supply that information. That information will be different for each installation, so we do not supply one with false information. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Shawn.Gong at drdc-rddc.gc.ca Fri May 18 17:06:38 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Fri, 18 May 2007 17:06:38 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Message-ID: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> Sorry Robert, My email sent to mail-list bounced back 5 times. Shawn _____________________________________________ From: Gong, Shawn (Contractor) Sent: Friday, May 18, 2007 5:05 PM To: 'numpy-discussion-request at scipy.org' Subject: RE: [Numpy-discussion] Linux numpy 1.0.1 install failed Hi Robert, I added ref to site.cfg, but still getting the same error message: see "out.txt" file. 
It seems that install can't find the libraries in /usr/local/lib/atlas/ (see the "out.txt" file Line 14: libraries lapack,blas not found in /usr/local/lib/atlas) Shawn -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: out.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: site.cfg Type: application/octet-stream Size: 173 bytes Desc: site.cfg URL:
From robert.kern at gmail.com Fri May 18 17:55:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 16:55:32 -0500 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <464E20D4.6040308@gmail.com> Gong, Shawn (Contractor) wrote: > > Hi Robert, > > I added the ref to site.cfg, but I'm still getting the same error message: see > the "out.txt" file. > > It seems that install can't find the libraries in /usr/local/lib/atlas/ > (see the "out.txt" file Line 14: libraries lapack,blas not found in > /usr/local/lib/atlas) Delete your [atlas] section entirely. It is wrong. Follow the instructions in the example. To make this clear, use exactly the following text in your site.cfg: [blas_opt] library_dirs = /usr/local/lib/atlas libraries = f77blas, cblas, atlas [lapack_opt] library_dirs = /usr/local/lib/atlas libraries = lapack, f77blas, cblas, atlas -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From gael.varoquaux at normalesup.org Sun May 20 05:17:06 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 20 May 2007 11:17:06 +0200 Subject: [Numpy-discussion] [Pointer] [Doc-SIG] The docs, reloaded (fwd) Message-ID: <20070520091706.GC14894@clipper.ens.fr> Hi all, There is a very interesting discussion on python-dev and Doc-SIG about using reST to document python. I think that the ongoing effort to document numpy and friends with reST could benefit from interaction with these guys. Gaël ----- Forwarded message from Georg Brandl ----- To: Doc-Sig at python.org From: Georg Brandl Date: Sat, 19 May 2007 19:14:09 +0200 Cc: python-dev at python.org Subject: [Doc-SIG] The docs, reloaded Hi, over the last few weeks I've hacked on a new approach to Python's documentation. As Python already has an excellent documentation framework, the docutils, with a readable yet extendable markup format, reST, I thought that it should be possible to use those instead of the current LaTeX->latex2html toolchain. For the impatient: the result can be seen at . I've written a converter tool that handles most of the LaTeX markup and turns it into reST, as well as a builder tool that adds many custom directives and roles, and also features like index generation and cross-document linking. (What you can see at the URL is a completely static version of the docs, as it would be shipped with the distribution. For the real online docs, I have more plans; I'll come to that later.) So why the effort?
Here's a partial list of things that have already been improved: - the source is much more readable (for examples, try the "view source" links in the left navbar) - all function/class/module names are properly cross-referenced - the HTML pages are generated from templates, using a language similar to Django's template language - Python and C code snippets are syntax highlighted - for the offline version, there's a JavaScript enabled search function - the index is generated over all the documentation, making it easier to find stuff you need - the toolchain is pure Python, therefore can easily be shipped What more? If there is support for this approach, I have plans for things that can be added to the online version: - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would redirect you to the matching location. - "interactive patching": provide a "propose edit" link, leading to a Wiki-like page where you can edit the source. From the result, a diff is generated, which can be accepted, edited or rejected by the development team. This is even more straightforward than plain old comments. - the same infrastructure could be used for developers, with automatic checkin into subversion. - of course, plain old comments can be useful too. Concluding, a small caveat: The conversion/build tools are, of course, not complete yet. There are a number of XXX comments in the text, most of them indicate that the converter could not handle a situation -- that would have to be corrected once after conversion is done. Waiting for comments! Cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. _______________________________________________ Doc-SIG maillist - Doc-SIG at python.org http://mail.python.org/mailman/listinfo/doc-sig ----- End forwarded message ----- -- Gael Varoquaux, Groupe d'optique atomique, Laboratoire Charles Fabry de l'Institut d'Optique Campus Polytechnique, RD 128 91127 Palaiseau cedex FRANCE Tel : 33 (0) 1 64 53 33 49 - Fax : 33 (0) 1 64 53 31 01 Labs: 33 (0) 1 64 53 33 63 - 33 (0) 1 64 53 33 62
From robert.kern at gmail.com Sun May 20 08:26:10 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 20 May 2007 07:26:10 -0500 Subject: [Numpy-discussion] [Pointer] [Doc-SIG] The docs, reloaded (fwd) In-Reply-To: <20070520091706.GC14894@clipper.ens.fr> References: <20070520091706.GC14894@clipper.ens.fr> Message-ID: <46503E62.8020100@gmail.com> Gael Varoquaux wrote: > Hi all, > > There is a very interesting discussion on python-dev and Doc-SIG about > using reST to document python. I think that the ongoing effort to > document numpy and friends with reST could benefit from interaction with > these guys. Not particularly. They are solving a different problem there: moving from the existing LaTeX system for manual documentation to reST. It doesn't have much to do with docstrings and automatically generated API docs, which is what we've been talking about. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco
From charlesr.harris at gmail.com Sun May 20 11:29:02 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 20 May 2007 09:29:02 -0600 Subject: [Numpy-discussion] [Pointer] [Doc-SIG] The docs, reloaded (fwd) In-Reply-To: <46503E62.8020100@gmail.com> References: <20070520091706.GC14894@clipper.ens.fr> <46503E62.8020100@gmail.com> Message-ID: On 5/20/07, Robert Kern wrote: > Gael Varoquaux wrote: > > Hi all, > > > > There is a very interesting discussion on python-dev and Doc-SIG about > > using reST to document python. I think that the ongoing effort to > > document numpy and friends with reST could benefit from interaction with > > these guys. > > Not particularly. They are solving a different problem there: moving from the > existing LaTeX system for manual documentation to reST. It doesn't have much to > do with docstrings and automatically generated API docs, which is what we've > been talking about. It looked like a nice way to generate a manual and was nicely organized and pleasant to look at. For the moment I think we can continue to work on the docstrings but it is good to keep thinking about where it might fit in a larger scheme. It will be interesting to see if anything happens with the interactive patching. Chuck
From steve at shrogers.com Sun May 20 13:31:14 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 20 May 2007 11:31:14 -0600 Subject: [Numpy-discussion] [Pointer] [Doc-SIG] The docs, reloaded (fwd) In-Reply-To: References: <20070520091706.GC14894@clipper.ens.fr> <46503E62.8020100@gmail.com> Message-ID: <465085E2.4010002@shrogers.com> Charles R Harris wrote: > > > On 5/20/07, *Robert Kern* > wrote: > > Gael Varoquaux wrote: > > Hi all, > > > > There is a very interesting discussion on python-dev and > Doc-SIG about > > using reST to document python. I think that the ongoing effort to > > document numpy and friends with reST could benefit from > interaction with > > these guys. > > Not particularly. They are solving a different problem there: > moving from the > existing LaTeX system for manual documentation to reST. It doesn't > have much to > do with docstrings and automatically generated API docs, which is > what we've > been talking about. > > > It looked like a nice way to generate a manual and was nicely > organized and pleasant to look at. For the moment I think we can > continue to work on the docstrings but it is good to keep thinking > about where it might fit in a larger scheme. It will be interesting to > see if anything happens with the interactive patching. > > Chuck It also provides a path to unified documentation, whether generated from docstrings or manually. # Steve
From rosset at lal.in2p3.fr Sun May 20 22:39:05 2007 From: rosset at lal.in2p3.fr (Cyrille Rosset) Date: Mon, 21 May 2007 04:39:05 +0200 Subject: [Numpy-discussion] Memory leak when looking at .flags Message-ID: <46510649.5010003@lal.in2p3.fr> Hi, I'm not sure this is the right mailing list for this, but it seems there's a memory leak when looking at flags : >>> from numpy import * >>> x=ones(50000000) #==> python uses 25% of memory (ok) >>> del x #==> memory usage falls back to almost zero (as seen in top) That's good.
but if I look at flags before the del : >>> x=ones(50000000) >>> x.flags C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> del x >>> who() Upper bound on total bytes = 0 That looks nice, but the memory usage by python (in top) is still 25%... Isn't it a bug? Cheers, Cyrille Rosset.
From robert.kern at gmail.com Sun May 20 22:44:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 20 May 2007 21:44:19 -0500 Subject: [Numpy-discussion] Memory leak when looking at .flags In-Reply-To: <46510649.5010003@lal.in2p3.fr> References: <46510649.5010003@lal.in2p3.fr> Message-ID: <46510783.3010607@gmail.com> Cyrille Rosset wrote: > Hi, > > I'm not sure this is the right mailing list for this, but it seems > there's a memory leak when looking at flags : > > >>> from numpy import * > >>> x=ones(50000000) #==> python uses 25% of memory (ok) > >>> del x > #==> memory usage falls back to almost zero (as seen in top) > That's good. > > but if I look at flags before the del : > >>> x=ones(50000000) > >>> x.flags > C_CONTIGUOUS : True > F_CONTIGUOUS : True > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > >>> del x > >>> who() > > Upper bound on total bytes = 0 > > That looks nice, but the memory usage by python (in top) is still 25%... > Isn't it a bug? No, x.flags is still being stored in _. It still has a reference to x. Evaluate something else (e.g. ">>> 1") to clear that out and the memory should be released. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From rosset at lal.in2p3.fr Sun May 20 22:56:52 2007 From: rosset at lal.in2p3.fr (Cyrille Rosset) Date: Mon, 21 May 2007 04:56:52 +0200 Subject: [Numpy-discussion] Memory leak when looking at .flags In-Reply-To: <46510783.3010607@gmail.com> References: <46510649.5010003@lal.in2p3.fr> <46510783.3010607@gmail.com> Message-ID: <46510A74.9080206@lal.in2p3.fr> Ok, that works fine with python. But not in ipython... is there some other trick? (there's a whole collection of _* variables in there...) Cyrille. Robert Kern a écrit : > Cyrille Rosset wrote: >> Hi, >> >> I'm not sure this is the right mailing list for this, but it seems >> there's a memory leak when looking at flags : >> >> >>> from numpy import * >> >>> x=ones(50000000) #==> python uses 25% of memory (ok) >> >>> del x >> #==> memory usage falls back to almost zero (as seen in top) >> That's good. >> >> but if I look at flags before the del : >> >>> x=ones(50000000) >> >>> x.flags >> C_CONTIGUOUS : True >> F_CONTIGUOUS : True >> OWNDATA : True >> WRITEABLE : True >> ALIGNED : True >> UPDATEIFCOPY : False >> >>> del x >> >>> who() >> >> Upper bound on total bytes = 0 >> >> That looks nice, but the memory usage by python (in top) is still 25%... >> Isn't it a bug? > > No, x.flags is still being stored in _. It still has a reference to x. Evaluate > something else (e.g. ">>> 1") to clear that out and the memory should be released.
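The reference chain Robert describes can be made visible with sys.getrefcount(); a small sketch (the exact counts can vary with the interpreter and numpy version, but the pattern is the point):

>>> import numpy, sys
>>> x = numpy.ones(10)
>>> sys.getrefcount(x)     # the binding plus the temporary argument reference
2
>>> f = x.flags            # the flags object holds a reference to its array
>>> sys.getrefcount(x)
3
>>> del x                  # the array's memory is not freed yet: f still refers to it
>>> del f                  # now the last reference is gone and the memory is released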
> From robert.kern at gmail.com Sun May 20 23:31:12 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 20 May 2007 22:31:12 -0500 Subject: [Numpy-discussion] Memory leak when looking .flags In-Reply-To: <46510A74.9080206@lal.in2p3.fr> References: <46510649.5010003@lal.in2p3.fr> <46510783.3010607@gmail.com> <46510A74.9080206@lal.in2p3.fr> Message-ID: <46511280.9080407@gmail.com> Cyrille Rosset wrote: > Ok, that works fine with python. > But not in ipython... is there some other trick ? > (there's a whole collection of _* variables in there...) And the Out[NN]'s, too. You should be able to del all of them: del Out[NN], _NN, _, __, ___ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Mon May 21 08:09:37 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 21 May 2007 14:09:37 +0200 Subject: [Numpy-discussion] array vs. matrix performance Message-ID: <46518C01.8070501@ntc.zcu.cz> I have come to a case where using a matrix would be easier than an array. The code uses lots of dot products, so I tested scipy.dot() performance with the code below and found that the array version is much faster (about 3 times for the given shape). What is the reason for this? Or is something wrong with my measurement? regards, r. --- import timeit setup = """ import numpy as nm import scipy as sc X = nm.random.rand( 100, 3 ) X = nm.asmatrix( X ) # (un)comment this line. print X.shape """ tt = timeit.Timer( 'sc.dot( X.T, X )', setup ) print tt.timeit() From nwagner at iam.uni-stuttgart.de Mon May 21 09:16:42 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 21 May 2007 15:16:42 +0200 Subject: [Numpy-discussion] array vs. matrix performance In-Reply-To: <46518C01.8070501@ntc.zcu.cz> References: <46518C01.8070501@ntc.zcu.cz> Message-ID: <46519BBA.6030603@iam.uni-stuttgart.de> Robert Cimrman wrote: > I have come to a case where using a matrix would be easier than an > array. The code uses lots of dot products, so I tested scipy.dot() > performance with the code below and found that the array version is much > faster (about 3 times for the given shape). What is the reason for this? > Or is something wrong with my measurement? > > regards, > r. > > --- > import timeit > setup = """ > import numpy as nm > import scipy as sc > X = nm.random.rand( 100, 3 ) > X = nm.asmatrix( X ) # (un)comment this line. > print X.shape > """ > tt = timeit.Timer( 'sc.dot( X.T, X )', setup ) > print tt.timeit() > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > Confirmed, but for what reason ? 0.5.3.dev3020 1.0.3.dev3792 Array version 6.84843301773 Matrix version 17.1273219585 Nils From david at ar.media.kyoto-u.ac.jp Mon May 21 09:22:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 21 May 2007 22:22:45 +0900 Subject: [Numpy-discussion] array vs. matrix performance In-Reply-To: <46519BBA.6030603@iam.uni-stuttgart.de> References: <46518C01.8070501@ntc.zcu.cz> <46519BBA.6030603@iam.uni-stuttgart.de> Message-ID: <46519D25.7080608@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > Robert Cimrman wrote: >> I have come to a case where using a matrix would be easier than an >> array. 
The code uses lots of dot products, so I tested scipy.dot() >> performance with the code below and found that the array version is much >> faster (about 3 times for the given shape). What is the reason for this? >> Or is something wrong with my measurement? >> >> regards, >> r. >> >> --- >> import timeit >> setup = """ >> import numpy as nm >> import scipy as sc >> X = nm.random.rand( 100, 3 ) >> X = nm.asmatrix( X ) # (un)comment this line. >> print X.shape >> """ >> tt = timeit.Timer( 'sc.dot( X.T, X )', setup ) >> print tt.timeit() >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > Confirmed, but for what reason ? > > 0.5.3.dev3020 > 1.0.3.dev3792 > Array version > 6.84843301773 > Matrix version > 17.1273219585 > My guess would be that for such small matrices, the cost of matrix wrapping is not negligeable against the actual computation. This difference disappears for bigger matrices (for example, try 1000 and 5000 instead of 100 for the first dimension). David From pearu at cens.ioc.ee Mon May 21 09:35:14 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 21 May 2007 15:35:14 +0200 Subject: [Numpy-discussion] Do we still need support for Python 2.3? Message-ID: <4651A012.5010009@cens.ioc.ee> Hi, I noticed that numpy revision 3794 introduces `set` but that is available starting from Python 2.4. Hence the question. Best regards, Pearu From nadavh at visionsense.com Mon May 21 09:56:40 2007 From: nadavh at visionsense.com (Nadav Horesh) Date: Mon, 21 May 2007 16:56:40 +0300 Subject: [Numpy-discussion] Do we still need support for Python 2.3? Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8F26C@exchange2k.envision.co.il> The question should be: Are there members who rely on python 2.3 Nadav -----Original Message----- From: numpy-discussion-bounces at scipy.org on behalf of Pearu Peterson Sent: Mon 21-May-07 16:35 To: Discussion of Numerical Python Cc: Subject: [Numpy-discussion] Do we still need support for Python 2.3? Hi, I noticed that numpy revision 3794 introduces `set` but that is available starting from Python 2.4. Hence the question. Best regards, Pearu _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2780 bytes Desc: not available URL: From charlesr.harris at gmail.com Mon May 21 10:10:12 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 21 May 2007 08:10:12 -0600 Subject: [Numpy-discussion] array vs. matrix performance In-Reply-To: <46519D25.7080608@ar.media.kyoto-u.ac.jp> References: <46518C01.8070501@ntc.zcu.cz> <46519BBA.6030603@iam.uni-stuttgart.de> <46519D25.7080608@ar.media.kyoto-u.ac.jp> Message-ID: On 5/21/07, David Cournapeau wrote: > > Nils Wagner wrote: > > Robert Cimrman wrote: > >> I have come to a case where using a matrix would be easier than an > >> array. The code uses lots of dot products, so I tested scipy.dot() > >> performance with the code below and found that the array version is > much > >> faster (about 3 times for the given shape). What is the reason for > this? > >> Or is something wrong with my measurement? > >> > >> regards, > >> r. 
> >> > >> --- > >> import timeit > >> setup = """ > >> import numpy as nm > >> import scipy as sc > >> X = nm.random.rand( 100, 3 ) > >> X = nm.asmatrix( X ) # (un)comment this line. > >> print X.shape > >> """ > >> tt = timeit.Timer( 'sc.dot( X.T, X )', setup ) > >> print tt.timeit() > >> _______________________________________________ > >> Numpy-discussion mailing list > >> Numpy-discussion at scipy.org > >> http://projects.scipy.org/mailman/listinfo/numpy-discussion > >> > > Confirmed, but for what reason ? > > > > 0.5.3.dev3020 > > 1.0.3.dev3792 > > Array version > > 6.84843301773 > > Matrix version > > 17.1273219585 > > > My guess would be that for such small matrices, the cost of matrix > wrapping is not negligeable against the actual computation. This > difference disappears for bigger matrices (for example, try 1000 and > 5000 instead of 100 for the first dimension). Hmmm, I wonder how Tim's matrix version works. I've attached the code. I am rather fond of the approach and have used it a few times myself. Tim uses the dot product is written so: a(b). That is, the () operator is used instead of the *. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mat.py Type: text/x-python Size: 3604 bytes Desc: not available URL: From charlesr.harris at gmail.com Mon May 21 10:43:23 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 21 May 2007 08:43:23 -0600 Subject: [Numpy-discussion] array vs. matrix performance In-Reply-To: References: <46518C01.8070501@ntc.zcu.cz> <46519BBA.6030603@iam.uni-stuttgart.de> <46519D25.7080608@ar.media.kyoto-u.ac.jp> Message-ID: On 5/21/07, Charles R Harris wrote: > > > > On 5/21/07, David Cournapeau wrote: > > > > Nils Wagner wrote: > > > Robert Cimrman wrote: > > >> I have come to a case where using a matrix would be easier than an > > >> array. The code uses lots of dot products, so I tested scipy.dot() > > >> performance with the code below and found that the array version is > > much > > >> faster (about 3 times for the given shape). What is the reason for > > this? > > >> Or is something wrong with my measurement? > > >> > > >> regards, > > >> r. > > >> > > >> --- > > >> import timeit > > >> setup = """ > > >> import numpy as nm > > >> import scipy as sc > > >> X = nm.random.rand( 100, 3 ) > > >> X = nm.asmatrix( X ) # (un)comment this line. > > >> print X.shape > > >> """ > > >> tt = timeit.Timer( 'sc.dot( X.T, X )', setup ) > > >> print tt.timeit() > > >> _______________________________________________ > > >> Numpy-discussion mailing list > > >> Numpy-discussion at scipy.org > > >> http://projects.scipy.org/mailman/listinfo/numpy-discussion > > >> > > > Confirmed, but for what reason ? > > > > > > 0.5.3.dev3020 > > > 1.0.3.dev3792 > > > Array version > > > 6.84843301773 > > > Matrix version > > > 17.1273219585 > > > > > My guess would be that for such small matrices, the cost of matrix > > wrapping is not negligeable against the actual computation. This > > difference disappears for bigger matrices (for example, try 1000 and > > 5000 instead of 100 for the first dimension). > > > Hmmm, I wonder how Tim's matrix version works. I've attached the code. I > am rather fond of the approach and have used it a few times myself. Tim uses > the dot product is written so: a(b). That is, the () operator is used > instead of the *. > Well, it doesn't go. There is a bug in the code and, on a second look, it doesn't use dot. 
BTW, on my machine numpy dot is faster than the scipy version. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon May 21 12:45:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 11:45:18 -0500 Subject: [Numpy-discussion] Do we still need support for Python 2.3? In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8F26C@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E8C8F26C@exchange2k.envision.co.il> Message-ID: <4651CC9E.9040109@gmail.com> Nadav Horesh wrote: > The question should be: Are there members who rely on python 2.3 Numeric and numpy's user base has always been much more extensive than the list membership. I don't think that the question can be reformulated that way. I think we still do need to support Python 2.3. Being able to use the builtin set() rather than set.Set() is not a compelling use case. Libraries that are as fundamental as numpy is trying to be should try to maintain compatibility with the oldest versions of Python that are feasible. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chanley at stsci.edu Mon May 21 12:59:31 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 21 May 2007 12:59:31 -0400 Subject: [Numpy-discussion] Do we still need support for Python 2.3? In-Reply-To: <4651CC9E.9040109@gmail.com> References: <07C6A61102C94148B8104D42DE95F7E8C8F26C@exchange2k.envision.co.il> <4651CC9E.9040109@gmail.com> Message-ID: <4651CFF3.1070706@stsci.edu> I would have to say that I agree with Robert. We (STScI) are about to force all of our users to install numpy. For some of them that can be a lot to ask. I don't also want to add the extra complication of upgrading their Python version as well. My feeling is that not everyone has made the jump to numpy yet. I think we should keep it as painless a process as possible. Just my two cents. Chris Robert Kern wrote: > Nadav Horesh wrote: >> The question should be: Are there members who rely on python 2.3 > > Numeric and numpy's user base has always been much more extensive than the list > membership. I don't think that the question can be reformulated that way. > > I think we still do need to support Python 2.3. Being able to use the builtin > set() rather than set.Set() is not a compelling use case. Libraries that are as > fundamental as numpy is trying to be should try to maintain compatibility with > the oldest versions of Python that are feasible. > From robert.kern at gmail.com Mon May 21 13:32:43 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 12:32:43 -0500 Subject: [Numpy-discussion] Do we still need support for Python 2.3? In-Reply-To: <4651A012.5010009@cens.ioc.ee> References: <4651A012.5010009@cens.ioc.ee> Message-ID: <4651D7BB.80608@gmail.com> Pearu Peterson wrote: > Hi, > > I noticed that numpy revision 3794 introduces `set` > but that is available starting from Python 2.4. Note that the code David added is compatible with Python 2.3. 22 try: 23 set 24 except NameError: 25 from sets import Set as set -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pearu at cens.ioc.ee Mon May 21 13:44:14 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 21 May 2007 20:44:14 +0300 (EEST) Subject: [Numpy-discussion] Do we still need support for Python 2.3? In-Reply-To: <4651D7BB.80608@gmail.com> References: <4651A012.5010009@cens.ioc.ee> <4651D7BB.80608@gmail.com> Message-ID: <3721.84.202.199.60.1179769454.squirrel@cens.ioc.ee> On Mon, May 21, 2007 8:32 pm, Robert Kern wrote: > Pearu Peterson wrote: >> Hi, >> >> I noticed that numpy revision 3794 introduces `set` >> but that is available starting from Python 2.4. > > Note that the code David added is compatible with Python 2.3. > > 22 try: > 23 set > 24 except NameError: > 25 from sets import Set as set Sorry for the noise, I wasn't careful reading the code. Best, Pearu From millman at berkeley.edu Mon May 21 15:28:32 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 21 May 2007 12:28:32 -0700 Subject: [Numpy-discussion] removing references to Scipy Core Message-ID: Hello, I noticed that there are a few places where Scipy Core is still referenced: 1). On http://numpy.scipy.org/new_features.html the title is still Scipy Core even though the heading has been updated: "
New Features of SciPy Core

New Features of NumPy

" 2). If you do a search for "scipy" on PyPi the highest score is scipy_core: http://cheeseshop.python.org/pypi/scipy_core/0.8.4 I am not that familiar with the cheeseshop, but if possible I think it would be a good idea to remove this page. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cimrman3 at ntc.zcu.cz Tue May 22 08:26:36 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 22 May 2007 14:26:36 +0200 Subject: [Numpy-discussion] array vs. matrix performance In-Reply-To: <46519D25.7080608@ar.media.kyoto-u.ac.jp> References: <46518C01.8070501@ntc.zcu.cz> <46519BBA.6030603@iam.uni-stuttgart.de> <46519D25.7080608@ar.media.kyoto-u.ac.jp> Message-ID: <4652E17C.5060706@ntc.zcu.cz> Re-hi, thanks for all the comments. I have re-tried with X = nm.random.rand( 10000, 3 ) and the times (in seconds) were: 428.588043213 # scipy.dot, array 445.045716047 # numpy.dot, array 519.489458799 # scipy.dot, matrix 513.328601122 # numpy.dot, matrix The scipy.dot and numpy.dot performs the same for me, as I use ATLAS, IMHO. David, you are right that for larger data the difference is not so big but still they are significant. Anyway, I have resolved my problem with arrays only for the moment. r. From Shawn.Gong at drdc-rddc.gc.ca Tue May 22 10:00:23 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Tue, 22 May 2007 10:00:23 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> Hi Robert, I used exactly your text in site.cfg. The out.txt seems to get the lib_dir, but "import numpy" in python stills gives error. Please see the attached files. Should I ask system guys to move the libs to /usr/lib so I don't have to fight with site.cfg ? Thanks, Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Friday, May 18, 2007 5:56 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > > Hi Robert, > > I added ref to site.cfg, but still getting the same error message: see > "out.txt" file. > > It seems that install can't find the libraries in /usr/local/lib/atlas/ > (see the "out.txt" file Line 14: libraries lapack,blas not found in > /usr/local/lib/atlas) Delete your [atlas] section entirely. It is wrong. Follow the instructions in the example. To make this clear, use exactly the following text in your site.cfg: [blas_opt] library_dirs = /usr/local/lib/atlas libraries = f77blas, cblas, atlas [lapack_opt] library_dirs = /usr/local/lib/atlas libraries = lapack, f77blas, cblas, atlas -------------- next part -------------- A non-text attachment was scrubbed... Name: site.cfg Type: application/octet-stream Size: 175 bytes Desc: site.cfg URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: out.zip Type: application/x-zip-compressed Size: 9270 bytes Desc: out.zip URL: From klemm at phys.ethz.ch Tue May 22 12:04:09 2007 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 22 May 2007 18:04:09 +0200 Subject: [Numpy-discussion] numpy and freeze.py Message-ID: Hi, I want to use freeze.py on code that heavily relies on numpy. If I just try python2.5 /scratch/src/Python-2.5/Tools/freeze/freeze.py pylay.py the make works but then I get the error: Traceback (most recent call last): File "pylay.py", line 1, in import kuvBeta4 as kuv File "kuvBeta4.py", line 6, in import mfunBeta4 as mfun File "mfunBeta4.py", line 2, in import numpy File "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/numpy/__init__.py", line 39, in import core File "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in import multiarray ImportError: No module named multiarray Am I doing something wrong? Or does freeze.py not work with numpy? Hanno -- Hanno Klemm klemm at phys.ethz.ch From robert.kern at gmail.com Tue May 22 12:50:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 May 2007 11:50:24 -0500 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <46531F50.50108@gmail.com> Gong, Shawn (Contractor) wrote: > Hi Robert, > > I used exactly your text in site.cfg. The out.txt seems to get the > lib_dir, but "import numpy" in python stills gives error. *What* error? Please copy-and-paste the complete error message. > Please see the attached files. > Should I ask system guys to move the libs to /usr/lib so I don't have to > fight with site.cfg ? Not until you tell us what the error message is. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From Shawn.Gong at drdc-rddc.gc.ca Tue May 22 13:00:59 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Tue, 22 May 2007 13:00:59 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <46531F50.50108@gmail.com> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> Robert, 1) this is the first a few lines of the out.txt when I type "python setup.py install >out.txt" : (I'm not sure if "libraries mkl,vml,guide not found" means trouble or not) -------------------------------- F2PY Version 2_3473 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /home/sgong/dev/dist/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /home/sgong/dev/dist/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc _configtest.o -L/usr/local/lib/atlas -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.6.0 built by root on Mon May 14 15:46:37 EDT 2007: UNAME : Linux srv25662-rde.ottawa.drdc-rddc.gc.ca 2.4.21-37.0.1.ELsmp #1 SMP Wed Jan 11 18:44:17 EST 2006 i686 i686 i386 GNU/Linux INSTFLG : MMDEF : /usr/local/src/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /usr/local/src/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77, version GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)) 3.2.3 20030502 (Red Hat Linux 3.2.3-53) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc, version gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc, version gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /home/sgong/dev/dist/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE -------------------- 2) Then in Python, when I type "import numpy" It says: Running from numpy source directory Then I type "numpy.sqrt(5)" AttributeError: 'module' object has no attribute 'sqrt' I assume that this means numpy was not installed successfully. 
Thanks Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Tuesday, May 22, 2007 12:50 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > Hi Robert, > > I used exactly your text in site.cfg. The out.txt seems to get the > lib_dir, but "import numpy" in python stills gives error. *What* error? Please copy-and-paste the complete error message. > Please see the attached files. > Should I ask system guys to move the libs to /usr/lib so I don't have to > fight with site.cfg ? Not until you tell us what the error message is. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From robert.kern at gmail.com Tue May 22 13:05:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 May 2007 12:05:29 -0500 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com> <2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <465322D9.20902@gmail.com> Gong, Shawn (Contractor) wrote: > 2) Then in Python, when I type "import numpy" > It says: Running from numpy source directory > Then I type "numpy.sqrt(5)" > AttributeError: 'module' object has no attribute 'sqrt' > I assume that this means numpy was not installed successfully. No, it means that you are running from the numpy source directory. That means it is picking up the partial numpy package in the source that's used to bootstrap the installation. Change out of that directory and try again. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Shawn.Gong at drdc-rddc.gc.ca Tue May 22 13:11:35 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Tue, 22 May 2007 13:11:35 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <465322D9.20902@gmail.com> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com><2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> <465322D9.20902@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> Hi Robert "Running from numpy source directory" message also appears when I installed numpy. I am running python 2.3.6, not 2.4 You said "It is picking up the partial numpy package in the source". Do you mean it is picking up the partial numpy package from python 2.3.6 ? 
How do I change out of that directory and try again? Thanks, Shawn -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern Sent: Tuesday, May 22, 2007 1:05 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Gong, Shawn (Contractor) wrote: > 2) Then in Python, when I type "import numpy" > It says: Running from numpy source directory > Then I type "numpy.sqrt(5)" > AttributeError: 'module' object has no attribute 'sqrt' > I assume that this means numpy was not installed successfully. No, it means that you are running from the numpy source directory. That means it is picking up the partial numpy package in the source that's used to bootstrap the installation. Change out of that directory and try again. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From cookedm at physics.mcmaster.ca Tue May 22 13:13:31 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 22 May 2007 13:13:31 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <465322D9.20902@gmail.com> <2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <20070522171331.GA8725@arbutus.physics.mcmaster.ca> On Tue, May 22, 2007 at 01:11:35PM -0400, Gong, Shawn (Contractor) wrote: > Hi Robert > "Running from numpy source directory" message also appears when I > installed numpy. > I am running python 2.3.6, not 2.4 Just what it says; the current directory is the directory that the numpy source is in. If you do 'import numpy' there, it finds the *source* first, not the installed package. > You said "It is picking up the partial numpy package in the source". Do > you mean it is picking up the partial numpy package from python 2.3.6 ? > > How do I change out of that directory and try again? cd .. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Tue May 22 13:16:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 May 2007 12:16:05 -0500 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com><2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> <465322D9.20902@gmail.com> <2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <46532555.2040700@gmail.com> Gong, Shawn (Contractor) wrote: > Hi Robert > "Running from numpy source directory" message also appears when I > installed numpy. 
Yes, that's fine. Like I said, it is used for bootstrapping the installation. > I am running python 2.3.6, not 2.4 I don't see how that is relevant. > You said "It is picking up the partial numpy package in the source". Do > you mean it is picking up the partial numpy package from python 2.3.6 ? No, it is picking up the numpy package that is in the directory you are currently in. > How do I change out of that directory and try again? $ cd .. $ python >>> import numpy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Shawn.Gong at drdc-rddc.gc.ca Tue May 22 13:27:57 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Tue, 22 May 2007 13:27:57 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <46532555.2040700@gmail.com> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com><2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> <465322D9.20902@gmail.com><2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46532555.2040700@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> Thank you David M. Cooke and Robert. Now I changed directory and ran python, Got further and hit this error message: >python Python 2.3.6 (#9, May 18 2007, 10:22:59) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/__init__.py", line 40, in ? import linalg File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/__init__. py", line 4, in ? from linalg import * File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/linalg.py ", line 25, in ? from numpy.linalg import lapack_lite ImportError: /home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/lapack_lit e.so: undefined symbol: pthread_join From mjanikas at esri.com Tue May 22 13:33:27 2007 From: mjanikas at esri.com (Mark Janikas) Date: Tue, 22 May 2007 10:33:27 -0700 Subject: [Numpy-discussion] numpy and freeze.py In-Reply-To: Message-ID: <627102C921CD9745B070C3B10CB8199B010EC0AF@hardwire.esri.com> I cant be sure if your issue is related to mine, so I was wondering where/when you got your numpy build? My issue: http://projects.scipy.org/pipermail/numpy-discussion/2007-April/027000.h tml Travis has been kind enough to work with me on it. His changes are in the svn. So, I don't think this is an issue that has arisen due to the changes unless you have checked numpy out recently and compiled it yourself. MJ -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Hanno Klemm Sent: Tuesday, May 22, 2007 9:04 AM To: numpy-discussion at scipy.org Subject: [Numpy-discussion] numpy and freeze.py Hi, I want to use freeze.py on code that heavily relies on numpy. 
If I just try python2.5 /scratch/src/Python-2.5/Tools/freeze/freeze.py pylay.py the make works but then I get the error: Traceback (most recent call last): File "pylay.py", line 1, in import kuvBeta4 as kuv File "kuvBeta4.py", line 6, in import mfunBeta4 as mfun File "mfunBeta4.py", line 2, in import numpy File "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/n umpy/__init__.py", line 39, in import core File "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/n umpy/core/__init__.py", line 5, in import multiarray ImportError: No module named multiarray Am I doing something wrong? Or does freeze.py not work with numpy? Hanno -- Hanno Klemm klemm at phys.ethz.ch _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From Shawn.Gong at drdc-rddc.gc.ca Tue May 22 13:45:14 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Tue, 22 May 2007 13:45:14 -0400 Subject: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed In-Reply-To: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C0@ottawaex02.Ottawa.drdc-rddc.gc.ca> <464E20D4.6040308@gmail.com> <2E58C246F17003499C141D334794D049027682C1@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682C2@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46531F50.50108@gmail.com><2E58C246F17003499C141D334794D049027682C4@ottawaex02.Ottawa.drdc-rddc.gc.ca> <465322D9.20902@gmail.com><2E58C246F17003499C141D334794D049027682C5@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46532555.2040700@gmail.com> <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca> Sorry forgot to mention that Python was installed "with-thread=no" as it is required for OpenEV application. I remember that when I installed numarray I had to set "--unthreaded" Maybe I need to do the same thing. Can someone tell me how to set this flag? Thanks, Shawn -----Original Message----- From: Gong, Shawn (Contractor) Sent: Tuesday, May 22, 2007 1:28 PM To: 'Discussion of Numerical Python' Subject: RE: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Thank you David M. Cooke and Robert. Now I changed directory and ran python, Got further and hit this error message: >python Python 2.3.6 (#9, May 18 2007, 10:22:59) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/__init__.py", line 40, in ? import linalg File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/__init__. py", line 4, in ? from linalg import * File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/linalg.py ", line 25, in ? from numpy.linalg import lapack_lite ImportError: /home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/lapack_lit e.so: undefined symbol: pthread_join From wbaxter at gmail.com Tue May 22 20:25:03 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 23 May 2007 09:25:03 +0900 Subject: [Numpy-discussion] MAX_INT? Message-ID: Is there a way to obtain the equivalent of MAX_INT for the integral types numpy knows about? 
I know about numpy.finfo for the floating point types, but is there anything like that for integral types? Thanks, --Bill From robert.kern at gmail.com Tue May 22 20:32:45 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 May 2007 19:32:45 -0500 Subject: [Numpy-discussion] MAX_INT? In-Reply-To: References: Message-ID: <46538BAD.2010009@gmail.com> Bill Baxter wrote: > Is there a way to obtain the equivalent of MAX_INT for the integral > types numpy knows about? > > I know about numpy.finfo for the floating point types, but is there > anything like that for integral types? SVN numpy has numpy.lib.getlimits.iinfo(), now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Tue May 22 20:46:02 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 23 May 2007 09:46:02 +0900 Subject: [Numpy-discussion] MAX_INT? In-Reply-To: <46538BAD.2010009@gmail.com> References: <46538BAD.2010009@gmail.com> Message-ID: Great. Thanks! Is there a plan to expose that as numpy.iinfo? --bb On 5/23/07, Robert Kern wrote: > Bill Baxter wrote: > > Is there a way to obtain the equivalent of MAX_INT for the integral > > types numpy knows about? > > > > I know about numpy.finfo for the floating point types, but is there > > anything like that for integral types? > > SVN numpy has numpy.lib.getlimits.iinfo(), now. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From vallis.35530053 at bloglines.com Tue May 22 22:18:25 2007 From: vallis.35530053 at bloglines.com (vallis.35530053 at bloglines.com) Date: 23 May 2007 02:18:25 -0000 Subject: [Numpy-discussion] More help with numpy and SWIG Message-ID: <1179886705.1347363956.9008.sendItem@bloglines.com> Hello, I am trying to use numpy in conjuction with a custom SWIGged C struct (real_vec_t) that contains array data. I'd like to use the array interface to share the array data between real_vec_t and numpy. I have two questions for a combined numpy/SWIG magician: - This is how I implement the array interface in real_vec_t, so that if rvt is a real_vec_t instance, I can do na = numpy.array(rvt) and get a numpy array with the same content. I define the python attribute, %extend real_vec_t { %pythoncode %{ __array_struct__ = property(get_array_struct,doc='Array protocol') %} [...] %} which calls the C function (thanks to Lisandro Dalcin for suggesting this) %extend real_vec_t { PyObject *get_array_struct() { [...create the PyArrayInterface pointed to by "inter"...] [...do Py_INCREF(self) (see below)...] [...return PyCObject pointing to inter (see below)...] } %} I have problems with the last two steps. I'm supposed to do a Py_INCREF on the SWIG object so that the array data is preserved until all numpy arrays that may come to use it are destroyed. But I don't think it's correct to say "Py_INCREF(self)", because in this context self is a pointer to a real_vec_t structure, not the SWIG PyObject that wraps it... how should I call Py_INCREF? I have a related problem for the last step. 
I'm supposed to return PyCObject_FromVoidPtrAndDesc(inter,self,inter_free), where inter is the pointer to the PyArrayInterface, inter_free points to a function that will free any memory allocated with the PyArrayInterface, and self, again, should be a pointer to the SWIG wrapper, not just the real_vec_t C struct. - The second question is about the conditions under which the data pointed to by PyArrayInterface is shared with the numpy array I create with numpy.array(rvt), or it is copied. Is this a function of the flags I set in PyArrayInterface, of flags that must be passed to numpy.array(), or both? I couldn't quite dig this out of the numpy manual (but will happily take a pointer to the discussion therein...) From oliphant.travis at ieee.org Wed May 23 02:09:44 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 00:09:44 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week Message-ID: <4653DAA8.20708@ieee.org> I'd like to tag the tree and make a NumPy 1.0.4 release tomorrow. Is there anything that needs to be done before that can happen? I'd also like to get another release of SciPy out the door as soon as possible. At the end of this week or early next week. I've added a bunch of 1-d classic spline interpolation algorithms (not the fast B-spline versions but the basic flexible algorithms of order 2, 3, 4, and 5 which have different boundary conditions). There are several ways to get at them in scipy.interpolate. I've also added my own "smoothest" algorithm which specifies the extra degrees of freedom available in spline formulations by making the resulting Kth-order spline have the smallest Kth-order derivative discontinuity. I've never seen this described by anyone but I'm sure somebody must have done discussed it before. If anybody knows something about splines who recalls seeing something like this, I would appreciate a reference. This specification actually allows very decent looking quadratic (2nd-order) splines. Best regards, -Travis From klemm at phys.ethz.ch Wed May 23 04:32:22 2007 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Wed, 23 May 2007 10:32:22 +0200 Subject: [Numpy-discussion] numpy and freeze.py In-Reply-To: <627102C921CD9745B070C3B10CB8199B010EC0AF@hardwire.esri.com> References: Message-ID: Thanks for your hint! My numpy version is 1.0.2. compiled by myself a bit ago. I have however not fully understood your reply. Is the issue, that caused problems in your code in the numpy svn version resolved now? I read something in the thread about forcefully unloading certain DLLs, I am on a Linux machine, I imagine that could change matters? A bit of googling brought me to this thread: http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3086478 Have these issues been resolved? Or should I rather ask the freeze developers? Hanno Mark Janikas said: > I cant be sure if your issue is related to mine, so I was wondering > where/when you got your numpy build? > > My issue: > http://projects.scipy.org/pipermail/numpy-discussion/2007-April/027000.h > tml > > Travis has been kind enough to work with me on it. His changes are in > the svn. So, I don't think this is an issue that has arisen due to the > changes unless you have checked numpy out recently and compiled it > yourself. 
> > MJ > > -----Original Message----- > From: numpy-discussion-bounces at scipy.org > [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Hanno Klemm > Sent: Tuesday, May 22, 2007 9:04 AM > To: numpy-discussion at scipy.org > Subject: [Numpy-discussion] numpy and freeze.py > > > Hi, > > I want to use freeze.py on code that heavily relies on numpy. If I > just try > > python2.5 /scratch/src/Python-2.5/Tools/freeze/freeze.py pylay.py > > the make works but then I get the error: > > Traceback (most recent call last): > File "pylay.py", line 1, in > import kuvBeta4 as kuv > File "kuvBeta4.py", line 6, in > import mfunBeta4 as mfun > File "mfunBeta4.py", line 2, in > import numpy > File > "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/n > umpy/__init__.py", > line 39, in > import core > File > "/glb/eu/siep_bv/proj/yot04/apps/python2.5/lib/python2.5/site-packages/n > umpy/core/__init__.py", > line 5, in > import multiarray > ImportError: No module named multiarray > > > Am I doing something wrong? Or does freeze.py not work with numpy? > > Hanno > > > -- > Hanno Klemm > klemm at phys.ethz.ch > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- Hanno Klemm klemm at phys.ethz.ch From fullung at gmail.com Wed May 23 08:39:41 2007 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 23 May 2007 14:39:41 +0200 Subject: [Numpy-discussion] Question about flags of fancy indexed array Message-ID: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> Hello all Consider the following example: In [43]: x = N.zeros((3,2)) In [44]: x.flags Out[44]: C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [45]: x[:,[1,0]].flags Out[45]: C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy indexed x? I'm running NumPy 1.0.3.dev3792 here. Cheers, Albert From peridot.faceted at gmail.com Wed May 23 09:49:08 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 23 May 2007 09:49:08 -0400 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> Message-ID: On 23/05/07, Albert Strasheim wrote: > Consider the following example: First a comment: almost nobody needs to care how the data is stored internally. Try to avoid looking at the flags unless you're interfacing with a C library. The nice feature of numpy is that it hides all that junk - strides, contiguous storage, iteration, what have you - so that you don't have to deal with it. > Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy > indexed x? I'm running NumPy 1.0.3.dev3792 here. Numpy arrays are always stored in contiguous blocks of memory with uniform strides. The "CONTIGUOUS" flag actually means something totally different, which is unfortunate, but in any case, "fancy indexing" can't be done as a simple reindexing operation. It must make a copy of the array. 
So what you're seeing is the flags of a fresh new array, created from scratch (and numpy always creates arrays in C order internally, though that is an implementation detail you should not rely on). Anne From fullung at gmail.com Wed May 23 10:24:55 2007 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 23 May 2007 16:24:55 +0200 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> Message-ID: <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Hello all On Wed, 23 May 2007, Anne Archibald wrote: > On 23/05/07, Albert Strasheim wrote: > > > Consider the following example: > > First a comment: almost nobody needs to care how the data is stored > internally. Try to avoid looking at the flags unless you're > interfacing with a C library. The nice feature of numpy is that it > hides all that junk - strides, contiguous storage, iteration, what > have you - so that you don't have to deal with it. As luck would have it, I am interfacing with a C library. > > Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy > > indexed x? I'm running NumPy 1.0.3.dev3792 here. > > Numpy arrays are always stored in contiguous blocks of memory with > uniform strides. The "CONTIGUOUS" flag actually means something > totally different, which is unfortunate, but in any case, "fancy > indexing" can't be done as a simple reindexing operation. It must make > a copy of the array. So what you're seeing is the flags of a fresh new > array, created from scratch (and numpy always creates arrays in C > order internally, though that is an implementation detail you should > not rely on). If you are correct that this is in fact a fresh new array, I really don't understand where the values of these flags. To recap: In [19]: x = N.zeros((3,2)) In [20]: x.flags Out[20]: C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [21]: x[:,[1,0]].flags Out[21]: C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. Thanks. Cheers, Albert From stefan at sun.ac.za Wed May 23 10:36:37 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 23 May 2007 16:36:37 +0200 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> Message-ID: <20070523143637.GZ6192@mentat.za.net> On Wed, May 23, 2007 at 09:49:08AM -0400, Anne Archibald wrote: > On 23/05/07, Albert Strasheim wrote: > > > Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy > > indexed x? I'm running NumPy 1.0.3.dev3792 here. > > Numpy arrays are always stored in contiguous blocks of memory with > uniform strides. The "CONTIGUOUS" flag actually means something > totally different, which is unfortunate, but in any case, "fancy > indexing" can't be done as a simple reindexing operation. It must make > a copy of the array. So what you're seeing is the flags of a fresh new > array, created from scratch (and numpy always creates arrays in C > order internally, though that is an implementation detail you should > not rely on). 
That still doesn't explain In [41]: N.zeros((3,2))[:,[0,1]].flags Out[41]: C_CONTIGUOUS : False F_CONTIGUOUS : True <<<<<<<<<<< OWNDATA : False <<<<<<<<<<< WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False vs. In [40]: N.zeros((3,2),order='F')[:,[0,1]].flags Out[40]: C_CONTIGUOUS : True F_CONTIGUOUS : False <<<<<<<<<< OWNDATA : False <<<<<<<<<< WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False Maybe the Fortran-ordering quiz at http://mentat.za.net/numpy/quiz needs an update! :) Cheers St?fan From peridot.faceted at gmail.com Wed May 23 10:44:53 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 23 May 2007 10:44:53 -0400 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: On 23/05/07, Albert Strasheim wrote: > If you are correct that this is in fact a fresh new array, I really > don't understand where the values of these flags. To recap: > > In [19]: x = N.zeros((3,2)) > > In [20]: x.flags > Out[20]: > C_CONTIGUOUS : True > F_CONTIGUOUS : False > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > In [21]: x[:,[1,0]].flags > Out[21]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be > identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. It looks like x[:,[1,0]] is done by fancy indexing on the first index and then transposing. I haven't looked at the implementation, though. If you need the result to be C-contiguous without further copying, you can do: x.transpose()[[1,0],:].transpose().flags which is horrible. I wouldn't rely on it, though, I'd use asconguousarray afterward. Perhaps Travis could comment on whether it is true that numpy does transpose like this when fancy indexing (at least, this flavour of fancy indexing). Anne From charlesr.harris at gmail.com Wed May 23 10:48:43 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 May 2007 08:48:43 -0600 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: On 5/23/07, Albert Strasheim wrote: > > Hello all > > On Wed, 23 May 2007, Anne Archibald wrote: > > > On 23/05/07, Albert Strasheim wrote: > > > > > Consider the following example: > > > > First a comment: almost nobody needs to care how the data is stored > > internally. Try to avoid looking at the flags unless you're > > interfacing with a C library. The nice feature of numpy is that it > > hides all that junk - strides, contiguous storage, iteration, what > > have you - so that you don't have to deal with it. > > As luck would have it, I am interfacing with a C library. > > > > Is it correct that the F_CONTIGUOUS flag is set in the case of the > fancy > > > indexed x? I'm running NumPy 1.0.3.dev3792 here. > > > > Numpy arrays are always stored in contiguous blocks of memory with > > uniform strides. The "CONTIGUOUS" flag actually means something > > totally different, which is unfortunate, but in any case, "fancy > > indexing" can't be done as a simple reindexing operation. It must make > > a copy of the array. 
So what you're seeing is the flags of a fresh new > > array, created from scratch (and numpy always creates arrays in C > > order internally, though that is an implementation detail you should > > not rely on). > > If you are correct that this is in fact a fresh new array, I really > don't understand where the values of these flags. To recap: > > In [19]: x = N.zeros((3,2)) > > In [20]: x.flags > Out[20]: > C_CONTIGUOUS : True > F_CONTIGUOUS : False > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > In [21]: x[:,[1,0]].flags > Out[21]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be > identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. The contiguous refers to how stuff is layed out in memory. In this case it appears that fancy indexing creates the new array by first copying column 1, then column 2, so that the new array is indeed F_CONTIGUOUS. Assuming I correctly understand the behaviour of the tostring argument, which is debatable, that is indeed what happens. In [28]: x = arange(6, dtype=int8).reshape(3,2) In [29]: x Out[29]: array([[0, 1], [2, 3], [4, 5]], dtype=int8) In [30]: y = x[:,[1,0]] In [31]: y Out[31]: array([[1, 0], [3, 2], [5, 4]], dtype=int8) In [32]: x.tostring('A') Out[32]: '\x00\x01\x02\x03\x04\x05' In [33]: y.tostring('A') Out[33]: '\x01\x03\x05\x00\x02\x04' Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed May 23 10:53:12 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 May 2007 08:53:12 -0600 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: On 5/23/07, Charles R Harris wrote: > > > > On 5/23/07, Albert Strasheim wrote: > > > > Hello all > > > > On Wed, 23 May 2007, Anne Archibald wrote: > > > > > On 23/05/07, Albert Strasheim wrote: > > > > > > > Consider the following example: > > > > > > First a comment: almost nobody needs to care how the data is stored > > > internally. Try to avoid looking at the flags unless you're > > > interfacing with a C library. The nice feature of numpy is that it > > > hides all that junk - strides, contiguous storage, iteration, what > > > have you - so that you don't have to deal with it. > > > > As luck would have it, I am interfacing with a C library. > > > > > > Is it correct that the F_CONTIGUOUS flag is set in the case of the > > fancy > > > > indexed x? I'm running NumPy 1.0.3.dev3792 here. > > > > > > Numpy arrays are always stored in contiguous blocks of memory with > > > uniform strides. The "CONTIGUOUS" flag actually means something > > > totally different, which is unfortunate, but in any case, "fancy > > > indexing" can't be done as a simple reindexing operation. It must make > > > a copy of the array. So what you're seeing is the flags of a fresh new > > > > > array, created from scratch (and numpy always creates arrays in C > > > order internally, though that is an implementation detail you should > > > not rely on). > > > > If you are correct that this is in fact a fresh new array, I really > > don't understand where the values of these flags. 
To recap: > > > > In [19]: x = N.zeros((3,2)) > > > > In [20]: x.flags > > Out[20]: > > C_CONTIGUOUS : True > > F_CONTIGUOUS : False > > OWNDATA : True > > WRITEABLE : True > > ALIGNED : True > > UPDATEIFCOPY : False > > > > In [21]: x[:,[1,0]].flags > > Out[21]: > > C_CONTIGUOUS : False > > F_CONTIGUOUS : True > > OWNDATA : False > > WRITEABLE : True > > ALIGNED : True > > UPDATEIFCOPY : False > > > > So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be > > identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. > > > The contiguous refers to how stuff is layed out in memory. In this case it > appears that fancy indexing creates the new array by first copying column 1, > then column 2, so that the new array is indeed F_CONTIGUOUS. Assuming I > correctly understand the behaviour of the tostring argument, which is > debatable, that is indeed what happens. > Make that The contiguous flags refer to how stuff is layed out in memory. In this case it appears that fancy indexing creates the new array by first copying column 1, then column 0, so that the new array is indeed F_CONTIGUOUS. Assuming I correctly understand the behaviour of the tostring argument, which is debatable, that is indeed what happens. Real programmers don't do writing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed May 23 11:03:50 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 May 2007 09:03:50 -0600 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: On 5/23/07, Anne Archibald wrote: > > On 23/05/07, Albert Strasheim wrote: > > > If you are correct that this is in fact a fresh new array, I really > > don't understand where the values of these flags. To recap: > > > > In [19]: x = N.zeros((3,2)) > > > > In [20]: x.flags > > Out[20]: > > C_CONTIGUOUS : True > > F_CONTIGUOUS : False > > OWNDATA : True > > WRITEABLE : True > > ALIGNED : True > > UPDATEIFCOPY : False > > > > In [21]: x[:,[1,0]].flags > > Out[21]: > > C_CONTIGUOUS : False > > F_CONTIGUOUS : True > > OWNDATA : False > > WRITEABLE : True > > ALIGNED : True > > UPDATEIFCOPY : False > > > > So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be > > identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. > > It looks like x[:,[1,0]] is done by fancy indexing on the first index > and then transposing. I haven't looked at the implementation, though. > If you need the result to be C-contiguous without further copying, you > can do: > x.transpose()[[1,0],:].transpose().flags > which is horrible. I wouldn't rely on it, though, I'd use > asconguousarray afterward. Both copy and ascontiguousarray do the job: In [39]: ascontiguousarray(x[:,[1,0]]).flags Out[39]: C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [40]: x[:,[1,0]].copy().flags Out[40]: C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False The extra copy is unfortunate but I don't see how it is to be avoided. Maintaining order and contiguity in memory is not the goal of numpy, convenience and flexibility is. So interfacing with external C and FORTRAN is always going to need some care. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openopt at ukr.net Wed May 23 11:56:28 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 23 May 2007 18:56:28 +0300 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? Message-ID: <4654642C.8000607@ukr.net> hi all, I was very surprised that norm() is present in scipy.linalg but absent in numpy. Don't you think it's better to add the one to numpy? As for me, I use the func very intensively, and I don't want to write my own (slow) func in Python or use sqrt(dot(v,v)) or scipy.linalg.norm(v) (i.e. make dependence on scipy because of single function linalg.norm()) (as it is proposed in http://www.scipy.org/NumPy_for_Matlab_Users) and what about second argument to norm, as MATLAB provides? norm(x) = norm(x,2) by default, norm(x, inf) = max(abs(x)), norm(x, -inf) = min(abs(x)), norm(x,1) = sum(abs(x)), norm(x,p) = (x1^p+...)^(1/p) etc... Of course, I could write those 10-15 lines of code in Python, but I think C or Pyrex would be better and faster. D. From nwagner at iam.uni-stuttgart.de Wed May 23 12:00:08 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 23 May 2007 18:00:08 +0200 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <4654642C.8000607@ukr.net> References: <4654642C.8000607@ukr.net> Message-ID: <46546508.2070608@iam.uni-stuttgart.de> dmitrey wrote: > hi all, > I was very surprised that norm() is present in scipy.linalg but absent > in numpy. > Don't you think it's better to add the one to numpy? > As for me, I use the func very intensively, and I don't want to write my > own (slow) func in Python or use > > sqrt(dot(v,v)) > or > scipy.linalg.norm(v) (i.e. make dependence on scipy because of single > function linalg.norm()) > (as it is proposed in http://www.scipy.org/NumPy_for_Matlab_Users) > > and what about second argument to norm, as MATLAB provides? > norm(x) = norm(x,2) by default, norm(x, inf) = max(abs(x)), norm(x, > -inf) = min(abs(x)), norm(x,1) = sum(abs(x)), norm(x,p) = > (x1^p+...)^(1/p) etc... > Of course, I could write those 10-15 lines of code in Python, but I > think C or Pyrex would be better and faster. > D. > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > The second argument is available. Help on function norm in module scipy.linalg.basic: norm(x, ord=None) norm(x, ord=None) -> n Matrix or vector norm. Inputs: x -- a rank-1 (vector) or rank-2 (matrix) array ord -- the order of the norm. Comments: For arrays of any rank, if ord is None: calculate the square norm (Euclidean norm for vectors, Frobenius norm for matrices) For vectors ord can be any real number including Inf or -Inf. ord = Inf, computes the maximum of the magnitudes ord = -Inf, computes minimum of the magnitudes ord is finite, computes sum(abs(x)**ord,axis=0)**(1.0/ord) For matrices ord can only be one of the following values: ord = 2 computes the largest singular value ord = -2 computes the smallest singular value ord = 1 computes the largest column sum of absolute values ord = -1 computes the smallest column sum of absolute values ord = Inf computes the largest row sum of absolute values ord = -Inf computes the smallest row sum of absolute values ord = 'fro' computes the frobenius norm sqrt(sum(diag(X.H * X),axis=0)) For values ord < 0, the result is, strictly speaking, not a mathematical 'norm', but it may still be useful for numerical purposes. You cannot compute the norm of sparse matrices. 
http://projects.scipy.org/scipy/scipy/ticket/18 Nils From tim.hochberg at ieee.org Wed May 23 12:00:57 2007 From: tim.hochberg at ieee.org (Timothy Hochberg) Date: Wed, 23 May 2007 09:00:57 -0700 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: I'm not sure why the data ends up F_CONTIGUOUS, but it appears that you can get things to end up as C_CONTIGUOUS by transposing before the indexing and then transposing back. I don't think that this results in extra copies, but I'm not certain of that. >>> a = np.arange(6).reshape(3,2) >>> a array([[0, 1], [2, 3], [4, 5]]) >>> b1 = a[:,[1,0]] >>> a.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> b1.flags C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> b2 = a.transpose()[[(1,0)]].transpose() >>> b2.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> b1 == b2 array([[True, True], [True, True], [True, True]], dtype=bool) -- //=][=\\ tim.hochberg at ieee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed May 23 12:55:28 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 May 2007 11:55:28 -0500 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <4654642C.8000607@ukr.net> References: <4654642C.8000607@ukr.net> Message-ID: <46547200.3010904@gmail.com> dmitrey wrote: > hi all, > I was very surprised that norm() is present in scipy.linalg but absent > in numpy. > Don't you think it's better to add the one to numpy? In [237]: import numpy In [238]: numpy.linalg.norm? Type: function Base Class: Namespace: Interactive File: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.0.3.dev3795-py2.5-macosx-10.3-fat.egg/numpy/linalg/linalg.py Definition: numpy.linalg.norm(x, ord=None) Docstring: norm(x, ord=None) -> n Matrix or vector norm. Inputs: x -- a rank-1 (vector) or rank-2 (matrix) array ord -- the order of the norm. Comments: For arrays of any rank, if ord is None: calculate the square norm (Euclidean norm for vectors, Frobenius norm for matrices) For vectors ord can be any real number including Inf or -Inf. ord = Inf, computes the maximum of the magnitudes ord = -Inf, computes minimum of the magnitudes ord is finite, computes sum(abs(x)**ord,axis=0)**(1.0/ord) For matrices ord can only be one of the following values: ord = 2 computes the largest singular value ord = -2 computes the smallest singular value ord = 1 computes the largest column sum of absolute values ord = -1 computes the smallest column sum of absolute values ord = Inf computes the largest row sum of absolute values ord = -Inf computes the smallest row sum of absolute values ord = 'fro' computes the frobenius norm sqrt(sum(diag(X.H * X),axis=0)) For values ord < 0, the result is, strictly speaking, not a mathematical 'norm', but it may still be useful for numerical purposes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From openopt at ukr.net Wed May 23 13:01:23 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 23 May 2007 20:01:23 +0300 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46547200.3010904@gmail.com> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> Message-ID: <46547363.7090000@ukr.net> Ok, then appropriate changes in http://www.scipy.org/NumPy_for_Matlab_Users should be made D. Robert Kern wrote: > dmitrey wrote: > >> hi all, >> I was very surprised that norm() is present in scipy.linalg but absent >> in numpy. >> Don't you think it's better to add the one to numpy? >> > > In [237]: import numpy > > In [238]: numpy.linalg.norm? > Type: function > Base Class: > Namespace: Interactive > File: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.0.3.dev3795-py2.5-macosx-10.3-fat.egg/numpy/linalg/linalg.py > Definition: numpy.linalg.norm(x, ord=None) > Docstring: > norm(x, ord=None) -> n > > Matrix or vector norm. > > Inputs: > > x -- a rank-1 (vector) or rank-2 (matrix) array > ord -- the order of the norm. > > Comments: > For arrays of any rank, if ord is None: > calculate the square norm (Euclidean norm for vectors, > Frobenius norm for matrices) > > For vectors ord can be any real number including Inf or -Inf. > ord = Inf, computes the maximum of the magnitudes > ord = -Inf, computes minimum of the magnitudes > ord is finite, computes sum(abs(x)**ord,axis=0)**(1.0/ord) > > For matrices ord can only be one of the following values: > ord = 2 computes the largest singular value > ord = -2 computes the smallest singular value > ord = 1 computes the largest column sum of absolute values > ord = -1 computes the smallest column sum of absolute values > ord = Inf computes the largest row sum of absolute values > ord = -Inf computes the smallest row sum of absolute values > ord = 'fro' computes the frobenius norm sqrt(sum(diag(X.H * X),axis=0)) > > For values ord < 0, the result is, strictly speaking, not a > mathematical 'norm', but it may still be useful for numerical purposes. > > From robert.kern at gmail.com Wed May 23 13:18:56 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 May 2007 12:18:56 -0500 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46547363.7090000@ukr.net> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> <46547363.7090000@ukr.net> Message-ID: <46547780.2090903@gmail.com> dmitrey wrote: > Ok, then appropriate changes in > http://www.scipy.org/NumPy_for_Matlab_Users should be made Go for it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Wed May 23 13:19:14 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 May 2007 11:19:14 -0600 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46547200.3010904@gmail.com> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> Message-ID: On 5/23/07, Robert Kern wrote: > > dmitrey wrote: > > hi all, > > I was very surprised that norm() is present in scipy.linalg but absent > > in numpy. > > Don't you think it's better to add the one to numpy? > > In [237]: import numpy > > In [238]: numpy.linalg.norm? 
> Type: function > Base Class: > Namespace: Interactive > File: > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy- > 1.0.3.dev3795-py2.5-macosx-10.3-fat.egg/numpy/linalg/linalg.py > Definition: numpy.linalg.norm(x, ord=None) > Docstring: > norm(x, ord=None) -> n > > Matrix or vector norm. > > Inputs: > > x -- a rank-1 (vector) or rank-2 (matrix) array > ord -- the order of the norm. > > Comments: > For arrays of any rank, if ord is None: > calculate the square norm (Euclidean norm for vectors, > Frobenius norm for matrices) > > For vectors ord can be any real number including Inf or -Inf. > ord = Inf, computes the maximum of the magnitudes > ord = -Inf, computes minimum of the magnitudes > ord is finite, computes sum(abs(x)**ord,axis=0)**(1.0/ord) > > For matrices ord can only be one of the following values: > ord = 2 computes the largest singular value > ord = -2 computes the smallest singular value > ord = 1 computes the largest column sum of absolute values > ord = -1 computes the smallest column sum of absolute values > ord = Inf computes the largest row sum of absolute values > ord = -Inf computes the smallest row sum of absolute values > ord = 'fro' computes the frobenius norm sqrt(sum(diag(X.H * > X),axis=0)) > > For values ord < 0, the result is, strictly speaking, not a > mathematical 'norm', but it may still be useful for numerical > purposes. The enhancement I would like is an axis argument, so that the norm of an array of vectors could be taken. That is often how I want to use the norm. It is, of course, not that much trouble to put together such a function using current numpy, but it would be nice if it were standard. I note that the current norm is like that in MatLab and I think that is a defect in MatLab also. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Shawn.Gong at drdc-rddc.gc.ca Wed May 23 13:46:15 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Wed, 23 May 2007 13:46:15 -0400 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join In-Reply-To: <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> Hi Robert and list, My colleague said that it certainly looks like a missing thread library. It looks like the problem is that lapack_lite was compiled multi-threaded and can't find the thread library. Any solution to that? BTW, I tried 1.0.2 and got the same error. Thanks, Shawn -----Original Message----- From: Gong, Shawn (Contractor) Sent: Tuesday, May 22, 2007 1:28 PM To: 'Discussion of Numerical Python' Subject: RE: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed Thank you David M. Cooke and Robert. Now I changed directory and ran python, Got further and hit this error message: >python Python 2.3.6 (#9, May 18 2007, 10:22:59) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/__init__.py", line 40, in ? import linalg File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/__init__. py", line 4, in ? 
from linalg import * File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/linalg.py ", line 25, in ? from numpy.linalg import lapack_lite ImportError: /home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/lapack_lit e.so: undefined symbol: pthread_join _______________________________________________ From oliphant.travis at ieee.org Wed May 23 14:12:51 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 12:12:51 -0600 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> Message-ID: <46548423.3040700@ieee.org> Albert Strasheim wrote: > Hello all > > Consider the following example: > > In [43]: x = N.zeros((3,2)) > > In [44]: x.flags > Out[44]: > C_CONTIGUOUS : True > F_CONTIGUOUS : False > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > In [45]: x[:,[1,0]].flags > Out[45]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy > indexed x? I'm running NumPy 1.0.3.dev3792 here. > In this case, yes. When you use fancy-indexing with standard slicing (an extension that NumPy added), the implementation uses transposes under the covers quite often. So, you can't rely on the output of fancy indexing being a C-contiguous array (even though it won't be referring to the same data as the original array). So, you are exposing an implementation detail here. To interface to code that requires C-contiguous or F-contiguous data, you have to check the flags and make an appropriate copy. -Travis From oliphant.travis at ieee.org Wed May 23 14:14:56 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 12:14:56 -0600 Subject: [Numpy-discussion] Question about flags of fancy indexed array In-Reply-To: References: <004e01c79d37$6f61a1e0$0100a8c0@sun.ac.za> <20070523142455.GA31485@dogbert.sdsl.sun.ac.za> Message-ID: <465484A0.2020303@ieee.org> Charles R Harris wrote: > > > On 5/23/07, *Albert Strasheim* > wrote: > > Hello all > > On Wed, 23 May 2007, Anne Archibald wrote: > > > On 23/05/07, Albert Strasheim > wrote: > > > > > Consider the following example: > > > > First a comment: almost nobody needs to care how the data is stored > > internally. Try to avoid looking at the flags unless you're > > interfacing with a C library. The nice feature of numpy is that it > > hides all that junk - strides, contiguous storage, iteration, what > > have you - so that you don't have to deal with it. > > As luck would have it, I am interfacing with a C library. > > > > Is it correct that the F_CONTIGUOUS flag is set in the case of > the fancy > > > indexed x? I'm running NumPy 1.0.3.dev3792 here. > > > > Numpy arrays are always stored in contiguous blocks of memory with > > uniform strides. The "CONTIGUOUS" flag actually means something > > totally different, which is unfortunate, but in any case, "fancy > > indexing" can't be done as a simple reindexing operation. It > must make > > a copy of the array. So what you're seeing is the flags of a > fresh new > > array, created from scratch (and numpy always creates arrays in C > > order internally, though that is an implementation detail you should > > not rely on). > > If you are correct that this is in fact a fresh new array, I really > don't understand where the values of these flags. 
To recap: > > In [19]: x = N.zeros((3,2)) > > In [20]: x.flags > Out[20]: > C_CONTIGUOUS : True > F_CONTIGUOUS : False > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > In [21]: x[:,[1,0]].flags > Out[21]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > So since x and x[:,[1,0]] are both new arrays, shouldn't their > flags be > identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True. > > > The contiguous refers to how stuff is layed out in memory. In this > case it appears that fancy indexing creates the new array by first > copying column 1, then column 2, so that the new array is indeed > F_CONTIGUOUS. Assuming I correctly understand the behaviour of the > tostring argument, which is debatable, that is indeed what happens. Yes, you are using the 'A' argument correctly. For contiguous arrays, you can always get a look at the actual buffer, using x.data[:] or y.data[:] You can't get a buffer for discontiguous arrays. -Travis From erin.sheldon at gmail.com Wed May 23 14:12:08 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 23 May 2007 14:12:08 -0400 Subject: [Numpy-discussion] More scalar math issues Message-ID: <331116dc0705231112k14c0bf6do7288d35c3b359c35@mail.gmail.com> Hi all- The types of objects are not preserved through mathematical operations when numpy scalars are involved. This has been discussed on this list before. I think the following error is problematic, however. >>> x=N.array(0.1) >>> type(x) >>> y = N.cos(x) # the type has changed >>> type(y) # Now some functions cannot be used on the data, for example. >>> N.arccos(y,y) : return arrays must be of ArrayType So if I write a function which is supposed to work with any numpy object it will fail with this type of code. One solution would be to do this at the top of every file, for each input: var = N.array(var_input, ndmin=1, copy=False) or I could do this in the code: y=x.copy() N.cos(x,y) but wouldn't it be better to just preserve the type through functions like cos? Or does this cause repercussions in other places? Erin From oliphant.travis at ieee.org Wed May 23 14:24:24 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 12:24:24 -0600 Subject: [Numpy-discussion] More scalar math issues In-Reply-To: <331116dc0705231112k14c0bf6do7288d35c3b359c35@mail.gmail.com> References: <331116dc0705231112k14c0bf6do7288d35c3b359c35@mail.gmail.com> Message-ID: <465486D8.6000704@ieee.org> Erin Sheldon wrote: > y=x.copy() > N.cos(x,y) > but wouldn't it be better to just preserve the type through functions > like cos? Or does this cause repercussions in other places? > This causes repercussions in other places. This can be re-visited for 1.1, but for now, all ufuncs that return 0-d arrays are converted to array scalars. -Travis From openopt at ukr.net Wed May 23 14:52:10 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 23 May 2007 21:52:10 +0300 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46547780.2090903@gmail.com> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> <46547363.7090000@ukr.net> <46547780.2090903@gmail.com> Message-ID: <46548D5A.4090007@ukr.net> Robert Kern wrote: > Go for it. > > I'm sorry, I don't know how to edit the page and I guess I haven't enough permissions. D. 
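Since the check-then-copy pattern keeps coming up in the flags thread above, here is a minimal sketch of it in one place, using Albert's 3x2 example (N is numpy as in the posts; the helper name as_c_contiguous is purely illustrative):

import numpy as N

def as_c_contiguous(a):
    # Fancy indexing can hand back an F-contiguous result (see the
    # flags dumps above), so normalize before handing a.data to C.
    if not a.flags['C_CONTIGUOUS']:
        a = N.ascontiguousarray(a)   # copies only when actually needed
    return a

x = N.zeros((3, 2))
y = x[:, [1, 0]]                     # F_CONTIGUOUS, as reported above
z = as_c_contiguous(y)
assert z.flags['C_CONTIGUOUS']
assert (z == y).all()                # same values, C-ordered buffer

The same idea with N.asfortranarray covers the Fortran-interfacing case.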
From robert.kern at gmail.com Wed May 23 14:59:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 May 2007 13:59:25 -0500 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46548D5A.4090007@ukr.net> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> <46547363.7090000@ukr.net> <46547780.2090903@gmail.com> <46548D5A.4090007@ukr.net> Message-ID: <46548F0D.9020504@gmail.com> dmitrey wrote: > Robert Kern wrote: >> Go for it. >> > I'm sorry, I don't know how to edit the page and I guess I haven't > enough permissions. Create an account here: http://www.scipy.org/UserPreferences -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Wed May 23 15:18:32 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 23 May 2007 22:18:32 +0300 Subject: [Numpy-discussion] norm in scipy.linalg but not numpy? In-Reply-To: <46548F0D.9020504@gmail.com> References: <4654642C.8000607@ukr.net> <46547200.3010904@gmail.com> <46547363.7090000@ukr.net> <46547780.2090903@gmail.com> <46548D5A.4090007@ukr.net> <46548F0D.9020504@gmail.com> Message-ID: <46549388.8000402@ukr.net> Ok, I have made the changes. Robert Kern wrote: > dmitrey wrote: > >> Robert Kern wrote: >> >>> Go for it. >>> >>> >> I'm sorry, I don't know how to edit the page and I guess I haven't >> enough permissions. >> > > Create an account here: > > http://www.scipy.org/UserPreferences > > From oliphant.travis at ieee.org Wed May 23 15:43:20 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 13:43:20 -0600 Subject: [Numpy-discussion] numpy and freeze.py In-Reply-To: References: Message-ID: <46549958.7080900@ieee.org> Hanno Klemm wrote: > Hi, > > I want to use freeze.py on code that heavily relies on numpy. If I > just try > > python2.5 /scratch/src/Python-2.5/Tools/freeze/freeze.py pylay.py > I've seen people use NumPy with programs like freeze before. You have to take into account that multiarray is a compiled extension module. Thus, you usually have to do something (like write a description file or something like that) to make sure it gets included in the distribution tree. I've never done it, so I am not of much help. -Travis From oliphant.travis at ieee.org Wed May 23 19:31:50 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 17:31:50 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 released Message-ID: <4654CEE6.9000703@ieee.org> I'm pleased to announce the release of NumPy 1.0.3 Hopefully, this release will work better with multiple interpreters as well as having some significant bugs fixed. Other changes include * x/y follows Python standard on mixed-sign division * iinfo added to provide information on integer data-types * improvements to SwIG typemaps, numpy.distutils, and f2py * improvements to separator handling in fromfile and fromstring * many, many bug fixes Thank you to everybody who contributed to the recent release. 
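One of the additions listed above in action: a quick sketch of iinfo (int16 is chosen deliberately, since its limits come up in the median overflow report a few messages down):

import numpy as N

# iinfo reports the representable range of an integer dtype
print N.iinfo(N.int16).min, N.iinfo(N.int16).max   # -32768 32767
print N.iinfo(N.int32).max                         # 2147483647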
Best regards, NumPy Developers http://numpy.scipy.org From fullung at gmail.com Wed May 23 19:44:26 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 24 May 2007 01:44:26 +0200 Subject: [Numpy-discussion] NumPy 1.0.3 released In-Reply-To: <4654CEE6.9000703@ieee.org> References: <4654CEE6.9000703@ieee.org> Message-ID: <20070523234426.GA9481@dogbert.sdsl.sun.ac.za> On Wed, 23 May 2007, Travis Oliphant wrote: > I'm pleased to announce the release of NumPy 1.0.3 > > Hopefully, this release will work better with multiple interpreters as > well as having some significant bugs fixed. Great stuff. Could you (or someone) please take care of the last 4 active tickets for the 1.0.3 milestone? http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release I think at least 113, 114 and 385 can be closed. Not sure about 228. Cheers. Albert From mike.ressler at alum.mit.edu Wed May 23 20:03:07 2007 From: mike.ressler at alum.mit.edu (Mike Ressler) Date: Wed, 23 May 2007 17:03:07 -0700 Subject: [Numpy-discussion] Six-legged feature in median function Message-ID: <268febdf0705231703h50df4b03uf941565d0c2605e4@mail.gmail.com> Bumped into the following in numpy-1.0.2 and 1.0.3 (of course :-) on both 32-bit and 64-bit linux boxes: >>> import numpy as nm >>> a=nm.zeros(1000000,dtype='Int32')-30000 >>> nm.median(a) -30000.0 >>> a=nm.zeros(1000000,dtype='Int16')-30000 >>> nm.median(a) Warning: overflow encountered in short_scalars 2768.0 Something in the computation of median overflows when there are a large number of values near the ends of an Int16 (like my image data). I can work around this, of course, by casting the data as something larger, but this one could definitely use a tweak. Mike -- mike.ressler at alum.mit.edu From brian.lee.hawthorne at gmail.com Thu May 24 00:42:34 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Wed, 23 May 2007 21:42:34 -0700 Subject: [Numpy-discussion] numpy testing Message-ID: <796269930705232142q914d807yd37d310a3e2a3e11@mail.gmail.com> Hello, I'm having a bit of trouble with numpy testing not finding my tests. It seems like a bug, but maybe it's a subtle feature. I constructed the simplest possible straw man to illustrate my problem (attached as foo.tgz). The directory structure looks like this: a/ __init__.py foo.py tests/ test_foo.py To witness the problem: % tar zxf foo.tgz % python -c 'import a; a.test()' % python -c 'import a; a.testall()' The test method (of NumpyTest class) does not pick up the test, but testall does. I think I've followed all the naming rules, even included test_, check_, and bench_ methods in the test case, in case they were treated differently. Any suggestions? I encountered this problem while attempting to isolate another (maybe related) problem with numpy testing. I'm working on the nipy project ( neuroimaging.scipy.org) and we are using the numpy test system. During a refactoring session, I noticed that when I moved some existing functionality (along with its tests) into a new identically named module under a different package (like a.b.foo and a.c.foo), these particular tests were no longer found. I thought it might have something to do with the modules having the same name, but when I tried to isolate the problem in a mockup, I ran into the problem described above. So, first things first. Maybe it's something really dumb that I can't see because I've been staring at it for too long. 
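Back to Mike's median report above for a moment: the bogus 2768.0 is consistent with the two middle values being added in int16 before the divide (-30000 + -30000 wraps to 5536, and 5536/2 = 2768), so the cast Mike alludes to is indeed enough. A sketch of that workaround:

import numpy as N

a = N.zeros(1000000, dtype='Int16') - 30000
# Widen the dtype so the middle-value arithmetic cannot wrap around
print N.median(a.astype('Int32'))     # -30000.0
print N.median(a.astype(N.float64))   # -30000.0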
Thanks in advance for any suggestions, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: foo.tgz Type: application/x-gzip Size: 364 bytes Desc: not available URL: From openopt at ukr.net Thu May 24 02:12:14 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 24 May 2007 09:12:14 +0300 Subject: [Numpy-discussion] NumPy 1.0.3 released In-Reply-To: <4654CEE6.9000703@ieee.org> References: <4654CEE6.9000703@ieee.org> Message-ID: <46552CBE.9010901@ukr.net> When I try to install numpy from sourses, either 1.0.2 or 1.0.3, I always get the debugged program raised the exception unhandled ".../site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode" File: /usr/local/lib/python2.5/site-packages/numpy/core/__init__.py. Line: 5 Currently I use numpy1.0b.deb Does anyone know how to fix the problem? Thx, D. Travis Oliphant wrote: > I'm pleased to announce the release of NumPy 1.0.3 > > Hopefully, this release will work better with multiple interpreters as > well as having some significant bugs fixed. > > Other changes include > > * x/y follows Python standard on mixed-sign division > * iinfo added to provide information on integer data-types > * improvements to SwIG typemaps, numpy.distutils, and f2py > * improvements to separator handling in fromfile and fromstring > * many, many bug fixes > > Thank you to everybody who contributed to the recent release. > > Best regards, > > NumPy Developers > http://numpy.scipy.org > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > From oliphant.travis at ieee.org Thu May 24 02:36:11 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 May 2007 00:36:11 -0600 Subject: [Numpy-discussion] NumPy 1.0.3 released In-Reply-To: <46552CBE.9010901@ukr.net> References: <4654CEE6.9000703@ieee.org> <46552CBE.9010901@ukr.net> Message-ID: <4655325B.2010501@ieee.org> dmitrey wrote: > When I try to install numpy from sourses, either 1.0.2 or 1.0.3, I > always get > the debugged program raised the exception unhandled > ".../site-packages/numpy/core/multiarray.so: > undefined symbol: PyUnicodeUCS2_FromUnicode" > File: /usr/local/lib/python2.5/site-packages/numpy/core/__init__.py. > Line: 5 > > There is most likely a problem with your Python header files. For some reason you are running a "wide"-build of Python (where unicode characters are 4-bytes each), but are compiling extensions to use 2-byte unicode functions. It's possible you have a /usr/local copy of Python and a /usr copy of Python built with different widths for the unicode characters and you are grabbing the wrong include files when you compile. -Travis From pearu at cens.ioc.ee Thu May 24 05:59:26 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 24 May 2007 11:59:26 +0200 Subject: [Numpy-discussion] numpy testing In-Reply-To: <796269930705232142q914d807yd37d310a3e2a3e11@mail.gmail.com> References: <796269930705232142q914d807yd37d310a3e2a3e11@mail.gmail.com> Message-ID: <465561FE.4040409@cens.ioc.ee> Brian Hawthorne wrote: > Hello, > I'm having a bit of trouble with numpy testing not finding my tests. It > seems like a bug, but maybe it's a subtle feature. I constructed the > simplest possible straw man to illustrate my problem (attached as foo.tgz). 
> The directory structure looks like this: > > a/ > __init__.py > foo.py > tests/ > test_foo.py > > To witness the problem: > > % tar zxf foo.tgz > % python -c 'import a; a.test()' > % python -c 'import a; a.testall()' > > The test method (of NumpyTest class) does not pick up the test, but testall > does. I think I've followed all the naming rules, even included test_, > check_, and bench_ methods in the test case, in case they were treated > differently. Any suggestions? This is because the package does not import foo. If you would add import foo to a/__init__.py, all tests will be picked up. Regards, Pearu From fullung at gmail.com Thu May 24 06:47:01 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 24 May 2007 12:47:01 +0200 Subject: [Numpy-discussion] Another flags question Message-ID: <007f01c79df0$dbffba30$0100a8c0@sun.ac.za> Hello all Me vs the flags again. I found another case where the flags aren't what I would expect: In [118]: x = N.array(N.arange(24.0).reshape(6,4), order='F') In [119]: x Out[119]: array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [ 12., 13., 14., 15.], [ 16., 17., 18., 19.], [ 20., 21., 22., 23.]]) In [120]: x[:,0:1] Out[120]: array([[ 0.], [ 4.], [ 8.], [ 12.], [ 16.], [ 20.]]) # start=0, stop=1, step=2 In [121]: x[:,0:1:2] Out[121]: array([[ 0.], [ 4.], [ 8.], [ 12.], [ 16.], [ 20.]]) In [122]: x[:,0:1].flags Out[122]: C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [123]: x[:,0:1:2].flags Out[123]: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [124]: x[:,0:1].strides Out[124]: (8, 48) In [125]: x[:,0:1:2].strides Out[125]: (8, 96) The views are slightly different (as can be seen from at least the strides), but I'd expect F_CONTIGUOUS to be true in both cases. I'm guessing that somewhere this special case isn't being checked for, which translates into a "missed opportunity" for marking the view as contiguous. Probably not a bug per se, but I thought I'd mention it here. Cheers, Albert From brian.lee.hawthorne at gmail.com Thu May 24 07:56:35 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Thu, 24 May 2007 04:56:35 -0700 Subject: [Numpy-discussion] numpy testing In-Reply-To: <465561FE.4040409@cens.ioc.ee> References: <796269930705232142q914d807yd37d310a3e2a3e11@mail.gmail.com> <465561FE.4040409@cens.ioc.ee> Message-ID: <796269930705240456v6e143d6p1613978133524b01@mail.gmail.com> Ah, thanks for the tip. That did the trick. Cheers -Brian On 5/24/07, Pearu Peterson wrote: > > > > Brian Hawthorne wrote: > > Hello, > > I'm having a bit of trouble with numpy testing not finding my tests. It > > seems like a bug, but maybe it's a subtle feature. I constructed the > > simplest possible straw man to illustrate my problem (attached as > foo.tgz). > > The directory structure looks like this: > > > > a/ > > __init__.py > > foo.py > > tests/ > > test_foo.py > > > > To witness the problem: > > > > % tar zxf foo.tgz > > % python -c 'import a; a.test()' > > % python -c 'import a; a.testall()' > > > > The test method (of NumpyTest class) does not pick up the test, but > testall > > does. I think I've followed all the naming rules, even included test_, > > check_, and bench_ methods in the test case, in case they were treated > > differently. Any suggestions? > > This is because the package does not import foo. If you would add > import foo > to a/__init__.py, all tests will be picked up. 
> > Regards, > Pearu > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Thu May 24 09:40:18 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 24 May 2007 15:40:18 +0200 Subject: [Numpy-discussion] f2py and C functions/subroutines/structs Message-ID: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> Hi all, I hope this is the right mailing list to post my question to. I'm trying to make some easy C code work with Python by using f2py. For example, take the test.c file: ---------------------------------- typedef struct data { int a; double b; } DATA; /*function with struct*/ DATA incr(DATA x) { DATA y; y.a = x.a + 1; y.b = x.b + 1.; return(y); } /*function: return value*/ int incr0(int x) { return(x+1); } /*subroutine: no return value*/ void incr1(int *x) { *x += 1; } ---------------------------------- If I do: $ f2py -c test.c -m prova all seems fine, but inside ipython I get this error: In [1]: import prova --------------------------------------------------------------------------- Traceback (most recent call last) /xlv1/labsoi_devices/bollalo001/work/test/python/ in () : dynamic module does not define init function (initprova) I think I'm missing a correct .pyf file to do: $ f2py -c prova2.pyf test.c I tried writing prova2.pyf by my own, because doing: $ f2py test.c -h prova2.pyf -m prova gives me an empty prova2.pyf, but I wasn't able to do it!!! Can anyone of you kindly show me how to write it? Or point me to a good tutorial to read? I found only very little information on www.scipy.org. Thank you in advance, Lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu May 24 09:47:41 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 24 May 2007 07:47:41 -0600 Subject: [Numpy-discussion] Another flags question In-Reply-To: <007f01c79df0$dbffba30$0100a8c0@sun.ac.za> References: <007f01c79df0$dbffba30$0100a8c0@sun.ac.za> Message-ID: On 5/24/07, Albert Strasheim wrote: > > Hello all > > Me vs the flags again. I found another case where the flags aren't what I > would expect: > > In [118]: x = N.array(N.arange(24.0).reshape(6,4), order='F') > > In [119]: x > Out[119]: > array([[ 0., 1., 2., 3.], > [ 4., 5., 6., 7.], > [ 8., 9., 10., 11.], > [ 12., 13., 14., 15.], > [ 16., 17., 18., 19.], > [ 20., 21., 22., 23.]]) > > In [120]: x[:,0:1] > Out[120]: > array([[ 0.], > [ 4.], > [ 8.], > [ 12.], > [ 16.], > [ 20.]]) > > # start=0, stop=1, step=2 > In [121]: x[:,0:1:2] > Out[121]: > array([[ 0.], > [ 4.], > [ 8.], > [ 12.], > [ 16.], > [ 20.]]) > > In [122]: x[:,0:1].flags > Out[122]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > The column is C_CONTIGUOUS also. Another example that should be both C_CONTIGUOUS and F_CONTIGUOUS is In [1]: ones(2)[:,newaxis].flags Out[1]: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False I think indices that can only have the value 0 should be ignored in determining contiguity, but it seems we are not doing that at the moment.
Another example: In [3]: array([1])[:,newaxis].flags Out[3]: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [123]: x[:,0:1:2].flags > Out[123]: > C_CONTIGUOUS : False > F_CONTIGUOUS : False > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > In [124]: x[:,0:1].strides > Out[124]: (8, 48) > > In [125]: x[:,0:1:2].strides > Out[125]: (8, 96) > > The views are slightly different (as can be seen from at least the > strides), > but I'd expect F_CONTIGUOUS to be true in both cases. I'm guessing that > somewhere this special case isn't being checked for, which translates into > a > "missed opportunity" for marking the view as contiguous. Probably not a > bug > per se, but I thought I'd mention it here. Contiguous detection misses a few opportunities but it is hard to get all the cases right. But maybe we can improve it a bit more. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From williams at astro.ox.ac.uk Thu May 24 09:49:22 2007 From: williams at astro.ox.ac.uk (Michael Williams) Date: Thu, 24 May 2007 14:49:22 +0100 Subject: [Numpy-discussion] NumPy 1.0.3 released In-Reply-To: <4654CEE6.9000703@ieee.org> References: <4654CEE6.9000703@ieee.org> Message-ID: <20070524134921.GA7527@astro.ox.ac.uk> On Wed, May 23, 2007 at 05:31:50PM -0600, Travis Oliphant wrote: > I'm pleased to announce the release of NumPy 1.0.3 numpy 1.0.3 is causing a warning on scipy 0.5.2: In [1]: import numpy In [2]: numpy.__version__ Out[2]: '1.0.3' In [3]: import scipy /Users/mike/Library/Python/2.5/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test In [4]: scipy.__version__ Out[4]: '0.5.2' I wasn't getting this error with numpy 1.0.1 (I skipped 1.0.2, but would be happy to check if it would be useful). Assuming that I can live with the warning message, is there anything to worry about here? Do I need to upgrade to scipy svn? -- Mike From gael.varoquaux at normalesup.org Thu May 24 09:50:46 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 24 May 2007 15:50:46 +0200 Subject: [Numpy-discussion] f2py and C functions/subroutines/structs In-Reply-To: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> References: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> Message-ID: <20070524135035.GB9378@clipper.ens.fr> On Thu, May 24, 2007 at 03:40:18PM +0200, lorenzo bolla wrote: > I hope this is the right mailing list to post my question to. > I'm trying to make some easy C code work with Python by using f2py. I can't help too much on that as I gave up using f2py to wrap C a while ago, but I think you are much better off using ctypes to call C functions. It is dead-easy. I don't know about the overhead, if it is an issue in your case.
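To make the ctypes route concrete for Lorenzo's test.c, here is a sketch (it assumes test.c has been built as a shared library, say gcc -shared -o libtest.so test.c; the library name is hypothetical, and ctypes ships with Python 2.5):

import ctypes

class DATA(ctypes.Structure):
    # mirrors: typedef struct data { int a; double b; } DATA;
    _fields_ = [('a', ctypes.c_int), ('b', ctypes.c_double)]

lib = ctypes.CDLL('./libtest.so')
lib.incr.restype = DATA                      # struct returned by value
lib.incr.argtypes = [DATA]
lib.incr0.restype = ctypes.c_int
lib.incr0.argtypes = [ctypes.c_int]
lib.incr1.restype = None                     # void subroutine
lib.incr1.argtypes = [ctypes.POINTER(ctypes.c_int)]

print lib.incr0(41)                          # 42
n = ctypes.c_int(41)
lib.incr1(ctypes.byref(n))                   # increments through the pointer
print n.value                                # 42
d = lib.incr(DATA(1, 2.0))
print d.a, d.b                               # 2 3.0

Note that the struct case, which the .pyf route below stumbles on, is just a Structure subclass here.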
My 2 euro cents, Gaël From gael.varoquaux at normalesup.org Thu May 24 09:52:26 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 24 May 2007 15:52:26 +0200 Subject: [Numpy-discussion] f2py and C functions/subroutines/structs In-Reply-To: <20070524135035.GB9378@clipper.ens.fr> References: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> <20070524135035.GB9378@clipper.ens.fr> Message-ID: <20070524135218.GC9378@clipper.ens.fr> On Thu, May 24, 2007 at 03:50:40PM +0200, Gael Varoquaux wrote: > On Thu, May 24, 2007 at 03:40:18PM +0200, lorenzo bolla wrote: > > I hope this is the right mailing list to post my question to. > > I'm trying to make some easy C code work with Python by using f2py. > I can't help too much on that as I gave up using f2py to wrap C a while > ago, but I think you are much better off using ctypes to call C > functions. It is dead-easy. Oops, forgot the manual! Have a look at http://www.scipy.org/Cookbook/Ctypes to see how to easily call C functions with numpy. Gaël From pgmdevlist at gmail.com Thu May 24 10:08:33 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 May 2007 10:08:33 -0400 Subject: [Numpy-discussion] f2py and C functions/subroutines/structs In-Reply-To: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> References: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> Message-ID: <200705241008.33692.pgmdevlist@gmail.com> Lorenzo, you can indeed use f2py to write extensions around some C code: http://cens.ioc.ee/projects/f2py2e/usersguide/index.html http://www.scipy.org/Cookbook/f2py_and_NumPy I think you should also be able to find some actual examples in the scipy sources... From lbolla at gmail.com Thu May 24 10:34:56 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 24 May 2007 16:34:56 +0200 Subject: [Numpy-discussion] f2py and C functions/subroutines/structs In-Reply-To: <200705241008.33692.pgmdevlist@gmail.com> References: <80c99e790705240640l2de0cf60yaa2a9da924af9a1a@mail.gmail.com> <200705241008.33692.pgmdevlist@gmail.com> Message-ID: <80c99e790705240734x54339312n1f2911835369d5a0@mail.gmail.com> I tried to write my own prova2.pyf and this is it: ---------------------------------------------------------------- ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module prova interface function incr(x) real, dimension(2), intent(c) :: incr real, dimension(2), intent(c,in) :: x end function incr function incr0(x) integer intent(c) :: incr0 integer intent(c,in) :: x end function incr0 subroutine incr1(x) intent(c) :: incr1 integer intent(c,in,out) :: x end subroutine incr1 end interface end python module prova ! This file was auto-generated with f2py (version:2_3473). ! See http://cens.ioc.ee/projects/f2py2e/ ---------------------------------------------------------------- Unfortunately, only the function incr0 works. incr1 gives a segmentation fault and incr does not return an array (or better a struct) as I would... any hints? thank you! L. On 5/24/07, Pierre GM wrote: > > Lorenzo, > you can indeed use f2py to write extensions around some C code: > > http://cens.ioc.ee/projects/f2py2e/usersguide/index.html > http://www.scipy.org/Cookbook/f2py_and_NumPy > > I think you should also be able to find some actual examples in the scipy > sources...
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu May 24 10:56:03 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 24 May 2007 15:56:03 +0100 Subject: [Numpy-discussion] Median / mean functionality confusing? Message-ID: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> Hi, Does anyone else find this unexpected? In [93]: import numpy as N In [94]: a = N.arange(10).reshape(5,2) In [95]: N.mean(a) Out[95]: 4.5 In [96]: N.median(a) Out[96]: array([4, 5]) i.e. shouldn't median have the same axis, dtype, default axis=None behavior as mean? Best, Matthew From svetosch at gmx.net Thu May 24 11:09:52 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Thu, 24 May 2007 17:09:52 +0200 Subject: [Numpy-discussion] Median / mean functionality confusing? In-Reply-To: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> Message-ID: <4655AAC0.3020909@gmx.net> Matthew Brett schrieb: > Hi, > > Does anyone else find this unexpected? > > In [93]: import numpy as N > In [94]: a = N.arange(10).reshape(5,2) > In [95]: N.mean(a) > Out[95]: 4.5 > In [96]: N.median(a) > Out[96]: array([4, 5]) > > i.e. shouldn't median have the same axis, dtype, default axis=None > behavior as mean? > Well according to the docstring it doesn't even have an axis argument: median(m) median(m) returns a median of m along the first dimension of m. but I agree that it would be useful if it did! Regarding dtype, I disagree. Why do you want to force the result to be a float? cheers, sven From matthew.brett at gmail.com Thu May 24 11:27:53 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 24 May 2007 16:27:53 +0100 Subject: [Numpy-discussion] Median / mean functionality confusing? In-Reply-To: <4655AAC0.3020909@gmx.net> References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> Message-ID: <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> > Regarding dtype, I disagree. Why do you want to force the result to be a > float? Fair comment - I really meant the axis, and axis=None difference. Matthew From millman at berkeley.edu Thu May 24 11:59:50 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 24 May 2007 08:59:50 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 released In-Reply-To: <20070524134921.GA7527@astro.ox.ac.uk> References: <4654CEE6.9000703@ieee.org> <20070524134921.GA7527@astro.ox.ac.uk> Message-ID: On 5/24/07, Michael Williams wrote: > In [3]: import scipy > /Users/mike/Library/Python/2.5/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code > test = ScipyTest().test > > In [4]: scipy.__version__ > Out[4]: '0.5.2' > > I wasn't getting this error with numpy 1.0.1 (I skipped 1.0.2, but would > be happy to check if it would be useful). Assuming that I can live with > the warning message, is there anything to worry about here? Do I need to > upgrade to scipy svn? You can safely ignore it. It is just a warning; the code still works as before. 
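For anyone who finds the warning itself a nuisance (in test logs, say), the standard-library warnings module can filter it out before the import that triggers it. A sketch; the message argument is a regular expression matched against the start of the warning text:

import warnings

# Silence just this deprecation message, not all DeprecationWarnings
warnings.filterwarnings('ignore',
                        message='ScipyTest is now called NumpyTest')

import scipy   # now imports quietly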
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From Chris.Barker at noaa.gov Thu May 24 12:11:46 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 24 May 2007 09:11:46 -0700 Subject: [Numpy-discussion] Median / mean functionality confusing? In-Reply-To: <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> Message-ID: <4655B942.9030308@noaa.gov> Matthew Brett wrote: >> Regarding dtype, I disagree. Why do you want to force the result to be a >> float? well, what's the median of (1, 2, 3, 4) ? I learned it is 2.5 (Zar, Jerrold H. 1984. Biostatistical Analysis. Prentice Hall.) Of course, the median of an odd number of integers would be an integer. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu May 24 12:13:46 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 24 May 2007 09:13:46 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X In-Reply-To: <4654CEE6.9000703@ieee.org> References: <4654CEE6.9000703@ieee.org> Message-ID: <4655B9BA.3050301@noaa.gov> Anyone have plans to build the latest release to put up on pythonmac? I could do it, but I'm busy, not the most qualified, and don't want to duplicate effort. If no one else is, though, I'll try to fit it in. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From svetosch at gmx.net Thu May 24 12:26:40 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Thu, 24 May 2007 18:26:40 +0200 Subject: [Numpy-discussion] Median / mean functionality confusing? In-Reply-To: <4655B942.9030308@noaa.gov> References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> <4655B942.9030308@noaa.gov> Message-ID: <4655BCC0.6040308@gmx.net> Christopher Barker schrieb: > Matthew Brett wrote: >>> Regarding dtype, I disagree. Why do you want to force the result to be a >>> float? > > well, what's the median of (1, 2, 3, 4) ? > > I learned it is 2.5 > > (Zar, Jerrold H. 1984. Biostatistical Analysis. Prentice Hall.) Is that the seminal work on the topic ;-) > > Of course, the median of an odd number of integers would be an integer. > that's why I asked about _forcing_ to a float -sven From David.L.Goldsmith at noaa.gov Thu May 24 12:45:13 2007 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Thu, 24 May 2007 09:45:13 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X In-Reply-To: <4655B9BA.3050301@noaa.gov> References: <4654CEE6.9000703@ieee.org> <4655B9BA.3050301@noaa.gov> Message-ID: <4655C119.6020208@noaa.gov> Since I have to install a numpy on my new Mac, I'll try. Chris, you have a pre-Intel Mac to try it out on, correct? Anyone: are there issues I should be aware of besides building it Universal? ANSI v. Unicode? Should I build against older versions of Python? If so, how far back should I go? Etc. 
DG Christopher Barker wrote: > Anyone have plans to build the latest release to put up on pythonmac? > > I could do it, but I'm busy, not the most qualified, and don't want to > duplicate effort. If no one else is, though, I'll try to fit it in. > > -Chris > > > -- ERD/ORR/NOS/NOAA From Chris.Barker at noaa.gov Thu May 24 12:53:33 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 24 May 2007 09:53:33 -0700 Subject: [Numpy-discussion] Median / mean functionality confusing? In-Reply-To: <4655BCC0.6040308@gmx.net> References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> <4655B942.9030308@noaa.gov> <4655BCC0.6040308@gmx.net> Message-ID: <4655C30D.6080305@noaa.gov> Sven Schreiber wrote: >> (Zar, Jerrold H. 1984. Biostatistical Analysis. Prentice Hall.) > > Is that the seminal work on the topic ;-) Of course not, just a reference I have handy -- though I suppose there are any number of them on the web too. >> Of course, the median of an odd number of integers would be an integer. > that's why I asked about _forcing_ to a float To complete the discussion: >>> a = N.arange(4) >>> type(N.median(a)) >>> a = N.arange(4) >>> N.median(a) 1.5 >>> type(N.median(a)) >>> a = N.arange(5) >>> N.median(a) 2 >>> type(N.median(a)) So median converts to a float if it needs to, and keeps it an integer otherwise, which seems reasonable to me, though it would be nice to specify a dtype, so that you can make sure you always get a float if you want one. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu May 24 12:55:17 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 24 May 2007 09:55:17 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X In-Reply-To: <4655C119.6020208@noaa.gov> References: <4654CEE6.9000703@ieee.org> <4655B9BA.3050301@noaa.gov> <4655C119.6020208@noaa.gov> Message-ID: <4655C375.4080102@noaa.gov> David L Goldsmith wrote: > Since I have to install a numpy on my new Mac, I'll try. Chris, you > have a pre-Intel Mac to try it out on, correct? Anyone: are there > issues I should be aware of besides building it Universal? ANSI v. > Unicode? Build against the Universal python 2.4 and 2.5 distributed at pythonmac. That way you'll get something compatible with those, which is what we want. And yes, I can test on PPC. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From bmschwar at fas.harvard.edu Thu May 24 13:57:57 2007 From: bmschwar at fas.harvard.edu (Benjamin M. Schwartz) Date: Thu, 24 May 2007 13:57:57 -0400 Subject: [Numpy-discussion] Single-Precision FFT Message-ID: <4655D225.30407@fas.harvard.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I am working on two programs using NumPy for the OLPC project. In both cases the performance is limited by the FFT. The OLPC machine uses a AMD Geode CPU, which is generally slow, but especially bad at double-precision floating point. 
It would be a major improvement if we could compile NumPy to use complex64 for FFTs instead of complex128, and it might even increase the probability of NumPy being provided as a learning tool to millions of children.

I know that FFTW can be compiled to run in single precision. What would it take to make NumPy use a single-precision FFT library?

If absolutely necessary, it might be possible to ship a patched version of NumPy, but any other solution would be preferable. A compile-time configuration option in NumPy would be ideal.

--Ben Schwartz

From robert.kern at gmail.com Thu May 24 14:14:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 May 2007 13:14:41 -0500 Subject: [Numpy-discussion] Single-Precision FFT In-Reply-To: <4655D225.30407@fas.harvard.edu> References: <4655D225.30407@fas.harvard.edu> Message-ID: <4655D611.3070503@gmail.com>

Benjamin M. Schwartz wrote: > I am working on two programs using NumPy for the OLPC project. In both cases > the performance is limited by the FFT. The OLPC machine uses an AMD Geode CPU, > which is generally slow, but especially bad at double-precision floating point. > It would be a major improvement if we could compile NumPy to use complex64 for > FFTs instead of complex128, and might even increase the probability of NumPy > being provided as a learning tool to millions of children. > > I know that FFTW can be compiled to run in single precision. What would it take > to make NumPy use a single-precision FFT library? > > If absolutely necessary, it might be possible to ship a patched version of > NumPy, but any other solution would be preferable. A compile-time configuration > option in NumPy would be ideal.

I'd prefer not to put in such an option. Options are bad. It means that the same program will give different results on different machines depending on how the person compiled numpy.

Instead, you might want to consider porting the fftpack_lite module in numpy to use single precision (fairly easy) and distributing it separately. That's probably the fastest approach, but it's a bit isolating.

An approach that takes a little bit more work, but is probably better in the long run, is to help us provide *both* single-precision and double-precision FFTs. This would involve copying fftpack.c, renaming the functions and re-#defining Treal to be "float" instead of "double", then modifying the functions in fftpack_litemodule.c to use either the single-precision or the double-precision functions depending on the type of the input (or possibly a keyword argument; there is a small issue of backwards compatibility here).

Which approach would you like to take?

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From peridot.faceted at gmail.com Thu May 24 14:18:07 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 24 May 2007 14:18:07 -0400 Subject: [Numpy-discussion] Single-Precision FFT In-Reply-To: References: <4655D225.30407@fas.harvard.edu> Message-ID:

On 24/05/07, Benjamin M. Schwartz wrote: > I know that FFTW can be compiled to run in single precision.
> What would it take to make NumPy use a single-precision FFT library?
>
> If absolutely necessary, it might be possible to ship a patched version of
> NumPy, but any other solution would be preferable. A compile-time configuration
> option in NumPy would be ideal.

As a less last-resort option, you could just write a separate ctypes wrapper for some single-precision FFT library (FFTW in single mode, say), and provide it as singlefft. This is of course not as nice as Robert Kern's suggestion of making single-precision FFTs an integral part of numpy, but it would solve your problem and be fairly straightforward.

Anne

From bmschwar at fas.harvard.edu Thu May 24 14:22:00 2007 From: bmschwar at fas.harvard.edu (Benjamin M. Schwartz) Date: Thu, 24 May 2007 14:22:00 -0400 Subject: [Numpy-discussion] Single-Precision FFT In-Reply-To: References: <4655D225.30407@fas.harvard.edu> Message-ID: <4655D7C8.2020605@fas.harvard.edu>

Thank you both for your excellent suggestions. I'll re-read fftpack.c and try to puzzle out what to do next.

Thanks, Ben

From faltet at carabos.com Thu May 24 14:33:52 2007 From: faltet at carabos.com (Francesc Altet) Date: Thu, 24 May 2007 20:33:52 +0200 Subject: [Numpy-discussion] Aligning an array on Windows Message-ID: <1180031632.2585.42.camel@localhost.localdomain>

Hi,

Some time ago I made an improvement in speed to the numexpr version shipped with PyTables, so as to accelerate operations with unaligned arrays (objects that can appear quite commonly when dealing with columns of recarrays, as PyTables does).

This improvement has proven to work correctly and flawlessly on Linux machines (using GCC 4.x, on both 32-bit and 64-bit Linux boxes) over several weeks of intensive testing. Moreover, its speed-up ranges from 40% on modern processors up to 70% on older ones, so I'd like to keep it.

The surprise came today when I tried to compile the same code on a Windows box (Win XP Service Pack 2) using MSVC 7.1, through the free (as in beer) Toolkit 2003. The compilation process went fine, but the problem is that I'm getting crashes from time to time when running the numexpr test suite.

After some in-depth investigation, I'm pretty sure that the problem is in a concrete part of the code that I'd modified for this improvement. IMO, the affected code is in numexpr/interp_body.c and reads like:

case OP_COPY_II: VEC_ARG1(memcpy(dest, x1, sizeof(int));
                          dest += sizeof(int); x1 += stride1);
case OP_COPY_LL: VEC_ARG1(memcpy(dest, x1, sizeof(long long));
                          dest += sizeof(long long); x1 += stride1);
case OP_COPY_FF: VEC_ARG1(memcpy(dest, x1, sizeof(double));
                          dest += sizeof(double); x1 += stride1);
case OP_COPY_CC: VEC_ARG1(memcpy(dest, x1, sizeof(double)*2);
                          dest += sizeof(double)*2; x1 += stride1);

This might seem complicated, but it is not. Each of the OP_COPY_XX cases has to copy the source (x1) to the destination (dest) for the int, long long, double and complex data types (the code is run in a loop so as to copy all the data in the array). The code in the original numexpr reads as:

case OP_COPY_BB: VEC_ARG1(b_dest = b1);
case OP_COPY_II: VEC_ARG1(i_dest = i1);
case OP_COPY_FF: VEC_ARG1(f_dest = f1);
case OP_COPY_CC: VEC_ARG1(cr_dest = c1r; ci_dest = c1i);

i.e. the copy is done through direct assignment.
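(As an aside, for anyone wondering where such unaligned arrays come from in the first place: the typical case is a column of a packed record array. A quick illustrative session, where the dtype is just an example:

>>> import numpy
>>> r = numpy.zeros(4, dtype=[('c', '|S1'), ('x', '<f8')])
>>> r['x'].flags.aligned    # the float64 column starts at byte offset 1
False

-- note the False aligned flag.)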
This can be done because, in the original numexpr, an array is always guaranteed to reach this part of the code (the computing kernel) in aligned form. But in my code this is not guaranteed (the copy is made precisely for alignment purposes), so this is why I need to make use of memcpy/memmove calls.

The thing I don't see is why my version of the code can create problems on Windows platforms yet work perfectly on Linux ones. I've tried using memmove instead of memcpy, but the problem persists. I've had a look at how numpy makes an 'aligned' copy of an unaligned array, and it seems to me that it uses memcpy/memmove (I'm not sure when you should use one or the other) just as I do above. It might be possible that the problem is somewhere else, but my tests keep pointing at something being wrong in my copy code above (though again, I can't see where). Of course, we could get rid of this optimization, but it would be a bit depressing to have to renounce it just because it doesn't work on Windows :(

Thanks in advance for any hint you may provide!

-- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth

From David.L.Goldsmith at noaa.gov Thu May 24 16:20:50 2007 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Thu, 24 May 2007 13:20:50 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X In-Reply-To: <4655C119.6020208@noaa.gov> References: <4654CEE6.9000703@ieee.org> <4655B9BA.3050301@noaa.gov> <4655C119.6020208@noaa.gov> Message-ID: <4655F3A2.5080004@noaa.gov>

OK, I'm having difficulties, so if anyone can beat me to it, that'd be great.

DG

David L Goldsmith wrote: > Since I have to install a numpy on my new Mac, I'll try. Chris, you > have a pre-Intel Mac to try it out on, correct? Anyone: are there > issues I should be aware of besides building it Universal? ANSI v. > Unicode? Should I build against older versions of Python? If so, how > far back should I go? Etc.
>> >> DG >> >> Christopher Barker wrote: >> >> >>> Anyone have plans to build the latest release to put up on pythonmac? >>> >>> I could do it, but I'm busy, not the most qualified, and don't want to >>> duplicate effort. If no one else is, though, I'll try to fit it in. >>> >>> -Chris >>> >>> >>> >>> >>> >> >> > > > -- ERD/ORR/NOS/NOAA From ryanlists at gmail.com Thu May 24 17:57:52 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 24 May 2007 16:57:52 -0500 Subject: [Numpy-discussion] Vista installer? Message-ID: I am trying to use Numpy/Scipy for a class I am teaching this summer. I have one student running Vista. Is there an installer that works for Vista? Running the exe file from webpage gives errors about not being able to create various folders and files. I think this is from Vista being very restrictive about which files and folders are writable. Is anyone out there running Numpy/Scipy in Vista? If so, how did you get it to work? Thanks, Ryan From jl at dmi.dk Fri May 25 04:37:44 2007 From: jl at dmi.dk (Jesper Larsen) Date: Fri, 25 May 2007 10:37:44 +0200 Subject: [Numpy-discussion] corrcoef of masked array Message-ID: <200705251037.45071.jl@dmi.dk> Hi numpy users, I have a masked array of dimension (nvariables, nobservations) that contain missing values at arbitrary points. Is it safe to rely on numpy.corrcoeff to calculate the correlation coefficients of a masked array (it seems to give reasonable results)? Cheers, Jesper From chanley at stsci.edu Fri May 25 08:27:47 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 25 May 2007 08:27:47 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine Message-ID: <4656D643.3090000@stsci.edu> Good Morning, When attempting to do my daily numpy build from svn I now receive the following error. I am a Redhat Enterprise 3 Machine running Python 2.5.1. libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /data/sparty1/dev/numpy/numpy/distutils/system_info.py:1221: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/stsci/pyssgdev/Python-2.5.1/lib libraries lapack not found in /usr/local/lib FOUND: libraries = ['lapack'] library_dirs = ['/usr/lib'] language = f77 FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources creating build creating build/src.linux-i686-2.5 creating build/src.linux-i686-2.5/numpy creating build/src.linux-i686-2.5/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.linux-i686-2.5/numpy/core Generating build/src.linux-i686-2.5/numpy/core/config.h customize GnuFCompiler customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 customize AbsoftFCompiler Could not locate executable f90 customize NAGFCompiler Could not locate executable f95 customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 error: don't know how to compile Fortran code on platform 'posix' This problem is new this morning. Chris -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From cookedm at physics.mcmaster.ca Fri May 25 09:44:25 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 May 2007 09:44:25 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <4656D643.3090000@stsci.edu> References: <4656D643.3090000@stsci.edu> Message-ID: <20070525134425.GA23672@arbutus.physics.mcmaster.ca> On Fri, May 25, 2007 at 08:27:47AM -0400, Christopher Hanley wrote: > Good Morning, > > When attempting to do my daily numpy build from svn I now receive the > following error. I am a Redhat Enterprise 3 Machine running Python 2.5.1. > > libraries lapack_atlas not found in /usr/local/lib > libraries f77blas,cblas,atlas not found in /usr/lib > libraries lapack_atlas not found in /usr/lib > numpy.distutils.system_info.atlas_info > NOT AVAILABLE > > /data/sparty1/dev/numpy/numpy/distutils/system_info.py:1221: UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. 
> warnings.warn(AtlasNotFoundError.__doc__) > lapack_info: > libraries lapack not found in /usr/stsci/pyssgdev/Python-2.5.1/lib > libraries lapack not found in /usr/local/lib > FOUND: > libraries = ['lapack'] > library_dirs = ['/usr/lib'] > language = f77 > > FOUND: > libraries = ['lapack', 'blas'] > library_dirs = ['/usr/lib'] > define_macros = [('NO_ATLAS_INFO', 1)] > language = f77 > > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > building py_modules sources > creating build > creating build/src.linux-i686-2.5 > creating build/src.linux-i686-2.5/numpy > creating build/src.linux-i686-2.5/numpy/distutils > building extension "numpy.core.multiarray" sources > creating build/src.linux-i686-2.5/numpy/core > Generating build/src.linux-i686-2.5/numpy/core/config.h > customize GnuFCompiler > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize LaheyFCompiler > Could not locate executable lf95 > customize PGroupFCompiler > Could not locate executable pgf90 > customize AbsoftFCompiler > Could not locate executable f90 > customize NAGFCompiler > Could not locate executable f95 > customize VastFCompiler > customize GnuFCompiler > customize CompaqFCompiler > Could not locate executable fort > customize IntelItaniumFCompiler > Could not locate executable ifort > Could not locate executable efort > Could not locate executable efc > customize IntelEM64TFCompiler > Could not locate executable ifort > Could not locate executable efort > Could not locate executable efc > customize Gnu95FCompiler > Could not locate executable gfortran > Could not locate executable f95 > customize G95FCompiler > Could not locate executable g95 > error: don't know how to compile Fortran code on platform 'posix' > > This problem is new this morning. Hmm, my fault. I'll have a look. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri May 25 09:52:36 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 May 2007 09:52:36 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <4656D643.3090000@stsci.edu> References: <4656D643.3090000@stsci.edu> Message-ID: <20070525135236.GA24200@arbutus.physics.mcmaster.ca> On Fri, May 25, 2007 at 08:27:47AM -0400, Christopher Hanley wrote: > Good Morning, > > When attempting to do my daily numpy build from svn I now receive the > following error. I am a Redhat Enterprise 3 Machine running Python 2.5.1. > > libraries lapack_atlas not found in /usr/local/lib > libraries f77blas,cblas,atlas not found in /usr/lib > libraries lapack_atlas not found in /usr/lib > numpy.distutils.system_info.atlas_info > NOT AVAILABLE > > /data/sparty1/dev/numpy/numpy/distutils/system_info.py:1221: UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. 
> warnings.warn(AtlasNotFoundError.__doc__) > lapack_info: > libraries lapack not found in /usr/stsci/pyssgdev/Python-2.5.1/lib > libraries lapack not found in /usr/local/lib > FOUND: > libraries = ['lapack'] > library_dirs = ['/usr/lib'] > language = f77 > > FOUND: > libraries = ['lapack', 'blas'] > library_dirs = ['/usr/lib'] > define_macros = [('NO_ATLAS_INFO', 1)] > language = f77 > > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > building py_modules sources > creating build > creating build/src.linux-i686-2.5 > creating build/src.linux-i686-2.5/numpy > creating build/src.linux-i686-2.5/numpy/distutils > building extension "numpy.core.multiarray" sources > creating build/src.linux-i686-2.5/numpy/core > Generating build/src.linux-i686-2.5/numpy/core/config.h > customize GnuFCompiler > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize LaheyFCompiler > Could not locate executable lf95 > customize PGroupFCompiler > Could not locate executable pgf90 > customize AbsoftFCompiler > Could not locate executable f90 > customize NAGFCompiler > Could not locate executable f95 > customize VastFCompiler > customize GnuFCompiler > customize CompaqFCompiler > Could not locate executable fort > customize IntelItaniumFCompiler > Could not locate executable ifort > Could not locate executable efort > Could not locate executable efc > customize IntelEM64TFCompiler > Could not locate executable ifort > Could not locate executable efort > Could not locate executable efc > customize Gnu95FCompiler > Could not locate executable gfortran > Could not locate executable f95 > customize G95FCompiler > Could not locate executable g95 > error: don't know how to compile Fortran code on platform 'posix' > > This problem is new this morning. Could you send me the results of running with the -v flag? i.e., python setup.py -v build -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From lyang at unb.ca Fri May 25 11:43:02 2007 From: lyang at unb.ca (Yang, Lu) Date: Fri, 25 May 2007 12:43:02 -0300 Subject: [Numpy-discussion] Numpy 1.0.3 install problem. Help! Message-ID: <1180107782.46570406acf1f@webmail.unb.ca> Hi, I am installing Numpy 1.0.3 on Solaris 10. I am new to Numpy install. Here are what I did and the result of 'python setup.py install'. Please help. Thanks in advance. I set: setenv CFLAGS "-xchip=opteron " setenv CXXFLAGS "-xchip=opteron " setenv CC /opt/SUNWspro/bin/cc setenv CXX /opt/SUNWspro/bin/CC setenv LDFLAGS " -L/lib " setenv LD_LIBRARY_PATH /opt/SUNWspro/lib:$LD_LIBRARY_PATH Results of 'python setup.py install': Running from numpy source directory. 
F2PY Version 2_3816 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/sfw/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/sfw/lib libraries lapack,blas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries lapack,blas not found in /usr/sfw/lib libraries lapack,blas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1314: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/sfw/lib libraries blas not found in /usr/local/lib libraries blas not found in /usr/lib NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1323: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1326: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/sfw/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/sfw/lib libraries lapack_atlas not found in /usr/sfw/lib libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries lapack,blas not found in /usr/sfw/lib libraries lapack_atlas not found in /usr/sfw/lib libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1221: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/sfw/lib libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1232: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /net/nfs1-data/main/apps/src/numpy-1.0.3/numpy/distutils/system_info.py:1235: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) NOT AVAILABLE non-existing path in '/net/nfs1-data/main/apps/src/numpy-1.0.3': 'COMPATIBILITY' non-existing path in '/net/nfs1-data/main/apps/src/numpy-1.0.3': 'scipy_compatibility' running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/src/scalartypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src/arraytypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h' to sources. adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/src/scalartypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/src/arraytypes.inc', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h' to sources. 
numpy.core - nothing done with h_files= ['build/src.solaris-2.10-i86pc-2.3/numpy/core/config.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__multiarray_api.h', 'build/src.solaris-2.10-i86pc-2.3/numpy/core/__ufunc_api.h'] building extension "numpy.core._dotblas" sources building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources ### Warning: Using unoptimized lapack ### adding 'numpy/linalg/lapack_litemodule.c' to sources. adding 'numpy/linalg/zlapack_lite.c' to sources. adding 'numpy/linalg/dlapack_lite.c' to sources. adding 'numpy/linalg/blas_lite.c' to sources. adding 'numpy/linalg/dlamch.c' to sources. adding 'numpy/linalg/f2c_lite.c' to sources. building extension "numpy.random.mtrand" sources customize SunFCompiler customize SunFCompiler customize SunFCompiler using config C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/sfw/include/python2.3 -c' cc: _configtest.c "_configtest.c", line 7: #error: No _WIN32 cc: acomp failed for _configtest.c "_configtest.c", line 7: #error: No _WIN32 cc: acomp failed for _configtest.c failure. removing: _configtest.c _configtest.o building data_files sources running build_py copying build/src.solaris-2.10-i86pc-2.3/numpy/__config__.py -> build/lib.solaris-2.10-i86pc-2.3/numpy copying build/src.solaris-2.10-i86pc-2.3/numpy/distutils/__config__.py -> build/lib.solaris-2.10-i86pc-2.3/numpy/distutils running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'numpy.core.multiarray' extension compiling C sources C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron compile options: '-Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/sfw/include/python2.3 -c' /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found error: Command "/sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so" failed with exit status 1 From chanley at stsci.edu Fri May 25 12:52:18 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 25 May 2007 12:52:18 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <20070525135236.GA24200@arbutus.physics.mcmaster.ca> References: <4656D643.3090000@stsci.edu> <20070525135236.GA24200@arbutus.physics.mcmaster.ca> Message-ID: <46571442.9020000@stsci.edu> Sorry I didn't respond sooner. It seems to have taken almost 3 hours for me to receive this message. In any case the problems seems to have been resolved. I am able to build and install numpy version 1.0.4.dev3828 on my RHE3 machine running Python 2.5.1. Thank you for the quick fix. Chris p.s. All unittests pass. > Could you send me the results of running with the -v flag? 
> i.e., python setup.py -v build > > > From robert.kern at gmail.com Fri May 25 13:18:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 12:18:42 -0500 Subject: [Numpy-discussion] corrcoef of masked array In-Reply-To: <200705251037.45071.jl@dmi.dk> References: <200705251037.45071.jl@dmi.dk> Message-ID: <46571A72.6000903@gmail.com> Jesper Larsen wrote: > Hi numpy users, > > I have a masked array of dimension (nvariables, nobservations) that contain > missing values at arbitrary points. Is it safe to rely on numpy.corrcoeff to > calculate the correlation coefficients of a masked array (it seems to give > reasonable results)? No, it isn't. There are several different options for estimating correlations in the face of missing data, none of which are clearly superior to the others. None of them are trivial. None of them are implemented in numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri May 25 13:23:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 12:23:31 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join In-Reply-To: <2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <46571B93.9040707@gmail.com> Gong, Shawn (Contractor) wrote: > Hi Robert and list, > > My colleague said that it certainly looks like a missing thread library. > It looks like the problem is that lapack_lite was compiled > multi-threaded and can't find the thread library. Okay, it looks like ATLAS's multithreaded libraries are being picked up before the single-threaded libraries. Try the following site.cfg: [atlas] library_dirs = /usr/local/lib/atlas libraries = lapack, f77blas, cblas, atlas If that doesn't work, copy the single-threaded libraries named above to somewhere else (like /home/sgong/dev/dist/lib/), and use the following site.cfg: [atlas] library_dirs = /home/sgong/dev/dist/lib libraries = lapack, f77blas, cblas, atlas You may have to run ranlib on the libraries after you copy them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Fri May 25 13:25:15 2007 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 25 May 2007 19:25:15 +0200 Subject: [Numpy-discussion] build problem on RHE3 machine References: <4656D643.3090000@stsci.edu><20070525135236.GA24200@arbutus.physics.mcmaster.ca> <46571442.9020000@stsci.edu> Message-ID: <004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za> I'm still having problems on Windows with r3828. 
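Back on the corrcoef question for a moment: one of the options alluded to above is "pairwise deletion" -- compute each coefficient using only the observations where both variables are unmasked. A rough sketch (pairwise_corrcoef is a made-up name, not a numpy function, and this implies nothing about which estimator is statistically preferable):

import numpy as N

def pairwise_corrcoef(x, y):
    # Use only the positions where neither masked array is masked.
    good = ~(N.ma.getmaskarray(x) | N.ma.getmaskarray(y))
    xv = N.ma.filled(x)[good]
    yv = N.ma.filled(y)[good]
    return N.corrcoef(xv, yv)[0, 1]

Whether pairwise deletion is appropriate depends on why the data are missing, which is exactly the caveat given above.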
Build command: python setup.py -v config --compiler=msvc build_clib --compiler=msvc build_ext --compiler=msvc bdist_wininst Output: F2PY Version 2_3828 blas_opt_info: blas_mkl_info: ( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) ( include_dirs = C:\Program Files\Intel\MKL\9.0\include:C:\Python24\include ) (paths: ) (paths: ) (paths: ) (paths: ) (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib) (paths: ) (paths: ) (paths: ) (paths: ) (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib) ( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) FOUND: libraries = ['mkl_c', 'libguide40'] library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include'] ( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs ) FOUND: libraries = ['mkl_c', 'libguide40'] library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include'] lapack_opt_info: lapack_mkl_info: mkl_info: ( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) ( include_dirs = C:\Program Files\Intel\MKL\9.0\include:C:\Python24\include ) (paths: ) (paths: ) (paths: ) (paths: ) (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib) (paths: ) (paths: ) (paths: ) (paths: ) (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib) ( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) FOUND: libraries = ['mkl_c', 'libguide40'] library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include'] ( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) FOUND: libraries = ['mkl_lapack', 'mkl_c', 'libguide40'] library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include'] ( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs ) FOUND: libraries = ['mkl_lapack', 'mkl_c', 'libguide40'] library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include'] running config running build_clib running build_ext running build_src building py_modules sources building extension "numpy.core.multiarray" sources Generating build\src.win32-2.4\numpy\core\config.h No module named msvccompiler in numpy.distutils; trying from distutils new_compiler returns distutils.msvccompiler.MSVCCompiler 0 1 customize GnuFCompiler find_executable('g77') Could not locate executable g77 find_executable('f77') Could not locate executable f77 _find_existing_fcompiler: compiler_type='gnu' not found customize IntelVisualFCompiler find_executable('ifl') Could not locate executable ifl _find_existing_fcompiler: compiler_type='intelv' not found customize AbsoftFCompiler find_executable('f90') Could not locate executable f90 find_executable('f77') Could not locate executable f77 _find_existing_fcompiler: compiler_type='absoft' not found customize CompaqVisualFCompiler find_executable('DF') Could not locate executable DF 
_find_existing_fcompiler: compiler_type='compaqv' not found customize IntelItaniumVisualFCompiler find_executable('efl') Could not locate executable efl _find_existing_fcompiler: compiler_type='intelev' not found customize Gnu95FCompiler find_executable('gfortran') Could not locate executable gfortran find_executable('f95') Could not locate executable f95 _find_existing_fcompiler: compiler_type='gnu95' not found customize G95FCompiler find_executable('g95') Could not locate executable g95 _find_existing_fcompiler: compiler_type='g95' not found removed c:\docume~1\albert\locals~1\temp\tmp1gax32__dummy.f removed c:\docume~1\albert\locals~1\temp\tmp_lgu9f__dummy.f removed c:\docume~1\albert\locals~1\temp\tmp4vpnwa__dummy.f removed c:\docume~1\albert\locals~1\temp\tmp8xx1ll__dummy.f removed c:\docume~1\albert\locals~1\temp\tmp4veorf__dummy.f removed c:\docume~1\albert\locals~1\temp\tmpwjdbiy__dummy.f Running from numpy source directory. error: don't know how to compile Fortran code on platform 'nt' ----- Original Message ----- From: "Christopher Hanley" To: "Discussion of Numerical Python" Sent: Friday, May 25, 2007 6:52 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine > Sorry I didn't respond sooner. It seems to have taken almost 3 hours > for me to receive this message. > > In any case the problems seems to have been resolved. I am able to > build and install numpy version 1.0.4.dev3828 on my RHE3 machine running > Python 2.5.1. > > Thank you for the quick fix. > > Chris > > p.s. All unittests pass. > >> Could you send me the results of running with the -v flag? >> i.e., python setup.py -v build From cookedm at physics.mcmaster.ca Fri May 25 13:37:16 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 May 2007 13:37:16 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za> References: <46571442.9020000@stsci.edu> <004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za> Message-ID: <20070525173716.GA25025@arbutus.physics.mcmaster.ca> On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: > I'm still having problems on Windows with r3828. Build command: > > python setup.py -v config --compiler=msvc build_clib --compiler=msvc > build_ext --compiler=msvc bdist_wininst Can you send me the output of python setup.py -v config_fc --help-fcompiler And what fortran compiler are you trying to use? 
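Incidentally, the compiler search that config_fc performs can also be poked at directly from a Python prompt -- a sketch, using the same machinery that --help-fcompiler invokes:

>>> from numpy.distutils import fcompiler
>>> fcompiler.show_fcompilers()   # lists the Fortran compilers numpy.distutils knows about, and whether each was found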
> > Output: > > F2PY Version 2_3828 > blas_opt_info: > blas_mkl_info: > ( library_dirs = C:\Program > Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) > ( include_dirs = C:\Program > Files\Intel\MKL\9.0\include:C:\Python24\include ) > (paths: ) > (paths: ) > (paths: ) > (paths: ) > (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib) > (paths: ) > (paths: ) > (paths: ) > (paths: ) > (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib) > ( library_dirs = C:\Program > Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) > FOUND: > libraries = ['mkl_c', 'libguide40'] > library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', > 'C:\\Python24\\include'] > > ( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs ) > FOUND: > libraries = ['mkl_c', 'libguide40'] > library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', > 'C:\\Python24\\include'] > > lapack_opt_info: > lapack_mkl_info: > mkl_info: > ( library_dirs = C:\Program > Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) > ( include_dirs = C:\Program > Files\Intel\MKL\9.0\include:C:\Python24\include ) > (paths: ) > (paths: ) > (paths: ) > (paths: ) > (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib) > (paths: ) > (paths: ) > (paths: ) > (paths: ) > (paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib) > ( library_dirs = C:\Program > Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) > FOUND: > libraries = ['mkl_c', 'libguide40'] > library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', > 'C:\\Python24\\include'] > > ( library_dirs = C:\Program > Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs ) > FOUND: > libraries = ['mkl_lapack', 'mkl_c', 'libguide40'] > library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', > 'C:\\Python24\\include'] > > ( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs ) > FOUND: > libraries = ['mkl_lapack', 'mkl_c', 'libguide40'] > library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', > 'C:\\Python24\\include'] > > running config > running build_clib > running build_ext > running build_src > building py_modules sources > building extension "numpy.core.multiarray" sources > Generating build\src.win32-2.4\numpy\core\config.h > No module named msvccompiler in numpy.distutils; trying from distutils > new_compiler returns distutils.msvccompiler.MSVCCompiler > 0 > 1 > customize GnuFCompiler > find_executable('g77') > Could not locate executable g77 > find_executable('f77') > Could not locate executable f77 > _find_existing_fcompiler: compiler_type='gnu' not found > customize IntelVisualFCompiler > find_executable('ifl') > Could not locate executable ifl > _find_existing_fcompiler: compiler_type='intelv' not found > customize AbsoftFCompiler > find_executable('f90') > Could not locate executable f90 > find_executable('f77') > Could not locate executable f77 > _find_existing_fcompiler: compiler_type='absoft' not found > customize 
CompaqVisualFCompiler > find_executable('DF') > Could not locate executable DF > _find_existing_fcompiler: compiler_type='compaqv' not found > customize IntelItaniumVisualFCompiler > find_executable('efl') > Could not locate executable efl > _find_existing_fcompiler: compiler_type='intelev' not found > customize Gnu95FCompiler > find_executable('gfortran') > Could not locate executable gfortran > find_executable('f95') > Could not locate executable f95 > _find_existing_fcompiler: compiler_type='gnu95' not found > customize G95FCompiler > find_executable('g95') > Could not locate executable g95 > _find_existing_fcompiler: compiler_type='g95' not found > removed c:\docume~1\albert\locals~1\temp\tmp1gax32__dummy.f > removed c:\docume~1\albert\locals~1\temp\tmp_lgu9f__dummy.f > removed c:\docume~1\albert\locals~1\temp\tmp4vpnwa__dummy.f > removed c:\docume~1\albert\locals~1\temp\tmp8xx1ll__dummy.f > removed c:\docume~1\albert\locals~1\temp\tmp4veorf__dummy.f > removed c:\docume~1\albert\locals~1\temp\tmpwjdbiy__dummy.f > Running from numpy source directory. > error: don't know how to compile Fortran code on platform 'nt' > > ----- Original Message ----- > From: "Christopher Hanley" > To: "Discussion of Numerical Python" > Sent: Friday, May 25, 2007 6:52 PM > Subject: Re: [Numpy-discussion] build problem on RHE3 machine > > > > Sorry I didn't respond sooner. It seems to have taken almost 3 hours > > for me to receive this message. > > > > In any case the problems seems to have been resolved. I am able to > > build and install numpy version 1.0.4.dev3828 on my RHE3 machine running > > Python 2.5.1. > > > > Thank you for the quick fix. > > > > Chris > > > > p.s. All unittests pass. > > > >> Could you send me the results of running with the -v flag? > >> i.e., python setup.py -v build > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Fri May 25 13:45:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 12:45:32 -0500 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <20070525173716.GA25025@arbutus.physics.mcmaster.ca> References: <46571442.9020000@stsci.edu> <004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za> <20070525173716.GA25025@arbutus.physics.mcmaster.ca> Message-ID: <465720BC.8090400@gmail.com> David M. Cooke wrote: > On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: >> I'm still having problems on Windows with r3828. Build command: >> >> python setup.py -v config --compiler=msvc build_clib --compiler=msvc >> build_ext --compiler=msvc bdist_wininst > > Can you send me the output of > > python setup.py -v config_fc --help-fcompiler > > And what fortran compiler are you trying to use? If he's trying to build numpy, he shouldn't be using *any* Fortran compiler. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Fri May 25 13:50:20 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Fri, 25 May 2007 13:50:20 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <465720BC.8090400@gmail.com> References: <46571442.9020000@stsci.edu> <004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za> <20070525173716.GA25025@arbutus.physics.mcmaster.ca> <465720BC.8090400@gmail.com> Message-ID: <20070525175020.GA25108@arbutus.physics.mcmaster.ca>

On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote: > David M. Cooke wrote: > > On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: > >> I'm still having problems on Windows with r3828. Build command: > >> > >> python setup.py -v config --compiler=msvc build_clib --compiler=msvc > >> build_ext --compiler=msvc bdist_wininst > > > > Can you send me the output of > > > > python setup.py -v config_fc --help-fcompiler > > > > And what fortran compiler are you trying to use? > > If he's trying to build numpy, he shouldn't be using *any* Fortran compiler.

Ah, true. Still, config_fc will say it can't find one (and that should be fine). I think the bug has to do with how it searches for a compiler.

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From faltet at carabos.com Fri May 25 14:39:13 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 25 May 2007 20:39:13 +0200 Subject: [Numpy-discussion] Aligning an array on Windows In-Reply-To: <1180031632.2585.42.camel@localhost.localdomain> References: <1180031632.2585.42.camel@localhost.localdomain> Message-ID: <200705252039.14426.faltet@carabos.com>

On Thursday 24 May 2007 20:33, Francesc Altet wrote:
> Hi,
>
> Some time ago I made an improvement in speed to the numexpr version
> shipped with PyTables, so as to accelerate operations with unaligned
> arrays (objects that can appear quite commonly when dealing with
> columns of recarrays, as PyTables does).
>
> This improvement has proven to work correctly and flawlessly on Linux
> machines (using GCC 4.x, on both 32-bit and 64-bit Linux boxes) over
> several weeks of intensive testing. Moreover, its speed-up ranges from
> 40% on modern processors up to 70% on older ones, so I'd like to keep it.
>
> The surprise came today when I tried to compile the same code on a
> Windows box (Win XP Service Pack 2) using MSVC 7.1, through the free (as
> in beer) Toolkit 2003. The compilation process went fine, but the
> problem is that I'm getting crashes from time to time when running the
> numexpr test suite.
>
> After some in-depth investigation, I'm pretty sure that the problem is
> in a concrete part of the code that I'd modified for this improvement.
> IMO, the affected code is in numexpr/interp_body.c and reads like:
>
> case OP_COPY_II: VEC_ARG1(memcpy(dest, x1, sizeof(int));
>                           dest += sizeof(int); x1 += stride1);
> case OP_COPY_LL: VEC_ARG1(memcpy(dest, x1, sizeof(long long));
>                           dest += sizeof(long long); x1 += stride1);
> case OP_COPY_FF: VEC_ARG1(memcpy(dest, x1, sizeof(double));
>                           dest += sizeof(double); x1 += stride1);
> case OP_COPY_CC: VEC_ARG1(memcpy(dest, x1, sizeof(double)*2);
>                           dest += sizeof(double)*2; x1 += stride1);

[snip]

Just for the record: I've found the culprit. The problem here was the use of the stride1 variable that was declared just above the main switch for opcodes as:

intp stride1 = params.memsteps[arg1];

Unfortunately, this assignment gave problems because arg1 can take values out of the range of the memsteps array.
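In Python terms, the mistake is indexing before validating -- with the difference that Python raises an IndexError where C just reads whatever memory happens to follow the array (a loose analogy only, not the actual C code):

memsteps = [8, 8, 16]
arg1 = 5                    # an opcode argument that is not a real register
stride1 = memsteps[arg1]    # Python: IndexError; C: silently reads garbage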
The solution was to use another variable, initialized as:

intp sb1 = params.memsteps[arg1];

but in the VEC_ARG* macros, just after the BOUNDS_CHECK(arg1) call, so that it checks that arg1 doesn't get out of range. All in all, a very subtle bug that would have been evident to the Numexpr main authors, but not to me. Anyway, you can find the details of the fix in: http://www.pytables.org/trac/changeset/2939

I don't know exactly why this wasn't giving problems on Linux boxes. Fortunately, the Windows platform is much more finicky in terms of memory problems and brought this bug to my attention. Oh, thank god for letting Windows be! ;)

Cheers,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"

From Shawn.Gong at drdc-rddc.gc.ca Fri May 25 14:55:56 2007 From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor)) Date: Fri, 25 May 2007 14:55:56 -0400 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join In-Reply-To: <46571B93.9040707@gmail.com> References: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46571B93.9040707@gmail.com> Message-ID: <2E58C246F17003499C141D334794D049027682D9@ottawaex02.Ottawa.drdc-rddc.gc.ca>

Hi Robert,
I have tried both suggestions and got the same error message on "import numpy".

Try #1) changed site.cfg to have only these 3 lines:
[atlas]
library_dirs = /usr/local/lib/atlas
libraries = lapack, f77blas, cblas, atlas

result: did not work

Try #2) changed site.cfg to have only these 3 lines:
[atlas]
library_dirs = /home/sgong/dev/dist/lib/atlas
libraries = lapack, f77blas, cblas, atlas

copied the above 4 files onto /home/sgong/dev/dist/lib/atlas (/usr/local/lib/atlas/ has all 7 files: lapack, f77blas, cblas, atlas, ptcblas, ptf77blas, statlas) and ran ranlib (in /home/sgong/dev/dist/lib/atlas)

result: did not work

the install screen capture "out" from Try #2 is attached for your reference. Note that both library_dirs = ['/usr/local/lib/atlas', '/home/sgong/dev/dist/lib/atlas'] are found. But '/usr/local/lib/atlas' is ahead of '/home/sgong/dev/dist/lib/atlas'. Is it a problem?

Thanks,
Shawn

---------------------------
>python
Python 2.3.6 (#9, May 18 2007, 10:22:59)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/__init__.py", line 40, in ?
    import linalg
  File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/__init__.py", line 4, in ?
    from linalg import *
  File "/home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/linalg.py", line 25, in ?
    from numpy.linalg import lapack_lite
ImportError: /home/sgong/dev/dist/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: pthread_join

-----Original Message-----
From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Robert Kern
Sent: Friday, May 25, 2007 1:24 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join

Gong, Shawn (Contractor) wrote: > Hi Robert and list, > > My colleague said that it certainly looks like a missing thread library.
> It looks like the problem is that lapack_lite was compiled > multi-threaded and can't find the thread library. Okay, it looks like ATLAS's multithreaded libraries are being picked up before the single-threaded libraries. Try the following site.cfg: [atlas] library_dirs = /usr/local/lib/atlas libraries = lapack, f77blas, cblas, atlas If that doesn't work, copy the single-threaded libraries named above to somewhere else (like /home/sgong/dev/dist/lib/), and use the following site.cfg: [atlas] library_dirs = /home/sgong/dev/dist/lib libraries = lapack, f77blas, cblas, atlas You may have to run ranlib on the libraries after you copy them. -- Robert Kern -------------- next part -------------- A non-text attachment was scrubbed... Name: out.zip Type: application/x-zip-compressed Size: 9292 bytes Desc: out.zip URL: From robert.kern at gmail.com Fri May 25 15:08:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 14:08:25 -0500 Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join In-Reply-To: <2E58C246F17003499C141D334794D049027682D9@ottawaex02.Ottawa.drdc-rddc.gc.ca> References: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46571B93.9040707@gmail.com> <2E58C246F17003499C141D334794D049027682D9@ottawaex02.Ottawa.drdc-rddc.gc.ca> Message-ID: <46573429.9010801@gmail.com> Gong, Shawn (Contractor) wrote: > Hi Robert, > I have tried both suggestions and got the same error message when > "import numpy" > > Try #1) changed site.cfg to have only these 3 lines > [atlas] > library_dirs = /usr/local/lib/atlas > libraries = lapack, f77blas, cblas, atlas > > result: did not work > > Try #2) changed site.cfg to have only these 3 lines > [atlas] > library_dirs = /home/sgong/dev/dist/lib/atlas > libraries = lapack, f77blas, cblas, atlas > > copy the above 4 files onto /home/sgong/dev/dist/lib/atlas > /usr/local/lib/atlas/ has all 7 files: lapack, f77blas, cblas, atlas, > ptcblas, ptf77blas, statlas > ranlib (in /home/sgong/dev/dist/lib/atlas) > > result: did not work > > the install screen capture "out" from Try #2 is attached for your > reference. Note that both library_dirs = ['/usr/local/lib/atlas', > '/home/sgong/dev/dist/lib/atlas'] are found. > But '/usr/local/lib/atlas' is ahead of '/home/sgong/dev/dist/lib/atlas'. > > Is it a problem? Okay, here is the full scoop: * The multi-threaded ATLAS is always tried first. This doesn't work for you since you compiled your Python without pthreads. * The standard library directories (/usr/lib, /usr/local/lib, etc.) are tried along with fairly standard ATLAS library directories, too (/usr/lib/atlas, /usr/local/lib/atlas). * Consequently, the multi-threaded libraries in /usr/local/lib/atlas are picked up before the version that you want. Here is how to override this (I think): Set the environment variable PTATLAS=None and then run the build. E.g. $ export PTATLAS=None $ python setup.py build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From Shawn.Gong at drdc-rddc.gc.ca Fri May 25 15:56:22 2007
From: Shawn.Gong at drdc-rddc.gc.ca (Gong, Shawn (Contractor))
Date: Fri, 25 May 2007 15:56:22 -0400
Subject: [Numpy-discussion] Linux numpy 1.0.1 install failed - undefined symbol: pthread_join
In-Reply-To: <46573429.9010801@gmail.com>
References: <2E58C246F17003499C141D334794D049027682C6@ottawaex02.Ottawa.drdc-rddc.gc.ca> <2E58C246F17003499C141D334794D049027682C7@ottawaex02.Ottawa.drdc-rddc.gc.ca><2E58C246F17003499C141D334794D049027682CD@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46571B93.9040707@gmail.com><2E58C246F17003499C141D334794D049027682D9@ottawaex02.Ottawa.drdc-rddc.gc.ca> <46573429.9010801@gmail.com>
Message-ID: <2E58C246F17003499C141D334794D049027682DA@ottawaex02.Ottawa.drdc-rddc.gc.ca>

Hi Robert,
The override does the trick. It worked.
Thanks, and have a nice weekend.
Shawn

Okay, here is the full scoop:

* The multi-threaded ATLAS is always tried first. This doesn't work for you since you compiled your Python without pthreads.
* The standard library directories (/usr/lib, /usr/local/lib, etc.) are tried along with fairly standard ATLAS library directories, too (/usr/lib/atlas, /usr/local/lib/atlas).
* Consequently, the multi-threaded libraries in /usr/local/lib/atlas are picked up before the version that you want.

Here is how to override this (I think): Set the environment variable PTATLAS=None and then run the build. E.g.

$ export PTATLAS=None
$ python setup.py build

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

From tim.hochberg at ieee.org Fri May 25 17:19:10 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Fri, 25 May 2007 14:19:10 -0700
Subject: [Numpy-discussion] Aligning an array on Windows
In-Reply-To: <200705252039.14426.faltet@carabos.com>
References: <1180031632.2585.42.camel@localhost.localdomain> <200705252039.14426.faltet@carabos.com>
Message-ID:

On 5/25/07, Francesc Altet wrote:
>
> A Dijous 24 Maig 2007 20:33, Francesc Altet escrigué:
[SNIP]

> Just for the record: I've found the culprit. The problem here was the use
> of the stride1 variable that was declared just above the main switch for
> opcodes as:
>
> intp stride1 = params.memsteps[arg1];
>
> Unfortunately, this assignment gave problems because arg1 can take values out
> of the range of the memsteps array. The solution was to use another variable,
> that was initialized as:
>
> intp sb1 = params.memsteps[arg1];
>
> but in the VEC_ARG* macros, just after the BOUNDS_CHECK(arg1) call, so that it
> checks that arg1 doesn't get out of range. All in all, a very subtle bug
> that would have been evident to the Numexpr main authors, but not to me. Anyway,
> you can find the details of the fix in:
> http://www.pytables.org/trac/changeset/2939

Don't feel bad; I had a very similar problem early on, when we were first adding multiple types, and it mystified me for considerably longer than this seems to have stumped you.

> I don't know exactly why this wasn't giving problems on Linux boxes.
> Fortunately, the Windows platform is much more finicky in terms of memory
> problems and brought this bug to my attention. Oh, thank god for letting
> Windows be!
> ;)

;-)

--
//=][=\\

tim.hochberg at ieee.org

From mpmusu at cc.usu.edu Fri May 25 18:53:12 2007
From: mpmusu at cc.usu.edu (Mark.Miller)
Date: Fri, 25 May 2007 16:53:12 -0600
Subject: [Numpy-discussion] shelve and object arrays
Message-ID: <465768D8.8040100@cc.usu.edu>

Greetings:

I recently experimented with changing from use of string arrays in some of my code to object arrays. This change speeds up my simulations and produces identical numerical results relative to use of string arrays. However, it now appears that my code is having issues at the ends of my simulations, when all data (and simulation parameters) are archived using shelve.

Here's the message:

/var/lib/torque/mom_priv/jobs/539356.uint.SC: line 9: 18640 Segmentation fault

I have been unable to create a small stand-alone example that reproduces the problem. Any thoughts on this?

-Mark

From stefan at sun.ac.za Sat May 26 03:07:21 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 26 May 2007 09:07:21 +0200
Subject: [Numpy-discussion] shelve and object arrays
In-Reply-To: <465768D8.8040100@cc.usu.edu>
References: <465768D8.8040100@cc.usu.edu>
Message-ID: <20070526070721.GX6192@mentat.za.net>

Hi Mark

On Fri, May 25, 2007 at 04:53:12PM -0600, Mark.Miller wrote:
> I recently experimented with changing from use of string arrays in some
> of my code to object arrays. This change speeds up my simulations and
> produces identical numerical results relative to use of string arrays.
> However, it now appears that my code is having issues at the ends of my
> simulations when all data (and simulation parameters) are archived using
> shelve.
>
> Here's the message:
>
> /var/lib/torque/mom_priv/jobs/539356.uint.SC: line 9: 18640 Segmentation
> fault

If you are running on a platform where valgrind is supported, you can attempt to use that to localise the problem. Instructions are here:

http://www.scipy.org/Cookbook/C_Extensions#head-9d3c4f5894aa215af47ea7784a33ab0252d230d8

Cheers
Stéfan

From stefan at sun.ac.za Sat May 26 03:16:32 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 26 May 2007 09:16:32 +0200
Subject: [Numpy-discussion] corrcoef of masked array
In-Reply-To: <200705251037.45071.jl@dmi.dk>
References: <200705251037.45071.jl@dmi.dk>
Message-ID: <20070526071632.GY6192@mentat.za.net>

Hi Jesper

On Fri, May 25, 2007 at 10:37:44AM +0200, Jesper Larsen wrote:
> I have a masked array of dimension (nvariables, nobservations) that contains
> missing values at arbitrary points. Is it safe to rely on numpy.corrcoef to
> calculate the correlation coefficients of a masked array (it seems to give
> reasonable results)?

I don't think it is. If my thinking is correct, you would expect the following to have different results:

In [38]: x = N.random.random(100)

In [39]: y = N.random.random(100)

In [40]: N.corrcoef(x,y)
Out[40]:
array([[ 1.        , -0.07291798],
       [-0.07291798,  1.        ]])

In [41]: x_ = N.ma.masked_array(x,mask=(N.random.random(100)>0.5).astype(bool))

In [42]: y_ = N.ma.masked_array(y,mask=(N.random.random(100)>0.5).astype(bool))

In [43]: N.corrcoef(x_,y_)
Out[43]:
array([[ 1.        , -0.07291798],
       [-0.07291798,  1.        ]])
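So the mask is silently ignored. As a workaround -- just an untested sketch -- you can restrict the calculation to the positions that are unmasked in both arrays:

In [44]: valid = ~(x_.mask | y_.mask)

In [45]: N.corrcoef(x[valid], y[valid])

That at least makes the answer depend on which values are missing.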
Regards
Stéfan

From matthew.brett at gmail.com Sat May 26 05:48:24 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 26 May 2007 10:48:24 +0100
Subject: [Numpy-discussion] Median / mean functionality confusing?
In-Reply-To: <4655C30D.6080305@noaa.gov>
References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> <4655B942.9030308@noaa.gov> <4655BCC0.6040308@gmx.net> <4655C30D.6080305@noaa.gov>
Message-ID: <1e2af89e0705260248j59fc209eqb734fcde4fe33ef@mail.gmail.com>

Can I resurrect this thread then by agreeing with Chris, and my original post, that it would be better if median had the same behavior as mean, accepting axis and dtype as inputs?

Best,

Matthew

On 5/24/07, Christopher Barker wrote:
> Sven Schreiber wrote:
> >> (Zar, Jerrold H. 1984. Biostatistical Analysis. Prentice Hall.)
> >
> > Is that the seminal work on the topic ;-)
>
> Of course not, just a reference I have handy -- though I suppose there
> are any number of them on the web too.
>
> >> Of course, the median of an odd number of integers would be an integer.
>
> > that's why I asked about _forcing_ to a float
>
> To complete the discussion:
>
> >>> a = N.arange(4)
> >>> type(N.median(a))
>
> >>> a = N.arange(4)
> >>> N.median(a)
> 1.5
> >>> type(N.median(a))
>
> >>> a = N.arange(5)
> >>> N.median(a)
> 2
> >>> type(N.median(a))
>
> So median converts to a float if it needs to, and keeps it an integer
> otherwise, which seems reasonable to me, though it would be nice to
> specify a dtype, so that you can make sure you always get a float if you
> want one.
>
> -Chris
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R (206) 526-6959 voice
> 7600 Sand Point Way NE (206) 526-6329 fax
> Seattle, WA 98115 (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From oliphant.travis at ieee.org Sat May 26 11:17:16 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 26 May 2007 09:17:16 -0600
Subject: [Numpy-discussion] Numpy 1.0.3 install problem. Help!
In-Reply-To: <1180107782.46570406acf1f@webmail.unb.ca>
References: <1180107782.46570406acf1f@webmail.unb.ca>
Message-ID: <46584F7C.9070504@ieee.org>

Yang, Lu wrote:
> Hi,
> I am installing Numpy 1.0.3 on Solaris 10. I am new to Numpy install. Here is what I
> did and the result of 'python setup.py install'. Please help. Thanks in advance.
>
[snip]
> C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP
> -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron
>
> compile options: '-Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core/src -Inumpy/core/include
> -Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include
> -I/usr/sfw/include/python2.3 -c'
> /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron
> build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o
> build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so
> sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found
> sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found
> error: Command "/sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib
> -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm
> -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so" failed with exit status 1

It looks like your compiler chain is broken.
It looks like the C compiler is first selected as /opt/SUNWspro/bin/cc but then an attempt is made to use /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc for the linking step. This step is failing because it can't find the compiler. Can you build other things on your platform? -Travis From charlesr.harris at gmail.com Sat May 26 12:11:34 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 May 2007 10:11:34 -0600 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: Message-ID: On 5/24/07, Ryan Krauss wrote: > > I am trying to use Numpy/Scipy for a class I am teaching this summer. > I have one student running Vista. Is there an installer that works > for Vista? Running the exe file from webpage gives errors about not > being able to create various folders and files. I think this is from > Vista being very restrictive about which files and folders are > writable. Is anyone out there running Numpy/Scipy in Vista? If so, > how did you get it to work? Install Ubuntu? ;) I've heard nothing but complaints and nasty words from co-workers stuck with new computers and trying to use Vista as a development platform for scientific work. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Sat May 26 12:44:32 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Sat, 26 May 2007 18:44:32 +0200 Subject: [Numpy-discussion] IEEE floating point arithmetics Message-ID: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com> Hi all. I have to work with floating point arithmetics and I found a module called "double module" (http://symptotic.com/mj/double/public/double-module.html) that does what I'd like. Basically, I would like to find the nearest smaller and bigger floating point numbers, given a "real" real number (the function doubleToLongBits can be used to do just that.). Is there something similar in numpy? In other words: is there a way, in numpy, to convert a floating point number to its binary representation and back? Thank you and regards, L. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ctrachte at gmail.com Sat May 26 13:29:15 2007 From: ctrachte at gmail.com (Carl Trachte) Date: Sat, 26 May 2007 10:29:15 -0700 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: Message-ID: <426ada670705261029q62c21566lbddf415306cad85c@mail.gmail.com> For the case in question, having the student set up his or her personal computer to work for the class (dual boot/Ubuntu) would probably be fine. Long term, though, I don't think Vista can be written off as a supported platform. If you're forced by your system admin to use it in a work environment AND you need Numpy, ignoring it isn't the answer. I'm not complaining (It would take me longer than Vista's commercial life to shoehorn a solution to this problem myself). I am however following this thread to see how it plays out. This way I can at least start planning ahead in the work environment (I DO have FreeBSD, Ubuntu, and Windows XP installed on my home computers). My 2 cents. Carl T. On 5/26/07, Charles R Harris wrote: > > > > On 5/24/07, Ryan Krauss wrote: > > > > I am trying to use Numpy/Scipy for a class I am teaching this summer. > > I have one student running Vista. Is there an installer that works > > for Vista? Running the exe file from webpage gives errors about not > > being able to create various folders and files. 
I think this is from
> > Vista being very restrictive about which files and folders are
> > writable. Is anyone out there running Numpy/Scipy in Vista? If so,
> > how did you get it to work?
>
> Install Ubuntu? ;) I've heard nothing but complaints and nasty words from
> co-workers stuck with new computers and trying to use Vista as a development
> platform for scientific work.
>
> Chuck
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From oliphant.travis at ieee.org Sat May 26 14:41:38 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 26 May 2007 12:41:38 -0600
Subject: [Numpy-discussion] IEEE floating point arithmetics
In-Reply-To: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
References: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
Message-ID: <46587F62.4060204@ieee.org>

lorenzo bolla wrote:
> Hi all.
> I have to work with floating point arithmetics and I found a module
> called "double module"
> (http://symptotic.com/mj/double/public/double-module.html) that does
> what I'd like. Basically, I would like to find the nearest smaller and
> bigger floating point numbers, given a "real" real number (the
> function |doubleToLongBits| can be used to do just that.).
> Is there something similar in numpy?
> In other words: is there a way, in numpy, to convert a floating point
> number to its binary representation and back?

Sure:

floating-point ---> binary representation

a = numpy.pi
bitstr = numpy.binary_repr(numpy.float_(a).view('u8'))
print bitstr
'100000000001001001000011111101101010100010001000010110100011000'

Then:

binary representation --> floating point

number = numpy.uint64(int(bitstr,base=2)).view('f8')

-Travis

From charlesr.harris at gmail.com Sat May 26 14:40:57 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 26 May 2007 12:40:57 -0600
Subject: [Numpy-discussion] IEEE floating point arithmetics
In-Reply-To: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
References: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
Message-ID:

On 5/26/07, lorenzo bolla wrote:
>
> Hi all.
> I have to work with floating point arithmetics and I found a module called
> "double module" (http://symptotic.com/mj/double/public/double-module.html)
> that does what I'd like. Basically, I would like to find the nearest smaller
> and bigger floating point numbers, given a "real" real number (the function
> doubleToLongBits can be used to do just that.).
> Is there something similar in numpy?
> In other words: is there a way, in numpy, to convert a floating point
> number to its binary representation and back?

I don't quite understand what problem you are trying to solve. Python doubles are, I believe, the same as the C doubles supported by the hardware, i.e., IEEE754 doubles. Correct me if I am wrong. If you just want the same bits in a long, do

In [5]: float64(3.0).view(int64)
Out[5]: 4613937818241073152

To see it in hex:

In [6]: hex(float64(3.0).view(int64))
Out[6]: '0x4008000000000000L'

Note that you need to keep endianness in mind to interpret the results.
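And if what you are actually after are the neighboring floats, the same view trick gets you there: for positive IEEE754 doubles, the ordering of the values matches the ordering of their bit patterns, so bumping the integer view by one steps to the adjacent float. A sketch (untested; negative numbers and the endpoints of the range need extra care):

In [7]: x = float64(3.0)

In [8]: (x.view(int64) + 1).view(float64)   # nearest bigger float

In [9]: (x.view(int64) - 1).view(float64)   # nearest smaller float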
Chuck

> Thank you and regards,
> L.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From charlesr.harris at gmail.com Sat May 26 14:55:54 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 26 May 2007 12:55:54 -0600
Subject: [Numpy-discussion] IEEE floating point arithmetics
In-Reply-To:
References: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
Message-ID:

On 5/26/07, Charles R Harris wrote:
>
> On 5/26/07, lorenzo bolla wrote:
> >
> > Hi all.
> > I have to work with floating point arithmetics and I found a module
> > called "double module" (http://symptotic.com/mj/double/public/double-module.html)
> > that does what I'd like. Basically, I would like to find the nearest smaller
> > and bigger floating point numbers, given a "real" real number (the function
> > doubleToLongBits can be used to do just that.).
> > Is there something similar in numpy?
> > In other words: is there a way, in numpy, to convert a floating point
> > number to its binary representation and back?
>
> I don't quite understand what problem you are trying to solve. Python
> doubles are, I believe, the same as the C doubles supported by the hardware,
> i.e., IEEE754 doubles. Correct me if I am wrong. If you just want the same
> bits in a long, do
>
> In [5]: float64(3.0).view(int64)
> Out[5]: 4613937818241073152
>
> To see it in hex:
>
> In [6]: hex(float64(3.0).view(int64))
> Out[6]: '0x4008000000000000L'
>
> Note that you need to keep endianness in mind to interpret the results.

In particular, my machine is little endian, meaning that the low order bits are stored first. It seems that hex always prints with the high order bits first, so on my machine the bits are in the reverse order of the hex display.

Chuck

From robert.kern at gmail.com Sat May 26 18:10:42 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 26 May 2007 17:10:42 -0500
Subject: [Numpy-discussion] IEEE floating point arithmetics
In-Reply-To:
References: <80c99e790705260944h4a10d3aag6143f47681c7af1a@mail.gmail.com>
Message-ID: <4658B062.2070100@gmail.com>

Charles R Harris wrote:
> In particular, my machine is little endian, meaning that the low order
> bits are stored first. It seems that hex always prints with the high
> order bits first, so on my machine the bits are in the reverse order of
> the hex display.

Of course. Your int64's are in the same byte order as your float64's. The hex representation is not supposed to be the sequence of nybbles in memory, but the integer in human base-16 notation, which is always big-endian.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From charlesr.harris at gmail.com Sun May 27 00:37:41 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 26 May 2007 22:37:41 -0600
Subject: [Numpy-discussion] chirp-z
Message-ID:

All,

I have been looking around for chirp-z code with a usable license. There is the original fortran version by Rader et al. out there, as well as a package from FreeBSD at http://ftp2.at.freebsd.org/pub/FreeBSD/distfiles/fxt-2006.12.17.tgz.
The latter is c++ and relies on function signatures to distinguish various functions with the same name, but looks pretty easy to translate. I note that we could probably use an fast_correlate also. Is there any interest in adding these to the fft pack? Or do they properly belong in scipy? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sun May 27 03:41:22 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 27 May 2007 09:41:22 +0200 Subject: [Numpy-discussion] chirp-z In-Reply-To: References: Message-ID: Hi, Isn't the chirp transform only two cross-correlations ? And for a fast one, there is a module in SciPy, and I think that kind of operation belongs more to Scipy than Numpy ;) Matthieu 2007/5/27, Charles R Harris : > > All, > > I have been looking around for chirp-z code with a useable license. There > is the original fortran version by Rader et. al. out there, as well as a > package from FreeBSD at > http://ftp2.at.freebsd.org/pub/FreeBSD/distfiles/fxt-2006.12.17.tgz. The > latter is c++ and relies on function signatures to distinguish various > functions with the same name, but looks pretty easy to translate. I note > that we could probably use an fast_correlate also. Is there any interest in > adding these to the fft pack? Or do they properly belong in scipy? > > Chuck > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sun May 27 03:41:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 27 May 2007 09:41:52 +0200 Subject: [Numpy-discussion] chirp-z In-Reply-To: References: Message-ID: Sorry, not cross-correlation, convolution, too early in the morning here... 2007/5/27, Matthieu Brucher : > > Hi, > > Isn't the chirp transform only two cross-correlations ? And for a fast > one, there is a module in SciPy, and I think that kind of operation belongs > more to Scipy than Numpy ;) > > Matthieu > > 2007/5/27, Charles R Harris : > > > > All, > > > > I have been looking around for chirp-z code with a useable license. > > There is the original fortran version by Rader et. al. out there, as well as > > a package from FreeBSD at > > http://ftp2.at.freebsd.org/pub/FreeBSD/distfiles/fxt-2006.12.17.tgz. The > > latter is c++ and relies on function signatures to distinguish various > > functions with the same name, but looks pretty easy to translate. I note > > that we could probably use an fast_correlate also. Is there any interest in > > adding these to the fft pack? Or do they properly belong in scipy? > > > > Chuck > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyang at unb.ca Sun May 27 09:21:47 2007 From: lyang at unb.ca (Yang, Lu) Date: Sun, 27 May 2007 10:21:47 -0300 Subject: [Numpy-discussion] Numpy 1.0.3 install problem. Help! In-Reply-To: <46584F7C.9070504@ieee.org> References: <1180107782.46570406acf1f@webmail.unb.ca> <46584F7C.9070504@ieee.org> Message-ID: <1180272107.465985eb5b931@webmail.unb.ca> Thanks, Travis. I don't have problem building other applications on the same platform. 
Are there any files in the extracted /numpy-1.0.3 where I can modify the path of the C compiler? I have checked all the files in it without luck. Thanks a lot.

Quoting Travis Oliphant :

> Yang, Lu wrote:
> > Hi,
> > I am installing Numpy 1.0.3 on Solaris 10. I am new to Numpy install. Here is what I
> > did and the result of 'python setup.py install'. Please help. Thanks in advance.
> >
> [snip]
> > C compiler: /opt/SUNWspro/bin/cc -i -xO4 -xspace -xstrconst -xpentium -mr -DANSICPP
> > -D__STDC_VERSION__=199409L -DNDEBUG -O -xchip=opteron
> >
> > compile options: '-Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core/src -Inumpy/core/include
> > -Ibuild/src.solaris-2.10-i86pc-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include
> > -I/usr/sfw/include/python2.3 -c'
> > /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib -xchip=opteron
> > build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm -o
> > build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so
> > sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found
> > sh: /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc: not found
> > error: Command "/sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc -G -L/lib
> > -xchip=opteron build/temp.solaris-2.10-i86pc-2.3/numpy/core/src/multiarraymodule.o -lm
> > -o build/lib.solaris-2.10-i86pc-2.3/numpy/core/multiarray.so" failed with exit status 1
>
> It looks like your compiler chain is broken. It looks like the C
> compiler is first selected as /opt/SUNWspro/bin/cc but then an attempt
> is made to use /sgnome/tools/x86-solaris/forte/SOS8/SUNWspro/bin/cc for
> the linking step. This step is failing because it can't find the
> compiler.
>
> Can you build other things on your platform?
>
> -Travis
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From svetosch at gmx.net Sun May 27 11:17:28 2007
From: svetosch at gmx.net (Sven Schreiber)
Date: Sun, 27 May 2007 17:17:28 +0200
Subject: [Numpy-discussion] Median / mean functionality confusing?
In-Reply-To: <1e2af89e0705260248j59fc209eqb734fcde4fe33ef@mail.gmail.com>
References: <1e2af89e0705240756q14c55379p1ca9ee743ec55724@mail.gmail.com> <4655AAC0.3020909@gmx.net> <1e2af89e0705240827k3b2d85c4m3dd099e64d08d9a@mail.gmail.com> <4655B942.9030308@noaa.gov> <4655BCC0.6040308@gmx.net> <4655C30D.6080305@noaa.gov> <1e2af89e0705260248j59fc209eqb734fcde4fe33ef@mail.gmail.com>
Message-ID: <4659A108.7060704@gmx.net>

Matthew Brett schrieb:
> Can I resurrect this thread then by agreeing with Chris, and my
> original post, that it would be better if median had the same behavior
> as mean, accepting axis and dtype as inputs?

Well, I'm not a developer, but I guess there is a backwards-compatibility issue here. I mean not with just _accepting_ additional arguments, but with using the same defaults as mean() etc. do, which would break existing median() uses. I don't know what the best solution is...

-sven

From svetosch at gmx.net Sun May 27 11:23:00 2007
From: svetosch at gmx.net (Sven Schreiber)
Date: Sun, 27 May 2007 17:23:00 +0200
Subject: [Numpy-discussion] complex arrays and ctypes
Message-ID: <4659A254.6020700@gmx.net>

Hi,
I am wondering whether it's possible to pass (and get back) arrays of complex floats via ctypes.

To be concrete, I have managed to access lapack's dgges (Fortran) function using ctypes (thanks to a related example on this list), and now I would like to do the same with the complex version zgges.

Thanks for your insights,
Sven

From fullung at gmail.com Sun May 27 11:55:01 2007
From: fullung at gmail.com (Albert Strasheim)
Date: Sun, 27 May 2007 17:55:01 +0200
Subject: [Numpy-discussion] complex arrays and ctypes
In-Reply-To: <4659A254.6020700@gmx.net>
References: <4659A254.6020700@gmx.net>
Message-ID: <20070527155501.GA4970@dogbert.sdsl.sun.ac.za>

Hello

On Sun, 27 May 2007, Sven Schreiber wrote:

> Hi,
> I am wondering whether it's possible to pass (and get back) arrays of
> complex floats via ctypes.
>
> To be concrete, I have managed to access lapack's dgges (Fortran)
> function using ctypes (thanks to a related example on this list), and
> now I would like to do the same with the complex version zgges.

An array just gets passed to the function as a pointer, so the answer is definitely yes. Do you need some specific code?
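In case it helps, zgges itself takes a couple of dozen arguments, so here is the same calling pattern on the much smaller BLAS routine zscal (which scales a COMPLEX*16 vector in place) -- an untested sketch; the library name is a placeholder, and the trailing underscore depends on your Fortran compiler's name mangling:

import numpy as N
from ctypes import cdll, c_int, c_void_p, byref

blas = cdll.LoadLibrary("libblas.so")   # placeholder path/name

z = N.array([1+2j, 3+4j, 5+6j], dtype=N.complex128)  # COMPLEX*16
alpha = N.array([2+0j], dtype=N.complex128)  # complex scalar, passed by reference
n = c_int(len(z))
inc = c_int(1)

# zscal(n, alpha, z, incx): z <- alpha*z; Fortran takes everything by reference
blas.zscal_(byref(n), alpha.ctypes.data_as(c_void_p),
            z.ctypes.data_as(c_void_p), byref(inc))

The same ingredients (raw pointers via .ctypes.data_as plus byref'd integers) should carry over to zgges, just with many more arguments.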
Cheers,

Albert

From oliphant.travis at ieee.org Sun May 27 14:29:30 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sun, 27 May 2007 12:29:30 -0600
Subject: [Numpy-discussion] Numpy 1.0.3 install problem. Help!
In-Reply-To: <1180272107.465985eb5b931@webmail.unb.ca>
References: <1180107782.46570406acf1f@webmail.unb.ca> <46584F7C.9070504@ieee.org> <1180272107.465985eb5b931@webmail.unb.ca>
Message-ID: <4659CE0A.1050106@ieee.org>

Yang, Lu wrote:
> Thanks, Travis. I don't have problem building other applications on the same platform.
> Are there any files in the extracted /numpy-1.0.3 where I can modify the path of the C
> compiler? I have checked all the files in it without luck.

The C compiler that is used is the same one used to build Python. It is picked up using Python distutils. So, another problem could be that the compiler used to build Python is not available.

You should look in the file

$PYTHONDIR/config/Makefile

where $PYTHONDIR is where Python is installed on your system.

There will be a line

CC =

in there. That's the compiler that is going to be used. Also, check your $PATH variable when you are building numpy, if a full path name is not given in the CC line, to see what compiler will be picked up.

These configuration issues are usually a matter of getting the right compiler and the right linker.

Good luck,

-Travis

From strawman at astraw.com Sun May 27 16:32:35 2007
From: strawman at astraw.com (Andrew Straw)
Date: Sun, 27 May 2007 13:32:35 -0700
Subject: [Numpy-discussion] Vista installer?
In-Reply-To:
References:
Message-ID: <4659EAE3.8040306@astraw.com>

Charles R Harris wrote:
>
> On 5/24/07, *Ryan Krauss* wrote:
>
> I am trying to use Numpy/Scipy for a class I am teaching this summer.
> I have one student running Vista. Is there an installer that works
> for Vista? Running the exe file from webpage gives errors about not
> being able to create various folders and files. I think this is from
> Vista being very restrictive about which files and folders are
> writable. Is anyone out there running Numpy/Scipy in Vista? If so,
> how did you get it to work?
>
> Install Ubuntu? ;) I've heard nothing but complaints and nasty words
> from co-workers stuck with new computers and trying to use Vista as a
> development platform for scientific work.

Just a follow-up, VMware is now given away and Ubuntu has always been free, and I guess most computers with Vista have native support for virtualization.
So, this way you could run Ubuntu and Vista simultaneously without a speed loss on either or partitioning the drive. Of course, if it were me I'd put Ubuntu as the host OS and probably never boot the guest OS. ;) From charlesr.harris at gmail.com Sun May 27 17:28:15 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 May 2007 15:28:15 -0600 Subject: [Numpy-discussion] chirp-z In-Reply-To: References: Message-ID: On 5/27/07, Matthieu Brucher wrote: > > Hi, > > Isn't the chirp transform only two cross-correlations ? And for a fast > one, there is a module in SciPy, and I think that kind of operation belongs > more to Scipy than Numpy ;) Umm, no, There really aren't any transparent fast fft convolutions in SciPy. The closest thing is in signaltools, fftconvolve, and if you ask it to convolve, say, sequences whose length add up to 7902, then it will do a size 7901 transform. Because 7901 is prime this takes about 300 times as long as a transform of size 8192. That glitch could be fixed, but I think something as basic as fftconvolve should reside at a higher level than scipy.signalsanyway, say in numpy.fft. There are other scipy functions that do convolution, but those that use an fft are limited. There is a (buggy) version for 2d arrays in stsci, and a version with limited (real) functionality in fftpack. I don't see any more. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Sun May 27 19:05:59 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 27 May 2007 17:05:59 -0600 Subject: [Numpy-discussion] APL2007: Arrays and Objects Message-ID: <465A0ED7.1090506@shrogers.com> I'd like to see some NumPy/SciPy participation in this. I'm thinking of submitting a "Python as a Tool of Thought" paper, playing off of Ken Iverson's Turing Award Lecture "Notation as a Tool of Thought". A NumPy/SciPy tutorial would be valuable as well. # Steve =================================== Announcing APL2007, Montreal, October 21-23 The APL 2007 conference, sponsored by ACM SIGAPL, has as its principal theme "Arrays and Objects" and, appropriately, is co-located with OOPSLA 2007, in Montreal this October. APL 2007 starts with a tutorial day on Sunday, October 21, followed by a two-day program on Monday and Tuesday, October 22 and 23. APLers are welcome to attend OOPSLA program events on Monday and Tuesday (and OOPSLA attendees are welcome to come to APL program events). Registrants at APL 2007 can add full OOPSLA attendance at a favorable price. 
Dates: Sunday Oct 21 Tutorials Monday, Tuesday Oct 22,23 APL 2007 program Monday-Friday Oct 22-26 OOPSLA program APL 2007 keynote speaker: Guy Steele, Sun Microsystems Laboratories Tutorials Using objects within APL Array language practicum Intro to [language] for other-language users ( We expect that there will be at least one introductory tutorial on "classic" APL, and hope to have introductions to a variety of array languages ) We solicit papers and proposals for tutorials, panels and workshops on all aspects of array-oriented programming and languages; this year we have particular interest in the themes of integrating the use of arrays and objects languages that support the use of arrays as a central and thematic technique marketplace and education: making practitioners aware of array thinking and array languages Our interest is in the essential use of arrays in programming in any language (though our historical concern has been the APL family of languages: classic APL, J, K, NIAL, ....). Dates: Tutorial, panel, and workshop proposals, and notice of intent to submit papers, are due by Friday June 15, to the Program Chair. Contributed papers, not more than 10 pages in length, are due by Monday, July 23, to the Program Chair. Details of form of submission can be obtained from the program chair. Deadline for detailed tutorial/panel/workshop information TBA. Cost (to SIGAPL and ACM members, approximate $US, final cost TBA) APL2007 registration $375 Tutorial day $250 Single conference days $200 Social events: Opening reception Monday Others TBA Conference venue: Palais de Congres, Montreal, Quebec, CANADA Conference hotel: Hyatt Regency Montreal Committee General Chair Guy Laroque Guy.Laroque at nrcan.gc.ca Program Chair Lynne C. Shaw Shaw at ACM.org Treasurer Steven H. Rogers Steve at SHRogers.com Publicity Mike Kent MKent at ACM.org Links APL2007 http://www.sigapl.org/apl2007 OOPSLA 2007 http://www.oopsla.org/oopsla2007 Palais de Congres http://www.congresmtl.com/ Hyatt Regency Montreal http://montreal.hyatt.com Guy Steele http://research.sun.com/people/mybio.php?uid=25706 ACM SIGAPL http://www.sigapl.org From cookedm at physics.mcmaster.ca Sun May 27 19:32:44 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 27 May 2007 19:32:44 -0400 Subject: [Numpy-discussion] Numpy 1.0.3 install problem. Help! In-Reply-To: <4659CE0A.1050106@ieee.org> References: <1180107782.46570406acf1f@webmail.unb.ca> <46584F7C.9070504@ieee.org> <1180272107.465985eb5b931@webmail.unb.ca> <4659CE0A.1050106@ieee.org> Message-ID: <20070527233244.GA5138@arbutus.physics.mcmaster.ca> On Sun, May 27, 2007 at 12:29:30PM -0600, Travis Oliphant wrote: > Yang, Lu wrote: > > Thanks, Travis. I don't have problem building other applications on the same platform. > > Are there any files in the extracted /numpy-1.0.3 that I can modify the path of the C > > compiler? I have checked all the files in it withouth luck. > > > > The C-compiler that is used is the same one used to build Python. It is > picked up using Python distutils. So, another problem could be that > the compiler used to build Python is not available. > > You should look in the file > > $PYTHONDIR/config/Makefile > > where $PYTHONDIR is where Python is installed on your system. > > There will be a line > > CC = > > in there. That's the compiler that is going to be used. Also, check > your $PATH variable when you are building numpy if a full path name is > not given in the CC line to see what compiler will be picked up. 
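A quick way to check what that resolves to, without hunting down the Makefile by hand (the distutils sysconfig API is standard, but treat the one-liner as an untested sketch):

$ python -c "from distutils import sysconfig; print sysconfig.get_config_var('CC')"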
Also, check that you don't have a CC environment variable defined (i.e., echo $CC should be blank), as that will override the Python Makefile settings.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From matthieu.brucher at gmail.com Mon May 28 02:03:38 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 28 May 2007 08:03:38 +0200
Subject: [Numpy-discussion] chirp-z
In-Reply-To:
References:
Message-ID:

> Umm, no,
>
> There really aren't any transparent fast fft convolutions in SciPy. The
> closest thing is in signaltools, fftconvolve, and if you ask it to convolve,
> say, sequences whose length add up to 7902, then it will do a size 7901
> transform. Because 7901 is prime this takes about 300 times as long as a
> transform of size 8192. That glitch could be fixed, but I think something as
> basic as fftconvolve should reside at a higher level than scipy.signals
> anyway, say in numpy.fft.

Why not scipy.fft then? It is currently reviewed.

> There are other scipy functions that do convolution, but those that use an
> fft are limited. There is a (buggy) version for 2d arrays in stsci, and a
> version with limited (real) functionality in fftpack. I don't see any more.

Matthieu

From david at ar.media.kyoto-u.ac.jp Mon May 28 02:07:14 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 28 May 2007 15:07:14 +0900
Subject: [Numpy-discussion] chirp-z
In-Reply-To:
References:
Message-ID: <465A7192.8050102@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
>
> On 5/27/07, *Matthieu Brucher* wrote:
>
> Hi,
>
> Isn't the chirp transform only two cross-correlations ? And for a
> fast one, there is a module in SciPy, and I think that kind of
> operation belongs more to Scipy than Numpy ;)
>
> Umm, no,
>
> There really aren't any transparent fast fft convolutions in SciPy.
> The closest thing is in signaltools, fftconvolve, and if you ask it to
> convolve, say, sequences whose length add up to 7902, then it will do
> a size 7901 transform.
> Because 7901 is prime this takes about 300
> times as long as a transform of size 8192.

There is only one fft implementation in numpy, and I don't think it works (e.g. it is not O(NlogN)) for prime numbers.

> That glitch could be fixed, but I think something as basic as
> fftconvolve should reside at a higher level than scipy.signals anyway,
> say in numpy.fft.

numpy is for low level things. The problem with fft is that good, general fft algorithms (which work for any size in O(NlogN)) are not that common, and not easily distributable. scipy.fft does have code to do that, though, if you have one of the required libraries (MKL and FFTW both provide NlogN behaviour for any size).

Now, for convolution, you could use 0 padding, but this kind of thing may be too high level for numpy (which should be kept to the minimum: the goal of numpy really is to provide ndarray and basic operations).

> There are other scipy functions that do convolution, but those that
> use an fft are limited. There is a (buggy) version for 2d arrays in
> stsci, and a version with limited (real) functionality in fftpack. I
> don't see any more.

Could you open a ticket on scipy trac with an example which shows the bug, and explain what you want? I may take a look at it (I have some code to do cross correlation in numpy somewhere, and could take time to improve its quality for inclusion in scipy).

David

From david at ar.media.kyoto-u.ac.jp Mon May 28 02:15:33 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 28 May 2007 15:15:33 +0900
Subject: [Numpy-discussion] chirp-z
In-Reply-To:
References:
Message-ID: <465A7385.3090900@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
> > There really aren't any transparent fast fft convolutions in
> > SciPy. The closest thing is in signaltools, fftconvolve, and if
> > you ask it to convolve, say, sequences whose length add up to
> > 7902, then it will do a size 7901 transform.
>
> BTW, is this really a glitch? I think there are two schools there:
> - those who think the software must do something
> - those who think it is the programmer's responsibility.
>
> If I give fftconvolve sequences that add up this way, I want it
> to use a 7901 transform, not a 2^n transform. So you understood I'm in
> the second group.

I may miss something, but in theory at least, using zero padding and an fft of size 2^N (8192 here) will give you exactly the same result as doing an fft of size 7901, if done right. Then, there are some precision problems which may appear, but I don't think they are significant for common cases. I know for sure that matlab implements fast convolution/correlation this way (e.g. in its lpc code).

> If people want to use a convolution with fft, they _should_ know a
> little about this algorithm.

I generally agree with you, especially for a package aimed at scientists (we are supposed to think, after all), but in this precise case I think there is a valid case for an exception, especially since it has almost no cost.

> I'm a moderator on a French programming forum, and in the algorithm
> area, there are people who don't even know the FFT algorithm who
> want to make complicated things with it; it is bound to fail. I suppose
> that indicating this "limitation" in the docstring is enough, so that
> people can decide whether it is enough or not.

If someone does use fft but does not know it is better to use 2^n, will he take a look at the docstrings :) ?
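For reference, the whole trick is only a few lines. A minimal sketch (untested, and assuming real-valued inputs):

import numpy as N

def fftconvolve_pow2(x, y):
    # length of the full linear convolution
    n = len(x) + len(y) - 1
    # smallest power of two >= n; fft(a, nfft) zero-pads for us
    nfft = 1
    while nfft < n:
        nfft *= 2
    X = N.fft.fft(x, nfft)
    Y = N.fft.fft(y, nfft)
    # truncating to n gives the same linear convolution as an
    # exact-size transform; for real inputs the imaginary part
    # is only rounding error
    return N.fft.ifft(X * Y)[:n].real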
David

From matthieu.brucher at gmail.com Mon May 28 02:31:20 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 28 May 2007 08:31:20 +0200
Subject: [Numpy-discussion] chirp-z
In-Reply-To: <465A7385.3090900@ar.media.kyoto-u.ac.jp>
References: <465A7385.3090900@ar.media.kyoto-u.ac.jp>
Message-ID:

> > I'm a moderator on a French programming forum, and in the algorithm
> > area, there are people who don't even know the FFT algorithm who
> > want to make complicated things with it; it is bound to fail. I suppose
> > that indicating this "limitation" in the docstring is enough, so that
> > people can decide whether it is enough or not.
>
> If someone does use fft but does not know it is better to use 2^n, will
> he take a look at the docstrings :) ?

Well, if he's French, he won't look ;)

Matthieu

From stefan at sun.ac.za Mon May 28 02:37:59 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 28 May 2007 08:37:59 +0200
Subject: [Numpy-discussion] chirp-z
In-Reply-To:
References:
Message-ID: <20070528063759.GL6192@mentat.za.net>

On Sat, May 26, 2007 at 10:37:41PM -0600, Charles R Harris wrote:
> I have been looking around for chirp-z code with a usable license. There is
> the original fortran version by Rader et al. out there, as well as a package
> from FreeBSD at
> http://ftp2.at.freebsd.org/pub/FreeBSD/distfiles/fxt-2006.12.17.tgz. The latter
> is c++ and relies on function signatures to distinguish various functions with
> the same name, but looks pretty easy to translate. I note that we could
> probably use a fast_correlate also. Is there any interest in adding these to
> the fft pack? Or do they properly belong in scipy?

The thread here:

http://www.mail-archive.com/numpy-discussion at scipy.org/msg01812.html

contains an implementation. Since it uses the FFT to do its underlying calculations, it's pretty fast.

Cheers
Stéfan

From stefan at sun.ac.za Mon May 28 04:12:58 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 28 May 2007 10:12:58 +0200
Subject: [Numpy-discussion] chirp-z
In-Reply-To: <465A7385.3090900@ar.media.kyoto-u.ac.jp>
References: <465A7385.3090900@ar.media.kyoto-u.ac.jp>
Message-ID: <20070528081258.GP6192@mentat.za.net>

On Mon, May 28, 2007 at 03:15:33PM +0900, David Cournapeau wrote:
> Matthieu Brucher wrote:
> > > There really aren't any transparent fast fft convolutions in
> > > SciPy. The closest thing is in signaltools, fftconvolve, and if
> > > you ask it to convolve, say, sequences whose length add up to
> > > 7902, then it will do a size 7901 transform.
> >
> > BTW, is this really a glitch? I think there are two schools there:
> > - those who think the software must do something
> > - those who think it is the programmer's responsibility.
> >
> > If I give fftconvolve sequences that add up this way, I want it
> > to use a 7901 transform, not a 2^n transform. So you understood I'm in
> > the second group.
>
> I may miss something, but in theory at least, using zero padding and
> an fft of size 2^N (8192 here) will give you exactly the same result as
> doing an fft of size 7901, if done right. Then, there are some precision
> problems which may appear, but I don't think they are significant for
> common cases. I know for sure that matlab implements fast
> convolution/correlation this way (e.g. in its lpc code).

There is also the problem that you suddenly double your memory usage.
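The worst case is a length just past a power of two, where the padding nearly doubles the transform size -- e.g. (a quick check, assuming numpy imported as N):

>>> n = 8193
>>> 2**int(N.ceil(N.log2(n)))
16384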
Stéfan

From faltet at carabos.com Mon May 28 04:47:23 2007
From: faltet at carabos.com (Francesc Altet)
Date: Mon, 28 May 2007 10:47:23 +0200
Subject: [Numpy-discussion] Aligning an array on Windows
In-Reply-To:
References: <1180031632.2585.42.camel@localhost.localdomain> <200705252039.14426.faltet@carabos.com>
Message-ID: <1180342043.2699.7.camel@localhost.localdomain>

El dv 25 de 05 del 2007 a les 14:19 -0700, en/na Timothy Hochberg va escriure:
> Don't feel bad; I had a very similar problem early on, when we were
> first adding multiple types, and it mystified me for considerably
> longer than this seems to have stumped you.

Well, I wouldn't be telling the truth if I said that this doesn't help ;)

Anyway, I think that this piece of code is dangerous enough that, in order to avoid someone (including me!) tripping over it again, it would be nice to apply the following 'patch':

Index: interp_body.c
===================================================================
--- interp_body.c (revision 3053)
+++ interp_body.c (working copy)
@@ -89,6 +89,9 @@
     unsigned int arg2 = params.program[pc+3];
 #define arg3 params.program[pc+5]
 #define store_index params.index_data[store_in]
+    /* WARNING: From now on, only do references to params.mem[arg[123]]
+       & params.memsteps[arg[123]] inside the VEC_ARG[123] macros,
+       or you will risk accessing invalid addresses. */
 #define reduce_ptr (dest + flat_index(&store_index, j))
 #define i_reduce *(long *)reduce_ptr
 #define f_reduce *(double *)reduce_ptr

Cheers,

-- Francesc Altet   | Be careful about using the following code --
   Carabos Coop. V. | I've only proven that it works,
   www.carabos.com  | I haven't tested it. -- Donald Knuth

From david at ar.media.kyoto-u.ac.jp Mon May 28 04:47:24 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 28 May 2007 17:47:24 +0900
Subject: [Numpy-discussion] chirp-z
In-Reply-To: <20070528081258.GP6192@mentat.za.net>
References: <465A7385.3090900@ar.media.kyoto-u.ac.jp> <20070528081258.GP6192@mentat.za.net>
Message-ID: <465A971C.8080908@ar.media.kyoto-u.ac.jp>

Stefan van der Walt wrote:
>
> There is also the problem that you suddenly double your memory usage.

Is it a problem for 1d signals? I cannot think of an example where you would need an fft long enough for this to be a problem, in the applications of fft I know. Also, if you do several 1d ffts of the same size, in most cases you can pack several of them together and do a "vectorized" fft, which will not take double the memory.

Now, I know there are many applications of fft that I don't know about, and this may be a problem for multi-dimensional problems. But then, a direct implementation would be really slow (I am of course talking of cases where you need the full convolution/correlation; if you need only a part of it, then it is a totally different matter, and I don't think there is a good way to know automatically what's best; I had to code my own lpc routine in matlab for this exact reason).

I think the problem with correlation and convolution right now is that there are several implementations of them in different places, which is a bit confusing at first (at least it was for me when I started using scipy). If I do a grep on convolution for the numpy and scipy sources, I get:

- numpy/core/numeric : implemented using correlation, which itself uses a direct implementation (in C, with dot).
- scipy.stsci and scipy.ndimage: I have never used those packages.
- scipy.fftpack : uses fft, obviously.
- scipy.signal : proposes both with and without fft.

Maybe there would be a way to avoid having so many packages provide more or less the same functionality? Providing a fast correlation function without too much memory overhead was one of the main reasons for ticket 400, which I opened in numpy trac (provide a hook to an fft implementation at the C level).

cheers,

David

From svetosch at gmx.net Mon May 28 12:11:46 2007
From: svetosch at gmx.net (Sven Schreiber)
Date: Mon, 28 May 2007 18:11:46 +0200
Subject: [Numpy-discussion] complex arrays and ctypes
In-Reply-To: <20070527155501.GA4970@dogbert.sdsl.sun.ac.za>
References: <4659A254.6020700@gmx.net> <20070527155501.GA4970@dogbert.sdsl.sun.ac.za>
Message-ID: <465AFF42.1070007@gmx.net>

Albert Strasheim schrieb:
> Hello
>
> On Sun, 27 May 2007, Sven Schreiber wrote:
>
>> Hi,
>> I am wondering whether it's possible to pass (and get back) arrays of
>> complex floats via ctypes.
>>
>> To be concrete, I have managed to access lapack's dgges (Fortran)
>> function using ctypes (thanks to a related example on this list), and
>> now I would like to do the same with the complex version zgges.
>
> An array just gets passed to the function as a pointer, so the answer is
> definitely yes. Do you need some specific code?

Thanks for offering, but indeed it already seems to work! I asked first (before trying), because getting a ready-to-use lapack dll for Windows that actually includes zgges is not trivial. (It turned out that the one from scilab has it.) And being totally ignorant of Fortran, I didn't know what the equivalent of Fortran's COMPLEX*16 is in numpy (complex128 seems to work here).

The ctypes integration in numpy is really great!

Thanks,
Sven

From ryanlists at gmail.com Mon May 28 19:54:50 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 28 May 2007 18:54:50 -0500
Subject: [Numpy-discussion] Vista installer?
In-Reply-To: <4659EAE3.8040306@astraw.com>
References: <4659EAE3.8040306@astraw.com>
Message-ID:

What is the easiest way to get started with VMware with Windows as the host operating system? Do I need just VMware player? Do I need some Ubuntu kernel or image or something?

Ryan

On 5/27/07, Andrew Straw wrote:
> Charles R Harris wrote:
> >
> > On 5/24/07, *Ryan Krauss* wrote:
> >
> > I am trying to use Numpy/Scipy for a class I am teaching this summer.
> > I have one student running Vista. Is there an installer that works
> > for Vista? Running the exe file from webpage gives errors about not
> > being able to create various folders and files. I think this is from
> > Vista being very restrictive about which files and folders are
> > writable. Is anyone out there running Numpy/Scipy in Vista? If so,
> > how did you get it to work?
> >
> > Install Ubuntu? ;) I've heard nothing but complaints and nasty words
> > from co-workers stuck with new computers and trying to use Vista as a
> > development platform for scientific work.
> Just a follow-up, VMware is now given away and Ubuntu has always been
> free, and I guess most computers with Vista have native support for
> virtualization. So, this way you could run Ubuntu and Vista
> simultaneously without a speed loss on either or partitioning the drive.
> Of course, if it were me I'd put Ubuntu as the host OS and probably
> never boot the guest OS.
;) > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From bhendrix at enthought.com Mon May 28 20:14:31 2007 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 28 May 2007 19:14:31 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> Message-ID: <465B7067.9090606@enthought.com> Ryan, The VMWare player is the best starting point. Just download and install the player, then get an image from somewhere (you can find a plethora of them on the VMWare web site). After downloading the image, its as simple as double clicking on the file in explorer or launching it via the start menu. Bryce Ryan Krauss wrote: > What is the easiet way to get started with VMWare with windows as the > host operating system? Do I need just VMware player? Do I need some > Ubuntu kernel or image or something? > > Ryan > > On 5/27/07, Andrew Straw wrote: > >> Charles R Harris wrote: >> >>> On 5/24/07, *Ryan Krauss* >> > wrote: >>> >>> I am trying to use Numpy/Scipy for a class I am teaching this summer. >>> I have one student running Vista. Is there an installer that works >>> for Vista? Running the exe file from webpage gives errors about not >>> being able to create various folders and files. I think this is from >>> Vista being very restrictive about which files and folders are >>> writable. Is anyone out there running Numpy/Scipy in Vista? If so, >>> how did you get it to work? >>> >>> >>> Install Ubuntu? ;) I've heard nothing but complaints and nasty words >>> from co-workers stuck with new computers and trying to use Vista as a >>> development platform for scientific work. >>> >> Just a follow-up, vmware is now given away and Ubuntu has always been >> free, and I guess most computers with Vista have native support for >> virtualization. So, this way you could run Ubuntu and Vista >> simultaneously without a speed loss on either or partitioning the drive. >> Of course, if it were me I'd put Ubuntu as the host OS and probably >> never boot the guest OS. ;) >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Mon May 28 21:56:59 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 May 2007 20:56:59 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: <465B7067.9090606@enthought.com> References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: It appears that the free VMware player can't just read an iso. Where can I get a good Ubuntu virtual appliance? On 5/28/07, Bryce Hendrix wrote: > > Ryan, > > The VMWare player is the best starting point. Just download and install the > player, then get an image from somewhere (you can find a plethora of them on > the VMWare web site). After downloading the image, its as simple as double > clicking on the file in explorer or launching it via the start menu. > > Bryce > > > Ryan Krauss wrote: > What is the easiet way to get started with VMWare with windows as the > host operating system? Do I need just VMware player? 
Do I need some > Ubuntu kernel or image or something? > > Ryan > > On 5/27/07, Andrew Straw wrote: > > > Charles R Harris wrote: > > > On 5/24/07, *Ryan Krauss* > wrote: > > I am trying to use Numpy/Scipy for a class I am teaching this summer. > I have one student running Vista. Is there an installer that works > for Vista? Running the exe file from webpage gives errors about not > being able to create various folders and files. I think this is from > Vista being very restrictive about which files and folders are > writable. Is anyone out there running Numpy/Scipy in Vista? If so, > how did you get it to work? > > > Install Ubuntu? ;) I've heard nothing but complaints and nasty words > from co-workers stuck with new computers and trying to use Vista as a > development platform for scientific work. > > Just a follow-up, vmware is now given away and Ubuntu has always been > free, and I guess most computers with Vista have native support for > virtualization. So, this way you could run Ubuntu and Vista > simultaneously without a speed loss on either or partitioning the drive. > Of course, if it were me I'd put Ubuntu as the host OS and probably > never boot the guest OS. ;) > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From wbaxter at gmail.com Mon May 28 22:11:36 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 29 May 2007 11:11:36 +0900 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: On 5/29/07, Ryan Krauss wrote: > It appears that the free VMware player can't just read an iso. Where > can I get a good Ubuntu virtual appliance? The 'image' you're looking for is not an iso, it's a special VMWare image. Try this one: http://www.vmware.com/vmtn/appliances/directory/ubuntu.html --bb > > On 5/28/07, Bryce Hendrix wrote: > > > > Ryan, > > > > The VMWare player is the best starting point. Just download and install the > > player, then get an image from somewhere (you can find a plethora of them on > > the VMWare web site). After downloading the image, its as simple as double > > clicking on the file in explorer or launching it via the start menu. > > > > Bryce > > > > > > Ryan Krauss wrote: > > What is the easiet way to get started with VMWare with windows as the > > host operating system? Do I need just VMware player? Do I need some > > Ubuntu kernel or image or something? > > > > Ryan > > > > On 5/27/07, Andrew Straw wrote: > > > > > > Charles R Harris wrote: > > > > > > On 5/24/07, *Ryan Krauss* > > wrote: > > > > I am trying to use Numpy/Scipy for a class I am teaching this summer. > > I have one student running Vista. Is there an installer that works > > for Vista? Running the exe file from webpage gives errors about not > > being able to create various folders and files. I think this is from > > Vista being very restrictive about which files and folders are > > writable. Is anyone out there running Numpy/Scipy in Vista? If so, > > how did you get it to work? > > > > > > Install Ubuntu? 
;) I've heard nothing but complaints and nasty words > > from co-workers stuck with new computers and trying to use Vista as a > > development platform for scientific work. > > > > Just a follow-up, vmware is now given away and Ubuntu has always been > > free, and I guess most computers with Vista have native support for > > virtualization. So, this way you could run Ubuntu and Vista > > simultaneously without a speed loss on either or partitioning the drive. > > Of course, if it were me I'd put Ubuntu as the host OS and probably > > never boot the guest OS. ;) > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From ryanlists at gmail.com Mon May 28 22:16:23 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 May 2007 21:16:23 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: Thanks Bill. I saw that one, but it looks like it is only for 64bit AMD processors. I will download that one and try it out just to get a feel for how it works, but I was looking for a 32bit i386 to be a general as possible. Ryan On 5/28/07, Bill Baxter wrote: > On 5/29/07, Ryan Krauss wrote: > > It appears that the free VMware player can't just read an iso. Where > > can I get a good Ubuntu virtual appliance? > > The 'image' you're looking for is not an iso, it's a special VMWare image. > Try this one: > http://www.vmware.com/vmtn/appliances/directory/ubuntu.html > > --bb > > > > > On 5/28/07, Bryce Hendrix wrote: > > > > > > Ryan, > > > > > > The VMWare player is the best starting point. Just download and install the > > > player, then get an image from somewhere (you can find a plethora of them on > > > the VMWare web site). After downloading the image, its as simple as double > > > clicking on the file in explorer or launching it via the start menu. > > > > > > Bryce > > > > > > > > > Ryan Krauss wrote: > > > What is the easiet way to get started with VMWare with windows as the > > > host operating system? Do I need just VMware player? Do I need some > > > Ubuntu kernel or image or something? > > > > > > Ryan > > > > > > On 5/27/07, Andrew Straw wrote: > > > > > > > > > Charles R Harris wrote: > > > > > > > > > On 5/24/07, *Ryan Krauss* > > > wrote: > > > > > > I am trying to use Numpy/Scipy for a class I am teaching this summer. > > > I have one student running Vista. Is there an installer that works > > > for Vista? Running the exe file from webpage gives errors about not > > > being able to create various folders and files. I think this is from > > > Vista being very restrictive about which files and folders are > > > writable. Is anyone out there running Numpy/Scipy in Vista? If so, > > > how did you get it to work? > > > > > > > > > Install Ubuntu? 
;) I've heard nothing but complaints and nasty words > > > from co-workers stuck with new computers and trying to use Vista as a > > > development platform for scientific work. > > > > > > Just a follow-up, vmware is now given away and Ubuntu has always been > > > free, and I guess most computers with Vista have native support for > > > virtualization. So, this way you could run Ubuntu and Vista > > > simultaneously without a speed loss on either or partitioning the drive. > > > Of course, if it were me I'd put Ubuntu as the host OS and probably > > > never boot the guest OS. ;) > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at scipy.org > > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at scipy.org > > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > > > > > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at scipy.org > > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From wbaxter at gmail.com Mon May 28 22:21:20 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 29 May 2007 11:21:20 +0900 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: No, there's a link in the middle of the page that says "ALSO available for AMD 64bit", but the link you're looking for is in the upper right corner of the page, and is for Intel 32: http://www.vmware.com/vmtn/appliances/directory/scripts/va-stats/appliance-redirect.php?nid=595&target=http%3A%2F%2Fisv-image.ubuntu.com%2Fvmware%2FUbuntu-6.06.1-desktop-i386.zip --bb On 5/29/07, Ryan Krauss wrote: > Thanks Bill. I saw that one, but it looks like it is only for 64bit > AMD processors. I will download that one and try it out just to get a > feel for how it works, but I was looking for a 32bit i386 to be a > general as possible. > From ryanlists at gmail.com Mon May 28 22:45:31 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 May 2007 21:45:31 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: Thanks again. I guess I just read that wrong. On 5/28/07, Bill Baxter wrote: > No, there's a link in the middle of the page that says "ALSO available > for AMD 64bit", but the link you're looking for is in the upper right > corner of the page, and is for Intel 32: > http://www.vmware.com/vmtn/appliances/directory/scripts/va-stats/appliance-redirect.php?nid=595&target=http%3A%2F%2Fisv-image.ubuntu.com%2Fvmware%2FUbuntu-6.06.1-desktop-i386.zip > > --bb > > On 5/29/07, Ryan Krauss wrote: > > Thanks Bill. I saw that one, but it looks like it is only for 64bit > > AMD processors. I will download that one and try it out just to get a > > feel for how it works, but I was looking for a 32bit i386 to be a > > general as possible. 
> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From ryanlists at gmail.com Tue May 29 00:11:05 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 May 2007 23:11:05 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: I have this more or less working, the only problem is that my guest Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic networking. So, I can't install anything. Does anyone have a VMWare virtual appliance with Scipy/Numpy/IPython/Matplotlib already installed? I posted a question to the VMWare forum about networking, but will welcome any help from here. Thanks, Ryan On 5/28/07, Ryan Krauss wrote: > Thanks again. I guess I just read that wrong. > > On 5/28/07, Bill Baxter wrote: > > No, there's a link in the middle of the page that says "ALSO available > > for AMD 64bit", but the link you're looking for is in the upper right > > corner of the page, and is for Intel 32: > > http://www.vmware.com/vmtn/appliances/directory/scripts/va-stats/appliance-redirect.php?nid=595&target=http%3A%2F%2Fisv-image.ubuntu.com%2Fvmware%2FUbuntu-6.06.1-desktop-i386.zip > > > > --bb > > > > On 5/29/07, Ryan Krauss wrote: > > > Thanks Bill. I saw that one, but it looks like it is only for 64bit > > > AMD processors. I will download that one and try it out just to get a > > > feel for how it works, but I was looking for a 32bit i386 to be a > > > general as possible. > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > From david at ar.media.kyoto-u.ac.jp Tue May 29 00:06:50 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 May 2007 13:06:50 +0900 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <4659EAE3.8040306@astraw.com> <465B7067.9090606@enthought.com> Message-ID: <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > I have this more or less working, the only problem is that my guest > Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic > networking. So, I can't install anything. Does anyone have a VMWare > virtual appliance with Scipy/Numpy/IPython/Matplotlib already > installed? > > I posted a question to the VMWare forum about networking, but will > welcome any help from here. > I use vmware to test various packages: it is a barebone version (only command line), dunno if this is of any interest for you ? David From ryanlists at gmail.com Tue May 29 00:26:31 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 May 2007 23:26:31 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> References: <465B7067.9090606@enthought.com> <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> Message-ID: I need to plot things using matplotlib, so I don't think it works for me without X. Thanks though. Ryan On 5/28/07, David Cournapeau wrote: > Ryan Krauss wrote: > > I have this more or less working, the only problem is that my guest > > Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic > > networking. So, I can't install anything. Does anyone have a VMWare > > virtual appliance with Scipy/Numpy/IPython/Matplotlib already > > installed? 
> > > > I posted a question to the VMWare forum about networking, but will > > welcome any help from here. > > > I use vmware to test various packages: it is a barebone version (only > command line), dunno if this is of any interest for you ? > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From david at ar.media.kyoto-u.ac.jp Tue May 29 00:30:19 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 May 2007 13:30:19 +0900 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <465B7067.9090606@enthought.com> <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> Message-ID: <465BAC5B.9030001@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > I need to plot things using matplotlib, so I don't think it works for > me without X. > This is becoming OT, but maybe it would be easier to solve your network problem ? You can contact me privately if you have network problems with ubuntu, This makes me think that having a ubuntu (or whatever distribution) appliance with scipy and co would be a pretty good thing to have available for promoting scipy... David From strawman at astraw.com Tue May 29 00:39:32 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 21:39:32 -0700 Subject: [Numpy-discussion] Vista installer? In-Reply-To: References: <465B7067.9090606@enthought.com> <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> Message-ID: <465BAE84.1020106@astraw.com> Hi Ryan, I use VMware server on my linux box to host several more linux images. I will see if I can whip you up a Ubuntu Feisty i386 image with the "big 4" - numpy/scipy/matplotlib/ipython. If I understand their docs correctly, I have "virtual appliances" for previously existing images already... If it's that easy, this should take just a few minutes. I'll see what I can do and post the results. -Andrew Ryan Krauss wrote: > I need to plot things using matplotlib, so I don't think it works for > me without X. > > Thanks though. > > Ryan > > On 5/28/07, David Cournapeau wrote: > >> Ryan Krauss wrote: >> >>> I have this more or less working, the only problem is that my guest >>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic >>> networking. So, I can't install anything. Does anyone have a VMWare >>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already >>> installed? >>> >>> I posted a question to the VMWare forum about networking, but will >>> welcome any help from here. >>> >>> >> I use vmware to test various packages: it is a barebone version (only >> command line), dunno if this is of any interest for you ? >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From strawman at astraw.com Tue May 29 02:10:59 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 23:10:59 -0700 Subject: [Numpy-discussion] Vista installer? 
In-Reply-To: <465BAE84.1020106@astraw.com> References: <465B7067.9090606@enthought.com> <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> <465BAE84.1020106@astraw.com> Message-ID: <465BC3F3.9060806@astraw.com> OK, I have placed an Ubuntu 7.04 image with stock numpy, scipy, matplotlib, and ipython at http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip The md5sum is 4191e13abda1154c94e685ffdc0f829b. Note: I haven't tested this at all on any computer other than the one which I created the virtual appliance on. I'm posting now because it will take me a while to download the file (it's 1 GB) in order to test. -Andrew Andrew Straw wrote: > Hi Ryan, > > I use VMware server on my linux box to host several more linux images. I > will see if I can whip you up a Ubuntu Feisty i386 image with the "big > 4" - numpy/scipy/matplotlib/ipython. If I understand their docs > correctly, I have "virtual appliances" for previously existing images > already... If it's that easy, this should take just a few minutes. I'll > see what I can do and post the results. > > -Andrew > > Ryan Krauss wrote: >> I need to plot things using matplotlib, so I don't think it works for >> me without X. >> >> Thanks though. >> >> Ryan >> >> On 5/28/07, David Cournapeau wrote: >> >>> Ryan Krauss wrote: >>> >>>> I have this more or less working, the only problem is that my guest >>>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic >>>> networking. So, I can't install anything. Does anyone have a VMWare >>>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already >>>> installed? >>>> >>>> I posted a question to the VMWare forum about networking, but will >>>> welcome any help from here. >>>> >>>> >>> I use vmware to test various packages: it is a barebone version (only >>> command line), dunno if this is of any interest for you ? >>> >>> David >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From strawman at astraw.com Tue May 29 03:24:46 2007 From: strawman at astraw.com (Andrew Straw) Date: Tue, 29 May 2007 00:24:46 -0700 Subject: [Numpy-discussion] Vista installer? In-Reply-To: <465BC3F3.9060806@astraw.com> References: <465B7067.9090606@enthought.com> <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> <465BAE84.1020106@astraw.com> <465BC3F3.9060806@astraw.com> Message-ID: <465BD53E.2020205@astraw.com> Andrew Straw wrote: > OK, I have placed an Ubuntu 7.04 image with stock numpy, scipy, matplotlib, and ipython at http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip > > The md5sum is 4191e13abda1154c94e685ffdc0f829b. > > Note: I haven't tested this at all on any computer other than the one which I created the virtual appliance on. I'm posting now because it will take me a while to download the file (it's 1 GB) in order to test. > After downloading the 1GB, it does seems to work on my laptop. (On a Mac running VMware Fusion, even! Sweet! I thought this job would require booting my desktop.) The username is "ubuntu" and the password is "abc123". 
I set the network up to share the host's interface with NAT. Hmm, as David suggests, this might be a pretty good way to make an easy-to-try scipy environment, particularly for Windows users. I'm happy to continue to (allow Caltech to) host this, but also happy if we move it, or an improved version, somewhere else. For example, I could imagine wanting auto login turned on and some kind of icon on the desktop that says "click here for interactive python prompt". -Andrew > -Andrew > > Andrew Straw wrote: > >> Hi Ryan, >> >> I use VMware server on my linux box to host several more linux images. I will see if I can whip you up a Ubuntu Feisty i386 image with the "big 4" - numpy/scipy/matplotlib/ipython. If I understand their docs correctly, I have "virtual appliances" for previously existing images already... If it's that easy, this should take just a few minutes. I'll see what I can do and post the results. >> >> -Andrew >> >> Ryan Krauss wrote: >> >>> I need to plot things using matplotlib, so I don't think it works for >>> me without X. >>> >>> Thanks though. >>> >>> Ryan >>> >>> On 5/28/07, David Cournapeau wrote: >>> >>>> Ryan Krauss wrote: >>>> >>>>> I have this more or less working, the only problem is that my guest >>>>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic >>>>> networking. So, I can't install anything. Does anyone have a VMWare >>>>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already >>>>> installed? >>>>> >>>>> I posted a question to the VMWare forum about networking, but will >>>>> welcome any help from here. >>>>> >>>>> >>>> I use vmware to test various packages: it is a barebone version (only >>>> command line), dunno if this is of any interest for you ? >>>> >>>> David >>>> _______________________________________________ >>>> Numpy-discussion mailing list >>>> Numpy-discussion at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charles.vejnar at isb-sib.ch Tue May 29 05:44:25 2007 From: charles.vejnar at isb-sib.ch (Charles V.) Date: Tue, 29 May 2007 11:44:25 +0200 Subject: [Numpy-discussion] Long double in Python with Numpy Message-ID: <200705291144.25842.charles.vejnar@isb-sib.ch> Hi, I have a C library using "long double" numbers. With SWIG, I am able to use this library in Python. I use the PyArray_FromDims function in the SWIG "interface" file. But sometimes, I only need one long double : it's not very convenient to create an array for only one number... In Python there is "numpy.longdouble". Did you know how to create this object in C in the SWIG "interface" file ? (maybe the PyLongDoubleScalarObject structure ?) 
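For reference, the scalar type being asked about is exposed to Python as numpy.longdouble, which wraps the platform's C "long double"; on the C side, PyArray_Scalar() with an NPY_LONGDOUBLE descriptor looks like the natural entry point, though the exact signature should be checked against the NumPy C-API reference. A minimal Python-side sketch (the concrete dtype name and itemsize are platform-dependent):

  import numpy

  # numpy.longdouble wraps the platform's C "long double"; the concrete
  # dtype name (float96, float128, ...) and its itemsize vary by platform.
  ld = numpy.dtype(numpy.longdouble)

  # A single long double scalar, with no array wrapper around it:
  x = numpy.longdouble(1) / 3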
Thank you Charles

From openopt at ukr.net Tue May 29 07:55:57 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 29 May 2007 14:55:57 +0300 Subject: [Numpy-discussion] howto exclude elements from sorted array which are present in other sorted array Message-ID: <465C14CD.3090306@ukr.net> Hi all, I have 2 sorted arrays arr1 & arr2 (ascending order, for example [1,2,5,8] and [3,5]); the lengths may be in the thousands, so I need the simplest way that is not costly. How to obtain arr3 that consists of the elements present in arr1 but absent in arr2? Of course, I can write something myself, like arr3 = [x for x in arr1 if x not in arr2] but maybe routines for this are already present in numpy? (ones that use sort) Usually arr2 is much smaller than arr1: the former's typical size is 10-20, while the latter's can be 1000-10000. Thank you in advance, D.

From fullung at gmail.com Tue May 29 08:56:05 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 29 May 2007 14:56:05 +0200 Subject: [Numpy-discussion] build problem on RHE3 machine References: <46571442.9020000@stsci.edu><004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za><20070525173716.GA25025@arbutus.physics.mcmaster.ca><465720BC.8090400@gmail.com> <20070525175020.GA25108@arbutus.physics.mcmaster.ca> Message-ID: <001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za> Hello all ----- Original Message ----- From: "David M. Cooke" To: "Discussion of Numerical Python" Sent: Friday, May 25, 2007 7:50 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine > On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote: >> David M. Cooke wrote: >>> On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: >>>> I'm still having problems on Windows with r3828. Build command: >>>> >>>> python setup.py -v config --compiler=msvc build_clib --compiler=msvc >>>> build_ext --compiler=msvc bdist_wininst >>> >>> Can you send me the output of >>> >>> python setup.py -v config_fc --help-fcompiler >>> >>> And what fortran compiler are you trying to use? >> >> If he's trying to build numpy, he shouldn't be using *any* Fortran >> compiler. > > Ah true. Still, config_fc will say it can't find one (and that should be > fine). > I think the bug has to do with how it searches for a compiler. I see there's been more work on numpy.distutils, but I still can't build r3841 on a "normal" Windows system with Visual Studio .NET 2003 installed. Is there any info I can provide to get this issue fixed? Thanks. Cheers, Albert

From openopt at ukr.net Tue May 29 09:14:44 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 29 May 2007 16:14:44 +0300 Subject: [Numpy-discussion] howto exclude elements from sorted array which are present in other sorted array In-Reply-To: <465C14CD.3090306@ukr.net> References: <465C14CD.3090306@ukr.net> Message-ID: <465C2744.2050204@ukr.net> Thanks all, I have solved the problem. dmitrey wrote: > Hi all, > I have 2 sorted arrays arr1 & arr2 (ascending order, for example > [1,2,5,8] and [3,5]); the lengths may be in the thousands, so I need > the simplest way that is not costly. > How to obtain arr3 that consists of the elements present in arr1 but absent > in arr2? > Of course, I can write something myself, like > arr3 = [x for x in arr1 if x not in arr2] > but maybe routines for this are already present in numpy? (ones that use sort) > > Usually arr2 is much smaller than arr1: the former's typical size is 10-20, > while the latter's can be 1000-10000. > > Thank you in advance, D.
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > >

From charlesr.harris at gmail.com Tue May 29 09:17:01 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 May 2007 07:17:01 -0600 Subject: [Numpy-discussion] howto exclude elements from sorted array which are present in other sorted array In-Reply-To: <465C14CD.3090306@ukr.net> References: <465C14CD.3090306@ukr.net> Message-ID: On 5/29/07, dmitrey wrote: > > Hi all, > I have 2 sorted arrays arr1 & arr2 (ascending order, for example > [1,2,5,8] and [3,5]); the lengths may be in the thousands, so I need > the simplest way that is not costly. > How to obtain arr3 that consists of the elements present in arr1 but absent > in arr2? > Of course, I can write something myself, like > arr3 = [x for x in arr1 if x not in arr2] > but maybe routines for this are already present in numpy? (ones that use sort) > > Usually arr2 is much smaller than arr1: the former's typical size is 10-20, > while the latter's can be 1000-10000.

count = arr2.searchsorted(arr1, side='right') - arr2.searchsorted(arr1, side='left') arr3 = arr1[count == 0] Chuck

From cookedm at physics.mcmaster.ca Tue May 29 12:58:59 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 May 2007 12:58:59 -0400 Subject: [Numpy-discussion] build problem on RHE3 machine In-Reply-To: <001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za> References: <46571442.9020000@stsci.edu><004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za><20070525173716.GA25025@arbutus.physics.mcmaster.ca><465720BC.8090400@gmail.com> <20070525175020.GA25108@arbutus.physics.mcmaster.ca> <001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za> Message-ID: <53FB9E6D-1F05-4726-897D-70A4DC07DCBE@physics.mcmaster.ca> On May 29, 2007, at 08:56 , Albert Strasheim wrote: > Hello all > > ----- Original Message ----- > From: "David M. Cooke" > To: "Discussion of Numerical Python" > Sent: Friday, May 25, 2007 7:50 PM > Subject: Re: [Numpy-discussion] build problem on RHE3 machine > > >> On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote: >>> David M. Cooke wrote: >>>> On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: >>>>> I'm still having problems on Windows with r3828. Build command: >>>>> >>>>> python setup.py -v config --compiler=msvc build_clib -- >>>>> compiler=msvc >>>>> build_ext --compiler=msvc bdist_wininst >>>> >>>> Can you send me the output of >>>> >>>> python setup.py -v config_fc --help-fcompiler >>>> >>>> And what fortran compiler are you trying to use? >>> >>> If he's trying to build numpy, he shouldn't be using *any* Fortran >>> compiler. >> >> Ah true. Still, config_fc will say it can't find one (and that >> should be >> fine). >> I think the bug has to do with how it searches for a compiler. > > I see there's been more work on numpy.distutils, but I still can't > build > r3841 on a "normal" Windows system with Visual Studio .NET 2003 > installed. > > Is there any info I can provide to get this issue fixed? Anything you've got :) The output of these are hopefully useful to me (after removing build/): $ python setup.py -v build $ python setup.py -v config_fc --help-fcompiler -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From faltet at carabos.com Tue May 29 13:15:16 2007 From: faltet at carabos.com (Francesc Altet) Date: Tue, 29 May 2007 19:15:16 +0200 Subject: [Numpy-discussion] ANN: PyTables 2.0rc2 released Message-ID: <1180458916.2593.2.camel@localhost.localdomain> ============================ Announcing PyTables 2.0rc2 ============================ PyTables is a library for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data with support for full 64-bit file addressing. PyTables runs on top of the HDF5 library and NumPy package for achieving maximum throughput and convenient use. This is the second (and probably last) release candidate for PyTables 2.0. On it, together with the traditional bunch of bug fixes, you will find a handful of optimizations for dealing with very large tables. Also, the "Optimization tips" chapter of User's Guide has been updated and the manual is almost ready (bar some errors or typos we may have introduced) for the long awaited 2.0 final release. In particular, the "Indexed searches" section shows pretty definitive plots on the performance of the completely new and innovative indexing engine that will be available in the Pro version (to be released very soon now). You can download a source package of the version 2.0rc2 with generated PDF and HTML docs and binaries for Windows from http://www.pytables.org/download/preliminary/ For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.0rc2 In case you want to know more in detail what has changed in this version, have a look at ``RELEASE_NOTES.txt``. Find the HTML version for this document at: http://www.pytables.org/moin/ReleaseNotes/Release_2.0rc2 If you are a user of PyTables 1.x, probably it is worth for you to look at ``MIGRATING_TO_2.x.txt`` file where you will find directions on how to migrate your existing PyTables 1.x apps to the 2.0 version. You can find an HTML version of this document at http://www.pytables.org/moin/ReleaseNotes/Migrating_To_2.x Keep reading for an overview of the most prominent improvements in PyTables 2.0 series. New features of PyTables 2.0 ============================ - A complete refactoring of many, many modules in PyTables. With this, the different parts of the code are much better integrated and code redundancy is kept under a minimum. A lot of new optimizations have been included as well, making working with it a smoother experience than ever before. - NumPy is finally at the core! That means that PyTables no longer needs numarray in order to operate, although it continues to be supported (as well as Numeric). This also means that you should be able to run PyTables in scenarios combining Python 2.5 and 64-bit platforms (these are a source of problems with numarray/Numeric because they don't support this combination as of this writing). - Most of the operations in PyTables have experimented noticeable speed-ups (sometimes up to 2x, like in regular Python table selections). This is a consequence of both using NumPy internally and a considerable effort in terms of refactorization and optimization of the new code. - Combined conditions are finally supported for in-kernel selections. 
So, now it is possible to perform complex selections like:: result = [ row['var3'] for row in table.where('(var2 < 20) | (var1 == "sas")') ] or:: complex_cond = '((%s <= col5) & (col2 <= %s)) ' \ '| (sqrt(col1 + 3.1*col2 + col3*col4) > 3)' result = [ row['var3'] for row in table.where(complex_cond % (inf, sup)) ] and run them at full C-speed (or perhaps more, due to the cache-tuned computing kernel of Numexpr, which has been integrated into PyTables). - Now, it is possible to get fields of the ``Row`` iterator by specifying their position, or even ranges of positions (extended slicing is supported). For example, you can do:: result = [ row[4] for row in table # fetch field #4 if row[1] < 20 ] result = [ row[:] for row in table # fetch all fields if row['var2'] < 20 ] result = [ row[1::2] for row in # fetch odd fields table.iterrows(2, 3000, 3) ] in addition to the classical:: result = [row['var3'] for row in table.where('var2 < 20')] - ``Row`` has received a new method called ``fetch_all_fields()`` in order to easily retrieve all the fields of a row in situations like:: [row.fetch_all_fields() for row in table.where('column1 < 0.3')] The difference between ``row[:]`` and ``row.fetch_all_fields()`` is that the former will return all the fields as a tuple, while the latter will return the fields in a NumPy void type and should be faster. Choose whatever fits better to your needs. - Now, all data that is read from disk is converted, if necessary, to the native byteorder of the hosting machine (before, this only happened with ``Table`` objects). This should help to accelerate applications that have to do computations with data generated in platforms with a byteorder different than the user machine. - The modification of values in ``*Array`` objects (through __setitem__) now doesn't make a copy of the value in the case that the shape of the value passed is the same as the slice to be overwritten. This results in considerable memory savings when you are modifying disk objects with big array values. - All leaf constructors (except for ``Array``) have received a new ``chunkshape`` argument that lets the user explicitly select the chunksizes for the underlying HDF5 datasets (only for advanced users). - All leaf constructors have received a new parameter called ``byteorder`` that lets the user specify the byteorder of their data *on disk*. This effectively allows to create datasets in other byteorders than the native platform. - Native HDF5 datasets with ``H5T_ARRAY`` datatypes are fully supported for reading now. - The test suites for the different packages are installed now, so you don't need a copy of the PyTables sources to run the tests. Besides, you can run the test suite from the Python console by using:: >>> tables.tests() Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdfgroup.org/HDF5/ About NumPy: http://numpy.scipy.org/ To know more about the company behind the development of PyTables, see: http://www.carabos.com/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for a (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last, but not least thanks a lot to the HDF5 and NumPy (and numarray!) makers. Without them PyTables simply would not exist. 
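To make the in-kernel query machinery described above concrete, here is a minimal end-to-end sketch (the file, table, and column names are made up for illustration; openFile/createTable are the 2.0-series spellings):

  import tables

  class Point(tables.IsDescription):
      col1 = tables.Float64Col()
      col2 = tables.Float64Col()

  h5f = tables.openFile("demo.h5", mode="w")      # PyTables 2.0-era API
  table = h5f.createTable("/", "points", Point)
  row = table.row
  for i in range(1000):
      row['col1'] = i * 0.01
      row['col2'] = 1000.0 - i
      row.append()
  table.flush()

  # A combined condition, evaluated at C speed by the in-kernel engine.
  result = [r['col1'] for r in table.where('(col1 < 2.0) & (col2 > 500.0)')]
  h5f.close()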
Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth

From haley at ucar.edu Tue May 29 13:50:26 2007 From: haley at ucar.edu (Mary Haley) Date: Tue, 29 May 2007 11:50:26 -0600 (MDT) Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system Message-ID: Hi all, I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit version) with no luck. I'm using Python 2.4.4. I've tried setting CC to both the default "cc" compiler (which I built python with), and "gcc", with no luck. I tried c89 and c99, still with no luck. I get errors like: cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. failure. removing: _configtest.c _configtest.o I've tried building other versions of numpy over the last couple of years and have never been successful. I've actually been using Numeric on that machine, and using numpy everywhere else. Has anybody been successful with this, and if so, what compile options did you use? Thanks, --Mary

From charlesr.harris at gmail.com Tue May 29 13:58:43 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 May 2007 11:58:43 -0600 Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system In-Reply-To: References: Message-ID: On 5/29/07, Mary Haley wrote: > > > Hi all, > > I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit > version) with no luck. I'm using Python 2.4.4. There have been other reports of problems on IRIX 6.5. What we need is someone running on an SGI box and willing to help us get things working. > I've tried setting CC to both the default "cc" compiler (which > I built python with), and "gcc", with no luck. I tried c89 and c99, > still with no luck. I get errors like: > > cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest > ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. Looks like ld is trying to link a 64bit program to a 32bit library. I wonder what crt1 is? Is there a 64bit version? How are the 64 and 32 bit libraries in IRIX distinguished? Chuck

From erin.sheldon at gmail.com Tue May 29 14:17:14 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 29 May 2007 14:17:14 -0400 Subject: [Numpy-discussion] byteswap() leaves dtype unchanged Message-ID: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> Hi all - I have read some big-endian data and I want to byte swap it to little endian. If I use a.byteswap(True) the bytes clearly get swapped, but the dtype is not updated to reflect the new data type. e.g. [~]|1> a=N.array([2.5, 3.2]) [~]|2> a.dtype.descr <2> [('', '<f8')] [~]|3> a.byteswap(True) <3> array([ 5.37543423e-321, -1.54234871e-180]) [~]|4> a.dtype.descr <4> [('', '<f8')] I expected the dtype to be changed so that the print of the array would look the same as before, and mathematical operations would work properly. This can be done with: a.dtype = a.dtype.newbyteorder() Should this not be performed by byteswap()?

From haley at ucar.edu Tue May 29 2007 From: haley at ucar.edu (Mary Haley) Subject: Re: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system In-Reply-To: References: Message-ID: On Tue, 29 May 2007, Charles R Harris wrote: > On 5/29/07, Mary Haley wrote: >> >> >> Hi all, >> >> I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit >> version) with no luck. I'm using Python 2.4.4. > > > There have been other reports of problems on IRIX 6.5.
What we need is > someone running on an SGI box and willing to help us get things working. Chuck, I will be willing to help here. Unfortunately I can't give people logins to this system, but I will be willing to try a bunch of things. > I've tried setting CC to both the default "cc" compiler (which >> I built python with), and "gcc", with no luck. I tried c89 and c99, >> still with no luck. I get errors like: >> >> cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > > > Looks like ld is trying to link a 64bit program to a 32bit library. I wonder > what crt1 is? Is there a 64bit version? How are the 64 and 32 bit libraries > in IRIX distinguished? I was wondering about this /usr/lib/crt1.o myself. Generally when you see this error message, it means that some of the code is getting compiled in 64-bit mode (using the -64 option) and some is being compiled in 32-bit mode (using -32 or -n32). You can't mix these two. This can also happen, I think, if compilers are mixed. That is, if you try to use "cc -64" for some code, and then "gcc" for other code, as the "gcc" compiler doesn't recognize the -64 or -n32 options. Can I build NumPy a certain way for you that might give you more debug information to go on? --Mary > Chuck > From fullung at gmail.com Tue May 29 16:22:21 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 29 May 2007 22:22:21 +0200 Subject: [Numpy-discussion] build problem on RHE3 machine References: <46571442.9020000@stsci.edu><004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za><20070525173716.GA25025@arbutus.physics.mcmaster.ca><465720BC.8090400@gmail.com> <20070525175020.GA25108@arbutus.physics.mcmaster.ca> <001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za> <53FB9E6D-1F05-4726-897D-70A4DC07DCBE@physics.mcmaster.ca> Message-ID: <00ab01c7a22f$0ff36340$0100a8c0@sun.ac.za> Hello all ----- Original Message ----- From: "David M. Cooke" To: "Discussion of Numerical Python" Cc: "Albert Strasheim" Sent: Tuesday, May 29, 2007 6:58 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine >> Is there any info I can provide to get this issue fixed? > > Anything you've got :) The output of these are hopefully useful to me > (after removing build/): > > $ python setup.py -v build > $ python setup.py -v config_fc --help-fcompiler Attached as build1.log and build2.log. Cheers, Albert -------------- next part -------------- A non-text attachment was scrubbed... Name: build2.log Type: application/octet-stream Size: 2795 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build1.log Type: application/octet-stream Size: 4478 bytes Desc: not available URL: From charlesr.harris at gmail.com Tue May 29 16:33:29 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 May 2007 14:33:29 -0600 Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system In-Reply-To: References: Message-ID: On 5/29/07, Mary Haley wrote: > > > On Tue, 29 May 2007, Charles R Harris wrote: > > > On 5/29/07, Mary Haley wrote: > >> > >> > >> Hi all, > >> > >> I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit > >> version) with no luck. I'm using Python 2.4.4. > > > > > > There have been other reports of problems on IRIX 6.5. What we need is > > someone running on an SGI box and willing to help us get things working. > > Chuck, > > I will be willing to help here. 
Unfortunately I can't give people logins > to this system, but I will be willing to try a bunch of things. > > > I've tried setting CC to both the default "cc" compiler (which > >> I built python with), and "gcc", with no luck. I tried c89 and c99, > >> still with no luck. I get errors like: > >> > >> cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest > >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > > > > > > Looks like ld is trying to link a 64bit program to a 32bit library. I > wonder > > what crt1 is? Is there a 64bit version? How are the 64 and 32 bit > libraries > > in IRIX distinguished? > > I was wondering about this /usr/lib/crt1.o myself. Generally when you > see this error message, it means that some of the code is getting > compiled in 64-bit mode (using the -64 option) and some is being > compiled in 32-bit mode (using -32 or -n32). You can't mix these > two. > > This can also happen, I think, if compilers are mixed. That is, > if you try to use "cc -64" for some code, and then "gcc" for > other code, as the "gcc" compiler doesn't recognize the -64 or > -n32 options. I believe the options are -m32 and -m64. Do gcc -v to see what the target is. Can I build NumPy a certain way for you that might give you > more debug information to go on? I'm waiting for one of the build gurus to check in, it's not my area of expertise. Are the 32 and 64 bit libraries in IRIX put in different places? On linux, for instance, the (non-debian) standard is 32 bit in the usual location, 64 bit in lib64. You can control where numpy looks for libraries by editing the site.cfg file at the top of the numpy directory. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 29 16:41:44 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 May 2007 14:41:44 -0600 Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system In-Reply-To: References: Message-ID: On 5/29/07, Mary Haley wrote: > > > On Tue, 29 May 2007, Charles R Harris wrote: > > > On 5/29/07, Mary Haley wrote: > >> > >> > >> Hi all, > >> > >> I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit > >> version) with no luck. I'm using Python 2.4.4. > > > > > > There have been other reports of problems on IRIX 6.5. What we need is > > someone running on an SGI box and willing to help us get things working. > > Chuck, > > I will be willing to help here. Unfortunately I can't give people logins > to this system, but I will be willing to try a bunch of things. > > > I've tried setting CC to both the default "cc" compiler (which > >> I built python with), and "gcc", with no luck. I tried c89 and c99, > >> still with no luck. I get errors like: > >> > >> cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest > >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. > > > > > > Looks like ld is trying to link a 64bit program to a 32bit library. I > wonder > > what crt1 is? Is there a 64bit version? How are the 64 and 32 bit > libraries > > in IRIX distinguished? > > I was wondering about this /usr/lib/crt1.o myself. Google yields: ... *crt1.o* is the "c run time object" which gets prepended to your files when you compile them so they can run when you type their names if you have the permissions set correctly. 
I am not sure why this would show up in numpy as numpy is compiling to a module Hmmm. Chuck. -------------- next part -------------- An HTML attachment was scrubbed... URL: From haley at ucar.edu Tue May 29 17:16:55 2007 From: haley at ucar.edu (Mary Haley) Date: Tue, 29 May 2007 15:16:55 -0600 (MDT) Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system In-Reply-To: References: Message-ID: On Tue, 29 May 2007, Charles R Harris wrote: > On 5/29/07, Mary Haley wrote: >> >> >> On Tue, 29 May 2007, Charles R Harris wrote: >> >> > On 5/29/07, Mary Haley wrote: >> >> >> >> >> >> Hi all, >> >> >> >> I have been trying to build numpy 1.0.3 on my SGI/IRIX64 box (64-bit >> >> version) with no luck. I'm using Python 2.4.4. >> > >> > >> > There have been other reports of problems on IRIX 6.5. What we need is >> > someone running on an SGI box and willing to help us get things working. >> >> Chuck, >> >> I will be willing to help here. Unfortunately I can't give people logins >> to this system, but I will be willing to try a bunch of things. >> >> > I've tried setting CC to both the default "cc" compiler (which >> >> I built python with), and "gcc", with no luck. I tried c89 and c99, >> >> still with no luck. I get errors like: >> >> >> >> cc -64 _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest >> >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. >> >> ld64: FATAL 12 : Expecting n64 objects: /usr/lib/crt1.o is o32. >> > >> > >> > Looks like ld is trying to link a 64bit program to a 32bit library. I >> wonder >> > what crt1 is? Is there a 64bit version? How are the 64 and 32 bit >> libraries >> > in IRIX distinguished? >> >> I was wondering about this /usr/lib/crt1.o myself. Generally when you >> see this error message, it means that some of the code is getting >> compiled in 64-bit mode (using the -64 option) and some is being >> compiled in 32-bit mode (using -32 or -n32). You can't mix these >> two. >> >> This can also happen, I think, if compilers are mixed. That is, >> if you try to use "cc -64" for some code, and then "gcc" for >> other code, as the "gcc" compiler doesn't recognize the -64 or >> -n32 options. > > > I believe the options are -m32 and -m64. Do gcc -v to see what the target > is. Sorry, what I meant to say is I'm not sure if these are compatible with "cc". I will give this a try, although I prefer sticking with "cc" because this compiler is kept more up-to-date. > Can I build NumPy a certain way for you that might give you >> more debug information to go on? > > > I'm waiting for one of the build gurus to check in, it's not my area of > expertise. Are the 32 and 64 bit libraries in IRIX put in different places? > On linux, for instance, the (non-debian) standard is 32 bit in the usual > location, 64 bit in lib64. You can control where numpy looks for libraries > by editing the site.cfg file at the top of the numpy directory. > This is the same for IRIX. I looked at the site.cfg file and only saw two lines. I'll see if I can use it to help it look for 64-bit libraries. 
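For reference, site.cfg takes INI-style sections; entries along the following lines would steer the build toward 64-bit libraries (the paths here are illustrative only, and multiple directories are joined with os.pathsep, ':' on IRIX):

  [DEFAULT]
  library_dirs = /usr/lib64:/usr/local/lib64
  include_dirs = /usr/include:/usr/local/include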
--Mary > Chuck > From rowen at cesmail.net Tue May 29 18:03:44 2007 From: rowen at cesmail.net (Russell E Owen) Date: Tue, 29 May 2007 15:03:44 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X References: <4654CEE6.9000703@ieee.org> <4655B9BA.3050301@noaa.gov> <4655C119.6020208@noaa.gov> <4655F3A2.5080004@noaa.gov> <46560757.2000005@noaa.gov> Message-ID: In article <46560757.2000005 at noaa.gov>, David L Goldsmith wrote: > Hold on again, I think I did it: it works on my BSN Intel Mac and Chris > is about to test it on his not-so-new PPC Mac. Assuming I built a > viable product, how do I put it in the right place (i.e., @ > http://pythonmac.org/packages/py25-fat/index.html)? Thanks! Put it on a server and send the link to Bob Ippolito: bob (insert at here) redivi (insert dot here) com Two other suggestions: - Include the date in your filename. That way if you have to modify the installer users can tell there's been a change. - Make it a .dmg file (e.g. by running Disk Utility and drag it onto the icon in the dock). I've found a few users have trouble with zip files. Thank you for doing this. -- Russell From David.L.Goldsmith at noaa.gov Tue May 29 18:16:20 2007 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Tue, 29 May 2007 15:16:20 -0700 Subject: [Numpy-discussion] NumPy 1.0.3 for OS-X In-Reply-To: References: <4654CEE6.9000703@ieee.org> <4655B9BA.3050301@noaa.gov> <4655C119.6020208@noaa.gov> <4655F3A2.5080004@noaa.gov> <46560757.2000005@noaa.gov> Message-ID: <465CA634.9030209@noaa.gov> Thanks, sent it to Bob over the weekend; sorry, I'm pretty sure I didn't include the date in the filename, and Bob specifically requested a zip (after I sent him a mkpg, I believe), so that's what I sent him - Bob, do you need me to repackage and send again? DG Russell E Owen wrote: > In article <46560757.2000005 at noaa.gov>, > David L Goldsmith wrote: > > >> Hold on again, I think I did it: it works on my BSN Intel Mac and Chris >> is about to test it on his not-so-new PPC Mac. Assuming I built a >> viable product, how do I put it in the right place (i.e., @ >> http://pythonmac.org/packages/py25-fat/index.html)? Thanks! >> > > Put it on a server and send the link to Bob Ippolito: bob (insert at > here) redivi (insert dot here) com > > Two other suggestions: > - Include the date in your filename. That way if you have to modify the > installer users can tell there's been a change. > - Make it a .dmg file (e.g. by running Disk Utility and drag it onto the > icon in the dock). I've found a few users have trouble with zip files. > > Thank you for doing this. > > -- Russell > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- ERD/ORR/NOS/NOAA From ryanlists at gmail.com Tue May 29 21:17:25 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 29 May 2007 20:17:25 -0500 Subject: [Numpy-discussion] Vista installer? In-Reply-To: <465C733C.4060607@astraw.com> References: <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> <465BAC5B.9030001@ar.media.kyoto-u.ac.jp> <465BE8A6.7080909@ar.media.kyoto-u.ac.jp> <465C733C.4060607@astraw.com> Message-ID: Success. I think this is officially a viable alternative for running Scipy/Numpy/Matplotlib/IPython in Vista. I turned off my firewall on my host (XP) and I was able to browse the internet in Ubuntu. Andrew was right that I still couldn't ping, but I seem to have everything else. 
And with shared folders enabled I seem to have some default SMB stuff set up. I couldn't successfully browse the shares of my partitions (C$, E$, ...), but I did specifically share a folder (from windows) and editted a Python file in that folder from within VMware. The available shared folders showed up under Places > Network as I should have expected for Ubuntu SMB shares. Another of my students supposedly has everything working in Vista with no problems. I will have to check that out and report back. Ryan On 5/29/07, Andrew Straw wrote: > Ryan Krauss wrote: > > Thanks for your help in getting me up and going with VMWare. Andrew, > > your virtual appiance seems to work quite well. > > > > I have two hurdles left (I hope there are only 2). The first is still > > networking. I have NAT chosen in the VMWare player and that seems to > > give me limited connectivity. I seem to be able to use apt but can't > > ping or browse the internet. Any thoughts on how to trouble shoot > > this? > I'm guessing that the firewall on the host computer is preventing a lot > of this. I'm not sure I'd expect ping to work ever over NAT, as that's a > UDP thing, which is stateless and thus my guess is that NAT doesn't know > where to direct the incoming packet. However, I think you should be able > to browse, and the fact that apt works but browsing doesn't is certainly > odd, since the standard apt repositories are simply web servers. > > You can try bridged mode. That's usually what I do, but I figured it was > less likely to work than NAT (because it would require 2 external IP > addresses, and I'm not sure how your network is configured), so although > it allows full connectivity, including UDP, I chose NAT for that image. > > > > My second hurdle is how to move files between the host and guest OS. > > I see a shared folders option in the player, but don't seem to be able > > to add to the list of folders. My students need to be able to import > > python modules I give them and then submit files they would be > > developing within the guest OS. I googled for vmware shared folders > > and it seems like a non-free option. > Hmm, I haven't done any sharing through VMware server. On my mac, I have > using VMware fusion, but generally found that using SSH was more > reliable. So, maybe you can have them SSH (well, scp, perhaps using > WinSCP on the Windows side) between their two virtual computers? > > > > I haven't found a good "VMWare for dummies" or getting started guide, > > so feel free to redirect me. > VMware's docs are pretty good but necessarily get indexed by Google -- > make sure you browse their website. > > Good luck, > Andrew > > > > Thanks, > > > > Ryan > > > > On 5/29/07, David Cournapeau wrote: > >> Ryan Krauss wrote: > >> > Hey David, > >> > > >> > Thanks for all your help. I run Ubuntu stand alone on my dual boot > >> > machines and have no network issues (actually, the computer running > >> > VMWare is dual boot). So, I think my network issues are VMWare > >> > specific. I don't know how it handles the hardware. > >> In this case, it should be even easier. In ethernet options of vmware, > >> you should choose NAT, and then vmware will have a dhcp server on the > >> HOST which will provide an address by dhcp to your guest. > >> > >> David > >> > > From ryanlists at gmail.com Tue May 29 21:19:21 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 29 May 2007 20:19:21 -0500 Subject: [Numpy-discussion] Vista installer? 
In-Reply-To:
References: <465BA6DA.6030700@ar.media.kyoto-u.ac.jp> <465BAC5B.9030001@ar.media.kyoto-u.ac.jp> <465BE8A6.7080909@ar.media.kyoto-u.ac.jp> <465C733C.4060607@astraw.com>
Message-ID:

Sorry, I should have mentioned that the student who got Numpy running
in Vista said the key was to right-click on the exe and choose "Run as
Administrator".  He said that was all there was to it (he also said
the Python-2.5 msi just installed with no problems).  If anyone can
confirm that this works, please let me know.

Ryan

On 5/29/07, Ryan Krauss  wrote:
> Success.  I think this is officially a viable alternative for running
> Scipy/Numpy/Matplotlib/IPython in Vista.  I turned off my firewall on
> my host (XP) and I was able to browse the internet in Ubuntu.  Andrew
> was right that I still couldn't ping, but I seem to have everything
> else.  And with shared folders enabled I seem to have some default SMB
> stuff set up.  I couldn't successfully browse the shares of my
> partitions (C$, E$, ...), but I did specifically share a folder (from
> windows) and edited a Python file in that folder from within VMware.
> The available shared folders showed up under Places > Network as I
> should have expected for Ubuntu SMB shares.
>
> Another of my students supposedly has everything working in Vista with
> no problems.  I will have to check that out and report back.
>
> Ryan
>
> On 5/29/07, Andrew Straw  wrote:
> > Ryan Krauss wrote:
> > > Thanks for your help in getting me up and going with VMWare.  Andrew,
> > > your virtual appliance seems to work quite well.
> > >
> > > I have two hurdles left (I hope there are only 2).  The first is still
> > > networking.  I have NAT chosen in the VMWare player and that seems to
> > > give me limited connectivity.  I seem to be able to use apt but can't
> > > ping or browse the internet.  Any thoughts on how to troubleshoot
> > > this?
> > I'm guessing that the firewall on the host computer is preventing a lot
> > of this.  I'm not sure I'd expect ping to work ever over NAT, as that's an
> > ICMP thing, which is stateless and thus my guess is that NAT doesn't know
> > where to direct the incoming packet.  However, I think you should be able
> > to browse, and the fact that apt works but browsing doesn't is certainly
> > odd, since the standard apt repositories are simply web servers.
> >
> > You can try bridged mode.  That's usually what I do, but I figured it was
> > less likely to work than NAT (because it would require 2 external IP
> > addresses, and I'm not sure how your network is configured), so although
> > it allows full connectivity, including ICMP, I chose NAT for that image.
> >
> > > My second hurdle is how to move files between the host and guest OS.
> > > I see a shared folders option in the player, but don't seem to be able
> > > to add to the list of folders.  My students need to be able to import
> > > python modules I give them and then submit files they would be
> > > developing within the guest OS.  I googled for vmware shared folders
> > > and it seems like a non-free option.
> > Hmm, I haven't done any sharing through VMware Server.  On my mac, I have,
> > using VMware Fusion, but generally found that using SSH was more
> > reliable.  So, maybe you can have them SSH (well, scp, perhaps using
> > WinSCP on the Windows side) between their two virtual computers?
> >
> > > I haven't found a good "VMWare for dummies" or getting started guide,
> > > so feel free to redirect me.
> > VMware's docs are pretty good but don't necessarily get indexed by Google --
> > make sure you browse their website.
> >
> > Good luck,
> > Andrew
> >
> > > Thanks,
> > >
> > > Ryan
> > >
> > > On 5/29/07, David Cournapeau  wrote:
> > >> Ryan Krauss wrote:
> > >> > Hey David,
> > >> >
> > >> > Thanks for all your help.  I run Ubuntu stand-alone on my dual-boot
> > >> > machines and have no network issues (actually, the computer running
> > >> > VMWare is dual boot).  So, I think my network issues are VMWare
> > >> > specific.  I don't know how it handles the hardware.
> > >> In this case, it should be even easier.  In the ethernet options of vmware,
> > >> you should choose NAT, and then vmware will have a dhcp server on the
> > >> HOST which will provide an address by dhcp to your guest.
> > >>
> > >> David
> > >>
> >
From faltet at carabos.com  Wed May 30 03:42:49 2007
From: faltet at carabos.com (Francesc Altet)
Date: Wed, 30 May 2007 09:42:49 +0200
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com>
Message-ID: <1180510969.2889.4.camel@localhost.localdomain>

El dt 29 de 05 del 2007 a les 14:17 -0400, en/na Erin Sheldon va
escriure:
> Hi all -
>
> I have read some big-endian data and I want to byte swap it to little
> endian.  If I use
> a.byteswap(True)
> the bytes clearly get swapped, but the dtype is not updated to reflect
> the new data type.  e.g.
>
> [~]|1> a=N.array([2.5, 3.2])
> [~]|2> a.dtype.descr
> <2> [('', '<f8')]
> [~]|3> a.byteswap(True)
> <3> array([  5.37543423e-321,  -1.54234871e-180])
> [~]|4> a.dtype.descr
> <4> [('', '<f8')]
>
> I expected the dtype to be changed so that the print of the array
> would look the same as before, and mathematical operations would work
> properly.  This can be done with:
> a.dtype = a.dtype.newbyteorder()

Yes.  That's the canonical way to do what you want.

> Should this not be performed by byteswap()?

No.  From Travis' "Guide to NumPy":

"""
byteswap ({False})

Byteswap the elements of the array and return the byteswapped array. If
the argument is True, then byteswap in-place and return a reference to
self. Otherwise, return a copy of the array with the elements
byteswapped. The data-type descriptor is not changed so the array will
have changed numbers.
"""

Cheers,

--
Francesc Altet      |  Be careful about using the following code --
Carabos Coop. V.    |  I've only proven that it works,
www.carabos.com     |  I haven't tested it. -- Donald Knuth
From jl at dmi.dk  Wed May 30 06:02:14 2007
From: jl at dmi.dk (Jesper Larsen)
Date: Wed, 30 May 2007 12:02:14 +0200
Subject: [Numpy-discussion] corrcoef of masked array
In-Reply-To: <46571A72.6000903@gmail.com>
References: <200705251037.45071.jl@dmi.dk> <46571A72.6000903@gmail.com>
Message-ID: <200705301202.14652.jl@dmi.dk>

On Friday 25 May 2007 19:18, Robert Kern wrote:
> Jesper Larsen wrote:
> > Hi numpy users,
> >
> > I have a masked array of dimension (nvariables, nobservations) that
> > contains missing values at arbitrary points.  Is it safe to rely on
> > numpy.corrcoef to calculate the correlation coefficients of a masked
> > array (it seems to give reasonable results)?
>
> No, it isn't.  There are several different options for estimating
> correlations in the face of missing data, none of which are clearly
> superior to the others.  None of them are trivial.  None of them are
> implemented in numpy.
Thanks, my previous post was sent a bit too early since it became clear to me
by reading the code for corrcoef that it is not safe for use with masked
arrays.

Here is my solution for calculating the correlation coefficients for masked
arrays.  Comments are appreciated:

import numpy.core.ma as ma
from numpy import zeros, ones, corrcoef

def macorrcoef(data1, data2):
    """
    Calculates correlation coefficients taking masked out values
    into account.

    It is assumed (but not checked) that data1.shape == data2.shape.
    """
    nv, no = data1.shape
    cc = ma.array(zeros((nv, nv)), mask=ones((nv, nv)))
    if no > 1:
        for i in range(nv):
            for j in range(nv):
                m = ma.getmaskarray(data1[i,:]) | ma.getmaskarray(data2[j,:])
                d1 = ma.array(data1[i,:], copy=False, mask=m).compressed()
                d2 = ma.array(data2[j,:], copy=False, mask=m).compressed()
                if ma.count(d1) > 1:
                    c = corrcoef(d1, d2)
                    cc[i,j] = c[0,1]

    return cc

- Jesper
From fullung at gmail.com  Wed May 30 09:06:04 2007
From: fullung at gmail.com (Albert Strasheim)
Date: Wed, 30 May 2007 15:06:04 +0200
Subject: [Numpy-discussion] build problem on Windows (was: build problem on RHE3 machine)
References: <46571442.9020000@stsci.edu><004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za><20070525173716.GA25025@arbutus.physics.mcmaster.ca><465720BC.8090400@gmail.com><20070525175020.GA25108@arbutus.physics.mcmaster.ca><001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za><53FB9E6D-1F05-4726-897D-70A4DC07DCBE@physics.mcmaster.ca> <00ab01c7a22f$0ff36340$0100a8c0@sun.ac.za>
Message-ID: <003a01c7a2bb$479bded0$0100a8c0@sun.ac.za>

Hello

I took a quick look at the code, and it seems like new_fcompiler(...) throws
an error too soon if a Fortran compiler cannot be detected.

Instead, you might want to return some kind of NoneFCompiler that throws an
error if the build actually tries to compile Fortran code.

Cheers,

Albert

----- Original Message -----
From: "Albert Strasheim"
To: "Discussion of Numerical Python"
Sent: Tuesday, May 29, 2007 10:22 PM
Subject: Re: [Numpy-discussion] build problem on RHE3 machine

> Hello all
>
> ----- Original Message -----
> From: "David M. Cooke"
> To: "Discussion of Numerical Python"
> Cc: "Albert Strasheim"
> Sent: Tuesday, May 29, 2007 6:58 PM
> Subject: Re: [Numpy-discussion] build problem on RHE3 machine
>
>>> Is there any info I can provide to get this issue fixed?
>>
>> Anything you've got :) The output of these are hopefully useful to me
>> (after removing build/):
>>
>> $ python setup.py -v build
>> $ python setup.py -v config_fc --help-fcompiler
>
> Attached as build1.log and build2.log.
>
> Cheers,
>
> Albert
From erin.sheldon at gmail.com  Wed May 30 09:30:49 2007
From: erin.sheldon at gmail.com (Erin Sheldon)
Date: Wed, 30 May 2007 09:30:49 -0400
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <1180510969.2889.4.camel@localhost.localdomain>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain>
Message-ID: <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com>

On 5/30/07, Francesc Altet  wrote:
> El dt 29 de 05 del 2007 a les 14:17 -0400, en/na Erin Sheldon va
> escriure:
> > Hi all -
> >
> > I have read some big-endian data and I want to byte swap it to little
> > endian.  If I use
> > a.byteswap(True)
> > the bytes clearly get swapped, but the dtype is not updated to reflect
> > the new data type.  e.g.
> >
> > [~]|1> a=N.array([2.5, 3.2])
> > [~]|2> a.dtype.descr
> > <2> [('', '<f8')]
> > [~]|3> a.byteswap(True)
> > <3> array([  5.37543423e-321,  -1.54234871e-180])
> > [~]|4> a.dtype.descr
> > <4> [('', '<f8')]
> >
> > I expected the dtype to be changed so that the print of the array
> > would look the same as before, and mathematical operations would work
> > properly.  This can be done with:
> > a.dtype = a.dtype.newbyteorder()
>
> Yes.  That's the canonical way to do what you want.
>
> > Should this not be performed by byteswap()?
>
> No.  From Travis' "Guide to NumPy":
>
> """
> byteswap ({False})
>
> Byteswap the elements of the array and return the byteswapped array. If
> the argument is True, then byteswap in-place and return a reference to
> self. Otherwise, return a copy of the array with the elements
> byteswapped. The data-type descriptor is not changed so the array will
> have changed numbers.
> """

This doesn't make any sense.  The numbers have changed but the dtype
is now incorrect.
If you byteswap and correct the dtype the numbers
have still changed, but you now can actually use the object.

By "numbers have still changed" I mean the underlying byte order is
still different, but you can now use the object for mathematical
operations.  My point is that the metadata for the object should be
correct after using its built-in methods.

>
> Cheers,
>
> --
> Francesc Altet      |  Be careful about using the following code --
> Carabos Coop. V.    |  I've only proven that it works,
> www.carabos.com     |  I haven't tested it. -- Donald Knuth
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
From matthieu.brucher at gmail.com  Wed May 30 09:39:44 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 30 May 2007 15:39:44 +0200
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain> <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com>
Message-ID:

Suppose you get data, but it is byte-swapped.  Calling this method would
yield a correct array.  If the type is changed too, it would not be
useful :(

Matthieu

2007/5/30, Erin Sheldon :
> On 5/30/07, Francesc Altet  wrote:
> > El dt 29 de 05 del 2007 a les 14:17 -0400, en/na Erin Sheldon va
> > escriure:
> > > Hi all -
> > >
> > > I have read some big-endian data and I want to byte swap it to little
> > > endian.  If I use
> > > a.byteswap(True)
> > > the bytes clearly get swapped, but the dtype is not updated to reflect
> > > the new data type.  e.g.
> > >
> > > [~]|1> a=N.array([2.5, 3.2])
> > > [~]|2> a.dtype.descr
> > > <2> [('', '<f8')]
> > > [~]|3> a.byteswap(True)
> > > <3> array([  5.37543423e-321,  -1.54234871e-180])
> > > [~]|4> a.dtype.descr
> > > <4> [('', '<f8')]
> > >
> > > I expected the dtype to be changed so that the print of the array
> > > would look the same as before, and mathematical operations would work
> > > properly.  This can be done with:
> > > a.dtype = a.dtype.newbyteorder()
> >
> > Yes.  That's the canonical way to do what you want.
> >
> > > Should this not be performed by byteswap()?
> >
> > No.  From Travis' "Guide to NumPy":
> >
> > """
> > byteswap ({False})
> >
> > Byteswap the elements of the array and return the byteswapped array. If
> > the argument is True, then byteswap in-place and return a reference to
> > self. Otherwise, return a copy of the array with the elements
> > byteswapped. The data-type descriptor is not changed so the array will
> > have changed numbers.
> > """
>
> This doesn't make any sense.  The numbers have changed but the dtype
> is now incorrect.  If you byteswap and correct the dtype the numbers
> have still changed, but you now can actually use the object.
>
> > Cheers,
> >
> > --
> > Francesc Altet      |  Be careful about using the following code --
> > Carabos Coop. V.    |  I've only proven that it works,
> > www.carabos.com     |  I haven't tested it. -- Donald Knuth
> >
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From matthew.brett at gmail.com  Wed May 30 09:57:19 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 30 May 2007 14:57:19 +0100
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <331116dc0705300633h22f8c0f2xc93e8244a90b418b@mail.gmail.com>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain> <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com> <331116dc0705300633h22f8c0f2xc93e8244a90b418b@mail.gmail.com>
Message-ID: <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>

Hi,

> > This doesn't make any sense.  The numbers have changed but the dtype
> > is now incorrect.  If you byteswap and correct the dtype the numbers
> > have still changed, but you now can actually use the object.
>
> By "numbers have still changed" I mean the underlying byte order is
> still different, but you can now use the object for mathematical
> operations.  My point is that the metadata for the object should be
> correct after using its built-in methods.

I think the point is that you can have several different situations
with byte ordering:

1) Your data and dtype endianness match, but you want the data swapped
and the dtype to reflect this

2) Your data and dtype endianness don't match, and you want to swap the
data so that they match the dtype

3) Your data and dtype endianness don't match, and you want to change
the dtype so that it matches the data.

I guess situation 1 is the one you have, and the way to deal with this
is, as you say:

other_order_arr = one_order_arr.byteswap()
other_order_arr.dtype = other_order_arr.dtype.newbyteorder()

The byteswap method is obviously aimed at situation 2 (the data and
dtype don't match in endianness, and you want to swap the data).

I can see your point I think, that situation 1 seems to be the more
common and obvious, and coming at it from outside, you would have
thought that a.byteswap would change both.

Best,

Matthew
From erin.sheldon at gmail.com  Wed May 30 10:21:03 2007
From: erin.sheldon at gmail.com (Erin Sheldon)
Date: Wed, 30 May 2007 10:21:03 -0400
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain> <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com> <331116dc0705300633h22f8c0f2xc93e8244a90b418b@mail.gmail.com> <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>
Message-ID: <331116dc0705300721k35fc045i8d5e8b7cc9be92b5@mail.gmail.com>

Matthew, this is a very clear description of all the issues, and I now
see why it can be useful to keep the two methods separate.  I think an
update to the doc string for byteswap() with this description would be
useful.  Or perhaps a keyword to byteswap() in which one could specify
the behavior regarding the data type.

Erin

On 5/30/07, Matthew Brett  wrote:
> Hi,
>
> > > > This doesn't make any sense.  The numbers have changed but the dtype
> > > > is now incorrect.
> > > If you byteswap and correct the dtype the numbers
> > > have still changed, but you now can actually use the object.
> >
> > By "numbers have still changed" I mean the underlying byte order is
> > still different, but you can now use the object for mathematical
> > operations.  My point is that the metadata for the object should be
> > correct after using its built-in methods.
>
> I think the point is that you can have several different situations
> with byte ordering:
>
> 1) Your data and dtype endianness match, but you want the data swapped
> and the dtype to reflect this
>
> 2) Your data and dtype endianness don't match, and you want to swap the
> data so that they match the dtype
>
> 3) Your data and dtype endianness don't match, and you want to change
> the dtype so that it matches the data.
>
> I guess situation 1 is the one you have, and the way to deal with this
> is, as you say:
>
> other_order_arr = one_order_arr.byteswap()
> other_order_arr.dtype = other_order_arr.dtype.newbyteorder()
>
> The byteswap method is obviously aimed at situation 2 (the data and
> dtype don't match in endianness, and you want to swap the data).
>
> I can see your point I think, that situation 1 seems to be the more
> common and obvious, and coming at it from outside, you would have
> thought that a.byteswap would change both.
>
> Best,
>
> Matthew
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
From chanley at stsci.edu  Wed May 30 11:18:53 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Wed, 30 May 2007 11:18:53 -0400
Subject: [Numpy-discussion] Problem with numpy-1.0.3.tar.gz file
Message-ID: <465D95DD.6050404@stsci.edu>

Hi,

Could someone please re-create the numpy-1.0.3.tar.gz file that is
currently being distributed from sourceforge?
That tar file includes
the following:

/data/sparty1/dev/tmp/numpy-1.0.3
sparty> ls -al
total 44
drwxr-sr-x   3 chanley science  4096 May 23 18:30 ./
drwxr-sr-x   4 chanley science  4096 May 30 11:06 ../
-rw-r--r--   1 chanley science   541 Feb  7 23:12 DEV_README.txt
-rw-r--r--   1 chanley science  1537 Jul 26  2006 LICENSE.txt
-rw-r--r--   1 chanley science   246 Jul 26  2006 MANIFEST.in
drwxr-sr-x  14 chanley science  4096 May 23 18:30 numpy/
-rw-r--r--   1 chanley science  1511 May 23 18:30 PKG-INFO
-rw-r--r--   1 chanley science   546 Apr  5 20:56 README.txt
-rwxr-xr-x   1 chanley science  3042 Jul 26  2006 setup.py*
-rw-r--r--   1 chanley science    71 Aug 22  2006 site.cfg
-rw-r--r--   1 chanley science  2072 Mar 31 03:41 THANKS.txt

However, if you look at the 1.0.3 tag in subversion the following files
are included:

/data/sparty1/dev/tmp/numpy_1.0.3_svn_tag
sparty> ll
total 56
drwxr-sr-x  3 chanley science 4096 May 30 11:07 benchmarks/
-rw-r--r--  1 chanley science 1620 May 30 11:07 COMPATIBILITY
-rw-r--r--  1 chanley science  541 May 30 11:07 DEV_README.txt
-rw-r--r--  1 chanley science 1537 May 30 10:59 LICENSE.txt
-rw-r--r--  1 chanley science  246 May 30 11:07 MANIFEST.in
drwxr-sr-x 15 chanley science 4096 May 30 11:07 numpy/
-rw-r--r--  1 chanley science  546 May 30 11:07 README.txt
-rw-r--r--  1 chanley science  123 May 30 10:59 scipy_compatibility
-rwxr-xr-x  1 chanley science  149 May 30 11:07 setupegg.py*
-rwxr-xr-x  1 chanley science 3042 May 30 10:59 setup.py*
-rw-r--r--  1 chanley science 4613 May 30 11:07 site.cfg.example
-rw-r--r--  1 chanley science  185 May 30 10:59 TEST_COMMIT
-rw-r--r--  1 chanley science 2072 May 30 10:59 THANKS.txt

We have a problem with the inclusion of the site.cfg file in the
current tar file.  Its inclusion causes our installations to fail.
However, if the site.cfg file is left as the site.cfg.example the
default installation works perfectly.  This was also the case with
previous versions.

Would it be possible to have this change made?

Thank you for your time and help,
Chris
From peridot.faceted at gmail.com  Wed May 30 13:18:14 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 30 May 2007 13:18:14 -0400
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To: <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain> <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com> <331116dc0705300633h22f8c0f2xc93e8244a90b418b@mail.gmail.com> <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>
Message-ID:

On 30/05/07, Matthew Brett  wrote:
> I think the point is that you can have several different situations
> with byte ordering:
>
> 1) Your data and dtype endianness match, but you want the data swapped
> and the dtype to reflect this
>
> 2) Your data and dtype endianness don't match, and you want to swap the
> data so that they match the dtype
>
> 3) Your data and dtype endianness don't match, and you want to change
> the dtype so that it matches the data.
>
> I guess situation 1 is the one you have, and the way to deal with this
> is, as you say:
>
> other_order_arr = one_order_arr.byteswap()
> other_order_arr.dtype = other_order_arr.dtype.newbyteorder()
>
> The byteswap method is obviously aimed at situation 2 (the data and
> dtype don't match in endianness, and you want to swap the data).
>
> I can see your point I think, that situation 1 seems to be the more
> common and obvious, and coming at it from outside, you would have
> thought that a.byteswap would change both.

I think the reason that byteswap behaves the way it does is that for
situation 1 you often don't actually need to do anything.  Just
calculate with the things (it'll be a bit slow); as soon as the first
copy gets made you're back to native byte order.  So for those times
you need to do it in place it's not too much trouble to byteswap and
adjust the byte order in the dtype (you'd need to inspect the byte
order in the first place to know it was byteswapped...)

Anne
From robert.kern at gmail.com  Wed May 30 13:48:22 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 May 2007 12:48:22 -0500
Subject: [Numpy-discussion] corrcoef of masked array
In-Reply-To: <200705301202.14652.jl@dmi.dk>
References: <200705251037.45071.jl@dmi.dk> <46571A72.6000903@gmail.com> <200705301202.14652.jl@dmi.dk>
Message-ID: <465DB8E6.2080903@gmail.com>

Jesper Larsen wrote:
> Here is my solution for calculating the correlation coefficients for masked
> arrays.  Comments are appreciated:
>
> def macorrcoef(data1, data2):
>     """
>     Calculates correlation coefficients taking masked out values
>     into account.
>
>     It is assumed (but not checked) that data1.shape == data2.shape.
>     """
>     nv, no = data1.shape
>     cc = ma.array(zeros((nv, nv)), mask=ones((nv, nv)))
>     if no > 1:
>         for i in range(nv):
>             for j in range(nv):
>                 m = ma.getmaskarray(data1[i,:]) | ma.getmaskarray(data2[j,:])
>                 d1 = ma.array(data1[i,:], copy=False, mask=m).compressed()
>                 d2 = ma.array(data2[j,:], copy=False, mask=m).compressed()
>                 if ma.count(d1) > 1:
>                     c = corrcoef(d1, d2)
>                     cc[i,j] = c[0,1]
>
>     return cc

I'm afraid this doesn't work, either.  Correlation matrices are constrained to
be positive semidefinite; that is, all of their eigenvalues must be >= 0.
Calculating each of the correlation coefficients in a pairwise fashion doesn't
incorporate this constraint.

But you're on the right track.  My preferred approach to this problem is to
find the pairwise correlation matrix as you did and then find the closest
positive semidefinite matrix to it using the method of alternating
projections.  I can't give you the code I wrote for this since it belongs to a
customer, but here is the reference I used:

  http://eprints.ma.man.ac.uk/232/
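To give the flavor of it, though, here is a rough, untested sketch of the
alternating projections iteration from that paper -- written fresh for this
message, not the code I mentioned, and leaving out the weighting and the
convergence test that a real implementation needs:

    import numpy as N

    def nearest_corr(A, niter=100):
        # Alternate between projecting onto the positive semidefinite
        # matrices (via an eigendecomposition) and onto the matrices
        # with unit diagonal, carrying Dykstra's correction between steps.
        n = A.shape[0]
        Y = A.copy()
        dS = N.zeros((n, n))
        for i in range(niter):
            R = Y - dS                        # remove previous correction
            w, V = N.linalg.eigh(R)
            w = N.where(w > 0.0, w, 0.0)      # clip negative eigenvalues
            X = N.dot(V * w, V.transpose())   # projection onto PSD cone
            dS = X - R                        # new Dykstra correction
            Y = X.copy()
            Y.flat[::n+1] = 1.0               # force unit diagonal
        return Y

In practice you stop when ||Y - X|| gets small rather than after a fixed
number of iterations.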
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco
From ellisonbg.net at gmail.com  Wed May 30 15:23:19 2007
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Wed, 30 May 2007 13:23:19 -0600
Subject: [Numpy-discussion] New Linux build problem
Message-ID: <6ce0ac130705301223r45bb059qa2f01988f89d89be@mail.gmail.com>

Hi,

I am building numpy on a 32 bit Linux system (Scientific Linux).
Numpy used to build fine on this system, but as I have moved to the
new 1.0.3 versions, I have run into problems building.  Basically, I
get lots of things like:

undefined reference to `cblas_sdot'

and

undefined reference to `PyArg_ParseTuple'

Here are some highlights of the build process:

lapack_info:
  libraries lapack not found in /home2/dechow/work/txp/txpython-0.2-linux/local/lib
  libraries lapack not found in /usr/local/lib
  FOUND:
    libraries = ['lapack']
    library_dirs = ['/usr/lib']
    language = f77

  FOUND:
    libraries = ['lapack', 'blas']
    library_dirs = ['/usr/lib']
    define_macros = [('NO_ATLAS_INFO', 1)]
    language = f77

And later:

creating build/temp.linux-i686-2.5/numpy/core/blasdot
compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.linux-i686-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/home2/dechow/work/txp/txpython-0.2-linux/local/include/python2.5 -c'
gcc: numpy/core/blasdot/_dotblas.c
/usr/bin/gfortran build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o -L/usr/lib -lblas -lgfortran -o build/lib.linux-i686-2.5/numpy/core/_dotblas.so
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x9a): In function `FLOAT_dot':
numpy/core/blasdot/_dotblas.c:24: undefined reference to `cblas_sdot'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x14a): In function `DOUBLE_dot':
numpy/core/blasdot/_dotblas.c:39: undefined reference to `cblas_ddot'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x201): In function `CFLOAT_dot':
numpy/core/blasdot/_dotblas.c:54: undefined reference to `cblas_cdotu_sub'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x2ad): In function `CDOUBLE_dot':
numpy/core/blasdot/_dotblas.c:68: undefined reference to `cblas_zdotu_sub'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x2e3): In function `dotblas_alterdot':
numpy/core/blasdot/_dotblas.c:83: undefined reference to `PyArg_ParseTuple'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x2f8):numpy/core/blasdot/_dotblas.c:107: undefined reference to `_Py_NoneStruct'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x3e3): In function `dotblas_restoredot':
numpy/core/blasdot/_dotblas.c:118: undefined reference to `PyArg_ParseTuple'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x3f8):numpy/core/blasdot/_dotblas.c:144: undefined reference to `_Py_NoneStruct'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x57c): In function `dotblas_matrixproduct':
numpy/core/blasdot/_dotblas.c:196: undefined reference to `PyArg_ParseTuple'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x758):numpy/core/blasdot/_dotblas.c:226: undefined reference to `PyTuple_New'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x76c):numpy/core/blasdot/_dotblas.c:83: undefined reference to `PyArg_ParseTuple'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x785):numpy/core/blasdot/_dotblas.c:107: undefined reference to `_Py_NoneStruct'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0xbde):numpy/core/blasdot/_dotblas.c:321: undefined reference to `PyExc_ValueError'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0xbf2):numpy/core/blasdot/_dotblas.c:321: undefined reference to `PyErr_SetString'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0xca8):numpy/core/blasdot/_dotblas.c:539: undefined reference to `PyEval_SaveThread'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0xd30):numpy/core/blasdot/_dotblas.c:696: undefined reference to `PyEval_RestoreThread'
build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0xfca):numpy/core/blasdot/_dotblas.c:365: undefined reference to `PyEval_SaveThread' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x108f):numpy/core/blasdot/_dotblas.c:589: undefined reference to `PyEval_SaveThread' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1129):numpy/core/blasdot/_dotblas.c:495: undefined reference to `PyEval_SaveThread' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x11ff):numpy/core/blasdot/_dotblas.c:657: undefined reference to `PyEval_SaveThread' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x131c):numpy/core/blasdot/_dotblas.c:373: undefined reference to `cblas_daxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x13b1):numpy/core/blasdot/_dotblas.c:406: undefined reference to `cblas_zaxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x146c):numpy/core/blasdot/_dotblas.c:435: undefined reference to `cblas_saxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x14f4):numpy/core/blasdot/_dotblas.c:550: undefined reference to `cblas_dgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1629):numpy/core/blasdot/_dotblas.c:389: undefined reference to `cblas_daxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x16a6):numpy/core/blasdot/_dotblas.c:507: undefined reference to `cblas_ddot' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x171f):numpy/core/blasdot/_dotblas.c:468: undefined reference to `cblas_caxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x17a0):numpy/core/blasdot/_dotblas.c:556: undefined reference to `cblas_sgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x17db):numpy/core/blasdot/_dotblas.c:512: undefined reference to `cblas_sdot' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x188d):numpy/core/blasdot/_dotblas.c:562: undefined reference to `cblas_zgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x19a7):numpy/core/blasdot/_dotblas.c:422: undefined reference to `cblas_zaxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1a31):numpy/core/blasdot/_dotblas.c:517: undefined reference to `cblas_zdotu_sub' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1ab2):numpy/core/blasdot/_dotblas.c:569: undefined reference to `cblas_cgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1b6f):numpy/core/blasdot/_dotblas.c:669: undefined reference to `cblas_dgemm' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1bb7):numpy/core/blasdot/_dotblas.c:521: undefined reference to `cblas_cdotu_sub' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1c3f):numpy/core/blasdot/_dotblas.c:605: undefined reference to `cblas_dgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1d14):numpy/core/blasdot/_dotblas.c:676: undefined reference to `cblas_sgemm' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1dda):numpy/core/blasdot/_dotblas.c:451: undefined reference to `cblas_saxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1eaf):numpy/core/blasdot/_dotblas.c:611: undefined reference to `cblas_sgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x1f37):numpy/core/blasdot/_dotblas.c:690: undefined reference to `cblas_cgemm' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x2020):numpy/core/blasdot/_dotblas.c:617: undefined 
reference to `cblas_zgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x20ea):numpy/core/blasdot/_dotblas.c:484: undefined reference to `cblas_caxpy' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x21c6):numpy/core/blasdot/_dotblas.c:683: undefined reference to `cblas_zgemm' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x2255):numpy/core/blasdot/_dotblas.c:623: undefined reference to `cblas_cgemv' build/temp.linux-i686-2.5/numpy/core/blasdot/_dotblas.o(.text+0x22ec): In function `dotblas_innerproduct': numpy/core/blasdot/_dotblas.c:729: undefined reference to `PyArg_ParseTuple' Have there been recent changes to distutils that would cause these undefined references to appear? Thanks Brian From robert.kern at gmail.com Wed May 30 15:41:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 May 2007 14:41:41 -0500 Subject: [Numpy-discussion] New Linux build problem In-Reply-To: <6ce0ac130705301223r45bb059qa2f01988f89d89be@mail.gmail.com> References: <6ce0ac130705301223r45bb059qa2f01988f89d89be@mail.gmail.com> Message-ID: <465DD375.8070703@gmail.com> Brian Granger wrote: > Hi, > > I am building numpy on a 32 bit Linux system (Scientific Linux). > Numpy used to build fine on this system, but as I have moved to the > new 1.0.3 versions, I have run into problems building. Basically, I > get lots of things like: > > undefined reference to `cblas_sdot' This looks like numpy.distutils has found ATLAS's FORTRAN BLAS library but not its libcblas library. Do you have a correct site.cfg file? From Chris Hanley's earlier post, it looks like the tarball on the SF site mistakenly includes a site.cfg. Delete it or correct it. > and > > undefined reference to `PyArg_ParseTuple' This usually comes from having an LDFLAGS environment variable defined. It overwrites the linker information. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ellisonbg.net at gmail.com Wed May 30 16:02:09 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 30 May 2007 14:02:09 -0600 Subject: [Numpy-discussion] New Linux build problem In-Reply-To: <465DD375.8070703@gmail.com> References: <6ce0ac130705301223r45bb059qa2f01988f89d89be@mail.gmail.com> <465DD375.8070703@gmail.com> Message-ID: <6ce0ac130705301302m3ed6cf49i2b696a26afbc3d1e@mail.gmail.com> > This looks like numpy.distutils has found ATLAS's FORTRAN BLAS library but not > its libcblas library. Do you have a correct site.cfg file? From Chris Hanley's > earlier post, it looks like the tarball on the SF site mistakenly includes a > site.cfg. Delete it or correct it. I will look at this. > > and > > > > undefined reference to `PyArg_ParseTuple' > > This usually comes from having an LDFLAGS environment variable defined. It > overwrites the linker information. Unsetting LDFLAGS before building didn't help. Hmmm, the odd thing is that the stock numpy used to build just fine on this system (about a month ago) Brian > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From ellisonbg.net at gmail.com Wed May 30 16:05:13 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 30 May 2007 14:05:13 -0600 Subject: [Numpy-discussion] New Linux build problem In-Reply-To: <6ce0ac130705301302m3ed6cf49i2b696a26afbc3d1e@mail.gmail.com> References: <6ce0ac130705301223r45bb059qa2f01988f89d89be@mail.gmail.com> <465DD375.8070703@gmail.com> <6ce0ac130705301302m3ed6cf49i2b696a26afbc3d1e@mail.gmail.com> Message-ID: <6ce0ac130705301305q41d0f26es281bf3830e946bab@mail.gmail.com> Take that back, unsetting LDFLAGS did work!!! Thanks Brian On 5/30/07, Brian Granger wrote: > > This looks like numpy.distutils has found ATLAS's FORTRAN BLAS library but not > > its libcblas library. Do you have a correct site.cfg file? From Chris Hanley's > > earlier post, it looks like the tarball on the SF site mistakenly includes a > > site.cfg. Delete it or correct it. > > I will look at this. > > > > and > > > > > > undefined reference to `PyArg_ParseTuple' > > > > This usually comes from having an LDFLAGS environment variable defined. It > > overwrites the linker information. > > Unsetting LDFLAGS before building didn't help. Hmmm, the odd thing is > that the stock numpy used to build just fine on this system (about a > month ago) > > Brian > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless enigma > > that is made terrible by our own mad attempt to interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > From cookedm at physics.mcmaster.ca Wed May 30 20:08:01 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 30 May 2007 20:08:01 -0400 Subject: [Numpy-discussion] build problem on Windows (was: build problem on RHE3 machine) In-Reply-To: <003a01c7a2bb$479bded0$0100a8c0@sun.ac.za> References: <00ab01c7a22f$0ff36340$0100a8c0@sun.ac.za> <003a01c7a2bb$479bded0$0100a8c0@sun.ac.za> Message-ID: <20070531000801.GA20169@arbutus.physics.mcmaster.ca> On Wed, May 30, 2007 at 03:06:04PM +0200, Albert Strasheim wrote: > Hello > > I took a quick look at the code, and it seems like new_fcompiler(...) is too > soon to throw an error if a Fortran compiler cannot be detected. > > Instead, you might want to return some kind of NoneFCompiler that throws an > error if the build actually tries to compile Fortran code. Maybe it's fixed now :-P new_fcompiler will return None instead of raising an error. build_ext and build_clib should handle it from there if they need the Fortran compiler. Sorry about all the breakage :( > ----- Original Message ----- > From: "Albert Strasheim" > To: "Discussion of Numerical Python" > Sent: Tuesday, May 29, 2007 10:22 PM > Subject: Re: [Numpy-discussion] build problem on RHE3 machine > > > > Hello all > > > > ----- Original Message ----- > > From: "David M. Cooke" > > To: "Discussion of Numerical Python" > > Cc: "Albert Strasheim" > > Sent: Tuesday, May 29, 2007 6:58 PM > > Subject: Re: [Numpy-discussion] build problem on RHE3 machine > > > > > >>> Is there any info I can provide to get this issue fixed? 
> >> > >> Anything you've got :) The output of these are hopefully useful to me > >> (after removing build/): > >> > >> $ python setup.py -v build > >> $ python setup.py -v config_fc --help-fcompiler > > > > Attached as build1.log and build2.log. > > > > Cheers, > > > > Albert > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fullung at gmail.com Wed May 30 20:32:21 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 31 May 2007 02:32:21 +0200 Subject: [Numpy-discussion] build problem on Windows (was: build problemon RHE3 machine) References: <00ab01c7a22f$0ff36340$0100a8c0@sun.ac.za><003a01c7a2bb$479bded0$0100a8c0@sun.ac.za> <20070531000801.GA20169@arbutus.physics.mcmaster.ca> Message-ID: <007801c7a31b$26e22db0$0100a8c0@sun.ac.za> Hello all ----- Original Message ----- From: "David M. Cooke" To: "Discussion of Numerical Python" Sent: Thursday, May 31, 2007 2:08 AM Subject: Re: [Numpy-discussion] build problem on Windows (was: build problemon RHE3 machine) > On Wed, May 30, 2007 at 03:06:04PM +0200, Albert Strasheim wrote: >> Hello >> >> I took a quick look at the code, and it seems like new_fcompiler(...) is >> too >> soon to throw an error if a Fortran compiler cannot be detected. >> >> Instead, you might want to return some kind of NoneFCompiler that throws >> an >> error if the build actually tries to compile Fortran code. > > Maybe it's fixed now :-P new_fcompiler will return None instead of > raising an error. build_ext and build_clib should handle it from there > if they need the Fortran compiler. Almost there, but not quite: don't know how to compile Fortran code on platform 'nt' Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 89, in ? 
setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "C:\home\albert\work2\numpy\numpy\distutils\core.py", line 159, in setup return old_setup(**new_attr) File "C:\Python24\lib\distutils\core.py", line 149, in setup dist.run_commands() File "C:\Python24\lib\distutils\dist.py", line 946, in run_commands self.run_command(cmd) File "C:\Python24\lib\distutils\dist.py", line 966, in run_command cmd_obj.run() File "C:\home\albert\work2\numpy\numpy\distutils\command\build_ext.py", line 56, in run self.run_command('build_src') File "C:\Python24\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "C:\Python24\lib\distutils\dist.py", line 966, in run_command cmd_obj.run() File "C:\home\albert\work2\numpy\numpy\distutils\command\build_src.py", line 130, in run self.build_sources() File "C:\home\albert\work2\numpy\numpy\distutils\command\build_src.py", line 147, in build_sources self.build_extension_sources(ext) File "C:\home\albert\work2\numpy\numpy\distutils\command\build_src.py", line 250, in build_extension_sources sources = self.generate_sources(sources, ext) File "C:\home\albert\work2\numpy\numpy\distutils\command\build_src.py", line 307, in generate_sources source = func(extension, build_dir) File "numpy\core\setup.py", line 51, in generate_config_h library_dirs = default_lib_dirs) File "C:\Python24\lib\distutils\command\config.py", line 278, in try_run self._check_compiler() File "C:\home\albert\work2\numpy\numpy\distutils\command\config.py", line 31, in _check_compiler self.fcompiler.customize(self.distribution) AttributeError: 'NoneType' object has no attribute 'customize' Cheers, Albert From cookedm at physics.mcmaster.ca Wed May 30 20:53:06 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 30 May 2007 20:53:06 -0400 Subject: [Numpy-discussion] build problem on Windows (was: build problemon RHE3 machine) In-Reply-To: <007801c7a31b$26e22db0$0100a8c0@sun.ac.za> References: <20070531000801.GA20169@arbutus.physics.mcmaster.ca> <007801c7a31b$26e22db0$0100a8c0@sun.ac.za> Message-ID: <20070531005306.GA20285@arbutus.physics.mcmaster.ca> On Thu, May 31, 2007 at 02:32:21AM +0200, Albert Strasheim wrote: > Hello all > > ----- Original Message ----- > From: "David M. Cooke" > To: "Discussion of Numerical Python" > Sent: Thursday, May 31, 2007 2:08 AM > Subject: Re: [Numpy-discussion] build problem on Windows (was: build > problemon RHE3 machine) > > > > On Wed, May 30, 2007 at 03:06:04PM +0200, Albert Strasheim wrote: > >> Hello > >> > >> I took a quick look at the code, and it seems like new_fcompiler(...) is > >> too > >> soon to throw an error if a Fortran compiler cannot be detected. > >> > >> Instead, you might want to return some kind of NoneFCompiler that throws > >> an > >> error if the build actually tries to compile Fortran code. > > > > Maybe it's fixed now :-P new_fcompiler will return None instead of > > raising an error. build_ext and build_clib should handle it from there > > if they need the Fortran compiler. > > Almost there, but not quite: > > > don't know how to compile Fortran code on platform 'nt' > Running from numpy source directory. > Traceback (most recent call last): [snip] > File "C:\home\albert\work2\numpy\numpy\distutils\command\config.py", line > 31, in _check_compiler > self.fcompiler.customize(self.distribution) > AttributeError: 'NoneType' object has no attribute 'customize' Try it now. 
--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
From oliphant.travis at ieee.org  Thu May 31 00:24:05 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 May 2007 22:24:05 -0600
Subject: [Numpy-discussion] Problem with numpy-1.0.3.tar.gz file
In-Reply-To: <465D95DD.6050404@stsci.edu>
References: <465D95DD.6050404@stsci.edu>
Message-ID: <465E4DE5.6020502@ieee.org>

Christopher Hanley wrote:
> Hi,
>
> Could someone please re-create the numpy-1.0.3.tar.gz file that is
> currently being distributed from sourceforge?  That tar file includes
> the following:
>

I've fixed the tar-ball.  It is named numpy-1.0.3-2.tar.gz

-Travis
From oliphant.travis at ieee.org  Thu May 31 01:30:00 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 May 2007 23:30:00 -0600
Subject: [Numpy-discussion] SciPy Journal
Message-ID: <465E5D58.9030107@ieee.org>

Hi everybody,

I'm sorry for the cross posting, but I wanted to reach a wide audience
and I know not everybody subscribes to all the lists.

I've been thinking more about the "SciPy Journal" that we discussed
before and I have some thoughts.

1) I'd like to get it going so that we can push out an electronic issue
after the SciPy conference (in September)

2) I think its scope should be limited to papers that describe
algorithms and code that are in NumPy / SciPy / SciKits.  Perhaps we
could also accept papers that describe code that depends on NumPy /
SciPy that is also easily available.

3) I'd like to make a requirement for inclusion of new code in SciPy
that it have an associated journal article describing the algorithms,
design approach, etc.  I don't see this journal article as being
user-interface documentation for the code.  I see this as a place to
describe why the code is organized as it is and to detail any algorithms
that are used.

4) The purpose of the journal as I see it is to

a) provide someplace to document what is actually done in SciPy and
related software.
b) provide a teaching tool of numerical methods with actual "people
use-it" code that would be useful to researchers, students, and
professionals.
c) hopefully clever new algorithms will be developed for SciPy by
people using Python that could be showcased here
d) provide a peer-review publication opportunity for people who
contribute to open-source software

5) We obviously need associate editors and people willing to review
submitted articles as well as people willing to submit articles.  I
have two articles that can be submitted within the next two months.
What do other people have?

As an example of the kind of thing a SciPy Journal would be useful for:
I have recently overhauled the interpolation.py file for SciPy by
incorporating the B-spline stuff that is partly in fitpack.  In the
process I noticed two things:

1) I have (what seems to me) a different recursive algorithm for
calculating derivatives of B-splines than I could find in fitpack.

2) I have developed a different way to determine the K-1 extra degrees
of freedom for Kth-order spline fitting than I have seen before.

The SciPy Journal would be a great place to document both of these
things while describing the spline interpolation design of
scipy.interpolate.

It is true that I could submit this stuff to other journals, but it
seems that doing so makes the information harder to find in the future
and not easier.
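To make the spline example concrete, this is the level of usage such an
article would sit behind -- just a sketch using the existing
scipy.interpolate names, with illustrative parameters:

    import numpy as N
    from scipy import interpolate

    x = N.linspace(0, 2*N.pi, 50)
    y = N.sin(x)

    tck = interpolate.splrep(x, y, k=3, s=0)     # cubic B-spline representation
    xnew = N.linspace(0, 2*N.pi, 500)
    ynew = interpolate.splev(xnew, tck)          # evaluate the spline
    dynew = interpolate.splev(xnew, tck, der=1)  # ... and its first derivative

The article would describe what happens underneath calls like these: how
the knots and coefficients in tck are chosen, and the recursion used for
the derivative evaluation.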
I'm also dissatisfied with how information-exclusionary academic journals
seem to be.  They are catching up, but they are still not as accessible
as other things available on the internet.

Given the open nature of most scientific research, it is remarkable that
getting access to the information is not as easy as it should be with
modern search engines (if your internet domain does not subscribe to the
e-journal).

Comments and feedback are welcome.

-Travis
From peridot.faceted at gmail.com  Thu May 31 02:38:20 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 31 May 2007 02:38:20 -0400
Subject: [Numpy-discussion] SciPy Journal
In-Reply-To: <465E5D58.9030107@ieee.org>
References: <465E5D58.9030107@ieee.org>
Message-ID:

On 31/05/07, Travis Oliphant  wrote:
> 2) I think its scope should be limited to papers that describe
> algorithms and code that are in NumPy / SciPy / SciKits.  Perhaps we
> could also accept papers that describe code that depends on NumPy /
> SciPy that is also easily available.

I think there's a place for code that doesn't fit in scipy itself but
could be closely tied to it - scikits, for example, or other code that
can't go in for license reasons (such as specialization).

> 3) I'd like to make a requirement for inclusion of new code in SciPy
> that it have an associated journal article describing the algorithms,
> design approach, etc.  I don't see this journal article as being
> user-interface documentation for the code.  I see this as a place to
> describe why the code is organized as it is and to detail any algorithms
> that are used.

I don't think that's a good idea.  It raises the barrier to contributing
code (particularly for non-native English speakers), which is not
something we need.  Certainly every major piece of code warrants a
journal article, or at least a short piece, and certainly the author
should have first shot, but I think it's not unreasonable to allow the
code to go in without the article being written.  But (for example) I
implemented the Kuiper statistic and would be happy to contribute it to
scipy (once it's seen a bit more debugging), but it's quite adequately
described in the literature already, so it doesn't seem worth writing an
article about it.

Anne M. Archibald
From matthieu.brucher at gmail.com  Thu May 31 02:59:29 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 31 May 2007 08:59:29 +0200
Subject: [Numpy-discussion] SciPy Journal
In-Reply-To: <465E5D58.9030107@ieee.org>
References: <465E5D58.9030107@ieee.org>
Message-ID:

Hi,

> 1) I'd like to get it going so that we can push out an electronic issue
> after the SciPy conference (in September)

That would be great for the "fame" of numpy and scipy :)

> 2) I think its scope should be limited to papers that describe
> algorithms and code that are in NumPy / SciPy / SciKits.  Perhaps we
> could also accept papers that describe code that depends on NumPy /
> SciPy that is also easily available.

> 3) I'd like to make a requirement for inclusion of new code in SciPy
> that it have an associated journal article describing the algorithms,
> design approach, etc.  I don't see this journal article as being
> user-interface documentation for the code.  I see this as a place to
> describe why the code is organized as it is and to detail any algorithms
> that are used.

For this point, I have the same opinion as Anne:
- having an equivalence between code and article raises the entry
level, but as Anne said, some code could be somehow too trivial?
- a peer-review process implies that an article can be rejected, so the
code is accepted, but not the article and vice-versa?
- perhaps encouraging new contributors to propose an article would be a
solution?

> 4) The purpose of the journal as I see it is to
>
> a) provide someplace to document what is actually done in SciPy and
> related software.
> b) provide a teaching tool of numerical methods with actual "people
> use-it" code that would be useful to researchers, students, and
> professionals.
> c) hopefully clever new algorithms will be developed for SciPy by
> people using Python that could be showcased here
> d) provide a peer-review publication opportunity for people who
> contribute to open-source software

+1 for everything, very good idea.

> 5) We obviously need associate editors and people willing to review
> submitted articles as well as people willing to submit articles.  I
> have two articles that can be submitted within the next two months.
> What do other people have?

I could talk about the design I proposed for a generic optimizer, and
hopefully I'll have some other generic modules that could be exposed.
But it's not in scipy, and it's not an official scikit at the moment.
How long should it be - some journals have limits in size, so...?  I
think I could propose one in three months - English is not my
mother-tongue and I have a pressing article deadline before -

> It is true that I could submit this stuff to other journals, but it
> seems that doing so makes the information harder to find in the future
> and not easier.  I'm also dissatisfied with how information-exclusionary
> academic journals seem to be.  They are catching up, but they are still
> not as accessible as other things available on the internet.
> Given the open nature of most scientific research, it is remarkable that
> getting access to the information is not as easy as it should be with
> modern search engines (if your internet domain does not subscribe to the
> e-journal).

Same for me, I have a hard time figuring out why the prices are so high,
and that closes the door to very good articles :(

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From matthew.brett at gmail.com  Thu May 31 04:13:48 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 31 May 2007 09:13:48 +0100
Subject: [Numpy-discussion] byteswap() leaves dtype unchanged
In-Reply-To:
References: <331116dc0705291117g5d19414fne3b6e65f6c9ac3b0@mail.gmail.com> <1180510969.2889.4.camel@localhost.localdomain> <331116dc0705300630h70ee98a3o37f85a87b050711b@mail.gmail.com> <331116dc0705300633h22f8c0f2xc93e8244a90b418b@mail.gmail.com> <1e2af89e0705300657s77732363vd122afd1cd818e77@mail.gmail.com>
Message-ID: <1e2af89e0705310113g6b04e336s1ad118f610cbab3d@mail.gmail.com>

Hi,

> > I can see your point I think, that situation 1 seems to be the more
> > common and obvious, and coming at it from outside, you would have
> > thought that a.byteswap would change both.
>
> I think the reason that byteswap behaves the way it does is that for
> situation 1 you often don't actually need to do anything.  Just
> calculate with the things (it'll be a bit slow); as soon as the first
> copy gets made you're back to native byte order.  So for those times
> you need to do it in place it's not too much trouble to byteswap and
> adjust the byte order in the dtype (you'd need to inspect the byte
> order in the first place to know it was byteswapped...)

Thanks - good point.
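For example, a quick sketch of that point (untested, and assuming a
little-endian machine):

    import numpy as N

    a = N.arange(3.0).astype(N.dtype('>f8'))    # non-native (big-endian) data
    b = a + 0                                   # any arithmetic makes a copy
    print a.dtype.byteorder, b.dtype.byteorder  # '>' then '=' -- native again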
How about the following suggestion?

For the next release:
rename byteswap to something like byteswapbuffer
deprecate byteswap in favor of byteswapbuffer

Update the docstrings to make the distinction between situations clearer. I think that would reduce the element of surprise here.

Best,

Matthew

From fullung at gmail.com Thu May 31 04:39:21 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 31 May 2007 10:39:21 +0200 Subject: [Numpy-discussion] build problem on Windows (was: build problem on RHE3 machine) References: <20070531000801.GA20169@arbutus.physics.mcmaster.ca><007801c7a31b$26e22db0$0100a8c0@sun.ac.za> <20070531005306.GA20285@arbutus.physics.mcmaster.ca> Message-ID: <001801c7a35f$2fa17700$0100a8c0@sun.ac.za>

Hello all

----- Original Message ----- From: "David M. Cooke" To: "Discussion of Numerical Python" Sent: Thursday, May 31, 2007 2:53 AM Subject: Re: [Numpy-discussion] build problem on Windows (was: build problem on RHE3 machine)

> On Thu, May 31, 2007 at 02:32:21AM +0200, Albert Strasheim wrote:
>
>> don't know how to compile Fortran code on platform 'nt'
>> Running from numpy source directory.
>> Traceback (most recent call last):
> [snip]
>> File "C:\home\albert\work2\numpy\numpy\distutils\command\config.py", line 31, in _check_compiler
>> self.fcompiler.customize(self.distribution)
>> AttributeError: 'NoneType' object has no attribute 'customize'
>
> Try it now.

Okay, the NumPy build looks good. The SciPy build is having issues because my Intel Visual Fortran is installed in C:\Program Files (the default location). I've made a ticket for that here, where I've attached the build output:

http://projects.scipy.org/scipy/numpy/ticket/531

Cheers,

Albert

From pearu at cens.ioc.ee Thu May 31 04:49:40 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 31 May 2007 10:49:40 +0200 Subject: [Numpy-discussion] build problem on Windows In-Reply-To: <003a01c7a2bb$479bded0$0100a8c0@sun.ac.za> References: <46571442.9020000@stsci.edu><004f01c79ef1$a8d6e0f0$0100a8c0@sun.ac.za><20070525173716.GA25025@arbutus.physics.mcmaster.ca><465720BC.8090400@gmail.com><20070525175020.GA25108@arbutus.physics.mcmaster.ca><001801c7a1f0$b7e9bc00$0100a8c0@sun.ac.za><53FB9E6D-1F05-4726-897D-70A4DC07DCBE@physics.mcmaster.ca> <00ab01c7a22f$0ff36340$0100a8c0@sun.ac.za> <003a01c7a2bb$479bded0$0100a8c0@sun.ac.za> Message-ID: <465E8C24.7050409@cens.ioc.ee>

Albert Strasheim wrote: > Hello > > I took a quick look at the code, and it seems like new_fcompiler(...) is too > soon to throw an error if a Fortran compiler cannot be detected. > > Instead, you might want to return some kind of NoneFCompiler that throws an > error if the build actually tries to compile Fortran code.

Yes, this issue should be fixed in svn now.

Pearu

From lbolla at gmail.com Thu May 31 06:07:09 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 31 May 2007 12:07:09 +0200 Subject: [Numpy-discussion] fortran representation Message-ID: <80c99e790705310307m424c41e8w8be03fe102baf72b@mail.gmail.com>

Hi all,
I've got an easy question for you. I looked in Travis' book, but I couldn't figure out the answer...
If I have a 1D array (obtained by reading a stream of numbers with numpy.fromfile) like this:

In [150]: data
Out[150]: array([ 2., 3., 4., 3., 4., 5., 4., 5., 6., 5., 6., 7.], dtype=float32)

I want it to be considered as "Fortran ordered", so that when I do:

In [151]: data.shape = (3,4)

I want to get:

array([[ 2., 3., 4., 5.],
       [ 3., 4., 5., 6.],
       [ 4., 5., 6., 7.]], dtype=float32)

But I get:

array([[ 2., 3., 4., 3.],
       [ 4., 5., 4., 5.],
       [ 6., 5., 6., 7.]], dtype=float32)

How can I do that? (I know I could do data.reshape(4,3).T, but it's not very "elegant", and reshaping in ND becomes a mess!). (I tried numpy.array(data, order='Fortran'), with no luck...)

Thank you in advance,
Lorenzo.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefan at sun.ac.za Thu May 31 06:54:05 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 31 May 2007 12:54:05 +0200 Subject: [Numpy-discussion] fortran representation In-Reply-To: <80c99e790705310307m424c41e8w8be03fe102baf72b@mail.gmail.com> References: <80c99e790705310307m424c41e8w8be03fe102baf72b@mail.gmail.com> Message-ID: <20070531105404.GJ6842@mentat.za.net>

On Thu, May 31, 2007 at 12:07:09PM +0200, lorenzo bolla wrote: > If I have a 1D array (obtained by reading a stream of numbers with numpy.fromfile) > like this: > > In [150]: data > Out[150]: array([ 2., 3., 4., 3., 4., 5., 4., 5., 6., 5., 6., 7.], > dtype=float32) > > I want it to be considered as "Fortran ordered", so that when I do: > > In [151]: data.shape = (3,4) > > I want to get: > > array([[ 2., 3., 4., 5.], > [ 3., 4., 5., 6.], > [ 4., 5., 6., 7.]], dtype=float32)

In [42]: x = N.array([2., 3., 4., 3., 4., 5., 4., 5., 6., 5., 6., 7.])

In [43]: x.reshape((3,4), order='F')
Out[43]:
array([[ 2., 3., 4., 5.],
       [ 3., 4., 5., 6.],
       [ 4., 5., 6., 7.]])

Cheers
Stéfan

From lou_boog2000 at yahoo.com Thu May 31 10:04:54 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 31 May 2007 07:04:54 -0700 (PDT) Subject: [Numpy-discussion] [SciPy-user] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> Message-ID: <769426.51104.qm@web34415.mail.mud.yahoo.com>

I agree with this idea. Very good. Although I also agree with Anne Archibald that the requirement of a journal article in order to submit code is not a good idea. I would be willing to contribute an article on writing C extensions that use numpy arrays. I already have something on this in the SciPy cookbook, but I bet it would reach more people in a journal.

I also suggest that articles on using packages like matplotlib/pylab for scientific purposes also be included.

-- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein

From jdh2358 at gmail.com Thu May 31 10:33:49 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 31 May 2007 09:33:49 -0500 Subject: [Numpy-discussion] build advice Message-ID: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com>

A colleague of mine is trying to update our production environment with the latest releases of numpy, scipy, mpl and ipython, and is worried about the lag time when there is a new numpy and old scipy, etc... as the build progresses.
This is the scheme he is considering, which looks fine to me, but I thought I would bounce it off the list here in case anyone has confronted or thought about this problem before. Alternatively, an install to a tmp dir and then a bulk cp -r should work, no?

JDH

> We're planning to be putting out a bugfix > matplotlib point release to 0.90.1 -- can you hold off on the mpl > install for a day or so?

Sure. While I have your attention, do you think this install scheme would work? It's the body of an email I just sent to c.l.py.

----------------------------------------------------------------------------

At work I need to upgrade numpy, scipy, ipython and matplotlib. They need to be done all at once. All have distutils setups but the new versions and the old versions are incompatible with one another as a group because numpy's apis changed. Ideally, I could just do something like

cd ~/src
cd numpy
python setup.py install
cd ../scipy
python setup.py install
cd ../matplotlib
python setup.py install
cd ../ipython
python setup.py install

however, even if nothing goes awry it leaves me with a fair chunk of time where the state of the collective system is inconsistent (new numpy, old everything else). I'm wondering... Can I stage the installs to a different directory tree like so:

export PYTHONPATH=$HOME/local/lib/python-2.4/site-packages
cd ~/src
cd numpy
python setup.py install --prefix=$PYTHONPATH
cd ../scipy
python setup.py install --prefix=$PYTHONPATH
cd ../matplotlib
python setup.py install --prefix=$PYTHONPATH
cd ../ipython
python setup.py install --prefix=$PYTHONPATH

That would get them all built as a cohesive set. Then I'd repeat the installs without PYTHONPATH:

unset PYTHONPATH
cd ~/src
cd numpy
python setup.py install
cd ../scipy
python setup.py install
cd ../matplotlib
python setup.py install
cd ../ipython
python setup.py install

Presumably the compilation (the time-consuming part) is all location-independent, so the second time the build_ext part should be fast.

Can anyone comment on the feasibility of this approach? I guess what I'm wondering is what dependencies there are on the installation directory.

From matthew.brett at gmail.com Thu May 31 11:05:36 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 31 May 2007 16:05:36 +0100 Subject: [Numpy-discussion] build advice In-Reply-To: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> References: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> Message-ID: <1e2af89e0705310805x3acba707x9225c3e736d6933@mail.gmail.com>

Hi,

> That would get them all built as a cohesive set. Then I'd repeat the > installs without PYTHONPATH:

Is that any different from:

cd ~/src
cd numpy
python setup.py build
cd ../scipy
python setup.py build
...
cd ../numpy
python setup.py install
cd ../scipy
python setup.py install

? Just wondering - I don't know distutils well.

Matthew

From jdh2358 at gmail.com Thu May 31 11:12:50 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 31 May 2007 10:12:50 -0500 Subject: [Numpy-discussion] build advice In-Reply-To: <1e2af89e0705310805x3acba707x9225c3e736d6933@mail.gmail.com> References: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> <1e2af89e0705310805x3acba707x9225c3e736d6933@mail.gmail.com> Message-ID: <88e473830705310812i69e9c9den50f6f1adce93f8f1@mail.gmail.com>

On 5/31/07, Matthew Brett wrote: > Hi, > > > That would get them all built as a cohesive set.
Then I'd repeat the > > installs without PYTHONPATH: > > Is that any different from: > cd ~/src > cd numpy > python setup.py build > cd ../scipy > python setup.py build Well, the scipy and mpl builds need to see the new numpy build, I think that is the issue. JDH From matthew.brett at gmail.com Thu May 31 11:14:39 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 31 May 2007 16:14:39 +0100 Subject: [Numpy-discussion] build advice In-Reply-To: <88e473830705310812i69e9c9den50f6f1adce93f8f1@mail.gmail.com> References: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> <1e2af89e0705310805x3acba707x9225c3e736d6933@mail.gmail.com> <88e473830705310812i69e9c9den50f6f1adce93f8f1@mail.gmail.com> Message-ID: <1e2af89e0705310814k34eb51ffw5c9d618b447803ad@mail.gmail.com> Ah, yes, I was typing too fast, thinking too little. On 5/31/07, John Hunter wrote: > On 5/31/07, Matthew Brett wrote: > > Hi, > > > > > That would get them all built as a cohesive set. Then I'd repeat the > > > installs without PYTHONPATH: > > > > Is that any different from: > > cd ~/src > > cd numpy > > python setup.py build > > cd ../scipy > > python setup.py build > > Well, the scipy and mpl builds need to see the new numpy build, I > think that is the issue. > JDH > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From cookedm at physics.mcmaster.ca Thu May 31 11:27:57 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 31 May 2007 11:27:57 -0400 Subject: [Numpy-discussion] build advice In-Reply-To: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> References: <88e473830705310733q7ac405c3hae6da8ee51521e3c@mail.gmail.com> Message-ID: <20070531152757.GA24048@arbutus.physics.mcmaster.ca> On Thu, May 31, 2007 at 09:33:49AM -0500, John Hunter wrote: > A colleague of mine is trying to update our production environment > with the latest releases of numpy, scipy, mpl and ipython, and is > worried about the lag time when there is a new numpy and old scipy, > etc... as the build progresses. This is the scheme he is considering, > which looks fine to me, but I thought I would bounce it off the list > here in case anyone has confronted or thought about this problem > before. > > Alternatively, an install to a tmp dir and then a bulk cp -r should work, no? > > JDH > > > We're planning to be putting out a bugfix > > matplotlib point release to 0.90.1 -- can you hold off on the mpl > > install for a day or so? > > Sure. While I have your attention, do you think this install scheme > would work? It's the body of an email I just sent to c.l.py. > > ---------------------------------------------------------------------------- > At work I need to upgrade numpy, scipy, ipython and matplotlib. They need > to be done all at once. All have distutils setups but the new versions and > the old versions are incompatible with one another as a group because > numpy's apis changed. Ideally, I could just do something like > > cd ~/src > cd numpy > python setup.py install > cd ../scipy > python setup.py install > cd ../matplotlib > python setup.py install > cd ../ipython > python setup.py install > > however, even if nothing goes awry it leaves me with a fair chunk of time > where the state of the collective system is inconsistent (new numpy, old > everything else). I'm wondering... 
Can I stage the installs to a different > directory tree like so: > > export PYTHONPATH=$HOME/local/lib/python-2.4/site-packages > cd ~/src > cd numpy > python setup.py install --prefix=$PYTHONPATH > cd ../scipy > python setup.py install --prefix=$PYTHONPATH > cd ../matplotlib > python setup.py install --prefix=$PYTHONPATH > cd ../ipython > python setup.py install --prefix=$PYTHONPATH

Note that will install everything to $HOME/local/lib/python-2.4/site-packages/lib/python2.4/site-packages; probably not what he wants :-) Use --prefix=$HOME/local instead.

> That would get them all built as a cohesive set. Then I'd repeat the > installs without PYTHONPATH: > > unset PYTHONPATH > cd ~/src > cd numpy > python setup.py install > cd ../scipy > python setup.py install > cd ../matplotlib > python setup.py install > cd ../ipython > python setup.py install > > Presumably the compilation (the time-consuming part) is all > location-independent, so the second time the build_ext part should be fast. > > Can anyone comment on the feasibility of this approach? I guess what I'm > wondering is what dependencies there are on the installation directory.

For numpy and scipy, it should be fine. An alternate approach would be to use eggs. All of the above can be built as eggs (I just added a setupegg.py script to scipy), and something like http://cheeseshop.python.org/pypi/workingenv.py can give a clean environment to test in.

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From oliphant at ee.byu.edu Thu May 31 12:48:07 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 10:48:07 -0600 Subject: [Numpy-discussion] SciPy Journal In-Reply-To: References: <465E5D58.9030107@ieee.org> Message-ID: <465EFC47.7070809@ee.byu.edu>

Anne Archibald wrote:

>On 31/05/07, Travis Oliphant wrote:
>
>>2) I think its scope should be limited to papers that describe >>algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we >>could also accept papers that describe code that depends on NumPy / >>SciPy that is also easily available.
>
>I think there's a place for code that doesn't fit in scipy itself but >could be closely tied to it - scikits, for example, or other code that >can't go in for license reasons (such as specialization).

I did mention scikits, but your point is well taken.

>>3) I'd like to make a requirement for inclusion of new code in SciPy >>that it have an associated journal article describing the algorithms, >>design approach, etc. I don't see this journal article as being >>user-interface documentation for the code. I see this as a place to >>describe why the code is organized as it is and to detail any algorithms >>that are used.
>
>I don't think that's a good idea. It raises the barrier to >contributing code (particularly for non-native English speakers), >which is not something we need. Certainly every major piece of code >warrants a journal article, or at least a short piece, and certainly >the author should have first shot, but I think it's not unreasonable >to allow the code to go in without the article being written. But, for >example, I implemented the Kuiper statistic and would be happy to >contribute it to scipy (once it's seen a bit more debugging), yet it's >quite adequately described in the literature already, so it doesn't >seem worth writing an article about it.
> >

What I envision is multiple "levels" of articles, much like you see full papers and correspondences in the literature. Something like this should take no more than a 1-4 page letter that describes the algorithm used and references the available literature. I still would like to see it, though, as a means to document what is in SciPy. Of course, I'd like to see more code, but right now we have a problem in deciding what code should go into SciPy, and there seems to be no clear way to transition code from the sandbox into SciPy. This would help in that process, I think.

I'm always open to helping somebody who may have difficulty with English.

-Travis

From oliphant at ee.byu.edu Thu May 31 12:53:16 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 10:53:16 -0600 Subject: [Numpy-discussion] SciPy Journal In-Reply-To: References: <465E5D58.9030107@ieee.org> Message-ID: <465EFD7C.1040802@ee.byu.edu>

Matthieu Brucher wrote: > > For this point, I have the same opinion as Anne: > - requiring a matching article for every piece of code raises the entry > level, and as Anne said, some code might simply be too trivial to write about? > - a peer-review process implies that an article can be rejected; could the > code then be accepted but not the article, and vice versa?

I would like to avoid that. In my mind the SciPy Journal should reflect code that is actually available in the PyLab world. But I could see having code that is not written about in the journal (the current state, for example...).

> - perhaps encouraging new contributors to propose an article would be > a solution ?
>
> I could talk about the design I proposed for a generic optimizer, and > hopefully I'll have some other generic modules that could be exposed. > But it's not in scipy, and it's not an official scikit at the moment. > How long should an article be? Some journals have size limits, so...

In my mind, we electronically publish something every 6 months to start with and then go from there. The "publication" process amounts to a peer-review check on the work, a basic quality check on the typesetting (we will require authors to do their own typesetting), and then a listing of the articles for the specific edition. Page size is not as important as "file size." Each article should be under a few MB.

-Travis

From oliphant at ee.byu.edu Thu May 31 12:59:45 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 10:59:45 -0600 Subject: [Numpy-discussion] fortran representation In-Reply-To: <80c99e790705310307m424c41e8w8be03fe102baf72b@mail.gmail.com> References: <80c99e790705310307m424c41e8w8be03fe102baf72b@mail.gmail.com> Message-ID: <465EFF01.7090307@ee.byu.edu>

lorenzo bolla wrote: > Hi all, > I've got an easy question for you. I looked in Travis' book, but I > couldn't figure out the answer... > > If I have a 1D array (obtained by reading a stream of numbers with > numpy.fromfile) like this: > > In [150]: data > Out[150]: array([ 2., 3., 4., 3., 4., 5., 4., 5., 6., 5., > 6., 7.], dtype=float32) > > I want it to be considered as "Fortran ordered", so that when I do: > > In [151]: data.shape = (3,4) > > I want to get: > > array([[ 2., 3., 4., 5.], > [ 3., 4., 5., 6.], > [ 4., 5., 6., 7.]], dtype=float32) > > But I get: > > array([[ 2., 3., 4., 3.], > [ 4., 5., 4., 5.], > [ 6., 5., 6., 7.]], dtype=float32) > > How can I do that?

The Fortran ordering is a property of the change of shape from 1-d to 2-d. Assigning to the shape only allows you to do C-ordering.
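For example, with the 12-element data array above (a quick sketch):

>>> data.shape = (3, 4)      # fills row-by-row, i.e. C order
>>> data[0]
array([ 2.,  3.,  4.,  3.], dtype=float32)

which is exactly the first row of the unwanted result you showed.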
However, the reshape method allows a keyword argument:

data.reshape(3,4,order='F')

will give you what you want.

-Travis

From tobias at knoppweb.de Sun May 20 05:27:18 2007 From: tobias at knoppweb.de (Tobias Knopp) Date: Sun, 20 May 2007 09:27:18 -0000 Subject: [Numpy-discussion] flatindex Message-ID: <1073590507.17459.7.camel@ubuntu-laptop>

Hi!

I was looking for a method to find the indices of the smallest element of a 3-dimensional array a. Therefore I used

a.argmax()

The problem was that argmax gives me a flat index. My question is if there is a built-in function to convert the flat index back to a multidimensional one. I know how to write such a procedure but was curious if one exists in numpy.

Thanks for the response and the really impressive software package.

Tobi

From priggs at cnr.colostate.edu Mon May 21 12:13:09 2007 From: priggs at cnr.colostate.edu (Philip Riggs) Date: Mon, 21 May 2007 10:13:09 -0600 Subject: [Numpy-discussion] Convert Numeric array to numpy array Message-ID:

I am not finding an answer to this question in the latest numpy documentation. I have a package that still uses Numeric (GDAL with python bindings). Is this valid code that will work as expected to convert the Numeric array to a numpy array (very simplified from my script)?

import numpy
from Numeric import *
import gdal
import gdalconst
import gdalnumeric

tempmap = array ([[1,0,1,0],[1,1,0,1]])
map = numpy.array(tempmap)
mask = piecewise (map, map > 0, (1,0))
scalarmap = numpy.array([[0.5,0.2,0.3,0.4],[0.7,0.5,0.6,0.3]])
surfacemap = scalarmap * mask

Thanks

Philip

PS: I considered converting the old bindings to use numpy arrays, but I receive an error. I believe the problem lies in the datatype conversion shown below. Is there an easy way to fix this?

# GDALDataType
GDT_Unknown = 0
GDT_Byte = 1
GDT_UInt16 = 2
GDT_Int16 = 3
GDT_UInt32 = 4
GDT_Int32 = 5
GDT_Float32 = 6
GDT_Float64 = 7
GDT_CInt16 = 8
GDT_CInt32 = 9
GDT_CFloat32 = 10
GDT_CFloat64 = 11
GDT_TypeCount = 12

def GDALTypeCodeToNumericTypeCode( gdal_code ):
    if gdal_code == GDT_Byte:
        return UnsignedInt8
    elif gdal_code == GDT_UInt16:
        return UnsignedInt16
    elif gdal_code == GDT_Int16:
        return Int16
    elif gdal_code == GDT_UInt32:
        return UnsignedInt32
    elif gdal_code == GDT_Int32:
        return Int32
    elif gdal_code == GDT_Float32:
        return Float32
    elif gdal_code == GDT_Float64:
        return Float64
    elif gdal_code == GDT_CInt16:
        return Complex32
    elif gdal_code == GDT_CInt32:
        return Complex32
    elif gdal_code == GDT_CFloat32:
        return Complex32
    elif gdal_code == GDT_CFloat64:
        return Complex64
    else:
        return None

def NumericTypeCodeToGDALTypeCode( numeric_code ):
    if numeric_code == UnsignedInt8:
        return GDT_Byte
    elif numeric_code == Int16:
        return GDT_Int16
    elif numeric_code == UnsignedInt16:
        return GDT_UInt16
    elif numeric_code == Int32:
        return GDT_Int32
    elif numeric_code == UnsignedInt32:
        return GDT_UInt32
    elif numeric_code == Int:
        return GDT_Int32
    elif numeric_code == UnsignedInteger:
        return GDT_UInt32
    elif numeric_code == Float32:
        return GDT_Float32
    elif numeric_code == Float64:
        return GDT_Float64
    elif numeric_code == Complex32:
        return GDT_CFloat32
    elif numeric_code == Complex64:
        return GDT_CFloat64
    else:
        return None

From sittner at lkb.ens.fr Wed May 30 05:05:49 2007 From: sittner at lkb.ens.fr (sittner at lkb.ens.fr) Date: Wed, 30 May 2007 11:05:49 +0200 (CEST) Subject: [Numpy-discussion] subscribe Message-ID: <56534.129.199.118.84.1180515949.squirrel@mailgate.phys.ens.fr>

subscribe

I would like to subscribe to the mailing list

From oro at atomistix.com Thu May 31 05:22:16 2007
From: oro at atomistix.com (Ornolfur Rognvaldsson) Date: Thu, 31 May 2007 11:22:16 +0200 Subject: [Numpy-discussion] argument parsing in lapack_lite_zgeqrf - 64bit issue Message-ID: <465E93C8.6080400@atomistix.com>

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1

Hi!

I am using numpy-1.0.1 and ran into problems with 4 routines in lapack_litemodule.c on 64-bit machines. I traced it back to the parsing of ints as longs. This has been fixed in numpy-1.0.3 for three of the routines; lapack_lite_zgeqrf() still has the wrong parsing, though. The attached patch should fix it, I guess.

Össi

-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.1 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGXpPIHbjsBy8IXVsRAnIEAJ0c6ln97VUI/CYl2O9IfbQD+2eOXQCgjxRE kVfuLQrDk4k5hTzqaBMj9Qw= =SVBu -----END PGP SIGNATURE-----
-------------- next part -------------- A non-text attachment was scrubbed... Name: zgeqrf_fix.patch Type: text/x-patch Size: 922 bytes Desc: not available URL:

From nicolas.pettiaux at ael.be Thu May 31 06:09:09 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Thu, 31 May 2007 12:09:09 +0200 Subject: [Numpy-discussion] [SciPy-user] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> References: <465E5D58.9030107@ieee.org> Message-ID:

2007/5/31, Travis Oliphant : > 1) I'd like to get it going so that we can push out an electronic issue > after the SciPy conference (in September)

Such a journal is a very good idea indeed. It would also support the credibility of python/scipy/numpy with an academic audience that judges scientific output mostly by articles in journals.

> 2) I think its scope should be limited to papers that describe > algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we > could also accept papers that describe code that depends on NumPy / > SciPy that is also easily available.

More generally, examples of uses of scipy / numpy ... would be interesting in such a journal, as well as simply the proceedings of the scipy conferences.

> It is true that I could submit this stuff to other journals, but it > seems like doing that makes the information harder to find in the > future and not easier. I'm also dissatisfied with how > exclusionary academic journals seem to be with information. They are catching up, but > they are still not as accessible as other things available on the internet.

Having *one* main place where much information and documentation, with peer-reviewed validation, could be found is IMHO very interesting.

Regards,

Nicolas
-- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be Use open formats and free software - http://www.passeralinux.org

From sittner at lkb.ens.fr Thu May 31 11:55:18 2007 From: sittner at lkb.ens.fr (sittner at lkb.ens.fr) Date: Thu, 31 May 2007 17:55:18 +0200 (CEST) Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help! Message-ID: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr>

Hello there,
I'm new here, so excuse me if the solution is trivial: I have installed ATLAS and LAPACK on my Ubuntu 7 dual-core Intel machine. Now, when I try to install numpy, it tells me it doesn't find these libraries:

"
$ python setup.py install
Running from numpy source directory.
F2PY Version 2_3816
blas_opt_info:
blas_mkl_info:
libraries mkl,vml,guide not found in /usr/local/lib
libraries mkl,vml,guide not found in /usr/lib
NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries lapack,blas not found in /usr/local/lib/ATLAS/src/
libraries lapack,blas not found in /usr/local/lib/ATLAS
libraries lapack,blas not found in /usr/local/lib
libraries lapack,blas not found in /usr/lib
NOT AVAILABLE

atlas_blas_info:
libraries lapack,blas not found in /usr/local/lib/ATLAS/src/
libraries lapack,blas not found in /usr/local/lib/ATLAS
libraries lapack,blas not found in /usr/local/lib
libraries lapack,blas not found in /usr/lib
NOT AVAILABLE
......"

I have installed ATLAS and LAPACK with no errors. ATLAS is in /usr/local/lib/ATLAS/:

$ ls /usr/local/lib/ATLAS
bin doc interfaces Make.Linux_UNKNOWNSSE2_2 README tune
CONFIG include lib makes src
config.c INSTALL.txt Makefile Make.top tst.o

So, what seems to be the problem?

Thanks,
t

From oliphant at ee.byu.edu Thu May 31 13:05:52 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 11:05:52 -0600 Subject: [Numpy-discussion] flatindex In-Reply-To: <1073590507.17459.7.camel@ubuntu-laptop> References: <1073590507.17459.7.camel@ubuntu-laptop> Message-ID: <465F0070.1050806@ee.byu.edu>

Tobias Knopp wrote:

>Hi!
>
>I was looking for a method to find the indices of the smallest element >of a 3-dimensional array a. Therefore I used
>
>a.argmax()
>
>The problem was that argmax gives me a flat index. My question is if >there is a built-in function to convert the flat index back to a >multidimensional one. I know how to write such a procedure but was >curious if one exists in numpy.

See

numpy.unravel_index

-Travis

From erendisaldarion at gmail.com Thu May 31 13:06:06 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Fri, 01 Jun 2007 01:06:06 +0800 Subject: [Numpy-discussion] [SciPy-user] SciPy Journal In-Reply-To: <769426.51104.qm@web34415.mail.mud.yahoo.com> References: <769426.51104.qm@web34415.mail.mud.yahoo.com> Message-ID: <465F007E.8040505@gmail.com>

Lou Pecora wrote: > I agree with this idea. Very good. Although I also > agree with Anne Archibald that the requirement of a > journal article in order to submit code is not a good > idea. I would be willing to contribute an article on > writing C extensions that use numpy arrays. I > already have something on this in the SciPy cookbook, > but I bet it would reach more people in a journal. > > I also suggest that articles on using packages like > matplotlib/pylab for scientific purposes also be > included.

And IPython (IPython1) :).

From oliphant at ee.byu.edu Thu May 31 13:09:27 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 11:09:27 -0600 Subject: [Numpy-discussion] Convert Numeric array to numpy array In-Reply-To: References: Message-ID: <465F0147.9060000@ee.byu.edu>

Philip Riggs wrote:

>I am not finding an answer to this question in the latest numpy >documentation. I have a package that still uses Numeric (GDAL with >python bindings). Is this valid code that will work as expected to >convert the Numeric array to a numpy array (very simplified from my >script)?
>
>import numpy
>from Numeric import *
>import gdal
>import gdalconst
>import gdalnumeric
>
>tempmap = array ([[1,0,1,0],[1,1,0,1]])
>map = numpy.array(tempmap)
>

Yes, this will do the conversion.

>I considered converting the old bindings to use numpy arrays, but I >receive an error. I believe the problem lies in the datatype >conversion shown below.
>Is there an easy way to fix this? > > ># GDALDataType >GDT_Unknown = 0 >GDT_Byte = 1 >GDT_UInt16 = 2 >GDT_Int16 = 3 >GDT_UInt32 = 4 >GDT_Int32 = 5 >GDT_Float32 = 6 >GDT_Float64 = 7 >GDT_CInt16 = 8 >GDT_CInt32 = 9 >GDT_CFloat32 = 10 >GDT_CFloat64 = 11 >GDT_TypeCount = 12 > >def GDALTypeCodeToNumericTypeCode( gdal_code ): > if gdal_code == GDT_Byte: > return UnsignedInt8 > elif gdal_code == GDT_UInt16: > return UnsignedInt16 > elif gdal_code == GDT_Int16: > return Int16 > elif gdal_code == GDT_UInt32: > return UnsignedInt32 > elif gdal_code == GDT_Int32: > return Int32 > elif gdal_code == GDT_Float32: > return Float32 > elif gdal_code == GDT_Float64: > return Float64 > elif gdal_code == GDT_CInt16: > return Complex32 > elif gdal_code == GDT_CInt32: > return Complex32 > elif gdal_code == GDT_CFloat32: > return Complex32 > elif gdal_code == GDT_CFloat64: > return Complex64 > else: > return None > >def NumericTypeCodeToGDALTypeCode( numeric_code ): > if numeric_code == UnsignedInt8: > return GDT_Byte > elif numeric_code == Int16: > return GDT_Int16 > elif numeric_code == UnsignedInt16: > return GDT_UInt16 > elif numeric_code == Int32: > return GDT_Int32 > elif numeric_code == UnsignedInt32: > return GDT_UInt32 > elif numeric_code == Int: > return GDT_Int32 > elif numeric_code == UnsignedInteger: > return GDT_UInt32 > elif numeric_code == Float32: > return GDT_Float32 > elif numeric_code == Float64: > return GDT_Float64 > elif numeric_code == Complex32: > return GDT_CFloat32 > elif numeric_code == Complex64: > return GDT_CFloat64 > else: > return None
>

All of these Int16, UnsignedInt16, etc. names are in numpy.oldnumeric. However, they use the "new" character codes, and so you have to take that into account. You might also consider using a dictionary to do this conversion instead of the if ... elif tree.

-Travis

From robert.kern at gmail.com Thu May 31 13:15:43 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 May 2007 12:15:43 -0500 Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help! In-Reply-To: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr> References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr> Message-ID: <465F02BF.7000206@gmail.com>

sittner at lkb.ens.fr wrote: > Hello there, > I'm new here, so excuse me if the solution is trivial: > I have installed ATLAS and LAPACK on my Ubuntu 7 dual-core Intel machine. > Now, when I try to install numpy, it tells me it doesn't find these > libraries: > > " > $ python setup.py install > Running from numpy source directory. > F2PY Version 2_3816 > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in /usr/local/lib > libraries mkl,vml,guide not found in /usr/lib > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ > libraries lapack,blas not found in /usr/local/lib/ATLAS > libraries lapack,blas not found in /usr/local/lib > libraries lapack,blas not found in /usr/lib > NOT AVAILABLE > > atlas_blas_info: > libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ > libraries lapack,blas not found in /usr/local/lib/ATLAS > libraries lapack,blas not found in /usr/local/lib > libraries lapack,blas not found in /usr/lib > NOT AVAILABLE > ......" > I have installed ATLAS and LAPACK with no errors.
> ATLAS is in /usr/local/lib/ATLAS/:
> $ ls /usr/local/lib/ATLAS
> bin doc interfaces Make.Linux_UNKNOWNSSE2_2 README tune
> CONFIG include lib makes src
> config.c INSTALL.txt Makefile Make.top tst.o
>
> So, what seems to be the problem?

You haven't actually installed ATLAS. You've just built it. Don't put the source in /usr/local/lib/ATLAS/. Put that somewhere else, like ~/src/ATLAS/. Follow the installation instructions in INSTALL.txt. Note this section:

"""
There are two mandatory steps to ATLAS installation (config & build), as well as three optional steps (test, time, install), and these steps are described in detail below. For the impatient, here is the basic outline:
**************************************************
mkdir my_build_dir ; cd my_build_dir
/path/to/ATLAS/configure [flags]
make           ! tune and compile library
make check     ! perform sanity tests
make ptcheck   ! checks of threaded code for multiprocessor systems
make time      ! provide performance summary as % of clock rate
make install   ! Copy library and include files to other directories
**************************************************
"""

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From Chris.Barker at noaa.gov Thu May 31 13:24:04 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 May 2007 10:24:04 -0700 Subject: [Numpy-discussion] SciPy Journal In-Reply-To: References: <465E5D58.9030107@ieee.org> Message-ID: <465F04B4.7000204@noaa.gov>

Anne Archibald wrote: > I implemented the Kuiper statistic and would be happy to > contribute it to scipy (once it's seen a bit more debugging), but it's > quite adequately described in the literature already, so it doesn't > seem worth writing an article about it.

It could be a very short article that refers to the seminal papers in the existing literature -- that would still be very helpful.

>> 2) I think its scope should be limited to papers that describe >> algorithms and code that are in NumPy / SciPy / SciKits.

I don't see any reason to limit it -- honestly, the problem is more likely to be too few articles than too many! Anything that would make sense at the SciPy conference should be fair game. We might want to have a clear structure that isolates the NumPy/SciPy/SciKits articles though.

-Chris

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From Glen.Mabey at swri.org Thu May 31 13:28:06 2007 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 31 May 2007 12:28:06 -0500 Subject: [Numpy-discussion] flatindex In-Reply-To: <465F0070.1050806@ee.byu.edu> References: <1073590507.17459.7.camel@ubuntu-laptop> <465F0070.1050806@ee.byu.edu> Message-ID: <20070531172806.GA27586@bams.ccf.swri.edu>

On Thu, May 31, 2007 at 11:05:52AM -0600, Travis Oliphant wrote: > Tobias Knopp wrote: > > >Hi! > > > >I was looking for a method to find the indices of the smallest element >of a 3-dimensional array a. Therefore I used > >a.argmax() > >The problem was that argmax gives me a flat index. My question is if >there is a built-in function to convert the flat index back to a >multidimensional one. I know how to write such a procedure but was >curious if one exists in numpy.
> See
>
> numpy.unravel_index

When I first learned that the value returned by the argmax() function consists of a ravel()ed index (when axis is not specified), I was told that it makes perfect sense in that it is consistent with the behavior of max() and other functions that assume ravel()ed behavior when axis isn't specified.

While I have to agree that it does make sense (some note of this behavior should probably be added to the docstrings ... ), I also feel that it is not intuitive, and will almost universally lead to confusion (initially) for those who use it. One could even argue that a ravel()ed index is not correct, in that it cannot [immediately] be used as an argument that references the indicated element.

Whadaya think?

Glen

From oliphant at ee.byu.edu Thu May 31 13:32:31 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 May 2007 11:32:31 -0600 Subject: [Numpy-discussion] flatindex In-Reply-To: <20070531172806.GA27586@bams.ccf.swri.edu> References: <1073590507.17459.7.camel@ubuntu-laptop> <465F0070.1050806@ee.byu.edu> <20070531172806.GA27586@bams.ccf.swri.edu> Message-ID: <465F06AF.5050608@ee.byu.edu>

Glen W. Mabey wrote:

>On Thu, May 31, 2007 at 11:05:52AM -0600, Travis Oliphant wrote: > > >>Tobias Knopp wrote: >> >> >> >>>Hi! >>> >>>I was looking for a method to find the indices of the smallest element >>>of a 3-dimensional array a. Therefore I used >>> >>>a.argmax() >>> >>>The problem was that argmax gives me a flat index. My question is if >>>there is a built-in function to convert the flat index back to a >>>multidimensional one. I know how to write such a procedure but was >>>curious if one exists in numpy.
Anything that would make >sense at the SciPy conference should be fair game. We might want to have >a clear structure that isolates the NumPy/SciPy/SciKits articles though. > > > I'm persuaded by this. Yes, we could handle this by the structure so that the code-explaining articles are clear. Perhaps two sections or two publication lists. -Travis From acorriga at gmu.edu Thu May 31 15:00:23 2007 From: acorriga at gmu.edu (Andrew Corrigan) Date: Thu, 31 May 2007 15:00:23 -0400 Subject: [Numpy-discussion] f2py with ifort, code won't compile Message-ID: <465F1B47.5040707@gmu.edu> I'm trying to compile a simple Fortran code with f2py but I get a bunch of errors apparently related to my setup of python + numpy + ifort. The final error is: error: Command "ifort -L/Users/acorriga/pythonroot/lib /tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/simplemodule.o /tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.o /tmp/tmp9KOZQM/simple.o /tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/simple-f2pywrappers2.o -o ./simple.so" failed with exit status 1 make: *** [simple.so] Error 1 The preceding errors all look like: /tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.o(.text+0x1b84):/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.c:219: undefined reference to `PyErr_SetString' I also got one other one which looks like: /Users/acorriga/pythonroot/intel/fc/9.1.039/lib/for_main.o(.text+0x3e): In function `main': : undefined reference to `MAIN__' When I first encountered this error I was using Python 2.4. I tried reinstalling Python, but upgraded to 2.5 in case that helped, but it apparently did not. I'm using Intel Fortran version 9.1 and NumPy 1.0.3. This problem is occurring on my department's lab computers. On my home desktop, which has Ubuntu Feisty installed, using the Feisty repository's python-numpy package and gfortran the same Fortran code compiles fine with f2py. Any ideas what the problem is? Thanks, Andrew Corrigan From gael.varoquaux at normalesup.org Thu May 31 15:54:38 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 31 May 2007 21:54:38 +0200 Subject: [Numpy-discussion] subscribe In-Reply-To: <56534.129.199.118.84.1180515949.squirrel@mailgate.phys.ens.fr> References: <56534.129.199.118.84.1180515949.squirrel@mailgate.phys.ens.fr> Message-ID: <20070531195434.GA19718@clipper.ens.fr> Well just go to http://projects.scipy.org/mailman/listinfo/numpy-discussion and enter your email address in the form. Nice to see some one from LKB interested in numpy. You might want to talk to Thomas Nirrengarten, from the Haroche group, is has been learning Python and numpy recently. Cheers, Ga?l On Wed, May 30, 2007 at 11:05:49AM +0200, sittner at lkb.ens.fr wrote: > subscribe > i would like to subscribe to the mailing list > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion -- Gael Varoquaux, Groupe d'optique atomique, Laboratoire Charles Fabry de l'Institut d'Optique Campus Polytechnique, RD 128 91127 Palaiseau cedex FRANCE Tel : 33 (0) 1 64 53 33 49 - Fax : 33 (0) 1 64 53 31 01 Labs: 33 (0) 1 64 53 33 63 - 33 (0) 1 64 53 33 62 From martinunsal at gmail.com Thu May 31 17:56:05 2007 From: martinunsal at gmail.com (Martin =?ISO-8859-1?Q?=DCnsal?=) Date: Thu, 31 May 2007 14:56:05 -0700 Subject: [Numpy-discussion] GPU implementation? 
Message-ID: <1180648565.10490.141.camel@semtex.lamppost.us> I was wondering if anyone has thought about accelerating NumPy with a GPU. For example nVidia's CUDA SDK provides a feasible way to offload vector math onto the very fast SIMD processors available on the GPU. Currently GPUs primarily support single precision floats and are not IEEE compliant, but still could be useful for some applications. If there turns out to be a significant speedup over using the CPU, this could be a very accessible way to do scientific and numerical computation using GPUs, much easier than coding directly to the GPU APIs. Martin From charlesr.harris at gmail.com Thu May 31 18:53:54 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 May 2007 16:53:54 -0600 Subject: [Numpy-discussion] GPU implementation? In-Reply-To: <1180648565.10490.141.camel@semtex.lamppost.us> References: <1180648565.10490.141.camel@semtex.lamppost.us> Message-ID: On 5/31/07, Martin ?nsal wrote: > > I was wondering if anyone has thought about accelerating NumPy with a > GPU. For example nVidia's CUDA SDK provides a feasible way to offload > vector math onto the very fast SIMD processors available on the GPU. > Currently GPUs primarily support single precision floats and are not > IEEE compliant, but still could be useful for some applications. I've thought about it, but I think it would be a heck of a lot of work. NumPy works with subarrays a lot and I suspect this would make it tricky to stream through a GPU. Making good use of the several pipelines would also require a certain degree of parallelism which is not there now. We would also need computation of sin, cos, and other functions for ufuncs, so that might not work well. For ordinary matrix/array arithmetic the shortest route might be a version of ATLAS/BLAS, some of LAPACK, and maybe an FFT library written to use a GPU. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Thu May 31 20:11:45 2007 From: jturner at gemini.edu (James Turner) Date: Thu, 31 May 2007 20:11:45 -0400 Subject: [Numpy-discussion] GPU implementation? In-Reply-To: <1180648565.10490.141.camel@semtex.lamppost.us> References: <1180648565.10490.141.camel@semtex.lamppost.us> Message-ID: <465F6441.9020803@gemini.edu> Hi Martin, > I was wondering if anyone has thought about accelerating NumPy with a > GPU. For example nVidia's CUDA SDK provides a feasible way to offload > vector math onto the very fast SIMD processors available on the GPU. > Currently GPUs primarily support single precision floats and are not > IEEE compliant, but still could be useful for some applications. I wasn't actually there, but I noticed that last year's SciPy conference page includes a talk entitled "GpuPy: Using GPUs to Accelerate NumPy", by Benjamin Eitzen (I think I also found his Web page via Google): http://www.scipy.org/SciPy2006/Schedule I also wondered whether Benjamin or anyone else who is interested had come across the Open Graphics Project (hadn't got around to asking)? http://wiki.duskglow.com/tiki-index.php?page=open-graphics This would be quite a specialized combination, but I'm sure it could be useful to some people with high performance requirements or maybe building some kind of special appliances. Cheers, James. From acorriga at gmu.edu Thu May 31 20:49:19 2007 From: acorriga at gmu.edu (Andrew Corrigan) Date: Fri, 1 Jun 2007 00:49:19 +0000 (UTC) Subject: [Numpy-discussion] GPU implementation? 
References: <1180648565.10490.141.camel@semtex.lamppost.us> Message-ID: Martin ?nsal gmail.com> writes: > > I was wondering if anyone has thought about accelerating NumPy with a > GPU. For example nVidia's CUDA SDK provides a feasible way to offload > vector math onto the very fast SIMD processors available on the GPU. > Currently GPUs primarily support single precision floats and are not > IEEE compliant, but still could be useful for some applications. > > If there turns out to be a significant speedup over using the CPU, this > could be a very accessible way to do scientific and numerical > computation using GPUs, much easier than coding directly to the GPU APIs. > > Martin > I've thought about this too and think that it's a great idea. The existing library Brook, which has a similar programming model to NumPy, proves that it's feasible. And Brook was originally done with OpenGL & DirectX as backends to access the hardware. Needless to say, that's a lot harder than using CUDA. Since it hasn't already been pointed out, CUDA includes the cuBLAS and cuFFT libraries. I don't what the status of a LAPACK built on top of the cuBLAS is, but I'd be surprised if someone isn't already working on it. Also, NVIDIA has stated that double-precision hardware will be available later this year, in case that's an issue for anyone. I agree very much that it would make the GPUs more accessible, although CUDA has done an amazing job at that already. I think the most helpful thing about this would be if it allowed us to code using the existing array interface from NumPy in a way that the code automatically runs on the GPU in an optimized way - using shared memory + avoiding bank conflicts. I'd happily contribute to such a project if someone else got it started. From david at ar.media.kyoto-u.ac.jp Thu May 31 20:52:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 01 Jun 2007 09:52:14 +0900 Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help! In-Reply-To: <465F02BF.7000206@gmail.com> References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr> <465F02BF.7000206@gmail.com> Message-ID: <465F6DBE.7000802@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > sittner at lkb.ens.fr wrote: >> Hello there, >> I'm new here, so excuse me if the solution is trivial: >> i have installed ATLAS and LAPACK on my ubuntu 7 dual core intel machine. >> now, when i try to install numpy, it tells me it doesn't find these >> libraries: >> >> " >> $ python setup.py install >> Running from numpy source directory. >> F2PY Version 2_3816 >> blas_opt_info: >> blas_mkl_info: >> libraries mkl,vml,guide not found in /usr/local/lib >> libraries mkl,vml,guide not found in /usr/lib >> NOT AVAILABLE >> >> atlas_blas_threads_info: >> Setting PTATLAS=ATLAS >> libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ >> libraries lapack,blas not found in /usr/local/lib/ATLAS >> libraries lapack,blas not found in /usr/local/lib >> libraries lapack,blas not found in /usr/lib >> NOT AVAILABLE >> >> atlas_blas_info: >> libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ >> libraries lapack,blas not found in /usr/local/lib/ATLAS >> libraries lapack,blas not found in /usr/local/lib >> libraries lapack,blas not found in /usr/lib >> NOT AVAILABLE >> ......" >> I have installed ATLAS and lapack with no errors. 
>> ATLAS is in usr/local/lib/ATLAS/: >> $ ls /usr/local/lib/ATLAS >> bin doc interfaces Make.Linux_UNKNOWNSSE2_2 README tune >> CONFIG include lib makes src >> config.c INSTALL.txt Makefile Make.top tst.o >> >> so, what seems to be the problem? > > You haven't actually installed ATLAS. You've just built it. Don't put the source > in /usr/local/lib/ATLAS/. Put that somewhere else, like ~/src/ATLAS/. Follow the > installation instructions in INSTALL.txt. Note this section: > > """ > There are two mandatory steps to ATLAS installation (config & build), as > well as three optional steps (test, time, install) and these steps are > described in detail below. For the impatient, here is the basic outline: > ************************************************** > mkdir my_build_dir ; cd my_build_dir > /path/to/ATLAS/configure [flags] > make ! tune and compile library > make check ! perform sanity tests > make ptcheck ! checks of threaded code for multiprocessor systems > make time ! provide performance summary as % of clock rate > make install ! Copy library and include files to other directories > ************************************************** > """ > Alternatively, if you are not familiar with compiling softwares (and Atlas can be tricky to compile/install), just install the packages provided by ubuntu: sudo apt-get install atlas3-sse2-dev atlas3-base-dev, and it should be fine. David From ellisonbg.net at gmail.com Thu May 31 22:36:01 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Thu, 31 May 2007 20:36:01 -0600 Subject: [Numpy-discussion] GPU implementation? In-Reply-To: <1180648565.10490.141.camel@semtex.lamppost.us> References: <1180648565.10490.141.camel@semtex.lamppost.us> Message-ID: <6ce0ac130705311936q31eeaa4ape1984916db7308a@mail.gmail.com> This is very much worth pursuing. I have been working on things related to this on and off at my day job. I can't say specifically what I have been doing, but I can make some general comments: * It is very easy to wrap the different parts of cude using ctypes and call it from/numpy. * Compared to a recent fast Intel CPU, the speedups we see are consistent with what the NVIDIA literature reports: 10-30x is common and in some cases we have seen up to 170x. * Certain parts of numpy will be very easy to accelerate: things covered by blas, ffts, and ufuncs, random variates - but each of these will have very different speedups. * LAPACK will be tough, extremely tough in some cases. The main issue is that various algorithms in LAPACK rely on different levels of BLAS (1,2, or 3). The algorithms in LAPACK that primarily use level 1 BLAS functions (vector operations), like LU-decomp, are probably not worth porting to the GPU - at least not using the BLAS that NVIDIA provides. On the other hand, the algorithms that use more of the level 2 and 3 BLAS functions are probably worth looking at. * NVIDIA made a design decision in its implementation of cuBLAS and cuFFT that is somewhat detrimental for certain algorithms. In their implementation, the BLAS and FFT routines can _only_ be called from the CPU, not from code running on the GPU. Thus if you have an algorithm that makes many calls to cuBLAS/cuFFT, you pay a large overhead in having to keep the main flow of the algorithm on the CPU. It is not uncommon for this overhead to completely erode any speedup you may have gotten on the GPU. * For many BLAS calls, the cuBLAS won't be much faster than a good optimized BLAS from ATLAS or Goto. 
Brian On 5/31/07, Martin ?nsal wrote: > I was wondering if anyone has thought about accelerating NumPy with a > GPU. For example nVidia's CUDA SDK provides a feasible way to offload > vector math onto the very fast SIMD processors available on the GPU. > Currently GPUs primarily support single precision floats and are not > IEEE compliant, but still could be useful for some applications. > > If there turns out to be a significant speedup over using the CPU, this > could be a very accessible way to do scientific and numerical > computation using GPUs, much easier than coding directly to the GPU APIs. > > Martin > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Thu May 31 22:38:21 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 May 2007 20:38:21 -0600 Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help! In-Reply-To: <465F6DBE.7000802@ar.media.kyoto-u.ac.jp> References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr> <465F02BF.7000206@gmail.com> <465F6DBE.7000802@ar.media.kyoto-u.ac.jp> Message-ID: On 5/31/07, David Cournapeau wrote: > > Robert Kern wrote: > > sittner at lkb.ens.fr wrote: > >> Hello there, > >> I'm new here, so excuse me if the solution is trivial: > >> i have installed ATLAS and LAPACK on my ubuntu 7 dual core intel > machine. > >> now, when i try to install numpy, it tells me it doesn't find these > >> libraries: > >> > >> " > >> $ python setup.py install > >> Running from numpy source directory. > >> F2PY Version 2_3816 > >> blas_opt_info: > >> blas_mkl_info: > >> libraries mkl,vml,guide not found in /usr/local/lib > >> libraries mkl,vml,guide not found in /usr/lib > >> NOT AVAILABLE > >> > >> atlas_blas_threads_info: > >> Setting PTATLAS=ATLAS > >> libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ > >> libraries lapack,blas not found in /usr/local/lib/ATLAS > >> libraries lapack,blas not found in /usr/local/lib > >> libraries lapack,blas not found in /usr/lib > >> NOT AVAILABLE > >> > >> atlas_blas_info: > >> libraries lapack,blas not found in /usr/local/lib/ATLAS/src/ > >> libraries lapack,blas not found in /usr/local/lib/ATLAS > >> libraries lapack,blas not found in /usr/local/lib > >> libraries lapack,blas not found in /usr/lib > >> NOT AVAILABLE > >> ......" > >> I have installed ATLAS and lapack with no errors. > >> ATLAS is in usr/local/lib/ATLAS/: > >> $ ls /usr/local/lib/ATLAS > >> bin doc interfaces Make.Linux_UNKNOWNSSE2_2 > README tune > >> CONFIG include lib makes src > >> config.c INSTALL.txt Makefile Make.top tst.o > >> > >> so, what seems to be the problem? > > > > You haven't actually installed ATLAS. You've just built it. Don't put > the source > > in /usr/local/lib/ATLAS/. Put that somewhere else, like ~/src/ATLAS/. > Follow the > > installation instructions in INSTALL.txt. Note this section: > > > > """ > > There are two mandatory steps to ATLAS installation (config & build), as > > well as three optional steps (test, time, install) and these steps are > > described in detail below. For the impatient, here is the basic > outline: > > ************************************************** > > mkdir my_build_dir ; cd my_build_dir > > /path/to/ATLAS/configure [flags] > > make ! tune and compile library > > make check ! perform sanity tests > > make ptcheck ! checks of threaded code for multiprocessor > systems > > make time ! 
> >    make install      ! Copy library and include files to other
> >                        directories
> > **************************************************
> > """
> >
> Alternatively, if you are not familiar with compiling software (and
> ATLAS can be tricky to compile and install), just install the packages
> provided by Ubuntu: sudo apt-get install atlas3-sse2-dev
> atlas3-base-dev, and it should be fine.

Maybe, maybe not. On 64-bit Intel machines running 64-bit Linux, the
Fedora package raises an illegal instruction error. Since the Fedora
package is based on the Debian package, this might be a problem on
Ubuntu as well. For recent hardware you are probably better off
compiling your own from the latest ATLAS release.

Chuck

From david at ar.media.kyoto-u.ac.jp  Thu May 31 23:14:51 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 01 Jun 2007 12:14:51 +0900
Subject: [Numpy-discussion] numpy.distutils, system_info and dll/exp/lib
Message-ID: <465F8F2B.2050504@ar.media.kyoto-u.ac.jp>

Hi,

In one of my packages, I use numpy.distutils.system_info to detect a C
library. This works well on Linux and Mac OS X, but on Windows the
library is not detected if only the dll is present. If the .lib is
there, then it is detected; the dll itself works fine (I can use it
from ctypes normally). Am I doing something wrong in my configuration?

"""
class SndfileNotFoundError(NotFoundError):
    """ sndfile (http://www.mega-nerd.com/libsndfile/) library not found.
    Directories to search for the libraries can be specified in the
    site.cfg file (section [sndfile])."""

class sndfile_info(system_info):
    # variables to override
    section = 'sndfile'
    notfounderror = SndfileNotFoundError
    libname = 'sndfile'
    header = 'sndfile.h'

    def __init__(self):
        system_info.__init__(self)

    def calc_info(self):
        """ Compute the information for the library """
        prefix = 'lib'
        # Look for the shared library in the candidate directories.
        sndfile_libs = self.get_libs('sndfile_libs', self.libname)
        lib_dirs = self.get_lib_dirs()
        tmp = None    # stays None if no directory matches
        for i in lib_dirs:
            tmp = self.check_libs(i, sndfile_libs)
            if tmp is not None:
                info = tmp
                break
        #else:
        #    raise self.notfounderror()

        # Look for the header file.
        include_dirs = self.get_include_dirs()
        inc_dir = None
        for d in include_dirs:
            p = self.combine_paths(d, self.header)
            if p:
                inc_dir = os.path.dirname(p[0])
                headername = os.path.abspath(p[0])
                break

        if inc_dir is not None and tmp is not None:
            # Build the platform-dependent file name of the shared library.
            if sys.platform == 'win32':
                # win32 case
                fullname = prefix + tmp['libraries'][0] + '.dll'
            elif sys.platform == 'darwin':
                # Mac OS X case
                fullname = prefix + tmp['libraries'][0] + '.' + \
                           str(SNDFILE_MAJ_VERSION) + '.dylib'
            else:
                # All others (Linux for sure; what about Solaris?)
                fullname = prefix + tmp['libraries'][0] + '.so' + \
                           '.' + str(SNDFILE_MAJ_VERSION)
            fullname = os.path.join(info['library_dirs'][0], fullname)
            dict_append(info,
                        include_dirs=[inc_dir],
                        fullheadloc=headername,
                        fulllibloc=fullname)
        else:
            return
        #print self
        self.set_info(**info)
        return
"""

The problem is that the library I am using is distributed only as a
dll, without the .lib. Right now, users of my package have to generate
the .lib themselves from the dll using the Visual Studio compiler,
which is really too much to ask (especially since the library itself is
fine; the compiler is needed only so that distutils can find the
library!).
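For reference, the site.cfg section this class reads would look
something like the following (the paths here are made up for
illustration; the sndfile_libs key is the option name passed to
get_libs in calc_info above):

[sndfile]
# Illustrative locations - point these at the real libsndfile install.
library_dirs = C:\local\lib
include_dirs = C:\local\include
sndfile_libs = sndfile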
cheers,

David

From david at ar.media.kyoto-u.ac.jp  Thu May 31 23:16:03 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 01 Jun 2007 12:16:03 +0900
Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help!
In-Reply-To:
References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr>
	<465F02BF.7000206@gmail.com> <465F6DBE.7000802@ar.media.kyoto-u.ac.jp>
Message-ID: <465F8F73.6020309@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
>
> Maybe, maybe not. On 64-bit Intel machines running 64-bit Linux, the
> Fedora package raises an illegal instruction error. Since the Fedora
> package is based on the Debian package, this might be a problem on
> Ubuntu as well. For recent hardware you are probably better off
> compiling your own from the latest ATLAS release.
>
Well, testing the deb packages would take a couple of minutes, whereas
compiling and installing ATLAS can easily take much longer, especially
if you have never done it before. So I think trying the Debian packages
first is the path of least effort :)

David

From hazelnusse at gmail.com  Thu May 31 15:59:40 2007
From: hazelnusse at gmail.com (Luke)
Date: Thu, 31 May 2007 19:59:40 -0000
Subject: [Numpy-discussion] SciPy Journal
In-Reply-To: <465F071F.10402@ee.byu.edu>
References: <465E5D58.9030107@ieee.org> <465F04B4.7000204@noaa.gov>
	<465F071F.10402@ee.byu.edu>
Message-ID: <1180641580.687505.207110@a26g2000pre.googlegroups.com>

I think this Journal sounds like an excellent idea. I have some Python
code that calculates the Lyapunov Characteristic Exponents (all of
them) for a dynamical system, which I would be willing to write about
and contribute (see the P.S. at the end of this message for a toy
sketch of the basic idea).

Do you envision the articles including applications of the
algorithms/ideas discussed, or simply the presentation and discussion
of the algorithms/ideas themselves?

~Luke

On May 31, 10:34 am, Travis Oliphant wrote:
> Christopher Barker wrote:
> > Anne Archibald wrote:
>
> >> I implemented the Kuiper statistic and would be happy to
> >> contribute it to scipy (once it's seen a bit more debugging), but it's
> >> quite adequately described in the literature already, so it doesn't
> >> seem worth writing an article about it.
>
> > It could be a very short article that refers to the seminal papers in
> > the existing literature -- that would still be very helpful.
>
> >>> 2) I think its scope should be limited to papers that describe
> >>> algorithms and code that are in NumPy / SciPy / SciKits.
>
> > I don't see any reason to limit it -- honestly, the problem is more
> > likely to be too few articles than too many! Anything that would make
> > sense at the SciPy conference should be fair game. We might want to
> > have a clear structure that isolates the NumPy/SciPy/SciKits articles,
> > though.
>
> I'm persuaded by this. Yes, we could handle this by the structure so
> that the code-explaining articles are clear. Perhaps two sections or
> two publication lists.
>
> -Travis
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discuss... at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
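P.S. For the curious, here is a toy sketch of the underlying idea for
the largest exponent of a one-dimensional map (my full-spectrum code
uses repeated QR reorthonormalization of tangent-space vectors
instead; the logistic map, parameter values, and function name here
are purely illustrative):

import numpy

def lyapunov_logistic(r=4.0, x0=0.3, n=100000, burn=1000):
    # Largest Lyapunov exponent of the logistic map x -> r*x*(1 - x),
    # estimated as the trajectory average of log|f'(x)|, where
    # f'(x) = r*(1 - 2*x).
    x = x0
    for i in xrange(burn):              # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for i in xrange(n):
        x = r * x * (1.0 - x)
        acc += numpy.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

print lyapunov_logistic()               # ~0.6931 (= ln 2) for r = 4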